NVIDIA DATA CENTER PLATFORM

The NVIDIA Data Center Platform accelerates a broad array of workloads, from AI training and inference to scientific computing and virtual desktop infrastructure (VDI), with a diverse range of GPUs, all powered by a single unified architecture. For optimal performance, let Koi Computers help you identify the ideal GPU for your specific workload.

| Features | H100 80GB PCIe | H100 80GB SXM | A100 80GB PCIe | A30 |
| --- | --- | --- | --- | --- |
| FP64 | 26 TFLOPS | 34 TFLOPS | 9.7 TFLOPS | 5.2 TFLOPS |
| FP64 Tensor Core | 51 TFLOPS | 67 TFLOPS | 19.5 TFLOPS | 10.3 TFLOPS |
| FP32 | 51 TFLOPS | 67 TFLOPS | 19.5 TFLOPS | 10.3 TFLOPS |
| Tensor Float 32 (TF32) | 756 TFLOPS | 989 TFLOPS | 156 TFLOPS (312 TFLOPS*) | 82 TFLOPS (165 TFLOPS*) |
| BFLOAT16 Tensor Core | 1,513 TFLOPS | 1,979 TFLOPS | 312 TFLOPS (624 TFLOPS*) | 165 TFLOPS (330 TFLOPS*) |
| FP16 Tensor Core | 1,513 TFLOPS | 1,979 TFLOPS | 312 TFLOPS (624 TFLOPS*) | 165 TFLOPS (330 TFLOPS*) |
| FP8 Tensor Core | 3,026 TFLOPS | 3,958 TFLOPS | Not supported | Not supported |
| INT8 Tensor Core | 3,026 TOPS | 3,958 TOPS | 624 TOPS (1,248 TOPS*) | 330 TOPS (661 TOPS*) |
| GPU Memory | 80GB HBM2e | 80GB HBM3 | 80GB HBM2e | 24GB HBM2 |
| GPU Memory Bandwidth | 2TB/s | 3.35TB/s | 1,935GB/s | 933GB/s |
| Max Thermal Design Power (TDP) | 300-350W (configurable) | Up to 700W (configurable) | 300W | 165W |
| Multi-Instance GPU (MIG) | Up to 7 MIGs @ 10GB | Up to 7 MIGs @ 10GB | Up to 7 MIGs @ 10GB | 4 MIGs @ 6GB, 2 MIGs @ 12GB, or 1 MIG @ 24GB |
| Form Factor | PCIe | SXM | PCIe | PCIe |
| Interconnect | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVIDIA® NVLink® Bridge for 2 GPUs: 600GB/s**; PCIe Gen4: 64GB/s | Third-gen NVLink: 200GB/s**; PCIe Gen4: 64GB/s |

* With sparsity
** Via NVIDIA NVLink Bridge connecting up to two GPUs
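
To confirm which of these GPUs a node actually exposes, the sketch below queries each device with the standard CUDA runtime call cudaGetDeviceProperties and prints its name, memory size, SM count, and compute capability. This is a minimal example, assuming the CUDA Toolkit and a recent NVIDIA driver are installed; the file name device_query.cu is arbitrary. The FP8 note in the code reflects the table above: FP8 Tensor Cores arrive with Hopper (H100), while the A100 and A30 are Ampere parts.

```cuda
// device_query.cu -- minimal device-query sketch (CUDA runtime API).
// Build with: nvcc device_query.cu -o device_query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("GPU %d: %s\n", dev, prop.name);
        std::printf("  Global memory      : %.1f GB\n", prop.totalGlobalMem / 1e9);
        std::printf("  Multiprocessors    : %d\n", prop.multiProcessorCount);
        std::printf("  Compute capability : %d.%d\n", prop.major, prop.minor);
        // FP8 Tensor Cores are a Hopper feature (compute capability 9.0);
        // Ampere GPUs such as the A100 and A30 report 8.x.
        if (prop.major >= 9) {
            std::printf("  Hopper-class GPU: FP8 Tensor Core paths available.\n");
        }
    }
    return 0;
}
```

Running the binary on a multi-GPU server lists every visible device, which is a quick way to check that the installed cards match the configuration you ordered.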

CONTACT US TO PURCHASE YOUR CUSTOMIZED NVIDIA DATA CENTER SOLUTION TODAY

Resources