NVIDIA DATA CENTER PLATFORM

The NVIDIA Data Center Platform accelerates a broad array of workloads, from AI training and inference to scientific computing and virtual desktop infrastructure (VDI), with a diverse range of GPUs, all powered by a single unified architecture. For optimal performance, let Koi Computers help you identify the ideal GPU for your specific workload.

| Features | A100 40GB PCIe | A100 80GB PCIe | A100 40GB SXM | A100 80GB SXM | A30 |
|---|---|---|---|---|---|
| FP64 | 9.7 TFLOPS | 9.7 TFLOPS | 9.7 TFLOPS | 9.7 TFLOPS | 5.2 TFLOPS |
| FP64 Tensor Core | 19.5 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | 10.3 TFLOPS |
| FP32 | 19.5 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | 10.3 TFLOPS |
| Tensor Float 32 (TF32) | 156 TFLOPS / 312 TFLOPS* | 156 TFLOPS / 312 TFLOPS* | 156 TFLOPS / 312 TFLOPS* | 156 TFLOPS / 312 TFLOPS* | 82 TFLOPS / 165 TFLOPS* |
| BFLOAT16 Tensor Core | 312 TFLOPS / 624 TFLOPS* | 312 TFLOPS / 624 TFLOPS* | 312 TFLOPS / 624 TFLOPS* | 312 TFLOPS / 624 TFLOPS* | 165 TFLOPS / 330 TFLOPS* |
| FP16 Tensor Core | 312 TFLOPS / 624 TFLOPS* | 312 TFLOPS / 624 TFLOPS* | 312 TFLOPS / 624 TFLOPS* | 312 TFLOPS / 624 TFLOPS* | 165 TFLOPS / 330 TFLOPS* |
| INT8 Tensor Core | 624 TOPS / 1,248 TOPS* | 624 TOPS / 1,248 TOPS* | 624 TOPS / 1,248 TOPS* | 624 TOPS / 1,248 TOPS* | 330 TOPS / 661 TOPS* |
| INT4 Tensor Core | 1,248 TOPS / 2,496 TOPS* | 1,248 TOPS / 2,496 TOPS* | 1,248 TOPS / 2,496 TOPS* | 1,248 TOPS / 2,496 TOPS* | 661 TOPS / 1,321 TOPS* |
| GPU Memory | 40GB HBM2 | 80GB HBM2e | 40GB HBM2 | 80GB HBM2e | 24GB HBM2 |
| GPU Memory Bandwidth | 1,555GB/s | 1,935GB/s | 1,555GB/s | 2,039GB/s | 933GB/s |
| Max Thermal Design Power (TDP) | 250W | 300W | 400W | 400W | 165W |
| Multi-Instance GPU | Up to 7 MIGs @ 5GB | Up to 7 MIGs @ 10GB | Up to 7 MIGs @ 5GB | Up to 7 MIGs @ 10GB | 4 MIGs @ 6GB, 2 MIGs @ 12GB, or 1 MIG @ 24GB |
| Form Factor | PCIe | PCIe | SXM | SXM | PCIe |
| Interconnect | NVIDIA® NVLink® Bridge for 2 GPUs: 600GB/s**; PCIe Gen4: 64GB/s | NVIDIA® NVLink® Bridge for 2 GPUs: 600GB/s**; PCIe Gen4: 64GB/s | NVLink: 600GB/s; PCIe Gen4: 64GB/s | NVLink: 600GB/s; PCIe Gen4: 64GB/s | PCIe Gen4: 64GB/s; third-gen NVLINK: 200GB/s** |

* With sparsity
** SXM4 GPUs via HGX A100 server boards; PCIe GPUs via NVLink Bridge for up to two GPUs
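The Multi-Instance GPU capacities in the table are carved out with the `nvidia-smi mig` tooling. As a rough sketch of the workflow, assuming an A100 40GB with a MIG-capable driver (the profile ID `19` for the 1g.5gb slice is an example; valid IDs vary by GPU and driver version, so list them first):

```shell
# Enable MIG mode on GPU 0 (requires root; the GPU may need a reset to apply)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports, with their profile IDs
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on A100 40GB) and a
# default compute instance inside each (-C)
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting GPU and MIG device list
nvidia-smi -L
```

Each resulting MIG device appears as an independent GPU to CUDA applications, which is how one A100 can serve up to seven isolated inference workloads at once.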