NVIDIA DATA CENTER PLATFORM

The NVIDIA Data Center Platform accelerates a broad range of workloads, from AI training and inference to scientific computing and virtual desktop infrastructure (VDI), with a diverse lineup of GPUs powered by a single unified architecture. For optimal performance, let Koi Computers help you identify the ideal GPU for your specific workload.

Features A100 80GB PCIe A100 80GB SXM A30 A2
FP64 9.7 TFLOPS 9.7 TFLOPS 5.2 TFLOPS N/A
FP64 Tensor Core 19.5 TFLOPS 19.5 TFLOPS 10.3 TFLOPS N/A
FP32 19.5 TFLOPS 19.5 TFLOPS 10.3 TFLOPS 4.5 TFLOPS
Tensor Float 32 (TF32) 156 TFLOPS | 312 TFLOPS* 156 TFLOPS | 312 TFLOPS* 82 TFLOPS | 165 TFLOPS* 9 TFLOPS | 18 TFLOPS*
BFLOAT16 Tensor Core 312 TFLOPS | 624 TFLOPS* 312 TFLOPS | 624 TFLOPS* 165 TFLOPS | 330 TFLOPS* 18 TFLOPS | 36 TFLOPS*
FP16 Tensor Core 312 TFLOPS | 624 TFLOPS* 312 TFLOPS | 624 TFLOPS* 165 TFLOPS | 330 TFLOPS* 18 TFLOPS | 36 TFLOPS*
INT8 Tensor Core 624 TOPS | 1,248 TOPS* 624 TOPS | 1,248 TOPS* 330 TOPS | 661 TOPS* 36 TOPS | 72 TOPS*
INT4 Tensor Core 1,248 TOPS | 2,496 TOPS* 1,248 TOPS | 2,496 TOPS* 661 TOPS | 1,321 TOPS* 72 TOPS | 144 TOPS*
GPU Memory 80GB HBM2e 80GB HBM2e 24GB HBM2 16GB GDDR6
GPU Memory Bandwidth 1,935GB/s 2,039GB/s 933GB/s 200GB/s
Max Thermal Design Power (TDP) 300W 400W 165W 40-50W
Multi-Instance GPU Up to 7 MIGs @ 10GB (A100 80GB PCIe and SXM); 4 GPU instances @ 6GB each, 2 GPU instances @ 12GB each, or 1 GPU instance @ 24GB (A30); N/A (A2)
Form Factor PCIe SXM PCIe PCIe
Interconnect NVIDIA® NVLink® Bridge for 2 GPUs: 600GB/s** and PCIe Gen4: 64GB/s (A100 80GB PCIe); NVLink: 600GB/s and PCIe Gen4: 64GB/s (A100 80GB SXM); PCIe Gen4: 64GB/s and third-gen NVLink: 200GB/s** (A30); PCIe Gen4 x8 (A2)

* With sparsity
** SXM4 GPUs via HGX A100 server boards; PCIe GPUs via NVLink Bridge for up to two GPUs
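When matching a workload to one of the GPUs above, it is often useful to confirm what is actually installed in the target server. The following is a minimal sketch, not part of the NVIDIA datasheet, that assumes a node with the CUDA Toolkit and driver installed; it uses the standard CUDA runtime API to print each GPU's name, memory size, compute capability, and theoretical peak memory bandwidth, which can be compared against the memory and bandwidth rows in the table.

```
// Minimal device-query sketch (assumes CUDA Toolkit; values are driver-reported).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU detected.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        // Theoretical peak bandwidth: 2 (double data rate) x memory clock x bus width.
        // memoryClockRate is reported in kHz, memoryBusWidth in bits.
        double peakGBs = 2.0 * prop.memoryClockRate * 1e3 *
                         (prop.memoryBusWidth / 8.0) / 1e9;
        std::printf("GPU %d: %s\n", i, prop.name);
        std::printf("  Memory:                %.1f GB\n", prop.totalGlobalMem / 1e9);
        std::printf("  Peak memory bandwidth: %.0f GB/s (theoretical)\n", peakGBs);
        std::printf("  Compute capability:    %d.%d, %d SMs\n",
                    prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}
```

Compile with nvcc (for example, nvcc device_query.cu -o device_query) and run it on the node in question; the bandwidth figure is a theoretical peak derived from driver-reported clock and bus width, not a measured value.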

CONTACT US TO PURCHASE YOUR CUSTOMIZED NVIDIA DATA CENTER SOLUTIONS TODAY
