Propel your data center into the next era of accelerated computing and generative AI.
More manufacturers and configurations coming soon.
The NVIDIA HGX B200 revolutionizes data centers with accelerated computing and generative AI powered by NVIDIA Blackwell GPUs. Featuring eight GPUs, it delivers 15X faster trillion-parameter inference with 12X lower costs and energy use, supported by 1.4 TB of GPU memory and 60 TB/s bandwidth. Designed for demanding AI, analytics, and HPC workloads, the HGX B200 sets a new performance standard.
Train, fine-tune, and deploy AI models like GPT, LLAMA, and Stable Diffusion with ease.
Accelerate database queries and enjoy up to 2X better performance than previous-generation GPUs.
Leverage advanced computational fluid dynamics, structural simulation, and physics-based simulations.
| NVIDIA B200 SXM | Specifications |
|---|---|
| GPU Architecture | NVIDIA Blackwell Architecture |
| FP4 Tensor Core | 18 petaFLOPS |
| FP8/FP6 Tensor Core | 9 petaFLOPS |
| INT8 Tensor Core | 9 petaFLOPS |
| FP16/BF16 Tensor Core | 4.5 petaFLOPS |
| TF32 Tensor Core | 2.2 petaFLOPS |
| FP64 Tensor Core | 40 teraFLOPS |
| GPU Memory | 190 GB HBM3e |
| GPU Memory Bandwidth | 8 TB/s |
| Multi-Instance GPU (MIG) | 7 |
| Decoders | 2x 7 NVDEC, 2x 7 NVJPEG |
| Interconnect | 5th Generation NVLink: 1.8 TB/s; PCIe Gen6: 256 GB/s |
| Max Thermal Design Power (TDP) | Up to 700W (configurable) |
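As a rough illustration of what the spec table implies for model sizing, the sketch below estimates whether a model's weights fit in the aggregate memory of an eight-GPU HGX B200 node at different precisions. Only the 8 x 190 GB figure comes from the table; the helper name and the bytes-per-parameter map are illustrative assumptions, and the estimate deliberately ignores activations, optimizer state, and KV cache.

```python
# Rough sizing sketch: do a model's raw weights fit in an 8-GPU
# HGX B200 node's aggregate memory? Ignores activations, optimizer
# state, and KV cache, so treat any "fits" answer as optimistic.

GPUS_PER_NODE = 8
MEM_PER_GPU_GB = 190  # HBM3e per B200 SXM, from the spec table above

# Approximate storage cost per parameter at each precision.
BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "fp8": 1.0,
    "fp4": 0.5,
}

def weights_fit(params_billions: float, precision: str) -> bool:
    """Return True if the raw weights fit in aggregate node memory.

    1e9 params x bytes-per-param is ~that many GB (decimal units).
    """
    weight_gb = params_billions * BYTES_PER_PARAM[precision]
    return weight_gb <= GPUS_PER_NODE * MEM_PER_GPU_GB

# A 1-trillion-parameter model: 2000 GB in BF16 vs 500 GB in FP4,
# against 8 x 190 = 1520 GB of aggregate HBM3e.
print(weights_fit(1000, "fp16/bf16"))  # False: 2000 GB > 1520 GB
print(weights_fit(1000, "fp4"))        # True: 500 GB fits
```

This is one reason the FP4 and FP8 Tensor Core figures in the table matter in practice: lower-precision formats shrink both the memory footprint and the bandwidth needed per token, which is where the claimed trillion-parameter inference gains come from.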