Available Soon

NVIDIA HGX B200 Servers

Propel your data center into the next era of accelerated computing and generative AI.

Choose Your Configuration

More manufacturers and configurations coming soon.

Supermicro's NVIDIA HGX B200 - Air Cooled

NVIDIA HGX B200 System
10U Air Cooled

GPU
8x B200 190GB SXM
CPU
2x Intel Xeon processors
Form Factor
10U / air cooled
Manufacturer
Supermicro
System Specifications

Supermicro's NVIDIA HGX B200 - Liquid Cooled

NVIDIA HGX B200 System
4U Liquid Cooled

GPU
8x B200 190GB SXM
CPU
2x Intel Xeon processors
Form Factor
4U / liquid cooled
Manufacturer
Supermicro
System Specifications

NVIDIA B200 SXM Tensor Core GPU

The NVIDIA HGX B200 brings accelerated computing and generative AI to the data center with NVIDIA Blackwell GPUs. Featuring eight NVLink-connected GPUs, it delivers up to 15X faster real-time inference on trillion-parameter models and up to 12X lower cost and energy use than the previous generation, backed by 1.4 TB of GPU memory and 60 TB/s of aggregate memory bandwidth. Designed for demanding AI, data analytics, and HPC workloads, the HGX B200 sets a new performance standard.

Primary Use Cases

Media Processing

Generative AI

Train, fine-tune, and deploy AI models such as GPT, Llama, and Stable Diffusion with ease (see the training sketch after this list).

DL Training

Data Analytics

Accelerate database queries with up to 2X better performance than previous-generation GPUs; a data-analytics sketch also follows this list.

Language Processing

HPC Workloads

Leverage advanced computational fluid dynamics, structural simulation, and physics-based simulations.
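
The generative AI and DL training use cases above typically map to data-parallel training across the eight NVLink-connected GPUs on the HGX baseboard. The sketch below is a minimal, illustrative PyTorch DistributedDataParallel loop for a single 8-GPU node; the model, synthetic dataset, and hyperparameters are placeholders rather than a tuned recipe for this system.

    # Minimal sketch of single-node, 8-GPU data-parallel training with PyTorch DDP.
    # Launch with: torchrun --nproc_per_node=8 train_ddp.py
    # The model and data below are placeholders standing in for a real LLM and corpus.

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

    def main():
        # torchrun sets LOCAL_RANK for each of the 8 worker processes.
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        dist.init_process_group(backend="nccl")

        # Placeholder model and synthetic dataset.
        model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()
        model = DDP(model, device_ids=[local_rank])

        data = TensorDataset(torch.randn(8192, 1024), torch.randn(8192, 1024))
        sampler = DistributedSampler(data)              # shards the dataset across the 8 GPUs
        loader = DataLoader(data, batch_size=64, sampler=sampler)

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        loss_fn = nn.MSELoss()

        for epoch in range(2):
            sampler.set_epoch(epoch)
            for x, y in loader:
                x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)
                optimizer.zero_grad(set_to_none=True)
                loss = loss_fn(model(x), y)
                loss.backward()                         # gradients all-reduced via NCCL over NVLink
                optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()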
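
For the data analytics use case, a common pattern is running pandas-style operations on the GPU with RAPIDS cuDF. The sketch below is illustrative only, using synthetic data and placeholder column names; real workloads would load Parquet or CSV files instead.

    # Illustrative cuDF sketch: a GPU-accelerated group-by over synthetic data.
    # Requires RAPIDS cuDF; column names and sizes are placeholders.

    import cudf
    import numpy as np

    n = 10_000_000
    df = cudf.DataFrame({
        "store_id": np.random.randint(0, 1_000, n),
        "sales": np.random.rand(n) * 100.0,
    })

    # The group-by and aggregation execute on the GPU rather than the CPU.
    summary = df.groupby("store_id").agg({"sales": ["sum", "mean"]})
    print(summary.head())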

GPU Specifications

NVIDIA B200 SXM Specifications
GPU Architecture: NVIDIA Blackwell Architecture
FP4 Tensor Core: 18 petaFLOPS
FP8/FP6 Tensor Core: 9 petaFLOPS
INT8 Tensor Core: 9 petaFLOPS
FP16/BF16 Tensor Core: 4.5 petaFLOPS
TF32 Tensor Core: 2.2 petaFLOPS
FP64 Tensor Core: 40 teraFLOPS
GPU Memory: 190 GB HBM3e
GPU Memory Bandwidth: 8 TB/s
Multi-Instance GPU (MIG): 7
Decoders: 2x 7 NVDEC | 2x 7 NVJPEG
Interconnect: 5th Generation NVLink: 1.8 TB/s; PCIe Gen6: 256 GB/s
Max Thermal Design Power (TDP): Up to 700W (configurable)
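
As a quick way to confirm that a delivered node matches the per-GPU figures above, the sketch below enumerates the installed GPUs and reports per-device and aggregate memory using standard PyTorch CUDA queries. It is an illustrative check, not a vendor-supplied validation tool; tools such as nvidia-smi report the same information.

    # Illustrative sanity check of an installed 8-GPU HGX B200 node.

    import torch

    def summarize_gpus():
        count = torch.cuda.device_count()       # expect 8 on an HGX B200 baseboard
        print(f"GPUs visible: {count}")
        total_gb = 0.0
        for i in range(count):
            props = torch.cuda.get_device_properties(i)
            mem_gb = props.total_memory / 1e9
            total_gb += mem_gb
            print(f"GPU {i}: {props.name}, {mem_gb:.0f} GB, SMs {props.multi_processor_count}")
        print(f"Aggregate GPU memory: {total_gb / 1000:.2f} TB")

    if __name__ == "__main__":
        summarize_gpus()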