Experience unparalleled performance and efficiency with Arc Compute's NVIDIA HGX B200 systems, powered by the latest Blackwell architecture.

Featuring eight Blackwell GPUs, Arc Compute's NVIDIA HGX B200 servers deliver up to 15x faster inference for trillion-parameter models and up to 12x lower energy consumption compared with the previous generation. With 1.4 TB of GPU memory and 60 TB/s of aggregate memory bandwidth, these systems are engineered for the most demanding AI, data analytics, and HPC workloads, and Arc Compute offers customizable configurations to meet your specific requirements:
- **AI training and inference** — Efficiently train and deploy large-scale models like GPT, Llama, and Stable Diffusion.
- **Data analytics** — Accelerate complex queries and data processing tasks with enhanced performance.
- **HPC** — Tackle advanced simulations in computational fluid dynamics, structural analysis, and more.
| NVIDIA B200 SXM | Specifications |
| --- | --- |
| GPU Architecture | NVIDIA Blackwell architecture |
| FP4 Tensor Core | 18 petaFLOPS |
| FP8/FP6 Tensor Core | 9 petaFLOPS |
| INT8 Tensor Core | 9 petaOPS |
| FP16/BF16 Tensor Core | 4.5 petaFLOPS |
| TF32 Tensor Core | 2.2 petaFLOPS |
| FP64 Tensor Core | 40 teraFLOPS |
| GPU Memory | 180 GB HBM3e |
| GPU Memory Bandwidth | 8 TB/s |
| Multi-Instance GPU (MIG) | Up to 7 instances |
| Decoders | 2x 7 NVDEC, 2x 7 NVJPEG |
| Interconnect | 5th-generation NVLink: 1.8 TB/s; PCIe Gen6: 256 GB/s |
| Max Thermal Design Power (TDP) | Up to 700 W (configurable) |
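As a back-of-the-envelope illustration of what the memory figures above allow, the sketch below checks whether a model's weights fit in the aggregate GPU memory of an eight-GPU system (8 × 180 GB ≈ 1.4 TB). The function name, data-type byte sizes, and the weights-only assumption (no KV cache, activations, or optimizer state) are illustrative assumptions, not part of the product specification.

```python
# Rough capacity check: do a model's weights fit in aggregate HGX B200 memory?
# Figures taken from the specification table above; weights-only estimate.
GPUS = 8                 # GPUs per HGX B200 baseboard
MEM_PER_GPU_GB = 180     # HBM3e capacity per B200 GPU

# Bytes per parameter at common precisions (FP4 packs two params per byte).
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weights_fit(params_billions: float, dtype: str = "fp4", gpus: int = GPUS) -> bool:
    """Return True if the model's weights alone fit in aggregate GPU memory."""
    total_gb = gpus * MEM_PER_GPU_GB
    needed_gb = params_billions * BYTES_PER_PARAM[dtype]  # 1e9 params * bytes / 1e9
    return needed_gb <= total_gb

# A trillion-parameter model at FP4 needs ~500 GB of weights vs ~1,440 GB available.
print(weights_fit(1000, "fp4"))   # fits
print(weights_fit(1000, "fp16"))  # ~2,000 GB of weights does not fit
```

This kind of estimate only bounds the weights; serving a trillion-parameter model in practice also budgets memory for the KV cache and runtime buffers.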