Unleash the NVIDIA Hopper architecture in your data center with NVIDIA H200 Tensor Core GPUs. Perfect for large-scale AI training, high-throughput inference, and advanced HPC workloads.
Arc Compute offers customizable server configurations built by a range of OEMs, each featuring up to 8x NVIDIA HGX™ H200 GPUs.
The NVIDIA H200 was the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s), nearly double the capacity of the NVIDIA H100 Tensor Core GPU and 1.4X its memory bandwidth.
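To make that capacity concrete, the back-of-the-envelope sketch below estimates how much memory a model's weights alone occupy at common precisions, and whether they fit within a single H200's 141 GB. The model sizes used here are illustrative assumptions; real deployments also need headroom for activations, KV cache, and framework overhead.

```python
# Rough estimate of GPU memory needed to hold model weights at a given
# precision, versus the H200's 141 GB of HBM3e. The model sizes below
# are illustrative assumptions, not benchmarks or measured figures.

H200_MEMORY_GB = 141

BYTES_PER_PARAM = {"FP32": 4, "FP16/BF16": 2, "FP8/INT8": 1}

def weights_gb(num_params: float, bytes_per_param: int) -> float:
    """Memory footprint of the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

for params_b in (7, 70, 175):  # hypothetical 7B/70B/175B-parameter models
    for precision, nbytes in BYTES_PER_PARAM.items():
        gb = weights_gb(params_b * 1e9, nbytes)
        fits = "fits" if gb <= H200_MEMORY_GB else "needs multiple GPUs"
        print(f"{params_b}B params @ {precision}: {gb:,.0f} GB -> {fits}")
```
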
- Unprecedented computational power for scientific research and simulations involving large datasets and intricate calculations.
- Faster, more accurate deep learning for rapid advancements in artificial intelligence.
- Remarkable precision for natural language tasks like sentiment analysis and language translation.
- Greater processing speed and efficiency for chatbots and virtual assistants, creating more engaging user experiences.

| Specification | NVIDIA H200 SXM |
| --- | --- |
| GPU architecture | NVIDIA Hopper architecture |
| FP64 | 34 TFLOPS |
| FP64 Tensor Core | 67 TFLOPS |
| FP32 | 67 TFLOPS |
| TF32 Tensor Core | 989 TFLOPS |
| BFLOAT16 Tensor Core | 1,979 TFLOPS |
| FP16 Tensor Core | 1,979 TFLOPS |
| FP8 Tensor Core | 3,958 TFLOPS |
| INT8 Tensor Core | 3,958 TOPS |
| GPU memory | 141 GB |
| GPU memory bandwidth | 4.8 TB/s |
| Decoders | 7 NVDEC, 7 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 16.5GB each |
| Form factor | SXM |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s |
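
For teams validating delivered hardware against this table, here is a minimal sketch using NVML's Python bindings (the `nvidia-ml-py` package, an assumption about your environment) that reads back the device name, memory capacity, and configured power limit at runtime:

```python
# Minimal sketch: query a GPU's name, total memory, and power limit via
# NVML's Python bindings (pip install nvidia-ml-py). Assumes a single
# H200 at device index 0; adjust the index for multi-GPU HGX systems.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    name = pynvml.nvmlDeviceGetName(handle)
    name = name.decode() if isinstance(name, bytes) else name  # bytes in older bindings

    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)                  # reported in bytes
    power_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)   # reported in milliwatts

    print(f"Device:       {name}")
    print(f"Total memory: {mem.total / 1e9:.0f} GB")   # ~141 GB on an H200
    print(f"Power limit:  {power_mw / 1000:.0f} W")    # up to 700W, configurable
finally:
    pynvml.nvmlShutdown()
```

On an 8x HGX H200 system, looping over device indices 0 through 7 would report each GPU in turn.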