Reserved NVIDIA H100 SXM SuperPods
Utilize the best GPUs in the industry today, starting at $1.80/hr
Reserve Your NVIDIA H100 Cloud Cluster
Enable large-scale model training with top-of-the-line NVIDIA H100 SXM GPUs. Arc Compute's cloud clusters are available for a minimum 2-year commitment.
NVIDIA A100 80GB Cloud Instances
NVIDIA A100 40GB Cloud Instances
Transformational AI Training
The H100 features fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, delivering up to 9X faster training over the prior generation for mixture-of-experts (MoE) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; the NVLink Switch System, which accelerates communication for every GPU across nodes; PCIe Gen5; and NVIDIA Magnum IO™ software delivers efficient scalability from small enterprise systems to massive, unified GPU clusters.
Learn More
Deploying H100 GPUs at data center scale delivers outstanding performance and brings the next generation of exascale high-performance computing (HPC) and trillion-parameter AI within the reach of all researchers.
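To make the FP8 claim above concrete: the Transformer Engine trains in an 8-bit floating-point format such as E4M3 (4 exponent bits, 3 mantissa bits), trading precision for throughput and memory savings. The snippet below is a rough, self-contained illustration of what rounding to E4M3 looks like; it is not NVIDIA's implementation, and the function name is our own.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest value representable in FP8 E4M3
    (4 exponent bits, 3 mantissa bits, bias 7) -- an illustrative
    model of the reduced precision the Transformer Engine uses,
    not NVIDIA's actual kernel.
    """
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 448.0)           # 448 is the E4M3 max finite value
    exp = max(math.floor(math.log2(mag)), -6)  # clamp to min normal exponent
    step = 2.0 ** (exp - 3)            # 3 mantissa bits -> spacing 2^(exp-3)
    return sign * round(mag / step) * step

# With only 3 mantissa bits, values near 300 are spaced 32 apart,
# so 300.0 rounds to 288.0 -- which is why FP8 training relies on
# per-tensor scaling to keep values in a well-represented range.
```

This coarse spacing at large magnitudes is the reason FP8 training frameworks pair the format with dynamic per-tensor scale factors.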
Up to 9X Faster AI Training on the Largest Models
NVSwitch + InfiniBand
NVLink is a direct GPU-to-GPU interconnect that scales multi-GPU input/output (IO) within the server and is available in both form factors. Exclusive to SXM5, NVSwitch connects multiple NVLinks to provide all-to-all GPU communication at full NVLink speed within a single node. InfiniBand extends NVSwitch connectivity across nodes to create a seamless, high-bandwidth, multi-node GPU cluster, effectively forming a data-center-sized GPU capable of tackling even the most extensive AI jobs rapidly.
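A back-of-envelope way to see why the per-GPU link bandwidth above matters: under the standard ring all-reduce cost model, each GPU moves roughly 2·(N−1)/N of the gradient payload over its own link during one synchronization. The function below is a hypothetical sketch (not an Arc Compute or NVIDIA tool) that plugs in the 900 GB/s NVLink figure quoted earlier.

```python
def allreduce_time_s(payload_gb: float, n_gpus: int, link_bw_gbps: float) -> float:
    """Estimate ring all-reduce time in seconds.

    Cost model: each GPU sends and receives 2*(N-1)/N of the payload
    over its link bandwidth. Ignores latency and overlap, so this is
    a lower-bound sketch, not a benchmark.
    """
    return 2 * (n_gpus - 1) / n_gpus * payload_gb / link_bw_gbps

# Example: synchronizing 900 GB of gradients across an 8-GPU node
# at 900 GB/s per link takes about 2 * 7/8 = 1.75 s under this model.
```

The same formula shows why inter-node bandwidth (InfiniBand) becomes the bottleneck once N spans multiple nodes: the per-link term barely grows with N, so the slowest link in the ring sets the pace.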