Reserved NVIDIA H100 SXM SuperPods

Utilize the industry's best GPUs today, starting at $1.80/hr per GPU

Reserve Now

Reserve Your NVIDIA H100 Cloud Cluster

Enable large-scale model training with top-of-the-line NVIDIA H100 SXM GPUs. Arc Compute's cloud clusters are available for a minimum 2-year commitment.

2-Year: Starting at $2.00/hr per GPU

3-Year: Starting at $1.80/hr per GPU
Instance Type: 8 x NVIDIA H100 SXM5 GPUs with 80 GB of GPU memory each
vCPUs: 224 vCPUs on Intel or AMD processors
Storage: Minimum of 6 TB local NVMe SSD storage
Network Bandwidth: 3,200 Gbps non-blocking InfiniBand
Reserve Now
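
As a rough illustration of how reserved pricing adds up, the sketch below multiplies the per-GPU hourly rate by the number of GPUs in a node and the hours in the commitment term. The rates and the 8-GPU node size come from the tables on this page; the round-the-clock usage assumption (and ignoring leap days, taxes, and any negotiated discounts) is a simplification for illustration only.

```python
def reserved_cost(rate_per_gpu_hr: float, gpus: int = 8, years: int = 3) -> float:
    """Rough total for one reserved node: rate x GPUs x hours in the term."""
    hours = years * 365 * 24          # assumes 24/7 usage, ignores leap days
    return rate_per_gpu_hr * gpus * hours

# Example: an 8 x H100 SXM node at the 3-year rate of $1.80/hr per GPU.
print(f"${reserved_cost(1.80, gpus=8, years=3):,.0f}")   # -> $378,432
```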

NVIDIA A100 80GB Cloud Instances

6-Month: Starting at $1.20/hr per GPU

1-Year: Starting at $1.15/hr per GPU

3-Year: Starting at $1.05/hr per GPU
Instance Type: 8 x NVIDIA A100 SXM4 GPUs with 80 GB of GPU memory each
vCPUs: 224 vCPUs on Intel or AMD processors
Storage: Minimum of 6 TB local NVMe SSD storage
Network Bandwidth: 1,600 Gbps non-blocking InfiniBand
Reserve Now

NVIDIA A100 40GB Cloud Instances

6-Month: Starting at $1.15/hr per GPU

1-Year: Starting at $1.10/hr per GPU

3-Year: Starting at $1.00/hr per GPU
Instance Type: 8 x NVIDIA A100 SXM4 GPUs with 40 GB of GPU memory each
vCPUs: 224 vCPUs on Intel or AMD processors
Storage: Minimum of 6 TB local NVMe SSD storage
Network Bandwidth: 800 Gbps non-blocking InfiniBand
Reserve Now

Transformational AI Training

The H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision, delivering up to 9X faster training than the prior generation for mixture-of-experts (MoE) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; the NVLink Switch System, which accelerates communication across every GPU in multi-node clusters; PCIe Gen5; and NVIDIA Magnum IO™ software delivers efficient scalability from small enterprise systems to massive, unified GPU clusters.
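
To make the FP8 Transformer Engine point concrete, here is a minimal sketch of running one layer under FP8 autocasting with NVIDIA's Transformer Engine library for PyTorch. The layer size, batch size, and recipe settings are illustrative assumptions rather than a tuned training configuration.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe: HYBRID uses E4M3 for activations/weights, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# A single TE layer standing in for a transformer block (sizes are placeholders).
layer = te.Linear(4096, 4096, bias=True).cuda()
inp = torch.randn(32, 4096, device="cuda", requires_grad=True)

# Forward and backward pass with FP8 math on H100 Tensor Cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)
out.sum().backward()
```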

Deploying H100 GPUs at data center scale delivers outstanding performance and brings the next generation of exascale high-performance computing (HPC) and trillion-parameter AI within the reach of all researchers.
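
For scaling training across nodes, a common pattern is PyTorch DistributedDataParallel over the NCCL backend, which routes intra-node traffic over NVLink and inter-node traffic over InfiniBand where the fabric supports it. This is a minimal sketch assuming a launch with torchrun; the model, batch, and hyperparameters are placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every worker process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")  # NVLink within a node, InfiniBand between nodes

    model = DDP(torch.nn.Linear(4096, 4096).cuda(), device_ids=[local_rank])  # placeholder model
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(32, 4096, device="cuda")  # placeholder batch
    loss = model(x).square().mean()
    loss.backward()                           # gradients are all-reduced by DDP
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g. torchrun --nnodes=2 --nproc-per-node=8 train.py
```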
Learn More
Chart: Up to 9X higher AI training performance on the largest models (H100 vs. A100).