Next-Generation
GPU Acceleration

Accelerating with Arc

CDW
Intel
Nvidia
Supermicro
ThinkOn
Data Machines
AI Hub
Tenstorrent
Dell
Liqid

Better Performance. Lower Cost.

Thanks to our next-generation GPU hypervisor, you'll see up to 80% better performance on Arc's GPU cloud instances compared to offerings from other public clouds like AWS, and you'll spend up to 60% less. Optimized for a wide range of AI, ML, DL, and HPC use cases, our cloud services give you the flexibility you need with custom solutions built just for you. With Arc Compute, you'll never compromise again.

The Arc Compute Difference

Superior Performance

Arc's proprietary software increases GPU performance for AI/ML/DL workloads by up to 80%.

Flexible Infrastructure

Custom GPU instances and Composable Disaggregated Infrastructure.

Max GPU Utilization

Our GPU hypervisor allocates GPU cores and VRAM at run time for 100% utilization at all times.

Transparent Pricing

With fixed monthly prices, you won't be charged any additional fees for usage or ingress/egress.

Custom GPU Infrastructure

Arc Compute is dedicated to providing our customers with the best cloud computing experience possible. Unlike other cloud providers, we have developed our own GPU hypervisor, called Hyperborea. Hyperborea allows us to cut out the middleman and virtualize our GPUs at a drastically reduced cost. We pass these savings on to our customers with fixed monthly pricing for GPU compute and no additional fees such as ingress/egress charges. We're committed to working with our customers to build a custom cloud that meets their unique business needs. We have no set menu for our GPU instances because we understand that different workloads across many industries have unique compute requirements.
Speak to a GPU Expert

The Arc Stack

The ultimate software stack for AI and Machine Learning

We've optimized our GPU instances for training and inference. Preinstalled on all of our instances, the Arc Stack is always ready to go with managed updates. Spin up a virtual machine with TensorFlow, TensorBoard, PyTorch, Keras, ONNX, Weights & Biases, NumPy, SciPy, JAX, CUDA, cuDNN, and all NVIDIA drivers already waiting for you. Never waste time on setup again.
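
As an illustration, a quick way to confirm the stack on a freshly provisioned instance is to import each framework and check that it can see the GPUs. This is a minimal sketch, not part of the Arc Stack itself, and it assumes the frameworks listed above are installed under their standard package names.

# Sanity check of a freshly provisioned instance (illustrative sketch only).
import torch
import tensorflow as tf
import jax
import numpy as np
import scipy
import onnx

print("PyTorch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("TensorFlow", tf.__version__, "| GPUs:", tf.config.list_physical_devices("GPU"))
print("JAX devices:", jax.devices())
print("NumPy", np.__version__, "| SciPy", scipy.__version__, "| ONNX", onnx.__version__)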

Reserve GPUs
TensorFlow + TensorBoard

Neural Networks made easy by Google + the standard in experiment tracking

Learn More
PyTorch

Neural Networks made easy by Meta

Learn More
Keras

Neural Networks made easier by Google

Learn More
ONNX

Inter-framework model sharing + inference made easy

Learn More
Weights & Biases

Experiment tracking made easy

Learn More
NumPy

Math made easy

Learn More
SciPy

Traditional data science made easy

Learn More
JAX

Neural Networks made faster by Google

Learn More

Free features included on all our servers

Ingress & Egress: Arc Compute doesn't charge ingress or egress fees.
24/7 Support: Arc Compute offers 24/7 support.
Managed Security-as-a-Service: Arc Compute offers managed security-as-a-service.
Preconfigured Images: Arc Compute offers preconfigured images.

Find Your Use Case

Running your workloads in our cloud opens up a world of opportunities, flexibility, and performance at a lower cost.
Data Analytics
Analyze data in Arc Compute's GPU Cloud
Learn More
Computational Drug Discovery
Conduct computer-aided drug design in Arc Compute's GPU Cloud
Learn More
Artificial Intelligence
Train AI models in Arc Compute's GPU Cloud
Learn More
Machine & Deep Learning
Train neural networks and other ML/DL models in Arc Compute's GPU Cloud
Learn More
3D Modeling & Simulations
Create 3D models and run simulations in Arc Compute's GPU Cloud
Learn More
Natural Language Processing
Train NLP models in Arc Compute's GPU Cloud
Learn More
Video Analysis & Rendering
Analyze and render video in Arc Compute's GPU Cloud
Learn More
Crypto Mining
Mine cryptocurrency in Arc Compute's GPU Cloud
Learn More
Request Custom Quote
We like to stay ahead of the curve.

Simultaneous multi-virtual GPU

A unique feature of our hypervisor is the ability to attach multiple multiplexed GPUs to a single virtual machine. Sharing multiplexed GPUs among multiple VMs allows for 100% GPU resource utilization at all times, thanks to memory allocation at run time.
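
From inside the virtual machine, those multiplexed GPUs appear as ordinary CUDA devices, so standard framework tooling uses them directly. Below is a minimal sketch using PyTorch's built-in nn.DataParallel; it is one example of how a tenant might consume a multi-GPU instance, not Arc-specific code.

import torch
import torch.nn as nn

# Every virtual GPU attached to the VM shows up as a regular CUDA device.
print(torch.cuda.device_count(), "GPUs visible to this VM")

model = nn.Linear(1024, 10)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicate the model across all visible GPUs
model = model.to("cuda")

x = torch.randn(256, 1024, device="cuda")
y = model(x)  # each forward pass is split across the attached GPUs
print("output shape:", tuple(y.shape))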

Run-time GPU memory allocation

Host-mediated, vtg_balloon-like API calls use hot plugging to maintain IOMMU address separation while allowing GPU memory to be allocated at run time.
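
The vtg_balloon mechanism is host-side and not exposed to guests, so there is no public API to show here. From inside a VM, the practical effect is that the VRAM available to the guest can change over time; the sketch below uses a standard PyTorch call to observe free and total device memory, purely as an illustration of that guest-side view and under the assumption that the reported figures track the hypervisor's current allocation.

import time
import torch

def report(tag: str) -> None:
    # torch.cuda.mem_get_info() returns (free, total) bytes for the current device.
    free, total = torch.cuda.mem_get_info()
    print(f"{tag}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB total")

report("before")
time.sleep(60)  # window in which the host may rebalance VRAM between VMs
report("after")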

Spend less, perform better

Virtualizing GPUs often involves extremely costly third-party software. We run our own software to provide you with better performance at lower costs.

Wider hardware support

Our virtualization software supports enterprise and consumer GPUs from all the major vendors.

Arc Compute GPU Cloud

Choose Your GPUs

NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.
Learn More
S-A100-80
1 x NVIDIA A100 (80 GB)
24 vCPUs
62 GB RAM
1-5 TB Storage
Starting at
$4.90
per hour
M-A100-80
2 x NVIDIA A100 (80 GB)
48 vCPUs
125 GB RAM
1-5 TB Storage
Starting at
$7.53
per hour
L-A100-80
4 x NVIDIA A100 (80 GB)
96 vCPUs
250 GB RAM
5-10 TB Storage
Starting at
$12.56
per hour
XL-A100-80
8 x NVIDIA A100 (80 GB)
128 vCPUs
500 GB RAM
10-20 TB Storage
Starting at
$22.83
per hour
The A100 is also available with 40 GB of memory, delivering the same NVIDIA Ampere Architecture at a lower hourly price.
Learn More
S-A100-40
1 x NVIDIA A100 (40 GB)
24 vCPUs
62 GB RAM
1-5 TB Storage
Starting at
$4.40
per hour
M-A100-40
2 x NVIDIA A100 (40 GB)
48 vCPUs
125 GB RAM
1-5 TB Storage
Starting at
$6.78
per hour
L-A100-40
4 x NVIDIA A100 (40 GB)
96 vCPUs
250 GB RAM
5-10 TB Storage
Starting at
$11.30
per hour
XL-A100-40
8 x NVIDIA A100 (40 GB)
128 vCPUs
500 GB RAM
10-20 TB Storage
Starting at
$20.55
per hour
The NVIDIA A40 GPU is an evolutionary leap in performance and multi-workload capabilities from the data center, combining best-in-class professional graphics with powerful compute and AI acceleration to meet today’s design, creative, and scientific challenges. Driving the next generation of virtual workstations and server-based workloads, NVIDIA A40 brings state-of-the-art features for ray-traced rendering, simulation, virtual production, and more to professionals anytime, anywhere.
Learn More
S-A40-48
1 x NVIDIA A40 (48 GB)
24 vCPUs
62 GB RAM
1-5 TB Storage
Starting at
$3.42
per hour
M-A40-48
2 x NVIDIA A40 (48 GB)
48 vCPUs
125 GB RAM
1-5 TB Storage
Starting at
$5.27
per hour
L-A40-48
4 x NVIDIA A40 (48 GB)
96 vCPUs
250 GB RAM
5-10 TB Storage
Starting at
$8.79
per hour
XL-A40-48
8 x NVIDIA A40 (48 GB)
128 vCPUs
500 GB RAM
10-20 TB Storage
Starting at
$15.98
per hour
These hourly prices are for GPUs that are reserved for a 1-year period. Our prices are even lower for longer reservation periods.
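
For rough budgeting, the hourly rates above translate into monthly and annual figures as follows. The sketch simply multiplies each listed 1-year-reservation rate by a 730-hour month (8,760 hours per year); it assumes the instance runs continuously and excludes storage options.

# Rough cost projection from the hourly rates listed above (illustrative only).
HOURS_PER_YEAR = 8760
HOURS_PER_MONTH = HOURS_PER_YEAR / 12  # 730 hours

hourly_rates = {
    "S-A100-80": 4.90, "M-A100-80": 7.53, "L-A100-80": 12.56, "XL-A100-80": 22.83,
    "S-A100-40": 4.40, "M-A100-40": 6.78, "L-A100-40": 11.30, "XL-A100-40": 20.55,
    "S-A40-48": 3.42, "M-A40-48": 5.27, "L-A40-48": 8.79, "XL-A40-48": 15.98,
}

for name, rate in hourly_rates.items():
    print(f"{name:>10}: ${rate * HOURS_PER_MONTH:>8,.0f}/month  ${rate * HOURS_PER_YEAR:>9,.0f}/year")

For example, S-A100-80 at $4.90 per hour works out to roughly $3,577 per month, or about $42,900 per year.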
Request Custom Quote