Linux: Ubuntu or CentOS.
Native PyTorch support; TensorFlow and other frameworks via ONNX.
C++-based low-level programming model for adding new ML ops or non-ML applications, so you’re not dependent on CUDA.
With the addition of Arc Compute’s mediated device driver, users can unlock even greater power and efficiencies.
An Arc exclusive, “Simultaneous Multi-Virtual GPU” delivers superior performance and much higher utilization by dynamically reallocating resources at run time, shifting execution capacity and GPU cores away from under-utilized or idle virtual GPUs.
Arc’s proprietary hypervisor stack was developed to give users unparalleled control over the GPUs in their system. The stack is under constant development to keep innovating in GPU virtualization, and it is an ideal solution for training large workloads (artificial intelligence, machine learning, deep learning).
Thanks to our unique ability to exploit the efficiencies of “Simultaneous Multi-Virtual GPU,” we can train heavy workloads much faster than the standard straight-passthrough approach. Training a NeRF-SH neural network model took the industry-standard configuration 9h 15m 11s with four Nvidia A100-SXM4 (40 GB) GPUs. Our exclusive configuration took only 8h 27m 34s with four Arc A100 (40 GB) GPUs, thanks to our hypervisor stack, Hyperborea™.
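The two timings quoted above imply a time savings of about 47 minutes, or roughly an 8.6% reduction. A quick sanity check of that arithmetic (the durations are taken directly from the benchmark figures above):

```python
from datetime import timedelta

# Reported NeRF-SH training times from the benchmark above.
baseline = timedelta(hours=9, minutes=15, seconds=11)    # 4x A100, straight passthrough
hyperborea = timedelta(hours=8, minutes=27, seconds=34)  # 4x A100 under Hyperborea

saved = baseline - hyperborea
reduction_pct = saved / baseline * 100

print(f"Time saved: {saved}")               # 0:47:37
print(f"Reduction:  {reduction_pct:.1f}%")  # 8.6%
```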
On-Premises or In the Cloud
Hyperborea is used in Arc’s GPU cloud service and can also be licensed for your own cloud or on-premises environments. It's easily integrated and extremely powerful. Stop relying on legacy software. Gain a competitive advantage by taking your computing to the next level with Arc Compute!