Harnessing L2 Cache Optimizations for NVIDIA GPUs

The Truth Behind GPU Optimization

GPU Optimization Challenges Across Industries: From Thread Divergence to Memory Efficiency

September 13, 2023

Learn why GPUs, while incredibly powerful, face hidden challenges that impact their optimization. From thread divergence to memory efficiency, explore the nuanced world of GPU computing and how these challenges are overcome in real-world applications.

Addressing the Strained Supply of NVIDIA H100 SXM5 GPUs

How ArcHPC can help optimize your current GPU infrastructure

July 12, 2023

With the current boom in Generative AI, demand for enterprise graphics cards is at an all-time high, and NVIDIA is dominating the industry.

GPU Utilization & Total Cost of Infrastructure Ownership

How CTOs and HPC managers are increasing GPU utilization and lowering the TCO of their on-premise infrastructure

March 2, 2023

One of the primary issues faced across industries is the under-utilization of computing resources, especially GPUs. 

NVIDIA H100 PCIe vs. SXM5

Which GPU is right for your company?

February 27, 2023

With NVIDIA being the leading player in the GPU market, it’s challenging to determine which NVIDIA GPU server is suitable for your organization. In this blog post, we compare the PCIe and SXM5 form factors for NVIDIA H100 GPUs, the highest-performing GPUs currently available, and contrast performance and costs to help you make an informed decision.

Addressing Utilization Issues with GPU Job Schedulers

How ArcHPC resolves them

February 10, 2023

A GPU job scheduler is a tool that manages and schedules the allocation of GPUs in a cluster environment; however, these schedulers have drawbacks when it comes to maximizing utilization and performance.

Our Latest GPU Systems

Supermicro HGX B200 System

NVIDIA B200 HGX Servers

The NVIDIA HGX B200 revolutionizes data centers with accelerated computing and generative AI powered by NVIDIA Blackwell GPUs. Featuring eight GPUs, it delivers 15X faster trillion-parameter inference with 12X lower costs and energy use, supported by 1.4 TB of GPU memory and 60 TB/s bandwidth. Designed for demanding AI, analytics, and HPC workloads, the HGX B200 sets a new performance standard.

Dell H200 System

NVIDIA H200 HGX Servers

The NVIDIA H200 was the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s). That’s nearly double the capacity of the NVIDIA H100 Tensor Core GPU with 1.4X more memory bandwidth.

8x H100 SXM5 Server

NVIDIA H100 HGX Servers

Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. This is NVIDIA's best-selling enterprise GPU and one of the most powerful available.