The gold standard for AI infrastructure.

The World’s Proven Choice for Enterprise AI

Expand the frontiers of business innovation and optimization with NVIDIA DGX H100. The latest iteration of NVIDIA’s legendary DGX systems and the foundation of NVIDIA DGX SuperPOD, DGX H100 is the AI powerhouse that’s accelerated by the groundbreaking performance of the NVIDIA H100 Tensor Core GPU.

The Most Complete AI Platform

NVIDIA AI software solutions

The Cornerstone of Your AI Center of Excellence

DGX H100 is the fourth generation of the world's premier purpose-built AI infrastructure: a fully optimized platform powered by NVIDIA Base Command, the operating system of the accelerated data center, along with a rich ecosystem of third-party support and access to expert advice from NVIDIA professional services.

NVIDIA DGX SuperPOD for Enterprise

Break Through the Barriers to AI at Scale

NVIDIA DGX H100 delivers up to 9X more performance than the previous generation, 2X faster networking, and high-speed scalability for NVIDIA DGX SuperPOD. The next-generation architecture is supercharged for the largest workloads, such as natural language processing and deep learning recommendation models.

Leadership-Class Infrastructure

Leadership-Class Infrastructure on Your Terms

DGX H100 can be installed on-premises for direct management, colocated in NVIDIA DGX-Ready data centers, or accessed through NVIDIA-certified managed service providers. And with DGX-Ready Lifecycle Management, organizations get a predictable financial model to keep their deployment at the leading edge.

Explore DGX H100

  1. 8x NVIDIA H100 GPUs With 640 Gigabytes of Total GPU Memory
    18x NVIDIA® NVLink® connections per GPU, 900 gigabytes per second of bidirectional GPU-to-GPU bandwidth

  2. 4x NVIDIA NVSwitches
7.2 terabytes per second of bidirectional GPU-to-GPU bandwidth, 1.5X more than the previous generation

  3. 10x NVIDIA ConnectX®-7 400 Gigabits-Per-Second Network Interfaces
    1 terabyte per second of peak bidirectional network bandwidth

  4. Dual x86 CPUs and 2 Terabytes of System Memory
    Powerful CPUs for the most intensive AI jobs

  5. 30 Terabytes NVMe SSD
    High-speed storage for maximum performance
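The aggregate bandwidth figures above follow from simple per-link arithmetic. As a rough sanity check, the sketch below recomputes them from the part counts; the per-link and per-port rates are assumptions inferred by dividing the quoted totals by the quoted counts (consistent with public NVLink 4 and ConnectX-7 specifications).

```python
# Sanity-check the aggregate bandwidth figures in the spec list above.
GPUS = 8
NVLINKS_PER_GPU = 18
NVLINK_BIDIR_GBPS = 50  # GB/s bidirectional per NVLink link (assumed rate)

# Item 1: per-GPU NVLink bandwidth
per_gpu = NVLINKS_PER_GPU * NVLINK_BIDIR_GBPS
print(per_gpu)                    # 900 GB/s per GPU

# Item 2: all 8 GPUs communicating across the NVSwitch fabric
nvswitch_total = GPUS * per_gpu
print(nvswitch_total / 1000)      # 7.2 TB/s

# Item 3: 10 ConnectX-7 ports at 400 Gb/s each
NICS = 10
NIC_GBITS = 400                   # Gb/s per port
net_unidir_gbs = NICS * NIC_GBITS / 8   # bits -> bytes: 500 GB/s per direction
print(2 * net_unidir_gbs / 1000)        # 1.0 TB/s bidirectional peak
```

Each computed total matches the corresponding figure quoted in the list, which is a useful cross-check when comparing configurations.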



Leadership-Class AI Infrastructure