The gold standard for AI infrastructure.
Expand the frontiers of business innovation and optimization with NVIDIA DGX™ H100. The latest iteration of NVIDIA’s legendary DGX systems and the foundation of NVIDIA DGX SuperPOD™, DGX H100 is the AI powerhouse that’s accelerated by the groundbreaking performance of the NVIDIA H100 Tensor Core GPU.
DGX H100 is the fourth generation of the world’s premier purpose-built AI infrastructure: a fully optimized platform powered by NVIDIA Base Command, the operating system of the accelerated data center, backed by a rich ecosystem of third-party support and access to expert advice from NVIDIA professional services.
NVIDIA DGX H100 features up to 9X more performance, 2X faster networking, and high-speed scalability for NVIDIA DGX SuperPOD. The next-generation architecture is supercharged for the largest workloads such as natural language processing and deep learning recommendation models.
DGX H100 can be installed on-premises for direct management, colocated in NVIDIA DGX-Ready data centers, or accessed through NVIDIA-certified managed service providers. And with DGX-Ready Lifecycle Management, organizations get a predictable financial model to keep their deployment at the leading edge.
8x NVIDIA H100 GPUs with 640 Gigabytes of Total GPU Memory: 18x NVIDIA® NVLink® connections per GPU, 900 gigabytes per second of bidirectional GPU-to-GPU bandwidth
4x NVIDIA NVSwitches™: 7.2 terabytes per second of bidirectional GPU-to-GPU bandwidth, 1.5X more than the previous generation
10x NVIDIA ConnectX®-7 400 Gigabits-Per-Second Network Interfaces: 1 terabyte per second of peak bidirectional network bandwidth
Dual x86 CPUs and 2 Terabytes of System Memory: Powerful CPUs for the most intensive AI jobs
30 Terabytes NVMe SSD: High-speed storage for maximum performance
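To illustrate how this GPU configuration appears to software, the following is a minimal sketch (not an official NVIDIA sample) using the standard CUDA runtime API to enumerate the node's GPUs, report per-GPU memory, and verify that every GPU pair can use peer-to-peer access over the NVLink/NVSwitch fabric. The expected device count of 8 and all-pairs peer access are assumptions about a typical DGX H100 configuration.

    /*
     * Sketch: enumerate GPUs on a DGX H100-class node and check that
     * every GPU pair reports peer-to-peer access (as expected with a
     * fully connected NVSwitch fabric). Assumes 8 visible GPUs.
     */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        int gpu_count = 0;
        cudaGetDeviceCount(&gpu_count);          /* expect 8 on DGX H100 */
        printf("GPUs visible: %d\n", gpu_count);

        size_t free_b = 0, total_b = 0;
        for (int i = 0; i < gpu_count; ++i) {
            cudaSetDevice(i);
            cudaMemGetInfo(&free_b, &total_b);   /* roughly 80 GB per H100, 640 GB total */
            printf("GPU %d: %.1f GB memory\n", i, total_b / 1e9);
        }

        /* With NVSwitch, every GPU pair should report peer access = 1. */
        for (int i = 0; i < gpu_count; ++i) {
            for (int j = 0; j < gpu_count; ++j) {
                if (i == j) continue;
                int can_access = 0;
                cudaDeviceCanAccessPeer(&can_access, i, j);
                if (!can_access)
                    printf("GPU %d -> GPU %d: no peer access\n", i, j);
            }
        }
        printf("Peer-access check complete.\n");
        return 0;
    }

Compiled with nvcc and run on the system, a check like this would confirm the all-to-all GPU connectivity that the NVSwitch bandwidth figures above describe.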
Leadership-Class AI Infrastructure