NVIDIA Converged Accelerators

Where powerful performance, enhanced networking, and robust security come together—in one package.

Faster, More Secure AI Systems

In one unique, efficient architecture, NVIDIA converged accelerators combine the powerful performance of NVIDIA GPUs with the enhanced networking and security of NVIDIA smart network interface cards (SmartNICs) and data processing units (DPUs). Deliver maximum performance and enhanced security for I/O-intensive, GPU-accelerated workloads, from the data center to the edge.

Unprecedented GPU Performance

NVIDIA Tensor Core GPUs deliver unprecedented performance and scalability for AI, high-performance computing (HPC), data analytics, and other compute-intensive workloads. And with Multi-Instance GPU (MIG), each GPU can be partitioned into multiple GPU instances—fully isolated and secured at the hardware level. Systems can be configured to offer right-sized GPU acceleration for optimal utilization and sharing across applications big and small in both bare-metal and virtualized environments.
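To make the "right-sized" idea concrete, the sketch below budgets a set of requested MIG instance profiles against a GPU's compute slices. The profile names and slice counts follow NVIDIA's published MIG geometry for the A100 (seven slices) and A30 (four slices), but the planner function itself is a hypothetical illustration, not an NVIDIA API; real MIG placement also enforces memory-slice and alignment rules beyond this simple budget.

```python
# Illustrative sketch only: models how MIG carves a GPU into fixed-size
# instances. Profile names and slice counts match NVIDIA's published MIG
# geometry for the A100 (7 compute slices) and A30 (4 compute slices);
# the fits() planner is hypothetical, not an NVIDIA API.

# Compute slices consumed by each MIG profile (the "Ng" prefix).
A100_PROFILES = {"1g.5gb": 1, "2g.10gb": 2, "3g.20gb": 3, "4g.20gb": 4, "7g.40gb": 7}
A30_PROFILES = {"1g.6gb": 1, "2g.12gb": 2, "4g.24gb": 4}

def fits(profiles, requested, total_slices):
    """Return True if the requested instances fit within the slice budget."""
    needed = sum(profiles[name] for name in requested)
    return needed <= total_slices

# An A100 (7 slices) can host seven 1g.5gb instances...
print(fits(A100_PROFILES, ["1g.5gb"] * 7, total_slices=7))           # True
# ...but not a 3g.20gb alongside a full 7g.40gb.
print(fits(A100_PROFILES, ["3g.20gb", "7g.40gb"], total_slices=7))   # False
# An A30 (4 slices) supports up to four 1g.6gb instances.
print(fits(A30_PROFILES, ["1g.6gb"] * 4, total_slices=4))            # True
```

Each instance runs with its own isolated compute and memory, which is what allows services of very different sizes to share one physical GPU safely.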

Enhanced Networking and Security

The NVIDIA® ConnectX® family of smart network interface cards (SmartNICs) offers best-in-class network performance, advanced hardware offloads, and accelerations. NVIDIA BlueField® DPUs combine the performance of ConnectX with full infrastructure-on-chip programmability. By offloading, accelerating, and isolating networking, storage, and security services, BlueField DPUs provide a secure, accelerated infrastructure for any workload in any environment.

A New Level of Data Efficiency

NVIDIA converged accelerators include an integrated PCIe switch, allowing data to travel between the GPU and network without flowing across the server PCIe system. This enables a new level of data center performance, efficiency, and security for I/O-intensive, GPU-accelerated workloads.

A More Powerful, Secure Enterprise

High-performance 5G

NVIDIA Aerial is an application framework for building high-performance, software-defined, cloud-native 5G networks to address increasing user demand. It enables GPU-accelerated signal and data processing for 5G virtual radio access networks (vRANs). NVIDIA converged accelerators provide the highest-performing platform for running 5G workloads. Because data doesn’t need to go through the host PCIe system, processing latency is greatly reduced. The resulting higher throughput also allows for a greater subscriber density per server.

AI-Based Cybersecurity

Converged accelerators open up a new range of possibilities for AI-based cybersecurity and networking. The DPU’s Arm cores can be programmed using the NVIDIA Morpheus application framework to perform GPU-accelerated advanced network functions, such as threat detection, data leak prevention, and anomalous behavior profiling. GPU processing can be applied directly to network traffic at a high data rate, and data travels on a direct path between the GPU and DPU, providing better isolation.

Accelerating AI-on-5G at the Edge

NVIDIA AI-on-5G is made up of the NVIDIA EGX platform, the NVIDIA Aerial SDK for software-defined 5G vRANs, and enterprise AI frameworks, including SDKs such as NVIDIA Isaac and NVIDIA Metropolis. This platform enables edge devices such as video cameras, industrial sensors, and robots to use AI and communicate with the data center over 5G. Converged cards make it possible to provide all this functionality in a single enterprise server, without having to deploy more costly purpose-built systems. The same converged card used to accelerate 5G signal processing can also be used for edge AI, with NVIDIA Multi-Instance GPU (MIG) technology making it possible to share the GPU among several different applications.

Balanced, Optimized Design

Integrating a GPU, DPU, and PCIe switch into a single device creates a balanced architecture by design. In systems where multiple GPUs and DPUs are desired, a converged accelerator card avoids contention on the server’s PCIe system, so the performance scales linearly with additional devices. In addition, a converged card provides much more predictable performance. Having these components on one physical card also improves space and energy efficiency. Converged cards significantly simplify deployment and ongoing maintenance, particularly when installing in volume servers at scale.

Meet NVIDIA’s Converged Accelerators

These accelerators enable data-intensive edge and data center workloads to run with maximum security and performance.



The NVIDIA A30X combines the NVIDIA A30 Tensor Core GPU with the BlueField-2 DPU. With MIG, the GPU can be partitioned into as many as four GPU instances, each running a separate service.

The design of this card provides a good balance of compute and input/output (I/O) performance for use cases such as 5G vRAN and AI-based cybersecurity. Multiple services can run on the GPU, with the low latency and predictable performance provided by the onboard PCIe switch.
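In practice, pinning each service to its own MIG instance is typically done through the `CUDA_VISIBLE_DEVICES` environment variable, which CUDA 11+ accepts in `MIG-<UUID>` form. The helper below is a minimal, hypothetical sketch of launching one service per instance; the UUIDs shown are placeholders for values you would read from `nvidia-smi -L` on a real system.

```python
import os
import subprocess

def mig_env(mig_uuid):
    """Build an environment that confines CUDA to a single MIG instance.

    CUDA 11+ accepts MIG instance identifiers of the form "MIG-<UUID>"
    in CUDA_VISIBLE_DEVICES; a process launched with this environment
    sees only that slice of the GPU.
    """
    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = mig_uuid
    return env

def launch_service(cmd, mig_uuid):
    """Start one service pinned to one MIG instance (illustrative sketch)."""
    return subprocess.Popen(cmd, env=mig_env(mig_uuid))

# Placeholder UUIDs; on a real system, list them with `nvidia-smi -L`.
instances = ["MIG-11111111-1111-1111-1111-111111111111",
             "MIG-22222222-2222-2222-2222-222222222222"]
# e.g. one inference service per instance:
# procs = [launch_service(["python", "serve.py"], uuid) for uuid in instances]
```

Because each process sees only its own instance, the services stay isolated from one another even while sharing the same physical card.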


The NVIDIA A100X brings together the power of the NVIDIA A100 Tensor Core GPU with the BlueField-2 DPU. With MIG, each A100 can be partitioned into as many as seven GPU instances, allowing even more services to run simultaneously.

The A100X is ideal for use cases where the compute demands are more intensive. Examples include 5G with massive multiple-input and multiple-output (MIMO) capabilities, AI-on-5G deployments, and specialized workloads such as signal processing and multi-node training.

Sign Up for the NVIDIA Converged Accelerator Developer Kit

Interested in building the next generation of edge AI and cybersecurity applications? Want to be one of the first people to get hands-on experience with the new converged accelerators? Sign up to receive information about the Converged Accelerator Developer Kit and to get early access to the hardware and software components.

Stay Up To Date

Sign up to get the latest information and resources on NVIDIA EGX converged accelerators, straight to your inbox.