Workgroup Appliance for the Age of AI
Data science teams are at the leading edge of innovation, but they’re often left searching for spare AI compute cycles to complete projects. They need a dedicated resource that can plug in anywhere and deliver maximum performance for multiple simultaneous users, wherever in the world they work. NVIDIA DGX Station™ A100 brings AI supercomputing to data science teams, offering data center technology without a data center or additional IT infrastructure. Powerful performance, a fully optimized software stack, and direct access to NVIDIA DGXperts ensure faster time to insight.
Rent a DGX Station A100 for your data science team! Get monthly access to a supercomputer powered by four of the world’s fastest AI accelerators with 320GB of total GPU memory. Install it almost anywhere, and return it when you’re done.
With DGX Station A100, organizations can provide multiple users with a centralized AI resource for all workloads—training, inference, data analytics—that delivers an immediate on-ramp to NVIDIA DGX™-based infrastructure and works alongside other NVIDIA-Certified Systems. And with Multi-Instance GPU (MIG), it’s possible to allocate up to 28 separate GPU devices to individual users and jobs.
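The 28-device figure comes from partitioning each of the four A100 GPUs into seven MIG instances. A minimal sketch of how that partitioning might be done with `nvidia-smi` is below; the device index and the `1g.10gb` profile ID (19) are assumptions about this particular system, so verify them with `nvidia-smi mig -lgip` on your own hardware.

```shell
# Enable MIG mode on GPU 0 (repeat for GPUs 1-3; may require a GPU reset).
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports, to confirm profile IDs.
nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances on GPU 0, each with its own compute
# instance (-C). Seven instances per GPU x 4 GPUs = 28 isolated devices.
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Confirm the resulting MIG devices, which jobs can target individually.
nvidia-smi -L
```

Each MIG instance appears as its own GPU device with dedicated memory and compute, so one user's job cannot interfere with another's.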
DGX Station A100 is a server-grade AI system that doesn’t require data center power and cooling. It includes four NVIDIA A100 Tensor Core GPUs, a top-of-the-line, server-grade CPU, super-fast NVMe storage, and leading-edge PCIe Gen4 buses, along with remote management so you can manage it like a server.
Designed for today's agile data science teams working in corporate offices, labs, research facilities, or even from home, DGX Station A100 requires no complicated installation or significant IT infrastructure. Simply plug it into any standard wall outlet to get up and running in minutes and work from anywhere.
NVIDIA DGX Station A100 is the world’s only office-friendly system with four fully interconnected and MIG-capable NVIDIA A100 GPUs, leveraging NVIDIA® NVLink® for running parallel jobs and multiple users without impacting system performance. Train large models using a fully GPU-optimized software stack and up to 320 gigabytes (GB) of GPU memory.
Deep learning datasets are becoming larger and more complex, with workloads like conversational AI, recommender systems, and computer vision becoming increasingly prevalent across industries. NVIDIA DGX Station A100, which comes with an integrated software stack, is designed to deliver the fastest time to solution on complex AI models compared with PCIe-based workstations.
Inference workloads are typically deployed in the data center, where they can utilize every available compute resource and scale out on agile, elastic infrastructure. NVIDIA DGX Station A100 is perfectly suited for testing inference performance and results locally before deploying to the data center: integrated technologies like MIG accelerate inference workloads and provide the high throughput and real-time responsiveness needed to bring AI applications to life.
Every day, businesses are generating and collecting unprecedented amounts of data. This massive amount of information represents a missed opportunity for those not using GPU-accelerated analytics. The more data you have, the more you can learn. With NVIDIA DGX Station A100, data science teams can derive actionable insights from their data faster than ever before.
High-performance computing (HPC) is one of the most essential tools fueling the advancement of science. Accelerating more than 700 applications across a broad range of domains, NVIDIA GPUs are the engine of the modern HPC data center. Equipped with four NVIDIA A100 Tensor Core GPUs, DGX Station A100 is the perfect system for developers to test scientific workloads before deploying on their HPC clusters, delivering breakthrough performance at the office or even from home.
High-performance training accelerates your productivity, which means faster time to insight and faster time to market.
BERT Large Pre-Training Phase 1: Over 3X Faster
DGX Station A100 320GB; Batch Size=64; Mixed Precision; With AMP; Real Data; Sequence Length=128

BERT Large Inference: Over 4X Faster
DGX Station A100 320GB; Batch Size=256; INT8 Precision; Synthetic Data; Sequence Length=128; cuDNN Version=8.0.4

ResNet-50 V1.5 Training: Linear Scalability
DGX Station A100 320GB; Batch Size=192; Mixed Precision; Real Data; cuDNN Version=8.0.4; NCCL Version=2.7.8; NGC MXNet 20.10 Container
Enterprises across major industries can gain faster insights with complete solutions specific to their domain, as the following customer examples show.
Mass General Hospital (MGH) & Brigham and Women's Hospital (BWH) Center for Clinical Data Science is using DGX Station to power generative adversarial networks (GANs) that create synthetic brain MRI images, enabling the team to train their neural network with significantly less data. DGX Station serves as a dedicated AI resource to ensure their radiologists can keep moving projects forward.
Swiss Federal Railways (SBB) operates 15,000 trains that provide 1.2 million rides per day. The power of DGX Station enabled more accurate, automated fault detection in railway tracks and reduced the time needed for onsite inspections. The optimized AI software on DGX Station lets SBB's engineers focus on gathering the right data instead of testing and configuring components.
Avitas Systems uses AI-powered robots that detect corrosion, leaks, and other defects imperceptible to the human eye with incredible accuracy, and that can go places unsafe for humans. The company develops its deep neural networks on NVIDIA DGX servers in the data center and easily extends them to NVIDIA DGX Station in the field, running inference close to where the data is created.
Contact Us To Learn More