NVIDIA Blackwell Ultra DGX SuperPOD
Discover the NVIDIA Blackwell Ultra DGX SuperPOD, a turnkey AI supercomputer that lets enterprises build and scale AI factories with ease.
NVIDIA unveiled NVIDIA DGX SuperPOD, its most advanced enterprise AI infrastructure, built with NVIDIA Blackwell Ultra GPUs and offering businesses across industries AI factory supercomputing for state-of-the-art agentic AI reasoning.
To boost token production for AI applications, enterprises can use the latest NVIDIA DGX GB300 and NVIDIA DGX B300 systems, connected with NVIDIA networking, to build DGX SuperPOD AI supercomputers that deliver FP4 precision and faster AI reasoning.
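To make the FP4 claim concrete, here is a minimal back-of-envelope sketch, not drawn from the announcement, that estimates how much weight memory a hypothetical 70B-parameter model needs at different precisions. The model size and the simple bytes-per-parameter figures are illustrative assumptions.

```python
# Back-of-envelope weight-memory estimate for a hypothetical 70B-parameter model
# at different precisions. Illustrative only; real deployments also need memory
# for the KV cache, activations, and framework overhead.

PARAMS = 70e9  # hypothetical model size, in parameters

BYTES_PER_PARAM = {
    "FP16/BF16": 2.0,
    "FP8": 1.0,
    "FP4": 0.5,
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gigabytes = PARAMS * nbytes / 1e9
    print(f"{precision:>9}: ~{gigabytes:,.0f} GB of weights")

# FP4 roughly quarters the weight footprint versus FP16, which is one reason
# lower precision can raise tokens-per-second on the same hardware.
```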
AI factories provide purpose-built infrastructure for agentic, generative, and physical AI workloads, which can demand substantial compute for AI pretraining, post-training, and test-time scaling of applications running in production.
According to Jensen Huang, founder and CEO of NVIDIA: “AI is developing at light speed, and companies are racing to build AI factories that can scale to meet the processing demands of reasoning AI and inference time scaling. In the era of agentic and physical AI, the NVIDIA Blackwell Ultra DGX SuperPOD offers innovative AI supercomputing.”
DGX GB300 systems feature NVIDIA Grace Blackwell Ultra Superchips, comprising 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs, along with a rack-scale, liquid-cooled architecture designed for real-time agent responses on sophisticated reasoning models.
Air-cooled NVIDIA DGX B300 systems use the NVIDIA B300 NVL16 architecture to help data centers worldwide meet the computing demands of generative and agentic AI applications.
NVIDIA also introduced NVIDIA Instant AI Factory, a managed service featuring the Blackwell Ultra-powered NVIDIA DGX SuperPOD, to meet growing demand for advanced accelerated infrastructure. Equinix will be first to offer the new DGX GB300 and DGX B300 systems in its preconfigured liquid- or air-cooled AI-ready data centers across 45 countries worldwide.
NVIDIA DGX SuperPOD With DGX GB300 Powers Age of AI Reasoning
DGX SuperPOD with DGX GB300 systems can scale to tens of thousands of NVIDIA Grace Blackwell Ultra Superchips, connected via NVIDIA NVLink, NVIDIA Quantum-X800 InfiniBand, and NVIDIA Spectrum-X Ethernet networking, to supercharge training and inference for the most computationally demanding workloads.
DGX GB300 systems deliver up to 70x more AI performance than AI factories built on NVIDIA Hopper systems, along with 38TB of fast memory, providing unparalleled performance at scale for multistep reasoning in agentic AI and reasoning applications.
Fifth-generation NVLink technology connects the 72 Blackwell Ultra GPUs in each DGX GB300 system, creating a single, massive shared memory space via the NVLink Switch system.
Each DGX GB300 system includes 72 NVIDIA ConnectX-8 SuperNICs, delivering networking speeds of up to 800Gb/s, double the performance of the previous generation. Eighteen NVIDIA BlueField-3 DPUs work in tandem with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-X Ethernet to boost security, performance, and efficiency in large-scale AI data centers.
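The numbers above can be combined into a rough per-rack picture. The sketch below is a back-of-envelope calculation based only on the figures cited in this article; it assumes each ConnectX-8 SuperNIC runs at the full 800Gb/s and that the 38TB of fast memory is a pool shared across the rack's 72 GPUs rather than HBM alone.

```python
# Back-of-envelope figures for a single DGX GB300 system, using the numbers
# cited in this article. Assumptions: 800 Gb/s is the per-SuperNIC rate, and
# the 38TB "fast memory" figure is a combined pool shared across the rack.

GPUS = 72                 # Blackwell Ultra GPUs per DGX GB300 system
SUPERNICS = 72            # ConnectX-8 SuperNICs per system
NIC_GBPS = 800            # assumed Gb/s per SuperNIC
FAST_MEMORY_TB = 38       # fast memory per system

aggregate_tbps = SUPERNICS * NIC_GBPS / 1000          # total scale-out bandwidth, Tb/s
memory_per_gpu_gb = FAST_MEMORY_TB * 1000 / GPUS      # share of the pooled memory per GPU

print(f"Aggregate scale-out bandwidth: {aggregate_tbps:.1f} Tb/s")
print(f"Pooled fast memory per GPU:    ~{memory_per_gpu_gb:.0f} GB")
```

Under those assumptions, a single rack offers on the order of 57 Tb/s of scale-out bandwidth and roughly 500GB of pooled fast memory per GPU.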
DGX B300 Systems Accelerate AI for Every Data Center
The NVIDIA DGX B300 system is an AI infrastructure platform that brings energy-efficient generative AI and AI reasoning to every data center.
Powered by NVIDIA Blackwell Ultra GPUs, DGX B300 systems deliver 11x faster AI performance for inference and 4x faster training compared with the Hopper generation.
Each system provides 2.3TB of HBM3e memory and includes eight NVIDIA ConnectX-8 SuperNICs and two BlueField-3 DPUs for advanced networking.
NVIDIA Software Accelerates AI Development and Deployment
NVIDIA also unveiled NVIDIA Mission Control, AI data center operations and orchestration software for Blackwell-based DGX systems that lets businesses automate the management and operations of their infrastructure.
NVIDIA DGX systems support the NVIDIA AI Enterprise software platform for building and deploying enterprise-grade AI agents. This includes NVIDIA NIM microservices, such as the recently announced NVIDIA Llama Nemotron open reasoning model family, as well as NVIDIA AI Blueprints, frameworks, libraries, and tools for orchestrating and optimizing the performance of AI agents.
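As a rough illustration of how an application might call such a deployed model, here is a minimal sketch that uses the OpenAI-compatible API that NIM microservices expose. The endpoint URL and model identifier are placeholder assumptions, not values from this announcement; substitute those of your own deployment.

```python
# A minimal sketch of querying a deployed NIM microservice through its
# OpenAI-compatible API. The endpoint URL and model id below are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
    api_key="not-used",                   # local NIM deployments typically ignore this
)

response = client.chat.completions.create(
    model="nvidia/llama-3.3-nemotron-super-49b-v1",  # hypothetical Llama Nemotron model id
    messages=[
        {"role": "user", "content": "Summarize the steps to schedule a multi-node training job."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```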
NVIDIA Instant AI Factory to Meet Infrastructure Demand
NVIDIA Instant AI Factory provides businesses with an Equinix managed service featuring the Blackwell Ultra-powered NVIDIA DGX SuperPOD and NVIDIA Mission Control software.
The service will eliminate months of pre-deployment infrastructure preparation by giving organizations access to fully provisioned, intelligence-generating AI factories in dedicated Equinix facilities worldwide, optimized for state-of-the-art model training and real-time reasoning workloads.