ASUS RS501A-E12-RS12U and Ascent GX10 make a powerful statement at GTC 2025, redefining high-performance computing.
At the forefront of AI innovation, ASUS also offers the newest AI servers in the Blackwell and HGX family lineup. These include the ASUS ESC NB8-E11 with NVIDIA HGX B200 8-GPU, the ASUS ESC N8-E11V with NVIDIA HGX H200, the ASUS ESC8000A-E13P/ESC8000-E12P supporting the NVIDIA RTX PRO 6000 Blackwell Server Edition on the MGX architecture, and the ASUS XA NB3I-E12 powered by NVIDIA HGX B300 NVL16. Combined with the NVIDIA AI Enterprise and NVIDIA Omniverse platforms, ASUS is well positioned to deliver complete infrastructure solutions that help customers shorten time to market, underscoring its commitment to advancing the adoption of AI across industries.
ASUS AI POD with NVIDIA GB300 NVL72
ASUS AI POD harnesses the enormous power of the NVIDIA GB300 NVL72 server platform to deliver outstanding processing capability, enabling businesses to tackle demanding AI problems with ease. Built on NVIDIA Blackwell Ultra, the GB300 NVL72 leads the next era of AI with breakthrough performance from stronger compute, more memory, and high-speed networking.
With 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs arranged in a rack-scale architecture, it delivers more AI FLOPS and up to 40TB of fast memory per rack. Additional features include support for trillion-parameter LLM inference and training, SXM7 and SOCAMM modules designed for serviceability, and integration with the NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet networking platforms. This launch marks a substantial advancement in AI infrastructure, giving customers a dependable, scalable solution that keeps pace with their changing needs.
ASUS has demonstrated experience building NVIDIA GB200 NVL72 infrastructure from the ground up. Also on display is the ASUS RS501A-E12-RS12U, which uses a software-defined storage (SDS) architecture to maximize processing efficiency. This robust SDS server complements the NVIDIA GB200 NVL72 and effectively lowers latency for data training and inferencing. From hardware to cloud-based applications, ASUS offers a wide range of services, including architecture design, advanced cooling solutions, rack installation, large-scale validation and deployment, and AI platforms. Drawing on this broad experience, ASUS helps companies achieve AI infrastructure excellence and shorten time to market.
RS501A-E12-RS12U
This AMD EPYC 9005 single-processor 1U server supports up to 24 DIMMs, four NVMe drives, three PCIe 5.0 slots, two M.2 slots, OCP 3.0, two single-slot GPUs, and ASUS ASMB11-iKVM.
Important Features
- Powerful performance: driven by AMD EPYC 9005 processors with up to 192 Zen 5c cores, 12 memory channels, DDR5 up to 5200 MHz, and a maximum TDP of 360 W
- Scale-up storage and expansion design: up to 12 all-flash NVMe drives on the front panel, four NVMe drives in the center bay, and three PCIe 5.0 slots for added bandwidth and future upgrades
- Flexible networking module design: on-board LAN and OCP 3.0 modules, plus a PCIe 5.0 slot on the rear panel for faster connectivity
- Sophisticated air cooling: a heatsink with enhanced volume air cooling (EVAC) supports processors with higher TDPs
- Improved IT-infrastructure management: ASUS Control Center IT-management software, ASUS ASMB11-iKVM remote management with the ASPEED AST2600, and a hardware-level Root-of-Trust solution
- Ready for AI and HPC workloads: supports up to two single-slot GPUs
GPU servers for heavy generative AI workloads
At GTC 2025, ASUS is also showcasing a range of NVIDIA-certified servers that support workflows and applications built with the NVIDIA AI Enterprise and Omniverse platforms. These flexible models enhance the performance of generative-AI applications by enabling smooth data processing and computation.
The 10U ASUS ESC NB8-E11 delivers unparalleled AI performance thanks to its NVIDIA Blackwell HGX B200 8-GPU configuration. Blackwell Ultra offers revolutionary performance for AI reasoning, agentic AI, and video-inference applications, meeting the evolving demands of every data center. The ASUS XA NB3I-E12 features HGX B300 NVL16, with enhanced AI FLOPS, 2.3TB of HBM3e memory, and integration with the NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet networking platforms.
Finally, the ASUS ESC N8-E11V is a 7U dual-socket server with eight NVIDIA H200 GPUs. It supports both air-cooled and liquid-cooled configurations and is built with cutting-edge components engineered for unprecedented thermal efficiency, scalability, and performance.
Scalable servers to master AI inference optimization
The ASUS ESC8000 series integrates the latest NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, offering server and edge AI options for AI inferencing. The ASUS ESC8000-E12P is a high-density 4U server that accommodates eight dual-slot high-end NVIDIA H200 GPUs and supports the NVIDIA AI Enterprise and Omniverse software stack. It is also fully compatible with the NVIDIA MGX architecture, ensuring rapid large-scale deployment and flexible scalability. Likewise, the ASUS ESC8000A-E13P, a 4U NVIDIA MGX server, supports eight dual-slot NVIDIA H200 GPUs and offers smooth integration, optimization, and scalability for modern data centers and dynamic IT environments.
ASUS Ascent GX10 AI supercomputer
ASUS also unveiled the ASUS Ascent GX10, a compact, revolutionary AI supercomputer. Powered by the cutting-edge NVIDIA GB10 Grace Blackwell Superchip, it delivers 1,000 AI TOPS of performance for demanding workloads. With a 20-core Arm CPU, 128GB of memory, and a Blackwell GPU, the Ascent GX10 can support AI models with up to 200 billion parameters. This groundbreaking tool puts the capabilities of a petaflop-scale AI supercomputer directly in the hands of developers, AI researchers, and data scientists worldwide.
Additionally, ASUS IoT's edge AI computers demonstrated their sophisticated edge-AI inference capabilities at GTC. The ASUS IoT PE2100N edge AI computer delivers up to 275 TOPS and is powered by NVIDIA Jetson AGX Orin system-on-modules and the NVIDIA JetPack SDK, making it ideal for generative AI, vision-language model (VLM), and large language model (LLM) applications. These applications make it possible to interact with photos and videos in natural language to recognize objects or identify events. Its low power consumption and optional support for 4G/5G and out-of-band technologies make it well suited to robotics, in-vehicle applications, and smart cities.
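For a sense of what such a natural-language image query looks like in practice, the sketch below is purely illustrative and is not ASUS IoT or NVIDIA JetPack code: it assumes an open LLaVA-style vision-language checkpoint from Hugging Face Transformers, and the model name, prompt format, and file name are assumptions.

```python
# Illustrative only: ask a vision-language model a natural-language question
# about an image, e.g. to recognize objects or spot an unusual event.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed open VLM checkpoint, not ASUS/NVIDIA software
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("camera_frame.jpg")  # hypothetical frame from an edge camera
prompt = "USER: <image>\nWhat objects are visible, and is anything unusual happening? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```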
Furthermore, the ASUS IoT PE8000G rugged edge AI GPU computer excels at pre-processing, perception AI, and real-time AI inference, and supports two 450W NVIDIA RTX PCIe graphics cards. With flexible 8-48V DC power input, ignition power control, and stable operation from -20°C to 60°C, the PE8000G is ideal for computer vision, autonomous vehicles, intelligent video analytics, and generative AI in challenging conditions.
Ascent GX10 AI Supercomputer Powered by NVIDIA GB10 Grace Blackwell Superchip
ASUS unveils the ASUS Ascent GX10, a groundbreaking AI supercomputer powered by the cutting-edge NVIDIA GB10 Grace Blackwell Superchip. It gives developers, AI researchers, and data scientists worldwide direct access to the capabilities of a petaflop-scale AI supercomputer from their desks.
As generative-AI models grow larger and more complex, local development faces mounting challenges: prototyping, tuning, and inferencing large models demand significant memory and computational power. The Ascent GX10 is built to meet these demands, giving developers a robust, cost-effective desktop option for AI development.
Unparalleled AI performance
The new ASUS Ascent GX10 delivers up to 1,000 TOPS of AI performance to power large AI workloads. With 128GB of coherent unified system memory, developers can experiment with, fine-tune, and run inference on the latest generation of reasoning AI models with up to 200 billion parameters.
Powered by NVIDIA Grace Blackwell technology
At the heart of the new ASUS Ascent GX10 is the cutting-edge NVIDIA GB10 Superchip, built on the Grace Blackwell architecture and optimized for a compact form factor. Its powerful Blackwell GPU, with fifth-generation Tensor Cores and FP4 support, delivers up to 1,000 TOPS of AI performance, while its 20-core Grace Arm CPU accelerates real-time inferencing and model tuning by speeding up data preparation and orchestration. The GB10 Superchip uses NVIDIA NVLink-C2C to provide a coherent CPU+GPU memory model with five times the bandwidth of PCIe 5.0.
Handling cutting-edge large-parameter generative-AI models
With 128GB of unified system memory and support for the FP4 data format, the ASUS Ascent GX10 can handle AI models with up to 200 billion parameters. This lets developers prototype, fine-tune, and run inference on the latest AI reasoning models with up to 70 billion parameters right on the desktop. Even larger models, such as Llama 3.1 with 405 billion parameters, can be handled by connecting two GX10 systems over their built-in NVIDIA ConnectX network interface cards (NICs).
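As a rough, illustrative check of those figures (weights only, ignoring KV cache, activations, and runtime overhead): FP4 stores each weight in half a byte, so a 200-billion-parameter model needs roughly 100GB, which fits within a single GX10's 128GB of unified memory, while Llama 3.1 405B needs just over 200GB and therefore calls for the combined 256GB of two linked GX10 systems.

```python
# Back-of-the-envelope FP4 memory estimate (weights only; KV cache, activations,
# and framework overhead are ignored, so real headroom is smaller).
FP4_BYTES_PER_PARAM = 0.5  # 4 bits per weight

def fp4_weights_gb(params_billion: float) -> float:
    """Approximate FP4 weight footprint in gigabytes."""
    return params_billion * FP4_BYTES_PER_PARAM  # 1e9 params * 0.5 B / 1e9 B per GB

for name, params_b, memory_gb in [
    ("200B reasoning model on one GX10", 200, 128),
    ("Llama 3.1 405B across two linked GX10s", 405, 256),
]:
    print(f"{name}: ~{fp4_weights_gb(params_b):.0f} GB of FP4 weights vs {memory_gb} GB unified memory")
```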
Local development, scalable deployment
The ASUS Ascent GX10 gives developers a robust, affordable platform for prototyping models and AI applications, freeing up valuable computational resources in their clusters for training and deploying production models. With NVIDIA AI software, Ascent GX10 customers can move their models from the desktop to NVIDIA DGX Cloud or any accelerated cloud or data-center infrastructure with almost no code changes, simplifying prototyping, fine-tuning, and iteration.
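As a hypothetical illustration of that portability (not ASUS- or NVIDIA-documented code): if the model is served behind an OpenAI-compatible endpoint, as NVIDIA NIM microservices commonly expose, the application code can stay the same and only the endpoint URL and credentials change between the desktop and a cloud or data-center deployment. The port, environment-variable names, and model name below are assumptions.

```python
# Hypothetical sketch: the same client code talks to a model served locally on an
# Ascent GX10 or to a cloud deployment; only base_url and the API key differ.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url=os.getenv("LLM_BASE_URL", "http://localhost:8000/v1"),  # assumed local endpoint
    api_key=os.getenv("LLM_API_KEY", "not-needed-locally"),
)

response = client.chat.completions.create(
    model=os.getenv("LLM_MODEL", "llama-3.1-70b-instruct"),  # illustrative model name
    messages=[{"role": "user", "content": "Summarize why unified memory helps local AI development."}],
)
print(response.choices[0].message.content)
```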