Monday, May 27, 2024

NVIDIA HGX: ESC N8A-E12, a powerful 7U dual-socket server


ASUS and its affiliate Taiwan Web Service Corporation (TWSC) unveiled the GenAI POD Solution, a comprehensive strategy to meet the soaring demand for AI supercomputing. ASUS, a leader in the AI revolution, will demonstrate its ESC N8A-E12 and RS720QN-E11-RS24U NVIDIA HGX GPU servers alongside the NVIDIA MGX-powered ESC NM1-E1 and ESR1-511N-M1. Thanks to TWSC’s unique resource management platform and software stacks, these solutions can readily handle a variety of generative AI and large language model (LLM) training workloads. The integrated offerings combine solid software platforms with cutting-edge thermal designs that can be customized to an organization’s needs, giving clients complete data centre solutions for success in their AI endeavors.


A customized AI system to address particular requirements: the ASUS ESC NM1-E1, built on the NVIDIA MGX modular architecture, is powered by an NVIDIA GH200 Grace Hopper Superchip. With its 72 Arm Neoverse V2 CPU cores and NVIDIA NVLink-C2C technology, this combination delivers outstanding performance and efficiency. It is therefore a great option for AI-driven data centres, HPC, data analytics, and NVIDIA Omniverse applications, offering revolutionary gains in memory capacity and performance.

The ASUS ESR1-511N-M1 server, another highlight of the ASUS showcase, is made to support large-scale AI and HPC applications by enabling deep-learning (DL) training and inference, data analytics, and high-performance computing, harnessing the power of the NVIDIA GH200 Grace Hopper Superchip. In line with ESG trends, the ESR1-511N-M1 features a lower power usage effectiveness (PUE) and an improved thermal solution for excellent performance. Its adaptable design combines three PCI Express (PCIe) 5.0 x16 slots with up to four E1.S local drives via NVIDIA BlueField-3 in a 1U chassis with maximum compute density, allowing quick and easy data transfers.


ASUS ESC NM1-E1 Specs

Processor: 72-core NVIDIA Grace Hopper GH200 CPU (Arm-based)
Memory: Not specified
Interconnect: NVIDIA NVLink-C2C (900 GB/s bandwidth)
Cooling: Air cooling optimized for heat dissipation
Software: Full NVIDIA software stack, including NVIDIA AI Enterprise
Management: ASUS-exclusive tool-less design for efficient maintenance
Applications: AI-driven data centers, HPC, data analytics, and Omniverse
Form Factor: Not specified (likely a server blade)
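As a back-of-the-envelope illustration of what the 900 GB/s NVLink-C2C figure in the specs above implies, the sketch below estimates how long a bulk CPU-to-GPU transfer would take at that theoretical peak. The payload size is a hypothetical example, and real transfers would fall short of the peak rate.

```python
# Rough transfer-time estimate over a link of a given bandwidth.
# The 900 GB/s default matches the NVLink-C2C figure in the spec
# table; the payload size below is purely illustrative.
def transfer_seconds(payload_gb: float, bandwidth_gb_per_s: float = 900.0) -> float:
    if bandwidth_gb_per_s <= 0:
        raise ValueError("bandwidth must be positive")
    return payload_gb / bandwidth_gb_per_s

# e.g. moving a hypothetical 450 GB of model state at the peak rate:
print(f"{transfer_seconds(450.0):.2f} s")  # 0.50 s at the theoretical peak
```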

NVIDIA HGX servers

Enhancing AI with end-to-end eight-GPU H100 power across its optimized servers, data centre infrastructure, and AI software development capabilities, the ASUS ESC N8A-E12 is a powerful 7U dual-socket server that pairs eight NVIDIA H100 Tensor Core GPUs with dual AMD EPYC 9004 processors, and it is designed for generative AI. Its improved thermal solution guarantees peak efficiency and reduced PUE. Designed for advances in AI and data science, this potent HGX server has a dedicated one-GPU-to-one-NIC configuration that maximizes throughput for compute-intensive operations.

The ASUS RS720QN-E11-RS24U is a high-density server with an NVIDIA Grace CPU Superchip with NVIDIA NVLink-C2C technology, optimized for high-performance and compute-intensive workloads. With its compact design and capacity for four nodes within a 2U4N chassis, the RS720QN-E11-RS24U is an innovative solution that is perfect for data centres, web servers, virtualization clouds, and hyperscale environments. It also offers PCIe 5.0 compatibility and outstanding performance for dual-socket CPUs.

D2C cooling solution from ASUS

Direct-to-chip (D2C) cooling provides a simple and quick fix that makes use of existing infrastructure and enables quick deployment with lower PUE. The ASUS RS720QN-E11-RS24U allows for a variety of cooling options by supporting cool plates and manifolds. Furthermore, ASUS servers feature a rear-door heat exchanger that is compatible with common rack-server designs, meaning that just the rear door needs to be replaced in order to enable liquid cooling in the rack.

This eliminates the need to replace every rack. ASUS offers enterprise-grade comprehensive cooling solutions and is dedicated to minimizing data centre PUE, carbon emissions, and energy consumption to support the design and construction of greener data centres. The company works closely with industry-leading cooling solution providers to achieve this goal.
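The PUE metric referenced throughout is simply total facility power divided by IT equipment power, so lower cooling overhead translates directly into a lower ratio. The sketch below computes it with hypothetical load figures (the kW numbers are illustrative assumptions, not measured values for any ASUS system).

```python
# Power Usage Effectiveness (PUE): total facility power divided by
# IT equipment power. A value of 1.0 would mean every watt reaches
# the compute hardware; air-cooled facilities typically run higher.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical figures: the same 1,000 kW IT load with conventional
# air-cooling overhead vs. leaner direct-to-chip (D2C) overhead.
air_cooled = pue(total_facility_kw=1600.0, it_equipment_kw=1000.0)
d2c_cooled = pue(total_facility_kw=1200.0, it_equipment_kw=1000.0)

print(f"Air-cooled PUE: {air_cooled:.2f}")  # 1.60
print(f"D2C-cooled PUE: {d2c_cooled:.2f}")  # 1.20
```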

AI generative POD systems

TWSC has a wealth of experience in setting up and managing large-scale AI/HPC infrastructure for NVIDIA Partner Network cloud partners (NCP), using the FORERUNNER 1 and TAIWANIA-2 supercomputer series from the National Centre for High-performance Computing (NCHC). Furthermore, TWSC’s AI Foundry Service lets users tailor AI capacity to their unique needs by facilitating the rapid deployment of AI supercomputing and flexible model optimisation for AI 2.0 applications.

With quick rollouts and full end-to-end services, TWSC’s generative AI POD solutions provide enterprise-grade AI infrastructure while maintaining high availability and cybersecurity standards. ASUS products enable success stories in educational, scientific, and healthcare settings. All-inclusive cost-control features reduce OPEX and maximize power usage, making TWSC technology an appealing option for businesses looking for a generative AI platform that is dependable and sustainable.

ASUS RS720QN-E11-RS24U Specs

Form Factor: 2U4N (four nodes in a 2U chassis)
Processor: NVIDIA Grace CPU Superchip (Arm-based)
Operating System Support: Arm SystemReady certified
Memory: Not specified
Storage: Up to 24 all-flash NVMe drives on the front panel
Networking: Optional two 10 Gbps or four 1 Gbps LAN ports
Expansion: Four PCIe 5.0 slots
Cooling: Enhanced Volume Air Cooling (EVAC) heatsink
Other Features: PCIe 5.0 ready, scalable design

