ASUS has recently unveiled the ESC N8-E11, its most advanced HGX H100 eight-GPU server, designed for maximum data-center performance in AI infrastructure. The server pairs dual-socket Intel Xeon processors with an NVIDIA HGX H100 eight-GPU baseboard, enabling large-scale AI model training and high-performance computing (HPC) workloads. ASUS collaborated with Taiwan Web Service (TWS) and ASUS Cloud to provide server design, infrastructure, and AI software capabilities.
The ESC N8-E11 server features a dedicated one-GPU-to-one-NIC topology, supporting up to eight network interface cards (NICs) alongside its eight GPUs. This configuration delivers the highest throughput for compute-intensive workloads. The server's modular design reduces cable usage, simplifying system assembly and cable routing while improving airflow for optimized thermal performance.
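To illustrate the one-GPU-to-one-NIC idea, here is a minimal sketch of how such an affinity map might be expressed in software. The device names (`mlx5_0` and so on) are hypothetical placeholders, not details from the announcement:

```python
def pair_gpus_with_nics(num_gpus: int, num_nics: int) -> dict:
    """Map GPU index i to NIC i, assuming a balanced one-to-one layout.

    In a one-GPU-to-one-NIC topology, each GPU sends and receives
    traffic through its own dedicated NIC rather than sharing one.
    NIC names here are illustrative placeholders.
    """
    if num_gpus != num_nics:
        raise ValueError("one-to-one topology requires equal GPU and NIC counts")
    return {gpu: f"mlx5_{gpu}" for gpu in range(num_gpus)}

# Eight GPUs, eight NICs, as in the ESC N8-E11 configuration.
mapping = pair_gpus_with_nics(8, 8)
print(mapping[0])  # GPU 0 is paired with its own NIC
```

In practice this kind of affinity is discovered from the PCIe topology (for example via `nvidia-smi topo -m`) rather than assigned by index, but the end result is the same: no two GPUs contend for the same network path.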
The ESC N8-E11 incorporates fourth-generation NVLink and NVSwitch technology, along with NVIDIA ConnectX-7 SmartNICs, enabling GPUDirect RDMA and GPUDirect Storage with NVIDIA Magnum IO and NVIDIA AI Enterprise software. These features accelerate AI and data science development. The server's two-level GPU and CPU sled design enhances thermal efficiency, scalability, and overall performance. It is also compatible with direct-to-chip (D2C) liquid cooling, reducing the data center's power usage effectiveness (PUE).
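PUE is simply the ratio of total facility power to the power consumed by IT equipment, so a lower value means less energy spent on cooling and other overhead. A quick sketch with hypothetical numbers (not figures from ASUS) shows why liquid cooling drives PUE down:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to compute; real data centers
    are above that because of cooling, power conversion, and lighting.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical example: the same 1000 kW of IT load, with liquid cooling
# cutting the cooling overhead from 500 kW to 150 kW.
print(pue(1500.0, 1000.0))  # air-cooled: 1.5
print(pue(1150.0, 1000.0))  # liquid-cooled: 1.15
```

The specific wattages above are invented for illustration; the takeaway is that direct-to-chip cooling removes heat more efficiently than air, shrinking the numerator and thus the PUE.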
ASUS also offers a range of PCIe GPU servers suitable for various AI training needs. These servers, available with four to eight GPUs, are based on both Intel and AMD platforms. Certain models are optimized for NVIDIA OVX, making them ideal for rendering and digital-twin applications.
In summary, ASUS's ESC N8-E11 server, built on Intel Xeon processors and the NVIDIA HGX H100 platform, delivers exceptional AI infrastructure performance in data centers. It features a dedicated GPU-to-NIC topology, supports eight GPUs and eight NICs, and offers high throughput for compute-intensive workloads. ASUS's broader server portfolio includes PCIe GPU servers suitable for different AI training requirements.