The ASUS AI server lineup includes everything from the ASUS EG500-E11 to the ESC A8A-E13.
Unleash the potential of every token with ASUS AI
AI infrastructure solutions
AI is transforming the world at an unprecedented pace, with far-reaching impact. Staying competitive demands proactive management, and ASUS AI infrastructure solutions are critical for navigating this changing environment. With an extensive array of AI solutions, including AI servers, integrated racks, the ESC AI POD for large-scale computing, and, most crucially, cutting-edge software platforms tailored to any workload, ASUS gives you the advantage to stay ahead in AI.
Why select ASUS for AI servers?
ASUS excels at taking a comprehensive approach, integrating state-of-the-art hardware and software so that users can accelerate their research and development. By fusing technical excellence with practical solutions, ASUS AI stands at the forefront of advances that redefine what is possible in AI-driven industries and everyday experiences.
 | ASUS | Competitors
Product offering | ASUS designs complete AI server solutions, spanning software and hardware, covering Intel, NVIDIA and AMD platforms on x86 or Arm architectures. | Restricted to server hardware only.
Design capability | ASUS has resources on tap to respond quickly to almost any requirement, with top-tier components, strong ecosystem partnerships, feature-rich designs, and deep in-house expertise. | Limited support for bespoke solutions, particularly with regard to software.
Software ecosystem | ASUS engages across the expansive and intricate field of AI, leveraging in-house software expertise and forging partnerships with software and cloud providers to offer holistic solutions. | Lack their own software-development resources.
Complete after-sales service | ASUS prioritizes customer satisfaction with consultations and customized support, plus software services such as remote management via ASUS Control Center and an intuitive cloud-service user portal. | Provide product sales services only.
Select the AI server that best suits your needs and budget
Could AI help with some of your current problems? Keen to build company AI but concerned about the cost and upkeep, and unsure how to proceed? ASUS AI offers complete AI infrastructure solutions for a variety of workloads and your unique requirements. These cutting-edge AI servers and software solutions are designed to handle complex tasks such as large language models (LLMs), generative AI, predictive AI, deep learning, machine learning, and AI supercomputing. Choose ASUS to perform intricate computations and process large datasets efficiently.
Edge AI
As manufacturing enters the Industry 4.0 era, advanced control systems are needed to integrate edge AI and improve processes holistically. ASUS edge AI server solutions enable real-time processing at the device level, increasing the security and efficiency of IoT applications while lowering latency.
ASUS EG500-E11
EG500-E11-RS4-R
Improved networking, graphics, and storage speeds
- Up to 21% more general-purpose performance per watt with 5th Gen Intel Xeon Scalable processors, greatly enhancing AI inference and training
- Short-depth chassis with rear-access layout for space-constrained edge deployments
- Scalable expansion with support for two front external SATA/NVMe bays, two rear internal SATA bays with a short PSU, and two optional E1.S SSDs
- Redundant AC power supply for server rooms or data centres
** Only two internal SATA bays are available in 650 W / short-PSU mode.
EG520-E11-RS6-R
Increased productivity, enhanced GPU and network capabilities
- Up to 21% more general-purpose performance per watt with 5th Gen Intel Xeon Scalable processors, greatly enhancing AI inference and training
- Short-depth chassis with rear-access layout for space-constrained edge deployments
- Scalable expansion with support for two FHFL GPU cards, two front SATA/NVMe bays, and four rear SATA bays
- Designed for dependable operation at ambient temperatures from 0 to 55°C
- Redundant AC power supply for server rooms or data centres
** GPU-equipped SKUs are rated for 0 to 35°C; non-GPU SKUs for 0 to 55°C.
ASUS AI server
AI Inference
The ultimate challenge for painstakingly trained machine-learning models is making sense of previously unseen data, which demands handling massive data volumes and overcoming hardware constraints. With strong data-transfer capabilities, ASUS AI servers efficiently run real-time data through pre-trained models to produce accurate predictions.
RS720-E11-RS24U
- Scalability and expansion flexibility to accommodate the full spectrum of applications
- Up to 21% more general-purpose performance per watt with 5th Gen Intel Xeon Scalable processors, greatly enhancing AI inference and training
- Up to 24 all-flash NVMe drives on the front panel and four PCIe 5.0 slots for increased bandwidth and system upgrades
- Supports an OCP 3.0 module or a PCIe LAN card for hyperscale data-centre connectivity
RS720A-E12-RS24U
Performance, efficiency, and manageability for multitasking workloads
- Powered by AMD EPYC 9004 processors with 12 memory channels, up to 128 Zen 4c cores, DDR5 at up to 4800 MHz, and support for a maximum TDP of 400 W per socket
- Nine PCIe 5.0 slots and up to 24 all-flash NVMe drives on the front panel for increased bandwidth and system upgrades
- Optional dual 10 Gbps or quad 1 Gbps on-board LAN modules, plus a rear-panel OCP 3.0 module with a PCIe 5.0 slot, for faster connectivity
- Includes an expanded-volume air-cooling (EVAC) heatsink for TDP-heavy processors
- Supports up to three dual-slot GPUs, such as Xilinx, AMD Instinct, and NVIDIA A100 and A40 cards
AI Fine-Tuning
Engineers and developers commonly aim to optimise large language models (LLMs) for better performance and customisation, but they frequently run into problems such as deployment errors. Overcoming these issues requires robust AI server solutions that ensure smooth, efficient model deployment and operation.
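To see why fine-tuning drives such heavy hardware requirements, a rough back-of-the-envelope estimate helps. The figures below are illustrative assumptions, not ASUS specifications: FP16 weights and gradients at 2 bytes per parameter each, plus Adam optimizer states in FP32 (two states, 8 bytes per parameter), with activation memory ignored.

```python
# Back-of-the-envelope memory estimate for full fine-tuning of an LLM.
# Assumptions (illustrative only): FP16 weights (2 B/param), FP16 gradients
# (2 B/param), Adam optimizer states in FP32 (2 states x 4 B = 8 B/param).
# Activation memory is ignored, so real usage is higher.

def full_finetune_gib(params: float) -> float:
    """Approximate GiB needed for weights + gradients + optimizer states."""
    bytes_per_param = 2 + 2 + 8
    return params * bytes_per_param / 2**30

# A hypothetical 7-billion-parameter model:
print(f"{full_finetune_gib(7e9):.1f} GiB")  # ~78 GiB before activations
```

Even under these optimistic assumptions, a mid-sized model already exceeds a single GPU's memory, which is why multi-GPU servers with fast interconnects matter for fine-tuning.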
ESC NM1-E1
2U high-performance server powered by the NVIDIA Grace Hopper Superchip with NVIDIA NVLink-C2C technology
- NVIDIA GH200 Grace Hopper Superchip with a 72-core NVIDIA Grace CPU and NVIDIA NVLink-C2C technology delivering 900 GB/s of bandwidth
- Shortens applications' time to market
- Enhanced thermal design efficiently removes excess heat
- Includes the many frameworks and tools offered by NVIDIA AI Enterprise
- Simplifies maintenance to maximise effectiveness and reduce downtime
- Ready for a variety of applications across Omniverse, HPC, data analytics, and AI-driven data centres
ESC NM2-E1
2U NVIDIA MGX GB200 NVL2 server designed for generative AI and HPC
- MGX server: a modular Arm-based system aimed at enterprise customers
- NVIDIA's modular reference architecture expedites development and speeds the launch of new products
- Dual NVIDIA Grace Blackwell GB200 NVL2 superchips, comprising two Grace CPUs and two Blackwell GPUs
- NVLink and NVLink-C2C interconnects link CPUs and GPUs for improved AI computing performance
AI Training
Whether you are engaged in data analysis, AI application development, or AI research, ASUS AI servers, known for exceptional performance and the scalability to handle complex neural-network training, greatly accelerate training and fully unlock the potential of AI applications.
ESC8000-E11
Accelerate generative AI and LLM workloads
- Up to 21% more general-purpose performance per watt with 5th Gen Intel Xeon Scalable processors, greatly enhancing AI inference and training
- Up to eight dual-slot active or passive GPUs, an NVIDIA NVLink bridge, and NVIDIA BlueField DPU compatibility enable performance scaling
- Supports up to four 3000 W Titanium redundant power supplies for continuous operation, with independent CPU and GPU airflow tunnels for thermal optimisation
- Eleven PCIe 5.0 slots and eight front-panel Tri-Mode NVMe/SATA/SAS drive bays for increased bandwidth and system upgrades
- Rear-panel PCIe 5.0 slot and optional OCP 3.0 module for faster connectivity
ESC8000A-E12
Accelerate generative AI and LLM workloads
- Powered by AMD EPYC 9004 processors with 12 memory channels, up to 128 Zen 4c cores, DDR5 at up to 4800 MHz, and support for a maximum TDP of 400 W per socket
- Up to eight dual-slot active or passive GPUs, an NVIDIA NVLink bridge, and NVIDIA BlueField DPU compatibility enable performance scaling
- Supports up to four 3000 W Titanium redundant power supplies for continuous operation, with independent CPU and GPU airflow tunnels for thermal optimisation
- Eleven PCIe 5.0 slots and eight front-panel Tri-Mode NVMe/SATA/SAS drive bays for bandwidth and system upgrades
Artificial Intelligence
ASUS AI servers equipped with eight GPUs master large-scale dataset management, intricate computations, and intensive AI model training. Purpose-built for AI, machine learning, and high-performance computing (HPC), these servers deliver excellent performance and dependability.
ESC I8-E11
Dedicated deep-learning inference and training
- Powered by the latest 5th Gen Intel Xeon Scalable processors
- Accommodates eight Intel Gaudi 3 AI OCP Accelerator Module (OAM) mezzanine cards
- 24 industry-standard 200 GbE RoCE RDMA NICs integrated on each Gaudi 3
- Modular design with reduced cabling speeds up assembly and improves thermal optimisation
- High power efficiency and a redundant 3000 W 80 PLUS power supply
ESC A8A-E13
Optimised for superior AI and HPC performance
- A 7U AMD eight-GPU server designed for generative AI and HPC, built on two next-generation AMD platforms
- 5.2 TB/s of bandwidth and an industry-leading 192 GB of HBM capacity for large AI models and HPC workloads
- Direct GPU-to-GPU interconnect delivering 896 GB/s of bandwidth for efficient scaling
- A dedicated one-GPU-to-one-NIC topology supports up to eight NICs for maximum throughput in compute-intensive workloads
- Modular design with reduced cabling speeds up assembly and improves thermal optimisation
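To put the 192 GB HBM capacity quoted above in context, a quick sketch can check whether a model's FP16 weight footprint fits on a single accelerator. The model sizes and the 2-bytes-per-FP16-parameter figure are illustrative assumptions, not ASUS benchmarks:

```python
# Quick feasibility check: does a model's FP16 weight footprint fit in the
# 192 GB of HBM per accelerator quoted above? Model sizes are hypothetical
# examples; 1 GB = 1e9 bytes, 2 bytes per FP16 parameter.

HBM_GB = 192  # per-accelerator HBM capacity from the spec above

def weights_gb(params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GB for a given parameter count."""
    return params * bytes_per_param / 1e9

for p in (70e9, 180e9):
    gb = weights_gb(p)
    print(f"{p/1e9:.0f}B params -> {gb:.0f} GB, fits in one accelerator: {gb <= HBM_GB}")
```

In practice, KV caches and activations add further memory on top of weights, which is one reason multi-GPU interconnect bandwidth matters even when the weights alone would fit.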
ESC N8-E11 with NVIDIA HGX H200
- The best choice for demanding AI workloads, combining 5th Gen Intel Xeon Scalable processors with NVIDIA HGX architecture
- Direct GPU-to-GPU interconnect via NVLink delivering 900 GB/s of bandwidth for efficient scaling
- A dedicated one-GPU-to-one-NIC topology supports up to eight NICs for maximum throughput in compute-intensive workloads
- Modular design with reduced cabling speeds up assembly and improves thermal optimisation
- Advanced NVIDIA technologies deliver the full power of NVIDIA GPUs, BlueField-3, NVLink, NVSwitch, and networking
AI Supercomputing
AI supercomputers are built from specialised networks, many processors, vast storage, and precisely tuned hardware. ASUS AI provides turnkey solutions, expertly managing every facet of building a supercomputer, from cabinet installation and data-centre setup to extensive testing and onboarding. Thorough testing guarantees excellent performance.
ASUS ESC AI POD
NVIDIA GB200 NVL72
- AI beyond imagination, unleashed
- The first Arm-based rack-scale system from ASUS, equipped with fifth-generation NVLink technology and the most powerful NVIDIA GB200 superchip
- Connecting 36 Grace CPUs and 72 Blackwell GPUs in a single rack enables up to 30 times faster real-time LLM inference
ASUS delivers enhanced AI software solutions in as little as eight weeks
Have you purchased an AI server but are unsure how to use intuitive AI software to maximise performance and streamline management? ASUS AI and TWSC integrate AI Foundry Services with cutting-edge software tools to simplify building, deploying, and managing AI applications. The ASUS team can complete conventional data-centre software solutions, such as cluster deployment, billing systems, generative AI tools, and the latest OS verification, security updates, service patches, and more, in as little as eight weeks*.
ASUS AI also offers essential software-stack verification and acceptance services to guarantee that servers perform flawlessly in real-world settings. This phase verifies that all requirements are met and ensures smooth integration within each client's unique IT environment. To guarantee a seamless start and operation, the process begins with extensive checks of power, network, GPU cards, voltage, and temperature. Comprehensive testing before handover finds and fixes problems, ensuring data centres run dependably even at full load.