Saturday, July 27, 2024

Explore NVIDIA Grace Hopper Superchips for AI


The worldwide AI conference will feature accelerated systems from 11 manufacturers, all built on NVIDIA Grace Hopper Superchips.
At NVIDIA GTC, the memory of software icon Grace Hopper will endure.

At the worldwide AI conference taking place March 18–21, advanced systems built on powerful processors named after the pioneering computer scientist will be on show, poised to transform computing.

System builders will show more than 500 servers in a variety of configurations across 18 racks, all equipped with NVIDIA GH200 Grace Hopper Superchips. Filling the MGX Pavilion at NVIDIA’s booth in the San Jose Convention Center, they will form the biggest display at the show.

NVIDIA Grace Hopper MGX Platform

The NVIDIA MGX blueprint makes it possible to build accelerated servers with any combination of GPUs, CPUs, and data processing units (DPUs) for a wide range of AI, high performance computing, and NVIDIA Omniverse applications. The modular reference design can be reused across workloads and product generations.

GTC attendees will get an up-close look at MGX models designed for enterprise, cloud, and telco-edge applications, including generative AI inference, recommenders, and data analytics.

NVIDIA GH200 Grace Hopper Superchips


The pavilion will showcase accelerated systems with single and dual GH200 Superchips in 1U and 2U chassis, linked with NVIDIA BlueField-3 DPUs and NVIDIA Quantum-2 400Gb/s InfiniBand networks over LinkX cables and transceivers.

Many of the systems have E1.S bays for nonvolatile storage, and they support industry standards for 19- and 21-inch rack enclosures.

Grace Hopper GH200 processors

  • Measuring 450 x 445 x 87 mm, ASRock RACK’s MECAI accelerates 5G and AI services in constrained spaces at the edge of telecom networks.
  • The ESC NM2N-E1, an ASUS MGX server, slides into a rack that holds up to 32 GH200 processors and supports air- or water-cooled nodes.
  • Foxconn offers a range of MGX systems, including a 4U model that holds up to eight NVIDIA H100 NVL PCIe Tensor Core GPUs.
  • GIGABYTE’s XH23-VG0-MGX provides plenty of storage with six 2.5-inch Gen5 NVMe hot-swappable bays and two M.2 slots.
  • Inventec’s systems use three distinct liquid cooling techniques and fit into 19- and 21-inch racks.
  • Lenovo offers a selection of 1U, 2U, and 4U MGX servers, some of which are capable of direct liquid cooling.
  • Pegatron’s air-cooled AS201-1N0 server includes a BlueField-3 DPU for hardware-accelerated software-defined networking.
  • A single QCT QoolRack may hold 16 of QCT’s QuantaGrid D74S-IU systems, each of which has two GH200 Superchips.
  • Supermicro offers a range of liquid- and air-cooled NVIDIA Grace Hopper Superchip systems, including the ARS-111GL-NHR with nine hot-swappable fans.
  • The 1U twin GH200 system from Wiwynn, the SV7200H, has a remotely controlled liquid cooling subsystem and a BlueField-3 DPU.
  • Wistron’s MGX servers are 4U GPU systems for AI inference and mixed workloads, supporting up to eight accelerators in a single system.

The new servers join three accelerated MGX systems unveiled at COMPUTEX in May: Supermicro’s ARS-221GL-NR using the Grace CPU, and QCT’s QuantaGrid S74G-2U and S74GM-2U powered by the GH200.

NVIDIA Grace Hopper Superchip Packs Two in One

  1. System architects are embracing the hybrid processor because of how much performance it packs.
  2. NVIDIA Grace Hopper Superchips pair a powerful NVIDIA H100 GPU with a high-performance, energy-efficient Grace CPU.
  3. The two share hundreds of gigabytes of memory over a fast NVIDIA NVLink-C2C interconnect.

As a result, the combined processor and memory complex can handle even today’s most demanding tasks, such as running large language models. It has the memory and speed needed to connect generative AI models to data sources through retrieval-augmented generation, or RAG, which can improve the models’ accuracy.
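
The sketch below illustrates the RAG pattern in its simplest form. It is a minimal, framework-agnostic example: the embed_fn and generate_fn functions are hypothetical stand-ins for a real embedding model and a real large language model, and the snippet is not tied to any NVIDIA library.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Illustrative only: embed_fn and generate_fn are hypothetical stand-ins
# for a real embedding model and a real large language model.
import numpy as np

def embed_fn(text: str) -> np.ndarray:
    # Toy embedding: hash words into a fixed-size bag-of-words vector.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def generate_fn(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would run the model here.
    return f"[answer conditioned on a prompt of {len(prompt)} characters]"

def rag_answer(question: str, documents: list[str], top_k: int = 2) -> str:
    # 1. Embed the corpus and the question.
    doc_vecs = np.stack([embed_fn(d) for d in documents])
    q_vec = embed_fn(question)
    # 2. Retrieve the most relevant documents (cosine similarity).
    scores = doc_vecs @ q_vec
    best = np.argsort(scores)[::-1][:top_k]
    context = "\n".join(documents[i] for i in best)
    # 3. Generate an answer grounded in the retrieved context.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate_fn(prompt)

if __name__ == "__main__":
    docs = [
        "GH200 pairs a Grace CPU with an H100 GPU over NVLink-C2C.",
        "MGX is a modular reference design for accelerated servers.",
        "Many MGX systems include E1.S bays for nonvolatile storage.",
    ]
    print(rag_answer("What does the GH200 pair together?", docs))
```

In a production system the retrieval step would query a vector database and the generation step would call a served model; the large shared memory described above is what lets both the model and sizable retrieved context live close to the GPU.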

GH200 Superchip Runs Up to 4x Faster

For workloads such as video streaming and online shopping recommendations, the GH200 Superchip also delivers up to 4x higher performance and better efficiency than pairing the H100 GPU with conventional CPUs.

When GH200 systems made their debut on the MLPerf industry benchmarks last November, they extended the already industry-leading performance of H100 GPUs, passing every data center inference test.

The GH200, also known as the NVIDIA Grace Hopper Superchip, is an accelerated CPU created especially for large-scale artificial intelligence (AI) and high-performance computing (HPC) applications.

Key Features of the NVIDIA Grace Hopper Superchip

  • Purpose: It was designed from the ground up to handle the giant datasets and complex problems found in AI and HPC.
  • Performance: The superchip delivers up to 10x higher performance than earlier alternatives when working with terabytes of data, helping users reach results faster.
  • Architecture: It combines the NVIDIA Grace CPU and Hopper GPU architectures with NVIDIA NVLink-C2C technology to create a CPU+GPU coherent memory model. With a single memory space, the CPU and GPU can exchange data more easily, which boosts the performance of AI and HPC applications (a minimal code sketch of this single-address-space idea follows this list).
  • Availability: The GH200 superchip is currently used in a number of NVIDIA products, such as the NVIDIA HGX platform.

In other words, the NVIDIA Grace Hopper Superchip is a powerful processor designed to tackle difficult jobs in scientific computing and artificial intelligence, offering notable speed gains along with a unified memory architecture for efficient data processing.
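
As a rough illustration of the single-address-space programming model that coherent CPU+GPU memory enables, the sketch below allocates CUDA managed memory through CuPy so that host and device code touch the same allocation. It assumes CuPy and a CUDA-capable GPU are available; GH200 adds hardware coherence over NVLink-C2C, so this generic managed-memory example is only an approximation of that platform’s behavior.

```python
# Sketch of the single-address-space idea behind coherent CPU+GPU memory,
# using CUDA managed memory through CuPy. Assumes CuPy and a CUDA-capable
# GPU are installed; GH200 adds hardware coherence over NVLink-C2C, but
# plain cudaMallocManaged works on other CUDA systems as well.
import ctypes
import cupy as cp
import numpy as np

n = 1 << 20
mem = cp.cuda.malloc_managed(n * np.float32().itemsize)  # one shared allocation

# GPU view and CPU view of the *same* memory, with no explicit copies.
gpu_view = cp.ndarray((n,), dtype=cp.float32, memptr=mem)
cpu_view = np.frombuffer(
    (ctypes.c_float * n).from_address(mem.ptr), dtype=np.float32
)

gpu_view[:] = cp.arange(n, dtype=cp.float32)  # written on the GPU
cp.cuda.Device().synchronize()                # make GPU writes visible to the host

print(cpu_view[:5])        # read by the CPU: [0. 1. 2. 3. 4.]
cpu_view[0] = 42.0         # CPU write lands in the same allocation
print(float(gpu_view[0]))  # the GPU-side view sees 42.0
```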

NVIDIA Grace Hopper Superchip Specs:

  • Type: Accelerated CPU
  • Target Applications: High-Performance Computing (HPC), Artificial Intelligence (AI)
  • Architecture: Grace CPU + Hopper GPU
  • CPU Memory: Up to 480GB LPDDR5X with ECC
  • GPU Memory: 96GB HBM3 or 144GB HBM3e
  • Memory Bandwidth: Up to 624GB/s
  • NVLink-C2C Bandwidth: 900GB/s bidirectional, coherent memory
  • Availability: Currently available
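
As a back-of-the-envelope illustration of why these capacities matter for large language models, the short calculation below checks whether the FP16 weights of a hypothetical 70-billion-parameter model fit in the GPU memory options listed above. The model size is an assumption for illustration, not an NVIDIA figure.

```python
# Back-of-the-envelope check: do the FP16 weights of a large model fit in
# GH200 memory? The 70B-parameter model size is an illustrative assumption,
# not an NVIDIA figure; the capacities come from the spec list above.
params = 70e9                # hypothetical 70-billion-parameter model
bytes_per_param = 2          # FP16 weights use 2 bytes each
weights_gb = params * bytes_per_param / 1e9

hbm3_gb, hbm3e_gb = 96, 144  # GPU memory options
cpu_gb = 480                 # LPDDR5X capacity

print(f"FP16 weights: {weights_gb:.0f} GB")              # 140 GB
print(f"Fits in 96GB HBM3?   {weights_gb <= hbm3_gb}")   # False
print(f"Fits in 144GB HBM3e? {weights_gb <= hbm3e_gb}")  # True
print(f"Fits in GPU + CPU memory ({hbm3e_gb + cpu_gb} GB)? "
      f"{weights_gb <= hbm3e_gb + cpu_gb}")              # True
```

The same arithmetic explains the RAG and LLM claims earlier in the article: with GPU and CPU memory presented as one coherent pool, models and working data that overflow the GPU’s HBM can still stay resident on the superchip.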