Tuesday, July 16, 2024

Transforming AI with NVIDIA H100 GPU

The ND H100 v5 virtual machine series from Microsoft Azure provides next-generation performance at scale for generative AI, large language model (LLM), and other demanding compute workloads.


Microsoft Azure users around the world can now train and deploy their generative AI applications using NVIDIA's latest accelerated computing technologies.

The Microsoft Azure ND H100 v5 VMs, now generally available, pair NVIDIA H100 Tensor Core GPUs with NVIDIA Quantum-2 InfiniBand networking, letting customers scale generative AI, high-performance computing (HPC), and other applications with a click from a browser.

The new instance, available to customers across the United States, arrives as researchers and developers use large language models (LLMs) and accelerated computing to uncover new consumer and business use cases.

The NVIDIA H100 GPU delivers supercomputing-class performance through fourth-generation Tensor Cores, a new Transformer Engine that accelerates LLMs, and the latest NVLink technology, which lets GPUs communicate with one another at 900 GB/s.

The addition of NVIDIA Quantum-2 CX7 InfiniBand, with 3,200 Gbps of cross-node bandwidth, ensures seamless performance across the GPUs at massive scale, matching the capabilities of the world's top supercomputers.
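To put those two figures on a common footing, here is a back-of-the-envelope comparison (an illustration using the peak spec numbers quoted above, not a measured benchmark; the 350 GB payload is a hypothetical FP16 weight set for a 175B-parameter model):

```python
# Back-of-the-envelope comparison of the bandwidth figures quoted above.
# These are peak spec numbers, not measured throughput.

NVLINK_GBPS = 900        # GB/s per GPU, intra-node (NVLink)
IB_GBITS = 3200          # Gbit/s cross-node (Quantum-2 CX7 InfiniBand)
IB_GBPS = IB_GBITS / 8   # gigabits -> gigabytes: 400 GB/s

# Hypothetical payload: FP16 weights of a 175B-parameter model
payload_gb = 175e9 * 2 / 1e9   # 2 bytes per FP16 parameter = 350 GB

print(f"Cross-node InfiniBand:  {IB_GBPS:.0f} GB/s")
print(f"{payload_gb:.0f} GB over NVLink:     {payload_gb / NVLINK_GBPS:.2f} s")
print(f"{payload_gb:.0f} GB over InfiniBand: {payload_gb / IB_GBPS:.2f} s")
```

Even at spec, the cross-node fabric is within roughly 2–3x of intra-node NVLink bandwidth, which is what makes scaling a single training job across many VMs practical.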

Using v5 VMs for Scaling

ND H100 v5 VMs are well suited to training and running inference on increasingly complex LLMs and computer vision models. These neural networks power the most demanding and compute-intensive generative AI applications, including question answering, code generation, audio, video, and image generation, and speech recognition.

The ND H100 v5 VMs deliver up to 2x faster inference on LLMs such as the BLOOM 175B model compared with previous-generation instances, highlighting their potential to further optimize AI applications.
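A quick, illustrative estimate shows why a model of BLOOM's size needs a multi-GPU instance in the first place (a minimal sketch assuming 2 bytes per FP16 parameter and counting weights only, ignoring KV cache and activations):

```python
import math

# Rough, illustrative estimate: memory footprint of a 175B-parameter
# model's weights versus the 80 GB of memory on one H100 GPU.
params = 175e9            # nominal parameter count (BLOOM 175B)
bytes_per_param = 2       # FP16/BF16
h100_mem_gb = 80          # memory per H100 GPU

weights_gb = params * bytes_per_param / 1e9          # ~350 GB of weights
gpus_needed = math.ceil(weights_gb / h100_mem_gb)    # minimum GPUs, weights only

print(f"Weights: ~{weights_gb:.0f} GB -> at least {gpus_needed} H100 GPUs")
```

Even this lower bound exceeds four GPUs, so serving the model in half precision requires sharding it across several H100s within a single VM.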

Azure and NVIDIA

NVIDIA H100 Tensor Core GPUs on Azure give businesses the performance, adaptability, and scale they need to accelerate their AI training and inference workloads. The NVIDIA AI Enterprise software suite, integrated with Azure Machine Learning for MLOps, speeds the development and deployment of production AI, a combination that has delivered record-setting results in the widely used MLPerf benchmarks.

Additionally, NVIDIA and Microsoft are giving hundreds of millions of Microsoft enterprise users access to powerful industrial digitalization and AI supercomputing resources by integrating the NVIDIA Omniverse platform with Azure.
