Saturday, July 6, 2024

Microsoft and OpenAI Challenge NVIDIA GPUs

Rumours suggest that Microsoft and OpenAI are developing their own AI chips to compete with NVIDIA's GPUs

It has been rumoured that Microsoft and OpenAI will soon launch their own AI chip to compete with offerings like NVIDIA's H100. The development coincides with a broader trend in the AI sector towards building solutions in-house.

By Developing Its Own Artificial Intelligence Chips, Microsoft Hopes to Reduce Its Dependence on NVIDIA; OpenAI Is One of Its Major Partners

Everything had been running smoothly for Team Green, with little to no competition and orders pouring in at a high pace from across the industry. Suddenly, though, corporations have begun shifting towards an "in-house" strategy. The reason is not complicated: ever since the beginning of the "AI frenzy," NVIDIA has held its place at the forefront of the business because its H100 AI GPU is the piece of technology in the highest demand.

[Image: NVIDIA and AI (image credit: WCCFTECH)]

The extremely high demand resulted in a number of issues, including order backlogs, rising prices and, in some cases, "exclusive" access that depended on each company's relationship with NVIDIA. This left the playing field "unbalanced" for other enterprises, and as a result we may see a change in the landscape.

Microsoft is now seen as a major participant in the artificial intelligence (AI) market, and the company is moving quickly to build generative AI resources and integrate them into mainstream applications. Microsoft feels that the state of the AI chip sector is impeding that aim, which is why it believes an in-house "AI chip" is a potential answer.

Although there are few specifics on the chip itself, we do know that Microsoft intends to call it "Athena" and to introduce it at the Ignite conference in November 2023. The company's aim is for Athena to perform at the same level as NVIDIA's H100 AI GPUs, but accomplishing that objective won't be as simple as it may first seem.

On a much larger scale, AI developers choose NVIDIA because of its well-established CUDA platform and the company's recognised software innovations, which give the H100 a significant advantage over its rivals.

Even if Athena manages to match NVIDIA's hardware in raw computational power, developing comparable software resources will take time. In any case, the most important goal Microsoft hopes to accomplish with its own AI chips is to satisfy the requirements of its own services and partners, such as Microsoft Azure and OpenAI.

It is fair to say that Microsoft has all of the resources it needs to "perfect" its AI chip; the only thing it lacks is time.

Until now, OpenAI's ChatGPT has run on NVIDIA's AI and Tensor Core GPU hardware, such as the A100 and H100; the service has reportedly relied on thousands of H100 GPUs. In light of this development, however, the AI industry may see a significant transformation in the near future.

As we have often pointed out in coverage like this, the rapid expansion of an industry brings a wave of "innovation" aimed at breaking monopolies and reaching new heights.

The situation is much the same today, and given that new competitors will always appear, NVIDIA's domination of the artificial intelligence market is limited at best. The most important question, however, is how things turn out for Microsoft, considering that the company currently leads in bringing AI breakthroughs to consumer products.

