Microsoft revealed that the company has connected tens of thousands of Nvidia A100 chips and rack servers to build hardware for ChatGPT and Bing AI bots.
According to a report from Bloomberg, Microsoft has spent hundreds of millions of dollars building a giant supercomputer to power OpenAI’s ChatGPT chatbot. In blog posts, Microsoft explains how it built the Azure artificial intelligence infrastructure used by OpenAI, and why its processing systems are becoming more powerful.
To build the supercomputer powering OpenAI’s projects, Microsoft linked thousands of Nvidia graphics cards (GPUs) to the Azure cloud computing platform, allowing OpenAI to train increasingly powerful AI models and unlock the potential of AI in ChatGPT and Bing.
According to Bloomberg, Scott Guthrie, Microsoft’s vice president of AI and cloud, said the company has spent hundreds of millions of dollars on the project. Although this is a modest sum for Microsoft, it shows that the corporation has recently expanded its multi-year, multi-billion dollar investment in OpenAI and is ready to pour more money into the AI market.
Microsoft has been working to optimize Azure’s AI capabilities with the launch of new virtual machines using Nvidia’s H100 and A100 Tensor Core GPUs, as well as the Quantum-2 InfiniBand network, which was unveiled in November 2022. According to Microsoft, this direction will allow OpenAI and other companies that rely on Azure to train larger and more complex AI models.
Eric Boyd, Microsoft’s corporate vice president of Azure AI, said: “We realized that we needed to build specialized clusters of computers that focused on handling massive workloads like OpenAI. The parties worked closely together to understand what they were looking for and what they really needed when building an AI training environment.”