Nvidia upgrades its flagship chip to better handle artificial intelligence systems

by nativetechdoctor

Nvidia has upgraded its flagship AI chip with new features to better handle artificial intelligence workloads for partners such as Amazon, Google, and Oracle.

The new chip, called the H200, will surpass Nvidia’s current flagship, the H100. Its main upgrade is more high-bandwidth memory, one of the most expensive parts of the chip and a key determinant of how quickly it can process data.

Nvidia currently dominates the AI chip market, supplying chips for OpenAI’s ChatGPT and many other AI services that generate human-like answers. Adding higher-bandwidth memory and faster connections to the chip’s processing elements means services like ChatGPT can respond more quickly, according to Reuters.
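The link between memory bandwidth and response speed comes from how these models generate text: each new token requires streaming the model’s weights through memory, so decoding is usually bandwidth-bound rather than compute-bound. As a rough illustration, here is a back-of-envelope sketch in Python; the bandwidth figures are taken from Nvidia’s public spec sheets rather than this article, and the 70-billion-parameter fp16 model is a hypothetical example.

```python
# Back-of-envelope: why memory bandwidth caps LLM response speed.
# During autoregressive decoding, each generated token streams the model's
# weights through the GPU's memory once, so peak tokens/sec is roughly
# (memory bandwidth) / (bytes of weights read per token).
# Assumed bandwidths (Nvidia public specs, not from this article):
# H100 SXM ~3.35 TB/s HBM3; H200 ~4.8 TB/s HBM3e.

def peak_tokens_per_sec(bandwidth_tb_s: float, params_b: float,
                        bytes_per_param: float = 2.0) -> float:
    """Rough upper bound on decode throughput for a memory-bound model."""
    bytes_per_token = params_b * 1e9 * bytes_per_param  # weights read per token
    return bandwidth_tb_s * 1e12 / bytes_per_token

for name, bw in [("H100", 3.35), ("H200", 4.8)]:
    # Hypothetical 70B-parameter model in fp16, single GPU, batch size 1
    print(f"{name}: ~{peak_tokens_per_sec(bw, 70):.0f} tokens/s upper bound")
```

Under these assumptions the H200’s extra bandwidth alone lifts the theoretical ceiling from roughly 24 to 34 tokens per second, which is the kind of speedup the claim above refers to.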

The H200 has 141 gigabytes (GB) of high-bandwidth memory, up from 80 GB in the previous H100 chip. Nvidia did not reveal the memory supplier for the new chip, but Micron Technology said in September that it was working to become a supplier to Nvidia.

Nvidia further revealed that Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy the H200 chip, alongside specialist AI cloud providers such as CoreWeave, Lambda, and Vultr.
