Nvidia uses AI to design chips faster

As the artificial intelligence (AI) industry booms, technology companies are scrambling to get their hands on the scarce supply of GPUs. To help address the problem, Nvidia developed the ChipNeMo AI system to speed up chip design and production.

GPU design requires a great deal of labor and time. Bryan Catanzaro, Nvidia's vice president of applied deep learning research, said a chip requires nearly 1,000 people to build, and each person needs to understand how the different parts of the design process fit together.

The ChipNeMo system uses a large language model built on Meta's Llama 2. According to Insider, ChipNeMo's chatbot can answer queries related to chip design, such as questions about GPU architecture, and can generate chip design code.
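ChipNeMo itself is not publicly available, but the basic interaction it describes, asking a Llama 2-style chat model a chip design question, can be sketched with open tools. The snippet below is a minimal illustration using the public Llama 2 chat checkpoint on Hugging Face as a stand-in; the model name, prompt, and generation settings are assumptions for demonstration only, not Nvidia's actual system.

```python
# Illustrative sketch: querying a Llama 2-style chat model about chip design.
# "meta-llama/Llama-2-7b-chat-hf" is a public stand-in model, not ChipNeMo.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed base model for illustration
    device_map="auto",
)

# Llama 2 chat models expect instructions wrapped in [INST] ... [/INST] tags.
prompt = (
    "[INST] You are an assistant for GPU design engineers. "
    "Briefly explain what a streaming multiprocessor does in a modern GPU. [/INST]"
)

result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

In practice, a system like ChipNeMo would further adapt such a base model on internal design documents and code, which is what lets it answer company-specific questions rather than only general ones.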

In 2023, the AI boom lifted Nvidia into the trillion-dollar club, with its market capitalization surpassing 1 trillion USD. Goldman Sachs analysts expect Nvidia shares to continue rising through the first half of 2025.

Since ChipNeMo launched in October 2023, Nvidia says the AI system has proven useful for summarizing notes and training new chip design engineers. The company is working to increase production to meet rising chip demand.

In January, Mark Zuckerberg announced plans to spend billions of dollars on 350,000 additional Nvidia H100 GPUs for the AI race. Counting other chip models as well, Meta will have accumulated 600,000 chips by the end of 2024.

Other technology giants are also looking for ways to address the chip shortage.

In July 2023, Google's DeepMind division created an AI system to speed up the design of its latest custom chips, according to the Wall Street Journal. Meanwhile, leading chip design company Synopsys has launched an AI tool designed to help chip engineers increase productivity.
