
Google unveils its latest A.I. supercomputer claiming superior performance to Nvidia


On Wednesday, Google released details about its latest artificial intelligence supercomputer, claiming that it outperforms Nvidia's systems in both speed and efficiency. The announcement comes as power-hungry machine learning models remain the hottest part of the technology industry.

Since 2016, Google has been creating and deploying its own AI chips, known as Tensor Processing Units (TPUs), even though Nvidia currently dominates the market for AI model training and deployment with over 90% market share.

Over the last decade, Google has made significant contributions to the field of AI, with its employees developing some of the most important advancements. However, there is a belief among some that the company has lagged behind in terms of commercializing its inventions. Internally, the company has been working to release products and demonstrate that it has not lost its lead. This has been described as a “code red” situation within the company, according to a previous report by CNBC.

Machine learning models and applications, such as Google’s Bard or OpenAI’s ChatGPT, rely heavily on computing power and require hundreds or even thousands of chips to work in unison for weeks or months to train models. Nvidia’s A100 chips are often used to power these processes.

Google announced on Tuesday that it had built a system for training AI models, consisting of more than 4,000 TPUs linked with customized components. The system has been operational since 2020 and was used to train Google's PaLM model, a competitor to OpenAI's GPT models, over a period of more than 50 days.

According to Google researchers, their TPU v4 supercomputer is 1.2x to 1.7x faster and 1.3x to 1.9x more energy-efficient than comparable Nvidia A100 systems. They also noted that the TPU v4 supercomputer's performance, scalability, and availability make it well suited to large language models.

The Google researchers said they did not compare the TPU v4 against Nvidia's latest AI chip, the H100, because it is more recent and was built with more advanced manufacturing technology.

On Wednesday, results and rankings from MLPerf, an industrywide AI chip benchmark, were released. According to Nvidia CEO Jensen Huang, the company's most recent chip, the H100, performed significantly faster than the previous generation.

In a blog post, Huang stated that the H100, built on Nvidia's Hopper architecture, delivered four times the performance of the A100 in the MLPerf 3.0 test. He added that the next level of generative AI requires new AI infrastructure capable of training large language models with high energy efficiency.

Because the computing power AI requires is costly, many in the industry are focused on developing new chips, components such as optical interconnects, or software techniques that reduce the amount of computing power needed.

The high power demands of AI also benefit cloud providers such as Google, Microsoft, and Amazon, which can sell computing power by the hour and offer credits or computing time to startups to build relationships. Google's cloud service rents out Nvidia chips as well. As an example, Google noted that Midjourney, an AI image generator, was trained on its TPU chips.

About Rajesh Parmar
