Groq, Nvidia's new hobbyhorse?

Nvidia is riding high, largely thanks to the boom in artificial intelligence. Its latest financial results confirm a record end to the fiscal year: sales in the fourth quarter of fiscal 2024 rose 22% over the third quarter and, even more telling, 265% year-on-year.

However, the competition is getting organized. In recent months, one start-up has made headlines: Groq. The company claims to produce AI chips that deliver more computing power than Nvidia's. To that end, it has developed a new chip architecture called the LPU (Language Processing Unit). This component, fabricated on a 14-nanometer process, is designed for a single purpose: running language models for artificial intelligence, including those behind the emblematic ChatGPT.

As Jonathan Ross, founder of Groq, explained to IT World in January 2024, the LPU consumes fewer resources than a traditional processor while offering much higher performance. It therefore generates responses far faster than a conventional CPU or graphics card.

For comparison, Groq's CEO explains that its GroqChip LPU can generate up to 400 tokens per second, compared with roughly 100 on a standard processor. However, a large number of these chips must be combined to match the performance of Nvidia's graphics cards.
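To get a feel for what those throughput figures mean in practice, here is a minimal sketch of the implied response latency. The 400 and 100 tokens-per-second numbers come from the article; the 500-token response length is a hypothetical value chosen for illustration.

```python
def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to generate num_tokens at a given throughput."""
    return num_tokens / tokens_per_second

RESPONSE_TOKENS = 500  # hypothetical length of one chatbot answer

lpu_time = generation_time(RESPONSE_TOKENS, 400)  # LPU figure quoted by Groq
cpu_time = generation_time(RESPONSE_TOKENS, 100)  # standard-processor figure

print(f"LPU: {lpu_time:.2f} s, standard processor: {cpu_time:.2f} s")
print(f"Speedup: {cpu_time / lpu_time:.0f}x")
```

At these rates, a 500-token answer would take 1.25 seconds on the LPU versus 5 seconds on a standard processor, a 4x speedup — assuming, of course, that throughput is the only bottleneck.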

It remains to be seen how this project will evolve in the coming months, and whether it will ultimately be integrated into infrastructure or devices to improve the performance of generative artificial intelligence.

One thing is certain: this solution should interest the major AI players, who are also considering building their own chips. An article published by Reuters in late 2023 indicated that OpenAI was considering making its own specialized chips. Bloomberg revisited the topic in January 2024, reporting that Sam Altman was seeking to raise $100 billion to build a network of foundries around the world. If this ambition comes to fruition, OpenAI could eventually abandon the Nvidia GPUs currently used to run ChatGPT.
