Nvidia presents Blackwell B200 chip specifically for AI: “we need bigger GPUs”

The CEO of Nvidia yesterday presented his company’s new GPU dedicated to artificial intelligence (AI), the B200. Based on a new architecture called Blackwell, this chip increases performance and reduces power consumption.


Last night, the man who claims AI will surpass humans within five years presented a new advance in the field. Nvidia CEO Jensen Huang unveiled the Blackwell B200 chip and the GB200 “superchip,” designed for artificial intelligence workloads, before an audience of 18,000 people. Introducing his new creations, the Nvidia co-founder declared: “We need bigger GPUs.” The tone was set.

What does Nvidia’s Blackwell B200 chip offer?

Currently, the GPU most widely used for AI workloads is the Nvidia H100, a chip that notably powers Tesla’s supercomputer. It sells for more than $40,000 per unit among companies in the industry, such as OpenAI. Even the older A100, launched in 2020, still sells for around $20,000. These chips have made Nvidia the third-largest market capitalization in the world, just behind Apple.

Blackwell chips go a step beyond these hugely successful GPUs. The new B200 GPU delivers up to 20 petaflops of FP4 compute thanks to its 208 billion transistors. The GB200 superchip combines two of these GPUs with a Grace processor to deliver 30 times the performance on inference workloads, i.e. the computations used to run large language models such as GPT.

On a GPT-3 benchmark with 175 billion parameters, Nvidia claims the GB200 delivers seven times the performance of an H100. Nvidia also claims four times the training speed, i.e. the computations used to train the largest language models.

The B200 GPU is more energy efficient than other Nvidia chips

But the progress is not only about performance. The power consumption of AI is famously enormous, and these cards are much more efficient. “The GB200 reduces cost and energy consumption by up to 25 times compared to an H100,” says Nvidia.

For example, according to Nvidia, training a model with 1.8 trillion parameters previously required 8,000 Hopper GPUs and 15 megawatts of power. Today, Jensen Huang explains, 2,000 Blackwell GPUs can do it with just four megawatts.
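Taking Nvidia’s figures at face value, a quick back-of-the-envelope calculation shows what they imply: four times fewer GPUs, but only about 3.75 times less total power, since each Blackwell GPU actually draws slightly more than a Hopper. This is a rough sanity check on the article’s numbers, not an official Nvidia comparison:

```python
# Figures quoted in the article for training a 1.8-trillion-parameter model
hopper_gpus, hopper_mw = 8_000, 15      # Hopper generation: GPUs, megawatts
blackwell_gpus, blackwell_mw = 2_000, 4  # Blackwell generation: GPUs, megawatts

gpu_ratio = hopper_gpus / blackwell_gpus    # how many times fewer GPUs
power_ratio = hopper_mw / blackwell_mw      # how many times less total power

# Per-GPU power draw implied by those figures, in kilowatts
hopper_kw_per_gpu = hopper_mw * 1_000 / hopper_gpus
blackwell_kw_per_gpu = blackwell_mw * 1_000 / blackwell_gpus

print(f"{gpu_ratio:.0f}x fewer GPUs, {power_ratio:.2f}x less total power")
print(f"per GPU: Hopper {hopper_kw_per_gpu:.3f} kW vs Blackwell {blackwell_kw_per_gpu:.1f} kW")
```

The much larger “25 times” figure Nvidia quotes is therefore not about raw wattage alone, but about energy per unit of work, since each Blackwell GPU also gets far more training done per watt-hour.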

A major technological advance, but not enough to impress investors: the announcement of the Blackwell architecture and the B200 GPU sent Nvidia shares down 1.77%, interrupting the relentless rise of the past twelve months. Nvidia will not overtake Apple. At least not for now.

Source: Bloomberg

  • Nvidia CEO Jensen Huang presented his new GPU dedicated to AI calculations last night.
  • Two of these GPUs are combined with a Grace processor into a superchip, the GB200, which offers even higher performance.
  • The chips also bring major gains in power efficiency.
