Sunday, December 22, 2024

Nvidia’s keynote at GTC held some surprises

SAN JOSE — “I hope you realize this is not a concert,” said Nvidia President Jensen Huang to an audience so large, it filled up the SAP Center in San Jose. This is how he introduced what is perhaps the complete opposite of a concert: the company’s GTC event. “You have arrived at a developers conference. There will be a lot of science describing algorithms, computer architecture, mathematics. I sense a very heavy weight in the room; all of a sudden, you’re in the wrong place.”

It may not have been a rock concert, but the leather-jacket-wearing 61-year-old CEO of the world's third-most-valuable company by market cap certainly had a fair number of fans in the audience. The company launched in 1993 with a mission to push general computing past its limits. "Accelerated computing" became the rallying cry for Nvidia: Wouldn't it be great to make chips and boards that were specialized, rather than general-purpose? Nvidia chips gave graphics-hungry gamers the tools they needed to play games at higher resolutions, with higher quality and higher frame rates.

It is not a huge surprise, perhaps, that the Nvidia CEO drew parallels to a concert. The venue was, in a word, very concert-y. Image Credits: TechCrunch / Haje Kamps

Monday’s keynote was, in a way, a return to the company’s original mission. “I want to show you the soul of Nvidia, the soul of our company, at the intersection of computer graphics, physics and artificial intelligence, all intersecting inside a computer.”

Then, for the next two hours, Huang did a rare thing: He nerded out. Hard. Anyone who had come to the keynote expecting him to pull a Tim Cook, with a slick, audience-focused presentation, was bound to be disappointed. Overall, the keynote was tech-heavy, acronym-riddled, and unapologetically a developer conference keynote.

We need bigger GPUs

Graphics processing units (GPUs) are where Nvidia got its start. If you've ever built a computer, you're probably thinking of a graphics card that goes in a PCI slot. That is where the journey started, but we've come a long way since then.

The company announced its brand-new Blackwell platform, which is an absolute monster. Huang says that the core of the processor was "pushing the limits of physics [of] how big a chip could be." It combines the power of two chips, linked at speeds of 10 TB/s.

“I’m holding around $10 billion worth of equipment here,” Huang said, holding up a prototype of Blackwell. “The next one will cost $5 billion. Luckily for you all, it gets cheaper from there.” Putting a bunch of these chips together can crank out some truly impressive power.

The previous generation of AI-optimized GPU was called Hopper. Blackwell is between 2 and 30 times faster, depending on how you measure it. Huang explained that it took 8,000 GPUs, 15 megawatts and 90 days to train the GPT-MoE-1.8T model. With the new system, you could use just 2,000 GPUs drawing a quarter of the power.
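As a rough sketch of the arithmetic behind those figures (the GPU counts, 15-megawatt draw and 90-day runtime are from the talk; treating "25% of the power" as a similar-length run at a quarter of the total draw is an assumption):

```python
# Back-of-the-envelope comparison of the training-run figures Huang cited.
# Assumption: the Blackwell run takes a comparable ~90 days at 25% of the
# Hopper run's 15 MW draw; these are illustrative numbers, not benchmarks.

hopper_gpus, hopper_mw, days = 8_000, 15.0, 90
blackwell_gpus = 2_000
blackwell_mw = hopper_mw * 0.25  # 3.75 MW, per the "25% of the power" claim

hours = days * 24
hopper_mwh = hopper_mw * hours        # total energy for the Hopper run
blackwell_mwh = blackwell_mw * hours  # total energy for the Blackwell run

print(f"Hopper run:    {hopper_gpus:,} GPUs, {hopper_mwh:,.0f} MWh")
print(f"Blackwell run: {blackwell_gpus:,} GPUs, {blackwell_mwh:,.0f} MWh")
print(f"GPU count cut {hopper_gpus / blackwell_gpus:.0f}x, "
      f"energy cut {hopper_mwh / blackwell_mwh:.0f}x")
```

Under those assumptions, the new system would use a quarter of the GPUs and a quarter of the energy for the same training job.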

These GPUs are pushing a fantastic amount of data around — which is a very good segue into another topic Huang talked about.

What’s next

Nvidia rolled out a new set of tools for automakers working on self-driving cars. The company was already a major player in robotics, but it doubled down with new tools for roboticists to make their robots smarter.

The company also introduced Nvidia NIM, a software platform aimed at simplifying the deployment of AI models. NIM leverages Nvidia’s hardware as a foundation and aims to accelerate companies’ AI initiatives by providing an ecosystem of AI-ready containers. It supports models from various sources, including Nvidia, Google and Hugging Face, and integrates with platforms like Amazon SageMaker and Microsoft Azure AI. NIM will expand its capabilities over time, including tools for generative AI chatbots.

“Anything you can digitize: So long as there is some structure where we can apply some patterns, means we can learn the patterns,” Huang said. “And if we can learn the patterns, we can understand the meaning. When we understand the meaning, we can generate it as well. And here we are, in the generative AI revolution.”

