Sunday, December 22, 2024
Technology

How NVIDIA became a major player in robotics

[A version of this post appeared in TechCrunch’s robotics newsletter, Actuator. Subscribe here.]

The last time I’d spoken with Nvidia at any length about robotics was also the last time we featured Claire Delaunay on stage at our Sessions event. That was a while ago. She left the company last July to work with startups and do investing. In fact, she returned to the TechCrunch stage at Disrupt two weeks back to discuss her work as a board advisor for the ag tech firm Farm-ng.

Not that Nvidia is desperate for positive reinforcement after its last several earnings reports, but it’s worth pointing out how well the company’s robotics strategy has paid off in recent years. Nvidia invested heavily in the category at a time when mainstreaming robotics beyond manufacturing still seemed like a pipe dream to many. April marks a decade since the launch of the Jetson TK1. Nvidia described the offering at the time: “Jetson TK1 brings the capabilities of Tegra K1 to developers in a compact, low-power platform that makes development as simple as developing on a PC.”

This February, the company noted, “A million developers across the globe are now using the Nvidia Jetson platform for edge AI and robotics to build innovative technologies. Plus, more than 6,000 companies — a third of which are startups — have integrated the platform with their products.”

You would be hard-pressed to find a robotics developer who hasn’t spent time with the platform, and frankly it’s remarkable how users run the gamut from hobbyists to multinational corporations. That’s the kind of spread companies like Arduino would kill for.

Last week, I paid a visit to the company’s massive Santa Clara offices. The buildings, which opened in 2018, are impossible to miss from the San Tomas Expressway. In fact, there’s a pedestrian bridge that runs over the road, connecting the old and new HQ. The new space is primarily composed of two buildings: Voyager and Endeavor, comprising 500,000 and 750,000 square feet, respectively.

Between the two is an outdoor walkway lined with trees, beneath large, crisscrossing trellises that support solar arrays. The battle of the South Bay Big Tech headquarters has really heated up in recent years, but when you’re effectively printing money, buying land and building offices is probably the single best place to direct it. Just ask Apple, Google and Facebook.

Image Credits: NVIDIA

Nvidia’s entry into robotics, meanwhile, has benefited from all manner of kismet. The firm knows silicon about as well as anyone on earth at this point, from design and manufacturing to the creation of low-power systems capable of performing increasingly complex tasks. That stuff is foundational for a world increasingly invested in AI and ML. Meanwhile, Nvidia’s breadth of knowledge around gaming has proven a huge asset for Isaac Sim, its robotics simulation platform. It’s a bit of a perfect storm, really.

Speaking at SIGGRAPH in August, CEO Jensen Huang explained, “We realized rasterization was reaching its limits. 2018 was a ‘bet the company’ moment. It required that we reinvent the hardware, the software, the algorithms. And while we were reinventing CG with AI, we were reinventing the GPU for AI.”

After some demos, I sat down with Deepu Talla, Nvidia’s vice president and general manager of Embedded & Edge Computing. As we began speaking, he pointed to a Cisco teleconferencing system on the far wall that runs on the Jetson platform. It’s a far cry from the typical autonomous mobile robots we tend to think about when we think about Jetson.

“Most people think of robotics as a physical thing that typically has arms, legs, wings or wheels — what you think of as inside-out perception,” he noted in reference to the office device. “Just like humans. Humans have sensors to see our surroundings and gather situational awareness. There’s also this thing called outside-in robotics. Those things don’t move. Imagine you had cameras and sensors in your building. They are able to see what’s happening. We have a platform called Nvidia Metropolis. It has video analytics and scales up for traffic intersections, airports, retail environments.”

Image Credits: TechCrunch

What was the initial reaction when you showed off the Jetson system in 2015? It was coming from a company that most people associate with gaming.

Yeah, although that’s changing. But you’re right. That’s what most consumers are used to. AI was still new; you had to explain what use case you were addressing. In November 2015, Jensen [Huang] and I went to San Francisco to present a few things. The example we had was an autonomous drone. If you wanted to do an autonomous drone, what would it take? You would need to have this many sensors, you need to process this many frames, you need to identify this. We did some rough math to identify how many computations we would need. And if you want to do it today, what’s your option? There was nothing like that at the time.
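To make that kind of sizing exercise concrete, here is a rough, hypothetical version of the math. The interview doesn’t give the actual figures, so every number below is an illustrative assumption rather than anything Nvidia stated:

```python
# Hypothetical back-of-envelope sizing for an autonomous drone's perception load.
# None of these figures come from the interview; they are illustrative assumptions.
cameras = 4                    # assumed number of onboard cameras
frames_per_second = 30         # assumed capture rate per camera
pixels_per_frame = 1280 * 720  # assumed 720p resolution
flops_per_pixel = 1_000        # assumed cost of a lightweight detection network

flops_per_second = cameras * frames_per_second * pixels_per_frame * flops_per_pixel
print(f"~{flops_per_second / 1e9:.0f} GFLOPS of sustained compute")  # ~111 GFLOPS
```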

How did Nvidia’s gaming history inform its robotics projects?

When we first started the company, gaming was what funded us to build the GPUs. Then we added CUDA to our GPUs so they could be used for non-graphics applications. CUDA is essentially what got us into AI. Now AI is helping gaming, because of ray tracing, for example. At the end of the day, we are building microprocessors with GPUs. All of this middleware we talked about is the same. CUDA is the same for robotics, high-performance computing, AI in the cloud. Not everyone needs to use all parts of CUDA, but it’s the same.

How does Isaac Sim compare to [Open Robotics’] Gazebo?

Gazebo is a good, basic simulator for doing limited simulations. We’re not trying to replace Gazebo. Gazebo is good for basic tasks. We provide a simple ROS bridge to connect Gazebo to Isaac Sim. But Isaac can do things that nobody else can do. It’s built on top of Omniverse. All of the things you have in Omniverse come to Isaac Sim. It’s also designed to plug in any AI model, any framework, all the things we’re doing in the real world. You can plug it in for all the autonomy. It also has the visual fidelity.
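For a concrete sense of what that bridge-style interoperability looks like in practice, here is a minimal, hypothetical sketch: a ROS 2 node that publishes velocity commands on the standard /cmd_vel topic. The node knows nothing about what is on the other end; any simulator that exposes a ROS 2 interface, whether Gazebo or Isaac Sim through a bridge, could consume the same messages. The node and topic names here are illustrative, not NVIDIA’s API.

```python
# Minimal, hypothetical ROS 2 node that publishes velocity commands.
# A simulator exposing a ROS 2 interface (e.g., Gazebo, or Isaac Sim via a
# bridge) would subscribe to /cmd_vel and move the simulated robot accordingly.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class VelocityDriver(Node):
    def __init__(self):
        super().__init__('velocity_driver')
        self.publisher = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.publish_command)  # 10 Hz

    def publish_command(self):
        msg = Twist()
        msg.linear.x = 0.5   # drive forward at 0.5 m/s
        msg.angular.z = 0.2  # gentle left turn, 0.2 rad/s
        self.publisher.publish(msg)


def main():
    rclpy.init()
    node = VelocityDriver()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```

The point of the sketch is the decoupling: autonomy code written against standard ROS topics can be pointed at whichever simulator offers the fidelity the task needs.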

You’re not looking to compete with ROS.

No, no. Remember, we are trying to build a platform. We want to connect into everybody and help others leverage our platform just like we are leveraging theirs. There’s no point in competing.

Are you working with research universities?

Absolutely. Dieter Fox is the head of Nvidia robotics research. He’s also a professor of robotics at the University of Washington. And many of our research members have dual affiliations; they are affiliated with universities in many cases. We publish. When you’re doing research, it has to be open.

Are you working with end users on things like deployment or fleet management?

Probably not. For example, if John Deere is selling a tractor, the farmers are not talking to us. We have tools for helping with fleet management, but it’s typically done by whoever is providing the service or building the robot.

When did robotics become a piece of the puzzle for Nvidia?

I would say the early 2010s. That’s when AI kind of happened. I think the first time deep learning really came to the whole world was 2012. There was a recent profile of Bryan Catanzaro. He then said on LinkedIn, “I didn’t actually convince Jensen, instead I just explained deep learning to him. He instantly formed his own conviction and pivoted Nvidia to be an AI company. It was inspiring to watch and I still sometimes can’t believe I got to be there to witness Nvidia’s transformation.”

2015 was when we started doing AI not just for the cloud, but also at the edge, for both Jetson and autonomous driving.

When you discuss generative AI with people, how do you convince them that it’s more than just a fad?

I think it shows in the results. You can already see the productivity improvement. It can compose an email for me. It’s not exactly right, but I don’t have to start from zero. It’s giving me 70%. There are obvious things you can already see that are definitely a step function better than how things were before. Summarizing something isn’t perfect; I’m not going to let it read and summarize for me just yet. So, you can already see some signs of productivity improvements.

