Monday, December 23, 2024

TRI is developing a new method to teach robots overnight

Learning may well be the most exciting frontier in all of robotics. The field itself dates back decades; the '80s, for instance, brought breakthroughs in learning by demonstration. But a slew of recent research projects out of schools like CMU, MIT and UC Berkeley point to a future in which robots learn much like their human counterparts.

Today at TechCrunch Disrupt’s Hardware Stage, the Toyota Research Institute (TRI) is showcasing advancements in research that can teach a robot a new skill quite literally overnight.

“It’s remarkable how fast it works,” says TRI CEO and Chief Scientist Gill Pratt. “In machine learning, up until quite recently there was a tradeoff, where it works, but you need millions of training cases. When you’re doing physical things, you don’t have time for that many, and the machine will break down before you get to 10,000. Now it seems that we need dozens. The reason for the dozens is that we need to have some diversity in the training cases. But in some cases it’s less.”

The system demonstrated by TRI uses some more traditional robot learning techniques, coupled with diffusion models — similar to the processes that power generative AI models like Stable Diffusion. The automaker’s research wing says it has trained robots on 60 skills and counting using this method. But existing models won’t solve the problem themselves.

“We’ve seen some big progress with the advent of [large language models], using them to impart this high level of cognitive intelligence into robots,” says TRI Senior Research Scientist Benjamin Burchfiel. “If you have a robot that picks up a thing, now instead of having to specify an object, you can tell it to pick up the can of Coke. Or you can tell it to pick up the shiny object, or you can do the same thing and do it in French. That’s really great, but if you want a robot to plug in a USB device or pick up a tissue, those models just don’t work. They’re really useful, but they don’t solve that part of the problem. We’re focused on filling in that missing piece, and the thing we’re really excited about now is that we actually have a system and that the fundamentals are correct.”

Among the advantages of the method is the ability to program skills capable of functioning across diverse settings. That matters, because robots have difficulty operating in less structured or unstructured environments. It’s a big part of the reason it’s easier for a robot to function in, say, a warehouse than on a road or even in a home. Warehouses are generally built to be structured, with little change beyond moving obstacles like people and forklifts.

Ideally, you want a robot that can roll with the punches. Take the home. One of TRI’s primary focuses has been developing systems that can help older people continue to live independently. That’s an increasingly large concern in places with an aging population, like Toyota’s native Japan. One of the goals is the creation of a system that can both operate in different environments and navigate changes therein.

People move furniture, leave messes and don’t always put things back where they belong. Traditionally, roboticists have had to take a kind of brute-force approach to this stuff, anticipating every edge case and deviation and programming the robot to manage them in advance.

This is important stuff if robots are going to function as advertised in the real world. Equally important is what roboticists deem “general purpose” systems. Those are robots that can learn and adapt to new tasks. It’s a radical shift away from more traditional single-purpose systems that are trained to do one thing well over and over again. It’s worth remembering, however, that we’re still a ways away from anything that can credibly be considered “general purpose.”

Roboticists at TRI begin by teaching the systems through teleoperation, a common tool in robot learning. Here, that process can take a monotonous couple of hours, wherein the system is made to repeat the same task over and over.

“You can think of it as remotely driving a robot through demonstrations,” says Burchfiel. “Currently that number is usually several dozen. It usually takes you about an hour to teach a basic behavior. The system doesn’t really care how you control a robot. The one that we’ve been using most recently, which has enabled a lot more of these more dexterous behaviors, is a teleop device that’s actually transmitting force between the robot and person. This means that the person can feel what the robot is doing as it’s interacting with the world. It lets you do other things that you can’t otherwise coordinate.”
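
To picture the shape of that data: below is a minimal, hypothetical sketch of what logging teleoperated demonstrations might look like. The SimRobot class, field names and dimensions are all invented for illustration and are not TRI's actual interfaces.

```python
import numpy as np

# A minimal sketch of demonstration collection, assuming a simulated robot.
# Every class and field name here is illustrative; none of it is TRI's
# actual stack.

class SimRobot:
    """Stand-in robot that returns a camera image and per-joint torques."""
    def step(self, action):
        return {
            "rgb": np.random.rand(64, 64, 3),    # what the robot sees
            "torque": np.random.randn(7) * 0.1,  # force feedback it feels
        }

def record_demonstration(robot, horizon=100):
    """Log (observation, action) pairs while a human drives the robot.
    Here the 'operator' is random noise; in practice the commands come
    from a force-reflecting teleop device."""
    demo = []
    for _ in range(horizon):
        action = np.random.uniform(-1, 1, size=7)  # operator's joint command
        obs = robot.step(action)
        demo.append((obs, action))
    return demo

# A basic behavior is typically taught with a few dozen demonstrations.
demos = [record_demonstration(SimRobot()) for _ in range(30)]
print(f"collected {len(demos)} demos of {len(demos[0])} steps each")
```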

The system utilizes all the data presented to it, including sight and force feedback, to produce a fuller picture of the task. As long as there is some overlap between the collected data (say, associating sight with touch), it’s able to replicate that activity using its built-in sensors. Force feedback is the key to understanding that you are, say, holding a tool correctly.
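
As an illustration of the idea, here is one naive way sight and force could be fused into a single observation vector. A real system would use a learned image encoder rather than the crude color average below; none of these names come from TRI.

```python
import numpy as np

# Toy fusion of vision and force into one observation vector, so a policy
# can associate what the robot sees with what it feels. Purely illustrative.

def encode_observation(rgb, torque):
    visual = rgb.mean(axis=(0, 1))           # crude 3-dim color summary,
                                             # standing in for a learned encoder
    return np.concatenate([visual, torque])  # joint sight-and-touch vector

obs = encode_observation(np.random.rand(64, 64, 3), np.random.randn(7))
print(obs.shape)  # (10,): 3 visual dims + 7 force dims
```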

TRI says its initial experiments with tactility “have been extremely promising.” Flipping pancakes, for example, had a 90% success rate (27 out of 30 flips), a slight improvement over the non-tactile trials, which scored 83%. The contrast is far more stark with dough rolling (96%) and food serving (90%): without tactile sensing, those numbers drop to 0% and 10%, respectively.

Once that aspect of the training is completed, the systems are left alone, as their neural networks get to work training overnight. If things go as planned, the skill will have been fully learned by the time researchers return to the lab the next morning.
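
To make that unattended step concrete, here is a deliberately oversimplified stand-in: fitting a linear policy to synthetic demonstration data by least squares. TRI's system trains a diffusion policy (more on that below), but the overall offline structure of fitting a policy to logged demonstrations is the same.

```python
import numpy as np

# Oversimplified stand-in for the overnight phase: fit a linear policy to
# logged (observation, action) pairs via least squares. The real system
# trains a diffusion policy instead; the data here is synthetic.

rng = np.random.default_rng(0)
observations = rng.normal(size=(3000, 10))      # fused observations from demos
true_mapping = rng.normal(size=(10, 7))         # pretend "ground truth" policy
actions = observations @ true_mapping + 0.01 * rng.normal(size=(3000, 7))

weights, *_ = np.linalg.lstsq(observations, actions, rcond=None)
error = np.abs(observations @ weights - actions).mean()
print(f"mean action error after offline training: {error:.4f}")
```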

The system relies on diffusion policy, which is, “a new way of generating robot behavior by representing a robot’s visuomotor policy as a conditional denoising diffusion process,” according to the researchers behind it. In simpler terms, the system starts from pure random noise and iteratively removes that “noise” until a coherent output emerges, conditioned on what the robot observes. It’s similar to much of what we’ve seen in the generative AI world, but here the process generates robot behaviors rather than images.
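
A toy version of that sampling loop is sketched below. The denoiser here is a stub that nudges the action toward a fake target; in diffusion policy proper it is a trained neural network, and the update rule is more sophisticated than a single subtraction per step.

```python
import numpy as np

# Toy sketch of the core loop in a diffusion policy: an action starts as
# pure Gaussian noise, and predicted "noise" is repeatedly subtracted,
# conditioned on the observation, until a coherent action remains.

def denoiser(noisy_action, obs, t):
    """Stub for the learned network that predicts the noise present in
    noisy_action at diffusion step t, given the observation."""
    target = np.tanh(obs[: noisy_action.shape[0]])  # fake conditioning signal
    return noisy_action - target                    # "noise" relative to target

def sample_action(obs, action_dim=7, steps=50, seed=1):
    rng = np.random.default_rng(seed)
    action = rng.normal(size=action_dim)            # begin as pure noise
    for t in reversed(range(steps)):
        action -= (1.0 / steps) * denoiser(action, obs, t)  # denoising step
    return action

obs = np.linspace(-1.0, 1.0, 10)                    # stand-in observation
print(sample_action(obs))                           # drifts toward the target
```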

I recognized recently that I’d been thinking about robot learning the wrong way. I had previously considered different methods of teaching robots to be in conflict with one another, assuming that one superior method would ultimately run the rest out of town. It’s now clear to me that the way forward will be a combination of different methods, in much the same way that humans learn. Another important facet in all of this is fleet learning: effectively a centrally accessible, cloud-based system that robots can use to teach and learn from one another’s experiences.
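
In code, the fleet learning idea could be as simple as a shared repository of skill weights. The interface below is invented purely to illustrate the concept; TRI hasn't published such an API.

```python
# Speculative sketch of fleet learning: a central store that any robot in a
# fleet can push newly learned skills to and pull from. Invented interface.

class SkillCloud:
    """Toy central repository of learned skills, keyed by name."""
    def __init__(self):
        self._skills = {}

    def upload(self, name, policy_weights):
        self._skills[name] = policy_weights   # one robot shares what it learned

    def download(self, name):
        return self._skills.get(name)         # any robot in the fleet reuses it

cloud = SkillCloud()
cloud.upload("flip_pancake", policy_weights=[0.1, 0.2, 0.3])
print(cloud.download("flip_pancake"))         # another robot fetches the skill
```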

One of the key next steps is the creation of Large Behavior Models to help robots learn. “We’re trying to scale,” says Vice President of Robotics Research Russ Tedrake. “We’ve trained 60 skills already, 100 skills by the end of the year, thousands of skills by the end of next year. We don’t really know the scaling laws yet. How many skills are we going to have to train before something completely new comes out the other end? We’re studying that. We’re in the regime now where we can start asking these pretty fundamental questions and start looking for the laws to know what kind of timeline we’re on.”

Further down the road, the team hopes such findings will lead to more capable robots that can work with novel objects in new settings, creating actions on the fly based on trained behaviors. In many cases, tasks are composed of smaller behaviors that can be strung together and executed. All in due time, of course.
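
Stringing behaviors together can be pictured as nothing more exotic than invoking learned skills in sequence. The skill names below are made up for illustration.

```python
# Illustrative only: a larger task executed as a chain of smaller learned
# behaviors. Skill names are invented.

def run_task(robot, skills, plan):
    """Execute a task by invoking learned behaviors in order."""
    for name in plan:
        skills[name](robot)

skills = {
    "open_drawer":  lambda r: print("opening drawer"),
    "pick_spatula": lambda r: print("picking up spatula"),
    "flip_pancake": lambda r: print("flipping pancake"),
}
run_task(robot=None, skills=skills,
         plan=["open_drawer", "pick_spatula", "flip_pancake"])
```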

In the meantime, Pratt is set to join Boston Dynamics AI Institute Executive Director Marc Raibert on Thursday as part of Disrupt’s Hardware Stage. The pair will discuss these breakthroughs and more.
