Swense Tech


How MIT Helped A Blind Robot Teach Itself To Walk In 3 Hours

MIT’s Biomimetics Lab recently broke the speed record for a robotic Mini Cheetah: not quite Usain Bolt speed, but probably faster than you can run. But this robot can’t see, and MIT’s researchers didn’t train it to walk or run. Rather, it taught itself, and in just three hours.

The key: removing humans from most of the process. And: extensive use of simulation.

“Traditionally, the process that people have been using [to train robots] require you to study the actual system and manually design models … this process is good, it’s well established, but it’s not very scalable,” MIT professor Pulkit Agrawal told me recently in a TechFirst podcast. “But we are removing the human from designing the specific behaviors.”

Training a robot the traditional way takes about 100 days of intensive effort, Agrawal says: human time spent by computer scientists and engineers designing behaviors, plus months of trial-and-error learning.

Taking it down to three hours means extensive use of simulators built on technology from NVIDIA and other partners, but it also means changing methodology: moving from telling the robot what to do to setting it (relatively) free to go make mistakes.

“If I remove a human designer … I need to pay a cost,” Agrawal says. “And that cost is we’re doing trial and error learning, but then it requires a lot more data … Now, if you were to do this in the real world, this would be very expensive because it would require 100 days of real-world experience. And if the robot falls down, that’s not very good. So, what [the] simulator offers is kind of a safe playground, where it’s okay for the robot to fall down and go back, but also the simulator can run much faster than real time.”
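A quick back-of-envelope calculation, using the figures Agrawal cites, shows just how much faster than real time the simulation needs to run, in aggregate across its many parallel environments:

```python
# Back-of-envelope check: compressing ~100 days of trial-and-error
# experience into 3 hours of wall-clock training implies a large
# aggregate simulation speedup over real time.

real_experience_hours = 100 * 24   # ~100 days of real-world experience
wall_clock_hours = 3               # training time reported by MIT

speedup = real_experience_hours / wall_clock_hours
print(f"aggregate speedup needed: {speedup:.0f}x real time")
```

That 800x factor is the product of per-environment simulation speed and the number of robots simulated in parallel, which is why GPU-accelerated simulators that batch thousands of environments matter here.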

Simulation and freedom to make no-consequence errors make the difference.

Previous running control systems, such as those for Boston Dynamics’ robots and the MIT Cheetah 3, were analytically designed. That means relying on engineers to analyze the physics of locomotion, formulate abstractions, and implement a hierarchy of controllers to make the robot balance and run, MIT says. In the real world, that means trial and error, analyzing errors, adapting software models, and re-trying … all while trying to keep your robot hardware from bashing itself to bits every time it falls.

So simulation works, and self-learning works, but there’s one more critical step: adapting simulated skills to the real world, which will always be different from the simulation. (No matter how good your simulation, it’s not an exact replica of reality.)

“We developed an approach by which the robot’s behavior improves from simulated experience, and our approach critically also enables successful deployment of those learned behaviors in the real-world,” MIT says. “The intuition behind why the robot’s running skills work well in the real world is: Of all the environments it sees in this simulator, some will teach the robot skills that are useful in the real world. When operating in the real world, our controller identifies and executes the relevant skills in real-time.”
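The core idea MIT describes, training across many randomized simulated environments so the real world looks like just another variation, can be illustrated with a deliberately tiny sketch. This is not MIT’s code: the “policy,” the friction range, and the random-search loop are all invented stand-ins for a real reinforcement-learning setup.

```python
import random

random.seed(0)  # make this toy run reproducible

def make_terrain():
    """Randomize a property the robot can't directly observe."""
    return {"friction": random.uniform(0.2, 1.2)}

def episode_reward(policy, terrain):
    """Toy stand-in for one simulated episode: reward is highest when
    the policy's gain compensates the (hidden) terrain friction."""
    return -abs(policy["gain"] * terrain["friction"] - 1.0)

policy = {"gain": 1.0}
for _ in range(200):  # crude random search standing in for RL training
    candidate = {"gain": policy["gain"] + random.gauss(0, 0.1)}
    terrains = [make_terrain() for _ in range(8)]  # fresh randomized worlds
    if sum(episode_reward(candidate, t) for t in terrains) > \
       sum(episode_reward(policy, t) for t in terrains):
        policy = candidate

print(f"learned gain: {policy['gain']:.2f}")
```

Because every training batch draws new terrains, the surviving policy is one that works across the whole distribution of conditions, which is the intuition MIT describes: some simulated environments resemble the real world, and the controller learns to handle them too.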

The result is getting the Mini Cheetah to run at about the speed you or I could manage: about four meters per second, or nine miles per hour.
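A quick unit conversion confirms the figures line up:

```python
# Sanity-check the reported top speed: about four meters per second.
METERS_PER_MILE = 1609.344
speed_ms = 4.0                                  # m/s, as reported
speed_mph = speed_ms * 3600 / METERS_PER_MILE   # seconds/hour over meters/mile
print(f"{speed_ms} m/s is about {speed_mph:.1f} mph")
```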

It may not sound very fast, but it is a record for this particular robot. Which, after all, only stands about a foot high and weighs about 20 pounds. Scale up the size, MIT grad student Gabriel Margolis says, and you’ll scale up the speed. Meaning, if they want to beat Usain Bolt and not just granny in her wheelchair, they can.

(Note: Boston Dynamics had a robot run as fast as Usain Bolt way back in 2012. However, the robot was on a treadmill, it was externally powered, and it had a support system.)

But there’s another astonishing part too: this robot is blind.

“All of the behaviors that we’ve shown have been achieved, essentially, blindly,” Margolis says. “What the robot is doing is it’s feeling the environment through … it doesn’t have touch sensors, either, in its feet, but it’s just feeling the environment through the motion of its joints.”

In living creatures, that’s known as proprioception or kinesthesia: the ability to know where your limbs are and what they are doing without seeing them. (I had to use this sense once, years ago, when fording a mountain stream in early spring. My feet lost feeling in the cold water, and I had to drive them down into the stream bed to test where I could find solid footing.)

This gives the Cheetah a sense of what surface it’s on, whether snow or ice or concrete or grass or gravel, and it adapts its gait and speed of locomotion accordingly.
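The proprioceptive idea can be sketched very simply: compare where the joints were commanded to go with where they actually ended up. The threshold and joint values below are invented purely for illustration; the real controller learns this mapping rather than using a hand-set rule.

```python
# Toy sketch of proprioceptive terrain sensing: infer ground properties
# from joint tracking error alone, with no cameras or foot sensors.

def classify_surface(commanded, measured, threshold=0.05):
    """If joints deviate far from their commanded positions, the ground
    is probably soft or slippery; if they track closely, it's firm."""
    error = sum(abs(c - m) for c, m in zip(commanded, measured)) / len(commanded)
    return "soft/slippery" if error > threshold else "firm"

cmd = [0.10, 0.20, 0.30]  # commanded joint positions (radians, hypothetical)
print(classify_surface(cmd, [0.09, 0.21, 0.30]))  # close tracking
print(classify_surface(cmd, [0.02, 0.05, 0.42]))  # large deviations
```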

“We’re definitely interested in maybe adding more sensors, but all of these behaviors that we’ve shown were actually achieved without them,” Margolis says. “There are certainly some advantages to using vision, but there are a lot of drawbacks too. One is that it can be a lot slower to simulate, actually. So you might not be able to get this massive speed up in the same way.”

A valid question is: why build with legs at all?

After all, wheeled travel is much faster, more efficient, easier to learn or program, and requires far less balance. The answer lies in what you want robots to be able to do, eventually.

“They can go a lot of places that wheeled robots can’t,” Margolis says. “We can have emergency response vehicles that actually come into a home and save someone. We can have delivery services that bring something up your stairs onto your porch or even into your house.”

The good news for those who might want to benefit from MIT’s research into robots with legs is that it’s freely available. The code is available as an open source project, Agrawal says, so anyone can download it, play with it, replicate the results, and build on top of it.

“And the good thing is, we are really doing this research on a low-cost platform which you could kind of, even if you build your own quadruped, you can deploy our system on it because we don’t use any specialized or expensive equipment to do it,” Agrawal says. “So, in some ways, our research is very democratic.”

Subscribe to TechFirst to get a full transcript of our conversation.