
Marc Raibert: Boston Dynamics | MIT Artificial Intelligence (AI)


Chapters

0:00 Introduction
1:06 Slides
24:27 Demo
33:40 Q&A

Transcript

Welcome back to 6.S099, Artificial General Intelligence. Today, we have Marc Raibert. (audience applauding) He really doesn't need an introduction, but we'll give him one anyway. He's the founder and CEO of Boston Dynamics. He founded the CMU Leg Lab in 1980 and the MIT Leg Lab in 1986.

Boston Dynamics in 1992. He and his team have developed some of the most amazing robots ever built, including BigDog, Atlas, Handle, Spot, and Spot Mini. These robots move with an agility, dexterity, and even grace that rivals, and often surpasses, human movement. He continues to inspire us with what robots are capable of achieving in the real world and what physical form future intelligent systems may take as they become integrated into our daily lives.

So please give Marc a warm welcome. (audience applauding) - Thank you, thank you. This is our grand mission, our aspiration, which is to make robots that are equal to or greater than people and animals. And it's a daunting mission because we're so good at things. It seems effortless. I'm standing here, never questioning that I could stand here like this, but a lot's going on.

I can manipulate things. I can pick up this water or I could reach in my pocket and use my hands with all the sensors in my hands and coordinate that. And maybe most of all, our perception systems. You know, this audience has, what is it, 250 people in it or something?

And I can look out there and see every one of you stabilized in space even while I'm moving. It's just astounding. And robots aren't there yet. But I think they can be. And our goal is to keep chipping away to try and get there. Before I get started, I wanted to say that I got my start in robotics here at MIT.

I was a graduate student. I was in what was then called the psychology department, now the brain and cognitive science department. But I was taking an IAP course, just like you are, and it might have been exactly this time of year, when I followed my professor, Berthold Horn, back. I was jabbering away at him, asking him some questions about this or that, and we walked back to Tech Square, which is where the AI Lab was in those days.

And we went up to the ninth floor, and Russ Noftsker, who was a guy working in the lab, had an arm all taken apart on the table. It was like 1,000 pieces. And I was a roboticist from that day on. I didn't switch my major, but I got Berthold to be an advisor.

I found a topic that had to do with robotics. At that time, it was a manipulation thing, but it eventually became a legged thing. And it was amazing, and I've never looked back. So here are some animals doing things that are very exciting, climbing around on very rough terrain, very sure-footed.

Using a mixture of their proprioception and their vision. And look, there's even a baby that probably is only a couple of months old, has no trouble at all doing these things. And look at the grace and suppleness and the fearlessness of these animals. It's amazing. Here are animals running for their lives.

The gazelle is trying to stay alive for the next 10 minutes, and the cheetah's trying to get a meal so that it can stay alive in general. Sorry. And even people can do things that are breathtaking. I assume all of you were out this morning getting a little exercise, climbing up the green building that we're in now, and maybe the other places around here.

It's funny, I bumped into some people when I came into the room who were climbing the stairways. I think they were going on a trek up and down them. But I'd like to see them going outside. So probably most of you have seen this video. This is sort of where we were after about 10 years of work attempting to make machines that could work out in the real world that were dynamically stabilized.

Dynamics is a big deal for our company and for what we do. So some active sensing and control and understanding of the physics. This robot has all its control on board, and it has reflexes and sensors. And this is an extension to a 1,000-pound robot that could carry about 400 pounds of payload.

And we took it all around the United States testing it in various situations. Here we have it in Virginia doing some bushwhacking. It's actually following a person, but the person is only in and out of view intermittently, so it has to be able to keep track of where the person is and deal with that.

And then back in good old Boston, 10 inches of snow, it just marches right up the hill. Here's the Cheetah. Now, you know, MIT has its own cheetah. This is our Cheetah. People know Sangbae Kim, who's doing the MIT Cheetah. A very dynamic machine. And basically an experiment in seeing how fast we could make something like this run.

Although you notice it's on a parking lot, so it wasn't doing this on rough terrain. And getting both the efficiency and the speed in the context of a machine that also can do rough terrain is a really big challenge that remains with us. So this is just a snapshot of most of the robots we've built at Boston Dynamics over the years.

And I'm not gonna talk about most of them. I'm just gonna talk about the last four. These are all robots that we developed since we've been part of Google, which has been the last four years. Spot Mini, there's a Spot Mini on the floor here, which we'll demo a little later.

Spot, Atlas, the Humanoid. Some of you may remember the Humanoid that we used in the DARPA Robotics Challenge. You have one here. And then Handle, which is our latest version. So I'll have a few words to say about each of them. So we had been developing Big Dog and those other quadrupeds that I showed you for quite some number of years.

And it was amazing for us to find out when we did this project on Spot, this is the predecessor to that, that there was still a lot to learn. And we kind of revolutionized the hardware design and how the control worked and got a much higher level of rough terrain performance.

And part of the solution to that was to be able to decompose the control problem into many separate controllers that operated in different regions of state space. And that allowed us both to have programmers work on multiple solutions to the problem and also have the complexity of each controller simplified by only having to operate in a small part of the dynamic space.
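The decomposition described here, many simple controllers, each valid only in its own region of state space, can be pictured with a small dispatcher. This is a purely illustrative sketch: the controller names, state variables, gains, and region boundaries are all invented, not Boston Dynamics' actual control software.

```python
# Purely illustrative sketch of controller decomposition by state-space
# region. Controller names, state variables, gains, and region boundaries
# are all invented here; this is not Boston Dynamics' control software.

def stance_controller(state):
    # Valid while a foot is loaded: support weight and regulate body height.
    return {"leg_force": 9.81 * state["mass"] + 200.0 * (0.5 - state["height"])}

def flight_controller(state):
    # Valid while airborne: place the foot for the next touchdown.
    return {"foot_target": 0.5 * state["velocity"] * state["flight_time"]}

def recovery_controller(state):
    # Valid after a large disturbance: damp the motion before resuming.
    return {"leg_force": -50.0 * state["velocity"]}

def select_controller(state):
    # The dispatcher: each controller only has to be correct in its own
    # small region of state space, which keeps each one simple.
    if abs(state["velocity"]) > 3.0:
        return recovery_controller
    if state["foot_loaded"]:
        return stance_controller
    return flight_controller

state = {"mass": 30.0, "height": 0.45, "velocity": 0.8,
         "flight_time": 0.2, "foot_loaded": True}
command = select_controller(state)(state)
```

Because the regions partition the state space, different engineers can own different controllers and improve them independently, which is the parallel-development benefit described above.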

Here we've added a robot arm to the previous version of Spot. And we believe that mobile manipulation, that is manipulation when you can move the base, is really a powerful way of doing things. Now this is probably the most important thing I wanna show tonight. And I'll show it three different times.

The idea that we don't build controllers that just do one particular thing, but that they can determine where they are in the execution, here's another version of it, and then adjust what they're doing in order to compensate for disturbances in the real world. I know this class is about AI and probably autonomy.

I think that one of the most important ways of getting to autonomy is to have the low-level implementations very robust to disturbances so that the planning steps don't have to take care of all the minutia of the details of the real world. And that's what we've been trying to do there.

We've been experimenting with doing delivery of packages to people's houses. These are all employees of Boston Dynamics, so we didn't go crashing ordinary people's houses. And it turns out that there's just lots of different kinds of stairways and entranceways. And the robot's doing very well. We're up to handling something between 70 and 80% of the kinds of stairs and access places we encounter, after collecting data and making improvements and adjustments.

So I'm gonna say a few philosophical things or approach things. A lot of people think that this is the model of how a computer and a robot interact. That is, there's the robot, which is hardware and electronics and sensors. And then there's a computer. And that the computer listens to the sensors on the robot and then gives it instructions and tells it what to do.

And while I think that's actually going on, there's another part to the story, which is that the physical world is also giving instructions to the robot. And that means that the energies stored in the robot, either in its springs or in its motion, those are all important determinants of how the robot's gonna behave in the time coming forward.

And so we like to think in terms of designing the hardware of the robot, the physical world, and the computer all as one holistic thing where we take into account those interactions. Sometimes we call this a harmony. A harmonic system is one usually where you have energy oscillating back and forth.

Almost all legged locomotion has some amount of harmony going on between potential energy of elevation, potential energy of elastic deformation, kinetic energy of motion, inverted pendulum dynamics, and the like. Another part of our approach we call build it, break it, fix it. Now I have friends who build their robots and are so into the beauty of what they've created that they kind of put it on an altar and are afraid of actually hurting it.
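The harmonic exchange described here, energy oscillating between gravitational potential, elastic potential, and kinetic stores, shows up even in a toy one-dimensional mass-on-a-springy-leg model. All parameters below are arbitrary; this is a sketch of the physics, not a model of any Boston Dynamics robot.

```python
# Toy 1-D model of the "harmony" in legged locomotion: a point mass on a
# springy leg trades kinetic energy, spring energy, and gravitational
# energy. All parameters are arbitrary; this models the idea, not any robot.

m, k, g = 30.0, 4000.0, 9.81   # mass (kg), leg stiffness (N/m), gravity (m/s^2)
z, v = 0.0, -1.0               # spring deflection (m), velocity (m/s)
dt = 1e-4                      # integration step (s)

def total_energy(z, v):
    # kinetic + elastic potential + gravitational potential
    return 0.5 * m * v**2 + 0.5 * k * z**2 + m * g * z

e0 = total_energy(z, v)
for _ in range(2000):          # about 0.2 s of motion
    a = -(k / m) * z - g       # spring and gravity accelerations
    v += a * dt                # semi-implicit Euler, stable for oscillators
    z += v * dt

# The individual stores oscillate, but the total stays (numerically) fixed.
drift = abs(total_energy(z, v) - e0) / abs(e0)
```

The relative drift stays well under 1%: energy shuttles between the three stores while the sum is conserved, which is the oscillating exchange the talk calls harmony.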

So, and in fact, I even have friends here at MIT that have done that, where they have a gold-plated robot and they're afraid of taking it out into the world. I mean, we're just the opposite. Every one of our robots is designed to get bashed to bits. We have staff who are there to fix the robot on a daily basis as we break it.

And I think doing that, build it, break it, fix it, means that we're able to learn a lot from the actual physical robot working in the world. And we can use that knowledge in order to improve the robot, improve its behavior, and we really like to go around that loop as quickly as we can, early in the process, and do it as many times as we can.

So here's what build it, break it, fix it looks like. This is in Somerville. Our engineers, this is a Boston driver. (audience laughing) Now this robot's supposed to be using its visual system to avoid the trees. I think it might have fallen in love with this tree. We don't purposely give them any emotion, but boy, it's hard not to see that.

And here's the first time we tested the push response to this robot. And you-- - That's your new guy's car. - Did you hear that? That's the new guy's car. So some guy who just started that week, Trent, had $5,000, which we paid for, in repairs to his vintage BMW.

So the last thing sort of about philosophy is long-term versus short-term. Our company is 25 years old, and we've mostly been a long-term robotics company. That is, we're interested in moving the boundary forward in what robots can do, and we're interested in making it so robots meet the dream of being the equal or better than people and animals.

But now we've started-- (microphone thudding) Malfunction. (microphone thudding) Okay. We still on? Can you hear me? But lately, we've started to realize that some of our robots have enough capability that maybe it's time to try and productize them, and we will learn a lot by doing that, too. One of the things, for instance, that I've always claimed is that we always spent a lot of money on building our robots and used that as a competitive advantage.

That is, DARPA was a frequent funder of us. DARPA always said, "Let's take money out of the equation "and just figure out how to get the solution "and then worry about getting the cost down later." So I've always assumed and argued that once we get a robot doing things that are interesting, then you can go and redesign it to make it lower cost.

Well, we're gonna test that, because it might not be true. It might be that we've designed ourselves into an expensive corner and that it might be too late. But the robot that we'll show in a little bit is significantly cost-reduced from its prototype, and it'll be interesting to see whether we can get it down to the kind of prices that are useful.

So this is just a picture, again, of the idea of aiming long but also aiming short. And I think it's gonna be a challenge to see whether we can keep the culture of the company to support both of these directions, because people manufacturing stuff have a different mindset than people trying to get out to the future horizons, and it's gonna be a challenge to keep both those kinds of people happy.

Here's some of the things that, some of the kinds of applications you can look at based on modest technical capabilities. I've shown mobility and manipulation here, but you could put cost, reliability, there's many things that could be on these axes. You know, entertainment, like robots in theme parks is something that I think we should be able to do.

I already talked about home delivery. I think home delivery is waiting for self-driving cars to get all the way there, self-driving trucks, and once they do, then we will be working on getting it from the truck to the home. Logistics: there's about a trillion boxes moved every year around the world, and most of it's done by hand, so there's really a big opportunity in having robots help with moving those trillion boxes.

Security, which could mean either commercial security, like patrolling your shopping center, or the military type of security. Construction: a lot of people have been coming to us with their construction applications asking if we can help, and you know, I'm not gonna talk about it now, but if afterwards you wanna ask about that, I can fill you in a little more. And I think this is really the ultimate home run application: care for the elderly and the disabled.

I used to say that I wanted to have robots that would help me take care of my parents and older people, but I realize now that it's probably gonna be my children using them to help take care of me, but you guys, you're all a little bit younger, and I think there'll be a time when you could use robots to help make your parents' lives better.

Now, some of you may think that your parents don't want that, but I think it's a complex question. We've seen some surveys that say that, you know, people aren't totally happy with the idea of their kids taking care of them on a moment by moment basis, and I think there's gonna be an opportunity for doing something, but technically, this is still a ways off, it's a tough thing.

Okay, let's get back to the robots. Spot Mini is a robot that weighs about 60 pounds; the previous Spot weighed about 180 pounds. Here's some anatomy. It's got an arm with five degrees of freedom. Each leg has three degrees of freedom. It's got about a 500 watt-hour battery.

Batteries for these things are a challenge, because, you know, you can have consumer products like electric drills that have relatively small batteries, and then there's electric cars that have big batteries, and there's not really much available in between, so we've done a lot of work on the battery technology for these things to make them safe and reliable and hot swappable and things like that.

Then there's radios and computers. The previous version had three quad core i7s. This one has two. We're trying to cut back on the cost. And then there can be some sensors, lidars, stereo, and the like. (audience member speaking faintly) So you can see, Spot Mini's a little bit smaller than Spot.

This isn't a real house, and those aren't real people. Those are engineers. (audience laughing) This is inside of a warehouse we have out on 128 where we've built a house. You can see that we don't mind scuffing up the walls here, and there is a lot of scuffing that happens.

Some of you may recognize Zack Jackowski, who's an MIT alum, and he's, again, disturbing the robot. Here, the robot's using its vision to do some stepping-stone type operations, and I think Gene is gonna talk a little bit more about this in a couple of minutes. And here's a case where it's doing stepping stones on real stones, and it's keeping its balance, figuring out where to put the feet.

And again, this robot only has stereo looking out the front, whereas this one has stereo on all four sides. Now, one of the cool things about animals is that they have these stabilization mechanisms for their sensors. That was a real chicken. No robotics involved. And here's our attempt to show that this robot can do the same sort of thing.

And if you think about it, when you're manipulating, you really want the hand to be stabilized in space, and so you'd like the body to be able to kind of coordinate with the hand so that you can concentrate on what the real world task is. (audience laughing) Oh, man.

You guys didn't pick up the banana peels, huh? (audience laughing) So our concept for the Spot Mini product is to make a platform. We're thinking of it like the Android of robots. With Android, there's a hardware platform, and then there's a software platform, and then third-party developers create their own apps that use the platform.

So we've made this spot so that there's a place to mount hardware on the robot, but there's also an API to program it through, and then there's a facility to have additional computing external to the robot, and we're working with third parties to develop their own applications that run on the platform.
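As a sketch of what this "Android of robots" idea might look like to a third-party developer, here is a hypothetical client API. Every name below (RobotClient, register_payload, walk_to) is invented for illustration; this is not the actual Boston Dynamics API.

```python
# Hypothetical sketch of a third-party "app" on a robot platform. Every
# name here (RobotClient, register_payload, walk_to) is invented for
# illustration; it is not the actual Boston Dynamics API.

class RobotClient:
    """Stand-in for a platform SDK client connected to one robot."""

    def __init__(self, hostname):
        self.hostname = hostname
        self.payloads = []

    def register_payload(self, name):
        # Third-party hardware mounts on the robot's attachment points
        # and announces itself to the platform.
        self.payloads.append(name)

    def walk_to(self, x, y):
        # The platform owns gait selection, balance, and footstep
        # placement; the app only says where to go.
        return {"goal": (x, y), "status": "accepted"}

# A third-party inspection app built on the platform:
robot = RobotClient("spot-mini.local")
robot.register_payload("thermal-camera")
result = robot.walk_to(12.0, 3.5)
```

The division of labor mirrors Android: the platform supplies the hardware mounts, an API, and off-board compute, and the developer supplies only the application logic.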

This is a video that we haven't been able to release publicly. Please don't tape it and show it, because I'll explain later if you wanna know why not, but this is just revealing that we do have an arm on the new version of Spot. It's using a camera in the hand to find the door handle.

This robot doesn't weigh a lot, so it has to use tricks to keep the door open, so that's why it puts its foot in the door. (audience laughing) (audience applauding) And here again, we wanna show that we've made the solution robust to certain kinds of disturbances. So Andy there, Andy's sitting over here, is pushing on the door, pushing on the hand.

The robot keeps track of how much progress it's made in doing its task. (audience laughing) It's so smart, it even kicks that shell out of the way. No, that was a total accident. (audience laughing) And now it's just gone back to try again. Okay. And then this is a demo of autonomy.

Here the robot has, in a previous session, we've taken it around the lab, this is Boston Dynamics, taken it around the lab and recorded visual data that could be used for navigation. And it's using its stereo to match up features in the environment so that it can navigate and go where it had gone on the previous path.

So there's no one driving it for this; it's all autonomous. That was outside my office. Every day around noon, the robot seems to show up and I hear it pausing out there. I don't know why it turned there. Sometimes it comes up with a solution that isn't quite right, and here you'll see another one.

It comes up with a solution that isn't quite what you'd call optimal, but it does get a solution. So we're pretty excited by this. We call this Patrol Route, and we're working on developing a lot of software to support it, to make it so that other people can capture a patrol route and then execute it on a routine basis, and then do other tasks while they're on the patrol route.
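The record-and-replay navigation described here, record a route once with the visual features seen along it, then relocalize against them and follow it again, can be sketched as follows. The waypoints, string "features", and matching rule are all invented for illustration; real systems match image features geometrically rather than by name.

```python
# Illustrative sketch of record-and-replay navigation. During a teach run,
# waypoints are stored with the visual features seen there; on replay, the
# robot matches what it currently sees against the map to localize, then
# heads for the next waypoint. Waypoints and string "features" are invented
# for illustration; real systems match image features geometrically.

recorded_route = [
    {"pose": (0.0, 0.0), "features": {"doorframe", "exit-sign"}},
    {"pose": (5.0, 0.0), "features": {"exit-sign", "pillar"}},
    {"pose": (5.0, 4.0), "features": {"pillar", "window", "desk"}},
]

def localize(observed, route):
    # Best waypoint = largest overlap between recorded and observed features.
    return max(route, key=lambda wp: len(wp["features"] & observed))

def next_waypoint(observed, route):
    i = route.index(localize(observed, route))
    return route[min(i + 1, len(route) - 1)]["pose"]

# Seeing the pillar and the exit sign matches waypoint 1 best,
# so the robot heads for waypoint 2 at (5.0, 4.0).
goal = next_waypoint({"pillar", "exit-sign"}, recorded_route)
```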

Seth, you're on. So now we'll do a demo of Spot Mini. So for this demo, Seth's got a joystick and he's telling it the speed to go in the forward direction and turning, but the robot's doing all its own gait selection, coordination of legs, and balance, obviously. So the robot has a bunch of different gaits.

It can walk. Here it's doing one leg at a time. It can trot. I don't know, you do whatever gaits you want, Seth. He's got a gait selector. So here's trotting, which is diagonal pairs of legs. It can do pacing, which is lateral pairs of legs working together.

I have to tell you, in the earliest days of my involvement in legged locomotion, I thought gait was a big deal, but it's really kind of a small thing. And I don't think it's central to what matters, which is support, stability, propulsion, and things like that. I'm gonna wrap up shortly.
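The gaits from the demo differ mainly in the relative phase of the legs, which supports the point that gait itself is a small thing. A sketch, with leg names and phase offsets chosen for illustration:

```python
# The gaits from the demo differ mainly in the relative phase of the legs.
# Leg names (LF/RF/LH/RH) and phase offsets are chosen for illustration.

GAITS = {
    # walk: one leg at a time, a quarter stride apart
    "walk": {"LF": 0.0, "RH": 0.25, "RF": 0.5, "LH": 0.75},
    # trot: diagonal pairs of legs move together
    "trot": {"LF": 0.0, "RH": 0.0, "RF": 0.5, "LH": 0.5},
    # pace: lateral (same-side) pairs of legs move together
    "pace": {"LF": 0.0, "LH": 0.0, "RF": 0.5, "RH": 0.5},
}

def legs_in_swing(gait, t, duty=0.5):
    """Legs in swing at stride phase t (0..1).

    A leg swings once its local phase passes the stance duty fraction."""
    return sorted(leg for leg, offset in GAITS[gait].items()
                  if (t - offset) % 1.0 >= duty)

# At phase 0.6, a trot has its first diagonal pair (LF, RH) in swing.
swing = legs_in_swing("trot", 0.6)
```

A walk normally uses a larger stance duty factor (around 0.75), so only one leg swings at a time: with these offsets, `legs_in_swing("walk", 0.1, duty=0.75)` returns just `["RH"]`.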

I just thought I'd say a couple of words about the mechanical side. Atlas is a new version of a humanoid. I know some of you worked with the DARPA Robotics Challenge humanoid, which was a big hulking thing that we made, and this is a much more svelte one. And the way we got there was to work on the elements of the mechanical design to take advantage of 3D printing and some optimization.

And we focused on two or three different things. One is making some of the leg parts where we embed hydraulic pathways, hydraulic actuators, places for valve mounts and filters and things like that into the leg. And this is what that looks like. There's a single upper leg part that incorporated about 15 or 20 different separate components in the previous design, which made it lighter, more compact, and higher strength to weight ratio.

We also developed a hydraulic power unit, which takes many components. The things on the left are the separate components. And we were able to print parts that integrated them so that there was a motor, a pump inside of a motor, an accumulator, a reservoir, valves, filters, and those things.

And we shrunk it down so that the robot could be smaller and lighter. And using that approach, we went from about a 375 pound DRC robot to a 190 pound robot, and then the current one is about 165 pounds. Now this picture might lead you to believe that I'm advertising myself as only weighing 165 pounds.

And unfortunately that's not true, but I'm working on it. (audience laughing) But it is close to my size and weight. And I don't know, I don't think we have this out as a video. Here's some robot behavior that uses whole body motion, meaning the mobility base plus the arms plus the torso are all combining in order to handle these boxes.

It's using vision with the QR codes to simplify the task. Here we're trying to go at human speeds of operation, and so the robot searches for a box using its vision. (audience laughing) I think that was the only take we ever got with both robots working together. And one of the problems with YouTube is everybody's already seen what you've been up to by the time you go around to give a talk, so I imagine most of you have seen this.

But here's a parkour robot we're working on where we've actually strengthened the hips so that it can do a little bit more jumping and-- (audience laughing) And it's kind of interesting that we've been interested in making a robot a little bit like the humanoid that has fewer degrees of freedom and is simpler, and we designed this robot, and the ultimate version of this will have about 10 joints, whereas the humanoid had 28, and have many of the same capabilities.

We have some use cases for this that I'm not gonna talk about today, but this robot can lift heavy loads. It has a relatively small footprint given what its strength is. So the way things are done in logistics now is to use big robot arms that take up a lot of floor area or are heavy, and we're looking at ways of using a robot like this one.

Not exactly this one; it's sort of an evolution of this design, in order to do logistics operations. So I wanna make a pitch to you. Boston Dynamics is hiring, and I hope some of you will apply for a job there. These are, how many is it, six times three, these are 18 MIT alumni that currently work at the company, many of them for many years. So I'm sort of making the point that these people are happy there, just like you could be, and I hope you'll look at our website and see what we're looking for and consider it.

So I'm just gonna wrap up by talking about, you know, I used to be a professor here and at Carnegie Mellon, and when I was a professor, we mostly wrote papers, and we were excited by how many papers we could write and how many people cited them in their papers. But as a company guy, instead of papers, I think we count YouTube hits, and instead of citations, well, here, I wanna tell you what this is, but most of you probably know.

(audience laughing) (audience laughing) So now we count spoofs instead of citations, and I'm happy to say that we're doing great. We have about two dozen BigDog spoofs. Here's four of them, and the upper left is in Akihabara, Japan. The upper right is a Los Angeles online television show.

It's the Netherlands on the lower left, and I guess that's Appalachia on the right. The poor kid doesn't even have a friend to be in his movie. (audience laughing) Well, what about Atlas? (audience laughing) Can you hear that? I love you, box. (audience laughing) Goodnight, box. (audience laughing) Box.

(audience laughing) Hello, box. (audience laughing) Do it, do it, I love you. No. (audience laughing) Here's another one. (audience laughing) (dog barking) (audience laughing) All right, where do you want to, mother? (audience laughing) (audience applauding) So we have a big crew working on all these projects. You've gotten to meet a couple of them here, but it's really quite a team and an absolute pleasure to work with.

So anyway, thank you. (audience laughing) (audience applauding) - Thanks for the presentation, it was amazing. What sort of physics simulation, if any, do you have in your robots? And do you really think that with the current trend of neural networks, we can just do end-to-end modeling of these robots without any sort of notion of physics, but just neural networks?

- So we have simulators that we've worked on for a long time, very detailed, in some cases validated. Validated means comparing the behavior of the simulator to ground-truth physics. And I think they're important for our work and we use them frequently, but the end-to-end idea doesn't ring quite true.

Usually when we use simulation, the user is knowledgeable about the trade-offs between doing a physical experiment and doing a simulated experiment. And they're usually getting at some specific setup question rather than the idea that you start at one end. At least in our experience, trying to simulate all the subtleties of the hydraulic actuator, backlash in gears, flexibility, the non-rigidity in the components, that's a big undertaking and usually so distracting that you can't really get on with what you're doing.

So I think we use experiment for those subtleties and we use simulation for bigger level dynamics questions. - Hey, would you say mechanical concerns or computational capability is more of a difficulty in terms of determining how quickly you can perform tasks with the robots? - You know, we like to say that they're equally important.

Although we didn't start out this way, we now have equal strength in our groups in mechanical design and implementation, and in software, controls, and sensing. And I think they all matter. I think if you try to get by with just marginally designed hardware, you don't get much experimental time in, because the thing's broken all the time.

So even though we are rough on our machines, they mostly keep working because we put a lot of attention to detail in how they're designed. But there's still, I think perception is still a tall pole in the tent. Certainly if you want to rival human perception, I don't think we're anywhere near there.

I think the self-driving car stuff is helping. There's a lot of interesting things happening there. I think specialized hardware is getting there, ASICs and things that could help. But it's all still needed. - So you guys have developed various components that all kind of come together to build one robot.

Have you seen applications for any of these separate components elsewhere? So organic design, for example, for the Atlas, maybe prosthetics or hip replacements or something like that because there seems to be a lot of development going on individually as well as in the big picture. - I mean, you're asking a very good question.

The question, in case people couldn't hear, is: aside from the value to the whole robot of the components we're making, are the components useful some other way? And the place where we think it's probably most true is the specialized hydraulic components we've made, the servo valves and the HPU.

I'm sure we could sell them into other industries. As a company focus question, though, that's really what it comes to. Do we really wanna be doing that? Will that absorb too much time and attention and personnel? Our heart is really in building future generations of robots.

So I think we're gonna probably stay there. - Thanks. I was wondering, have you done any research in regards to getting the robots to perform tasks involving direct physical contact with humans? - Nope. The only thing we've done is we've done teleoperation, which is not what you mean, where we have a human moving and the robot copying, which is very interesting because you can see that that's a way of showing how fast the robot can be and how coordinated it can be using a human for part of the computing.

But we don't have them interacting with people. I guess the closest is we once did a thing where a person and a robot picked up a stretcher and worked together to pick up the stretcher, but they weren't touching each other. They were going through the stretcher material. Do we have plans?

We're really, to be honest, we're really struggling with coming up with some strong concepts for safety even without doing that. People's first reaction to a robot, and how you make a robot safe if there's a problem, don't really work very well together. You can't freeze the robot.

You have to keep them going, find a way to get into a safer state. So I think having them in contact with people is just gonna be harder. Eventually we want them to help, you know, to carry, to lift the elderly and things like that, but we're not there yet.

- My question's about the relative rates of progress in robotics and machine intelligence. So an economist might maybe measure it by seeing how much money is going into computing hardware versus arms and legs, sensors and actuators, that kind of thing. So in one possible scenario, the machine intelligence rushes ahead and the robots are progressing more slowly because of kind of slow build test cycle, basically.

It's the real world things. It's not so easy to get a rapid build test cycle with a robot. And in the other scenario, the robots are more advanced than the machine intelligence 'cause machine intelligence is just such a conceptually difficult problem. So in one scenario, the machines are telling the humans what to do.

In the other scenario, the humans are telling the machines what to do, if you like. So do you have any kind of perspective on that whole issue of the machine intelligence folk gonna rush ahead, being robots, guys struggling behind, or the robots gonna get there before the massive problem of machine intelligence gets solved?

Or maybe somewhere in the middle. - I think, let's see, I don't know exactly what you mean by machine intelligence. Are you talking about having Google do better search? - So computation in general. So at the start, I talked about economists measuring sensors, actuators, and compute hardware. So that's the kind of split I'm thinking about.

Yeah, I think it's always been a misconception that the hardware components by themselves constitute progress in intelligence or in robot behavior. I think they're important ingredients, but not sufficient by themselves. You know, when I was a graduate student here, I can remember reading an ad for an optical character recognition system.

And what the ad said was, you know, we have camera, we have a thing for holding the paper you're looking at, all you have to do is write the software. So it was all done except for you had to write the software. And you know, the whole problem was there.

So I don't know if I'm answering your question. You know, robotics is hard. I think it feels like we're making progress. If you keep pushing, we keep making progress. It's not like there's a knee in the curve that we've hit. But I also think that the rest of the AI world is making good progress too, and it's fun being a part of it.

- Hi. My question is mostly related to security. Since you are productizing your robots now, there has been research, mainly on lidars, showing you can spoof a lidar so that the sensor basically cannot see anything. So are you looking into that as well? Taking into consideration that these awesome robots you're building could be working in, let's say, defense as well.

So those are really harsh environments. - Yeah, I mean, these are very hard problems. If an intelligent adversary wants to trick the robot, it's not all that hard these days. You know, we're probably working the other end of the problem, trying to do the basics right now.

I don't think, you know, I don't think robots are gonna be as autonomous in a hostile environment as people either think or fear because of how frail they'll still be until we get further along. - Hi there. - Hey. - I wanted to ask about two things that are going to probably play a big role in adoption.

The first is price. So if you could speak to the current unit price of a Spot Mini and how you think that's going to evolve over time. And the second is sort of consumer psychology. I felt like when I saw, at the end, what the robots were wearing, my level of comfort with it being in my house suddenly shot up.

It seemed way more human. So I was just thinking about what kinds of experiments you guys have run, what you've thought about with respect to making people more comfortable with robots working around them. - Yeah, in terms of cost, you know, we're not saying what this thing costs yet, but we will later in the year.

We have reduced the cost of this by about a factor of 10 from what the first prototypes cost. So we're making progress. In terms of the psychology of robots, it's been very interesting to watch. You know, we got branded sort of as robot abusers because we kicked our robot.

Really what we were doing was trying to show how good they were at balancing. And we didn't think we were abusing them. I have video of me pushing on my daughter when she was one year old and actually knocking her over, but that wasn't my goal. I wanted to kind of test out her balance.

(audience laughing) I bet you, if you guys have kids or are around them at all, you've done stuff like that. So, but we have adjusted a little bit. And so we don't usually push on the robots in our videos, despite the one we showed with Andy hockey-sticking the hand on this thing.

That's why we had the banana peels, as a way to have the robot crash without our fingerprints being on it. You know, I guess the other data point I have is that if you look at the likes and dislikes on our YouTube videos, we found a way to get the likes-to-dislikes ratio much higher, probably partly by not looking like we're abusing the robots.

There's probably a long way to go to make these things really friendly. And I have to admit there's a little spirit at our company of being kind of, you know, it's fun being bad boys in terms of, you know, just make the robot do cool stuff and leave the emotions to others.

And certainly the social robots have so much going into making them cute, I don't know. I'm sure we'll have marketing people working on that. I don't know what else to say. (audience laughing) - Hi, I have a general question. In terms of research purposes or practical purposes, what are the reasons that we choose to investigate humanoid robots?

It seems like they cannot run as fast as the Cheetah and also cannot carry as much stuff as the Big Dog. Yeah. - You're basically saying that the humanoids don't seem to be as practical in terms of functionality? - Right, so is it more efficient, like, are the humanoid robots more efficient than the Cheetahs and the Big Dogs?

- Well, you know, so I don't have a good answer. The motivation for the DRC, the DARPA Robotics Challenge, which used humanoid robots, was that they wanted robots that could go to the places designed for humans. And so that's why they used the human form.

And I think, you know, there's an argument there. It is true that the human form has a lot of complexity to it because you have very complicated legs in the biped and they're supporting the weight of the body and the arms, whereas the quadrupeds can spread all that out.

So I'm sympathetic to your question. I don't really have an answer. I can tell you that the public's reaction to a humanoid robot is off the scale compared to anything we've done with quadruped robots for what that's worth. So we always get a lot of viewership if we show a humanoid doing something.

But I think it's a question that we will keep addressing. We are gonna keep pushing on getting the humanoid to do more and more human-like things, even though we probably won't commercialize them as soon as we commercialize the other stuff. - How do you specify goals? And although you said earlier that it's expensive to do simulations and stuff, do you have any intentions of doing any deep reinforcement learning?

- What was the last thing you said? - Do you have any intentions of doing deep reinforcement learning? - I'll do the last one first. I'm sure we will use learning before too long. I'm not sure whether it'll be deep reinforcement learning or something else, but mostly we're interested in optimizing the complicated state space partitioning we do.

Right now, people make very simple decisions as to how to divide up the space, and we think that these things could probably be really improved if we used a learning approach. So that's probably the first place we'll apply it. We do a little bit of learning here and there, but not much compared to how much learning is talked about out there.
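The idea of replacing hand-picked state-space partitions with learned ones can be sketched roughly like this. Everything here is an illustrative assumption, not Boston Dynamics code: the state variables, thresholds, and data are invented, and a real system would learn multi-dimensional partitions rather than a single boundary.

```python
# Hypothetical sketch: a hand-tuned partition of the robot's state
# space into behaviors, versus learning one decision boundary from
# logged trial data. All names and numbers are illustrative.

def hand_tuned_controller(pitch, speed):
    """Hand-picked thresholds dividing the state space into behaviors."""
    if abs(pitch) > 0.5:
        return "recover"   # large body tilt: run the recovery controller
    if speed > 1.0:
        return "run"
    return "walk"

def learn_threshold(samples):
    """Fit one 1-D partition boundary from (value, label) pairs by
    minimizing misclassifications over candidate thresholds."""
    candidates = sorted(x for x, _ in samples)
    best_t, best_err = candidates[0], len(samples)
    for t in candidates:
        err = sum((x > t) != label for x, label in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Synthetic training data: (speed, should_run) pairs, as if logged
# from trials where "run" worked better above some true speed.
data = [(0.2, False), (0.6, False), (0.9, False),
        (1.3, True), (1.7, True), (2.0, True)]
learned_t = learn_threshold(data)  # replaces the hand-picked 1.0
```

A learned boundary like `learned_t` would then take the place of the hard-coded `speed > 1.0` test, which is the sense in which learning can tune a partition people currently pick by hand.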

What was the other question? (man speaking off mic) How do we specify a goal? You mean to the robot, or how do we decide as a company? So I don't think there's any across the board answer. We write applications, for instance, for each of these uses. So for instance, where we were doing the patrol route, we have an application that has a UI that lets the user tell it the information it needs.

It can tell it to go ahead and start on the patrol, and things like that. For the door, I think there's a button on the controller. We can show you afterwards if you want. And you walk the robot up to the door, where you're steering it, and then you press the button, and then it starts looking for the door handle, and it goes through the whole, you know, it goes through the door.
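The door workflow described above, where the operator steers the robot up, presses a button, and the robot then finds the handle and goes through on its own, has the shape of a simple state machine. A minimal sketch, with every state name and event being an assumption rather than the actual controller:

```python
# Hypothetical sketch of the door behavior as a state machine:
# teleop -> find_handle -> grasp -> open -> traverse -> done.
# States, events, and transitions are all illustrative assumptions.

def step(state, event):
    """Advance the behavior one step given an event from the UI or perception."""
    transitions = {
        ("teleop", "button_pressed"): "find_handle",
        ("find_handle", "handle_detected"): "grasp",
        ("grasp", "grasp_secure"): "open",
        ("open", "door_open"): "traverse",
        ("traverse", "through_door"): "done",
    }
    # Irrelevant events leave the state unchanged.
    return transitions.get((state, event), state)

# Walk through the nominal sequence the transcript describes.
state = "teleop"
for event in ["button_pressed", "handle_detected", "grasp_secure",
              "door_open", "through_door"]:
    state = step(state, event)
```

The point of the structure is the one Raibert makes next: the button, the UI, or a higher-level API are all just different sources of the `button_pressed` event; the machinery underneath stays the same.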

(man speaking off mic) But I don't think these answers are fundamental. I think you could do it lots of different ways. You know, we're working on all the machinery coming up from the bottom to be able to do these things. And then, you know, in some case, you could have it be buttons on a UI.

It could be an API that's accessed through some higher level AI. And we just aren't sweating that part of it at this point. - Hi, so aside from locomotion, I can use my body for like, you know, nonverbal communication to communicate my intentions and other such things, even though I'm not always aware of it.

And I guess I'm wondering if this is something that you've considered for these robots. - I think the closest we've come is having the robot go like this after the flip, which was a way of communicating. We really haven't done anything along those lines. I'll bet you, though, that people writing code can interpret a lot of the subtleties of what's, you know, what's working and what isn't by looking at things like that.

But the robot isn't trying to communicate that way. - I have two questions. How do you make the robots really fast? - How do we make them fast? - No, my question is, how did you make them fast? - I mean, like, the time, how? - We get a lot of people who are really smart and good at working together with each other at our lab, and then they make plans, and everybody tries to stay on the plan, and then, you know, pull it together.

Sometimes it doesn't go as fast as we'd like, especially if we have to buy parts from someone else and they're slow. That happens a lot. No, honestly. Is that what you mean? So we don't make them that fast. You know, we're pretty fast, you know, usually four or five months to build a new robot, something like that.

But mostly it's getting people to work together. What's the other question? - The other question is, why do the people push the robots? (audience laughing) - Why do they push? Why do they push? The robots are always balancing themselves, and so we wanna show that they can balance by showing that when you knock them, they still, they don't fall over, they stay up on their feet.

So we're kind of showing off. (audience laughing) Are you building anything? Why not? - I don't know. - You should. - It's way off. - Why? - Way off. - Where? In the basement? - I'm not good at building. - No, yes you are. You might think you're not.

- Well, in some games they are. - You oughta give it a try. And you're the right age to get started. - I'm six and a half. - Perfect. (audience laughing) - All right, with that, I think, please give Mark a big hand. Thank you very much. - Thank you.

(audience applauding)