Rodney Brooks: Robotics | Lex Fridman Podcast #217
Chapters
0:00 Introduction
1:31 First robots
22:56 Brains and computers
55:45 Self-driving cars
75:55 Believing in the impossible
86:45 Predictions
97:47 iRobot
125:09 Sharing an office with AI experts
137:19 Advice for young people
141:05 Meaning of life
00:00:00.000 |
The following is a conversation with Rodney Brooks, 00:00:08.000 |
and artificial intelligence laboratory at MIT, 00:00:13.160 |
which is one of the most successful robotics companies ever. 00:00:19.120 |
that created some amazing collaborative robots 00:00:27.360 |
whose mission is to teach robots common sense, 00:00:35.240 |
please check out our sponsors in the description. 00:00:43.160 |
in my now over two decade journey in robotics, 00:00:52.160 |
and two, he's not afraid to state controversial opinions 00:01:05.640 |
and he's fully supportive of such disagreement. 00:01:08.640 |
Nobody ever built anything great by being fully agreeable. 00:01:13.600 |
There's always respect and love behind our interactions, 00:01:26.240 |
and here is my conversation with Rodney Brooks. 00:01:35.320 |
that you've ever had the chance to work with? 00:01:39.320 |
which was made by one of my grad students, Aaron Edsinger. 00:02:00.000 |
He and Jeff Weber, who is a mechanical engineer 00:02:24.280 |
but everything else, series elastic actuators. 00:02:30.200 |
All the motors are inside, and it's just gorgeous. 00:02:35.880 |
- Oh, yeah, the eyeballs are actuated with cameras, 00:02:42.920 |
and looking in their face, and talking with them. 00:03:01.920 |
like literally the amount of actuators, the-- 00:03:06.680 |
he anodizes different parts, different colors, 00:03:17.920 |
- When you make a robot, it's making a promise 00:03:23.280 |
so I always encourage my students not to overpromise. 00:03:27.760 |
- Even with its essence, like the thing it presents, 00:03:31.960 |
- Yeah, so the joke I make, which I think you'll get, 00:03:39.560 |
So the only thing in Domo's face is the eyeballs, 00:03:49.480 |
So there is no, it's not like one of those Japanese robots 00:03:59.000 |
- But see, the thing is, us humans and dogs too, 00:04:02.680 |
don't just use eyes as attentional mechanisms. 00:04:10.200 |
Like a dog can look at you, look at another thing, 00:04:14.520 |
that we're going to be looking at that thing together. 00:04:17.640 |
You know, on both Baxter and Sawyer at Rethink Robotics, 00:04:26.200 |
so it wasn't actually where the cameras were pointing, 00:04:33.200 |
so people in the factory nearby were not surprised 00:04:36.120 |
by its motions, 'cause it gave that intent away. 00:04:45.880 |
When did you first fall in love with robotics? 00:04:53.360 |
- I've got these, I was born in the end of 1954, 00:04:59.680 |
And I have these two books that are dated 1961, 00:05:04.680 |
so I'm guessing my mother found them in a store in '62 or '63, 00:05:13.920 |
and How and Why Wonder Book of Giant Brains and Robots. 00:05:33.280 |
and then I tried to build them for the rest of my childhood. 00:05:40.460 |
- This was when the two books, I've still got them at home. 00:05:45.840 |
- No, they were, some of the robots that they had were arms, 00:05:50.840 |
you know, big arms to move nuclear material around, 00:06:03.980 |
but they were what people were thinking about for robots. 00:06:15.640 |
I realized my limitation on building mechanical stuff. 00:06:23.280 |
out of different technologies as I got older. 00:06:26.600 |
I built a learning system which was chemical-based, 00:06:33.140 |
and I had this ice cube tray, each well was a cell, 00:06:37.940 |
and by applying voltage to the two electrodes, 00:06:43.200 |
So over time, it would learn a simple network, 00:06:50.120 |
And that was, mostly things were driven by my budget, 00:06:58.520 |
I mean, an ice cube tray was about my budget at that stage. 00:07:04.720 |
and then I could build gates and flip-flops and stuff. 00:07:07.640 |
- So one of your first robots was an ice cube tray? 00:07:13.080 |
And it was very cerebral, 'cause it went to add. 00:07:23.040 |
Alan Turing wrote a paper that formulated the Turing test, 00:07:42.080 |
because I believe you're a machine, and I'm a machine, 00:07:49.880 |
is sort of a little ludicrous, what does think mean, 00:07:59.160 |
but do we have a clue how to build such machines? 00:08:11.060 |
Maybe we're just not smart enough to build stuff like us. 00:08:15.260 |
- The kind of computer that Alan Turing was thinking about, 00:08:18.340 |
do you think there is something fundamentally 00:08:22.020 |
or significantly different between a computer 00:08:24.520 |
between our ears, the biological computer that humans use, 00:08:43.120 |
titled, the working title is "Not Even Wrong." 00:08:47.160 |
And if I may, I'll tell you a bit about that book. 00:08:59.080 |
Goes all the way back to some manuscripts in Latin 00:09:09.680 |
And then Turing's 1936 paper is what we think of 00:09:28.880 |
one of Hilbert's later set of three problems. 00:09:32.760 |
He called it an effective way of getting answers. 00:09:37.760 |
And Hilbert really worked with rewriting rules, 00:09:47.540 |
a month earlier than Turing, disproved Hilbert's 00:09:54.600 |
The other two had already been disproved by Gödel. 00:09:59.220 |
'cause it's always easier to disprove these things 00:10:04.260 |
And so he needed, and it really came from his professor, 00:10:13.420 |
who turned it into, is there a mechanical process? 00:10:27.960 |
They were called computers, the people at the time. 00:10:30.960 |
And they followed a set of rules where they had paper, 00:10:35.540 |
and based on the numbers, they'd keep writing other numbers. 00:10:39.160 |
And they would produce numbers for these tables, 00:10:43.040 |
engineering tables, that the more iterations they did, 00:10:49.880 |
And so Turing, in that paper, set out to define 00:10:54.880 |
what sort of machine could do that, mechanical machine, 00:10:59.440 |
where it could produce an arbitrary number of digits 00:11:05.740 |
And he came up with a very simple set of constraints 00:11:23.420 |
that as a person could do with pencil and paper, 00:11:35.600 |
was not able to do something that Hilbert hypothesized. 00:11:39.920 |
But he had to show that this system was good enough 00:11:58.240 |
And then if you look over the next, from 1936, 00:12:20.280 |
which is a beautiful, beautiful mathematical book, 00:12:33.640 |
that relatively cheap stuff can do this stuff. 00:12:40.280 |
And Donald Knuth, in his first volume of his, 00:12:45.000 |
you know, "Art of Computer Programming" in around 1968, 00:12:54.520 |
that a person could do each step without too much trouble. 00:13:06.440 |
whether Fermat's Last Theorem was true or not, 00:13:10.360 |
And that's too much trouble for a person to do as a step. 00:13:16.760 |
sort of said a similar thing later that year. 00:13:19.640 |
And by 1975, in the Aho, Hopcroft, and Ullman book, 00:13:27.280 |
but intuition says this is sort of about right, 00:13:37.360 |
which happens to be really easy to implement in silicon. 00:13:47.720 |
The version we have of computation, incredibly powerful. 00:13:52.920 |
So what we're talking about is there's an infinite tape 00:13:55.760 |
with some simple rules of how to write on that tape, 00:13:58.640 |
and that's what we're kind of thinking about. 00:14:01.480 |
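As a rough aside for readers, here is a minimal sketch of the tape-plus-rules model being described; the particular machine and its transition table are made up for illustration (a unary incrementer), not Turing's own example.

```python
# A minimal sketch of the "infinite tape plus simple rules" model discussed above.
# The transition table is illustrative (a unary incrementer), not Turing's example.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine. `rules` maps (state, symbol) to
    (new_state, symbol_to_write, head_move), with head_move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))          # sparse tape: unwritten cells read as blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Illustrative rule set: append one '1' to a string of 1s (unary increment).
rules = {
    ("start", "1"): ("start", "1", +1),    # scan right over the 1s
    ("start", "_"): ("halt",  "1",  0),    # write a 1 at the first blank, then halt
}
print(run_turing_machine("111", rules))    # -> "1111"
```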
- Yeah, and it's modeled after humans, how humans do stuff. 00:14:04.480 |
And I think it's, Turing says in the '36 paper, 00:14:10.100 |
is that a human has a limited amount of memory. 00:14:26.400 |
It was, this is what we're gonna call computation. 00:14:33.520 |
which has completely changed our technological world. 00:14:37.960 |
Second part of the book, or argument in the book, 00:14:53.020 |
Left column is intelligence, right column is life. 00:15:00.280 |
there's artificial intelligence and there's artificial life. 00:15:03.600 |
In the top row, there's neuroscience and abiogenesis. 00:15:10.080 |
how does non-living matter become living matter? 00:15:14.120 |
These four disciplines all came into the current form 00:15:40.560 |
except for artificial intelligence and abiogenesis. 00:15:49.280 |
were at the research lab for electronics at MIT, 00:15:57.100 |
And in fact, McCulloch, Pitts, Lettvin, and Maturana 00:16:02.880 |
wrote the first paper on functional neuroscience 00:16:06.480 |
called "What the Frog's Eye Tells the Frog's Brain," 00:16:08.880 |
where instead of it just being this bunch of nerves, 00:16:12.360 |
they sort of showed what different anatomical components 00:16:16.680 |
were doing and telling other anatomical components 00:16:23.960 |
- Would you put them as basically the fathers 00:16:28.880 |
of what are now called artificial neural networks? 00:16:38.940 |
in 1943, had written a paper inspired by Bertrand Russell 00:16:43.940 |
on a calculus for the ideas immanent in neural systems, 00:16:49.780 |
where they had tried to, without any real proof, 00:16:55.460 |
they had tried to give a formalism for neurons, 00:17:03.860 |
with no real evidence that that was what was going on, 00:17:09.260 |
and that was picked up by Minsky for his 1953 dissertation, 00:17:13.620 |
which was a neural network, we would call it today. 00:17:22.500 |
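For concreteness, here is a minimal sketch of the kind of threshold unit McCulloch and Pitts formalized; the weights and thresholds below are illustrative choices, not taken from the 1943 paper.

```python
# A sketch of a McCulloch-Pitts-style unit: it "fires" (outputs 1) when the
# weighted sum of its binary inputs reaches a threshold. Values are illustrative.

def mcp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable weights and thresholds, such units behave like logic gates,
# which is why the 1943 formalism read neural activity as a kind of logic.
AND = lambda a, b: mcp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcp_unit([a],    [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```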
when he was designing the EDVAC computer in 1945. 00:17:26.580 |
He talked about its components being neurons, 00:17:32.500 |
and one of them is the McCulloch-Pitts paper. 00:17:47.700 |
turned to computation as their primary metaphor. 00:17:50.960 |
So I've got a couple of chapters in the book. 00:18:19.660 |
"Well, maybe it's not computation that goes on in the head." 00:18:34.260 |
But we've got this idea, if you wanna build an AI system, 00:19:00.380 |
By the way, when you say computation in our conversation, 00:19:15.940 |
But computation in the way Turing thinks about it 00:19:29.140 |
There are places, and there can be stuff in places, 00:19:37.740 |
And it's this metaphor of place and container, 00:19:53.140 |
And when we get outside of our metaphor range, 00:20:02.740 |
It can do stuff that our raw reasoning can't do, 00:20:05.860 |
and we've got conventions of when you can use it or not. 00:20:14.100 |
we always try to get physical metaphors for things, 00:20:25.860 |
And I say, no, it's some weird mathematical logic 00:20:28.260 |
that's different from those, but we want that metaphor. 00:20:31.820 |
Well, I suspect that 100 years or 200 years from now, 00:20:46.440 |
'cause it just wasn't an adequate explanatory metaphor. 00:20:57.980 |
But as it turns out, the burning was outside the matter. 00:21:03.060 |
and combined with our limited cognitive capabilities, 00:21:18.480 |
Computation is sort of a particular thing we use. 00:21:33.460 |
and there are some stationary points on the drum's surface, 00:21:36.660 |
you know, 'cause the waves are going up and down 00:21:39.140 |
Now, you could compute them to arbitrary precision, 00:21:50.780 |
What was the very first computer program ever written 00:21:56.320 |
and Bernoulli numbers are exactly what you need 00:21:58.220 |
to find those stable points in the drum's surface. 00:22:02.280 |
- Anyway, and there was a bug in the program. 00:22:04.340 |
The arguments to divide were reversed in one place. 00:22:22.680 |
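For readers curious what that first program computed, here is a minimal modern sketch of the Bernoulli-number recurrence; it is illustrative only, not Lovelace's actual sequence of Analytical Engine operations.

```python
# A sketch of the quantity that first program targeted: Bernoulli numbers,
# here via the standard recurrence, computed with exact fractions.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n (convention with B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-acc / (m + 1))   # per the anecdote above, the historical bug
                                   # was reversed arguments to a divide step
    return B

print(bernoulli(6))  # -> [1, -1/2, 1/6, 0, -1/30, 0, 1/42]
```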
a thing that's become dominant as a metaphor, 00:22:28.480 |
All three of these four fields adopted computation, 00:22:33.400 |
and you know, a lot of it swirls around Warren McCulloch 00:22:37.720 |
and all his students, and he funded a lot of people. 00:22:41.480 |
And our human metaphors, our limitations to human thinking 00:22:52.880 |
So, I have a little to say about computation. 00:23:09.600 |
that appears to have consciousness and intelligence. 00:23:17.040 |
- And maybe it's not just the meat in your head, 00:23:20.900 |
I mean, you actually have a neural system in your gut. 00:23:27.920 |
but we're now dancing around things we don't know, 00:23:30.920 |
but I tend to believe other humans are important. 00:23:35.620 |
Like, so, we're almost like, I just don't think 00:23:40.620 |
we would ever have achieved the level of intelligence 00:23:44.760 |
I'm not saying so confidently, but I have an intuition 00:23:47.800 |
that some of the intelligence is in the interaction. 00:23:51.320 |
- Yeah, and I think it seems to be very likely, 00:23:55.800 |
again, this is speculation, but we, our species, 00:24:03.920 |
because you can find old bones where they seem 00:24:20.520 |
and then we get these tools that become shared tools, 00:24:23.160 |
and so there's a whole coupling that would not occur 00:24:30.440 |
which was fed all of literature or something. 00:24:33.100 |
- Yeah, the neural network can't step outside of itself, 00:24:38.520 |
but is there some, can we explore this dark room 00:24:47.720 |
What is the magic, where does the magic come from 00:24:53.440 |
What's your sense, as scientists that try to understand it 00:25:20.840 |
Is it through physics and chemistry and biology, 00:25:38.640 |
and who's, animal intelligence, dog intelligence, 00:25:44.240 |
octopus intelligence, which is a very different 00:25:48.740 |
All the intelligences we know perceive the world 00:25:55.040 |
in some way and then have action in the world, 00:26:00.320 |
but they're able to perceive objects in a way 00:26:05.680 |
which is actually pretty damn phenomenal and surprising. 00:26:19.520 |
which is a sound box, I think, is a blue box, 00:26:30.960 |
It's not, the blueness is not a direct function 00:26:44.160 |
maybe seen the examples where someone turns a stop sign 00:26:58.840 |
why is it thinking it's the other sort of sign? 00:27:00.080 |
Because redness is not intrinsic in just the photons. 00:27:02.920 |
It's actually a construction of an understanding 00:27:05.280 |
of the whole world and the relationship between objects 00:27:10.160 |
But our tendency, in order that we get an archive paper 00:27:14.480 |
really quickly is you just show a lot of data 00:27:16.760 |
and give the labels and hope it figures it out. 00:27:19.040 |
But it's not figuring it out in the same way we do. 00:27:21.160 |
We have a very complex perceptual understanding 00:27:25.160 |
Dogs have a very different perceptual understanding 00:27:28.200 |
They go smell a post, they can tell how many, 00:27:36.440 |
There's all sorts of stuff that we just don't perceive 00:27:42.520 |
It's not seeing the registration between us and the object. 00:28:02.200 |
We've always talked in AI about the symbol grounding problem, 00:28:05.160 |
how our symbols that we talk about are grounded in the world. 00:28:18.080 |
which is different from the grounding problem. 00:28:39.560 |
- Yeah, we shared an office when I was working 00:28:50.280 |
- So do you still kind of, maybe you can elaborate, 00:28:59.520 |
Can you make sense of why we humans have this poor intuition 00:29:11.720 |
- If you go back to the original teams working on AI 00:29:27.960 |
Was a bunch of really smart kids who got into MIT 00:29:36.360 |
playing chess, doing integrals, that was hard stuff. 00:29:41.360 |
But a baby could see stuff, that wasn't intelligent. 00:29:45.800 |
Anyone could do that, that's not intelligence. 00:29:48.200 |
And so there was this intuition that the hard stuff 00:29:54.920 |
and the easy stuff was the stuff that everyone could do. 00:30:01.920 |
- Yeah, I mean, I don't know how much truth there is to, 00:30:05.800 |
like chess, for example, was for the longest time 00:30:09.440 |
seen as the highest level of intellect, right? 00:30:14.440 |
- Until we got computers that were better at it than people. 00:30:17.320 |
And then we realized, if you go back to the '90s, 00:30:22.160 |
around when Kasparov was beaten by Deep Blue. 00:30:28.480 |
Computers are gonna be able to do anything from now on. 00:30:30.760 |
And we saw exactly the same stories with AlphaZero, 00:30:36.280 |
- Yeah, but still to me, reasoning is a special thing 00:30:42.120 |
- No, actually, we're really bad at reasoning. 00:30:49.760 |
don't you think the ability to construct metaphor 00:30:57.760 |
and registering that something constant in our brains. 00:31:01.200 |
- Like, isn't that what we're doing with vision too? 00:31:09.960 |
But I think we jumped between what we're capable of 00:31:16.080 |
- Right, there was a little confusion that went on. 00:31:27.320 |
so I'm trying to pull apart this Moravec's paradox. 00:32:02.040 |
just a couple of hundred thousand years probably, 00:32:08.640 |
All that stuff was built on top of those earlier things 00:32:14.480 |
- So if you then look at the engineering of these things, 00:32:20.600 |
what's the hardest part of robotics, do you think? 00:32:34.240 |
the actual sort of the biomechanics of movement. 00:32:42.400 |
Like, what do you think is the hardest part of robotics? 00:32:55.760 |
If only we had all the location of all the points in 3D, 00:33:16.640 |
one category of problems in robotics instantly, 00:33:54.240 |
and that game theory of how they work well together? 00:33:56.920 |
- Well, let's talk about manipulation for a second, 00:34:19.880 |
And he'd never been able to get to a window before, 00:34:25.160 |
And he goes up to this window with a handle on it 00:34:32.520 |
and the other hand turning the handle to open the window. 00:34:40.120 |
two different things he knew how to put together. 00:34:54.520 |
a mechanism he'd never seen. - How did he do that? 00:34:59.880 |
- It's like, okay, like you could see the leap of genius 00:35:06.480 |
to combining, to doing, I mean, first of all, 00:35:19.360 |
But he inferred somehow a handle opened something. 00:35:27.120 |
slightly different failure cases that you didn't see. 00:35:33.040 |
but with other objects of turning and twisting and handles. 00:35:39.040 |
reinforcement learning will just give the robot, 00:35:45.440 |
or you'll give the robot plenty of time to try everything. 00:36:04.920 |
there's more security, and then you get to DeepMind, 00:36:12.280 |
a conference room with some of the people, 00:36:14.320 |
and they tell me about their reinforcement learning 00:36:24.080 |
And they're my robots, they're Sawyers, we sold them. 00:36:30.160 |
'cause Sawyers are compliant and can sense forces, 00:36:33.120 |
so they don't break when they're bashing into walls, 00:36:42.720 |
- By the way, Sawyer, we're talking about robot manipulation, 00:37:02.680 |
to try and use reinforcement learning to learn how to act. 00:37:13.880 |
this idea that you just let reinforcement learning 00:37:29.720 |
He didn't randomly, you know, and he was 18 months old, 00:37:32.600 |
he didn't randomly try to touch every surface 00:37:35.840 |
He found, he could see where the mechanism was 00:37:51.480 |
which cut down the search space dramatically. 00:38:01.080 |
is you have something like reinforcement learning 00:38:03.520 |
going on in the mind, in the space of imagination. 00:38:09.280 |
and you may be running those tens of thousands 00:38:20.400 |
but maybe there's a lot of computation going on. 00:38:24.520 |
- Whatever it is, but there's also a mechanism 00:38:41.840 |
so you don't think that's akin to a neural network 00:38:55.360 |
I'll be incredibly surprised if that happens. 00:39:00.360 |
I'll also be incredibly surprised that, you know, 00:39:03.760 |
after all the decades that I've been doing this, 00:39:11.680 |
You know, four or five years ago, I was saying, 00:39:16.640 |
"Oh, you don't understand how powerful AI is." 00:39:44.600 |
I think there's a lot of surprising and beautiful things 00:39:52.760 |
this new generation of deep learning revolution has revealed 00:40:04.040 |
we've solved intelligence, that's another big leap. 00:40:07.400 |
But is there something surprising and beautiful to you 00:40:10.720 |
about neural networks that where actually you sat back 00:40:28.520 |
- That doesn't mean that they're solving everything 00:40:31.200 |
in computer vision we need to solve or in vision for robots. 00:40:35.720 |
- What about AlphaZero and self-play mechanisms 00:40:40.000 |
- Yeah, that was all in Donald Michie's 1961 paper. 00:40:49.440 |
So now you're talking about the actual techniques, 00:40:54.200 |
The level it's able to achieve with no human supervision 00:40:59.960 |
To me, there's a big, big difference between Deep Blue and- 00:41:16.880 |
- Yeah, I mean, I came across this 1946 report that, 00:41:21.880 |
and I'd seen this as a kid in one of those books 00:41:31.560 |
1946 report, which pitted someone with an abacus 00:41:44.800 |
well, humans are still better than machines at calculating. 00:41:49.240 |
Are you surprised today that a machine can, you know, 00:41:52.440 |
do a billion floating point operations a second, 00:42:05.720 |
There's something, to me, different about learning. 00:42:20.720 |
- Yeah, I mean, there's so many different forms of learning. 00:42:31.160 |
All of those are, you know, very different capabilities. 00:42:41.800 |
people would write a paper about learning something. 00:42:44.480 |
Now, the corporate press office puts out a press release 00:42:51.480 |
is leading the world because they have a system that can. 00:43:00.120 |
When I refer to learning as many things, but- 00:43:13.840 |
- Well, it becomes less dumb at the thing that it's doing. 00:43:21.400 |
- It gets better performance under some measure, 00:43:29.720 |
learning systems fail when you change the conditions 00:43:35.960 |
just a little bit in a way that humans don't. 00:43:38.040 |
So I was at DeepMind, the AlphaGo had just come out, 00:43:53.800 |
well, would actually be able to play that game. 00:44:07.800 |
So they're slowly expanding this generalization. 00:44:15.480 |
that even in a constrained game of chess or Go, 00:44:20.000 |
that through self-play, by a system playing itself, 00:44:23.240 |
that it can achieve superhuman level performance 00:44:32.280 |
- It's so fundamentally different in search of-- 00:44:40.040 |
There, in the second part of it, which came a year later, 00:44:44.240 |
they had self-play on an electronic computer at tic-tac-toe. 00:44:49.920 |
but it learned to play tic-tac-toe through self-play. 00:45:02.880 |
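As a rough illustration of what learning a game purely through self-play can look like in modern terms, here is a minimal sketch of tabular self-play value learning for tic-tac-toe; the update rule and parameters are illustrative, not Michie's mechanism and not AlphaZero.

```python
# A sketch of learning tic-tac-toe purely by playing against itself:
# keep a value estimate per position, nudge it toward each game's final result.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

values = {}  # position string -> value from the perspective of the player who just moved

def value(board):
    return values.setdefault("".join(board), 0.5)

def choose_move(board, player, epsilon):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < epsilon:          # occasional exploration
        return random.choice(moves)
    def after(m):                          # estimated value of the resulting position
        nxt = board[:]; nxt[m] = player
        return 1.0 if winner(nxt) == player else value(nxt)
    return max(moves, key=after)

def self_play_game(epsilon=0.1, lr=0.2):
    board, player, history = [" "] * 9, "X", []
    while winner(board) is None:
        board[choose_move(board, player, epsilon)] = player
        history.append(("".join(board), player))
        player = "O" if player == "X" else "X"
    result = winner(board)
    for state, mover in history:           # credit every position the mover created
        target = 0.5 if result == "draw" else (1.0 if result == mover else 0.0)
        old = values.get(state, 0.5)
        values[state] = old + lr * (target - old)
    return result

random.seed(0)
for _ in range(20000):
    self_play_game()
# With enough self-play games, greedy play from this table tends toward draws,
# which is the best either side can force in tic-tac-toe.
```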
but only when they actually realize the promise. 00:45:10.240 |
what Bezos and Elon Musk are doing with rockets. 00:45:15.200 |
but doing reusable, cheap rockets, it's very impressive. 00:45:28.520 |
the game of Go was seen to be impossible to solve. 00:45:35.440 |
maybe it'd be possible to maybe have big leaps 00:45:52.320 |
learn your way to beat the best people in the world 00:46:01.280 |
- Okay, so using a different learning technique. 00:46:08.640 |
and he was the first person to use machine learning, 00:46:16.200 |
Now, so, and that at the time was considered amazing. 00:46:19.920 |
By the way, Arthur Samuel had some fantastic advantages. 00:46:23.640 |
Do you want to hear Arthur Samuel's advantages? 00:46:32.360 |
He was at Stanford when I was graduating there. 00:46:51.320 |
outlined the learning mechanism that Arthur Samuel used. 00:47:00.680 |
But Arthur Samuel had been a vacuum tube engineer, 00:47:11.960 |
And in those days, before you shipped a computer, 00:47:14.600 |
you ran it for a week to see, to get early failures. 00:47:30.080 |
So he ran his checkers learning program with self-play 00:47:43.160 |
And then he was able to produce a checkers playing program. 00:47:54.560 |
I don't just mean it's nice to have that accomplishment. 00:48:08.840 |
- Okay, well, let me then, doesn't mean I'm wrong. 00:48:14.680 |
- So the question is, if we keep taking steps like that, 00:48:19.320 |
Are we going to build a better recommender systems? 00:48:36.120 |
Well, in these games, they're all 100% information games. 00:48:43.920 |
is a very short description of the current state, 00:48:55.720 |
I'm definitely not saying that chess is somehow harder 00:49:05.120 |
even any kind of robotics in the physical world, 00:49:07.760 |
I definitely think is way harder than the game of chess. 00:49:10.920 |
So I was always much more impressed by like the workings 00:49:18.920 |
I wanted to be a psychiatrist for the longest time. 00:49:23.320 |
I think the game of chess is, I love the Olympics. 00:49:26.840 |
It's just another example of us humans picking a task 00:49:34.000 |
And that's the cool thing that the human mind 00:49:40.920 |
and achieve like weirdly incredible levels of performance. 00:49:44.600 |
That's the aspect of chess that's super cool. 00:49:46.880 |
Not that chess in itself is really difficult. 00:49:53.680 |
The fact that thousands of people have been struggling 00:49:56.600 |
to solve that particular problem is fascinating. 00:50:01.680 |
- Which actually is closer to what you're saying. 00:50:10.480 |
- Ice cube tray was one, but I built other machines. 00:50:40.480 |
where you take turns in taking matches from a pile 00:50:47.080 |
or the one who doesn't take the last one wins, I forget. 00:50:49.560 |
And so it was pretty easy to build that out of wires 00:50:52.040 |
and nails and little coils that were like plugging 00:50:58.320 |
The one I was prouder of, I was 12 when I built 00:51:02.320 |
a thing out of old telephone switchboard switches 00:51:11.520 |
And that was a much harder circuit to design. 00:51:14.600 |
But again, it was just, it was no active components. 00:51:17.720 |
It was just three-position switches, empty, X, or O, 00:51:22.080 |
and nine of them and a light bulb on which move 00:51:28.360 |
it wanted next and then the human would go and move that. 00:51:44.840 |
I think we can have deep connections with robots, 00:51:50.040 |
- Well, we'll come back to connections with robots. 00:51:52.880 |
- But I do wanna say, I don't, I think too many people 00:51:57.200 |
make the mistake of seeing that magic and thinking, 00:52:02.920 |
But each one of those is a hard fought battle 00:52:09.800 |
and this is why I'm playing devil's advocate, 00:52:11.440 |
but I often do when I read your blog post in my mind 00:52:17.560 |
is it's not clear to me, so I don't do what, obviously, 00:52:24.120 |
but it's not obvious to me how many steps away we are 00:52:32.160 |
of what it means to build intelligent systems, 00:52:49.880 |
And that optimism, just like reading old optimism, 00:52:56.880 |
they were saying things are trivial for decades, 00:53:03.120 |
But I think my mind is working crisply enough to where, 00:53:10.520 |
I'm really surprised by the things DeepMind has done. 00:53:13.760 |
I don't think they're yet close to solving intelligence, 00:53:26.160 |
when the engineering, it takes that idea to scale, 00:53:35.680 |
- Okay, honestly, Rodney, if it was you, me, and Demis 00:53:39.240 |
inside a room, forget the press, forget all those things, 00:53:49.520 |
okay, let's pick one that's the most surprising to you. 00:54:00.960 |
AlphaZero, AlphaGo, AlphaGo Zero, AlphaZero, 00:54:11.240 |
of forget usefulness or application and so on, 00:54:17.040 |
like as a scientist, was those surprising to you 00:54:24.400 |
- Okay, so if we're gonna make the distinction 00:54:33.960 |
I would say AlphaFold, and one of the problems 00:54:56.320 |
- Yeah, it's a surprising number of them right. 00:55:00.440 |
So that was a surprise how many it gets right. 00:55:05.520 |
because you don't know which ones are right or not, 00:55:11.400 |
which is trying to get a useful tool out of it, 00:55:15.800 |
- In that sense, at least AlphaFold is different, 00:55:24.200 |
that are actually potentially revolutionizing 00:55:28.760 |
Like they will actually help a lot of people. 00:55:45.960 |
Speaking of robots that operate in the real world, 00:55:54.680 |
- Okay, because you have built robotics companies. 00:55:58.560 |
You're one of the greatest roboticists in history, 00:56:06.640 |
but in the actual building and execution of businesses 00:56:13.880 |
and that actually work in the real world and make money. 00:56:16.720 |
You also sometimes are critical of Mr. Elon Musk, 00:56:27.920 |
What are your thoughts about Tesla autopilot, 00:56:31.560 |
or more generally vision-based machine learning approach 00:57:17.360 |
So now other things, I'm impressed by what it can do. 00:57:21.840 |
I'm really impressed with many aspects of it. 00:57:41.680 |
- Yeah, when I'm going in close to something in the park, 00:57:45.960 |
I get this beautiful, gorgeous, top-down view of the world. 00:58:01.080 |
- 360 view, you know, synthesized so it's above the car. 00:58:08.160 |
It's the longest I've ever owned a car without dinging it. 00:58:16.040 |
So I'm not saying technology's bad or not useful, 00:58:31.480 |
- Okay, so maybe you've seen me ask this question before, 00:58:35.880 |
but when did the first car go over 55 miles an hour 00:58:50.920 |
with other traffic around driving completely autonomously? 00:59:12.400 |
When do you think, and Elon has said he's gonna do this, 00:59:18.160 |
drive coast to coast in the US, hands off the wheel, 00:59:21.920 |
hands off the wheel, feet off the pedals, coast to coast? 00:59:26.160 |
- As far as I know, a few people have claimed to do it. 00:59:32.440 |
yeah, they didn't claim, did they claim 100%? 00:59:44.560 |
what I see happening again is someone sees a demo 00:59:49.200 |
and they overgeneralize and say, we must be almost there. 00:59:55.120 |
- So that's demos, but this is gonna take us back 01:00:04.200 |
because I thought, okay, when I first started interacting 01:00:07.360 |
with the Mobileye implementation of Tesla Autopilot, 01:00:13.680 |
I've been in Google self-driving cars since the beginning. 01:00:16.800 |
I thought there was no way, before I sat and used Mobileye, 01:00:22.640 |
I thought there, just knowing computer vision, 01:00:24.760 |
I thought there's no way it could work as well as it was 01:00:27.120 |
working, so my model of the limits of computer vision 01:00:31.920 |
was way more limited than the actual implementation 01:00:35.960 |
of Mobileye, so that's one example, I was really surprised. 01:00:42.480 |
The second surprise came when Tesla threw away Mobileye 01:00:49.720 |
I thought there's no way they can catch up to Mobileye. 01:00:53.520 |
I thought what Mobileye was doing was kind of incredible, 01:00:57.640 |
- Yeah, well, Mobileye was started by Amnon Shashua 01:01:11.200 |
just good, like what you do to make a successful product, 01:01:17.000 |
and so I was very surprised when they, from scratch, 01:01:21.560 |
That's very impressive, and I've talked to a lot of 01:01:23.360 |
engineers that was involved, that was impressive. 01:01:31.000 |
well, with the involvement of Andrej Karpathy, 01:01:33.280 |
what they were, what they're doing with the data engine, 01:01:40.480 |
into these multiple tasks, and then doing this 01:01:42.920 |
edge case discovery when they're pulling back, 01:01:52.000 |
I don't know to that intensity, but I always thought 01:01:55.240 |
it was very difficult to solve autonomous driving 01:01:57.720 |
with all the sensors, with all of the computation. 01:02:00.400 |
I just thought it was a very difficult problem. 01:02:10.120 |
'cause I thought, you know, just because I worked 01:02:20.360 |
to where I didn't think they could do this at scale, 01:02:28.320 |
I started to think, okay, so what are the limits of this? 01:02:35.440 |
driver, like sensing and the interaction with the driver 01:02:53.680 |
especially a vision-based alone, how far that can take us. 01:03:01.160 |
Now, can you explain in the same way you said, 01:03:12.840 |
Here's actual people using an actual car and driving, 01:03:16.800 |
many of them drive more than half their miles 01:03:20.960 |
- So, yeah, they're doing well with pure vision. 01:03:36.120 |
that have a dynamic range closer to the human eye, 01:03:38.280 |
'cause human eye has incredible dynamic range. 01:03:53.560 |
where the sun on a white thing and the blinds, 01:03:58.480 |
I think there's a bunch of things to think about 01:04:04.480 |
before you say, this is so good, it's just going to work. 01:04:20.840 |
You've been writing a lot of great blog posts 01:04:23.160 |
about it for a while before Tesla had autopilot, right? 01:04:27.360 |
So you've been thinking about autonomous driving 01:04:36.680 |
from motor vehicle accidents is about 35,000 a year, 01:04:55.400 |
well, if we cut down the number of deaths by 10% 01:04:58.440 |
by having autonomous driving, that's going to be great. 01:05:02.720 |
And my prediction is that if autonomous vehicles 01:05:54.960 |
so there wouldn't be deaths from pedestrians getting hit. 01:05:58.400 |
We completely changed the structure of our cities 01:06:01.000 |
and had these foul smelling things everywhere around us. 01:06:06.000 |
And now you see pushback in cities like Barcelona 01:06:24.040 |
It's not going to be just take the current situation, 01:06:30.520 |
doing the same stuff because the end cases too many. 01:06:44.600 |
- I mean, do you count them as fully autonomous? 01:07:04.600 |
two that go about five kilometers out of airports. 01:07:08.280 |
- When is the first fully autonomous train system 01:07:15.160 |
for mass transit expected to operate fully autonomously 01:07:23.920 |
- It's expected to operate in 2017 in Honolulu. 01:07:32.160 |
But by the way, it was originally gonna be autonomous 01:07:35.880 |
- I mean, they're all very close to fully autonomous, right? 01:07:38.960 |
- Yeah, but getting the closest to the thing. 01:07:41.680 |
And I've often gone on a fully autonomous train in Japan, 01:07:55.680 |
What do you see when you go to a fully autonomous train 01:08:05.660 |
At every station, there's a double set of doors. 01:08:23.880 |
The whole track is built so that people can't climb onto it. 01:08:28.040 |
So there's an engineering that then makes the system safe 01:08:33.680 |
I think we'll see similar sorts of things happen in the US. 01:08:42.440 |
that we would have special purpose lanes on 101 01:08:51.620 |
so that it would be normal for Teslas or other cars 01:08:59.660 |
"Okay, now it's autonomous," and have that dedicated lane. 01:09:10.100 |
And it may be because Tesla's been over-promising 01:09:13.300 |
by saying this, calling their system fully self-driving. 01:09:19.540 |
by collaborating to change the infrastructure. 01:09:42.580 |
What sort of infrastructure do you need for that? 01:09:45.860 |
Do you need to have the human in there to do that? 01:10:04.840 |
- So I'm with you still, and I was with you for a long time, 01:10:21.500 |
I think there's something fundamentally different 01:10:26.060 |
with vision-based methods and Tesla Autopilot 01:10:29.260 |
and any company that's trying to do the same. 01:10:50.640 |
because that's what happened with every other technology. 01:10:57.740 |
which relies on government to help out in these cases. 01:11:02.740 |
If you just look at infrastructure in all domains, 01:11:08.060 |
it's just government always drags behind on infrastructure. 01:11:19.780 |
that are actually much worse on infrastructure. 01:11:28.940 |
- I guess my question is, which is at the core 01:11:32.580 |
of what I was trying to think through here and ask you, 01:11:35.540 |
is how hard is the driving problem as it currently stands? 01:11:55.580 |
Like I used to think it's, with vision alone, 01:12:07.260 |
but I do notice that on Highway 280 here in the Bay Area, 01:12:27.940 |
would not work without those black boundaries. 01:12:31.220 |
- So I don't know whether they've started doing it 01:12:43.580 |
without that change in the way they paint the line. 01:13:07.900 |
like it doesn't make sense to revamp the infrastructure 01:13:14.860 |
It does make sense to revamp the infrastructure. 01:13:17.820 |
- If you have a large fleet of autonomous vehicles, 01:13:24.420 |
I mean, but for that, you need autonomous vehicles. 01:13:31.540 |
I've gotten a bunch of chances to ride in a Waymo 01:13:38.540 |
I don't know if you'd call them self-driving, but. 01:13:41.140 |
- Well, I mean, I rode in one before that called Waymo. 01:13:48.700 |
another surprising leap I didn't think would happen, 01:13:56.220 |
And I think they're thinking of doing that in Austin as well. 01:14:00.780 |
- Although, and I do an annual checkup on this. 01:14:07.860 |
they were aiming for hundreds of rides a week, not thousands. 01:14:17.620 |
but there's certainly safety people in the loop. 01:14:27.820 |
- It wasn't, obviously they're not 100% transparent 01:14:45.780 |
- And that sort of fits with YouTube videos I've seen 01:14:50.140 |
of people being trapped in the car by a red cone 01:15:14.740 |
at least certainly the human side of autonomous vehicles 01:15:17.380 |
for many years, and you've been doing it for way longer. 01:15:28.980 |
- The argument I have is that people make interpolations 01:15:36.300 |
You know, it's just, you know, we've solved it. 01:15:57.500 |
in your criticisms of what is perceived as hype. 01:16:06.700 |
And so like, 'cause people extrapolate, like you said, 01:16:20.180 |
But let me ask you maybe a difficult question. 01:16:25.020 |
- Do you think, if you look at history of progress, 01:16:28.660 |
don't you think to achieve the quote, "impossible," 01:16:58.060 |
we went from barely flying, I don't know what it was, 01:17:01.660 |
50 feet, the length of the first flight or something, 01:17:12.780 |
but one of them didn't believe it's even possible 01:17:17.220 |
So like not just possible soon, but like ever. 01:17:22.660 |
- How important is it to believe and be optimistic 01:17:59.300 |
'cause it's 10 years since I stepped off the board, 01:18:04.260 |
robots cleaning houses from that one company. 01:18:09.220 |
- Was that a crazy idea that we had to believe 01:18:23.860 |
So iRobot, one of the greatest robotics companies ever 01:18:29.260 |
creating a robot that actually works in the real world 01:18:31.820 |
is probably the greatest robotics company ever. 01:18:55.580 |
That's what I'm referring to with the harshness. 01:18:58.700 |
You've accomplished an incredible thing there. 01:19:04.140 |
Like that's what I'm trying to get at that line. 01:19:08.020 |
my harshness is reserved for people who are not doing it, 01:19:14.660 |
well, this shows that it's just gonna happen. 01:19:30.380 |
You know, I think SpaceX is an amazing company. 01:19:34.860 |
On the other hand, you know, in one of my blog posts, 01:19:54.900 |
the DC-X reused those rockets that landed vertically. 01:19:58.420 |
There was a whole insurance industry in place 01:20:09.580 |
It took a great entrepreneur, a great personal expense. 01:20:25.980 |
that's never been thought about, never been demonstrated. 01:20:28.380 |
So my estimation is Hyperloop is a long, long, 01:20:39.740 |
between when the technology's coming along and ready, 01:20:44.740 |
and then he'll go off and mouth off about other things, 01:20:48.420 |
which then people go and compete about and try and do. 01:20:51.500 |
- This is where I understand what you're saying. 01:21:03.460 |
towards people who are not telling the truth, 01:21:06.020 |
who are basically fabricating stuff to make money 01:21:19.060 |
you have to believe even when most people tell you 01:21:27.060 |
I mean, that's the same thing I have with Tesla Autopilot. 01:21:37.380 |
in the robotics community were very negative towards Elon. 01:21:40.380 |
It was very interesting for me to observe colleagues at MIT. 01:21:48.540 |
because I understood where that's coming from. 01:21:53.300 |
And I kind of almost felt the same thing in the beginning 01:21:58.220 |
and realized there's a lot of interesting ideas here 01:22:05.900 |
that you shouldn't call a system full self-driving 01:22:10.900 |
when it's obviously not autonomous, fully autonomous, 01:22:19.020 |
but at the same time, there are people who buy it, 01:22:22.260 |
literally pay money for it and take those words as given. 01:22:27.140 |
- But I haven't, so take words as given is one thing. 01:22:33.580 |
I haven't actually seen people that use autopilot 01:22:36.540 |
that believe that the behavior is really important, 01:22:43.460 |
on the very thing that you're frustrated about, 01:22:51.340 |
In the same way, I think there's a lot of hype 01:22:58.460 |
People use the way, this opened my eyes actually, 01:23:03.220 |
the way people use a product is very different 01:23:09.620 |
Everybody has dreams of how a particular product 01:23:16.020 |
there's a lot of fear of robotics, for example, 01:23:20.980 |
But when you actually have robots in your life, 01:23:28.820 |
Your perceptions of it are gonna be way different. 01:23:30.980 |
And so my just tension was, was like, here's an innovator, 01:23:45.740 |
We should like be excited by those innovations. 01:24:08.940 |
It's the taking and not necessarily the people 01:24:25.540 |
because there's been a particular focus for me 01:24:29.420 |
Elon's prediction of when certain milestones will be hit. 01:24:42.700 |
as a person that kind of not inside the system, 01:25:04.380 |
drives people to do the best work of their life, 01:25:07.460 |
even when the odds of those deadlines are very low. 01:25:11.180 |
- To a point, and I'm not talking about Elon here. 01:25:16.780 |
because you overextend and it's demoralizing. 01:25:21.580 |
But I will say that there's an additional thing here, 01:25:27.060 |
that those words also drive the stock market. 01:25:32.060 |
And we have, because of the way that rich people 01:25:38.100 |
in the past have manipulated the rubes through investment, 01:25:42.860 |
we have developed laws about what you're allowed to say 01:25:57.580 |
but I tend to believe that engineers, innovators, 01:26:18.140 |
because I think most people that run companies 01:26:21.460 |
and build, especially original founders, they... 01:26:31.060 |
you fall into that kind of a behavior pattern. 01:26:37.260 |
- I wasn't saying it's falling into that intent. 01:26:39.700 |
It's just, you also have to protect investors 01:26:47.420 |
you have an amazing blog that people should check out, 01:27:06.700 |
I was gonna check back on how my predictions had... 01:27:14.300 |
'cause I thought that'll be January 1st, 2050. 01:27:33.340 |
- Yeah, I didn't say I was gonna make new predictions. 01:27:35.020 |
I was just gonna measure this set of predictions 01:27:42.580 |
So I said, "I should hold myself to a high standard." 01:27:53.060 |
And so the topics are artificial intelligence, 01:27:58.380 |
I was wondering if we could just go through some 01:28:08.300 |
some predictions that you're particularly proud of 01:28:17.260 |
the other element here is how widespread the location 01:28:22.260 |
where the deployment of the autonomous vehicles is. 01:28:33.820 |
that there would be a dedicated self-driving lane 01:28:39.620 |
and I think I was over-optimistic on that one. 01:28:42.500 |
- Yeah, actually, yeah, I actually do remember that. 01:28:49.580 |
- So Cambridge, Massachusetts, I think was an example. 01:28:54.180 |
I lived in Cambridge Port for a number of years, 01:29:02.260 |
is incredibly frustrating when you start to put, 01:29:04.940 |
and people drive the wrong way on one-way streets there. 01:29:07.860 |
- So your prediction was driverless taxi services 01:29:13.500 |
operating on all streets in Cambridge Port, Massachusetts, 01:29:21.100 |
- Yeah, and that may have been too optimistic. 01:29:26.140 |
- You know, I've gotten a little more pessimistic 01:29:28.420 |
since I made these internally on some of these things. 01:30:05.140 |
The major problem is delivery trucks stopping everywhere, 01:30:26.940 |
and the drivers will figure out how to get in there. 01:31:00.700 |
Although I have noticed that sometimes the driver 01:31:10.580 |
because they will sit there waiting for a long time. 01:31:22.180 |
and so you can see whether they've got their hands 01:31:31.460 |
- So what year do you think, what's your intuition? 01:31:36.740 |
San Francisco would be autonomous taxi service 01:31:41.500 |
from any point A to any point B without a driver? 01:31:46.220 |
Are you still, are you thinking 10 years from now, 01:31:57.020 |
If you're allowed to go South of market, way longer. 01:32:10.340 |
Is it the edge cases, the computer perception? 01:32:13.500 |
- Well, here's a case that I saw outside my house 01:32:16.580 |
a few weeks ago, about 8 p.m. on a Friday night. 01:32:27.780 |
turned right and stopped dead covering the crosswalk. 01:32:35.260 |
'Cause there was a human just two feet from it. 01:32:39.500 |
Now I just glanced, I knew what was happening. 01:32:41.860 |
The human was a woman, was at the door of her car 01:32:55.660 |
As a human, I could tell, no, she's not gonna jump out. 01:32:57.940 |
She's busy trying to unlock her, she's lost her keys. 01:33:07.140 |
- And so the human driver in there did not take over. 01:33:20.780 |
And now the crosswalk's blocked by this cruise vehicle. 01:33:28.140 |
Cleverly, I think he decided not to go in front of the car. 01:33:35.100 |
He had to get off the crosswalk, out into the intersection 01:33:38.220 |
to push his baby around this car, which was stopped there. 01:33:51.580 |
that safety is being compromised for individuals 01:34:14.820 |
with the discussion of autonomous vehicles in general. 01:34:24.740 |
- Your question is when is it gonna happen in San Francisco? 01:34:26.940 |
I say not soon, but it's gonna be one of them. 01:34:29.180 |
But where it is gonna happen is in limited domains, 01:34:33.660 |
campuses of various sorts, gated communities, 01:34:39.860 |
where the other drivers are not arbitrary people. 01:34:50.740 |
And at velocities where it's always safe to stop dead. 01:35:00.740 |
And they may not be shaped like current cars. 01:35:06.300 |
They may be things like May Mobility has those things 01:35:12.780 |
- Yeah, I wonder if that's a compelling experience. 01:35:17.220 |
It's about creating a product that makes your, 01:35:23.620 |
One of the least fun things is for a car that stops 01:35:29.660 |
There's something deeply frustrating for us humans, 01:35:32.860 |
for the rest of the world to take advantage of us 01:35:35.620 |
- But think about, you know, not you as the customer, 01:35:40.620 |
but someone who's in their 80s in a retirement village 01:35:47.700 |
whose kids have said, "You are not driving anymore." 01:35:52.020 |
And this gives you the freedom to go to the market. 01:36:00.980 |
It's not, it's just a few people in a small community 01:36:03.660 |
using cars as opposed to the entirety of the world. 01:36:06.460 |
I like that the first time that a car equipped 01:36:11.500 |
with some version of a solution to the trolley problem 01:36:23.860 |
- You know, I ask you, when have you had to decide 01:36:29.580 |
No, you put the brakes on and you brake as hard as you can. 01:36:35.500 |
- It is, you know, I do think autonomous vehicles 01:36:40.740 |
the whole pedestrian problem that has elements 01:36:43.300 |
of the trolley problem within it, but it's not-- 01:36:47.740 |
in one of the articles or blog posts that I wrote. 01:36:53.140 |
one of my coworkers has told me he does this. 01:37:01.740 |
Now, you know, once they realize that, you know, 01:37:04.380 |
putting one foot off the curb makes the car think 01:37:20.340 |
I believe that robots that interact with humans 01:37:29.140 |
because that creates a very uncompelling experience 01:37:33.660 |
before it was called Waymo, discovered that, you know, 01:37:36.500 |
they had to do that at four-way intersections. 01:37:44.020 |
the other drivers would just beat them all the time. 01:37:50.420 |
one of the most successful robotics companies ever. 01:38:01.620 |
- Well, there's something I'm quite proud of there, 01:38:06.780 |
but I was still on the board when this happened. 01:38:25.860 |
which was, everything was, I've been there since. 01:38:40.260 |
well, you know, Japan is so good at robotics. 01:38:52.300 |
but with intelligence, dealing with roadside bombs. 01:39:05.500 |
Whereas the Japanese robots, which were, you know, 01:39:18.500 |
Couldn't do a thing because they weren't deployed, 01:39:21.380 |
but we had deployed in really harsh conditions 01:39:24.940 |
And so we're able to do something very positive 01:39:40.780 |
What about the simple robot that is the Roomba, 01:39:54.900 |
- Well, I make the joke that I started out life 01:40:10.700 |
there was a wacky lawsuit that I got deposed for 01:40:19.660 |
And I was the only one who had emailed from the 1990s 01:40:37.140 |
What was I doing at the time we were building the Roomba? 01:40:40.140 |
One of the things was we had this incredibly tight budget 01:40:47.660 |
'cause we wanted to put it on the shelves at $200. 01:40:51.580 |
There was another home cleaning robot at the time. 01:41:03.220 |
And to us, that was not going to be a consumer product. 01:41:19.060 |
That means the cost of goods has to be minimal. 01:41:22.820 |
So I find all these emails of me going, you know, 01:41:31.740 |
I'd go down to Hsinchu and talk to these little tiny 01:41:34.420 |
companies, lots of little tiny companies outside of TSMC, 01:41:38.460 |
Taiwan Semiconductor Manufacturing Corporation, 01:41:42.620 |
which let all these little companies be fabless. 01:41:51.700 |
their innovations were to build stripped down 6802s. 01:41:57.700 |
Get rid of half the silicon and still have it be viable. 01:42:14.420 |
you know, they weren't gaming in the current sense. 01:42:16.980 |
There were these handheld games that you would play 01:42:22.900 |
'Cause we had about a 50 cent budget for computation. 01:42:34.740 |
the interrupt handling is too weak for a general purpose. 01:42:41.660 |
And then I found this one from a company called Winbond, 01:42:44.540 |
which had, and I'd forgotten that it had this much RAM. 01:42:47.700 |
It had 512 bytes of RAM and it was in our budget 01:43:03.820 |
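As a hedged aside on what fits in that kind of budget: the sketch below is illustrative only, not iRobot's firmware, and it is in Python rather than the assembly such a part would actually run; the point is just how little working state a reactive bump-and-turn coverage behavior needs.

```python
# Illustrative only: a reactive bump-and-turn coverage loop whose entire
# working state is a mode flag and one counter -- a few bytes, not kilobytes.
import random

def clean_step(state, bumped):
    """One control tick: returns a (command, magnitude) pair for the motors."""
    if bumped:
        state["mode"] = "turn"
        state["turn_ticks"] = random.randint(5, 20)   # pick a new heading at random
    if state["mode"] == "turn":
        state["turn_ticks"] -= 1
        if state["turn_ticks"] <= 0:
            state["mode"] = "forward"
        return ("rotate", 1.0)
    return ("drive", 1.0)

state = {"mode": "forward", "turn_ticks": 0}
for tick in range(10):
    print(tick, clean_step(state, bumped=(tick == 3)))   # simulate one bump on tick 3
```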
did you ever think that you guys could be so successful? 01:43:06.500 |
Like eventually this company would be so successful. 01:43:13.980 |
We'd had 14 failed business models up to 2002. 01:43:27.940 |
'cause by this time we had some venture capital in. 01:43:41.860 |
And we went three times over what they authorized 01:43:46.060 |
and built 70,000 of them and sold them all in that first, 01:44:00.980 |
- But yeah, you didn't think this will take over the world. 01:44:32.620 |
- The founders, not the consumer, the founders. 01:44:34.580 |
- Yeah, expectations of what can be delivered, sure. 01:44:37.180 |
Mispricing, and what a customer thinks is a valid price 01:44:54.300 |
sheer hardness of getting people to adopt a new technology. 01:45:22.380 |
where your primary source of income is from robots, 01:45:43.220 |
- I'm not making any such predictions in my own mind. 01:45:46.460 |
I'm not thinking about a trillion dollar company. 01:45:51.740 |
that Apple would ever be a trillion dollar company. 01:45:58.980 |
but don't you, 'cause I kind of have a vision 01:46:05.700 |
that I see that there will be robots in the home 01:46:16.220 |
And I think there's a real market pull for them 01:46:22.260 |
You know, who's gonna do all the stuff for the older people? 01:46:26.340 |
There's too many, you know, I'm leading here. 01:46:39.220 |
to make that economic argument at this point. 01:46:55.780 |
And still today, the primary product of 20 years, 01:47:07.940 |
- Do you think it's possible for one person in the garage 01:47:14.580 |
Google self-driving car that turns into Waymo? 01:47:22.020 |
Do you think it's possible to go from the ground up 01:47:35.340 |
fair chunks of capital for some robotics companies. 01:47:46.420 |
But it can take real money to get into these things 01:48:02.820 |
is another amazing robotics company you co-founded. 01:48:11.180 |
And what are you most proud of with Rethink Robotics? 01:48:15.780 |
- I'm most proud of the fact that we got robots 01:48:23.180 |
absolutely safe for people and robots to be next to each other. 01:48:40.620 |
and big companies are advertising they're doing. 01:48:43.220 |
- That's both an interface problem and also a safety problem. 01:49:05.460 |
Maybe, maybe, you know, if I'd been stronger on, 01:49:09.980 |
and I remember the day, I remember the exact meeting. 01:49:17.140 |
So I'd said that, I'd set as a target for the company 01:49:22.300 |
that we were gonna build $3,000 robots with force feedback 01:49:39.580 |
plastic gear boxes and at a $3,000, you know, lifetime, 01:49:44.580 |
oh, $3,000, I was saying, we're gonna go after 01:49:49.780 |
not the people who already have robot arms in factories, 01:50:03.020 |
It doesn't have to have a 35,000 hour lifetime. 01:50:05.900 |
It's gonna be so cheap that it's OPEX, not CAPEX. 01:50:09.620 |
And so we had a prototype that worked reasonably well, 01:50:24.580 |
but we could use something called series elastic actuators. 01:50:39.300 |
'cause these plastic gears, they're not great gears. 01:50:42.180 |
And there's this ripple and trying to do force control 01:50:51.740 |
but I remember one of the mechanical engineers saying, 01:50:54.900 |
we'll just build a metal gearbox with spur gears 01:50:58.140 |
and it'll take six weeks, we'll be done, problem solved. 01:51:03.700 |
Two years later, we got the spur gearbox working. 01:51:06.940 |
We cost reduced in every possible way we could, 01:51:19.900 |
So our first robot product, Baxter, now costs $25,000. 01:51:24.700 |
And the only people who were gonna look at that 01:51:30.420 |
'cause that was somewhat cheaper for two arms 01:51:36.860 |
reproducibility of motion and certain velocities. 01:51:41.860 |
And I kept thinking, but that's not what we're giving you. 01:51:54.620 |
All the other robots have that repeatability. 01:51:58.580 |
- So can you clarify, force control is you can grab the arm 01:52:15.980 |
Under position control, you have fixtured where this is. 01:52:26.260 |
In force control, you would do something like 01:52:41.540 |
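To make the distinction concrete, here is a minimal sketch contrasting the two modes; the toy plant, gains, and thresholds are illustrative, not Rethink's controllers.

```python
# Illustrative contrast between the two control modes discussed above.

def position_control_step(x, x_target, kp=0.5):
    """Position control: servo toward a fixtured target coordinate."""
    return x + kp * (x_target - x)

def force_control_step(x, measured_force, f_target, kf=0.002):
    """Force control: keep advancing until the sensed contact force reaches
    f_target, regardless of where in space that contact happens to occur."""
    return x + kf * (f_target - measured_force)

# Toy world: a compliant surface sits at x = 0.30 m; pushing past it builds force.
def sense_force(x, surface=0.30, stiffness=400.0):
    return max(0.0, (x - surface) * stiffness)

xp = 0.0
for _ in range(20):
    xp = position_control_step(xp, x_target=0.25)
print(round(xp, 3))   # position control: ends at the commanded coordinate, 0.25

xf = 0.0
for _ in range(60):
    xf = force_control_step(xf, sense_force(xf), f_target=5.0)
print(round(xf, 3), round(sense_force(xf), 2))  # settles just past the surface, ~5 N of contact
```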
- Couldn't convince our customers who were in factories 01:52:46.940 |
and were used to thinking about things a certain way, 01:52:51.940 |
So then we said, okay, we're gonna build an arm 01:53:03.540 |
A certain sort of gearbox made by a company whose name 01:53:29.780 |
We ended up having to do, the robot was now overpriced. 01:53:33.340 |
- And that was your intuition from the very beginning, 01:53:39.020 |
you're opening a door to solve a lot of problems 01:53:52.620 |
- Yeah, I think we could have done it for five. 01:54:01.140 |
Why would you say that company not failed, but went under? 01:54:08.700 |
- We had buyers and there's this thing called 01:54:14.780 |
the Committee on Foreign Investment in the US, CFIUS. 01:54:24.620 |
around where the government could stop foreign money 01:54:28.020 |
coming into a US company based on defense requirements. 01:54:33.020 |
We went through due diligence multiple times. 01:54:39.100 |
but every consortium had Chinese money in it. 01:54:43.740 |
And all the bankers would say at the last minute, 01:54:54.340 |
when we were about to run out of money, two buyers. 01:54:57.300 |
And one used heavy-handed legal stuff with the other one, 01:55:08.780 |
and then bought the assets at 1/30th of the price 01:55:30.980 |
I said I was never gonna start another company. 01:55:33.020 |
I was pleased that everyone liked what we did so much 01:55:36.340 |
that the team was hired by three companies within a week. 01:55:41.340 |
Everyone had a job in one of these three companies. 01:55:46.700 |
because another company came in and rented the space. 01:55:50.820 |
So I felt good about people not being out on the street. 01:55:58.340 |
That's a revolutionary idea for a robot manipulation, 01:56:14.580 |
it showed you what its understanding of the task was. 01:56:23.460 |
And so we made it so that when the robot was running, 01:56:26.460 |
it could be showing graphs of what was happening 01:56:30.380 |
Other people, and some of them surprised me who they were, 01:56:34.740 |
were saying, "Well, this one doesn't look as human 01:56:46.740 |
I'm kind of disappointed whenever I talk to roboticists, 01:56:54.080 |
they seem to not want to do the eyes type of thing. 01:57:08.140 |
will have to do the human connection very well, 01:57:18.880 |
- Do you think, well, maybe by way of asking the question, 01:57:23.660 |
let me first mention that you're kind of critical 01:57:26.660 |
of the idea of the Turing test as a test of intelligence. 01:57:34.340 |
Do you think we'll be able to build an AI system 01:57:40.540 |
and it falls in love with the human, like romantic love? 01:57:45.380 |
- Well, we've had that with humans falling in love with cars 01:58:14.340 |
is the thing that you said with the Moravec paradox 01:58:20.700 |
But if we just look at a voice, like the movie "Her," 01:58:40.260 |
you know, as humans, as we think about the future, 01:58:43.820 |
and this is the premise of most science fiction movies, 01:58:59.100 |
So surprisingly, it might be surprising to you, 01:59:01.980 |
it might not, I think the best movie about this stuff 01:59:12.860 |
As the robot was trying to become more human, 01:59:17.420 |
the humans were adopting the technology of the robot 01:59:21.580 |
So there was a convergence happening in a sense. 01:59:50.300 |
I've got things in my ears so that I can sort of hear you. 02:00:06.340 |
do you think, so forget about love for a second, 02:00:15.380 |
what is the interviewer for the Alexa Prize or whatever? 02:00:21.700 |
Their idea is success looks like a person wanting to turn 02:00:27.980 |
on the Alexa and talk to an AI system for a prolonged period 02:00:37.980 |
And why is it difficult to build an AI system 02:00:45.900 |
Like not for, to check the weather or to check music, 02:00:59.620 |
being shocked at how much people would talk to Eliza. 02:01:03.260 |
And I remember, you know, in the seventies typing, 02:01:06.500 |
you know, stuff to Eliza to see what it would come back with. 02:01:12.700 |
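(For reference, Weizenbaum's program worked by shallow pattern matching and pronoun reflection, with no memory between turns; the toy sketch below, with made-up rules, captures that flavor rather than reproducing the original code.)

```python
import re

# Toy ELIZA-style responder: pattern matching plus pronoun reflection,
# with no memory of earlier turns -- no "ongoing existence" at all.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Tell me more."),
]

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```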
and this is a thing that Amazon's been trying to improve 02:01:31.300 |
where there seems to be an ongoing existence, which changes. 02:01:40.340 |
And there's no sort of intention of these systems 02:01:55.180 |
I'm just saying, I think this is why we don't feel 02:02:02.980 |
If you want the sort of interaction you're talking about, 02:02:07.660 |
Whether it's going to be sufficient, I don't know. 02:02:22.740 |
And I think it's achievable in the near term. 02:02:40.620 |
He was using it as a rhetorical device to argue 02:02:52.020 |
because you can't tell the difference when it's thinking. 02:02:58.500 |
What it has become is this sort of weird game 02:03:10.460 |
we had this thing that still goes on called the AI Olympics. 02:03:32.700 |
and you had to tell whether it was a man or a woman. 02:03:39.900 |
one man came up with a question that he could ask, 02:03:52.340 |
of whether the other person was really a man or a woman. 02:03:57.660 |
did you have green plastic toy soldiers as a kid? 02:04:23.220 |
What's the, that's why I say it's sort of devolved 02:04:28.140 |
- Nevertheless, conversation not formulated as a test 02:04:32.620 |
is a pretty, is a fascinatingly challenging dance. 02:04:44.940 |
how far away we are from solving intelligence 02:04:48.900 |
It's hard, computer vision is harder for me to pull apart. 02:04:53.140 |
But with language, with conversation, you could see-- 02:05:02.820 |
Shit, you mentioned something I was gonna go off on. 02:05:10.740 |
'cause you were the head of CSAIL, AI Lab for a long time. 02:05:16.900 |
You're, I don't know, to me, when I came to MIT, 02:05:24.460 |
And plus you, you're, I don't know, friends with, 02:05:33.740 |
all the legendary AI people of which you're one. 02:05:39.460 |
What are memories that stand out to you from that time, 02:05:53.660 |
- Let me tell you first a disappointment in myself. 02:05:59.060 |
and so many of the players were active in the '50s and '60s, 02:06:26.780 |
those who founded AI, robotics and computer science 02:06:33.740 |
is that often I have questions for those who have passed on. 02:06:37.700 |
- And I didn't think to ask them any of these questions. 02:07:14.740 |
what the successes were, what the failures were, 02:07:20.000 |
I would have asked him more questions about that. 02:07:36.300 |
I should have asked Marvin why he and Seymour Papert 02:07:46.260 |
Because Marvin's PhD thesis was on neural networks. 02:08:11.240 |
But still, it's kind of tragic that he was both 02:08:14.740 |
the proponent and the destroyer of neural networks. 02:08:25.560 |
- Well, yeah, but you gotta be more specific. 02:08:33.260 |
I mean, to me, it's a little bit also heartbreaking 02:08:35.860 |
that with Google and Facebook, like DeepMind and so on, 02:08:41.860 |
so much of the talent doesn't stay necessarily 02:08:46.740 |
for prolonged periods of time in these universities. 02:08:52.940 |
are more guilty than others of paying fabulous salaries 02:09:00.180 |
And then just, you never hear from them again. 02:09:06.700 |
And it's sort of like collecting Hollywood stars 02:09:18.020 |
there's an openness to the university setting 02:09:20.620 |
where you do research, both in the space of ideas 02:09:23.560 |
and the space, like publication, all those kinds of things. 02:09:25.660 |
- Yeah, and there's the publication and all that, 02:09:29.060 |
and often, although these places say they publish, 02:09:41.260 |
I think Google buying those eight or nine robotics companies 02:09:46.140 |
was bad for the field, because it locked those people away. 02:09:49.440 |
They didn't have to make the company succeed anymore, 02:09:53.100 |
locked them away for years, and then sort of all 02:10:11.160 |
I'm not sure I would say MIT is leading the world in AI, 02:10:19.580 |
I would say DeepMind, Google AI, Facebook AI. 02:10:33.020 |
But I think those other places are following a dream 02:10:41.780 |
and I'm not sure that it's well-founded, the dream, 02:10:47.460 |
and I'm not sure that it's going to have the impact 02:10:55.440 |
- You're talking about Facebook and Google and so on. 02:10:59.200 |
But the thing is, those research labs aren't, 02:11:01.840 |
there's the big dream, and I'm usually a fan of, 02:11:06.280 |
no matter what the dream is, a big dream is a unifier, 02:11:09.240 |
because what happens is you have a lot of bright minds 02:11:18.920 |
I mean, this is how so much progress is made. 02:11:20.840 |
- Yeah, so I'm not saying they're actually leading. 02:11:23.400 |
I'm not saying that the universities are leading, 02:11:26.440 |
but I don't think those companies are leading in general, 02:11:29.960 |
we saw this incredible spike in attendees at NeurIPS. 02:11:36.360 |
And as I said in my January 1st review this year for 2020, 02:11:42.040 |
2020 will not be remembered as a watershed year 02:11:50.320 |
There was nothing surprising happened anyway, 02:12:00.960 |
And there's a lot more people writing papers, 02:12:04.920 |
but the papers are fundamentally boring and uninteresting. 02:12:11.720 |
- Are there particular memories you have with Minsky 02:12:21.320 |
I mean, unfortunately, he's another one that's passed away. 02:12:24.320 |
You've known some of the biggest minds in AI. 02:12:34.600 |
- Well, he was interesting, 'cause he was very grumpy, 02:12:44.480 |
that the key to success or to keep being productive 02:12:48.480 |
is to hate everything you've ever done in the past. 02:13:08.960 |
I mean, that crankiness, I mean, there's a... 02:13:23.080 |
having access to technology before anyone else has seen it. 02:13:34.080 |
we had terminals that could show live video on them, 02:13:49.080 |
it wasn't like a typewriter ball hitting characters. 02:13:53.840 |
only in, you know, one bit, you know, black or white, 02:14:12.240 |
and I could, you know, I got there early enough in the day, 02:14:21.960 |
- So having that like direct glimpse into the future. 02:14:25.480 |
- Yeah, and, you know, I've had email every day since 1977, 02:14:29.960 |
and, you know, the host field was only eight bits, 02:14:36.600 |
but I could send email to other people at a few places. 02:14:47.000 |
- Let me ask you, I'll probably edit this out, 02:14:54.280 |
I'm hanging out with Don Knuth for a while tomorrow. 02:15:03.400 |
He's a very kind of theoretical computer science, 02:15:06.200 |
the puzzle of computer science and mathematics, 02:15:09.640 |
and you're so much about the magic of robotics, 02:15:19.720 |
- They did in a, you know, I know him now, we talk, 02:15:26.680 |
So, you know, besides, you know, analysis of algorithms, 02:15:30.720 |
he's well known for writing TeX, which is in LaTeX, 02:15:40.960 |
and he would do it, he would work overnight at the AI lab. 02:15:57.560 |
He later did his PhD at the Media Lab at MIT, 02:16:11.000 |
or washing machine size disk drives had failed, 02:16:13.640 |
and that's what brought the whole system down. 02:16:17.600 |
and we're pulling, you know, circuit cards out, 02:16:43.320 |
Smoke comes out, 'cause we put it in backwards, 02:16:53.760 |
- So that was your little, when was that again? 02:16:56.360 |
- Well, that must have been before October '79, 02:17:00.400 |
So sometime, probably '78, sometime early '79. 02:17:03.960 |
- Yeah, all those figures are just fascinating. 02:17:11.840 |
Is there, let me ask you to put on your big, wise man hat. 02:17:16.840 |
Is there advice that you can give to young people today, 02:17:28.120 |
how to live a life they're proud of, a successful life? 02:17:36.680 |
- Yeah, so many people ask me for advice and have asked, 02:17:58.320 |
Maybe I come from an age where I could be a rebel 02:18:08.560 |
But I think it's important not to get too caught up 02:18:18.400 |
And well, it depends on what you want in life. 02:18:31.200 |
So you have to make a lot of unsafe decisions. 02:18:46.560 |
- Or you just may end up not having a lousy career. 02:18:54.320 |
But there's no way to make all safe decisions 02:19:04.840 |
- Do you think about your death, about your mortality? 02:19:19.240 |
That made me work on my book harder for a while. 02:19:24.400 |
and now I'm doing more than full-time at the company 02:19:30.440 |
- When you think about it, are you afraid of it? 02:20:09.040 |
I'm gonna review these every year for 32 years. 02:20:15.800 |
- That puts a whole, every time you write the blog posts, 02:20:19.000 |
you're getting closer and closer to your own prediction. 02:20:32.280 |
- What I hope is that I actually finish writing this book 02:20:44.640 |
and sees something about changing the way they're thinking. 02:20:56.280 |
And then there'll be on a podcast 100 years from now 02:21:33.720 |
- Do you have a sense of what that big picture is? 02:21:40.360 |
- You know, my atheism tells me it's just random, 02:22:03.480 |
but this little pocket of complexity they will call earth, 02:22:13.160 |
those pockets of complexity are or how often, 02:22:25.920 |
if we're alone or if there's infinite number of-- 02:22:30.560 |
- Oh, I think it's impossible for me to believe 02:22:43.360 |
it could be like a graveyard of intelligent civilizations. 02:22:54.800 |
including all the robots you build, everything forgotten. 02:23:00.200 |
- Well, on average, everyone has been forgotten in history. 02:23:10.760 |
- I mean, yeah, well, not just on average, basically. 02:23:15.400 |
Very close to 100% of people who have ever lived 02:23:22.920 |
my great-grandparents, 'cause we didn't meet them. 02:23:36.520 |
and at times meaninglessness of it all, it's pretty fun. 02:23:40.240 |
And for me, one of the most fun things is robots, 02:24:04.240 |
please check out our sponsors in the description. 02:24:07.000 |
And now, let me leave you with the three laws of robotics 02:24:15.840 |
from Isaac Asimov. One, a robot may not injure a human being or, through inaction, allow a human being to come to harm. 02:24:20.200 |
Two, a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 02:24:27.560 |
And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.