Sebastian Thrun: Autopilot Makes Me a Safer Driver | AI Podcast Clips
You know, it's interesting you mentioned gutsy. Let me ask some maybe unanswerable questions, 00:00:07.000 |
maybe edgy questions, but in terms of how much risk is required, some guts, in terms 00:00:16.520 |
of leadership style, it would be good to contrast approaches. And I don't think anyone knows 00:00:22.880 |
what's right. But if we compare Tesla and Waymo, for example, Elon Musk and the Waymo 00:00:29.680 |
team, there's slight differences in approach. So on the Elon side, there's more, I don't 00:00:36.680 |
know what the right word to use, but aggression in terms of innovation. And on Waymo side, 00:00:45.520 |
there's more sort of cautious, safety focused approach to the problem. What do you think 00:00:53.640 |
it takes? Which leadership at which moment is right? Which approach is right? 00:01:00.640 |
Look, I don't sit in either of those teams, so I'm unable to even verify, like, whether what somebody 00:01:05.760 |
says is correct. At the end of the day, every innovator in that space will face a fundamental 00:01:11.760 |
dilemma. And I would say you could put aerospace titans into the same bucket, which is you 00:01:18.360 |
have to balance public safety with your drive to innovate. And this country in particular 00:01:25.360 |
in the States has a hundred plus year history of doing this very successfully. Air travel 00:01:30.360 |
is, what, a hundred times as safe per mile as ground travel, as cars. And there's a reason 00:01:37.160 |
for it because people have found ways to be very methodical about ensuring public 00:01:43.600 |
safety while still being able to make progress on important aspects, for example, like air 00:01:48.800 |
and noise and fuel consumption. So I think that those practices are proven and they actually 00:01:55.800 |
work. We live in a world safer than ever before. And yes, there will always be the provision 00:02:00.920 |
that something goes wrong. There's always the possibility that someone makes a mistake 00:02:04.240 |
or there's an unexpected failure. We can never guarantee to a hundred percent absolute safety 00:02:09.960 |
other than just not doing it. But I think I'm very proud of the history of the United 00:02:15.800 |
States. I mean, we've dealt with much more dangerous technology like nuclear energy and 00:02:20.720 |
kept that safe too. We have nuclear weapons and we keep those safe. So we have methods 00:02:27.080 |
and procedures that really balance these two things very, very successfully. 00:02:32.160 |
You've mentioned a lot of great autonomous vehicle companies that are taking sort of 00:02:36.280 |
the level four, level five, they jump into full autonomy with a safety driver and take that 00:02:41.400 |
kind of approach and also through simulation and so on. There's also the approach that 00:02:46.600 |
Tesla Autopilot is doing, which is kind of incrementally taking a level two vehicle and 00:02:52.920 |
using machine learning and learning from the driving of human beings and trying to creep 00:02:59.280 |
up, trying to incrementally improve the system until it's able to achieve level four autonomy. 00:03:04.700 |
So perfect autonomy in certain kinds of geographical regions. What are your thoughts on these contrasting approaches? 00:03:11.680 |
Well, first of all, I'm a very proud Tesla owner and I literally use the Autopilot every 00:03:16.600 |
day and it literally has kept me safe. It is a beautiful technology specifically for 00:03:23.560 |
highway driving when I'm slightly tired because then it turns me into a much safer driver 00:03:31.240 |
and I'm 100% confident that's the case. In terms of the right approach, I think the biggest 00:03:38.080 |
change I've seen since I ran the Waymo team is this thing called deep learning. I think 00:03:43.760 |
deep learning was not a hot topic when I started Waymo or Google self-driving cars. It was 00:03:49.040 |
there. In fact, we started Google Brain at the same time in Google X, so I invested in 00:03:53.180 |
deep learning, but people didn't talk about it. It wasn't a hot topic. And now it is. 00:03:57.720 |
There's a shift of emphasis from a more geometric perspective where you use geometric sensors 00:04:03.280 |
that give you a full 3D view and you do a geometric reasoning about, oh, this box over 00:04:07.080 |
here might be a car, towards a more human-like, oh, let's just learn about it. This looks 00:04:13.620 |
like the thing I've seen 10,000 times before, so maybe it's the same thing, machine learning 00:04:18.080 |
perspective. And that has really put, I think, all these approaches on steroids. At Udacity, 00:04:25.680 |
we teach a course in self-driving cars. In fact, I think we've graduated over 20,000 00:04:31.440 |
or so people on self-driving car skills, so every self-driving car team in the world now 00:04:36.600 |
uses our engineers. And in this course, the very first homework assignment is to do lane 00:04:42.360 |
finding on images. And lane finding in images, for laymen, what this means is you put a camera 00:04:47.480 |
into your car, or you open your eyes, and you want to know where the lane is, right? So 00:04:51.680 |
you can stay inside the lane with your car. Humans can do this super easily. You just 00:04:55.880 |
look and you know where the lane is just intuitively. For machines, for a long time, it was super 00:05:00.880 |
hard because people would write these kinds of crazy rules: if there's, like, white lane 00:05:04.880 |
markers, and here's what white really means, and this is not quite white enough, so, oh, 00:05:08.960 |
it's not white. Or maybe the sun is shining, so when the sun shines and this is white and 00:05:12.880 |
this is a straight line, or maybe it's not quite a straight line because the road is curved. 00:05:16.720 |
And do we know that there's really six feet between lane markings or not, or 12 feet, 00:05:20.320 |
whatever it is. And now what the students are doing, they would take machine learning. 00:05:26.560 |
So instead of like writing these crazy rules for the lane marker, they'll say, hey, let's 00:05:30.680 |
take an hour of driving and label it and tell the vehicle this is actually the lane by hand. 00:05:34.880 |
And then these are examples, and have the machine find its own rules for what lane markings are. 00:05:40.600 |
And within 24 hours, now every student that's never done any programming before in this 00:05:44.200 |
space can write a perfect lane finder as good as the best commercial lane finders. And that's 00:05:50.200 |
completely amazing to me. We've seen progress using machine learning that completely dwarfs 00:06:00.680 |
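The contrast Thrun describes, hand-written rules about what counts as "white enough" versus a rule learned from hand-labeled examples, can be sketched in a toy Python example. This is not the actual Udacity assignment, which works on real camera images with real lane finders; the rows, thresholds, and helper names below are invented purely for illustration.

```python
# Toy contrast between a hand-written lane-pixel rule and a learned one.
# A real lane finder works on camera images (edge detection or a neural
# network); here an "image row" is just a list of brightness values in
# [0, 255], and lane markers are the bright pixels.

def rule_based_lane_pixels(row, white_threshold=200):
    """Hand-written rule: a pixel is lane paint if it is 'white enough'.
    Brittle: the fixed threshold fails when lighting changes."""
    return [i for i, v in enumerate(row) if v >= white_threshold]

def learn_threshold(labeled_rows):
    """'Learn' the rule from hand-labeled examples instead: pick the
    threshold midway between the brightest road pixel and the darkest
    lane pixel seen in the labeled data."""
    lane_vals, road_vals = [], []
    for row, lane_indices in labeled_rows:
        for i, v in enumerate(row):
            (lane_vals if i in lane_indices else road_vals).append(v)
    return (max(road_vals) + min(lane_vals)) / 2

# Sunny scene: lane paint around 230, road around 100.
sunny = [100, 95, 230, 235, 100, 98, 228, 102]
# Overcast scene: everything darker; lane paint around 150, road around 40.
overcast = [40, 35, 150, 155, 42, 38, 148, 44]

# The fixed rule works in sunshine but finds nothing when it is overcast.
print(rule_based_lane_pixels(sunny))     # [2, 3, 6]
print(rule_based_lane_pixels(overcast))  # []

# Label one overcast row by hand and learn a threshold from the example.
t = learn_threshold([(overcast, {2, 3, 6})])
print(rule_based_lane_pixels(overcast, white_threshold=t))  # [2, 3, 6]
```

The point of the sketch is the one Thrun makes: instead of hand-tuning "what white really means" for every lighting condition, you label examples and let the machine derive its own rule from them.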
- What are your thoughts on Elon Musk's statement, provocative statement perhaps, that lidar 00:06:05.080 |
is a crutch. So this geometric way of thinking about the world may be holding us back if 00:06:12.360 |
what we should instead be doing in this robotics, in this particular space of autonomous vehicles 00:06:17.320 |
is using camera as a primary sensor and using computer vision and machine learning as the 00:06:24.200 |
- I think first of all, we all know that people can drive cars without lidars in their heads 00:06:31.640 |
because we only have eyes and we mostly just use eyes for driving. Maybe we use some other 00:06:38.160 |
perception about our bodies, accelerations, occasionally our ears, certainly not our noses. 00:06:45.600 |
So the existence proof is there that eyes must be sufficient. In fact, we could even 00:06:51.680 |
drive a car if someone put a camera out and then gave us the camera image with no latency, 00:06:58.320 |
you would be able to drive a car that way just the same. So a camera is also sufficient. 00:07:03.160 |
Secondly, I really love the idea that in the Western world, we have many, many different 00:07:08.040 |
people trying different hypotheses. It's almost like an anthill, like if an anthill tries 00:07:12.800 |
to forage for food, right? You can sit there as two ants and agree on what the perfect path 00:07:17.120 |
is, and then every single ant marches for where the most likely location of food is, or you can 00:07:21.440 |
have them just spread out. And I promise you the spread out solution will be better because 00:07:26.360 |
if the discussing, philosophical, intellectual ants get it wrong and they're all moving in the 00:07:31.120 |
wrong direction, they're gonna waste the day. And then they're gonna discuss again for another 00:07:34.560 |
week. Whereas if all these ants go in a random direction, someone's gonna succeed and they're 00:07:38.440 |
gonna come back and claim victory and get the Nobel Prize or whatever the ant equivalent 00:07:43.000 |
is. And then they all march in the same direction. And that's great about society. That's great 00:07:46.880 |
about the Western society. We're not plan based, we're not central based, we don't have 00:07:50.640 |
a Soviet Union-style central government that tells us where to forage. We just forage, we 00:07:56.920 |
start a C corp, we get investor money, go out and try it out. And who knows who's gonna