
Sebastian Thrun: Autopilot Makes Me a Safer Driver | AI Podcast Clips


Transcript

You know, it's interesting you mentioned gutsy. Let me ask some maybe unanswerable, maybe edgy questions about how much risk is required, how much guts, in terms of leadership style. It would be good to contrast approaches, and I don't think anyone knows what's right. But if we compare Tesla and Waymo, for example, Elon Musk and the Waymo team, there are slight differences in approach.

So on the Elon side, there's more, I don't know what the right word to use is, but aggression in terms of innovation. And on the Waymo side, there's a more cautious, safety-focused approach to the problem. What do you think it takes? Which leadership at which moment is right? Which approach is right?

Look, I don't sit on either of those teams, so I'm unable to even verify whether what somebody says is correct. At the end of the day, every innovator in that space will face a fundamental dilemma, and I would say you could put aerospace titans into the same bucket: you have to balance public safety with your drive to innovate.

And this country in particular, the States, has a hundred-plus-year history of doing this very successfully. Air travel is, what, a hundred times as safe per mile as ground travel, as cars. And there's a reason for it: people have found ways to be very methodical about ensuring public safety while still being able to make progress on important aspects, for example, noise and fuel consumption.

So I think those practices are proven and they actually work. We live in a world safer than ever before. And yes, there will always be the chance that something goes wrong. There's always the possibility that someone makes a mistake or there's an unexpected failure. We can never guarantee a hundred percent absolute safety, other than by just not doing it.

But I think I'm very proud of the history of the United States. I mean, we've dealt with much more dangerous technology like nuclear energy and kept that safe too. We have nuclear weapons and we keep those safe. So we have methods and procedures that really balance these two things very, very successfully.

You've mentioned a lot of great autonomous vehicle companies that are taking sort of the level four, level five approach: they jump to full autonomy with a safety driver, and also work through simulation and so on. There's also the approach Tesla Autopilot is taking, which is kind of incremental: take a level two vehicle, use machine learning to learn from the driving of human beings, and try to improve the system bit by bit until it's able to achieve level four autonomy.

So perfect autonomy in certain kinds of geographical regions. What are your thoughts on these contrasting approaches? Well, first of all, I'm a very proud Tesla owner, and I literally use the Autopilot every day, and it literally has kept me safe. It is a beautiful technology, specifically for highway driving when I'm slightly tired, because then it turns me into a much safer driver, and I'm 100% confident that's the case.

In terms of the right approach, I think the biggest change I've seen since I ran the Waymo team is this thing called deep learning. I think deep learning was not a hot topic when I started Waymo or Google self-driving cars. It was there. In fact, we started Google Brain at the same time in Google X, so I invested in deep learning, but people didn't talk about it.

It wasn't a hot topic. And now it is. There's been a shift of emphasis from a more geometric perspective, where you use geometric sensors that give you a full 3D view and you do geometric reasoning about, oh, this box over here might be a car, towards a more human-like, oh, let's just learn about it.

This looks like the thing I've seen 10,000 times before, so maybe it's the same thing: a machine learning perspective. And that has really put, I think, all these approaches on steroids. At Udacity, we teach a course in self-driving cars. In fact, I think we've graduated over 20,000 or so people on self-driving car skills, so every self-driving car team in the world now uses our engineers.

And in this course, the very first homework assignment is to do lane finding on images. And lane finding on images, for laymen: what this means is, you put a camera in your car, or you open your eyes, and you want to know where the lane is, right? So you can stay inside the lane with your car.

Humans can do this super easily. You just look and you know where the lane is, just intuitively. For machines, for a long time, it was super hard, because people would write these kinds of crazy rules: if there are white lane markers, and here's what white really means; this one is not quite white enough, so, oh, it's not white.

Or maybe the sun is shining, so when the sun shines, this is white, and this is a straight line, or maybe it's not quite a straight line because the road is curved. And do we know that there's really six feet between lane markings, or twelve feet, whatever it is?
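The brittleness of those hand-written rules is easy to demonstrate. Here is a toy sketch (not from the Udacity course; the threshold and the synthetic "road" image are made up for illustration) of a fixed-brightness-threshold lane finder that works in one lighting condition and silently fails in another:

```python
import numpy as np

def find_lane_pixels(gray, white_thresh=200):
    """Rule-based lane finding: any pixel brighter than a fixed
    threshold is declared a lane marking. This is exactly the kind
    of brittle hand-written rule described above: change the
    lighting and the hard-coded threshold is wrong."""
    return gray > white_thresh

# A toy "road" image: dark asphalt (value 50) with a bright stripe.
road = np.full((10, 10), 50, dtype=np.uint8)
road[:, 4] = 230          # bright lane marking in good light
mask = find_lane_pixels(road)
print(mask[:, 4].all())   # True: the rule finds the stripe

# Now simulate an overcast day: everything is 40% darker.
dim_road = (road * 0.6).astype(np.uint8)   # marking drops to ~138
dim_mask = find_lane_pixels(dim_road)
print(dim_mask.any())     # False: the fixed rule silently fails
```

The rule never errors out; it just stops seeing the lane, which is the failure mode Thrun is describing.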

And now what the students do is take machine learning. So instead of writing these crazy rules for the lane markers, they'll say, hey, let's take an hour of driving, label it by hand, and tell the vehicle: this is actually the lane. And then these are the examples, and we have the machine find its own rules for what lane markings are.

And within 24 hours, now every student that's never done any programming before in this space can write a perfect lane finder as good as the best commercial lane finders. And that's completely amazing to me. We've seen progress using machine learning that completely dwarfs anything that I saw 10 years ago.
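The learning-based version of the same toy problem can be sketched in a few lines. Everything here is hypothetical: the synthetic pixel data stands in for the hand-labeled hour of driving, and a tiny logistic regression stands in for whatever model a real team would use. The point is only that the decision rule is fit from labeled examples rather than written by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data standing in for "an hour of driving,
# labeled by hand". Each pixel gets two lighting-robust features:
# contrast against the scene's mean brightness, and brightness
# relative to that mean. Label 1 = lane marking, 0 = road surface.
def make_samples(n, lighting):
    road = rng.normal(50 * lighting, 10, n)    # asphalt pixels
    lane = rng.normal(230 * lighting, 10, n)   # marking pixels
    intensity = np.concatenate([road, lane])
    mean = intensity.mean()
    X = np.stack([(intensity - mean) / 255.0,  # contrast
                  intensity / mean - 1.0],     # relative brightness
                 axis=1)
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# Labeled examples gathered under several lighting conditions.
parts = [make_samples(200, light) for light in (1.0, 0.6, 1.3)]
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])

# Tiny logistic regression trained by gradient descent: the machine
# finds its own decision rule from the labeled examples, instead of
# us hard-coding a brightness threshold.
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

# The learned rule holds up under a lighting level it never saw.
X_test, y_test = make_samples(200, 0.8)
pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
print((pred == y_test).mean())
```

Unlike the hard-coded threshold above, the fitted rule keeps working at the unseen lighting level, because it was trained on examples spanning several conditions rather than on one engineer's guess.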

- What are your thoughts on Elon Musk's provocative statement, perhaps, that lidar is a crutch? So this geometric way of thinking about the world may be holding us back, if what we should instead be doing in this robotics, in this particular space of autonomous vehicles, is using cameras as the primary sensor and using computer vision and machine learning as the primary way to...

- I think, first of all, we all know that people can drive cars without lidars in their heads, because we only have eyes, and we mostly just use our eyes for driving. Maybe we use some other perception about our bodies, accelerations, occasionally our ears, certainly not our noses. So the existence proof is there that eyes must be sufficient.

In fact, we could even drive a car if someone put a camera out and gave us the camera image with no latency; we would be able to drive a car that way just the same. So a camera is also sufficient. Secondly, I really love the idea that in the Western world, we have many, many different people trying different hypotheses.

It's almost like an anthill. If an anthill tries to forage for food, right, you can sit there as ants and agree on what the perfect path is, and then every single ant marches to where the most likely location of food is, or you can have them just spread out.

And I promise you, the spread-out solution will be better, because if the discussing, philosophical, intellectual ants get it wrong and they're all moving in the wrong direction, they're gonna waste the day. And then they're gonna discuss again for another week. Whereas if all these ants go in random directions, someone's gonna succeed, and they're gonna come back and claim victory and get the Nobel Prize, or whatever the ant equivalent is.

And then they all march in the same direction. And that's what's great about society, that's what's great about Western society. We're not plan-based, we're not centrally based, we don't have a Soviet Union-style central government that tells us where to forage. We just forage. We start a C corp.

We get investor money, go out and try it out. And who knows who's gonna win.