
Jim Keller: Elon Musk and Tesla Autopilot | AI Podcast Clips


Transcript

All the cost is in the equipment to do it. And the trend on equipment is, once you figure out how to build the equipment, the trend of cost is zero. Elon said, first you figure out what configuration you want the atoms in, and then how to put them there.

Right? - Yeah. - 'Cause, well, here's the, you know, his great insight is, people are how-constrained. I have this thing, I know how it works, and then little tweaks to that will generate something, as opposed to: what do I actually want, and then figure out how to build it.

It's a very different mindset. And almost nobody has it, obviously. - Well, let me ask on that topic, you were one of the key early people in the development of autopilot, at least in the hardware side. Elon Musk believes that autopilot and vehicle autonomy, if you just look at that problem, can follow this kind of exponential improvement.

In terms of the how question that we're talking about, there's no reason why it can't. What are your thoughts on this particular space of vehicle autonomy, and your part of it, and Elon Musk's and Tesla's vision for-- - Well, the computer you need to build was straightforward. And you could argue, well, does it need to be two times faster, or five times, or 10 times?

But that's just a matter of time. Or price, in the short run. So that's not a big deal. You don't have to be especially smart to drive a car. So it's not like a super hard problem. I mean, the big problem with safety is attention, which computers are really good at, not skills.

- Well, let me push back on one. You see, everything you said is correct, but we as humans tend to take for granted how incredible our vision system is. So-- - You can drive a car with 20/50 vision, and you can train a neural network to extract the distance of any object in the shape of any surface from a video and data.

- Yeah, but-- - It's really simple. - No, it's not simple. - That's a simple data problem. - It's not simple. 'Cause it's not just detecting objects, it's understanding the scene, and it's being able to do it in a way that doesn't make errors. So the beautiful thing about the human vision system and our entire brain around the whole thing is we're able to fill in the gaps.

It's not just about perfectly detecting cars, it's inferring the occluded cars. It's trying to, it's understanding the physics-- - I think that's mostly a data problem. - So you think it's solvable with data, with improvement in computation, with improvement in collection? - Well, there is a, you know, when you're driving a car and somebody cuts you off, your brain has theories about why they did it.

You know, they're a bad person, they're distracted, they're dumb, you know, you can listen to yourself. - Right. - So, you know, if you think that narrative is important to be able to successfully drive a car, then current autopilot systems can't do it. But if cars are ballistic things with tracks and probabilistic changes of speed and direction, and roads are fixed and given, by the way, they don't change dynamically, right?

You can map the world really thoroughly. You can place every object really thoroughly, right? You can calculate trajectories of things really thoroughly. Right? - But everything you said about really thoroughly has a different degree of difficulty. So-- - And you could say at some point, computer autonomous systems will be way better at things that humans are lousy at.
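Under Keller's framing, where cars are ballistic objects and roads are fixed givens, calculating trajectories really is elementary kinematics. Here's a sketch of that claim with made-up numbers; the function name and parameters are illustrative, not code from any real autopilot system:

```python
def predict_trajectory(pos, vel, accel, dt=0.1, horizon=3.0):
    """Constant-acceleration rollout of a car treated as a ballistic
    object: x(t) = x0 + v*t + 0.5*a*t^2, applied per axis.

    pos, vel, accel are (x, y) tuples in m, m/s, m/s^2; returns a
    list of (x, y) positions over `horizon` seconds at `dt` steps.
    """
    steps = round(horizon / dt)
    path = []
    for i in range(1, steps + 1):
        t = i * dt
        path.append(tuple(p + v * t + 0.5 * a * t * t
                          for p, v, a in zip(pos, vel, accel)))
    return path

# A car 20 m ahead, closing at 5 m/s while braking at 1 m/s^2:
path = predict_trajectory((20.0, 0.0), (-5.0, 0.0), (1.0, 0.0))
```

The "probabilistic changes of speed and direction" part is what the rest of the conversation argues about: whether sampling over accelerations like this is enough, or whether you need a theory of the driver.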

Like, they'll be better at attention. They'll always remember there was a pothole in the road that humans keep forgetting about. They'll remember that this set of roads has these weirdo lines on it that the computers figured out once. And especially if they get updates, so if somebody changes a given, like the key to robots and stuff, somebody said, is to maximize the givens.

Right? - Right. - Right, so having a robot pick up this bottle cap is way easier if you put a red dot on the top, 'cause then you don't have to figure out so much. If you wanna do a certain thing with it, maximizing the givens is the thing. And autonomous systems are happily maximizing the givens.

Like, humans, when you drive someplace new, you remember it 'cause you're processing it the whole time. And after the 50th time you drove to work, you get to work, you don't know how you got there. Right? You're on autopilot. Right? Autonomous cars are always on autopilot. But the cars have no theories about why they got cut off or why they're in traffic.

So they also never stop paying attention. - Right. So I tend to believe you do have to have theories, meta-models of other people, especially with pedestrians and cyclists, but also with other cars. So everything you said is actually essential to driving. Driving is a lot more complicated than people realize, I think, so sort of to push back slightly, but-- - So to cut into traffic, right?

- Yep. - You can't just wait for a gap. You have to be somewhat aggressive. You'd be surprised how simple a calculation for that is. - I may be surprised on that particular point, but there's a... - Yeah. - Maybe I do have to push back. I would be surprised.
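Keller doesn't spell the calculation out, but a minimal version of the gap-acceptance check he's alluding to might look like this. Everything here, the function name, the merge time, the safety margin, is an illustrative assumption, not a description of any real system:

```python
def gap_is_safe(gap_m, rear_speed, my_speed, merge_time=3.0, margin_m=5.0):
    """A toy gap-acceptance test for cutting into traffic.

    Accept the gap if, after `merge_time` seconds of merging, the
    trailing car (closing at rear_speed - my_speed, in m/s) would
    still be at least `margin_m` meters behind us.
    """
    closing = max(0.0, rear_speed - my_speed)  # m/s the gap shrinks
    return gap_m - closing * merge_time >= margin_m

# A 30 m gap with the trailing car doing 25 m/s while we merge at 20 m/s
# shrinks by 15 m over 3 s, leaving 15 m, comfortably above the margin:
gap_is_safe(30.0, 25.0, 20.0)   # True
```

Whether a check this simple survives contact with aggressive human drivers is exactly the disagreement in the conversation.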

You know what? Yeah, I'll just say where I stand. I would be very surprised, but I think it's, you might be surprised how complicated it is. - I tell people, it's like progress disappoints in the short run and surprises in the long run. - It's very possible, yeah. - I suspect in 10 years, it'll be just taken for granted.

- Yeah, probably. But you're probably right. Now it looks like-- - It's gonna be a $50 solution that nobody cares about. It's like GPS is like, wow, GPS, we have satellites in space that tell you where your location is. It was a really big deal. Now everything has a GPS in it.

- Yeah, it's true, but I do think that systems that involve human behavior are more complicated than we give them credit for. So we can do incredible things with technology that don't involve humans, but when you-- - I think humans are less complicated than people, you know, frequently ascribe.

- Maybe I have-- - We tend to operate out of large numbers of patterns and just keep doing it over and over. - But I can't trust you because you're a human. That's something a human would say. But my hope, on the point you've made, is that no matter who's right, there's a lot of things that humans aren't good at that machines are definitely good at, like you said, attention.

Things like that, well, they'll be so much better that, in the overall picture of safety and autonomy, cars will obviously be safer, even if they're not as good at-- - I'm a big believer in safety. I mean, there are already safety systems, like cruise control that doesn't let you run into people, and lane keeping.

There are so many features that, you just look at the Pareto of accidents, and knocking off like 80% of them is, you know, super doable. - Just to linger on the autopilot team and the efforts there, it seems that there's very intense scrutiny by the media and the public in terms of safety, the pressure, the bar put before autonomous vehicles.

What was your sense, sort of as a person there working on the hardware and trying to build a system that builds a safe vehicle and so on, about that pressure? Is it unfair? Is it expected of new technology? - Yeah, it seems reasonable. I was interested, I talked to both American and European regulators, and I was worried that they would write technology solutions into the rules, the way modern brake regulations imply hydraulic brakes.

So if you read the regulations to meet the letter of the law for brakes, it sort of has to be hydraulic, right? And the regulator said they're interested in the use cases, like a head-on crash, an offset crash, don't hit pedestrians, don't run into people, don't leave the road, don't run a red light or a stoplight.

They were very much into the scenarios. And, you know, they had all the data about which scenarios injured or killed the most people. And for the most part, those conversations were like, what's the right thing to do to take the next step? Now, Elon's also very interested in the benefits of autonomous driving beyond safety: freeing people's time and attention.

And I think that's also an interesting thing. But, you know, building autonomous systems so they're safe, and safer than people, since the goal is to be 10x safer than people, setting the bar at safer-than-people and scrutinizing accidents seems philosophically correct. So I think that's a good thing.

- What is different, compared to the things you worked on at Intel, AMD, Apple, about Autopilot chip design and hardware design? What are interesting or challenging aspects of building this specialized kind of computing system in the automotive space? - I mean, there's two tricks to building an automotive computer.

One is the software team, the machine learning team, is developing algorithms that are changing fast. So as you're building the accelerator, you have this, you know, worry or intuition that the algorithms will change enough that the accelerator will be the wrong one, right? And there's the generic thing, which is, if you build a really good general purpose computer, say its performance is one, and then GPU guys will deliver about 5x the performance for the same amount of silicon, because instead of discovering parallelism, you're given parallelism.

And then special accelerators get another two to 5x on top of a GPU, because you say, I know the math is always eight-bit integers into 32-bit accumulators, and the operations are a subset of the mathematical possibilities. So, you know, AI accelerators have a claimed performance benefit over GPUs because, in the narrow math space, you're nailing the algorithm.
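The "eight-bit integers into 32-bit accumulators" point is the whole trick: fix the datatypes and the multiply-accumulate hardware gets tiny, so you can pack thousands of units on a die. Here's a toy software model of one such MAC chain, illustrative only, with NumPy's dtypes standing in for the hardware registers:

```python
import numpy as np

def int8_dot(a, b):
    """Dot product the way a fixed-function MAC array does it:
    8-bit integer inputs, widened to 32 bits before multiplying,
    accumulated in a 32-bit register so sums don't wrap at 8 bits."""
    acc = np.int32(0)
    for x, y in zip(a.astype(np.int8), b.astype(np.int8)):
        acc = acc + np.int32(x) * np.int32(y)  # widen, multiply, accumulate
    return int(acc)

# Worst-case int8 inputs: 127*127 + (-128)*(-128) = 32513,
# far outside what any 8-bit register could hold:
int8_dot(np.array([127, -128]), np.array([127, -128]))  # 32513
```

A GPU gives you programmable parallelism; the accelerator wins the extra 2-5x Keller cites by baking exactly this narrow datapath into silicon.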

Now, you still try to make it programmable, but the AI field is changing really fast. So there's a, you know, there's a little creative tension there of, I want the acceleration afforded by specialization without being over specialized so that the new algorithm is so much more effective that you'd have been better off on a GPU.

So there's a tension there. To build a good computer for an application like automotive, there's all kinds of sensor inputs and safety processors and a bunch of stuff. One of Elon's goals was to make it super affordable, so every car gets an autopilot computer. Some of the recent startups you look at have a server in the trunk, because they're saying, I'm going to build this autopilot computer that replaces the driver.

So their cost budget's 10 or $20,000. And Elon's constraint was, I'm going to put one in every car, whether people buy autonomous driving or not. So the cost constraint he had in mind was great. Right, and to hit that, you had to think about the system design, that's complicated.

It's fun, you know, it's craftsman's work, like a violin maker, right? You can say a Stradivarius is this incredible thing. The musicians are incredible. But the guy making the violin, you know, picked wood and sanded it, and then he cut it, you know, and he glued it, you know, and he waited for the right day.

So that when he put the finish on it, it didn't, you know, do something dumb. That's craftsman's work, right? You may be a genius craftsman, because you have the best techniques and you discover a new one, but most engineering is craftsman's work. And humans really like to do that.

You know, expression-- - Smart humans. - No, everybody. - All humans. - I don't know. I dug ditches when I was in college. I got really good at it. Satisfying. - Yeah. - So. - Digging ditches is also craftsman's work. - Yeah, of course. So there's an expression called complex mastery behavior.

So when you're learning something, that's fun, 'cause you're learning something. When you do something and it's rote and simple, it's not that satisfying. But if the steps that you have to do are complicated and you're good at 'em, it's satisfying to do them. And then if you're intrigued by it all, as you're doing them, you sometimes learn new things that raise your game.

But craftsman's work is good. And engineering is complicated enough that you have to learn a lot of skills. And then a lot of what you do is craftsman's work, which is fun. - Autonomous driving, building a very resource-constrained computer, so a computer that has to be cheap enough to put in every single car.

That essentially boils down to craftsman's work. It's engineering, it's-- - Yeah, you know, there's thoughtful decisions and problems to solve and trade-offs to make. Do you need 10 camera inputs or eight? You know, are you building for the current car or the next one? You know, how do you do the safety stuff?

You know, there's a whole bunch of details. It's fun, but it's not like I'm building a new type of neural network, which has new mathematics and a new computer to go with it. That, like, there's more invention in that. But the reduction to practice: once you pick the architecture, you look inside, and what do you see?

Adders and multipliers and memories and the basics. So computers, there's always this weird set of abstraction layers of ideas and thinking, and the reduction to practice is transistors and wires and pretty basic stuff. And that's an interesting phenomenon. By the way, like factory work, lots of people think factory work is rote assembly stuff.

I've been on the assembly line. Like the people who work there really like it. It's a really great job. It's really complicated. Putting cars together is hard, right? And the car is moving and the parts are moving and sometimes the parts are damaged and you have to coordinate putting all the stuff together and people are good at it.

They're good at it. And I remember one day I went to work and the line was shut down for some reason and some of the guys sitting around were really bummed 'cause they had reorganized a bunch of stuff and they were gonna hit a new record for the number of cars built that day.

And they were all gung-ho to do it. And these were big, tough buggers. (Lex laughs) But what they did was complicated and you couldn't do it. - Yeah, and I mean-- - Well, after a while you could, but you'd have to work your way up, 'cause putting the brights, what's called the brights, the trim, on a car on a moving assembly line, where it has to be attached in 25 places in a minute and a half, is unbelievably complicated.

And human beings can do it. It's really good. I think that's harder than driving a car, by the way. - Putting it together, working-- - Working in a factory. - Two smart people can disagree. - Yeah. - It's like driving a car. - Well, we'll get you in the factory someday and then we'll see how you do.

- No, not for us humans driving a car is easy. I'm saying building a machine that drives a car is not easy. Okay, driving a car is easy for humans because we've been evolving for billions of years. - To drive cars, yeah, I noticed that. - To do-- - The pale of the cars are super cool.

- Oh, now you join the rest of the internet in mocking me. - Okay. (Lex laughs) - It wasn't mocking, I was just intrigued by your anthropology. - Yeah, it's-- - I'll have to go dig into that. - There's some inaccuracies there, yes. Okay, but in general, what have you learned in terms of thinking about passion, craftsmanship, tension, chaos, the whole mess of it?

Or what have you learned, what have you taken away from your time working with Elon Musk, working at Tesla, which is known to be a place of chaos, innovation, craftsmanship, and all those things? - I really liked the way he thought. Like, you think you have an understanding of what the first principles of something are, and then you talk to Elon about it, and you didn't scratch the surface.

He has a deep belief that no matter what you do, it's a local maximum. - Right, and I had a friend, he invented a better electric motor. And it was a lot better than what we were using. And one day he came by, he said, "You know, I'm a little disappointed 'cause this is really great and you didn't seem that impressed." And I said, "You know when the super intelligent aliens come, are they gonna be looking for you?" Like, where is he?

The guy who built the motor. (Lex laughs) - Yeah. - Probably not. But doing interesting work that's both innovative and, let's say, craftsman's work on the current thing, it's really satisfying and it's good. And that's cool. And then Elon was good at taking everything apart. Like, what's the deep first principle?

Oh, no, what's really, no, what's really? You know, that ability to look at it without assumptions and without how-constraints is super wild. You know, he built a rocket ship and-- - Using the same process. - Electric car, you know, everything. And that's super fun and he's into it too.

Like, when they first landed two SpaceX rockets, at Tesla we had a video projector in the big room and like 500 people came down, and when they landed, everybody cheered and some people cried. It was so cool. All right, but how did you do that? Well, it was super hard.

And then people say, "Well, it's chaotic." Really? To get out of all your assumptions? You think that's not gonna be unbelievably painful? And is Elon tough? Yeah, probably. Do people look back on it and say, "Boy, I'm really happy I had that experience, to go take apart that many layers of assumptions"? Sometimes super fun, sometimes painful.

- So it could be emotionally and intellectually painful, that whole process of just stripping away assumptions. - Yeah, imagine 99% of your thought process is protecting your self-conception. And 98% of that's wrong. Now you got the math right. How do you think you're feeling when you get back into that one bit that's useful and now you're open and you have the ability to do something different?

I don't know if I got the math right. It might be 99.9, but it ain't 50. - Even imagining the 50% is hard enough. - Now, for a long time, I've suspected you could get better. Like you can think better, you can think more clearly, you can take things apart.

And there's lots of examples of that, people who do that. - And Elon is an example of that. - Apparently. - You are an example. - I don't know if I am. I'm fun to talk to. - Certainly. - I've learned a lot of stuff. Well, here's the other thing is, I joke, I read books.

And people think, "Oh, you read books." Well, no, I've read a couple of books a week for 55 years. Well, maybe 50, 'cause I didn't learn to read until I was eight or something. And it turns out when people write books, they often take 20 years of their life where they passionately did something, reduce it to 200 pages.

That's kind of fun. And then you go online and you can find out who wrote the best books and who, you know, that's kind of wild. So there's this wild selection process. And then you can read it and, for the most part, understand it. And then you can go apply it.

Like, I went to one company, and I hadn't managed much before, so I read 20 management books. And I started talking to them, and basically, compared to all the VPs running around, I'd read 19 more management books than anybody else. (laughing) It wasn't even that hard. And half the stuff worked, like, first time.

It wasn't even rocket science. - But at the core of that is questioning the assumptions, or sort of entering first-principles thinking, sort of looking at the reality of the situation and applying that knowledge. - Yeah, so I would say my brain has this idea that you can question first assumptions.

But I can go days at a time and forget that, and you have to kind of circle back to that observation. - Because it is emotionally challenging. - Well, it's hard to just keep it front and center 'cause you operate on so many levels all the time, and getting this done takes priority, or being happy takes priority, or screwing around takes priority.

Like how you go through life is complicated. And then you remember, oh yeah, I could really think first principles. Oh shit, that's tiring. But you do for a while and that's kind of cool. - So just as a last question in your sense from the big picture, from the first principles, do you think, you kind of answered it already, but do you think autonomous driving is something we can solve on a timeline of years?

So one, two, three, five, 10 years as opposed to a century? - Yeah, definitely. - Just to linger on it a little longer, where's the confidence coming from? Is it the fundamentals of the problem, the fundamentals of building the hardware and the software? - As a computational problem, understanding ballistics, roads, topography, it seems pretty solvable.

I mean, and you can see this, like, speech recognition: for a long time, people were doing frequency-domain analysis and all kinds of stuff, and that didn't work at all, right? And then they did deep learning on it and it worked great. And it took multiple iterations.

And autonomous driving is way past the frequency analysis point. Use radar, don't run into things. And the data gathering is going up and the computation is going up and the algorithm understanding is going up and there's a whole bunch of problems getting solved like that. - The data side is really powerful, but I disagree with both you and Elon.

I'll tell Elon once again as I did before that when you add human beings into the picture, it's no longer a ballistics problem. It's something more complicated, but I could be very well proven wrong. - Cars are highly damped in terms of rate of change. Like the steering system's really slow compared to a computer.

The acceleration is really slow. - Yeah, on a certain time scale, on a ballistics time scale, but human behavior, I don't know. I shouldn't say-- - Brains are really slow too. Weirdly, we operate half a second behind reality. Nobody really understands that one either. It's pretty funny. - Yeah, yeah.

We very well could be surprised. And I think with the rate of improvement in all aspects on both the compute and the software and the hardware, there's gonna be pleasant surprises all over the place. - Mm-hmm. (upbeat music)