
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49


Chapters

0:00 Introduction
12:56 Biggest Benefit Arriving from the Machine Side or the Human Side
17:55 Future Impact of Neuralink
27:41 Where Does Tesla Currently Stand on Its Quest for Full Autonomy
28:53 Traffic Lights


00:00:00.000 | The following is a conversation with Elon Musk, part two, the second time we spoke on the podcast,
00:00:07.280 | with parallels, if not in quality, then in outfit, to the objectively speaking greatest
00:00:13.120 | sequel of all time, The Godfather Part II. As many people know, Elon Musk is a leader of Tesla,
00:00:20.720 | SpaceX, Neuralink, and the Boring Company. What may be less known is that he's a world-class
00:00:26.880 | engineer and designer, constantly emphasizing first principles thinking and taking on big
00:00:32.480 | engineering problems that many before him would consider impossible. As scientists and engineers,
00:00:39.600 | most of us don't question the way things are done, we simply follow the momentum of the crowd.
00:00:44.160 | But revolutionary ideas that change the world on the small and large scales happen when you
00:00:51.520 | return to the fundamentals and ask, "Is there a better way?" This conversation focuses on the
00:00:57.840 | incredible engineering and innovation done in brain-computer interfaces at Neuralink.
00:01:02.960 | This work promises to help treat neurobiological diseases, to help us further understand the
00:01:09.440 | connection between the individual neuron to the high-level function of the human brain,
00:01:14.400 | and finally, to one day expand the capacity of the brain through two-way communication
00:01:20.240 | with computational devices, the internet, and artificial intelligence systems.
00:01:24.640 | This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, Apple Podcasts,
00:01:32.000 | Spotify, support on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N.
00:01:39.600 | And now, as an anonymous YouTube commenter referred to our previous conversation as the,
00:01:45.520 | quote, "historical first video of two robots conversing without supervision,"
00:01:50.160 | here's the second time, the second conversation with Elon Musk.
00:01:56.560 | Let's start with an easy question about consciousness. In your view, is consciousness
00:02:03.120 | something that's unique to humans, or is it something that permeates all matter,
00:02:07.120 | almost like a fundamental force of physics? - I don't think consciousness permeates all matter.
00:02:12.960 | Panpsychists believe that. - Yeah.
00:02:14.560 | - There's a philosophical-- - How would you tell?
00:02:16.800 | - That's true. That's a good point. - I believe in the scientific method. I don't want to
00:02:22.800 | blow your mind or anything, but the scientific method is like, if you cannot test the hypothesis,
00:02:26.400 | then you cannot reach a meaningful conclusion that it is true.
00:02:29.360 | - Do you think consciousness, understanding consciousness, is within the reach of science,
00:02:34.800 | of the scientific method? - We can dramatically improve
00:02:39.360 | our understanding of consciousness. I would be hard-pressed to say that we understand anything
00:02:44.880 | with complete accuracy, but can we dramatically improve our understanding of consciousness?
00:02:50.400 | I believe the answer is yes. - Does an AI system, in your view,
00:02:55.760 | have to have consciousness in order to achieve human-level or superhuman-level intelligence?
00:03:00.320 | Does it need to have some of these human qualities, like consciousness, maybe a body,
00:03:05.760 | maybe a fear of mortality, capacity to love, those kinds of silly human things?
00:03:12.080 | - There's this scientific method, which I very much believe in, where something is true to the
00:03:22.880 | degree that it is testably so, and otherwise, you're really just talking about preferences or
00:03:35.120 | untestable beliefs or that kind of thing. So, it ends up being somewhat of a semantic question
00:03:42.480 | where we are conflating a lot of things with the word intelligence. If we parse them out and say,
00:03:48.800 | "Are we headed towards a future where an AI will be able to outthink us in every way?"
00:04:01.520 | Then the answer is unequivocally yes. - In order for an AI system to
00:04:08.000 | outthink us in every way, does it also need to have the capacity for consciousness,
00:04:13.760 | self-awareness, and understanding? - It will be self-aware, yes. That's
00:04:18.480 | different from consciousness. I mean, to me, in terms of what consciousness feels like,
00:04:23.600 | it feels like consciousness is in a different dimension. But this could be just an illusion.
00:04:30.400 | You know, if you damage your brain in some way physically, you damage your consciousness,
00:04:36.080 | which implies that consciousness is a physical phenomenon, in my view. What I think
00:04:44.160 | is really quite likely is that digital intelligence will be able to outthink us
00:04:49.760 | in every way, and it will also be able to simulate what we consider consciousness to a degree that
00:04:56.640 | you would not be able to tell the difference. - And from the aspect of the scientific method,
00:05:01.280 | it might as well be consciousness if we can simulate it perfectly.
00:05:04.560 | - If you can't tell the difference, and this is sort of the Turing test, but think of a more
00:05:10.720 | sort of advanced version of the Turing test. If you're talking to a digital superintelligence
00:05:18.080 | and can't tell if that is a computer or a human, like let's say you're just having a conversation
00:05:23.920 | over a phone or a video conference or something where you think you're talking to a person that
00:05:30.960 | looks like a person, makes all of the right inflections and movements and all the small subtleties that
00:05:38.240 | constitute a human, and talks like a human, makes mistakes like a human, and you literally just can't
00:05:47.600 | tell. Are you video conferencing with a person or an AI? - Might as well. - Might as well be human.
00:05:57.200 | So on a darker topic, you've expressed serious concern about existential threats of AI. It's
00:06:05.520 | perhaps one of the greatest challenges our civilization faces, but since I would say we're
00:06:10.960 | kind of optimistic descendants of apes, perhaps we can find several paths of escaping the harm of
00:06:16.480 | AI. So if I can give you three options, maybe you can comment which do you think is the most
00:06:21.600 | promising. So one is scaling up efforts on AI safety and beneficial AI research in hope of
00:06:28.560 | finding an algorithmic or maybe a policy solution. Two is becoming a multi-planetary species as
00:06:35.680 | quickly as possible. And three is merging with AI and riding the wave of that increasing
00:06:43.760 | intelligence as it continuously improves. What do you think is most promising, most interesting
00:06:49.520 | as a civilization that we should invest in? - I think there's a lot, a tremendous amount of
00:06:55.680 | investment going on in AI. Where there's a lack of investment is in AI safety, and there should be,
00:07:03.120 | in my view, a government agency that oversees anything related to AI to confirm that it
00:07:09.920 | does not represent a public safety risk. Just as there is a regulatory authority like
00:07:15.840 | the Food and Drug Administration, there's NHTSA for automotive safety, there's the FAA for
00:07:21.760 | aircraft safety. We generally come to the conclusion that it is important to have a government
00:07:27.200 | referee or a referee that is serving the public interest in ensuring that things are safe when
00:07:35.040 | there's a potential danger to the public. I would argue that AI is unequivocally something that has
00:07:42.960 | potential to be dangerous to the public and therefore should have a regulatory agency,
00:07:46.800 | just as other things that are dangerous to the public have a regulatory agency.
00:07:50.240 | But let me tell you, the problem with this is that the government moves very slowly,
00:07:55.520 | and usually the way a regulatory agency comes into being is that
00:08:04.160 | something terrible happens. There's a huge public outcry, and years after that,
00:08:09.760 | there's a regulatory agency or a rule put in place. Take something like seatbelts. It was known for
00:08:16.640 | a decade or more that seatbelts would have a massive impact on safety and save so many lives
00:08:26.960 | and prevent serious injuries. And the car industry fought the requirement to put seatbelts in tooth and nail.
00:08:33.120 | Tooth and nail. That's crazy. And hundreds of thousands of people probably died because of that.
00:08:40.720 | And they said people wouldn't buy cars if they had seatbelts, which is obviously absurd.
00:08:45.360 | You know, or look at the tobacco industry and how long they fought anything about smoking.
00:08:51.680 | That's part of why I helped make that movie, Thank You for Smoking. You can sort of see just
00:08:58.880 | how pernicious it can be when you have these companies effectively
00:09:04.160 | achieve regulatory capture of government. That's bad. People in the AI community refer to the advent of
00:09:15.040 | digital superintelligence as a singularity. That is not to say that it is good or bad,
00:09:22.480 | but that it is very difficult to predict what will happen after that point. And
00:09:28.160 | that there's some probability it will be bad, some probability it will be good.
00:09:31.520 | But obviously I want to affect that probability and have it be more good than bad.
00:09:36.880 | Well, let me, on the merger with AI question and the incredible work that's being done at Neuralink,
00:09:43.280 | there's a lot of fascinating innovation here across different disciplines going on.
00:09:48.240 | So the flexible wires, the robotic sewing machine that responds to brain movement,
00:09:55.120 | everything around ensuring safety and so on. So we currently understand very little about the human
00:10:02.960 | brain. Do you also hope that the work at Neuralink will help us understand more about the
00:10:11.040 | human mind, about the brain? Yeah, I think the work at Neuralink will definitely shed a lot of
00:10:16.000 | insight into how the brain and the mind works. Right now, just the data we have regarding
00:10:22.880 | how the brain works is very limited. We've got fMRI, which is kind of like putting a
00:10:30.800 | stethoscope on the outside of a factory wall and then putting it all over the factory wall and you
00:10:36.960 | can sort of hear the sounds, but you don't know what the machines are doing really. It's hard.
00:10:42.720 | You can infer a few things, but it's very broad brushstroke. In order to really know what's going
00:10:47.520 | on in the brain, you really need, you have to have high precision sensors and then you want to have
00:10:51.760 | stimulus and response. Like if you trigger a neuron, how do you feel? What do you see?
00:10:56.720 | How does it change your perception of the world? You're saying that physically getting close
00:11:01.600 | to the brain and being able to measure signals from the brain will sort of open the door
00:11:06.000 | into the factory. Yes, exactly. Being able to have high precision sensors that tell you what
00:11:14.400 | individual neurons are doing and then being able to trigger a neuron and see what the response is
00:11:20.400 | in the brain. So you can see the consequences of if you fire this neuron, what happens? How do you
00:11:28.320 | feel? What does it change? It'll be really profound to have this in people because people can articulate
00:11:34.160 | their change. Like if there's a change in mood or if they, you know, if they can tell you,
00:11:41.760 | if they can see better or hear better or be able to form sentences better or worse or, you know,
00:11:49.120 | their memories are jogged or that kind of thing. So on the human side, there's this incredible
00:11:55.520 | general malleability, plasticity, of the human brain. The human brain adapts, adjusts and so on.
00:12:01.040 | It's not that plastic to be totally frank. So there's a firm structure, but nevertheless,
00:12:06.320 | there is some plasticity and the open question is, sort of, if I could ask a broad question is
00:12:12.240 | how much that plasticity can be utilized. Sort of on the human side, there's some plasticity
00:12:17.200 | in the human brain and on the machine side, we have neural networks, machine learning,
00:12:24.800 | artificial intelligence, it's able to adjust and figure out signals. So there's a mysterious
00:12:30.400 | language that we don't perfectly understand that's within the human brain. And then we're trying to
00:12:35.600 | understand that language to communicate both directions. So the brain is adjusting a little
00:12:40.960 | bit. We don't know how much and the machine is adjusting. Where do you see as they try to sort
00:12:46.720 | of reach together, almost like with an alien species, try to find a protocol, communication
00:12:52.000 | protocol that works? Where do you see the biggest benefit arriving from, on the machine side or the
00:12:59.200 | human side? Do you see both of them working together? I think the machine side is far more
00:13:03.680 | malleable than the biological side, by a huge amount. So it'll be the machine that adapts to
00:13:11.600 | the brain. That's the only thing that's possible. The brain can't adapt that well to the machine.
00:13:17.360 | You can't have neurons start to regard an electrode as another neuron, because to a neuron
00:13:22.960 | there's just the pulse, and so something else is pulsing. So there is that elasticity in
00:13:30.080 | the interface, which we believe is something that can happen. But the vast majority of the
00:13:35.840 | malleability will have to be on the machine side. But it's interesting when you look at that
00:13:39.920 | synaptic plasticity at the interface side, there might be like an emergent plasticity.
00:13:45.920 | Because it's a whole nother, it's not like in the brain, it's a whole nother extension of the brain.
00:13:50.560 | You know, we might have to redefine what it means to be malleable for the brain. So maybe the brain
00:13:56.960 | is able to adjust to external interfaces. There'll be some adjustments to the brain because there's
00:14:01.520 | going to be something reading and stimulating the brain. And so it will adjust to that thing.
00:14:09.440 | But the vast majority of the adjustment will be on the machine side.
00:14:13.520 | This is just, it has to be that, otherwise it will not work. Ultimately, we currently
00:14:20.800 | operate on two layers. We have sort of a limbic, like primitive brain layer,
00:14:25.120 | which is where all of our kind of impulses are coming from. It's sort of like we've got,
00:14:30.640 | we've got like a monkey brain with a computer stuck on it. That's the human brain. And a lot
00:14:36.240 | of our impulses and everything are driven by the monkey brain. And the computer, the cortex,
00:14:41.360 | is constantly trying to make the monkey brain happy. It's not the cortex that's
00:14:46.160 | steering the monkey brain; the monkey brain is steering the cortex.
00:14:49.040 | You know, the cortex is the part that tells the story of the whole thing. So we convince
00:14:55.520 | ourselves it's more interesting than just the monkey brain. The cortex is like what we call
00:15:00.880 | like human intelligence. You know, so it's like, that's like the advanced computer relative to
00:15:05.120 | other creatures. Other creatures do not have either, really, they don't have the computer,
00:15:12.640 | or they have a very weak computer relative to humans. But it's like, it sort of seems like
00:15:20.560 | surely the really smart thing should control the dumb thing, but actually the dumb thing
00:15:25.200 | controls the smart thing. So do you think some of the same kind of machine learning methods,
00:15:31.440 | whether that's natural language processing applications, are going to be applied for
00:15:35.760 | the communication between the machine and the brain to learn how to do certain things like
00:15:42.640 | movement of the body, how to process visual stimuli, and so on? Do you see the value of
00:15:48.960 | using machine learning to understand the language of the two-way communication with the brain?
00:15:54.880 | Sure. Yeah, absolutely. I mean, we're a neural net, and, you know, AI is basically a neural net.
00:16:02.240 | So a digital neural net will interface with a biological neural net.
00:16:06.000 | And hopefully bring us along for the ride, you know. But the vast majority of our intelligence
00:16:13.840 | will be digital. So like, think of like the difference in intelligence between your cortex
00:16:23.040 | and your limbic system is gigantic. Your limbic system really has no comprehension of what the
00:16:29.840 | hell the cortex is doing. It's just literally hungry, you know, or tired or angry or
00:16:37.920 | sexy or something, you know. And then that communicates that impulse to the cortex and
00:16:47.600 | tells the cortex to go satisfy that. Then a massive amount of thinking,
00:16:54.480 | a truly stupendous amount of thinking, has gone into sex without purpose, without procreation.
00:17:01.920 | Which is actually quite a silly action in the absence of procreation. It's a bit silly.
00:17:12.000 | Why are you doing it? Because it makes the limbic system happy. That's why.
00:17:16.480 | But it's pretty absurd, really.
00:17:19.760 | Well, the whole of existence is pretty absurd in some kind of sense.
00:17:24.880 | Yeah. But I mean, a lot of computation has gone into how can I do more of that
00:17:30.400 | with procreation not even being a factor? This is, I think, a very important area of research by NSFW.
00:17:37.440 | An agency that should receive a lot of funding, especially after this conversation.
00:17:44.160 | I propose the formation of a new agency.
00:17:46.400 | Oh, boy.
00:17:48.480 | What is the most exciting or some of the most exciting things that you see
00:17:55.680 | in the future impact of Neuralink, both on the science, the engineering and societal broad impact?
00:18:01.520 | So Neuralink, I think, at first will solve a lot of brain related diseases.
00:18:07.200 | So it could be anything from like autism, schizophrenia, memory loss, like everyone
00:18:12.640 | experiences memory loss at certain points in age. Parents can't remember their kids' names
00:18:17.760 | and that kind of thing. So there's a tremendous amount of good that Neuralink can do in solving
00:18:23.280 | critical damage to the brain or the spinal cord. There's a lot that can be done to improve
00:18:32.880 | quality of life of individuals. And those will be steps along the way. And then ultimately,
00:18:39.920 | it's intended to address the existential risk associated with digital superintelligence.
00:18:46.160 | Like we will not be able to be smarter than a digital supercomputer. So therefore,
00:18:55.360 | if you cannot beat them, join them. At least we'll have that option.
00:18:59.280 | So you have hope that Neuralink will be able to be a kind of connection to allow us to
00:19:08.480 | merge, to ride the wave of improving AI systems?
00:19:11.680 | I think the chance is above 0%.
00:19:14.320 | So it's non-zero. There's a chance.
00:19:18.400 | And that's...
00:19:18.900 | Have you seen Dumb and Dumber?
00:19:22.640 | So I'm saying there's a chance.
00:19:24.160 | He's saying one in a billion or one in a million, whatever it was, in Dumb and Dumber.
00:19:28.240 | You know, it went from maybe one in a million to, with improvement,
00:19:31.040 | maybe one in a thousand, then one in a hundred, then one in ten.
00:19:34.160 | It depends on the rate of improvement of Neuralink and how fast we're able to make progress.
00:19:40.400 | Well, I've talked to a few folks here, they're quite brilliant engineers. So I'm excited.
00:19:45.440 | Yeah, I think it's like fundamentally good. You know,
00:19:47.200 | giving somebody back full motor control after they've had a spinal cord injury,
00:19:52.400 | restoring brain functionality after a stroke,
00:19:56.240 | solving debilitating genetically oriented brain diseases. These are all incredibly great, I think.
00:20:03.680 | And in order to do these, you have to be able to interface with neurons at a detailed level
00:20:08.560 | and need to be able to fire the right neurons, read the right neurons.
00:20:12.320 | And then effectively you can create a circuit, replace what's broken with
00:20:19.120 | silicon and essentially fill in the missing functionality.
00:20:23.920 | And then over time, we can develop a tertiary layer.
00:20:31.120 | So if like the limbic system is a primary layer, then the cortex is like the second layer.
00:20:35.520 | And I said that, you know, obviously the cortex is vastly more intelligent than the limbic system,
00:20:40.400 | but people generally like the fact that they have a limbic system and a cortex.
00:20:43.760 | I haven't met anyone who wants to delete either one of them.
00:20:45.680 | They're like, okay, I'll keep them both. That's cool.
00:20:48.640 | The limbic system is kind of fun.
00:20:50.240 | It is. That's where the fun is. Absolutely.
00:20:52.160 | And then people generally don't want to lose the cortex either.
00:20:56.560 | Right. So they like having the cortex and the limbic system.
00:20:59.760 | Yeah.
00:21:00.560 | And then there's a tertiary layer, which will be digital superintelligence.
00:21:04.400 | And I think there's room for optimism given that the cortex is very intelligent and limbic system
00:21:12.880 | is not, and yet they work together well. Perhaps there can be a tertiary layer
00:21:16.960 | where digital superintelligence lies, and that will be vastly more intelligent than the cortex,
00:21:23.280 | but still coexist peacefully and in a benign manner with the cortex and limbic system.
00:21:29.280 | That's a super exciting future, both in the low level engineering that I saw being done here
00:21:34.000 | and the actual possibility in the next few decades.
00:21:37.120 | It's important that Neuralink solve this problem sooner rather than later, because
00:21:42.240 | the point at which we have digital superintelligence, that's when we pass the singularity
00:21:46.480 | and things become just very uncertain. It doesn't mean that they're necessarily bad or good,
00:21:49.760 | but the point at which we pass the singularity, things become extremely unstable.
00:21:53.440 | So we want to have a human brain interface before the singularity,
00:21:58.240 | or at least not long after it, to minimize existential risk for humanity and consciousness
00:22:04.240 | as we know it.
00:22:04.720 | But there's a lot of fascinating actual engineering, low level problems here at Neuralink
00:22:10.720 | that are quite exciting.
00:22:13.120 | The problems that we face in Neuralink are material science, electrical engineering,
00:22:19.760 | software, mechanical engineering, micro fabrication. It's a bunch of
00:22:25.520 | engineering disciplines, essentially. That's what it comes down to: you have to have a
00:22:28.960 | tiny electrode, so small it doesn't hurt neurons, but it's got to last for as long as a
00:22:38.880 | person, so it's got to last for decades. And then you've got to take that signal, you've got
00:22:43.840 | to process that signal locally at low power. So we need a lot of chip design engineers,
00:22:52.400 | because we're going to do signal processing and do so in a very power efficient way,
00:22:59.520 | so that we don't heat your brain up, because the brain is very heat sensitive.
00:23:03.120 | And then we've got to take those signals, we're going to do something with them.
00:23:06.720 | And then we've got to stimulate back, so you can have bidirectional communication.
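To make the signal-processing step concrete, here is a minimal sketch of one classic low-compute technique, threshold-based spike detection on a bandpass-filtered channel. This illustrates the general approach, not Neuralink's actual pipeline; the sampling rate, filter band, and threshold multiplier are assumed textbook values, not figures from the conversation.

```python
# Illustrative sketch only -- not Neuralink's pipeline. A classic
# low-power spike detector: bandpass to the spike band, then threshold
# at a multiple of a robust noise estimate (median(|x|)/0.6745).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_spikes(raw, fs=20_000, band=(300.0, 3000.0), k=4.5):
    """Return sample indices of threshold crossings on one channel.

    fs, band, and k are assumed values typical of the spike-detection
    literature, not parameters from the conversation.
    """
    # Low-order bandpass keeps the per-sample arithmetic cheap.
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, raw)
    # Robust noise estimate: for Gaussian noise, sigma ~ median(|x|)/0.6745.
    sigma = np.median(np.abs(x)) / 0.6745
    crossings = np.flatnonzero(x < -k * sigma)  # negative-going spikes
    if crossings.size == 0:
        return crossings
    # Keep only the first sample of each event (1 ms refractory window).
    keep = np.insert(np.diff(crossings) > int(0.001 * fs), 0, True)
    return crossings[keep]
```

The reason to do something like this on-device is the power budget he mentions: shipping a handful of spike times off the implant costs far less power, and therefore heat, than streaming raw waveforms.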
00:23:15.040 | So if somebody's good at material science, software, mechanical engineering, electrical
00:23:20.800 | engineering, chip design, micro fabrication, those are the things we need to work on.
00:23:26.000 | We need to go to material science so that we can have tiny electrodes that last a long time.
00:23:32.080 | And the material science problem is a tough one, because
00:23:35.760 | you're trying to read and stimulate electrically in an electrically active
00:23:42.000 | area, your brain is very electrically active and electrochemically active.
00:23:46.320 | So how do you have a coating on the electrode that doesn't dissolve over time,
00:23:51.840 | and is safe in the brain? This is a very hard problem.
00:23:57.200 | And then how do you collect those signals in a way that is most efficient, because you really
00:24:06.880 | just have very tiny amounts of power to process those signals. And then we need to automate the
00:24:12.720 | whole thing so it's like LASIK, you know, because if this is done by neurosurgeons,
00:24:19.440 | there's no way it can scale to a large number of people. And it needs to scale to large numbers
00:24:24.160 | of people, because I think ultimately we want the future to be determined by a large number of
00:24:29.280 | humans. Do you think this has a chance to revolutionize surgery, period? So neurosurgery
00:24:37.520 | and surgery all across? Yeah, for sure. It's got to be like LASIK. If LASIK had to be
00:24:43.120 | done by hand, by a person, that wouldn't be great. It's done by a robot.
00:24:50.640 | And the ophthalmologist kind of just needs to make sure your head's in the right position,
00:24:57.200 | and then they can just press a button and go. So Smart Summon, and soon Autopark, take on the
00:25:03.200 | full, beautiful mess of parking lots and their human-to-human nonverbal communication.
00:25:07.600 | I think it has actually the potential to have a profound impact in changing how our civilization
00:25:15.440 | looks at AI and robotics, because this is the first time human beings, people that don't own
00:25:20.880 | a Tesla, may have never seen a Tesla or heard about a Tesla, get to watch hundreds of thousands
00:25:25.680 | of cars without a driver. Do you see it this way, almost like an education tool for the world about
00:25:32.640 | AI? Do you feel the burden of that, the excitement of that? Or do you just think it's a smart
00:25:37.680 | parking feature? I do think you are getting at something important, which is most people have
00:25:43.440 | never really seen a robot. And what is a car that's autonomous? It's a four-wheeled robot.
00:25:49.440 | Right. Yeah. It communicates a certain sort of message with everything from safety to the
00:25:55.120 | possibility of what AI could bring, its current limitations, its current challenges, what's
00:26:01.360 | possible. Do you feel the burden of that, almost like a communicator, educator to the world about
00:26:05.920 | AI? We're just really trying to make people's lives easier with autonomy. But now that you
00:26:12.160 | mentioned it, I think it will be an eye opener to people about robotics, because they've really
00:26:17.120 | never seen, most people have never seen a robot. And there are hundreds of thousands of Teslas,
00:26:23.520 | won't be long before there's a million of them, that have autonomous capability and drive
00:26:28.080 | without a person in them. And you can see the evolution of the car's personality and thinking
00:26:34.880 | with each iteration of autopilot. You can see it's uncertain about this,
00:26:43.440 | but now it's more certain. Now it's moving in a slightly different way.
00:26:48.880 | I can tell immediately if a car is on Tesla autopilot, because it's got just little nuances
00:26:54.880 | of movement. It just moves in a slightly different way. Cars on Tesla autopilot, for example,
00:27:00.320 | on the highway are far more precise about being in the center of the lane than a person. If you
00:27:05.760 | drive down the highway and look at where cars are within their lane, the human-driven cars
00:27:12.320 | are like bumper cars, moving all over the place. The car on autopilot: dead center.
00:27:16.480 | Yeah. So the incredible work that's going into that neural network, it's learning fast.
00:27:23.440 | Autonomy is still very, very hard. We don't actually know how hard it is fully, of course.
00:27:28.800 | You look at most problems you tackle, this one included, with an exponential lens,
00:27:36.160 | but even with an exponential improvement, things can take longer than expected sometimes. So where
00:27:42.560 | does Tesla currently stand on its quest for full autonomy? What's your sense? When can we see
00:27:50.960 | successful deployment of full autonomy? Well, on the highway already, the probability of
00:27:58.800 | intervention is extremely low. Yes. So for highway autonomy, with the latest release,
00:28:06.560 | especially the probability of needing to intervene is really quite low. In fact, I'd say for
00:28:12.560 | stop-and-go traffic, it's far safer than a person right now. The probability of an injury or an impact is
00:28:20.720 | much, much lower for autopilot than a person. And then with Navigate on Autopilot, you can change
00:28:26.480 | lanes, take highway interchanges. And then we're coming at it from the other direction, which is
00:28:31.680 | low speed, full autonomy. And in a way, this is like, how does a person learn to drive? You
00:28:37.280 | learn to drive in the parking lot. The first time you learned to drive probably wasn't jumping onto
00:28:42.640 | Market Street in San Francisco. That'd be crazy. You learn to drive in the parking lot, get things
00:28:47.520 | right at low speed. And then the missing piece that we're working on is traffic lights and
00:28:55.120 | stop streets. Stop streets, I would say, are actually also relatively easy because you kind of know where
00:29:01.840 | the stop street is, worst case geocode it, and then use visualization to see where the line is
00:29:06.880 | and stop at the line to eliminate the GPS error. So I'd say complex traffic lights
00:29:15.360 | and very winding roads are the two things that need to get solved. What's harder, perception or
00:29:22.800 | control for these problems? So being able to perfectly perceive everything or figuring out
00:29:28.080 | a plan once you perceive everything, how to interact with all the agents in the environment,
00:29:32.480 | in your sense, from a learning perspective, is perception or action harder in that giant,
00:29:39.520 | beautiful, multitask learning neural network? The hardest thing is having accurate representation
00:29:45.440 | of the physical objects in vector space. So taking the visual input, primarily visual input,
00:29:52.320 | some sonar and radar, and then creating an accurate vector space representation of the
00:30:01.440 | objects around you. Once you have an accurate vector space representation, the planning and
00:30:06.240 | control is relatively easier. I'd say it's relatively easy. Basically, once you have
00:30:11.040 | accurate vector space representation, then it's kind of like a video game. Cars in
00:30:18.000 | Grand Theft Auto or something work pretty well. They drive down the road, they don't crash,
00:30:23.200 | you know, pretty much unless you crash into them. That's because they've got an accurate vector
00:30:27.280 | space representation of where the cars are and then they're rendering that as the output.
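As a toy illustration of why control gets easy once perception is solved: if every tracked object is already a small state vector in ego coordinates, a usable speed planner can be a few lines of geometry. This is a sketch of the general idea only, not Tesla's planner; the Track type, field names, and constants below are all invented for the example.

```python
# Toy example of planning on top of a "vector space" of tracked objects.
# Everything here (Track, lane width, headway) is invented for illustration.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # longitudinal offset ahead of the ego car, meters
    y: float   # lateral offset from lane center, meters
    vx: float  # longitudinal velocity relative to the ego car, m/s

def plan_speed(tracks: list[Track], ego_speed: float,
               lane_half_width: float = 1.8,
               time_headway: float = 2.0) -> float:
    """Return a target speed given objects already in ego coordinates."""
    # Only objects ahead of us and inside our lane matter for speed.
    ahead = [t for t in tracks if t.x > 0 and abs(t.y) < lane_half_width]
    if not ahead:
        return ego_speed  # free road: hold current speed
    lead = min(ahead, key=lambda t: t.x)  # nearest in-lane object
    # Constant-time-headway rule: never go faster than the lead car,
    # and never so fast that the gap is under `time_headway` seconds.
    lead_speed = ego_speed + lead.vx
    gap_speed = lead.x / time_headway
    return max(0.0, min(lead_speed, gap_speed))
```

The hard part, as the answer above says, is producing those clean (x, y, vx) vectors from camera pixels in the first place.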
00:30:32.880 | Do you have a sense, high level, that Tesla's on track
00:30:36.720 | to achieve full autonomy? So on the highway?
00:30:43.040 | Yeah, absolutely.
00:30:44.000 | And still no driver state, driver sensing?
00:30:47.120 | We have driver sensing, with torque on the wheel.
00:30:50.320 | That's right. By the way, just a quick comment on karaoke. Most people think it's fun,
00:30:57.120 | but I also think it is a driving feature. I've been saying for a long time, singing in the car
00:31:01.040 | is really good for attention management and vigilance management.
00:31:03.920 | That's right. Tesla karaoke is great. It's one of the most fun features of the car.
00:31:09.360 | Do you think of a connection between fun and safety sometimes?
00:31:11.920 | Yeah, you can do both at the same time. That's great.
00:31:14.640 | I just met with Ann Druyan, the wife of Carl Sagan, who directed Cosmos.
00:31:21.120 | I'm generally a big fan of Carl Sagan. He's super cool and had a great way of putting things.
00:31:27.760 | All of our consciousness, all civilization, everything we've ever known and done is on
00:31:31.520 | this tiny blue dot. People also get too trapped in their squabbles amongst humans
00:31:38.000 | and don't think of the big picture. They take civilization and our continued existence for
00:31:43.680 | granted. They shouldn't do that. Look at the history of civilizations. They rise and they fall.
00:31:50.640 | And now civilization is all globalized. And so civilization, I think now rises and falls together.
00:32:00.480 | There's not geographic isolation. This is a big risk.
00:32:04.960 | Things don't always go up. That should be an important lesson of history.
00:32:10.800 | In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, which is a spacecraft
00:32:19.120 | that's reaching out farther than anything human made into space, turned around to take a picture
00:32:24.960 | of Earth from 3.7 billion miles away. And as you're talking about the pale blue dot, that picture,
00:32:32.160 | the Earth takes up less than a single pixel in that image, appearing as a tiny blue dot,
00:32:38.000 | a pale blue dot, as Carl Sagan called it. So he spoke about this dot of ours in 1994.
00:32:47.760 | And if you could humor me, I was wondering if in the last two minutes you could read the words
00:32:55.360 | that he wrote describing this pale blue dot. Sure. Yes, it's funny. The universe appears to be 13.8
00:33:02.720 | billion years old. Earth is like four and a half billion years old.
00:33:09.040 | In another half billion years or so, the sun will expand and probably evaporate the oceans
00:33:18.160 | and make life impossible on Earth, which means that if it had taken consciousness
00:33:22.560 | 10% longer to evolve, it would never have evolved at all. Just 10% longer.
00:33:28.000 | And I wonder how many dead one-planet civilizations there are out there in the cosmos.
00:33:35.920 | That never made it to the other planet and ultimately extinguished themselves or were
00:33:41.120 | destroyed by external factors. Probably a few. It's only just possible to travel to Mars.
00:33:51.600 | Just barely. If G was 10% more, it wouldn't work really.
00:33:56.320 | If G was 10% lower, it would be easy.
00:34:01.520 | Like you can go single stage from the surface of Mars all the way to the surface of the Earth.
00:34:07.520 | Because Mars is 37% Earth's gravity. We need a giant boost to get off Earth.
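A rough back-of-the-envelope version of this argument, using the Tsiolkovsky rocket equation with textbook values rather than figures from the conversation:

\[
\Delta v = v_e \ln\frac{m_0}{m_1}
\quad\Longrightarrow\quad
\frac{m_0}{m_1} = e^{\Delta v / v_e}.
\]

With a methane/oxygen exhaust velocity of roughly \(v_e \approx 3.7\ \mathrm{km/s}\), reaching low orbit from Earth (\(\Delta v \approx 9.4\ \mathrm{km/s}\) including losses) requires a mass ratio of about \(e^{9.4/3.7} \approx 13\), while from Mars (\(\Delta v \approx 4.1\ \mathrm{km/s}\)) it is only about \(e^{4.1/3.7} \approx 3\). Because the required mass ratio is exponential in \(\Delta v\), even a modestly deeper gravity well pushes single-stage chemical rockets from hard toward effectively impossible, which is the sense of the 10% remark above.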
00:34:14.560 | Channeling Carl Sagan.
00:34:19.200 | Look again at that dot. That's here. That's home. That's us. On it, everyone you love,
00:34:28.960 | everyone you know, everyone you've ever heard of, every human being who ever was,
00:34:34.320 | lived out their lives. The aggregate of our joy and suffering. Thousands of confident religions,
00:34:40.000 | ideologies, and economic doctrines. Every hunter and forager, every hero and coward,
00:34:44.880 | every creator and destroyer of civilization. Every king and peasant, every young couple in love,
00:34:50.880 | every mother and father, hopeful child, inventor and explorer. Every teacher of morals,
00:35:07.600 | every corrupt politician, every superstar, every supreme leader, every saint and sinner
00:35:12.160 | in the history of our species lived there, on a mote of dust suspended in a sunbeam.
00:35:12.160 | Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity,
00:35:19.200 | in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.
00:35:25.360 | The earth is the only world known so far to harbor life. There is nowhere else,
00:35:29.680 | at least in the near future, to which our species could migrate. This is not true.
00:35:34.560 | This is false. Mars. And I think Carl Sagan would agree with that. He couldn't even imagine it at
00:35:42.400 | that time. So thank you for making the world dream and thank you for talking today. I really appreciate
00:35:48.400 | it. Thank you.
00:35:49.520 | Thank you.