
Sean Carroll: The Nature of the Universe, Life, and Intelligence | Lex Fridman Podcast #26


Chapters

0:00 Introduction
2:21 Understanding the Universe
9:33 Living in a Simulation
11:04 Simulation Resolution
12:09 Simulations
14:32 Intelligence
17:41 Why Life
19:34 Space Exploration
23:36 Creating Intelligent Life
25:19 Creating Consciousness
27:16 Question Marks
28:06 The Power of Technology

Transcript

00:00:00.000 | The following is a conversation with Sean Carroll.
00:00:02.760 | He's a theoretical physicist at Caltech
00:00:04.920 | specializing in quantum mechanics, gravity, and cosmology.
00:00:08.800 | He's the author of several popular books.
00:00:11.640 | One on the arrow of time called "From Eternity to Here,"
00:00:15.360 | one on the Higgs boson called
00:00:17.240 | "Particle at the End of the Universe,"
00:00:19.160 | and one on science and philosophy called "The Big Picture"
00:00:22.560 | on the origins of life, meaning, and the universe itself.
00:00:26.360 | He has an upcoming book on quantum mechanics
00:00:28.720 | that you can pre-order now called "Something Deeply Hidden."
00:00:32.840 | He writes one of my favorite blogs
00:00:34.920 | on his website preposterousuniverse.com.
00:00:37.920 | I recommend clicking on the greatest hits link
00:00:40.400 | that lists accessible, interesting posts
00:00:43.240 | on the arrow of time, dark matter, dark energy,
00:00:45.560 | the Big Bang, general relativity,
00:00:47.520 | string theory, quantum mechanics,
00:00:49.480 | and the big meta questions about the philosophy of science,
00:00:53.080 | God, ethics, politics, academia, and much, much more.
00:00:57.600 | Finally, and perhaps most famously,
00:01:00.280 | he's the host of a podcast called "Mindscape"
00:01:03.640 | that you should subscribe to and support on Patreon.
00:01:06.920 | Along with The Joe Rogan Experience,
00:01:08.800 | Sam Harris's "Making Sense,"
00:01:10.520 | and Dan Carlin's "Hardcore History,"
00:01:13.080 | Sean's "Mindscape" podcast is one of my favorite ways
00:01:15.840 | to learn new ideas or explore different perspectives
00:01:18.800 | and ideas that I thought I understood.
00:01:22.120 | It was truly an honor to meet
00:01:24.640 | and spend a couple hours with Sean.
00:01:27.200 | It's a bit heartbreaking to say
00:01:28.960 | that for the first time ever,
00:01:30.480 | the audio recorder for this podcast died
00:01:32.760 | in the middle of our conversation.
00:01:34.880 | There are technical reasons for this
00:01:36.280 | having to do with phantom power
00:01:38.360 | that I now understand and will avoid.
00:01:41.020 | It took me one hour to notice and fix the problem.
00:01:44.180 | So, much like the universe is 68% dark energy,
00:01:48.280 | roughly the same amount from this conversation was lost,
00:01:51.280 | except in the memories of the two people involved
00:01:54.160 | and in my notes.
00:01:56.280 | I'm sure we'll talk again and continue this conversation
00:01:59.920 | on this podcast or on Sean's.
00:02:02.440 | And of course, I look forward to it.
00:02:05.320 | This is the Artificial Intelligence Podcast.
00:02:07.840 | If you enjoy it, subscribe on YouTube, iTunes,
00:02:11.040 | support it on Patreon,
00:02:12.520 | or simply connect with me on Twitter @lexfridman.
00:02:16.680 | And now, here's my conversation with Sean Carroll.
00:02:21.360 | - What do you think is more interesting and impactful,
00:02:23.520 | understanding how the universe works at a fundamental level
00:02:26.880 | or understanding how the human mind works?
00:02:29.200 | - You know, of course, this is a crazy,
00:02:31.960 | meaningless, unanswerable question in some sense,
00:02:33.960 | because they're both very interesting
00:02:35.160 | and there's no absolute scale of interestingness
00:02:37.520 | that we can rate them on.
00:02:39.180 | There's the glib answer that says
00:02:40.520 | the human brain is part of the universe, right?
00:02:43.080 | And therefore, understanding the universe
00:02:44.400 | is more fundamental than understanding the human brain.
00:02:47.040 | - But do you really believe that once we understand
00:02:49.600 | the fundamental way the universe works
00:02:51.520 | at the particle level, the forces,
00:02:53.760 | we would be able to understand how the mind works?
00:02:55.800 | - No, certainly not.
00:02:56.640 | We cannot understand how ice cream works
00:02:58.760 | just from understanding how particles work, right?
00:03:01.040 | So I'm a big believer in emergence.
00:03:02.760 | I'm a big believer that there are different ways
00:03:05.320 | of talking about the world
00:03:06.680 | beyond just the most fundamental microscopic one.
00:03:11.200 | You know, when we talk about tables and chairs
00:03:13.860 | and planets and people,
00:03:15.120 | we're not talking the language
00:03:16.400 | of particle physics and cosmology.
00:03:18.360 | So, but understanding the universe,
00:03:20.880 | you didn't say just at the most fundamental level, right?
00:03:24.040 | So understanding the universe at all levels
00:03:26.560 | is part of that.
00:03:28.200 | I do think, you know, to be a little bit more fair
00:03:29.960 | to the question, there probably are general principles
00:03:33.980 | of complexity, biology, information processing,
00:03:38.520 | memory, knowledge, creativity
00:03:41.840 | that go beyond just the human brain, right?
00:03:45.600 | And maybe one could count understanding those
00:03:47.800 | as part of understanding the universe.
00:03:49.120 | The human brain, as far as we know,
00:03:50.480 | is the most complex thing in the universe.
00:03:54.300 | So it's certainly absurd to think
00:03:57.400 | that by understanding the fundamental laws
00:03:58.840 | of particle physics, you get any direct insight
00:04:01.400 | on how the brain works.
00:04:02.840 | - But then there's this step from the fundamentals
00:04:05.960 | of particle physics to information processing,
00:04:08.680 | which a lot of physicists and philosophers
00:04:10.840 | maybe a little bit carelessly take
00:04:12.480 | when they talk about artificial intelligence.
00:04:14.640 | Do you think of the universe
00:04:18.040 | as a kind of a computational device?
00:04:21.320 | - No.
00:04:22.160 | To be like, the honest answer there is no.
00:04:24.160 | There's a sense in which the universe
00:04:26.320 | processes information, clearly.
00:04:29.160 | There's a sense in which the universe
00:04:30.680 | is like a computer, clearly.
00:04:33.880 | But in some sense, I think,
00:04:36.520 | I tried to say this once on my blog
00:04:38.520 | and no one agreed with me,
00:04:39.360 | but the universe is more like a computation
00:04:42.360 | than a computer, because the universe happens once.
00:04:45.040 | A computer is a general purpose machine, right?
00:04:46.920 | And you can ask it different questions,
00:04:48.680 | even a pocket calculator, right?
00:04:50.120 | And it's set up to answer certain kinds of questions.
00:04:52.960 | The universe isn't that.
00:04:54.320 | So information processing happens in the universe,
00:04:57.360 | but it's not what the universe is.
00:04:59.200 | And I know your MIT colleague, Seth Lloyd,
00:05:01.560 | feels very differently about this, right?
00:05:03.840 | - Well, you're thinking of the universe
00:05:05.720 | as a closed system.
00:05:07.240 | - I am.
00:05:08.080 | - So what makes a computer more like a PC,
00:05:11.760 | like a computing machine,
00:05:14.560 | is that there's a human that comes up to it
00:05:16.920 | and moves the mouse around.
00:05:19.120 | So input--
00:05:19.960 | - Gives it input.
00:05:20.780 | - Gives it input.
00:05:21.620 | And that's why you're saying it's just a computation,
00:05:26.320 | a deterministic thing that's just unrolling.
00:05:29.280 | But the immense complexity of it
00:05:32.240 | is nevertheless like processing.
00:05:34.440 | There's a state and it changes with rules.
00:05:39.440 | And there's a sense for a lot of people
00:05:41.680 | that if the brain operates,
00:05:44.440 | the human brain operates within that world,
00:05:46.480 | then it's simply just a small subset of that.
00:05:49.360 | And so there's no reason we can't build
00:05:52.560 | arbitrarily great intelligences.
00:05:55.600 | - Yeah.
00:05:56.440 | - Do you think of intelligence in this way?
00:05:58.720 | - Intelligence is tricky.
00:05:59.620 | I don't have a definition of it offhand.
00:06:01.700 | So I remember this panel discussion
00:06:04.600 | that I saw on YouTube.
00:06:05.520 | I wasn't there, but Seth Lloyd was on the panel.
00:06:07.680 | And so was Martin Rees, the famous astrophysicist.
00:06:10.720 | And Seth gave his shtick for why the universe is a computer
00:06:13.800 | and he explained this.
00:06:14.840 | And Martin Rees said, "So what is not a computer?"
00:06:19.240 | (laughs)
00:06:20.320 | Seth is like, "Oh, that's a good question.
00:06:21.920 | "I'm not sure."
00:06:22.760 | Because if you have a sufficiently broad definition
00:06:24.960 | of what a computer is, then everything is, right?
00:06:28.360 | And the simile or the analogy gains force
00:06:32.160 | when it excludes some things.
00:06:34.360 | You know, is the moon going around the earth
00:06:36.240 | performing a computation?
00:06:38.640 | I can come up with definitions
00:06:40.080 | in which the answer is yes,
00:06:41.320 | but it's not a very useful computation.
00:06:43.800 | I think that it's absolutely helpful
00:06:46.120 | to think about the universe in certain situations,
00:06:49.640 | certain contexts, as an information processing device.
00:06:53.040 | I'm even guilty of writing a paper
00:06:54.840 | called "Quantum Circuit Cosmology"
00:06:56.840 | where we modeled the whole universe as a quantum circuit.
00:06:59.240 | - As a circuit.
00:07:00.360 | - As a circuit, yeah.
00:07:01.360 | - With qubits kind of thing?
00:07:02.880 | - With qubits basically, right, yeah.
00:07:05.040 | So it's qubits becoming more and more entangled.
00:07:07.440 | So do we wanna digress a little bit?
00:07:09.680 | 'Cause it's kind of fun. - Let's do it.
00:07:11.040 | - So here's a mystery about the universe
00:07:13.680 | that is so deep and profound that nobody talks about it.
00:07:16.840 | Space expands, right?
00:07:19.040 | And we talk about, in a certain region of space,
00:07:21.920 | a certain number of degrees of freedom,
00:07:23.560 | a certain number of ways that the quantum fields
00:07:25.520 | and the particles in that region can arrange themselves.
00:07:28.800 | That number of degrees of freedom in a region of space
00:07:32.200 | is arguably finite.
00:07:33.760 | We actually don't know how many there are,
00:07:36.640 | but there's a very good argument
00:07:37.760 | that says it's a finite number.
00:07:39.400 | So as the universe expands and space gets bigger,
00:07:43.520 | are there more degrees of freedom?
00:07:46.760 | If it's an infinite number, it doesn't really matter.
00:07:48.480 | Infinity times two is still infinity.
00:07:49.960 | But if it's a finite number,
00:07:51.760 | then there's more space, so there's more degrees of freedom.
00:07:54.400 | So where did they come from?
00:07:55.720 | That would mean the universe is not a closed system.
00:07:57.980 | There's more degrees of freedom popping into existence.
00:08:01.480 | So what we suggested was
00:08:03.440 | that there are more degrees of freedom,
00:08:05.320 | and it's not that they aren't there to start,
00:08:07.940 | it's that they're not entangled to start.
00:08:10.840 | So the universe that you and I know of,
00:08:12.800 | the three dimensions around us that we see,
00:08:15.400 | we said those are the entangled degrees of freedom
00:08:18.060 | making up space-time.
00:08:19.600 | As the universe expands,
00:08:20.880 | there are a whole bunch of qubits in their zero state
00:08:25.160 | that become entangled with the rest of space-time
00:08:28.120 | through the action of these quantum circuits.
00:08:31.160 | - So what does it mean that there's now more
00:08:35.520 | degrees of freedom as they become more entangled?
00:08:39.260 | - Yeah, so--
00:08:40.100 | - As the universe expands.
00:08:41.620 | - That's right, so there's more and more degrees of freedom
00:08:43.300 | that are entangled, that are playing the role
00:08:46.980 | of part of the entangled space-time structure.
00:08:49.560 | So the underlying philosophy is that space-time itself
00:08:53.320 | arises from the entanglement
00:08:54.580 | of some fundamental quantum degrees of freedom.
00:08:57.540 | - Wow, okay, so at which point is most of the
00:09:02.140 | entanglement happening?
00:09:05.220 | Are we talking about close to the Big Bang?
00:09:07.440 | Are we talking about throughout the time of the--
00:09:11.500 | - Throughout history, yeah.
00:09:12.500 | So the idea is that at the Big Bang,
00:09:15.100 | almost all the degrees of freedom
00:09:16.740 | that the universe could have were there,
00:09:19.680 | but they were unentangled with anything else.
00:09:22.400 | And that's a reflection of the fact that the Big Bang
00:09:24.620 | had a low entropy, it was a very simple, very small place.
00:09:28.060 | And as space expands, more and more degrees of freedom
00:09:31.380 | become entangled with the rest of the world.
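A minimal numerical sketch of the toy picture described above: a fresh qubit sits in its zero state, unentangled, until a circuit couples it to the rest of the system. Everything in it (the two-qubit system, the specific gates) is an illustrative assumption, not the actual construction from the "Quantum Circuit Cosmology" paper:

```python
import numpy as np

# "Existing" qubit in an equal superposition; "fresh" qubit in its zero state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1.0, 0.0])
existing = H @ zero                            # (|0> + |1>)/sqrt(2)
fresh = zero                                   # unentangled |0>

# Joint state is a product state: no entanglement yet.
state = np.kron(existing, fresh)               # basis order |00>,|01>,|10>,|11>

# A CNOT (existing qubit controls the fresh one) entangles them.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state                           # now (|00> + |11>)/sqrt(2)

# Entanglement entropy of the fresh qubit: 0 before the gate, ln(2) after.
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
rho_fresh = np.einsum('ajak->jk', rho)         # trace out the existing qubit
evals = np.linalg.eigvalsh(rho_fresh)
entropy = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(entropy)  # ~0.693 = ln 2: the fresh degree of freedom is now entangled
```

Running the same entropy calculation before the CNOT gives zero, which is the sense in which the degree of freedom "was there all along" but not yet part of the entangled space-time structure.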
00:09:34.260 | - Well, I have to ask, Sean Carroll,
00:09:35.900 | what do you think of the thought experiment
00:09:37.840 | from Nick Bostrom that we're living in a simulation?
00:09:41.540 | So I think, let me contextualize that a little bit more.
00:09:44.940 | I think people don't actually take this thought experiment seriously.
00:09:48.340 | I think it's quite interesting.
00:09:50.420 | It's not very useful, but it's quite interesting.
00:09:52.880 | From the perspective of AI, a lot of the learning
00:09:55.460 | that can be done usually happens in simulation,
00:09:59.060 | from artificial examples.
00:10:01.460 | And so it's a constructive question to ask,
00:10:03.820 | how difficult is our real world to simulate?
00:10:08.260 | - Right.
00:10:09.380 | - Which is kind of a dual part of,
00:10:12.180 | if we're living in a simulation,
00:10:14.100 | and somebody built that simulation,
00:10:16.420 | if you were to try to do it yourself, how hard would it be?
00:10:18.860 | - So obviously we could be living in a simulation.
00:10:21.100 | If you just want the physical possibility,
00:10:23.000 | then I completely agree that it's physically possible.
00:10:25.400 | I don't think that we actually are.
00:10:27.380 | So take this one piece of data into consideration.
00:10:30.420 | We live in a big universe, okay?
00:10:33.980 | There's two trillion galaxies in our observable universe,
00:10:38.500 | with 200 billion stars in each galaxy, et cetera.
00:10:41.660 | It would seem to be a waste of resources
00:10:44.940 | to have a universe that big going on
00:10:46.540 | just to do a simulation.
00:10:47.540 | So in other words, I wanna be a good Bayesian.
00:10:50.140 | I wanna ask, under this hypothesis,
00:10:52.940 | what do I expect to see?
00:10:54.960 | So the first thing I would say is I wouldn't expect
00:10:56.780 | to see a universe that was that big, okay?
00:11:00.340 | The second thing is I wouldn't expect the resolution
00:11:02.540 | of the universe to be as good as it is.
00:11:05.060 | So it's always possible that if our superhuman simulators
00:11:08.780 | only have finite resources,
00:11:09.940 | that they don't render the entire universe, right?
00:11:12.060 | That the part that is out there,
00:11:14.360 | the two trillion galaxies,
00:11:16.340 | isn't actually being simulated fully, okay?
00:11:19.680 | But then the obvious extrapolation of that
00:11:22.780 | is that only I am being simulated fully.
00:11:25.540 | Like the rest of you are just non-player characters, right?
00:11:29.260 | I'm the only thing that is real.
00:11:30.540 | The rest of you are just chatbots.
00:11:32.780 | Beyond this wall, I see the wall,
00:11:34.340 | but there is literally nothing
00:11:36.060 | on the other side of the wall.
00:11:37.400 | That is sort of the Bayesian prediction.
00:11:39.340 | That's what it would be like
00:11:40.220 | to do an efficient simulation of me.
00:11:42.260 | So like none of that seems quite realistic.
00:11:45.740 | I don't see, I hear the argument
00:11:49.700 | that it's just possible and easy to simulate lots of things.
00:11:53.340 | I don't see any evidence from what we know
00:11:55.800 | about our universe that we look like a simulated universe.
00:11:59.360 | Now maybe you can say,
00:12:00.200 | well, we don't know what it would look like,
00:12:02.000 | but that's just abandoning your Bayesian responsibilities.
00:12:04.560 | Like your job is to say under this theory,
00:12:07.680 | here's what you would expect to see.
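For what it's worth, the Bayesian bookkeeping Carroll describes can be made concrete with a toy calculation. The likelihood numbers below are invented purely for illustration; the point is only the mechanics of saying, under each hypothesis, how probable the observed data is:

```python
# Toy Bayesian update. Each hypothesis must assign a probability to the
# data we actually see (a huge, fully rendered universe); hypotheses that
# predict a cheap, small universe get penalized by that observation.
def posterior(priors, likelihoods):
    """Bayes' rule over competing hypotheses, given one observation."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

priors = {"base_reality": 0.5, "cheap_simulation": 0.5}
# Assumed for illustration: P(two trillion fully rendered galaxies | hypothesis)
likelihoods = {"base_reality": 0.9, "cheap_simulation": 0.01}

print(posterior(priors, likelihoods))
# {'base_reality': ~0.989, 'cheap_simulation': ~0.011}
```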
00:12:09.560 | - Yeah, so certainly if you think about simulation
00:12:11.720 | as a thing that's like a video game
00:12:14.360 | where only a small subset is being rendered.
00:12:17.800 | But say the entire, all the laws of physics,
00:12:22.880 | the entire closed system of the quote unquote universe,
00:12:26.560 | it had a creator.
00:12:27.800 | - Yeah, it's always possible.
00:12:29.320 | - Right, so that's not useful to think about
00:12:32.300 | when you're thinking about physics.
00:12:34.000 | The way Nick Bostrom phrases it,
00:12:36.220 | if it's possible to simulate a universe,
00:12:39.120 | eventually we'll do it.
00:12:40.520 | - Right.
00:12:41.360 | - You can use that by the way for a lot of things.
00:12:44.880 | - Well, yeah.
00:12:45.720 | - But I guess the question is,
00:12:48.560 | how hard is it to create a universe?
00:12:52.340 | I wrote a little blog post about this
00:12:53.800 | and maybe I'm missing something,
00:12:55.440 | but there's an argument that says not only
00:12:57.680 | that it might be possible to simulate a universe,
00:13:00.480 | but probably if you imagine
00:13:04.400 | that you actually attribute consciousness and agency
00:13:07.320 | to the little things that we're simulating
00:13:09.920 | to our little artificial beings,
00:13:12.000 | there's probably a lot more of them
00:13:13.400 | than there are ordinary organic beings in the universe
00:13:15.480 | or there will be in the future, right?
00:13:17.400 | So there's an argument
00:13:18.240 | that not only is being a simulation possible,
00:13:20.760 | it's probable because in the space
00:13:23.560 | of all living consciousnesses,
00:13:24.960 | most of them are being simulated, right?
00:13:26.640 | Most of them are not at the top level.
00:13:28.860 | I think that argument must be wrong
00:13:30.520 | because it follows from that argument
00:13:32.080 | that if we're simulated,
00:13:34.120 | we can also simulate other things.
00:13:36.920 | Well, if we can simulate other things,
00:13:38.840 | they can simulate other things, right?
00:13:41.840 | If we give them enough power and resolution
00:13:44.320 | then ultimately we'll reach a bottom
00:13:46.000 | because the laws of physics in our universe have a bottom,
00:13:49.160 | we're made of atoms and so forth.
00:13:51.000 | So there will be the cheapest possible simulations.
00:13:55.120 | And if you believe the original argument,
00:13:57.680 | you should conclude that we should be
00:13:59.320 | in the cheapest possible simulation
00:14:00.960 | 'cause that's where most people are.
00:14:02.640 | But we don't look like that.
00:14:03.560 | It doesn't look at all like we're at the edge of resolution
00:14:06.840 | that we're 16 bit things.
00:14:09.400 | It seems much easier to make much lower level things
00:14:13.040 | than we are.
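The head-counting argument Carroll is pushing back on can be sketched with a toy model. Assume, purely for illustration, that each civilization at a given simulation depth runs some number of simulations at the next, cheaper depth:

```python
# Toy head count for nested simulations. Assumes each level runs `branching`
# simulations of the next, cheaper level, down to a physics-imposed bottom.
# All numbers are invented; the point is that the bottom level dominates.
def observers_by_depth(branching=10, max_depth=5, base_population=1.0):
    """Observer population at each simulation depth (0 = base reality)."""
    return {d: base_population * branching**d for d in range(max_depth + 1)}

pops = observers_by_depth()
total = sum(pops.values())
for depth, n in pops.items():
    print(f"depth {depth}: {n / total:.1%} of all observers")
# Depth 5 (the cheapest simulation) holds ~90% of observers here, which is
# why the original argument predicts we should find ourselves at the bottom.
```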
00:14:13.880 | And also I questioned the whole approach
00:14:18.200 | to the anthropic principle that says
00:14:19.840 | we are typical observers in the universe.
00:14:22.320 | I think that's not actually right.
00:14:23.640 | I think that there's a lot of selection going on,
00:14:27.320 | and we're typical within the things we already know,
00:14:30.140 | but not typical within all of the universe.
00:14:32.240 | - So do you think there's intelligent life,
00:14:35.760 | however you would like to define intelligent life,
00:14:37.800 | out there in the universe?
00:14:39.920 | - My guess is that there is not intelligent life
00:14:44.600 | in the observable universe other than us.
00:14:47.820 | Simply on the basis of the fact that for the number
00:14:52.500 | of other intelligent species in the observable universe,
00:14:56.300 | there are two likely numbers: zero or billions.
00:15:00.260 | (laughs)
00:15:01.460 | And if there had been billions,
00:15:02.540 | you would have noticed already.
00:15:04.100 | For there to be literally like a small number,
00:15:07.300 | like Star Trek, there's a dozen intelligent civilizations
00:15:12.300 | in our galaxy, but not a billion.
00:15:16.220 | That's weird, that's sort of bizarre to me.
00:15:18.480 | It's easy for me to imagine that there are zero others
00:15:21.020 | because there's just a big bottleneck
00:15:22.620 | to making multicellular life
00:15:24.980 | or technological life or whatever.
00:15:27.020 | It's very hard for me to imagine
00:15:28.580 | that there's a whole bunch out there
00:15:30.140 | that have somehow remained hidden from us.
00:15:32.300 | - The question I'd like to ask is,
00:15:34.900 | what would intelligent life look like?
00:15:36.800 | What I mean by that question and where it's going is,
00:15:41.140 | what if intelligent life is just fundamentalist,
00:15:44.900 | is in some very big ways different
00:15:47.820 | than the one that has on Earth?
00:15:51.500 | That there's all kinds of intelligent life
00:15:53.900 | that operates at different scales
00:15:55.420 | of both size and temporal.
00:15:57.300 | - Right, that's a great possibility
00:15:59.300 | because I think we should be humble
00:16:00.800 | about what intelligence is, what life is.
00:16:02.640 | We don't even agree on what life is,
00:16:04.020 | much less what intelligent life is, right?
00:16:06.120 | So that's an argument for humility,
00:16:08.980 | saying there could be intelligent life
00:16:10.900 | of a very different character, right?
00:16:13.620 | Like you could imagine that dolphins are intelligent
00:16:18.060 | but never invent space travel
00:16:20.500 | 'cause they live in the ocean
00:16:21.460 | and they don't have thumbs, right?
00:16:23.220 | So they never invent technology,
00:16:25.380 | they never invent smelting.
00:16:26.780 | Maybe the universe is full of intelligent species
00:16:32.020 | that just don't make technology, right?
00:16:34.060 | That's compatible with the data, I think.
00:16:36.340 | And I think maybe what you're pointing at
00:16:39.820 | is even more out there versions of intelligence.
00:16:44.460 | Intelligence in interstellar molecular clouds
00:16:47.580 | or on the surface of a neutron star
00:16:49.460 | or in between the galaxies in giant things
00:16:51.780 | where the equivalent of a heartbeat is 100 million years.
00:16:54.620 | On the one hand, yes, we should be very open-minded
00:16:59.020 | about those things.
00:16:59.860 | On the other hand, all of us share the same laws of physics.
00:17:04.860 | There might be something about the laws of physics
00:17:08.220 | even though we don't currently know
00:17:09.340 | exactly what that thing would be
00:17:10.860 | that makes meters and years
00:17:15.860 | the right length and timescales for intelligent life.
00:17:18.820 | Maybe not, but we're made of atoms,
00:17:22.180 | atoms have a certain size,
00:17:23.700 | we orbit stars, stars have a certain lifetime.
00:17:27.220 | It's not impossible to me that there's a sweet spot
00:17:30.220 | for intelligent life that we find ourselves in.
00:17:32.140 | So I'm open-minded in either way.
00:17:33.700 | I'm open-minded either being humble
00:17:35.220 | and there's all sorts of different kinds of life
00:17:37.020 | or no, there's a reason we just don't know it yet
00:17:39.220 | why life like ours is the kind of life that's out there.
00:17:42.060 | - Yeah, I'm of two minds too,
00:17:43.300 | but I often wonder if our brains are just designed to,
00:17:47.800 | quite obviously, to operate and see the world
00:17:52.700 | in these timescales and we're almost blind
00:17:56.340 | and the tools we've created for detecting things are blind
00:18:01.180 | to the kind of observation needed
00:18:02.740 | to see intelligent life at other scales.
00:18:04.900 | - Well, I'm totally open to that,
00:18:07.020 | but so here's another argument I would make.
00:18:09.260 | We have looked for intelligent life,
00:18:11.540 | but we've looked for it in the dumbest way we can,
00:18:14.140 | by turning radio telescopes to the sky.
00:18:16.620 | And why in the world would a super advanced civilization
00:18:21.060 | randomly beam out radio signals wastefully
00:18:24.020 | in all directions into the universe?
00:18:25.440 | That just doesn't make any sense,
00:18:27.280 | especially because in order to think
00:18:29.100 | that you would actually contact another civilization,
00:18:32.020 | you would have to do it forever.
00:18:33.820 | You'd have to keep doing it for millions of years.
00:18:35.840 | That sounds like a waste of resources.
00:18:38.280 | If you thought that there were other solar systems
00:18:43.140 | with planets around them where maybe intelligent life
00:18:46.020 | didn't yet exist, but might someday,
00:18:48.620 | you wouldn't try to talk to it with radio waves.
00:18:51.380 | You would send a spacecraft out there
00:18:53.580 | and you would park it around there.
00:18:55.540 | And it would be like, from our point of view,
00:18:57.380 | it'd be like 2001 where there was a monolith.
00:19:00.700 | - Monolith.
00:19:01.540 | - So there could be an artifact.
00:19:02.360 | In fact, the other way works also, right?
00:19:04.460 | There could be artifacts in our solar system
00:19:07.320 | that have been put there
00:19:10.440 | by other technologically advanced civilizations.
00:19:12.260 | And that's how we will eventually contact them.
00:19:14.620 | We just haven't explored the solar system
00:19:16.140 | well enough yet to find them.
00:19:17.640 | The reason why we don't think about that
00:19:19.960 | is 'cause we're young and impatient, right?
00:19:21.500 | Like it would take more than my lifetime
00:19:23.960 | to actually send something to another star system
00:19:26.060 | and wait for it and then come back.
00:19:27.740 | So, but if we start thinking on hundreds of thousands
00:19:30.780 | of years or million year timescales,
00:19:32.660 | it's clearly the right thing to do.
00:19:34.540 | - Are you excited by the thing that Elon Musk
00:19:37.540 | is doing with SpaceX in general?
00:19:39.300 | Space, but the idea of space exploration,
00:19:41.540 | even though you're, or your species is young and impatient?
00:19:45.300 | - Yeah.
00:19:46.140 | No, I do think that space travel is crucially important,
00:19:49.140 | long-term, even to other star systems.
00:19:52.420 | And I think that many people overestimate the difficulty
00:19:57.420 | because they say, look, if you travel 1% the speed of light
00:20:00.860 | to another star system, we'll be dead before we get there.
00:20:03.980 | And I think that it's much easier.
00:20:06.100 | And therefore, when they write their science fiction stories,
00:20:08.060 | they imagine we could go faster than the speed of light
00:20:09.540 | 'cause otherwise they're too impatient.
00:20:11.620 | We're not gonna go faster than the speed of light,
00:20:13.540 | but we could easily imagine that the human lifespan
00:20:15.940 | gets extended to thousands of years.
00:20:18.020 | And once you do that,
00:20:19.060 | then the stars are much closer effectively.
00:20:21.220 | What's a hundred year trip?
00:20:23.180 | So I think that that's gonna be the future,
00:20:25.740 | the far future, not my lifetime once again,
00:20:28.620 | but baby steps.
00:20:30.300 | - Unless your lifetime gets extended.
00:20:32.380 | - Well, it's in a race against time, right?
00:20:34.380 | A friend of mine who actually thinks about these things
00:20:37.300 | said, you know, you and I are gonna die,
00:20:40.420 | but I don't know about our grandchildren.
00:20:43.020 | That's, I don't know, predicting the future is hard,
00:20:45.900 | but that's at least a plausible scenario.
00:20:47.860 | And so, yeah, no, I think that as we discussed earlier,
00:20:51.780 | there are threats to the Earth, known and unknown, right?
00:20:56.740 | Having spread humanity and biology elsewhere
00:21:01.740 | is a really important long-term goal.
00:21:04.940 | - What kind of questions can science not currently answer,
00:21:08.860 | but might soon?
00:21:09.860 | When you think about the problems and the mysteries
00:21:13.820 | before us that may be within reach of science.
00:21:17.820 | - I think an obvious one is the origin of life.
00:21:20.260 | We don't know how that happened.
00:21:21.860 | There's a difficulty in knowing how it happened historically,
00:21:25.260 | actually, literally on Earth,
00:21:27.180 | but starting life from non-life
00:21:29.940 | is something I kind of think we're close to, right?
00:21:32.340 | We're really- - You really think so?
00:21:34.020 | How difficult is it to start life?
00:21:36.700 | - Well, I've talked to people,
00:21:39.100 | including on the podcast, about this.
00:21:40.940 | Life requires three things.
00:21:43.300 | Life as we know it.
00:21:44.180 | So there's a difference with life,
00:21:45.460 | which who knows what it is,
00:21:46.980 | and life as we know it,
00:21:48.060 | which we can talk about with some intelligence.
00:21:50.740 | So life as we know it requires compartmentalization.
00:21:53.780 | You need like a little membrane around your cell.
00:21:56.580 | Metabolism, you need to take in food and eat it
00:21:58.900 | and let that make you do things.
00:22:00.980 | And then replication, okay?
00:22:02.540 | So you need to have some information about who you are
00:22:04.540 | that you pass down to future generations.
00:22:07.820 | In the lab, compartmentalization seems pretty easy,
00:22:11.700 | not hard to make lipid bilayers
00:22:13.700 | that form little cellular walls pretty easily.
00:22:16.700 | Metabolism and replication are hard,
00:22:19.220 | but replication we're close to.
00:22:21.820 | People have made RNA-like molecules in the lab
00:22:24.900 | that I think the state of the art is
00:22:28.780 | they're not able to make one molecule
00:22:30.580 | that reproduces itself,
00:22:31.980 | but they're able to make two molecules
00:22:33.500 | that reproduce each other.
00:22:35.180 | So that's okay, that's pretty close.
00:22:37.020 | Metabolism is harder, believe it or not,
00:22:40.980 | even though it's sort of the most obvious thing,
00:22:42.820 | but you want some sort of controlled metabolism.
00:22:44.860 | And the actual cellular machinery in our bodies
00:22:47.460 | is quite complicated.
00:22:48.620 | It's hard to see it just popping into existence
00:22:50.900 | all by itself, it probably took a while.
00:22:53.700 | But we're making progress.
00:22:56.060 | And in fact, I don't think we're spending
00:22:57.180 | nearly enough money on it.
00:22:58.540 | If I were the NSF, I would flood this area with money
00:23:01.740 | 'cause it would change our view of the world
00:23:05.180 | if we could actually make life in the lab
00:23:06.740 | and understand how it was made originally here on Earth.
00:23:09.380 | - And I'm sure it'd have some ripple effects
00:23:11.140 | that help cure disease and so on.
00:23:12.900 | I mean, just that understanding.
00:23:14.340 | - So synthetic biology is a wonderful big frontier
00:23:16.620 | where we're making cells.
00:23:18.940 | Right now, the best way to do that
00:23:21.100 | is to borrow heavily from existing biology.
00:23:23.660 | Craig Venter, several years ago,
00:23:25.380 | created an artificial cell, but all he did was,
00:23:28.220 | not all he did, it was a tremendous accomplishment,
00:23:29.860 | but all he did was take out the DNA from a cell
00:23:33.180 | and put in entirely new DNA and let it boot up and go.
00:23:36.380 | - What about the leap to creating intelligent life on Earth?
00:23:42.220 | However we define intelligence, again, of course,
00:23:45.900 | but let's just even say Homo sapiens,
00:23:49.500 | the modern intelligence in our human brain.
00:23:54.480 | Do you have a sense of what's involved in that leap
00:23:58.700 | and how big of a leap that is?
00:24:00.460 | - So AI would count in this, or you really want life?
00:24:03.340 | You want really an organism in some sense?
00:24:06.460 | - AI would count, I think.
00:24:07.660 | - Okay.
00:24:08.500 | - Yeah, of course, of course AI would count.
00:24:11.060 | - Well, let's say artificial consciousness, right?
00:24:13.500 | So I do not think we are on the threshold
00:24:15.540 | of creating artificial consciousness.
00:24:16.780 | I think it's possible.
00:24:18.220 | I'm not, again, very educated about how close we are,
00:24:20.300 | but my impression is not that we're really close
00:24:22.100 | 'cause we understand how little we understand
00:24:24.860 | of consciousness and what it is.
00:24:26.460 | So if we don't have any idea what it is,
00:24:28.460 | it's hard to imagine we're on the threshold
00:24:29.820 | of making it ourselves.
00:24:31.640 | But it's doable, it's possible.
00:24:34.540 | I don't see any obstacles in principle.
00:24:35.980 | So yeah, I would hold out some interest
00:24:38.180 | in that happening eventually.
00:24:40.260 | - I think in general, consciousness,
00:24:42.700 | I think we'll be just surprised how easy consciousness is
00:24:46.900 | once we create intelligence.
00:24:49.060 | I think consciousness is a thing
00:24:51.060 | that's just something we all fake.
00:24:54.000 | - Well, good, no, actually I like this idea
00:24:57.180 | that in fact consciousness is way less mysterious
00:24:59.340 | than we think because we're all at every time,
00:25:02.020 | at every moment, less conscious than we think we are, right?
00:25:04.500 | We can fool things.
00:25:05.460 | And I think that plus the idea
00:25:07.780 | that you not only have artificial intelligent systems,
00:25:11.180 | but you put them in a body, right, give them a robot body,
00:25:14.300 | that will help the faking a lot.
00:25:18.460 | - Yeah, I think creating consciousness,
00:25:20.980 | artificial consciousness, is as simple
00:25:25.140 | as asking a Roomba to say, "I'm conscious."
00:25:29.980 | And refusing to be talked out of it.
00:25:32.740 | - Could be, it could be.
00:25:33.820 | - And I mean, I'm almost being silly,
00:25:36.740 | but that's what we do.
00:25:38.280 | That's what we do with each other.
00:25:40.940 | This is the idea
00:25:42.020 | that consciousness is also a social construct.
00:25:44.580 | And a lot of our ideas of intelligence
00:25:46.420 | is a social construct.
00:25:47.860 | And so reaching that bar
00:25:52.820 | involves something that
00:25:53.660 | doesn't necessarily involve the fundamental understanding
00:25:56.860 | of how you go from electrons to neurons to cognition.
00:26:01.860 | - No, actually I think that is an extremely good point.
00:26:05.060 | And in fact, what it suggests is,
00:26:08.700 | so yeah, you referred to Kate Darling,
00:26:10.540 | who I had on the podcast,
00:26:11.940 | and who does these experiments with very simple robots,
00:26:16.420 | but they look like animals,
00:26:18.060 | and they can look like they're experiencing pain,
00:26:20.740 | and we human beings react very negatively
00:26:23.380 | to these little robots
00:26:24.380 | looking like they're experiencing pain.
00:26:26.300 | And what you wanna say is,
00:26:28.820 | yeah, but they're just robots, it's not really pain, right?
00:26:31.700 | It's just some electrons going around.
00:26:33.060 | But then you realize, you and I
00:26:34.900 | are just electrons going around,
00:26:36.980 | and that's what pain is also.
00:26:38.460 | And so what I would have an easy time imagining
00:26:43.060 | is that there is a spectrum
00:26:44.740 | between these simple little robots that Kate works with
00:26:47.420 | and a human being,
00:26:49.420 | where there are things that, by some strict,
00:26:52.860 | Turing-test-level definition, are not conscious,
00:26:55.460 | but nevertheless walk and talk like they're conscious.
00:26:58.580 | And it could be that the future is,
00:27:00.220 | I mean, Siri is close, right?
00:27:02.420 | And so it might be the future
00:27:04.540 | has a lot more agents like that.
00:27:07.100 | And in fact, rather than someday going,
00:27:08.860 | aha, we have consciousness,
00:27:10.700 | we'll just creep up on it
00:27:12.060 | with more and more accurate reflections of what we expect.
00:27:15.260 | - And in the future, maybe the present,
00:27:18.340 | for example, we haven't met before,
00:27:20.820 | and you're basically assuming that I'm human.
00:27:23.500 | - I'd give it a high probability.
00:27:26.220 | - At this time, yeah,
00:27:28.540 | but in the future,
00:27:30.180 | there might be question marks around that, right?
00:27:32.020 | - Yeah, no, absolutely.
00:27:33.340 | Certainly videos are almost to the point
00:27:35.740 | where you shouldn't trust them already.
00:27:36.740 | Photos, you can't trust, right?
00:27:39.060 | Videos are easier to trust, but we're getting worse.
00:27:42.940 | We're getting better at faking them, right?
00:27:45.580 | - Getting better at faking.
00:27:46.500 | - Yeah, so physical embodied people,
00:27:48.780 | what's so hard about faking that?
00:27:51.020 | So this is very depressing,
00:27:51.980 | this conversation we're having right now.
00:27:53.420 | - So to me, it's exciting.
00:27:55.220 | - To me, you're doing it, so it's exciting to you,
00:27:57.780 | but it's a sobering thought.
00:27:59.060 | We're very bad, right?
00:28:00.180 | At imagining what the next 50 years are gonna be like
00:28:02.820 | when we're in the middle of a phase transition
00:28:04.220 | as we are right now.
00:28:05.260 | - Yeah, and I, in general,
00:28:06.700 | I'm not blind to all the threats.
00:28:09.180 | I am excited by the power of technology to solve,
00:28:14.180 | to protect us against the threats as they evolve.
00:28:18.020 | I'm not as optimistic about the world as Steven Pinker,
00:28:22.300 | but in everything I've seen,
00:28:23.700 | all of the brilliant people in the world that I've met
00:28:27.260 | are good people.
00:28:29.100 | So the army of the good
00:28:30.780 | in terms of the development of technology is large.
00:28:33.380 | - Okay, you're way more optimistic than I am.
00:28:36.860 | I think that goodness and badness are equally distributed
00:28:40.180 | among intelligent and unintelligent people.
00:28:42.700 | I don't see much of a correlation there.
00:28:44.620 | - Interesting.
00:28:46.020 | Neither of us have proof.
00:28:47.260 | - Yeah, exactly.
00:28:48.380 | Again, opinions are free, right?
00:28:50.640 | - Nor definitions of good and evil.
00:28:52.500 | We come, without definitions or data, to opinions.
00:28:57.420 | So what kind of questions can science not currently answer
00:29:01.940 | or may never be able to answer in your view?
00:29:04.380 | - Well, the obvious one is what is good and bad?
00:29:06.300 | You know, what is right and wrong?
00:29:07.860 | I think that there are questions that,
00:29:09.340 | you know, science tells us what happens,
00:29:11.300 | what the world is and what it does.
00:29:13.260 | It doesn't say what the world should do
00:29:14.740 | or what we should do 'cause we're part of the world.
00:29:17.820 | But we are part of the world
00:29:19.220 | and we have the ability to feel like something's right,
00:29:21.460 | something's wrong.
00:29:22.740 | And to make a very long story very short,
00:29:25.660 | I think that the idea of moral philosophy
00:29:28.020 | is systematizing our intuitions of what is right,
00:29:30.660 | what is wrong.
00:29:31.700 | And science might be able to predict ahead of time
00:29:34.580 | what we will do, but it won't ever be able to judge
00:29:38.000 | whether we should have done it or not.
00:29:39.620 | - So, you know, you're kind of unique among scientists.
00:29:43.620 | It doesn't have to do with podcasts,
00:29:45.520 | but even just reaching out,
00:29:47.660 | I think you referred to it as sort of
00:29:49.060 | doing interdisciplinary science.
00:29:51.300 | So you reach out and talk to people
00:29:54.100 | that are outside of your discipline,
00:29:55.980 | which I always hope that's what science was for.
00:30:00.140 | In fact, I was a little disillusioned
00:30:02.260 | when I realized that academia is very siloed.
00:30:06.380 | - Yeah.
00:30:07.220 | - And so the question is,
00:30:09.520 | how, at your own level,
00:30:12.980 | do you prepare for these conversations?
00:30:15.340 | How do you think about these conversations?
00:30:16.860 | How do you open your mind enough
00:30:18.260 | to have these conversations?
00:30:20.180 | And maybe a little bit broader,
00:30:21.900 | how can you advise other scientists
00:30:24.340 | to have these kinds of conversations?
00:30:26.220 | Not just on a podcast.
00:30:28.140 | The fact that you're doing a podcast is awesome.
00:30:29.820 | Other people get to hear them.
00:30:31.380 | But it's also good to have it without mics in general.
00:30:34.660 | - It's a good question, but a tough one to answer.
00:30:37.420 | I think about, you know, a guy I know who's a personal
00:30:40.180 | trainer, and he was asked on a podcast,
00:30:43.220 | how do we, you know, psych ourselves up to do a workout?
00:30:45.660 | How do we make that discipline to go and work out?
00:30:48.300 | And he's like, why are you asking me?
00:30:50.020 | Like, I can't stop working out.
00:30:51.740 | Like, I don't need to psych myself up.
00:30:54.340 | So, and likewise, you know, you asked me, like,
00:30:57.300 | how do you get to, like, have interdisciplinary
00:30:59.060 | conversations and all sorts of different things
00:31:00.660 | with all sorts of different people?
00:31:01.620 | I'm like, that's what makes me go, right?
00:31:04.780 | Like, that's, I couldn't stop doing that.
00:31:07.340 | I did that long before any of them were recorded.
00:31:09.620 | In fact, a lot of the motivation for starting recording it
00:31:12.340 | was making sure I would read all these books
00:31:14.380 | that I had purchased, right?
00:31:15.420 | Like, all these books I wanted to read,
00:31:17.660 | not enough time to read them.
00:31:18.860 | And now, if I have the motivation,
00:31:20.660 | 'cause I'm gonna, you know, interview Pat Churchland,
00:31:23.180 | I'm gonna finally read her book, you know?
00:31:27.140 | And it's absolutely true that academia
00:31:30.060 | is extraordinarily siloed, right?
00:31:31.700 | We don't talk to people, we rarely do.
00:31:34.260 | And in fact, when we do, it's punished, you know?
00:31:36.780 | Like, the people who do it successfully
00:31:38.820 | generally first became very successful
00:31:41.420 | within their little siloed discipline,
00:31:43.100 | and only then did they start expanding out.
00:31:46.380 | If you're a young person, you know,
00:31:47.660 | I have graduate students, and I try to be very, very
00:31:51.020 | candid with them about this, that it's, you know,
00:31:54.500 | most graduate students do not become faculty members,
00:31:57.140 | right, it's a tough road.
00:31:59.020 | And so, live the life you wanna live,
00:32:03.140 | but do it with your eyes open
00:32:04.620 | about what it does to your job chances.
00:32:06.900 | And the more broad you are,
00:32:09.580 | and the less time you spend hyper-specializing in your field,
00:32:14.020 | the lower your job chances are.
00:32:15.780 | That's just an academic reality.
00:32:17.100 | It's terrible, I don't like it, but it's a reality.
00:32:20.040 | And for some people, that's fine.
00:32:23.500 | Like, there's plenty of people who are wonderful scientists
00:32:25.700 | who have zero interest in branching out
00:32:28.060 | and talking to anyone outside their field.
00:32:31.620 | But it is disillusioning to me,
00:32:34.700 | some of the romantic notion I had
00:32:37.180 | of the intellectual academic life
00:32:39.180 | is belied by the reality of it.
00:32:40.980 | The idea that we should reach out beyond our discipline,
00:32:44.460 | and that is a positive good,
00:32:46.780 | is just so rare in universities
00:32:51.780 | that it may as well not exist at all.
00:32:53.900 | - But that said, even though you're saying you're doing it,
00:32:57.660 | like the personal trainer, because you just can't help it,
00:33:00.300 | you're also an inspiration to others.
00:33:02.940 | Like, I could speak for myself.
00:33:04.960 | You know, I also have a career I'm thinking about, right?
00:33:09.540 | And without your podcast,
00:33:12.060 | I may not have been doing this at all, right?
00:33:15.060 | So, it makes me realize that these kinds of conversations
00:33:19.500 | are kind of what science is about in many ways.
00:33:23.340 | The reason we write papers, this exchange of ideas,
00:33:26.480 | it's much harder to do interdisciplinary papers,
00:33:30.540 | I would say. - Yeah, that's correct.
00:33:32.820 | - And conversations are easier.
00:33:35.140 | So, conversations is a beginning.
00:33:36.820 | And in the field of AI, it's obvious
00:33:41.180 | that we should think outside
00:33:42.860 | of pure computer vision competitions
00:33:46.240 | and particular data sets.
00:33:47.540 | We should think about the broader impact
00:33:49.660 | of how this can be, you know,
00:33:52.460 | reaching out to physics, to psychology, to neuroscience,
00:33:56.180 | and having these conversations.
00:33:58.140 | So, you're an inspiration.
00:34:00.580 | And so, never know how the world changes.
00:34:05.220 | I mean, the fact that this stuff is out there,
00:34:08.560 | and a huge number of people come up to me,
00:34:12.220 | grad students, really loving the podcast, inspired by it,
00:34:16.100 | and there will probably be
00:34:18.660 | ripple effects when they become faculty
00:34:20.740 | and so on and so on.
00:34:21.580 | We can end on a balance between pessimism and optimism.
00:34:25.300 | And Sean, thank you so much for talking to me.
00:34:27.180 | It was awesome.
00:34:28.020 | - No, Lex, thank you very much for this conversation.
00:34:29.420 | It was great.
00:34:30.260 | (upbeat music)