
Donald Hoffman: Reality is an Illusion - How Evolution Hid the Truth | Lex Fridman Podcast #293


Chapters

0:00 Introduction
1:12 Case against reality
12:40 Spacetime
37:04 Reductionism
57:30 Evolutionary game theory
85:53 Consciousness
141:13 Visualizing reality
153:48 Immanuel Kant
156:30 Ephemerality of life
164:56 Simulation theory
170:37 Difficult ideas
185:39 Love
189:14 Advice for young people
191:33 Meaning of life


00:00:00.000 | Whatever reality is, it's not what you see.
00:00:03.660 | What you see is just an adaptive fiction.
00:00:09.720 | - The following is a conversation with Donald Hoffman,
00:00:14.920 | professor of cognitive sciences at UC Irvine,
00:00:17.920 | focusing his research on evolutionary psychology,
00:00:21.200 | visual perception, and consciousness.
00:00:23.840 | He's the author of over 120 scientific papers
00:00:27.760 | on these topics, and his most recent book titled,
00:00:31.120 | "The Case Against Reality,
00:00:33.240 | "Why Evolution Hid the Truth from Our Eyes."
00:00:36.720 | I think some of the most interesting ideas in this world,
00:00:39.600 | like those of Donald Hoffman's,
00:00:41.440 | attempt to shake the foundation
00:00:43.640 | of our understanding of reality,
00:00:45.920 | and thus, they take a long time to internalize deeply.
00:00:50.320 | So proceed with caution.
00:00:52.240 | Questioning the fabric of reality
00:00:54.440 | can lead you to either madness or to truth.
00:00:58.520 | And the funny thing is, you won't know which is which.
00:01:01.420 | This is the Lex Fridman Podcast.
00:01:04.240 | To support it, please check out our sponsors
00:01:06.420 | in the description.
00:01:07.640 | And now, dear friends, here's Donald Hoffman.
00:01:11.080 | In your book, "The Case Against Reality,
00:01:14.240 | "Why Evolution Hid the Truth from Our Eyes,"
00:01:17.200 | you make the bold claim that the world we see
00:01:20.000 | with our eyes is not real.
00:01:21.960 | It's not even an abstraction of objective reality.
00:01:24.560 | It is completely detached from objective reality.
00:01:28.800 | Can you explain this idea?
00:01:30.880 | - Right, so this is a theorem from evolution
00:01:33.000 | by natural selection.
00:01:34.160 | So the technical question that I and my team asked was,
00:01:38.540 | what is the probability that natural selection
00:01:41.360 | would shape sensory systems to see true properties
00:01:44.360 | of objective reality?
00:01:45.400 | And to our surprise, we found that the answer
00:01:48.240 | is precisely zero, except for one kind of structure
00:01:51.400 | that we can go into if you want to.
00:01:52.480 | But for any generic structure that you might think
00:01:55.400 | the world might have, a total order, a topology metric,
00:01:59.160 | the probability is precisely zero that natural selection
00:02:03.160 | would shape any sensory system of any organism
00:02:06.500 | to see any aspect of objective reality.
00:02:08.560 | So in that sense, what we're seeing is
00:02:12.440 | what we need to see to stay alive long enough to reproduce.
00:02:18.700 | So in other words, we're seeing what we need
00:02:20.600 | to guide adaptive behavior.
00:02:22.440 | Full stop.
00:02:23.480 | - So the evolutionary process, the process that took us
00:02:27.860 | from the origin of life on Earth to the humans
00:02:31.160 | that we are today, that process does not maximize
00:02:35.760 | for truth, it maximizes for fitness, as you say.
00:02:39.240 | Fitness beats truth.
00:02:41.280 | And fitness does not have to be connected to truth,
00:02:45.000 | is the claim.
00:02:46.680 | And that's where you have an approach towards zero
00:02:49.920 | of probability that we have evolved human cognition,
00:02:54.920 | human consciousness, whatever it is,
00:02:58.360 | the magic that makes our mind work,
00:03:00.640 | evolved not for its ability to see the truth of reality,
00:03:05.640 | but its ability to survive in the environment.
00:03:09.160 | - That's exactly right.
00:03:10.240 | So most of us intuitively think that surely
00:03:14.200 | the way that evolution will make our senses more fit
00:03:18.600 | is to make them tell us more truths,
00:03:21.120 | or at least the truths we need to know
00:03:22.960 | about objective reality, the truths we need in our niche.
00:03:26.120 | That's the standard view, and it was the view I took.
00:03:27.960 | I mean, that's sort of what we're taught
00:03:30.520 | or just even assume.
00:03:31.840 | It's just sort of like the intelligent assumption
00:03:33.480 | that we would all make.
00:03:34.720 | But we don't have to just wave our hands.
00:03:37.840 | Evolution by natural selection
00:03:38.960 | is a mathematically precise theory.
00:03:41.220 | John Maynard Smith in the '70s
00:03:44.200 | created evolutionary game theory.
00:03:45.600 | And we have evolutionary graph theory
00:03:48.140 | and even genetic algorithms that we can use to study this.
00:03:50.520 | And so we don't have to wave our hands.
00:03:52.360 | It's a matter of theorem and proof and/or simulation
00:03:55.480 | before you get the theorems and proofs.
00:03:56.720 | And a couple of graduate students of mine,
00:03:59.440 | Justin Mark and Brian Marion,
00:04:01.180 | did some wonderful simulations that tipped me off
00:04:03.960 | that there was something going on here.
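A minimal sketch of the kind of simulation being described (not the actual code from Mark and Marion's studies): it assumes an invented, non-monotonic payoff curve over a resource quantity and compares a strategy that perceives the true quantity against one that perceives only the payoff. All function names and parameters here are illustrative.

```python
import math
import random

def payoff(quantity: float, peak: float = 50.0, width: float = 15.0) -> float:
    """Non-monotonic fitness payoff: too little or too much of the resource is bad."""
    return 100.0 * math.exp(-((quantity - peak) / width) ** 2)

def choose_by_truth(q1: float, q2: float) -> float:
    """'Truth' strategy: perceives the actual quantities and takes the larger one."""
    return q1 if q1 >= q2 else q2

def choose_by_fitness(q1: float, q2: float) -> float:
    """'Fitness-only' strategy: perceives only the payoffs and takes the higher one."""
    return q1 if payoff(q1) >= payoff(q2) else q2

def run(trials: int = 100_000, seed: int = 0) -> tuple[float, float]:
    rng = random.Random(seed)
    truth_total = fitness_total = 0.0
    for _ in range(trials):
        q1, q2 = rng.uniform(0.0, 100.0), rng.uniform(0.0, 100.0)
        truth_total += payoff(choose_by_truth(q1, q2))
        fitness_total += payoff(choose_by_fitness(q1, q2))
    return truth_total / trials, fitness_total / trials

if __name__ == "__main__":
    truth_mean, fitness_mean = run()
    print(f"mean payoff, sees true quantity : {truth_mean:.2f}")
    print(f"mean payoff, sees only payoff   : {fitness_mean:.2f}")
```

Because the payoff is not monotone in the quantity, the strategy tuned to payoff never does worse, and typically does better, than the strategy tuned to the true quantity, which is the flavor of result the simulations pointed to.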
00:04:06.480 | And then I went to a mathematician, Chetan Prakash
00:04:08.960 | and Manish Singh and some other friends of mine,
00:04:13.040 | Chris Fields.
00:04:14.440 | But Chetan was the real mathematician behind all this.
00:04:17.000 | And he's proved several theorems
00:04:18.540 | that uniformly indicate that with one exception,
00:04:21.820 | which has to do with probability measures,
00:04:23.920 | there's no, the probability is zero.
00:04:28.260 | The reason there's an exception for probability measures,
00:04:30.900 | so-called sigma algebras or sigma additive classes,
00:04:35.900 | is that for any scientific theory,
00:04:38.760 | there is the assumption that needs to be made
00:04:43.420 | that the whatever structure,
00:04:48.420 | whatever probabilistic structure the world may have
00:04:51.760 | is not unrelated to the probabilistic structure
00:04:55.520 | of our perceptions.
00:04:56.360 | If they were completely unrelated,
00:04:57.360 | then no science would be possible.
00:04:59.560 | So this is technically,
00:05:01.600 | the map from reality to our senses
00:05:05.080 | has to be a so-called measurable map,
00:05:07.040 | has to preserve sigma algebras.
00:05:08.640 | But that means it could be infinite to one
00:05:10.360 | and it could collapse all sorts of event information.
00:05:14.340 | But other than that, there's no requirement
00:05:17.060 | in standard evolutionary theory
00:05:18.980 | for fitness payoff functions, for example,
00:05:22.460 | to preserve any specific structures of objective reality.
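In standard notation, the measurability condition being described can be sketched as follows (a textbook statement, with the world and perceptual spaces written abstractly):

```latex
% Sketch of the measurability condition on a perception map.
% (W, \Sigma_W): the world with its sigma-algebra of events.
% (X, \Sigma_X): the space of perceptual experiences with its sigma-algebra.
P : W \to X \ \text{ is measurable } \iff \ \forall A \in \Sigma_X,\; P^{-1}(A) \in \Sigma_W
% Measurability alone permits P to be many-to-one (even infinite-to-one),
% so it can collapse arbitrary amounts of event structure while still
% letting probabilities of perceptual events be well defined.
```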
00:05:25.580 | So you can ask the technical question.
00:05:27.100 | This is one of the avenues we took.
00:05:29.200 | If you look at all the fitness payoffs
00:05:32.940 | from whatever world structure you might want to imagine,
00:05:37.960 | so a world with say a total order on it.
00:05:40.460 | So it's got N states and they're totally ordered.
00:05:44.100 | And then you can have a set of maps from that world
00:05:48.140 | into a set of payoffs, say from zero to a thousand
00:05:50.580 | or whatever you want your payoffs to be.
00:05:52.700 | And you can just literally count all the payoff functions
00:05:56.260 | and just do the combinatorics and count them.
00:05:58.140 | Then you can ask a precise question.
00:05:59.820 | How many of those payoff functions
00:06:02.220 | preserve the total order, if that's what you're looking for,
00:06:04.860 | or how many preserve the topology?
00:06:07.300 | And you just count them and divide.
00:06:08.800 | So the number that are homomorphisms
00:06:11.880 | versus the total number, and then take the limit
00:06:14.160 | as the number of states in the world
00:06:16.660 | and the number of payoff values goes very large.
00:06:19.960 | And when you do that, you get zero every time.
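A back-of-the-envelope version of this counting argument can be sketched in a few lines of Python (a toy illustration, not the combinatorics from the published theorems; "preserving the total order" is taken here to mean weakly monotone, and the number of world states and payoff values are set equal for simplicity):

```python
from math import comb

def total_maps(n_states: int, n_payoffs: int) -> int:
    """All payoff functions from n totally ordered world states into n_payoffs values."""
    return n_payoffs ** n_states

def monotone_maps(n_states: int, n_payoffs: int) -> int:
    """Weakly order-preserving payoff functions: a stars-and-bars count."""
    return comb(n_states + n_payoffs - 1, n_states)

if __name__ == "__main__":
    for n in (5, 10, 20, 40):
        frac = monotone_maps(n, n) / total_maps(n, n)
        print(f"n = {n:3d}: fraction of order-preserving payoffs = {frac:.3e}")
```

Even at a few dozen states the order-preserving fraction is already vanishingly small, and it goes to zero in the limit, which is the behavior being described.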
00:06:21.580 | - Okay, there's a million things to ask here.
00:06:24.400 | But first of all, just in case people are not familiar
00:06:28.920 | with your work, let's sort of linger
00:06:32.160 | on the big, bold statement here.
00:06:36.520 | Which is, the thing we see with our eyes
00:06:39.400 | is not some kind of limited window into reality.
00:06:45.040 | It is completely detached from reality.
00:06:47.860 | Likely completely detached from reality.
00:06:49.600 | You're saying 100% likely.
00:06:51.260 | Okay, so none of this is real in the way we think is real.
00:06:57.560 | In the way we have this intuition,
00:07:00.020 | there's like this table is some kind of abstraction,
00:07:05.320 | but underneath it all, there's atoms.
00:07:07.880 | And there's an entire century of physics
00:07:09.880 | that describes the functioning of those atoms
00:07:12.020 | and the quarks that make them up.
00:07:13.720 | There's many Nobel Prizes about particles and fields
00:07:18.720 | and all that kind of stuff that slowly builds up
00:07:23.300 | to something that's perceivable to us,
00:07:25.200 | both with our eyes, with our different senses,
00:07:27.720 | as this table.
00:07:29.960 | Then there's also ideas of chemistry
00:07:33.560 | that over layers of abstraction from DNA to embryos,
00:07:38.560 | to cells that make the human body.
00:07:42.960 | So all of that is not real.
00:07:46.680 | - It's a real experience.
00:07:48.320 | And it's a real adaptive set of perceptions.
00:07:52.560 | So it's an adaptive set of perceptions, full stop.
00:07:56.200 | We want to think that-- - So the perceptions are real.
00:07:58.600 | - So their perceptions are real as perceptions.
00:08:01.080 | Right, we are having our perceptions,
00:08:03.600 | but we've assumed that there's a pretty tight relationship
00:08:06.900 | between our perceptions and reality.
00:08:08.940 | If I look up and see the moon,
00:08:11.680 | then there is something that exists in space and time
00:08:15.120 | that matches what I perceive.
00:08:18.920 | And all I'm saying is that if you take evolution
00:08:23.920 | by natural selection seriously, then that is precluded.
00:08:30.320 | Our perceptions are there.
00:08:31.880 | They're there to guide adaptive behavior, full stop.
00:08:35.200 | They're not there to show you the truth.
00:08:36.760 | In fact, the way I think about it is
00:08:38.780 | they're there to hide the truth
00:08:40.680 | because the truth is too complicated.
00:08:42.560 | It's just like if you're trying to use your laptop
00:08:45.760 | to write an email, right?
00:08:47.520 | What you're doing is toggling voltages in the computer.
00:08:50.280 | But good luck trying to do it that way.
00:08:52.640 | The reason why we have a user interface
00:08:54.320 | is because we don't want to know that quote unquote truth,
00:08:56.640 | the diodes and resistors and all that terrible hardware.
00:08:59.600 | If you had to know all that truth,
00:09:01.600 | your friends wouldn't hear from you.
00:09:04.000 | So what evolution gave us was perceptions
00:09:08.360 | that guide adaptive behavior.
00:09:10.200 | And part of that process, it turns out,
00:09:12.000 | means hiding the truth and giving you eye candy.
00:09:16.680 | - So what's the difference between hiding the truth
00:09:20.700 | and forming abstractions,
00:09:22.860 | layers upon layers of abstractions
00:09:26.560 | over low-level voltages and transistors and chips
00:09:31.080 | and programming languages from assembly to Python
00:09:37.720 | that then leads you to be able to have an interface
00:09:40.380 | like Chrome where you open up another set of JavaScript
00:09:44.000 | and HTML programming languages
00:09:46.800 | that leads you to have a graphical user interface
00:09:49.360 | on which you can then send your friends an email.
00:09:53.320 | Is that completely detached from the zeros and ones
00:09:58.320 | that are firing away inside the computer?
00:10:01.560 | - It's not.
00:10:02.880 | Of course, when I talk about the user interface
00:10:04.800 | on your desktop, there's this whole sophisticated
00:10:09.800 | backstory to it, right, that the hardware and the software
00:10:13.120 | that's allowing that to happen.
00:10:15.040 | Evolution doesn't tell us the backstory, right?
00:10:17.200 | So the theory of evolution is not going to be adequate
00:10:20.380 | to tell you what is that backstory.
00:10:23.040 | It's going to say that whatever reality is,
00:10:27.120 | and that's the interesting thing,
00:10:27.960 | it says whatever reality is, you don't see it.
00:10:31.240 | You see a user interface, but it doesn't tell you
00:10:34.280 | what that user interface is, how it's built, right?
00:10:38.800 | Now we can try to look at certain aspects of the interface,
00:10:43.800 | but already we're going to look at that and go,
00:10:46.040 | okay, before I would look at neurons
00:10:47.920 | and I was assuming that I was seeing something
00:10:49.600 | that was at least partially true.
00:10:52.800 | And now I'm realizing that it could be like looking
00:10:54.840 | at the pixels on my desktop or icons on my desktop
00:10:59.220 | and good luck going from that to the data structures
00:11:02.560 | and then the voltages, I mean, good luck.
00:11:05.320 | There's just no way.
00:11:06.920 | So what's interesting about this is that
00:11:08.560 | our scientific theories are precise enough
00:11:12.960 | and rigorous enough to tell us certain limits,
00:11:15.600 | but, and even limits of the theories themselves,
00:11:20.000 | but they're not going to tell us what the next move is
00:11:23.200 | and that's where scientific creativity comes in.
00:11:25.860 | So the stuff that I'm saying here, for example,
00:11:28.920 | is not alien to physicists.
00:11:31.000 | The physicists are saying precisely the same thing,
00:11:33.640 | that space-time is doomed.
00:11:35.220 | We've assumed that space-time is fundamental.
00:11:37.200 | We've assumed that for several centuries
00:11:39.320 | and it's been very useful.
00:11:40.740 | So all the things that you were mentioning,
00:11:41.940 | the particles and all the work that's been done,
00:11:43.900 | that's all been done in space-time,
00:11:45.080 | but now physicists are saying space-time is doomed.
00:11:47.480 | There's no such thing as space-time fundamentally
00:11:51.680 | in the laws of physics.
00:11:54.040 | And that comes actually out of gravity
00:11:58.560 | together with quantum field theory.
00:11:59.920 | It just comes right out of it.
00:12:01.040 | It's a theorem of those two theories put together,
00:12:05.680 | but it doesn't tell you what's behind it.
00:12:07.980 | So the physicists know that their best theories,
00:12:11.840 | Einstein's gravity and quantum field theory put together,
00:12:15.160 | entail that space-time cannot be fundamental
00:12:17.440 | and therefore particles in space-time cannot be fundamental.
00:12:20.920 | They're just irreducible representations
00:12:22.560 | of the symmetries of space-time.
00:12:23.800 | That's what they are.
00:12:24.800 | So we have, so space-time, so we put the two together.
00:12:27.640 | We put together what the physicists are discovering
00:12:29.720 | and we can talk about how they do that.
00:12:32.240 | And then the new discoveries
00:12:33.820 | from evolution of natural selection,
00:12:35.400 | both of these discoveries are really in the last 20 years.
00:12:38.760 | And what both are saying is space-time has had a good ride.
00:12:42.400 | It's been very useful.
00:12:44.360 | Reductionism has been useful, but it's over.
00:12:46.520 | And it's time for us to go beyond.
00:12:48.360 | - When you say space-time is doomed,
00:12:50.500 | is it the space, is it the time,
00:12:53.160 | is it the very hard-coded specification of four dimensions?
00:12:57.620 | Or are you specifically referring
00:13:01.520 | to the kind of perceptual domain that humans operate in,
00:13:06.400 | which is space-time?
00:13:07.320 | You think like there's a 3D,
00:13:09.580 | like our world is three-dimensional
00:13:13.160 | and time progresses forward,
00:13:15.400 | therefore three dimensions plus one, 4D.
00:13:18.080 | What exactly do you mean by space-time?
00:13:20.600 | (laughing)
00:13:21.840 | What do you mean by space-time is doomed?
00:13:24.040 | - Great, great.
00:13:24.880 | So this is, by the way, not my quote.
00:13:26.560 | This is from, for example, Nima Arkani-Hamed
00:13:29.840 | at the Institute for Advanced Study at Princeton.
00:13:31.720 | Ed Witten, also there.
00:13:34.320 | David Gross, Nobel Prize winner.
00:13:36.520 | So this is not just something the cognitive scientists,
00:13:39.360 | this is what the physicists are saying.
00:13:40.640 | - Yeah, the physicists, they're space-time skeptics.
00:13:45.100 | - Well, yeah, they're saying that,
00:13:46.640 | and I can say exactly why they think it's doomed,
00:13:49.560 | but what they're saying is that,
00:13:51.600 | 'cause your question was what aspect of space-time,
00:13:53.840 | what are we talking about here?
00:13:54.960 | It's both space and time,
00:13:56.880 | their union into space-time as in Einstein's theory,
00:13:59.720 | that's doomed.
00:14:01.240 | And they're basically saying that even quantum theory,
00:14:06.240 | this is Nima Arkani-Hamed especially.
00:14:09.200 | So Hilbert spaces will not be fundamental either.
00:14:12.880 | So that the notion of Hilbert space,
00:14:15.520 | which is really critical to quantum field theory,
00:14:18.960 | quantum information theory,
00:14:20.960 | that's not going to figure in the fundamental
00:14:23.980 | new laws of physics.
00:14:25.240 | So what they're looking for
00:14:26.800 | is some new mathematical structures beyond space-time,
00:14:31.800 | beyond Einstein's four-dimensional space-time
00:14:35.240 | or supersymmetric version,
00:14:38.000 | a geometric algebra with signature (2,4), that kind of thing,
00:14:41.120 | there are different ways that you can represent it,
00:14:43.240 | but they're finding new structures.
00:14:45.520 | And then by the way, they're succeeding now.
00:14:47.280 | They're finding, they found something
00:14:48.400 | called the amplituhedron.
00:14:49.640 | This is Nima and his colleagues,
00:14:51.480 | the cosmological polytope.
00:14:53.560 | These are, so there are these like polytopes,
00:14:57.360 | these polyhedra in multi dimensions,
00:15:00.280 | generalizations of simplices that are coding for,
00:15:05.280 | for example, the scattering amplitudes of processes
00:15:08.200 | in the Large Hadron Collider and other colliders.
00:15:10.860 | So they're finding that if they let go of space-time,
00:15:14.000 | completely, they're finding new ways of computing
00:15:17.400 | these scattering amplitudes that turn literally
00:15:20.480 | billions of terms into one term.
00:15:22.560 | When you do it in space and time,
00:15:25.000 | because it's the wrong framework,
00:15:26.760 | it's just a user interface from,
00:15:29.520 | that's not from the evolutionary point of view,
00:15:30.880 | it's just user interface.
00:15:32.120 | It's not a deep insight into the nature of reality.
00:15:34.560 | So it's missing deep symmetry,
00:15:36.920 | it's something called a dual conformal symmetry,
00:15:38.980 | which turns out to be true of the scattering data,
00:15:40.900 | but you can't see it in space-time,
00:15:42.680 | and it's making the computations way too complicated
00:15:46.240 | 'cause you're trying to compute all the loops
00:15:47.860 | and the Feynman diagrams and all the Feynman integrals.
00:15:50.280 | So see the Feynman approach to the scattering amplitudes
00:15:52.960 | is trying to enforce two critical properties of space-time,
00:15:56.480 | locality and unitarity.
00:15:58.560 | And so by, when you enforce those,
00:16:00.400 | you get all these loops and multiple,
00:16:02.880 | you know, different levels of loops.
00:16:04.560 | And for each of those,
00:16:05.380 | you have to add new terms to your computation.
00:16:07.560 | But when you do it outside of space-time,
00:16:11.040 | you don't have the notion of unitarity,
00:16:13.760 | you don't have the notion of locality,
00:16:15.720 | you have something deeper,
00:16:17.280 | and it's capturing some symmetries
00:16:18.800 | that are actually true of the data.
00:16:20.760 | And, but then when you look at the geometry
00:16:22.960 | of the facets of these polytopes,
00:16:25.340 | then certain of them will code for unitarity and locality.
00:16:30.340 | So it actually comes out of the structure
00:16:32.360 | of these deep polytopes.
00:16:33.480 | So what we're finding is there's this whole new world.
00:16:36.480 | Now, beyond space-time,
00:16:39.440 | that is making explicit symmetries
00:16:42.080 | that are true of the data
00:16:42.960 | that cannot be seen in space-time.
00:16:45.000 | And that is turning the computations
00:16:46.680 | from billions of terms to one or two or a handful of terms.
00:16:50.380 | So we're getting insights into symmetries,
00:16:53.280 | and all of a sudden the math is becoming simple
00:16:55.360 | because we're not doing something silly,
00:16:56.840 | we're not adding up all these loops in space-time,
00:16:59.020 | we're doing something far deeper.
00:17:00.800 | But they don't know what this world is about.
00:17:03.080 | So, they're in an interesting position
00:17:05.880 | where we know that space-time is doomed,
00:17:09.040 | and I should probably tell you why it's doomed,
00:17:11.320 | what they're saying about why it's doomed.
00:17:12.800 | But they need a flashlight to look beyond space-time.
00:17:15.560 | What flashlight are we gonna use
00:17:17.380 | to look into the dark beyond space-time?
00:17:19.680 | Because Einstein's theory and quantum theory
00:17:22.240 | can't tell us what's beyond them.
00:17:23.900 | All they can do is tell us that when you put us together,
00:17:26.240 | space-time is doomed at 10 to the minus 33 centimeters,
00:17:30.040 | 10 to the minus 43 seconds.
00:17:31.560 | Beyond that, space-time doesn't even make sense.
00:17:34.000 | It just has no operational definition.
00:17:36.040 | But it doesn't tell you what's beyond,
00:17:38.960 | and so they're just looking for deep structures,
00:17:41.520 | like guessing is really fun.
00:17:43.660 | So these really brilliant guys, generic,
00:17:46.620 | brilliant men and women who are doing this work,
00:17:48.920 | physicists, are making guesses about these structures,
00:17:52.260 | informed guesses, because they're trying to ask,
00:17:54.120 | well, okay, what deeper structure could give us
00:17:56.560 | the stuff that we're seeing in space-time,
00:17:58.360 | but without certain commitments
00:17:59.960 | that we have to make in space-time, like locality.
00:18:02.840 | So they make these brilliant guesses,
00:18:04.680 | and of course, most of the time you're gonna be wrong,
00:18:06.640 | but once you get one or two, that start to pay off.
00:18:09.600 | And then you get some lucky breaks.
00:18:11.360 | So they got a lucky break back in 1986.
00:18:14.060 | Couple of mathematicians named Parke and Taylor
00:18:17.820 | took the scattering amplitude for two gluons coming in
00:18:22.660 | at high energy and four gluons going out at low energy,
00:18:25.560 | so that kind of scattering thing.
00:18:27.200 | So apparently for people who are into this,
00:18:30.280 | that's sort of something that happens so often,
00:18:32.040 | you need to be able to find it and get rid of those,
00:18:34.360 | 'cause you already know about that and you need to,
00:18:36.160 | so you needed to compute them.
00:18:37.280 | It was billions of terms, and they couldn't do it,
00:18:39.880 | even for the supercomputers couldn't do that
00:18:41.960 | for the many billions or millions of times per second
00:18:44.560 | they needed to do it.
00:18:45.400 | So they begged, the experimentalists begged the theorists,
00:18:49.040 | please, you gotta.
00:18:51.280 | And so Park and Taylor took the billions of terms,
00:18:53.140 | hundreds of pages, and miraculously,
00:18:56.560 | they turned it into nine.
00:18:58.120 | And then a little bit later,
00:18:59.400 | they guessed one term expression
00:19:01.240 | that turned out to be equivalent.
00:19:02.480 | So billions of terms reduced to one term,
00:19:07.280 | that so-called famous Parke-Taylor formula, 1986.
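For reference, the one-term expression being referred to is the Parke-Taylor (maximally helicity violating, or MHV) amplitude. In spinor-helicity notation, with gluons i and j carrying negative helicity and the rest positive, and with coupling, color, and momentum-conservation factors omitted (conventions vary), it has the form:

```latex
% Parke-Taylor (MHV) tree-level amplitude for n color-ordered gluons,
% with gluons i and j of negative helicity and all others positive.
A_n\!\left(1^{+} \cdots i^{-} \cdots j^{-} \cdots n^{+}\right)
  \;\propto\;
  \frac{\langle i\,j\rangle^{4}}
       {\langle 1\,2\rangle\,\langle 2\,3\rangle \cdots \langle n\,1\rangle}
```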
00:19:10.760 | And that was like, okay, where did that come from?
00:19:14.280 | This is a pointer into a deep realm beyond space and time,
00:19:18.860 | but no one, I mean, what can you do with it?
00:19:21.480 | And they thought maybe it was a one-off,
00:19:23.160 | but then other formulas started coming up,
00:19:25.880 | and then eventually, Nima Arkani-Hamed
00:19:28.000 | and his team found this thing called the amplituhedron,
00:19:30.240 | which really sort of captures the whole,
00:19:32.680 | a big part of the whole ball of wax.
00:19:34.640 | I'm sure they would say, no, there's plenty more to do,
00:19:37.880 | so I won't say they did it all by any means.
00:19:40.520 | They're looking at the cosmological polytope as well.
00:19:42.700 | So what's remarkable to me is that two pillars
00:19:47.700 | of modern science, quantum field theory with gravity,
00:19:51.700 | on the one hand,
00:19:52.920 | and evolution by natural selection on the other.
00:19:55.540 | Just in the last 20 years have very clearly said,
00:19:59.080 | space-time has had a good run,
00:20:01.120 | reductionism has been a fantastic methodology.
00:20:03.840 | So we had a great ontology of space-time,
00:20:05.720 | a great methodology of reductionism.
00:20:07.960 | Now it's time for a new trick.
00:20:09.480 | With now you need to go deeper and show,
00:20:13.040 | but by the way, this doesn't mean we throw away
00:20:14.960 | everything we've done, not by a long shot.
00:20:17.200 | Every new idea that we come up with beyond space-time
00:20:20.760 | must project precisely into space-time,
00:20:23.320 | and it better give us back everything
00:20:25.000 | that we know and love in space-time, or generalizations,
00:20:28.720 | or it's not gonna be taken seriously, and it shouldn't be.
00:20:30.820 | So we have a strong constraint
00:20:33.460 | on whatever we're going to do beyond space-time.
00:20:35.500 | It needs to project into space-time,
00:20:37.620 | and whatever this deeper theory is,
00:20:39.380 | it may not itself have evolution by natural selection.
00:20:42.820 | This may not be part of this deeper realm,
00:20:44.460 | but when we take whatever that thing is beyond space-time
00:20:47.500 | and project it into space-time,
00:20:49.180 | it has to look like evolution by natural selection,
00:20:51.740 | or it's wrong.
00:20:52.940 | So that's a strong constraint on this work.
00:20:57.400 | - So even the evolution by natural selection
00:21:00.880 | and quantum field theory could be interfaces
00:21:05.880 | into something that doesn't look anything like,
00:21:11.440 | like you mentioned, I mean, it's interesting to think
00:21:13.740 | that evolution might be a very crappy interface
00:21:16.880 | into something much deeper.
00:21:18.360 | - That's right.
00:21:19.200 | They're both telling us that the framework
00:21:21.280 | that you've had can only go so far, and it has to stop,
00:21:24.180 | and there's something beyond.
00:21:25.580 | And that framework, the very framework
00:21:27.100 | that is space and time itself.
00:21:29.200 | Now, of course, evolution by natural selection
00:21:32.400 | is not telling us about, like,
00:21:34.960 | Einstein's relativistic space-time.
00:21:36.340 | So that was another question you asked a little bit earlier.
00:21:38.340 | It's telling us more about our perceptual space and time,
00:21:42.420 | which we have used as the basis
00:21:45.260 | for creating first a Newtonian space versus time
00:21:49.700 | as a mathematical extension of our perceptions.
00:21:53.380 | And then Einstein then took that
00:21:55.440 | and extended it even further.
00:21:56.700 | So the relationship between what evolution is telling us
00:21:59.140 | and what the physicists are telling us is that,
00:22:01.020 | in some sense, the Newton and Einstein space-time
00:22:05.860 | are formulated as sort of rigorous extensions
00:22:11.300 | of our perceptual space, making it mathematically rigorous
00:22:15.500 | and laying out the symmetries that they find there.
00:22:19.100 | So that's sort of the relationship between them.
00:22:20.740 | So it's the perceptual space-time
00:22:22.460 | that evolution is telling us
00:22:24.060 | is just a user interface, effectively.
00:22:27.740 | And then the physicists are finding
00:22:28.960 | that even the mathematical extension of that
00:22:31.440 | into the Einsteinian formulation has to be, as well,
00:22:36.140 | not the final story, there's something deeper.
00:22:38.140 | - So let me ask you about reductionism and interfaces.
00:22:43.140 | As we march forward from Newtonian physics
00:22:47.960 | to quantum mechanics, these are all, in your view, interfaces.
00:22:52.960 | Are we getting closer to objective reality?
00:22:59.260 | How do we know, if these interfaces,
00:23:02.280 | in the process of science,
00:23:04.860 | the reason we like those interfaces
00:23:06.880 | is because they're predictive of some aspects,
00:23:09.700 | strongly predictive about some aspects of our reality.
00:23:14.140 | Is that completely deviating
00:23:16.060 | from our understanding of that reality,
00:23:19.560 | or is it helping us get closer and closer and closer?
00:23:22.760 | - Well, of course, one critical constraint
00:23:24.560 | on all of our theories is that they are empirically tested
00:23:27.220 | and pass the experiments that we have for them.
00:23:30.800 | So no one's arguing against experiments being important
00:23:34.440 | and wanting to test all of our current theories
00:23:38.420 | and any new theories on that.
00:23:40.600 | So that's all there.
00:23:44.300 | But we have good reason to believe
00:23:48.060 | that science will never get a theory of everything.
00:23:51.460 | - Everything, everything.
00:23:53.600 | - Everything, everything, right,
00:23:54.540 | a final theory of everything, right.
00:23:56.420 | I think that my own take is, for what it's worth,
00:23:59.140 | is that Gödel's incompleteness theorem
00:24:01.620 | sort of points us in that direction,
00:24:03.140 | that even with mathematics,
00:24:04.800 | any finite axiomatization that's sophisticated enough
00:24:09.440 | to be able to do arithmetic,
00:24:11.060 | it's easy to show that there'll be statements that are true,
00:24:14.260 | that can't be proven,
00:24:16.940 | can't be deduced from within that framework.
00:24:19.500 | And if you add the new statements to your axioms,
00:24:21.940 | then there'll be always new statements that are true,
00:24:24.300 | but can't be proven with a new axiom system.
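Stated a bit more carefully (a standard textbook formulation, not specific to this conversation):

```latex
% Gödel's first incompleteness theorem, informally stated.
% For any consistent, recursively axiomatizable theory T that interprets
% enough arithmetic, there is a sentence G_T such that
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T
% Adding G_T (or its negation) as an axiom gives a stronger theory to which
% the theorem applies again, yielding a new undecidable sentence.
```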
00:24:26.960 | And the best scientific theories,
00:24:30.020 | in physics, for example,
00:24:32.660 | and also now evolution, are mathematical.
00:24:35.100 | So our theories are gonna be,
00:24:36.340 | they're gonna have their own assumptions,
00:24:38.420 | and they'll be mathematically precise.
00:24:41.740 | And there'll be theories, perhaps,
00:24:42.840 | of everything except those assumptions,
00:24:44.440 | 'cause assumptions are,
00:24:45.640 | we say, "Please grant me these assumptions.
00:24:48.240 | "If you grant me these assumptions,
00:24:49.400 | "then I can explain this other stuff."
00:24:51.540 | But so you have the assumptions that are like miracles,
00:24:56.540 | as far as the theory is concerned, they're not explained,
00:24:59.440 | they're the starting points for explanation.
00:25:01.560 | And then you have the mathematical structure
00:25:03.200 | of the theory itself, which will have the Gödel limits.
00:25:07.560 | And so my take is that reality, whatever it is,
00:25:12.560 | is always going to transcend any conceptual theory
00:25:18.600 | that we didn't come up with.
00:25:22.420 | - There's always gonna be mystery at the edges.
00:25:24.760 | (Lex laughing)
00:25:26.900 | - Right.
00:25:27.740 | - Contradictions and all that kind of stuff.
00:25:29.380 | Okay.
00:25:30.220 | And truths.
00:25:32.860 | So there's this idea that is brought up
00:25:34.840 | in the financial space of settlement of transactions,
00:25:39.440 | it's often talked about in cryptocurrency especially.
00:25:42.560 | So you could do, you know, money,
00:25:44.060 | cash is not connected to anything.
00:25:48.640 | It used to be connected to gold, to physical reality,
00:25:52.180 | but then you can use money to exchange value, to transact.
00:25:57.180 | So when it was on the gold standard,
00:25:59.760 | the money would represent some stable component of reality.
00:26:05.240 | Isn't it more effective to avoid things like hyperinflation,
00:26:10.240 | if we generalize that idea?
00:26:14.160 | Isn't it better to connect your,
00:26:19.160 | whatever we humans are doing
00:26:20.520 | in the social interaction space with each other,
00:26:23.040 | isn't it better from an evolutionary perspective
00:26:26.000 | to connect it to some degree to reality
00:26:28.040 | so that the transactions are settled
00:26:31.640 | with something that's universal,
00:26:33.760 | as opposed to us constantly operating
00:26:35.920 | in something that's a complete illusion?
00:26:38.120 | Isn't it easy to hyperinflate that?
00:26:40.540 | (Lex laughs)
00:26:41.440 | Like where you really deviate very, very far away
00:26:45.280 | from the underlying reality,
00:26:51.040 | or do you never get in trouble for this?
00:26:53.760 | Can you just completely drift far, far away
00:26:58.240 | from the underlying reality and never get in trouble?
00:27:01.600 | - That's a great question.
00:27:02.840 | On the financial side, there's two levels, at least,
00:27:05.760 | that we could take your question.
00:27:06.920 | One is strictly evolutionary psychology
00:27:09.840 | of financial systems, and that's pretty interesting.
00:27:13.520 | And there, the decentralized idea,
00:27:15.120 | the DeFi kind of idea in cryptocurrencies
00:27:18.560 | may make good sense
00:27:19.960 | from just an evolutionary psychology point of view.
00:27:22.440 | Having human nature being what it is,
00:27:25.760 | putting a lot of faith in a few central controllers
00:27:30.640 | depends a lot on the veracity of those
00:27:34.320 | and trustworthiness of those few central controllers.
00:27:37.080 | And we have ample evidence time and again
00:27:39.440 | that that's often betrayed.
00:27:41.880 | So it makes good evolutionary sense, I would say,
00:27:44.920 | to have a decentralized,
00:27:46.680 | I mean, democracy is a step in that direction, right?
00:27:49.600 | We don't have a monarch now telling us what to do.
00:27:52.240 | We decentralize things, right?
00:27:54.580 | Because if you have Marcus Aurelius as your emperor,
00:27:57.860 | you're great, if you have Nero, it's not so great.
00:28:01.140 | And so we don't want that.
00:28:02.320 | So democracy is a step in that direction,
00:28:04.280 | but I think the DeFi thing is an even bigger step
00:28:08.780 | and is going to even make the democratization even greater.
00:28:13.140 | So that's one level of--
00:28:14.820 | - Also, the fact that power corrupts
00:28:16.500 | and absolute power corrupts absolutely
00:28:18.140 | is also a consequence of evolution.
00:28:22.600 | (laughing)
00:28:24.220 | That's also a feature, I think, right?
00:28:26.960 | You can argue from the long span of living organisms,
00:28:30.860 | it's nice for power to corrupt for you to,
00:28:33.820 | so mad men and women throughout history
00:28:38.780 | might be useful to teach us a lesson.
00:28:43.000 | - We can learn from our negative example, right?
00:28:44.740 | - Exactly.
00:28:45.580 | - Right, right.
00:28:46.400 | Right, so power does corrupt,
00:28:49.040 | and I think that you can think about that again
00:28:51.280 | from an evolutionary point of view.
00:28:53.540 | But I think that your question was a little deeper,
00:28:55.780 | when that was, does the evolutionary interface idea
00:29:00.780 | sort of unhinge science from some kind of important test
00:29:06.460 | for the theories, right?
00:29:08.820 | We don't want, it doesn't mean that anything goes
00:29:12.320 | in scientific theory, but there's no,
00:29:14.540 | if we don't see the truth,
00:29:15.900 | is there no way to tether our theories and test them?
00:29:18.620 | And I think there's no problem there.
00:29:24.900 | We can only test things in terms of what we can measure
00:29:27.820 | with our senses in space and time.
00:29:29.540 | So we're going to have to continue to do experiments,
00:29:33.700 | but we're gonna understand a little bit differently
00:29:36.860 | what those experiments are.
00:29:38.420 | We had thought that when we see a pointer
00:29:41.820 | on some machine in an experiment,
00:29:45.700 | that the machine exists, the pointer exists,
00:29:48.220 | and the values exist even when no one is looking at them,
00:29:51.260 | and that they're an object of truth.
00:29:52.740 | And our best theorists are telling us, no,
00:29:55.460 | the pointers are just pointers,
00:29:58.460 | and that's what you have to rely on
00:30:00.140 | for making your judgments.
00:30:02.860 | But even the pointers themselves
00:30:07.620 | are not the objective reality.
00:30:10.420 | So, and I think Gödel is telling us that,
00:30:13.540 | not that anything goes, but as you develop
00:30:17.700 | new axiom systems, you will find out what goes
00:30:19.700 | within that axiom system,
00:30:21.580 | and what testable predictions you can make.
00:30:23.660 | So I don't think we're untethered.
00:30:25.740 | We continue to do experiments.
00:30:27.940 | What I think we won't have that we want
00:30:31.140 | is a conceptual understanding that gives us
00:30:34.620 | a theory of everything that's final and complete.
00:30:37.500 | I think that this is, to put it another way,
00:30:40.260 | this is job security for scientists.
00:30:43.060 | Our job will never be done,
00:30:45.180 | it's job security for neuroscience.
00:30:47.700 | Because before we thought that when we looked
00:30:50.020 | in the brain, we saw neurons and neural networks
00:30:52.420 | and action potentials and synapses and so forth,
00:30:57.420 | and that was it, that was the reality.
00:31:00.320 | Now we have to reverse engineer that.
00:31:01.740 | We have to say, what is beyond space-time?
00:31:04.760 | What is going on?
00:31:05.620 | What is a dynamical system beyond space-time?
00:31:08.380 | That when we project it into Einstein's space-time,
00:31:10.500 | gives us things that look like neurons
00:31:12.340 | and neural networks and synapses.
00:31:14.140 | So we have to reverse engineer it.
00:31:16.500 | So there's gonna be lots more work for neuroscience.
00:31:19.020 | It's gonna be far more complicated
00:31:20.900 | and difficult and challenging.
00:31:23.880 | But that's wonderful, that's what we need to do.
00:31:26.060 | We thought neurons exist when they are perceived
00:31:28.420 | and they don't.
00:31:29.360 | In the same way that if I show you,
00:31:31.060 | when I say they don't exist, I should be very, very concrete.
00:31:34.480 | If I draw on a piece of paper a little sketch
00:31:37.380 | of something that is called the Necker cube,
00:31:40.540 | it's just a little line drawing of a cube, right?
00:31:42.860 | It's on a flat piece of paper.
00:31:44.020 | If I execute it well and I show it to you,
00:31:46.180 | you'll see a 3D cube and you'll see it flip.
00:31:48.380 | Sometimes you'll see one face in front,
00:31:49.780 | sometimes you'll see the other face in front.
00:31:51.920 | But if I ask you, which face is in front
00:31:54.360 | when you don't look?
00:31:55.360 | The answer is, well, neither face is in front
00:31:59.580 | 'cause there's no cube.
00:32:01.340 | There's just a flat piece of paper.
00:32:03.220 | So when you look at the piece of paper,
00:32:05.100 | you perceptually create the cube.
00:32:08.420 | And when you look at it, then you fix one face
00:32:11.500 | to be in front and one face to be.
00:32:13.100 | So that's what I mean when I say it doesn't exist.
00:32:15.980 | Space-time itself is like the cube.
00:32:18.140 | It's a data structure that your sensory systems construct,
00:32:21.980 | whatever your sensory systems mean now,
00:32:23.740 | 'cause we now have to even take that for granted.
00:32:27.340 | But there are perceptions that you construct on the fly
00:32:31.380 | and they're data structures in a computer science sense
00:32:33.980 | and you garbage collect them when you don't need them.
00:32:35.700 | So you create them and garbage collect them.
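The "data structures in a computer science sense" remark can be made concrete with a toy sketch (purely illustrative; the class names and fields are invented here, not part of the interface theory itself): a perceptual icon is constructed at the moment of observation, used to guide action, and discarded when attention moves on.

```python
from dataclasses import dataclass

@dataclass
class Icon:
    """A perceptual 'icon': a cheap, action-guiding stand-in, not the thing itself."""
    label: str          # e.g. "cube, face A in front"
    affordance: str     # what the icon is for, e.g. "graspable", "edible"

class Interface:
    """Constructs icons on the fly when observed, discards them when attention moves on."""
    def __init__(self) -> None:
        self._active: dict[str, Icon] = {}

    def look_at(self, stimulus_id: str) -> Icon:
        # Construct the percept only at the moment of observation.
        icon = Icon(label=f"percept of {stimulus_id}", affordance="act on it")
        self._active[stimulus_id] = icon
        return icon

    def look_away(self, stimulus_id: str) -> None:
        # 'Garbage collect' the percept: nothing with that structure persists unobserved.
        self._active.pop(stimulus_id, None)

if __name__ == "__main__":
    ui = Interface()
    cube = ui.look_at("necker-drawing")
    print(cube.label)            # exists only while being looked at
    ui.look_away("necker-drawing")
```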
00:32:37.500 | - But is it possible that it's mapped well
00:32:40.860 | in some concrete, predictable way to objective reality,
00:32:45.620 | the sheet of paper, this two-dimensional space,
00:32:48.620 | or we can talk about space-time,
00:32:51.220 | maps in some way that we maybe don't yet understand,
00:32:55.840 | but will one day understand what that mapping is,
00:32:59.600 | but it maps reliably, it is tethered in that way.
00:33:02.940 | - Well, yes.
00:33:03.780 | And so the new theories that the physicists are finding
00:33:06.220 | beyond space-time have that kind of tethering.
00:33:08.100 | So they show precisely how you start
00:33:10.540 | with an amplituhedron
00:33:11.780 | and how you project this high-dimensional structure
00:33:15.260 | into the four dimensions of space-time.
00:33:18.300 | So there's a precise procedure that relates the two.
00:33:22.300 | And they're doing the same thing
00:33:23.820 | with the cosmological polytopes.
00:33:25.140 | So they're the ones that are making the most concrete
00:33:29.740 | and fun advances going beyond space-time.
00:33:32.700 | And they're tethering it, right?
00:33:35.180 | They say this is precisely the mathematical projection
00:33:38.420 | from this deeper structure into space-time.
00:33:41.580 | One thing I'll say about, as a non-physicist,
00:33:44.620 | what I find interesting is that they're finding
00:33:46.620 | just geometry, but there's no notion of dynamics.
00:33:51.140 | Right now, they're just finding
00:33:52.980 | these static geometric structures, which is impressive.
00:33:57.420 | So I'm not putting them down.
00:33:58.820 | What they're doing is unbelievably complicated
00:34:01.140 | and brilliant and adventurous.
00:34:06.140 | All those things.
00:34:08.260 | - And beautiful from a human aesthetic perspective
00:34:11.700 | 'cause geometry is beautiful.
00:34:12.900 | - It's absolutely.
00:34:14.140 | And they're finding symmetries that are true of the data
00:34:16.380 | that can't be seen in space-time.
00:34:18.660 | But I'm looking for a theory beyond space-time
00:34:22.940 | that's a dynamical theory.
00:34:24.420 | I would love to find, and we can talk about that
00:34:27.500 | at some point, a theory of consciousness
00:34:29.540 | in which the dynamics of consciousness itself
00:34:33.100 | will give rise to the geometry
00:34:35.500 | that the physicists are finding beyond space-time.
00:34:37.940 | If we can do that, then we'd have a completely different way
00:34:40.420 | of looking at how consciousness is related
00:34:42.620 | to what we call the brain or the physical world
00:34:45.300 | more generally.
00:34:46.260 | Right now, all of my brilliant colleagues,
00:34:49.620 | 99% of them are trying to,
00:34:51.900 | they're assuming space-time is fundamental.
00:34:56.700 | They're assuming that particles are fundamental,
00:34:59.180 | quarks, gluons, leptons, and so forth.
00:35:01.140 | Elements, atoms, and so forth are fundamental,
00:35:04.060 | and that therefore neurons and brains
00:35:06.420 | are part of objective reality.
00:35:08.780 | And that somehow when you get matter
00:35:10.900 | that's complicated enough,
00:35:12.660 | it will somehow generate conscious experiences
00:35:16.260 | by its functional properties.
00:35:17.860 | Or if you're a panpsychist,
00:35:19.380 | maybe in addition to the physical properties of particles,
00:35:22.700 | you add consciousness property as well.
00:35:27.260 | And then you combine these physical and conscious properties
00:35:30.900 | to get more complicated ones.
00:35:32.220 | But they're all doing it within space-time.
00:35:34.740 | All of the work that's being done on consciousness
00:35:38.980 | and its relationship to the brain
00:35:41.900 | is all assumed something that our best theories
00:35:45.020 | are telling us is doomed, space-time.
00:35:46.820 | - Why does that particular assumption bother you the most?
00:35:50.500 | So you bring up space-time.
00:35:52.120 | I mean, that's just one useful interface
00:35:57.020 | we've used for a long time.
00:35:58.740 | Surely there's other interfaces.
00:36:01.780 | Is space-time just one of the big ones
00:36:05.540 | to build up people's intuition about the fact
00:36:07.380 | that they do assume a lot of things strongly?
00:36:10.380 | Or is it in fact a fundamental flaw
00:36:15.060 | in the way we see the world?
00:36:16.580 | - Well, everything else that we think we know
00:36:20.660 | are things in space-time.
00:36:21.900 | - Sure.
00:36:24.260 | - And so when you say space-time is doomed,
00:36:27.780 | this is a shot to the heart of the whole framework,
00:36:32.780 | the whole conceptual framework that we've had in science.
00:36:35.940 | Not to the scientific method,
00:36:37.700 | but to the fundamental ontology
00:36:40.820 | and also the fundamental methodology,
00:36:42.420 | the ontology of space-time and its contents
00:36:45.740 | and the methodology of reductionism,
00:36:47.340 | which is that as we go to smaller scales in space-time,
00:36:51.940 | we will find more and more fundamental laws.
00:36:55.140 | And that's been very useful for space and time for centuries,
00:36:59.440 | reductionism for centuries,
00:37:01.420 | but now we realize that that's over.
00:37:04.700 | Reductionism is in fact dead as is space-time.
00:37:08.700 | - What exactly is reductionism?
00:37:10.580 | What is the process of reductionism
00:37:13.140 | that is different than some of the physicists
00:37:17.620 | that you mentioned that are trying to think,
00:37:19.440 | trying to let go of the assumption of space-time?
00:37:22.060 | Like beyond, isn't that still trying to come up
00:37:24.540 | with a simple model that explain this whole thing?
00:37:27.340 | Isn't it still reducing?
00:37:29.380 | - It's a wonderful question
00:37:30.220 | because it really helps to clarify two different notions,
00:37:33.120 | which is scientific explanation on the one hand
00:37:35.500 | and a particular kind of scientific explanation
00:37:39.380 | on the other, which is the reductionist.
00:37:40.860 | So the reductionist explanation is saying,
00:37:43.260 | I will start with things that are smaller in space-time
00:37:47.500 | and therefore more fundamental
00:37:49.300 | where the laws are more fundamental.
00:37:51.060 | So we go to just smaller and smaller scales.
00:37:54.880 | Whereas in science more generally,
00:37:58.000 | we just say like when Einstein
00:37:59.500 | did the special theory of relativity,
00:38:01.500 | he's saying, let me have a couple postulates.
00:38:03.600 | I will assume that the speed of light is universal
00:38:06.140 | for all observers in uniform motion
00:38:11.140 | and that the laws of physics are the same
00:38:13.360 | for all observers in uniform motion.
00:38:16.780 | That's not a reductionist explanation.
00:38:18.140 | Those are saying, grant me these assumptions.
00:38:20.020 | I can build this entire concept of space-time out of it.
00:38:23.380 | It's not a reductionist thing.
00:38:24.580 | You're not going to smaller and smaller scales of space.
00:38:27.780 | You're coming up with these deep, deep principles.
00:38:30.620 | Same thing with this theory of gravity.
00:38:33.140 | It's the falling elevator idea.
00:38:35.660 | So this is not a reductionist kind of thing.
00:38:37.620 | It's something different.
00:38:39.820 | - So simplification is a bigger thing than just reductionism.
00:38:44.820 | - Reductionism has been a particularly useful
00:38:47.780 | kind of scientific explanation,
00:38:49.580 | for example, in thermodynamics.
00:38:51.140 | The notion that we have of heat,
00:38:53.420 | some macroscopic thing like temperature and heat.
00:38:56.700 | It turns out that Ludwig Boltzmann and others discovered,
00:38:59.620 | well, hey, if we go to smaller and smaller scales,
00:39:01.980 | we find these things called molecules or atoms.
00:39:04.460 | And if we think of them as bouncing around
00:39:06.420 | and having some kind of energy,
00:39:08.660 | then what we call heat really can be reduced to that.
00:39:13.660 | And so that's a particularly useful kind of reduction,
00:39:19.060 | is a useful kind of scientific explanation
00:39:21.340 | that works within a range of scales within space-time.
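The textbook version of the reduction being described, for an ideal monatomic gas, is the kinetic-theory relation between temperature and the mean kinetic energy of the molecules:

```latex
% Kinetic theory of an ideal monatomic gas: the macroscopic temperature T is
% fixed by the mean molecular kinetic energy, with k_B the Boltzmann constant.
\left\langle \tfrac{1}{2}\, m v^{2} \right\rangle \;=\; \tfrac{3}{2}\, k_{B}\, T
```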
00:39:25.460 | But we know now precisely where that has to stop.
00:39:28.420 | At 10 to the minus 33 centimeters
00:39:30.220 | and 10 to the minus 43 seconds.
00:39:32.740 | And I would be impressed
00:39:34.340 | if it was 10 to the minus 33 trillion centimeters.
00:39:37.500 | I'm not terribly impressed at 10 to the minus 33 centimeters.
00:39:40.500 | (Lex laughing)
00:39:43.580 | - I don't even know how to comprehend
00:39:44.900 | either of those numbers, frankly.
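The two numbers quoted are, to order of magnitude, the Planck length and Planck time, which follow from combining the gravitational constant, the reduced Planck constant, and the speed of light. A quick sketch of the arithmetic (constants rounded; the rounding does not affect the orders of magnitude):

```python
import math

# Rounded physical constants (SI units).
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34      # reduced Planck constant, J s
c    = 2.998e8        # speed of light, m/s

planck_length_m = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m  = ~1.6e-33 cm
planck_time_s   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s  (order 10^-43 s)

print(f"Planck length: {planck_length_m * 100:.2e} cm")
print(f"Planck time:   {planck_time_s:.2e} s")
```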
00:39:47.340 | Just a small aside, 'cause I am a computer science person,
00:39:51.500 | I also find cellular automata beautiful.
00:39:54.140 | And so you have somebody like Stephen Wolfram,
00:39:57.820 | who recently has been very excitedly exploring
00:40:01.540 | a proposal for a data structure that could be
00:40:05.020 | the numbers that would make you a little bit happier
00:40:08.420 | in terms of scale,
00:40:09.260 | 'cause they're very, very, very, very tiny.
00:40:11.720 | So do you like this space of exploration,
00:40:15.500 | of really thinking, letting go of space-time,
00:40:18.620 | letting go of everything,
00:40:19.580 | and trying to think what kind of data structures
00:40:21.660 | could be underneath this whole mess?
00:40:23.780 | - That's right.
00:40:24.900 | If they're thinking about these as outside of space-time,
00:40:27.800 | then that's what we have to do.
00:40:29.100 | That's what our best theories are telling us.
00:40:30.580 | You now have to think outside of space-time.
00:40:32.580 | Now, of course, I should back up and say,
00:40:36.540 | we know that Einstein surpassed Newton, right?
00:40:40.380 | But that doesn't mean that there's not good work
00:40:41.980 | to do on Newton.
00:40:42.940 | There's all sorts of Newtonian physics
00:40:44.300 | that takes us to the moon and so forth,
00:40:46.220 | and there's lots of good problems
00:40:47.240 | that we want to solve with Newtonian physics.
00:40:49.980 | The same thing will be true of space-time.
00:40:52.180 | It's not like we're gonna stop using space-time.
00:40:53.980 | We'll continue to do all sorts of good work there.
00:40:56.420 | But for those scientists who are really looking
00:40:59.700 | to go deeper, to actually find the next,
00:41:04.260 | just like what Einstein did to Newton,
00:41:06.160 | what are we gonna do to Einstein?
00:41:07.440 | How do we get beyond Einstein and quantum theory
00:41:09.780 | to something deeper?
00:41:10.900 | Then we have to actually let go.
00:41:13.260 | And if we're gonna do this automata kind of approach,
00:41:17.780 | it's critical that it's not automata in space-time,
00:41:21.180 | it's automata prior to space-time
00:41:23.580 | from which we're gonna show how space-time emerges.
00:41:25.860 | If you're doing automata within space-time,
00:41:28.200 | well, that might be a fun model,
00:41:29.580 | but it's not the radical new step that we need.
00:41:32.080 | - Yeah, so the space-time emerges from that whatever system,
00:41:36.700 | like you're saying, it's a dynamical system.
00:41:39.540 | Do we even have an understanding what dynamical means
00:41:42.460 | when we go beyond?
00:41:43.720 | When you start to think about dynamics,
00:41:48.020 | it could mean a lot of things.
00:41:50.340 | Even causality could mean a lot of things
00:41:52.940 | if we realize that everything's an interface.
00:41:56.760 | How much do we really know is an interesting question,
00:42:01.420 | 'cause you brought up neurons,
00:42:02.540 | I gotta ask you yet another tangent.
00:42:05.440 | There's a paper I remember a while ago looking at
00:42:07.900 | called Could a Neuroscientist Understand a Microprocessor?
00:42:11.500 | And I just enjoyed that thought experiment
00:42:14.180 | that they provided, which is they basically,
00:42:16.740 | it's a couple of neuroscientists,
00:43:18.860 | Eric Jonas and Konrad Kording,
00:42:22.300 | who use the tools of neuroscience
00:42:24.840 | to analyze a microprocessor, so a computer chip.
00:42:29.840 | - Yeah, if we lesion it here, what happens and so forth,
00:42:32.220 | and if you go and lesion a computer,
00:42:34.540 | it's very, very clear that lesion experiments on computers
00:42:38.300 | are not gonna give you a lot of insight into how it works.
00:42:40.420 | - And also the measurement devices and the kind of,
00:42:42.660 | just using the basic approaches of neuroscience,
00:42:44.540 | collecting the data, trying to intuit
00:42:47.140 | about the underlying function of it.
00:42:49.420 | And that helps you understand that
00:42:52.540 | our scientific exploration of concepts,
00:42:55.900 | depending on the field,
00:42:59.360 | are maybe in the very, very early stages.
00:43:05.020 | I wouldn't say it leads us astray,
00:43:08.500 | perhaps it does sometimes, but it's not a,
00:43:10.860 | it's not anywhere close to some fundamental mechanism
00:43:14.660 | that actually makes a thing work.
00:43:16.460 | - I don't know if you can sort of comment on that
00:43:18.940 | in terms of using neuroscience to understand
00:43:21.060 | the human mind and neurons.
00:43:24.180 | Are we really far away, potentially,
00:43:26.340 | from understanding in the way we understand
00:43:29.540 | the transistors enough to be able to build a computer?
00:43:33.700 | So one thing about understanding
00:43:37.940 | is you can understand for fun.
00:43:39.580 | The other one is to understand so you could build things.
00:43:45.900 | And that's when you really have to understand.
00:43:48.200 | - Exactly.
00:43:49.980 | In fact, what got me into the field that I,
00:43:53.260 | at MIT, was work by David Marr on this very topic.
00:43:57.620 | So David Marr was a professor at MIT,
00:43:59.820 | but he'd done his PhD in neuroscience,
00:44:02.060 | studying just the architectures of the brain.
00:44:05.340 | But he realized that his work, it was on the cerebellum.
00:44:08.680 | He realized that his work, as rigorous as it was,
00:44:15.140 | left him unsatisfied because he didn't know
00:44:17.140 | what the cerebellum was for.
00:44:18.580 | And why it had that architecture.
00:44:21.740 | So he went to MIT and he was in the AI lab there.
00:44:25.540 | And he said he had this three-level approach
00:44:29.660 | that really grabbed my attention.
00:44:30.740 | So when I was an undergrad at UCLA,
00:44:32.700 | I read one of his papers in a class and said,
00:44:34.740 | "Who is this guy?"
00:44:35.580 | Because he said, "You have to have a computational theory.
00:44:37.420 | "What is being computed and why?"
00:44:39.060 | An algorithm, how is it being computed?
00:44:42.420 | What are the precise algorithms?
00:44:44.820 | And then the hardware,
00:44:45.940 | how does it get instantiated in the hardware?
00:44:47.820 | And so to really do neuroscience, he argued,
00:44:50.380 | we needed to have understanding at all those levels.
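Marr's own worked examples were in vision; as a toy illustration of the three levels (the task and code here are invented for the example, not Marr's), consider detecting a brightness edge in a one-dimensional signal:

```python
# Marr's three levels, illustrated on a toy problem (detect edges in a 1-D signal).

# 1. Computational theory: WHAT is computed and WHY.
#    Find locations where brightness changes sharply, because such
#    discontinuities tend to mark object boundaries in the world.

# 2. Algorithm/representation: HOW it is computed.
#    Represent the signal as a list of floats; mark an edge wherever the
#    absolute difference between neighbours exceeds a threshold.

# 3. Implementation: the physical substrate (here, a Python interpreter;
#    in the brain, neurons and synapses; in principle, many substrates work).

def detect_edges(signal: list[float], threshold: float = 0.5) -> list[int]:
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

if __name__ == "__main__":
    brightness = [0.1, 0.1, 0.2, 0.9, 0.9, 0.8, 0.1, 0.1]
    print(detect_edges(brightness))   # -> [3, 6]
```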
00:44:52.780 | And that really got me.
00:44:54.420 | I loved the neuroscience, but I realized
00:44:56.380 | this guy was saying, "If you can't build it,
00:44:58.440 | "you don't understand it effectively."
00:45:00.060 | And so that's why I went to MIT.
00:45:02.020 | And I had the pleasure of working with David
00:45:04.500 | until he died just a year and a half later.
00:45:07.760 | So there's been that idea that with neuroscience,
00:45:12.780 | we have to have, in some sense, a top-down model
00:45:15.900 | of what's being computed and why
00:45:18.940 | that we would then go after.
00:45:20.020 | And the same thing with the,
00:45:21.500 | trying to reverse engineer a computing system
00:45:24.220 | like your laptop.
00:45:25.520 | We really need to understand what the user interface
00:45:28.540 | is about and why we have,
00:45:30.160 | what are keys on the keyboard for and so forth.
00:45:34.260 | You need to know why to really understand
00:45:37.460 | all the circuitry and what it's for.
00:45:40.340 | Now, evolution by natural selection
00:45:45.340 | does not give us the answer
00:45:51.580 | to the deeper question that we're asking, which is why.
00:45:53.660 | What's this deeper reality and what's it up to and why?
00:45:58.660 | All it tells us is that whatever reality is,
00:46:03.200 | it's not what you see.
00:46:06.740 | What you see is just an adaptive fiction.
00:46:11.740 | - So just to linger on this fascinating, bold question
00:46:15.420 | that shakes you out of your dream state.
00:46:18.980 | Does this fiction still help you in building intuitions
00:46:23.420 | as literary fiction does about reality?
00:46:27.800 | The reason we read literary fiction
00:46:30.260 | is it helps us build intuitions
00:46:34.740 | and understanding in indirect ways,
00:46:37.220 | sneak up to the difficult questions of human nature.
00:46:40.300 | Great fiction.
00:46:41.900 | Same with this observed reality.
00:46:44.460 | Does this interface that we get, this fictional interface,
00:46:49.180 | help us build intuition about deeper truths
00:46:52.820 | of how this whole mess works?
00:46:55.020 | - Well, I think that each theory that we propose
00:46:58.900 | will give its own answer to that question, right?
00:47:01.100 | So when the physicists are proposing these structures
00:47:05.300 | like the amplituhedron and cosmological polytope,
00:47:08.260 | associahedron and so forth, beyond space-time,
00:47:11.100 | we can then ask your question for those specific structures
00:47:14.220 | and say, how much information, for example,
00:47:17.620 | does evolution by natural selection
00:47:19.240 | and the kinds of sensory systems that we have right now
00:47:24.240 | give us about this deeper reality?
00:47:26.940 | And why did we evolve this way?
00:47:30.000 | We can try to answer that question from within the deeper theory.
00:47:33.020 | So there's not gonna be a general answer.
00:47:34.940 | I think what we'll have to do
00:47:36.760 | is posit these new deeper theories
00:47:39.380 | and then try to answer your question
00:47:41.480 | within the framework of those deeper theories,
00:47:43.580 | knowing full well that there'll be an even deeper theory.
00:47:47.140 | - So is this paralyzing though?
00:47:49.860 | 'Cause how do we know we're not completely adrift
00:47:53.340 | out to sea, lost forever,
00:47:57.700 | so that our theories are completely lost.
00:48:00.080 | So if it's all,
00:48:01.980 | if we can never truly, deeply introspect to the bottom,
00:48:07.940 | if it's always just turtles on top of turtles infinitely,
00:48:12.000 | isn't that paralyzing for a scientific mind?
00:48:18.360 | - Well, it's interesting that you say
00:48:20.320 | introspect to the bottom.
00:48:22.160 | (Lex laughs)
00:48:23.160 | Because there is that, there is one,
00:48:26.480 | I mean, again, this is in the same spirit
00:48:28.260 | of what I said before,
00:48:29.100 | which is it depends on what answer you give
00:48:31.020 | to what's beyond space-time,
00:48:32.820 | what answer we would give to your question, right?
00:48:35.220 | So, but one answer that is interesting to explore
00:48:39.700 | is something that spiritual traditions
00:48:41.020 | have said for thousands of years,
00:48:42.580 | but haven't said precisely.
00:48:43.680 | So we can't take it seriously in science
00:48:45.900 | until it's made precise,
00:48:46.940 | but we might be able to make it precise.
00:48:49.140 | And that is that they've also said something like
00:48:53.180 | space and time aren't fundamental,
00:48:54.820 | they're maya, they're illusion.
00:48:56.820 | And, but that if you look inside, if you introspect
00:49:01.340 | and let go of all of your particular perceptions,
00:49:05.900 | you will come to something that's beyond conceptual thought.
00:49:10.340 | And that is, they claim,
00:49:15.340 | being in contact with the deep ground of being
00:49:17.580 | that transcends any particular conceptual understanding.
00:49:21.220 | If that is correct, now I'm not saying it's correct,
00:49:24.260 | but, and I'm not saying it's not correct.
00:49:26.020 | I'm just saying, if that's correct,
00:49:28.340 | then it would be the case that as scientists,
00:49:30.740 | because we also are in touch with this ground of being,
00:49:34.100 | we would then not be able to conceptually understand
00:49:38.360 | ourselves all the way,
00:49:40.140 | but we could know ourselves just by being ourselves.
00:49:43.540 | And so we would, there would be a sense in which
00:49:47.260 | there is a fundamental grounding to the whole enterprise
00:49:50.820 | because we're not separate from the enterprise.
00:49:53.300 | This is the opposite of the impersonal third-person science.
00:49:57.380 | This would make science go--
00:49:58.220 | - You'd be personal.
00:49:59.220 | - Personal all the way down.
00:50:01.640 | And, but nevertheless, scientific,
00:50:04.020 | because the scientific method would still be
00:50:06.180 | what we would use all the way down
00:50:09.340 | for the conceptual understanding.
00:50:10.500 | - Unfortunately, you still don't know
00:50:11.460 | if you went all the way down.
00:50:12.780 | It's possible that this kind of whatever consciousness is,
00:50:15.940 | and we'll talk about it, is getting
00:50:17.940 | the cliche statement of be yourself.
00:50:23.540 | It is somehow digging at a deeper truth of reality,
00:50:28.540 | but you still don't know when you get to the bottom.
00:50:31.740 | You know, a lot of people, they'll take psychedelic drugs,
00:50:34.460 | and they'll say, well, that takes my mind to certain places
00:50:37.980 | where it feels like that is revealing
00:50:41.120 | some deeper truth of reality.
00:50:43.100 | But you still, it could be interfaces on top of interfaces.
00:50:46.860 | That's, in your view of this, you really don't know.
00:50:52.620 | - That means, like Gödel's incompleteness,
00:50:54.100 | you really don't know.
00:50:55.620 | - My own view on it, for what it's worth,
00:50:58.480 | 'cause I don't know the right answer,
00:51:00.700 | but my own view on it right now is that it's never ending.
00:51:05.700 | I think that there will never, that this is great,
00:51:08.380 | as I said before, great job security for science,
00:51:12.040 | and that we, if this is true,
00:51:14.860 | and if consciousness is somehow important
00:51:17.580 | or fundamental in the universe,
00:51:19.020 | this may be an important fundamental fact
00:51:20.620 | about consciousness itself,
00:51:21.980 | that it's a never-ending exploration
00:51:25.080 | that's going on in some sense.
00:51:27.500 | - Well, that's interesting,
00:51:30.100 | push back on the job security.
00:51:31.900 | - Okay.
00:51:32.740 | - So maybe as we understand this kind of idea
00:51:37.500 | deeper and deeper, we understand that the pursuit
00:51:40.900 | is not a fruitful one.
00:51:42.940 | Then maybe we need to, maybe that's why
00:51:45.920 | we don't see aliens everywhere,
00:51:48.400 | is you get smarter and smarter and smarter,
00:51:51.260 | you realize that exploration is,
00:51:55.580 | there's other fun ways to spend your time than exploring.
00:51:59.260 | You could be sort of living maximally
00:52:03.980 | in some way that's not exploration.
00:52:05.860 | There's all kinds of video games you can construct
00:52:10.020 | and put yourself inside of them
00:52:11.820 | that don't involve you going outside of the game world.
00:52:15.980 | Feeling, from my human perspective,
00:52:18.720 | what seems to be fun is challenging yourself
00:52:20.960 | and overcoming those challenges,
00:52:22.640 | so you can constantly artificially generate
00:52:24.720 | challenges for yourself, like Sisyphus and his boulder.
00:52:28.000 | And that's it, so the scientific method
00:52:32.260 | that's always reaching out to the stars,
00:52:34.000 | that's always trying to figure out
00:52:35.160 | the puzzle upon a puzzle,
00:52:36.880 | that's always trying to get to the bottom turtle.
00:52:40.520 | Maybe if we can build more and more the intuition
00:52:43.960 | that that's an infinite pursuit,
00:52:46.180 | we agree to start deviating from that pursuit,
00:52:51.120 | start enjoying the here and now
00:52:53.160 | versus the looking out into the unknown always.
00:52:56.560 | Maybe that's looking out into the unknown
00:53:03.960 | is an early activity for a species that's evolved.
00:53:03.960 | I'm just sort of saying, pushing back,
00:53:09.800 | 'cause you probably got a lot of scientists excited
00:53:12.160 | in terms of job security,
00:53:13.600 | I could envision where it's not job security,
00:53:17.760 | where scientists become more and more useless.
00:53:20.740 | Maybe they're like the holders of the ancient wisdom
00:53:25.440 | that allows us to study our own history,
00:53:29.640 | but not much more than that.
00:53:32.280 | Just to--
00:53:33.120 | - That's good pushback.
00:53:35.640 | I'll put one in there for the scientists again.
00:53:39.320 | But sure, but then I'll take the other side too.
00:53:41.800 | So when Faraday did all of his experiments
00:53:46.680 | with magnets and electricity and so forth,
00:53:50.280 | came with all this wonderful empirical data
00:53:52.040 | and James Clerk Maxwell looked at it
00:53:54.120 | and wrote down a few equations,
00:53:56.120 | which we can now write down in a single equation,
00:53:58.200 | the Maxwell equation,
00:53:59.040 | if we use geometric algebra, just one equation.
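[Editor's aside: a brief sketch of the compression being described, in the standard spacetime (geometric) algebra form; the symbols F for the field bivector and J for the current are the usual convention, not quoted from the conversation.]

```latex
% In spacetime algebra, the four vector-calculus Maxwell equations collapse
% into the single equation \nabla F = J:
\[
  \nabla F = J
  \quad\Longleftrightarrow\quad
  \begin{cases}
    \nabla \cdot \mathbf{E} = \rho/\varepsilon_0, \\
    \nabla \cdot \mathbf{B} = 0, \\
    \nabla \times \mathbf{E} = -\,\partial_t \mathbf{B}, \\
    \nabla \times \mathbf{B} = \mu_0\,\mathbf{J} + \mu_0\varepsilon_0\,\partial_t \mathbf{E}.
  \end{cases}
\]
```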
00:54:01.400 | That opened up unbelievable technologies.
00:54:07.440 | People are zooming and talking to each other
00:54:09.480 | around the world.
00:54:11.200 | The whole electronics industry,
00:54:13.640 | there was something that transformed our lives
00:54:17.120 | in a very positive way.
00:54:18.440 | With the theories beyond space time,
00:54:21.800 | here's one potential.
00:54:24.860 | Right now, most of the galaxies that we see,
00:54:28.820 | we can see them, but we know that we could never get to them
00:54:32.600 | no matter how fast we traveled.
00:54:34.160 | They're going away from us at the speed of light or beyond,
00:54:37.560 | so we can't ever get to them.
00:54:39.320 | So there's all this beautiful real estate
00:54:40.760 | that's just smiling and waving at us
00:54:42.720 | and we can never get to it.
00:54:44.880 | But that's if we go through space time.
00:54:47.400 | But if we recognize that space time
00:54:48.880 | is just a data structure, it's not fundamental.
00:54:51.780 | We're not little things inside space time.
00:54:56.080 | Space time was a little data structure in our perceptions.
00:55:00.240 | It's just the other way around.
00:55:02.120 | Once we understand that and we get equations
00:55:05.440 | for the stuff that's beyond space time,
00:55:08.240 | maybe we won't have to go through space time.
00:55:10.640 | Maybe we can go around it.
00:55:11.800 | Maybe I can go to Proxima Centauri and not go through space.
00:55:14.280 | I can just go right there directly.
00:55:16.800 | It's a data structure.
00:55:17.640 | We can start to play with it.
00:55:19.320 | So I think that for what it's worth,
00:55:23.600 | my take would be that the endless sequence of theories
00:55:28.600 | that we could contemplate building
00:55:32.880 | will lead to an endless sequence of new remarkable insights
00:55:37.880 | into the potentialities, the possibilities
00:55:41.520 | that would seem miraculous to us
00:55:44.000 | and that we will be motivated to continue the exploration
00:55:47.600 | partly just for the technological innovations
00:55:51.240 | that come out.
00:55:52.920 | But the other thing that you mentioned, though,
00:55:56.200 | what about just being?
00:55:57.840 | What if we decide instead of all this doing
00:56:00.480 | and exploring, what about being?
00:56:02.360 | My guess is that the best scientists will do both
00:56:06.680 | and that the act of being will be a place
00:56:11.680 | where they get many of their ideas
00:56:14.920 | and that they then pull into the conceptual realm.
00:56:18.800 | And I think many of the best scientists,
00:56:21.040 | Einstein comes to mind, right?
00:56:22.160 | Where these guys say, look, I didn't come up
00:56:24.400 | with these ideas by a conceptual analysis.
00:56:27.640 | I was thinking in vague images
00:56:30.240 | and it was just something non-conceptual.
00:56:34.040 | And then it took me a long, long time
00:56:36.040 | to pull it out into concepts
00:56:38.120 | and then longer to put it into math.
00:56:40.680 | But the real insights didn't come from just slavishly
00:56:44.440 | playing with equations.
00:56:45.720 | They came from a deeper place.
00:56:48.320 | And so there may be this going back and forth
00:56:51.600 | between the complete non-conceptual
00:56:54.680 | where there's essentially no end to the wisdom
00:56:57.640 | and then conceptual systems
00:56:58.800 | where there are the Gödel limits that we have to deal with.
00:57:02.200 | And that may be, if consciousness is important
00:57:05.480 | and fundamental, that may be what consciousness,
00:57:07.600 | at least part of what consciousness is about
00:57:09.360 | is this discovering itself,
00:57:11.880 | discovering its possibilities, so to speak.
00:57:14.040 | We can talk about what that might mean
00:57:16.000 | by going from the non-conceptual
00:57:20.240 | to the conceptual and back and forth.
00:57:22.160 | - To get better and better and better at being.
00:57:26.440 | Right, let me ask you, just to linger on the evolutionary,
00:57:29.860 | because you mentioned evolutionary game theory
00:57:33.160 | and that's really where you,
00:57:35.280 | the perspective from which you come
00:57:37.040 | to form the case against reality.
00:57:40.200 | At which point in our evolutionary history
00:57:45.360 | did we start to deviate the most from reality?
00:57:49.520 | Is it way before life even originated on Earth?
00:57:55.800 | Is it in the early development from bacteria and so on?
00:58:00.800 | Or is it when some inklings
00:58:05.080 | of what we think of as intelligence
00:58:06.760 | or maybe even complex consciousness started to emerge?
00:58:11.760 | So where did this deviation,
00:58:14.840 | just like with the interfaces in a computer,
00:58:19.520 | you start with transistors and then you have assembly
00:58:23.920 | and then you have C, C++,
00:58:27.200 | then you have Python, then you have GUIs,
00:58:29.760 | all that kind of thing, you have layers upon layers.
00:58:31.480 | When did we start to deviate?
00:58:33.320 | - Well, David Marr, again, my advisor at MIT,
00:58:36.320 | in his book Vision,
00:58:38.920 | suggested that the more primitive sensory systems
00:58:42.360 | were less realistic, less veridical,
00:58:45.800 | but that by the time you got to something
00:58:47.000 | as complicated as the humans,
00:58:48.240 | we were actually estimating the true shapes
00:58:51.600 | and distances to objects and so forth.
00:58:53.360 | So his point of view, and I think it was probably,
00:58:57.040 | it's not an uncommon view among my colleagues,
00:59:01.600 | that yeah, the sensory systems of lower creatures
00:59:06.160 | may just not be complicated enough
00:59:07.560 | to give them much truth.
00:59:10.000 | But as you get to 86 billion neurons,
00:59:12.840 | you can now compute the truth,
00:59:14.080 | or at least the parts of the truth that we need.
00:59:17.080 | When I look at evolutionary game theory,
00:59:21.660 | one of my graduate students, Justin Mark,
00:59:24.100 | did some simulations using genetic algorithms.
00:59:27.700 | So there, he was just exploring,
00:59:30.620 | we start off with random organisms,
00:59:32.140 | random sensory genetics and random actions,
00:59:36.340 | and the first generation was unbelievably,
00:59:38.500 | it was a foraging situation,
00:59:39.700 | they were foraging for resources.
00:59:41.300 | Most of them stayed in one place,
00:59:44.140 | didn't do anything important.
00:59:45.640 | But we could then just look at how the genes evolved,
00:59:51.140 | and what we found was,
00:59:53.260 | what he found was that basically you never even saw
00:59:59.420 | the truth organisms even come on the stage.
01:00:04.840 | If they came, they were gone in one generation,
01:00:07.820 | they just weren't.
01:00:09.420 | So they came and went, even just in one generation.
01:00:14.420 | They just are not good enough.
01:00:16.300 | The ones that were just tracking,
01:00:17.980 | their senses just were tracking the fitness payoffs,
01:00:20.860 | were far more fit than the truth seekers.
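[Editor's aside: a minimal sketch of the kind of simulation being described, illustrative only and not Hoffman and Mark's actual code. Payoffs are a non-monotonic function of the true resource quantity, "truth" foragers pick the largest true quantity, "fitness" foragers pick the largest payoff, and reproduction is proportional to accumulated payoff. The payoff shape, population size, and other parameters are assumptions.]

```python
import math
import random

def payoff(quantity, peak=0.5, width=0.15):
    # Fitness peaks at an intermediate quantity (too little or too much of a
    # resource is bad), so seeing the true quantity is not the same as
    # seeing the fitness payoff.
    return math.exp(-((quantity - peak) ** 2) / (2 * width ** 2))

def forage(strategy, n_patches=3):
    patches = [random.random() for _ in range(n_patches)]  # true quantities
    if strategy == "truth":
        chosen = max(patches)              # perceives and picks the true maximum
    else:
        chosen = max(patches, key=payoff)  # perceives and picks the best payoff
    return payoff(chosen)

def run(generations=50, pop_size=200, rounds=20):
    pop = ["truth"] * (pop_size // 2) + ["fitness"] * (pop_size // 2)
    for g in range(generations):
        scores = [sum(forage(s) for _ in range(rounds)) for s in pop]
        # Offspring strategies are drawn in proportion to accumulated payoff.
        pop = random.choices(pop, weights=scores, k=pop_size)
        if g % 10 == 0:
            print(g, "fraction truth:", pop.count("truth") / pop_size)

if __name__ == "__main__":
    run()
```

In runs of this toy setup the "truth" strategy typically dies out within a few generations, which is the qualitative pattern described above.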
01:00:25.860 | So an answer at one level,
01:00:30.500 | I'm gonna give an answer at a deeper level,
01:00:31.820 | but just with evolutionary game theory.
01:00:34.220 | Because my attitude as a scientist is,
01:00:37.300 | I don't believe any of our theories.
01:00:39.100 | I take them very, very seriously, I study them,
01:00:42.280 | I look at their implications,
01:00:43.340 | but none of them are the gospel,
01:00:44.900 | they're just the latest ideas that we have.
01:00:47.020 | So the reason I study evolutionary game theory
01:00:50.620 | is because that's the best tool we have
01:00:53.060 | right now in this area.
01:00:55.060 | There is nothing else that competes.
01:00:57.340 | And so as a scientist, it's my responsibility
01:00:59.580 | to take the best tools and see what they mean.
01:01:02.060 | And the same thing the physicists are doing,
01:01:03.820 | they're taking the best tools
01:01:04.860 | and looking at what they entail.
01:01:07.260 | But I think that science now has enough experience
01:01:11.700 | to realize that we should not believe our theories,
01:01:14.980 | in the sense that we've now arrived.
01:01:17.960 | In 1890, a lot of physicists thought we'd arrived.
01:01:22.400 | They were discouraging bright young students
01:01:26.640 | from going into physics 'cause it was all done.
01:01:28.800 | And that's precisely the wrong attitude.
01:01:30.960 | Forever, it's the wrong attitude forever.
01:01:34.520 | The attitude we should have is, a century from now,
01:01:38.600 | they'll be looking at us and laughing
01:01:40.560 | at what we didn't know.
01:01:41.720 | And we just have to assume that that's going to be the case.
01:01:44.120 | Just know that everything that we think
01:01:46.060 | is so brilliant right now, our final theory.
01:01:49.260 | A century from now, they'll look at us
01:01:50.840 | like we look at the physicists of 1890
01:01:53.120 | and go, how could they have been so dumb?
01:01:55.200 | So I don't wanna make that mistake.
01:01:57.960 | So I'm not doctrinaire about
01:01:59.800 | any of our current scientific theories.
01:02:02.920 | I'm doctrinaire about this.
01:02:06.280 | We should use the best tools we have right now.
01:02:08.960 | That's what we've got. - And with humility.
01:02:11.040 | Well, so let me ask you about game theory.
01:02:14.480 | I love game theory, evolutionary game theory.
01:02:17.940 | But I'm always suspicious of it, like economics.
01:02:22.600 | When you construct models, it's too easy
01:02:26.940 | to construct things that oversimplify
01:02:30.500 | just because we, our human brains,
01:02:34.420 | enjoy the simplification of constructing a few variables
01:02:39.040 | that somehow represent organisms or represent people
01:02:43.100 | and running a simulation that then allows you
01:02:45.600 | to build up intuition and it feels really good
01:02:48.440 | because you can get some really deep
01:02:50.420 | and surprising intuitions.
01:02:51.980 | But how do you know your models aren't,
01:02:55.260 | the assumptions underlying your models
01:02:57.200 | aren't some fundamentally flawed
01:02:58.880 | and because of that, your conclusions
01:03:01.720 | are fundamentally flawed.
01:03:03.020 | So I guess my question is, what are the limits
01:03:06.220 | in your use of game theory, evolutionary game theory,
01:03:08.840 | your experience with it, what are the limits of game theory?
01:03:12.480 | - So I've gotten some pushback from professional colleagues
01:03:15.780 | and friends who have tried to rerun simulations
01:03:19.340 | and try to, the idea that we don't see the truth
01:03:21.840 | is not comfortable and so many of my colleagues
01:03:24.220 | are very interested in trying to show that we're wrong.
01:03:26.420 | And so the idea would be to say that somehow
01:03:28.600 | we did something, as you're suggesting,
01:03:30.540 | maybe something special that wasn't completely general.
01:03:33.740 | We got some little special part of the whole search space
01:03:36.980 | in evolutionary game theory in which this happens
01:03:39.180 | to be true but more generally,
01:03:41.060 | organisms would evolve to see the truth.
01:03:42.640 | So the best pushback we've gotten is from a team at Yale
01:03:47.640 | and they suggested that if you use
01:03:52.180 | thousands of payoff functions, so we,
01:03:54.400 | in our simulations, we just used a couple, one or two,
01:03:57.740 | 'cause it was our first simulations, right?
01:04:00.280 | So that would be a limit.
01:04:01.240 | We had one or two payoff functions,
01:04:02.380 | we showed the result in those,
01:04:03.920 | at least for the genetic algorithms.
01:04:07.080 | And they said if you have 20,000 of them,
01:04:10.460 | then we can find these conditions in which
01:04:12.720 | truth-seeing organisms would be the ones
01:04:17.240 | that evolved and survived.
01:04:19.360 | And so we looked at their simulations
01:04:21.120 | and it certainly is the case that you can find
01:04:25.160 | special cases in which truth can evolve.
01:04:27.560 | So when I say it's probability zero,
01:04:29.120 | it doesn't mean it can't happen.
01:04:30.160 | It can happen, in fact, it could happen infinitely often.
01:04:33.000 | It's just probability zero.
01:04:34.360 | So probability zero things can happen infinitely often.
01:04:38.240 | - When you say probability zero,
01:04:39.560 | you mean probability close to zero.
01:04:41.980 | - To be very, very precise.
01:04:43.120 | So for example, if I have a unit square on the plane,
01:04:46.780 | and I use a probability measure
01:04:51.440 | in which the area
01:04:54.600 | of a region is its probability.
01:04:56.260 | Then if I draw a curve in that unit square,
01:05:02.400 | it has measure precisely zero.
01:05:05.460 | Precisely, not approximately, precisely zero.
01:05:07.920 | And yet it has infinitely many points.
01:05:10.120 | So there's an object that, for that probability measure,
01:05:12.200 | has probability zero, and yet there's
01:05:14.040 | infinitely many points in it.
01:05:16.280 | So that's what I mean when I say that
01:05:18.640 | the things that are probability zero
01:05:19.880 | can happen infinitely often, in principle.
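[Editor's aside: the measure-zero point restated in symbols, a standard fact rather than a quote. Take area (Lebesgue measure) on the unit square as the probability; the graph of a continuous function is then an event of probability exactly zero even though it contains infinitely many points.]

```latex
% Probability zero yet infinite: with P given by area on the unit square,
% the graph C of a continuous function f has measure exactly zero.
\[
  P(A) := \lambda(A), \quad A \subseteq [0,1]^2,
  \qquad
  C := \{\,(t, f(t)) : t \in [0,1]\,\}
  \;\Rightarrow\;
  P(C) = 0, \quad |C| = \infty .
\]
```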
01:05:21.800 | - Yeah, but infinity, as far as,
01:05:25.080 | and I look outside often, I walk around
01:05:28.120 | and I look at people, I have never seen
01:05:30.080 | infinity in real life.
01:05:31.600 | - That's an interesting issue.
01:05:35.960 | - I've been looking, I've been looking.
01:05:37.560 | I don't notice it, infinitely small or the infinitely big.
01:05:41.200 | And so the tools of mathematics,
01:05:43.360 | you could sort of apply the same kind of criticism
01:05:45.680 | that it is a very convenient interface into our reality.
01:05:49.200 | - That's a big debate in mathematics.
01:05:50.500 | The intuitionists versus the ones who take,
01:05:52.260 | for example, the real numbers as real.
01:05:55.120 | And that's a fun discussion.
01:05:57.080 | Nicolas Gisin, a physicist,
01:05:59.080 | has really interesting work recently
01:06:00.920 | on how if you go with intuitionist mathematics,
01:06:05.520 | you could effectively quantize Newton.
01:06:09.960 | And you find that the Newtonian theory
01:06:12.440 | and quantum theory aren't that different
01:06:14.440 | once you go with it.
01:06:16.440 | - It's funny.
01:06:17.280 | - It's really quite interesting.
01:06:18.360 | So the issue you raise is a very, very deep one.
01:06:21.000 | And one that I think we should take quite seriously,
01:06:23.740 | which is, how shall we think about the reality
01:06:27.640 | of the Cantor hierarchy, aleph-one, aleph-two,
01:06:32.160 | and all these different infinities
01:06:35.840 | versus just a more algorithmic approach, right?
01:06:40.840 | So where everything's computable in some sense,
01:06:45.240 | everything's finite, as big as you want,
01:06:47.200 | but nevertheless finite.
01:06:49.760 | - So yeah, it ultimately boils down to whether
01:06:53.480 | the world is discrete or continuous in some general sense.
01:06:58.480 | And again, we can't really know,
01:07:01.200 | but there's just a mind-breaking thought,
01:07:05.480 | just common sense reasoning, that something can happen
01:07:09.880 | and is yet, probability of it happening is 0%.
01:07:13.800 | That doesn't compute for a common-sense computer.
01:07:18.120 | - Right.
01:07:18.960 | This is where you have to be a sharp mathematician
01:07:21.800 | to really, and I'm not.
01:07:23.440 | - Sharp is one word.
01:07:24.980 | What I'm saying is common sense computer is,
01:07:27.400 | I mean that in a very kind of,
01:07:31.960 | in a positive sense, because we've been talking
01:07:36.160 | about perception systems and interfaces.
01:07:38.600 | If we are to reason about the world,
01:07:42.120 | we have to use the best interfaces we got.
01:07:45.040 | And I'm not exactly sure that game theory
01:07:50.040 | is the best interface we got for this.
01:07:52.760 | - Oh, right.
01:07:53.600 | - And application of mathematics,
01:07:56.680 | tricks and tools of mathematics,
01:07:58.440 | the game theory is the best we got
01:08:00.600 | when we are thinking about the nature of reality
01:08:03.920 | and fitness functions and evolution, period.
01:08:07.560 | - Well, that's a fair rejoinder.
01:08:10.040 | And I think that that was the tool that we used.
01:08:14.080 | And if someone says, here's a better mathematical tool
01:08:17.320 | and here's why, this is, this mathematical tool
01:08:20.240 | better captures the essence of Darwin's idea.
01:08:23.280 | John Maynard Smith didn't quite get it
01:08:24.960 | with evolutionary game theory.
01:08:26.280 | There's this better, this thing.
01:08:27.520 | Now there are tools like evolutionary graph theory,
01:08:30.640 | which generalize evolutionary game theory.
01:08:32.920 | And then there's quantum game theory.
01:08:35.560 | So you can use quantum tools like entanglement,
01:08:40.560 | for example, as a resource in games
01:08:44.460 | that change the very nature of the solutions,
01:08:48.240 | of the optimal solutions of the game theory.
01:08:52.320 | - Well, the work from Yale is really interesting.
01:08:54.440 | It's a really interesting challenge
01:08:59.080 | to these ideas where, okay,
01:08:59.080 | if you have a very large number of fitness functions,
01:09:01.800 | or let's say you have a nearly infinite number
01:09:06.360 | of fitness functions, or a growing number
01:09:08.440 | of fitness functions, what kind of interesting things
01:09:11.880 | start to emerge, if you are to be an organism?
01:09:15.560 | If to be an organism that adapts means
01:09:18.880 | having to deal with an ensemble of fitness functions.
01:09:22.440 | - Right, and so we've actually redone some
01:09:26.680 | of our own work based on theirs.
01:09:29.040 | And this is the back and forth that we expect in science.
01:09:32.320 | And what we found was that they, in their simulations,
01:09:36.360 | they were assuming that you couldn't carve the world
01:09:39.160 | up into objects.
01:09:40.080 | And so we said, well, let's relax that assumption.
01:09:43.960 | Allow organisms to create data structures
01:09:46.000 | that we might call objects.
01:09:47.720 | And an object would be, you take,
01:09:49.400 | you would do hierarchical clustering
01:09:51.600 | of your fitness payoff functions.
01:09:53.200 | The ones that have similar shapes.
01:09:54.840 | If you have 20,000 of them, maybe these 50
01:09:58.320 | are all very, very similar.
01:09:59.560 | So I can take all the perception, action, fitness stuff
01:10:03.640 | and make that into a data structure,
01:10:05.680 | and we'll call that a unit or an object.
01:10:08.280 | And as soon as we did that,
01:10:09.640 | then all of their results went away.
01:10:11.600 | It turned out they were the special case.
01:10:13.400 | And that the organisms that were allowed
01:10:16.680 | to only see, that were shaped to see only fitness payoffs
01:10:22.960 | were the ones that won.
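[Editor's aside: a hedged sketch of the "objects as clusters of payoff functions" move just described, not the published simulation code. Many payoff curves are generated over a resource axis, curves with similar shapes are hierarchically clustered, and each cluster plays the role of one perceptual "object." The number of payoff functions, their Gaussian shapes, and the cluster count are illustrative assumptions.]

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)                  # true resource quantity axis

# 200 payoff functions: Gaussian humps with random peaks and widths.
peaks = rng.uniform(0.1, 0.9, size=200)
widths = rng.uniform(0.05, 0.2, size=200)
payoffs = np.exp(-((x[None, :] - peaks[:, None]) ** 2)
                 / (2 * widths[:, None] ** 2))

# Cluster payoff functions with similar shapes; each cluster stands in for
# one "object" the organism's interface would represent.
Z = linkage(payoffs, method="average", metric="euclidean")
labels = fcluster(Z, t=8, criterion="maxclust")

for k in range(1, 9):
    print(f"object {k}: {np.sum(labels == k)} payoff functions")
```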
01:10:22.960 | So the idea is that objects then,
01:10:25.360 | what are objects from an evolutionary point of view?
01:10:27.160 | This bottle, we thought that when I saw a bottle,
01:10:30.160 | it was because I was seeing a true object
01:10:31.760 | that existed whether or not it was perceived.
01:10:34.400 | Evolutionary theories suggest a different interpretation.
01:10:37.960 | I'm seeing a data structure that is encoding
01:10:41.160 | a convenient way of looking at various fitness payoffs.
01:10:45.440 | I can use this for drinking.
01:10:48.320 | I could use it as a weapon, not a very good one.
01:10:50.800 | I could beat someone over the head with it.
01:10:52.880 | If my goal is mating, this is pointless.
01:10:56.640 | So what I'm seeing, what I'm encoding here,
01:10:59.680 | is all sorts of actions and the payoffs that I could get.
01:11:04.200 | When I pick up an apple,
01:11:05.480 | now I'm getting a different set of actions and payoffs.
01:11:08.720 | When I pick up a rock, I'm getting.
01:11:10.240 | So for every object, what I'm getting
01:11:12.240 | is a different set of payoff functions
01:11:15.120 | with various actions.
01:11:18.240 | And so once you allow that,
01:11:20.720 | then what you find is once again that truth goes extinct
01:11:25.720 | and the organisms that just get an interface
01:11:28.120 | are the ones that win.
01:11:29.640 | - But the question, just sneaking up on,
01:11:32.200 | this is fascinating.
01:11:33.280 | From where do fitness functions originate?
01:11:38.240 | What gives birth to the fitness functions?
01:11:40.240 | So if there's a giant black box
01:11:43.360 | that just keeps giving you fitness functions,
01:11:45.160 | what are we trying to optimize?
01:11:46.200 | You said that water has different uses
01:11:50.080 | than an apple, so there's these objects.
01:11:57.000 | What are we trying to optimize?
01:11:58.880 | And why is not reality a really good generator
01:12:02.720 | of fitness functions?
01:12:03.840 | - So each theory makes its own assumptions.
01:12:07.440 | It says, grant me this, and I'll explain that.
01:12:09.640 | So evolutionary game theory says,
01:12:11.040 | grant me fitness payoffs.
01:12:12.320 | And grant me strategies with payoffs.
01:12:16.360 | And I can write down the matrix
01:12:18.560 | for if this strategy interacts with that strategy,
01:12:20.320 | these are the payoffs that come up.
01:12:21.640 | If you grant me that,
01:12:22.480 | then I can start to explain a lot of things.
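[Editor's aside: a concrete instance of the kind of matrix being granted here is the classic Hawk-Dove game from Maynard Smith's evolutionary game theory, shown below with resource value V and fight cost C; this is standard textbook material, not something specified in the conversation.]

```latex
% Hawk-Dove payoff matrix (row player's payoff against the column strategy):
\[
  \begin{array}{c|cc}
              & \text{Hawk}        & \text{Dove} \\ \hline
  \text{Hawk} & \tfrac{V - C}{2}   & V \\
  \text{Dove} & 0                  & \tfrac{V}{2}
  \end{array}
\]
```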
01:12:24.560 | Now you can ask for a deeper question,
01:12:25.920 | like, okay, how does physics evolve biology
01:12:30.920 | and where do these fitness payoffs come from?
01:12:36.080 | Now, that's a completely different enterprise.
01:12:41.080 | And of course, evolutionary game theory then
01:12:43.160 | would be not the right tool for that.
01:12:45.360 | It would have to be a deeper tool
01:12:46.480 | that shows where evolutionary game theory comes from.
01:12:49.320 | My own take is that there's gonna be a problem in doing that
01:12:55.680 | because space-time isn't fundamental.
01:13:00.260 | It's just a user interface.
01:13:04.520 | And that the distinction that we make
01:13:06.240 | between living and non-living
01:13:08.480 | is not a fundamental distinction.
01:13:10.620 | It's an artifact of the limits of our interface.
01:13:13.220 | Right, so this is a new wrinkle,
01:13:16.400 | and this is an important wrinkle.
01:13:18.040 | It's so nice to take space and time as fundamental
01:13:22.260 | because if something looks like it's inanimate,
01:13:24.280 | it's inanimate, and we can just say it's not living.
01:13:27.160 | Now, it's much more complicated.
01:13:30.760 | Certain things are obviously living.
01:13:32.200 | I'm talking with you.
01:13:34.160 | I'm obviously interacting with something
01:13:36.560 | that's alive and conscious.
01:13:38.640 | - I think we've let go of the word obviously
01:13:40.680 | in this conversation.
01:13:42.000 | I think nothing is obvious.
01:13:43.320 | - Nothing's obvious, that's right.
01:13:45.280 | But when we get down to an ant, it's obviously living,
01:13:49.960 | but I'll say it appears to be living.
01:13:52.560 | When we get down to a virus, now people wonder,
01:13:55.600 | and when we get down to protons,
01:13:57.240 | people say it's not living.
01:13:58.840 | And my attitude is, look, I have a user interface.
01:14:02.780 | The interface is there to hide certain aspects of reality
01:14:05.780 | and others to, it's an uneven representation,
01:14:10.780 | put it that way.
01:14:11.960 | Certain things just get completely hidden.
01:14:14.060 | Dark matter and dark energy are most of the energy
01:14:18.580 | and matter that's out there.
01:14:19.780 | Our interface just plain flat out hides them.
01:14:22.140 | The only way we get some hint is because gravitational
01:14:26.140 | things are going wrong within our models.
01:14:28.740 | So most things are outside of our interface.
01:14:31.980 | The distinction between living and non-living
01:14:34.220 | is not fundamental, it's an artifact of our interface.
01:14:37.180 | So if, so this is the, if we really,
01:14:40.980 | really want to understand where evolution comes from,
01:14:43.620 | to answer the question, the deep question you asked,
01:14:46.740 | I think the right way we're gonna have to do that
01:14:48.660 | is to come up with a deeper theory than space-time,
01:14:51.220 | in which there may not be the notion of time.
01:14:54.340 | And show that whatever this dynamics of that
01:14:59.860 | deeper theory is, and by the way,
01:15:02.620 | I'll talk about how you could have dynamics without time,
01:15:04.460 | but the dynamics of this deeper theory,
01:15:07.140 | when we project it into, in certain ways,
01:15:11.020 | then we do get space-time and we get what appears
01:15:13.060 | to be evolution by natural selection.
01:15:15.060 | So I would love to see evolution by natural selection,
01:15:17.760 | nature, red and tooth and claw, people fighting,
01:15:20.220 | animals fighting for resources and the whole bit,
01:15:22.220 | come out of a deeper theory in which perhaps
01:15:24.260 | it's all cooperation.
01:15:25.620 | There's no limited resources and so forth,
01:15:27.940 | but as a result of projection, you get space and time,
01:15:32.260 | and as a result of projection, you get nature,
01:15:34.260 | red in tooth and claw, the appearance of it,
01:15:36.460 | but it's all an artifact of the interface.
01:15:39.100 | - I like this idea that the line between living
01:15:43.220 | and non-living is very important,
01:15:45.020 | 'cause that's a thing that would emerge
01:15:48.660 | before you have evolution, the idea of death.
01:15:55.220 | So that seems to be an important component
01:15:58.860 | of natural selection, and if that emerged,
01:16:01.060 | because that's also, you know, asking the question,
01:16:05.500 | I guess, that I ask, where do fitness functions come from?
01:16:09.060 | That's like asking the old meaning of life question, right?
01:16:12.940 | It's what's the why, why, why?
01:16:17.500 | And one of the big underlying why's,
01:16:20.300 | okay, you can start with evolution on Earth,
01:16:22.660 | but without living, without life and death,
01:16:26.100 | without the line between the living and the dead,
01:16:28.700 | you don't have evolution.
01:16:30.520 | So what if underneath it, there's no such thing
01:16:32.540 | as the living and the dead?
01:16:33.900 | There's no, like this concept of an organism, period,
01:16:39.500 | there's a living organism that's defined
01:16:42.700 | by a volume in space-time that somehow interacts,
01:16:48.380 | that over time maintains its integrity somehow,
01:16:52.860 | it has some kind of history, it has a wall of some kind,
01:16:56.580 | the outside world, the environment,
01:16:58.380 | and then inside, there's an organism.
01:17:00.780 | So you're defining an organism,
01:17:02.940 | and also, you define that organism
01:17:04.900 | by the fact that it can move, and it can come alive,
01:17:09.900 | which you kind of think of as moving,
01:17:12.100 | combined with the fact that it's keeping itself
01:17:14.700 | separate from the environment,
01:17:15.980 | so you can point out that thing is living,
01:17:17.900 | and then it can also die.
01:17:19.580 | That seems to be all very powerful components of space-time
01:17:26.060 | that enable you to have something
01:17:28.380 | like natural selection and evolution.
01:17:30.840 | - Well, and there's a lot of interesting work,
01:17:33.140 | some of it by collaborators of Carl Friston and others,
01:17:36.200 | where they have Bayes' net kind of stuff
01:17:40.940 | that they build on, and the notion of a Markov blanket,
01:17:43.820 | so you have some states within this network
01:17:47.220 | that are inside the blanket, then you have the blanket,
01:17:49.140 | and then the states outside the blanket,
01:17:50.820 | and the states inside this Markov blanket
01:17:52.980 | are conditionally independent of the states
01:17:54.300 | outside the blanket, conditioned on the blanket.
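[Editor's aside: the blanket property being described, in symbols, using the common but here assumed notation of internal states μ, external states η, and blanket states b.]

```latex
% Markov blanket: conditioned on the blanket b, internal states are
% independent of external states.
\[
  p(\mu, \eta \mid b) = p(\mu \mid b)\,p(\eta \mid b)
  \quad\Longleftrightarrow\quad
  p(\mu \mid \eta, b) = p(\mu \mid b).
\]
```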
01:17:57.420 | And what they're looking at is that the dynamics
01:18:01.260 | inside of the states inside the Markov blanket
01:18:04.500 | seem to be trying to estimate properties of the outside
01:18:07.180 | and react to them in a way,
01:18:09.020 | so it seems like you're doing probabilistic inferences
01:18:11.340 | in ways that might be able to keep you alive,
01:18:14.260 | so there's interesting work going on in that direction,
01:18:17.600 | but what I'm saying is something slightly different,
01:18:21.580 | and that is, like when I look at you,
01:18:24.860 | all I see is skin, hair, and eyes, right?
01:18:26.460 | That's all I see.
01:18:27.500 | But I know that there's a deeper reality,
01:18:31.220 | I believe that there's a much deeper reality,
01:18:32.660 | there's the whole world of your experiences,
01:18:34.260 | your thoughts, your hopes, your dreams.
01:18:35.860 | In some sense, the face that I see
01:18:38.700 | is just a symbol that I create, right?
01:18:42.140 | And as soon as I look away, I delete that symbol,
01:18:44.860 | but I don't delete you,
01:18:46.340 | I don't delete the conscious experience,
01:18:48.540 | the whole world of your,
01:18:50.260 | so I'm only deleting an interface symbol,
01:18:53.520 | but that interface symbol is a portal, so to speak,
01:18:58.520 | not a perfect portal, but a genuine portal
01:19:04.380 | into your beliefs, into your conscious experiences,
01:19:06.540 | into, that's why we can have a conversation,
01:19:08.820 | we genuinely, what your consciousness
01:19:10.740 | is genuinely affecting mine,
01:19:12.180 | and mine is genuinely affecting yours,
01:19:13.900 | through these icons, which I create on the fly,
01:19:17.660 | I mean, I create your face, and when I look away, I delete it.
01:19:20.120 | I don't create you, your consciousness,
01:19:22.100 | that's there all the time, but I do,
01:19:25.020 | so now when I look at a cat,
01:19:27.420 | I'm creating something that I still call living,
01:19:29.940 | and I still think is conscious.
01:19:31.920 | When I look at an ant, I create something
01:19:34.700 | that I still would call living, but maybe not conscious.
01:19:38.160 | When I look at something I call a virus,
01:19:40.700 | now I'm not even sure I would call it living,
01:19:42.940 | and when I look at a proton, I would say,
01:19:45.420 | I don't even think it's alive at all.
01:19:48.900 | It could be that I'm nevertheless interacting
01:19:53.620 | with something that's just as conscious as you.
01:19:55.700 | I'm not saying the proton is conscious.
01:19:57.460 | The face that I'm creating, when I look at you,
01:19:59.420 | that face is not conscious,
01:20:00.580 | that face is a data structure in me,
01:20:03.700 | that face is an experience, it's not an experiencer.
01:20:08.700 | Similarly, a proton is something that I create
01:20:12.700 | when I look or do a collision in the Large Hadron Collider
01:20:16.140 | or something like that,
01:20:17.300 | but what is behind the entity in space-time?
01:20:21.420 | So I've got this space-time interface,
01:20:23.180 | and I've just got this entity that I call a proton.
01:20:25.580 | What is the reality behind it?
01:20:27.500 | Well, the physicists are finding these big, big structures,
01:20:30.780 | amplituhedron, the associahedron,
01:20:32.980 | what's behind those?
01:20:36.020 | Could be consciousness, what I'm playing with,
01:20:38.740 | in which case, when I'm interacting with a proton,
01:20:41.220 | I could be interacting with consciousness.
01:20:43.740 | Again, to be very, very clear, 'cause it's easy to,
01:20:46.980 | I'm not saying a proton is conscious,
01:20:49.300 | just like I'm not saying your face is conscious.
01:20:51.340 | Your face is a symbol I create and then delete
01:20:54.300 | as I look away, and so your face is not conscious,
01:20:57.380 | but I know that that face in my interface,
01:21:00.060 | the Lex Friedman face that I create,
01:21:01.860 | is an interface symbol that's a genuine portal
01:21:04.260 | into your consciousness.
01:21:06.060 | The portal is less clear for a cat,
01:21:09.980 | even less clear for an ant,
01:21:11.860 | and by the time we get down to a proton,
01:21:13.380 | the portal is not clear at all.
01:21:15.900 | But that doesn't mean I'm not interacting
01:21:17.140 | with consciousness, it just means my interface gave up,
01:21:20.180 | and there's some deeper reality that we have to go after.
01:21:23.020 | So your question really forces out a big part
01:21:26.980 | of this whole approach that I'm talking about.
01:21:29.100 | - So it's this portal and consciousness.
01:21:30.580 | I wonder why you can't, your portal is not as good
01:21:35.140 | to a cat, to a cat's consciousness,
01:21:39.260 | than it is to a human.
01:21:40.420 | Does it have to do with the fact that you're human
01:21:43.740 | and just similar organisms, organisms of similar complexity
01:21:49.820 | are able to create portals better to each other,
01:21:53.460 | or is it just, as you get more and more complex,
01:21:55.660 | you get better and better portals?
01:21:57.900 | - Well, let me answer one aspect of it
01:22:00.260 | that I'm more confident about,
01:22:01.180 | and then I'll speculate on that.
01:22:03.540 | Why is it that the portal is so bad with protons?
01:22:07.180 | Well, and elementary particles more generally,
01:22:09.500 | so quarks, leptons, and gluons, and so forth.
01:22:12.260 | Well, the reason for that is because those are just
01:22:16.140 | symmetries of space-time.
01:22:18.280 | More technically, they're irreducible representations
01:22:21.060 | of the Poincaré group of space-time.
01:22:22.960 | So they're just literally representations
01:22:29.820 | of the data structure of space-time that we're using.
01:22:33.580 | So that's why they're not very insightful.
01:22:35.820 | They're just almost entirely tied
01:22:38.300 | to the data structure itself.
01:22:39.740 | There's not much, they're telling you only something
01:22:42.780 | about the data structure, not behind the data structure.
01:22:44.940 | It's only when we get to higher levels
01:22:46.740 | that we're starting to, in some sense,
01:22:48.500 | build portals to what's behind space-time.
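[Editor's aside: the technical statement behind this point, for reference. In Wigner's classification, the irreducible unitary representations of the Poincaré group are labeled by its two Casimir invariants, which is why an elementary particle is characterized by its mass and spin; the notation is the standard one, not drawn from the conversation.]

```latex
% Casimir invariants of the Poincare group (natural units), with P the
% four-momentum and W the Pauli--Lubanski pseudovector:
\[
  P_\mu P^\mu = m^2,
  \qquad
  W_\mu W^\mu = -\,m^2\, s(s+1).
\]
```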
01:22:52.300 | - Sure, yeah, so there's more and more
01:22:56.860 | complexity built on top of the interface
01:23:00.300 | of space-time with the cat.
01:23:02.020 | - So you can actually build a portal, right?
01:23:03.420 | - Yeah, yeah, right.
01:23:06.440 | Yeah, this interface of face, and hair, and so on, skin.
01:23:12.740 | There's some syncing going on between humans, though,
01:23:20.100 | where we sync, like, you're getting
01:23:22.980 | a pretty good representation of the ideas in my head,
01:23:26.260 | and starting to get a foggy view of my memories in my head.
01:23:31.260 | Even though this is the first time we're talking,
01:23:36.300 | you start to project your own memories.
01:23:38.620 | You start to solve a giant hierarchy
01:23:41.780 | of puzzles about a human.
01:23:43.620 | 'Cause we're all, there's a lot of similarities,
01:23:46.980 | a lot of it rhymes, so you start to make
01:23:48.980 | a lot of inferences, and you build up this model
01:23:51.780 | of a person.
01:23:52.620 | You have a pretty sophisticated model
01:23:54.400 | of what's going on underneath.
01:23:57.140 | Again, I just, I wonder if it's possible
01:24:00.540 | to construct these models about each other,
01:24:02.460 | and nevertheless be very distant
01:24:04.900 | from an underlying reality.
01:24:07.100 | The syncing. - Yeah, there's a lot
01:24:09.260 | of work on this.
01:24:10.080 | So there's some interesting work called signaling games,
01:24:12.040 | where they look at how people can coordinate
01:24:15.380 | and come to communicate.
01:24:19.060 | There's some interesting work that was done
01:24:21.140 | by some colleagues and friends of mine,
01:24:24.100 | Louis Narens, Natalia Komarova, and Kimberly Jameson,
01:24:28.340 | where they were looking at evolving color words.
01:24:33.340 | So you have a circle of colors,
01:24:36.100 | you know, this is the color circle.
01:24:37.780 | And they wanted to see if they could get people
01:24:41.040 | to cooperate in how they carved the color circle
01:24:43.080 | up into units of words.
01:24:44.900 | And so they had a game-theoretic kind of thing
01:24:49.620 | that they'd had people do.
01:24:50.620 | And what they found was that when they included,
01:24:52.920 | so most people are trichromats,
01:24:54.940 | you have three kinds of cone photoreceptors,
01:24:57.920 | but there are some, a lot of men,
01:24:59.460 | 7% of men are dichromats,
01:25:01.420 | they might be missing the red cone photoreceptor.
01:25:04.380 | They found that the dichromats had an outsized influence
01:25:08.980 | on the final ways that the whole space of colors
01:25:12.540 | was carved up and labels attached.
01:25:14.900 | You needed to be able to include the dichromats
01:25:17.880 | in the conversation, and so they had a bigger influence
01:25:20.120 | on how you made the boundaries of the language.
01:25:23.020 | And I thought that was a really interesting kind of insight
01:25:25.700 | that there's going to be again, a game,
01:25:27.880 | perhaps a game or evolutionary or genetic algorithm
01:25:31.440 | kind of thing that goes on in terms of learning
01:25:34.400 | to communicate in ways that are useful.
01:25:37.900 | And so, yeah, you can use game theory
01:25:39.940 | to actually explore that or signaling games.
01:25:42.580 | There's a lot of brilliant work on that.
01:25:44.740 | I'm not doing it, but there's work out there.
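[Editor's aside: a minimal Lewis-style signaling-game sketch in the spirit of the color-naming work just mentioned, illustrative only and not the cited study's method. A sender observes a color, emits one of a few words, a receiver guesses the region, and simple reinforcement pushes both toward a shared partition of the color circle. The region count, word count, and update rule are assumptions.]

```python
import random

N_WORDS, N_REGIONS, STEPS = 4, 4, 20000

def region(angle):
    # True region of a color on a [0, 1) "color circle".
    return int(angle * N_REGIONS)

# Urn weights: sender[region][word], receiver[word][region].
sender = [[1.0] * N_WORDS for _ in range(N_REGIONS)]
receiver = [[1.0] * N_REGIONS for _ in range(N_WORDS)]

def draw(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(STEPS):
    color = random.random()          # a random point on the color circle
    r = region(color)
    word = draw(sender[r])           # sender names the color
    guess = draw(receiver[word])     # receiver guesses the region
    if guess == r:                   # reinforce successful rounds only
        sender[r][word] += 1.0
        receiver[word][guess] += 1.0

for r in range(N_REGIONS):
    best = max(range(N_WORDS), key=lambda w: sender[r][w])
    print(f"region {r} is usually called word {best}")
```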
01:25:47.620 | - So if it's okay, let us tackle once more,
01:25:50.840 | and perhaps several more times,
01:25:52.640 | after the big topic of consciousness.
01:25:55.560 | This very beautiful, powerful things
01:25:59.560 | that perhaps is the thing that makes us human.
01:26:01.960 | What is it?
01:26:03.080 | What's the role of consciousness in,
01:26:05.300 | let's say even just the thing we've been talking about,
01:26:08.160 | which is the formation of this interface.
01:26:10.280 | Any kind of ways you want to kind of start
01:26:17.040 | talking about it?
01:26:18.460 | - Well, let me say first what most of my colleagues say.
01:26:21.420 | 99% are again, assuming that space-time is fundamental,
01:26:27.620 | particles in space-time, matter is fundamental,
01:26:30.000 | and most are reductionist.
01:26:32.540 | And so the standard approach to consciousness
01:26:37.120 | is to figure out what complicated systems of matter
01:26:42.120 | with the right functional properties
01:26:45.980 | could possibly lead to the emergence of consciousness.
01:26:48.560 | That's the general idea, right?
01:26:50.560 | So maybe you have to have neurons.
01:26:53.920 | Maybe only if you have neurons,
01:26:56.920 | but that might not be enough.
01:26:58.640 | They have to have certain kinds of complexity
01:27:00.400 | in their organization and their dynamics,
01:27:02.840 | certain kind of network abilities, for example.
01:27:05.200 | So there are those who say, for example,
01:27:10.760 | that consciousness arises from orchestrated collapse
01:27:14.680 | of quantum states of microtubules and neurons.
01:27:17.400 | So this is Hameroff and Penrose have this kind of.
01:27:22.440 | So you start with something physical,
01:27:25.440 | a property of quantum states of neurons,
01:27:28.540 | of microtubules and neurons,
01:27:32.160 | and you say that somehow an orchestrated collapse of those
01:27:35.680 | is consciousness or conscious experiences.
01:27:38.720 | Or integrated information theory.
01:27:40.480 | Again, you start with something physical,
01:27:42.560 | and if it has the right kind of functional properties,
01:27:44.880 | it's something they call phi,
01:27:46.260 | with the right kind of integrated information,
01:27:48.040 | then you have consciousness.
01:27:50.600 | Or you can be a panpsychist, Philip Goff, for example,
01:27:54.900 | where you might say, well,
01:27:57.300 | in addition to the particles in space and time,
01:28:01.320 | those particles are not just matter,
01:28:03.400 | they also could have, say, a unit of consciousness.
01:28:06.520 | And so, but once again, you're taking space and time
01:28:09.320 | and particles as fundamental,
01:28:11.200 | and you're adding a new property to them,
01:28:14.080 | say, their consciousness,
01:28:15.080 | and then you have to talk about how,
01:28:16.720 | when a proton and an electron
01:28:19.560 | get together to form hydrogen,
01:28:21.840 | then how those consciousnesses merge,
01:28:25.160 | or interact to create the consciousness of hydrogen,
01:28:28.400 | and so forth.
01:28:29.240 | There's attention schema theory,
01:28:31.520 | which again, this is how neural network processes,
01:28:35.720 | representing to the network itself,
01:28:38.320 | its attentional processes,
01:28:40.000 | that could be consciousness.
01:28:41.780 | There's global workspace theory,
01:28:44.400 | and neuronal global workspace theory.
01:28:48.400 | So there's many, many theories of this type.
01:28:50.440 | What's common to all of them
01:28:52.800 | is they assume that space-time is fundamental.
01:28:55.260 | They assume that physical processes
01:28:57.920 | in space-time is fundamental.
01:28:59.640 | Panpsychism adds consciousness as an additional thing,
01:29:02.560 | it's almost dualist in that regard.
01:29:04.620 | And my attitude is,
01:29:08.620 | our best science is telling us
01:29:11.240 | that space-time is not fundamental.
01:29:13.320 | So, why is that important here?
01:29:17.120 | Well, for centuries,
01:29:18.700 | deep thinkers thought of earth, air, fire, and water
01:29:24.460 | as the fundamental elements.
01:29:26.280 | It was a reductionist kind of idea.
01:29:28.620 | Nothing was more elemental than those,
01:29:30.120 | and you could sort of build everything up from those.
01:29:33.520 | When we got the periodic table of elements,
01:29:37.240 | we realized that, of course,
01:29:40.020 | we want to study earth, air, fire, and water.
01:29:42.040 | There's combustion science for fire.
01:29:44.200 | There's sciences for all these other things,
01:29:49.020 | water and so forth.
01:29:50.300 | So we're gonna do science with these things,
01:29:51.900 | but fundamental, no, no.
01:29:54.220 | If you're looking for something fundamental,
01:29:56.220 | those are the wrong building blocks.
01:29:58.500 | Earth has many, many different kinds of elements
01:30:02.160 | that project into the one thing that we call earth.
01:30:04.660 | If you don't understand that there's silicon,
01:30:06.800 | that there's iron,
01:30:07.640 | that there's all these different kinds of things
01:30:09.020 | that project into what we call earth,
01:30:10.820 | you're hopelessly lost.
01:30:14.660 | You're not fundamental.
01:30:15.780 | You're not gonna get there.
01:30:17.020 | And then, after the periodic table,
01:30:19.820 | then we came up with quarks, leptons, and gluons,
01:30:22.260 | the particles of the standard model of physics.
01:30:26.480 | And so, we actually now know
01:30:29.020 | that if you really want to get fundamental,
01:30:31.180 | the periodic table is not it.
01:30:34.620 | It's good for chemistry.
01:30:35.620 | And it's wonderful for chemistry.
01:30:37.180 | But if you're trying to go deep, fundamental,
01:30:39.820 | what is the fundamental science?
01:30:41.460 | That's not it.
01:30:42.300 | You're gonna have to go to quarks, leptons,
01:30:44.540 | and gluons, and so forth.
01:30:46.060 | Well, now, we've discovered space-time itself is doomed.
01:30:51.060 | Quarks, leptons, and gluons
01:30:52.980 | are just irreducible representations
01:30:54.940 | of the symmetries of space-time.
01:30:56.540 | So, the whole framework
01:31:00.300 | on which consciousness research is being based right now
01:31:03.860 | is doomed.
01:31:05.460 | And for me, these are my friends and colleagues
01:31:09.180 | that are doing this.
01:31:10.020 | They're brilliant.
01:31:10.980 | They're brilliant.
01:31:12.700 | My feeling is I'm so sad
01:31:17.740 | that they're stuck with this old framework
01:31:21.740 | because if they weren't stuck
01:31:24.260 | with earth, air, fire, and water,
01:31:25.860 | you could actually make progress.
01:31:27.140 | So, it doesn't matter how smart you are.
01:31:28.700 | If you start with earth, air, fire, and water,
01:31:30.260 | you're not gonna get anywhere, right?
01:31:31.980 | - Can I actually just,
01:31:33.820 | 'cause the word doomed is so interesting.
01:31:36.020 | Let me give you some options, multiple choice quiz.
01:31:39.220 | Is space-time, we could say, is reality,
01:31:43.180 | the way we perceive it, doomed,
01:31:46.460 | wrong, or fake?
01:31:54.140 | Because doomed just means it could still be right,
01:31:59.140 | and we're now ready to go deeper.
01:32:02.700 | It would be that.
01:32:03.860 | - So, it's not wrong.
01:32:05.500 | It's not a complete deviation
01:32:08.340 | from a journey toward the truth.
01:32:10.620 | - Right, it's like earth, air, fire, and water is not wrong.
01:32:13.860 | There is earth, air, fire, and water.
01:32:15.700 | That's a useful framework, but it's not fundamental.
01:32:19.060 | - Right, well, there's also wrong,
01:32:20.820 | which is they used to believe, as I recently learned,
01:32:24.460 | that George Washington, the first president
01:32:28.020 | of the United States, was bled to death
01:32:30.900 | for something that could have been easily treated
01:32:34.020 | because it was believed that you can get,
01:32:36.380 | actually, I need to look into this further,
01:32:38.100 | but I guess you get toxins out or demons out.
01:32:40.820 | I don't know what you're getting out
01:32:41.860 | with the bleeding of a person.
01:32:43.540 | But so that ended up being wrong,
01:32:47.140 | but widely believed as a medical tool.
01:32:50.740 | So, is it also possible that our assumption
01:32:54.620 | of space-time is not just doomed, but is wrong?
01:32:58.940 | - Well, if we believe that it's fundamental, that's wrong.
01:33:02.180 | But if we believe it's a useful tool, that's right.
01:33:05.340 | - But it could, see, but bleeding somebody to death
01:33:08.100 | was believed to be a useful tool.
01:33:10.100 | - And that was wrong.
01:33:11.140 | - It wasn't just not fundamental.
01:33:13.820 | It was very, I'm sure there's cases
01:33:17.260 | in which bleeding somebody would work,
01:33:19.020 | but it would be a very tiny, tiny, tiny percentage of cases.
01:33:23.540 | So it could be that it's wrong.
01:33:25.820 | Like it's a side road that's ultimately leading
01:33:29.140 | to a dead end as opposed to a truck stop or something
01:33:32.620 | that you can get off of.
01:33:34.700 | - My feeling is not the dead end kind of thing.
01:33:37.340 | I think that what the physicists are finding
01:33:39.460 | is that there are these structures beyond space-time,
01:33:41.700 | but they project back into space-time.
01:33:44.260 | And so space-time, when they say space-time is doomed,
01:33:48.240 | they're explicit.
01:33:49.120 | They're saying it's doomed in the sense
01:33:50.420 | that we thought it was fundamental.
01:33:51.780 | It's not fundamental.
01:33:53.220 | It's a useful, absolutely useful
01:33:55.500 | and brilliant data structure.
01:33:57.460 | But there are deeper data structures
01:33:59.500 | like cosmological polytope.
01:34:01.460 | And space-time is not fundamental.
01:34:03.980 | What is doomed in the sense that it's wrong is reductionism.
01:34:08.980 | - Which is saying space-time is fundamental, essentially.
01:34:13.980 | - Right, right.
01:34:14.940 | The idea that somehow being smaller in space and time
01:34:20.380 | or space-time is a fundamental nature of reality,
01:34:23.660 | that's just wrong.
01:34:26.160 | It turned out to be a useful heuristic
01:34:28.180 | for thermodynamics and so forth.
01:34:29.900 | And in several other places,
01:34:31.300 | reductionism has been very useful.
01:34:33.200 | But that's, in some sense,
01:34:35.340 | an artifact of how we use our interface.
01:34:38.100 | - Yeah, so you're saying size doesn't matter.
01:34:41.620 | Okay, this is very important for me to write down.
01:34:43.500 | - Ultimately. - Ultimately.
01:34:45.180 | - Ultimately, right.
01:34:46.020 | I mean, it's useful for theories like thermodynamics
01:34:49.740 | and also for understanding brain networks
01:34:51.460 | in terms of individual neurons
01:34:53.940 | and neurons in terms of chemical systems inside cells.
01:34:58.500 | That's all very, very useful.
01:35:00.340 | But the idea that we're getting
01:35:02.380 | to the more fundamental nature of reality, no.
01:35:05.900 | When you get all the way down in that direction,
01:35:08.260 | you get down to the quarks and gluons,
01:35:09.980 | what you realize is what you've gotten down to
01:35:11.860 | is not fundamental reality,
01:35:13.300 | just the irreducible representations of a data structure.
01:35:16.420 | That's all you've gotten down to.
01:35:17.780 | So you're always stuck inside the data structure.
01:35:21.660 | So you seem to be getting closer and closer.
01:35:23.460 | I mean, I went from neural networks to neurons,
01:35:25.660 | neurons to chemistry, chemistry to particles,
01:35:27.660 | particles to quarks and gluons.
01:35:29.980 | I'm getting closer and closer to the real.
01:35:31.500 | No, I'm getting closer and closer
01:35:32.840 | to the actual structure of the data structure
01:35:35.820 | of space and time, the irreducible representations.
01:35:38.220 | That's what you're getting closer to,
01:35:39.680 | not to a deeper understanding of what's beyond space-time.
01:35:43.180 | - We'll also refer,
01:35:45.020 | we'll return again to this question of dynamics
01:35:48.060 | because you keep saying that space-time is doomed,
01:35:51.840 | but mostly focusing on the space part of that.
01:35:54.420 | It's very interesting to see why time gets a bad rap too
01:35:59.020 | because how do you have dynamics without time
01:36:01.060 | is the thing I'd love to talk to you a little bit about.
01:36:02.920 | But let us return to
01:36:04.180 | your brilliant whirlwind overview
01:36:09.380 | of the different theories of consciousness
01:36:11.420 | that are out there.
01:36:14.420 | What is consciousness, if it's outside of space-time?
01:36:18.780 | - If we think that we want to have a model of consciousness,
01:36:20.860 | we as scientists then have to say,
01:36:23.940 | what do we want to write down?
01:36:25.140 | What kind of mathematical modeling
01:36:26.940 | are we gonna write down, right?
01:36:28.700 | And if you think about it,
01:36:29.540 | there's lots of things that you might want to write down
01:36:31.100 | about consciousness.
01:36:32.100 | It's a fairly complicated subject.
01:36:34.100 | So most of my colleagues are saying,
01:36:36.900 | let's start with matter or neurons
01:36:38.380 | and see what properties of matter
01:36:40.940 | could create consciousness.
01:36:42.500 | But I'm saying that that whole thing is out.
01:36:45.220 | Space-time is doomed, that whole thing is out.
01:36:47.740 | We need to look at consciousness qua consciousness.
01:36:51.420 | In other words, not as something
01:36:52.640 | that arises in space and time,
01:36:54.500 | but perhaps as something that creates space and time
01:36:56.180 | as a data structure.
01:36:57.180 | So what do we want?
01:36:59.460 | And here again, there's no hard and fast rule,
01:37:02.020 | but what you as a scientist have to do
01:37:03.820 | is to pick what you think are the minimal assumptions
01:37:09.740 | that are gonna allow you to boot up a comprehensive theory.
01:37:13.740 | That is the trick.
01:37:15.120 | So what do I want?
01:37:17.340 | So what I chose to do was to have three things.
01:37:20.780 | I said that there are conscious experiences,
01:37:25.260 | feeling of headache, the smell of garlic,
01:37:28.940 | experiencing the color red.
01:37:32.220 | There are, those are conscious.
01:37:33.620 | So that's a primitive of a theory.
01:37:34.860 | And the reason I want few primitives, why?
01:37:36.820 | Because those are the miracles of the theory, right?
01:37:38.780 | The primitives, the assumptions of the theory
01:37:40.700 | are the things you're not going to explain.
01:37:42.380 | Those are the things you assume.
01:37:43.980 | - And by those experiences you particularly mean
01:37:46.300 | there's a subjectiveness to them.
01:37:49.620 | - That's right.
01:37:50.460 | - It's the thing people refer to
01:37:51.660 | with the hard problem of consciousness:
01:37:54.020 | it feels like something to look at the color red, okay.
01:37:58.140 | - Exactly right.
01:37:58.980 | It feels like something to have a headache
01:38:00.380 | or to feel upset to your stomach.
01:38:02.780 | It feels like something.
01:38:04.540 | And so I'm going to grant that in this theory
01:38:09.540 | there are experiences and they're fundamental in some sense.
01:38:12.460 | So conscious experience.
01:38:13.820 | So they're not derived from physics.
01:38:15.700 | They're not functional properties of particles.
01:38:18.780 | They are sui generis.
01:38:20.180 | They exist.
01:38:21.780 | Just like we assume space-time exists.
01:38:23.740 | I'm now saying space-time is just a data structure.
01:38:26.300 | It doesn't exist independent of conscious experiences.
01:38:29.500 | - Sorry to interrupt once again,
01:38:30.740 | but should we be focusing in your thinking on humans alone?
01:38:35.460 | Or is there something about in relation
01:38:40.460 | to other kinds of organisms
01:38:42.180 | that have a sufficiently high level of complexity?
01:38:44.620 | Or even, or is there some kind of generalization
01:38:49.620 | of the panpsychist idea
01:38:52.060 | that consciousness permeates all matter
01:38:55.700 | outside of the usual definition
01:38:58.700 | of what matter is inside space-time?
01:39:01.180 | - So it's beyond human consciousness.
01:39:04.260 | Human consciousness from my point of view
01:39:06.340 | would be one of a countless variety of consciousnesses.
01:39:10.500 | And even within human consciousness,
01:39:12.380 | there's countless variety of consciousnesses within us.
01:39:15.780 | You have your left and right hemisphere.
01:39:18.420 | And apparently if you split the corpus callosum,
01:39:20.700 | the personality of the left hemisphere
01:39:22.780 | and the religious beliefs of the left hemisphere
01:39:24.340 | can be very different from the right hemisphere.
01:39:26.380 | And their conscious experiences can be disjoint.
01:39:30.660 | One could have one conscious experience.
01:39:32.500 | They can play 20 questions.
01:39:33.860 | The left hemisphere can have an idea in its mind
01:39:35.820 | and the right hemisphere has to guess.
01:39:37.340 | And it might not get it.
01:39:38.860 | So even within you,
01:39:40.660 | there is more than just one consciousness.
01:39:43.020 | It's lots of consciousnesses.
01:39:45.220 | So the general theory of consciousness that I'm after
01:39:48.660 | is not just human consciousness.
01:39:50.340 | It's going to be just consciousness.
01:39:51.900 | And I presume human consciousness is a tiny drop
01:39:56.700 | in the bucket of the infinite variety of consciousnesses.
01:39:59.300 | - That said, I should clarify
01:40:01.100 | that the black hole of consciousness
01:40:03.780 | is the house cat.
01:40:07.380 | I'm pretty sure the cat
01:40:09.180 | is the embodiment of evil
01:40:11.780 | and lacks all capacity for consciousness or compassion.
01:40:15.980 | So I just want to lay that on the table.
01:40:16.820 | That's a theory I'm working on.
01:40:18.180 | I don't have any good evidence.
01:40:19.500 | - The black cat.
01:40:20.340 | - Intuit, that's just a shout out.
01:40:21.900 | Sorry to distract.
01:40:24.300 | So that's the first assumption.
01:40:25.660 | - The first assumption.
01:40:27.180 | The second assumption is that
01:40:29.160 | these experiences have consequences.
01:40:31.660 | So I'm going to say that conscious experiences
01:40:35.020 | can trigger other conscious experiences somehow.
01:40:37.420 | So really in some sense,
01:40:40.740 | there's two basic assumptions.
01:40:43.740 | - There's some kind of causality.
01:40:46.180 | Is there a chain of causality?
01:40:47.780 | Does this relate to dynamics?
01:40:50.300 | - I'll say there's a probabilistic relationship.
01:40:52.700 | So I'm trying to be as nonspecific to begin with
01:40:58.860 | and see where it leads me.
01:41:01.060 | So what I can write down are probability spaces.
01:41:03.980 | So probability space,
01:41:05.100 | which contains the conscious experiences
01:41:08.180 | that this consciousness can have.
01:41:09.540 | So I call this a conscious agent.
01:41:13.580 | This technical thing,
01:41:15.620 | now Annaka Harris and I have talked about this
01:41:19.260 | and she rightly cautions me that people will think
01:41:22.860 | that I'm bringing in a notion of a self or agency
01:41:25.100 | and so forth when I say conscious agent.
01:41:27.460 | So I just want to say that I use the term conscious agent
01:41:30.020 | merely as a technical term.
01:41:32.300 | There is no notion of self
01:41:34.220 | in my fundamental definition of a conscious agent.
01:41:36.640 | There are only experiences and probabilistic relationships
01:41:41.060 | of how they trigger other experiences.
01:41:43.020 | - So the agent is the generator of the conscious experience?
01:41:46.300 | - The agent is a mathematical structure
01:41:49.580 | that includes a probability measure,
01:41:51.940 | a probability space of possible conscious experiences
01:41:56.100 | and a Markovian kernel,
01:41:58.220 | which describes how if this agent
01:42:01.080 | has certain conscious experiences,
01:42:02.780 | how that will affect the experiences
01:42:04.300 | of other conscious agents, including itself.
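[Aside for technically inclined readers: a minimal sketch, in Python, of the two ingredients just named, a probability space of possible experiences and a Markovian kernel describing how one experience probabilistically triggers the next. The class name, the toy experience labels, and all the numbers are illustrative assumptions, not Hoffman's published formalism.]

```python
import numpy as np

class ConsciousAgent:
    """Toy sketch: a finite space of possible experiences, a probability
    measure over them, and a Markovian kernel that says how the current
    experience probabilistically triggers the next one."""

    def __init__(self, experiences, measure, kernel):
        self.experiences = list(experiences)
        self.measure = np.asarray(measure, dtype=float)   # P(experience)
        self.kernel = np.asarray(kernel, dtype=float)     # kernel[i, j] = P(next j | current i)
        assert np.isclose(self.measure.sum(), 1.0)
        assert np.allclose(self.kernel.sum(axis=1), 1.0)

    def sample_initial(self, rng):
        """Draw a first experience from the probability measure."""
        return rng.choice(len(self.experiences), p=self.measure)

    def step(self, rng, current):
        """Given the current experience index, sample the next one from the kernel."""
        return rng.choice(len(self.experiences), p=self.kernel[current])

# Purely illustrative agent: the labels and numbers are arbitrary.
rng = np.random.default_rng(0)
agent = ConsciousAgent(
    experiences=["red", "taste of chocolate", "headache"],
    measure=[0.5, 0.3, 0.2],
    kernel=[[0.6, 0.3, 0.1],
            [0.2, 0.7, 0.1],
            [0.3, 0.3, 0.4]],
)
i = agent.sample_initial(rng)
for _ in range(5):
    i = agent.step(rng, i)
    print(agent.experiences[i])
```

Sampling from it just walks the kernel: each experience makes some next experiences more likely than others, which is all the second assumption asserts.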
01:42:07.500 | - But you don't think of that as a self?
01:42:09.900 | - No, there is no notion of a self here.
01:42:13.700 | There's no notion of really of an agent.
01:42:16.600 | - But is there a locality?
01:42:19.120 | Is there an organism? - There's no space.
01:42:22.900 | So these are conscious units, conscious entities.
01:42:28.260 | - But they're distinct in some way
01:42:30.460 | 'cause they have to interact.
01:42:32.260 | - Well, so here's the interesting thing.
01:42:33.620 | When we write down the mathematics,
01:42:36.100 | when you have two of these conscious agents interacting,
01:42:39.060 | the pair satisfy a definition of a conscious agent.
01:42:43.700 | So they are a single conscious agent.
01:42:46.140 | So there is one conscious agent.
01:42:47.860 | - Yeah.
01:42:48.700 | - But it has a nice analytic decomposition
01:42:52.360 | into as many conscious agents as you wish.
01:42:53.860 | - So that's a nice interface.
01:42:55.500 | - It's a very useful scientific interface.
01:42:58.700 | It's a scale-free, or if you like a fractal-like,
01:43:02.500 | approach to it in which we can use the same unit of analysis
01:43:06.320 | at all scales in studying consciousness.
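[Aside, continuing the sketch above: one way to see how a pair of interacting agents can again satisfy the same definition is to put them on the product of their experience spaces. The joint kernel below assumes the two agents update independently, a simplification chosen only for illustration; the combination theorem in Hoffman's papers is more general.]

```python
import numpy as np

def combine(agent_a, agent_b):
    """Form one agent on the product of the two experience spaces.

    Illustrative simplification: the joint kernel is the Kronecker product
    of the two kernels, i.e. the agents update independently here.
    """
    experiences = [(a, b) for a in agent_a.experiences for b in agent_b.experiences]
    measure = np.kron(agent_a.measure, agent_b.measure)   # product measure, still sums to 1
    kernel = np.kron(agent_a.kernel, agent_b.kernel)      # still row-stochastic
    return ConsciousAgent(experiences, measure, kernel)

pair = combine(agent, agent)                       # reuses the ConsciousAgent sketch above
print(len(pair.experiences))                       # 9 joint experiences
print(np.allclose(pair.kernel.sum(axis=1), 1.0))   # the pair again fits the same definition
```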
01:43:09.740 | But if I want to talk about,
01:43:12.220 | so there's no notion of learning, memory, problem-solving,
01:43:17.140 | intelligence, self, agency.
01:43:20.800 | So none of that is fundamental.
01:43:24.140 | So, and the reason I did that was
01:43:26.360 | because I want to assume as little as possible.
01:43:29.860 | Everything I assume is a miracle in the theory.
01:43:32.180 | It's not something you explain, it's something you assume.
01:43:34.500 | So I have to build networks of conscious agents.
01:43:37.420 | If I want to have a notion of a self, I have to build a self.
01:43:41.540 | I have to build learning, memory, problem-solving,
01:43:43.920 | intelligence, and planning, all these different things.
01:43:46.720 | I have to build networks of conscious agents to do that.
01:43:49.500 | It's a trivial theorem that networks of conscious agents
01:43:52.380 | are computationally universal, that's trivial.
01:43:54.660 | So anything that we can do with neural networks
01:43:56.780 | or, you know, automata, you can do
01:43:59.180 | with networks of conscious agents, that's trivial.
01:44:01.580 | But you can also do more.
01:44:04.940 | The events in the probability space need not be computable.
01:44:08.820 | So the Markovian dynamics is not restricted
01:44:12.420 | to computable functions, because the very events themselves
01:44:16.380 | need not be computable.
01:44:17.860 | So this can capture any computable theory.
01:44:21.180 | Anything we can do with neural networks,
01:44:22.580 | we can do with conscious agent networks.
01:44:25.300 | But it leaves open the door for the possibility
01:44:28.120 | of non-computable interactions between conscious agents.
01:44:32.520 | So we have to, if we want a theory of memory,
01:44:35.600 | we have to build it.
01:44:38.080 | And there's lots of different ways you could build.
01:44:39.580 | We've actually got a paper, Chris Fields took the lead
01:44:41.820 | on this, and we have a paper called Conscious Agent Networks
01:44:45.180 | where Chris takes the lead and shows how to use
01:44:47.700 | these networks of conscious agents to build memory
01:44:49.620 | and to build primitive kinds of learning.
01:44:52.780 | - But can you provide some intuition
01:44:56.700 | of what networks of conscious agents
01:44:59.980 | help you with,
01:45:04.080 | first of all, what that looks like?
01:45:05.880 | And I don't just mean mathematically.
01:45:08.940 | Of course, maybe that might help build up intuition,
01:45:11.580 | but how that helps us potentially solve
01:45:14.020 | the hard problem of consciousness.
01:45:16.460 | - Right.
01:45:17.660 | - Or is that baked in, that that exists?
01:45:21.620 | Can you solve the hard problem of consciousness,
01:45:27.100 | why it tastes delicious when you eat a delicious ice cream
01:45:30.340 | with networks of conscious agents?
01:45:33.860 | Or is that taken as an assumption?
01:45:36.020 | - So the standard way the hard problem is thought of
01:45:40.060 | is we're assuming space and time in particles,
01:45:44.580 | or neurons, for example.
01:45:47.100 | These are just physical things that have no consciousness.
01:45:50.020 | And we have to explain how the conscious experience
01:45:51.760 | of the taste of chocolate could emerge from those.
01:45:54.940 | So that's the typical hard problem of consciousness
01:45:57.100 | is that problem, right?
01:45:58.740 | How do you boot up the taste of chocolate,
01:46:02.060 | the experience of the taste of chocolate from neurons, say,
01:46:06.260 | or the right kind of artificial intelligence circuitry?
01:46:10.560 | How do you boot that up?
01:46:11.660 | That's typically what the hard problem of consciousness
01:46:14.300 | means to researchers.
01:46:15.820 | Notice that I'm changing the problem.
01:46:18.340 | I'm not trying to boot up conscious experiences
01:46:21.380 | from the dynamics of neurons or silicon
01:46:23.780 | or something like that.
01:46:25.020 | I'm saying that that's the wrong problem.
01:46:27.840 | My hard problem would go in the other direction.
01:46:29.900 | If I start with conscious experiences,
01:46:33.500 | how do I build up space and time?
01:46:35.620 | How do I build up what I call the physical world?
01:46:37.540 | How do I build up what we call brains?
01:46:39.580 | Because I'm saying consciousness
01:46:43.060 | is not something that brains do.
01:46:45.620 | Brains are something that consciousness makes up.
01:46:48.020 | It's among the experience,
01:46:50.820 | it's an ephemeral experience in consciousness.
01:46:54.540 | I look inside, so to be very, very clear,
01:46:57.020 | right now I have no neurons.
01:46:58.740 | If you looked, you would see neurons.
01:47:03.200 | That's a data structure that you would create on the fly,
01:47:05.300 | and it's a very useful one.
01:47:06.440 | As soon as you look away,
01:47:08.540 | you garbage collect that data structure,
01:47:10.020 | just like that Necker cube that I was talking about
01:47:11.820 | on the piece of paper.
01:47:12.960 | When you look, you see a 3D cube.
01:47:16.080 | You create it on the fly.
01:47:17.360 | As soon as you look away, that's gone.
01:47:19.600 | - When you say you, you mean a human being scientist?
01:47:22.920 | - Right now, that's right.
01:47:24.600 | More generally, it'll be conscious agents,
01:47:26.960 | because as you pointed out,
01:47:28.360 | am I asking for a theory of consciousness only about humans?
01:47:31.720 | No, it's consciousness,
01:47:33.920 | which human consciousness is just a tiny sliver.
01:47:38.200 | - But you are saying that there is,
01:47:40.160 | that's a useful data structure.
01:47:41.880 | How many other data structures are there?
01:47:43.800 | That's why I said you, human.
01:47:45.480 | If there's another Earth,
01:47:47.520 | if there's another alien civilization
01:47:49.680 | and doing these kinds of investigations,
01:47:51.680 | would they come up with similar data structures?
01:47:54.320 | - Probably not.
01:47:55.160 | - What is the space of data structures,
01:47:56.560 | I guess is what I'm asking.
01:47:58.000 | - My guess is that if consciousness is fundamental,
01:48:04.040 | consciousness is all there is,
01:48:05.540 | then the only thing that mathematical structure can be about
01:48:11.000 | is possibilities of consciousness.
01:48:13.500 | And that suggests to me
01:48:17.900 | that there could be an infinite variety of consciousnesses.
01:48:21.260 | And a vanishingly small fraction of them
01:48:25.140 | use space-time data structures,
01:48:27.820 | and the kinds of structures that we use.
01:48:29.540 | There's an infinite variety of data structures.
01:48:32.020 | Now, this is very similar to something
01:48:33.620 | that Max Tegmark has said, but I want to distinguish it.
01:48:36.060 | He has this level four multiverse idea.
01:48:40.460 | He thinks that mathematics is fundamental.
01:48:43.060 | And so that's the fundamental reality.
01:48:45.020 | And since there's an infinite variety of,
01:48:47.380 | endless variety of mathematical structures,
01:48:49.020 | there's an infinite variety of multiverses, in his view.
01:48:52.580 | I'm saying something similar in spirit,
01:48:55.320 | but importantly different.
01:48:56.980 | There's an infinite variety of mathematical structures,
01:48:59.060 | absolutely.
01:48:59.880 | But mathematics isn't the fundamental reality
01:49:03.180 | in this framework.
01:49:04.980 | Consciousness is.
01:49:06.660 | And mathematics is to consciousness,
01:49:09.360 | like bones are to an organism.
01:49:11.980 | You need the bones.
01:49:12.860 | So mathematics is not divorced from consciousness,
01:49:16.480 | but it's not the entirety of consciousness by any means.
01:49:20.340 | And so there's an infinite variety of consciousnesses
01:49:24.420 | and signaling games that consciousnesses
01:49:27.260 | could interact via.
01:49:30.140 | And therefore worlds, common worlds, data structures,
01:49:34.340 | that they can use to communicate.
01:49:37.200 | So space and time is just one of an infinite variety.
01:49:40.220 | And so I think that what we'll find is that
01:49:43.580 | as we go outside of our little space time bubble,
01:49:46.240 | we will encounter utterly alien forms
01:49:51.700 | of conscious experience that we may not be able to
01:49:56.460 | really comprehend in the following sense.
01:50:00.300 | If I ask you to imagine a color
01:50:03.400 | that you've never seen before,
01:50:04.900 | does anything happen?
01:50:07.040 | (Lex laughs)
01:50:08.780 | Nothing happens.
01:50:09.620 | - No.
01:50:10.820 | - Nothing happens.
01:50:11.660 | And that's just one color.
01:50:13.260 | I'm asking for just a color.
01:50:14.940 | We actually know, by the way,
01:50:16.620 | that apparently there are women called tetrachromats
01:50:20.660 | who have four color receptors, not just three.
01:50:25.900 | And Kimberly Jamison and others who've studied these women
01:50:29.140 | And Kimberly Jameson and others who've studied these women
01:50:31.380 | a new dimension of color experience
01:50:33.260 | that the rest of us don't have.
01:50:36.180 | So these women are apparently living in a world of color
01:50:40.540 | that you and I can't even concretely imagine.
01:50:42.580 | No man can imagine them.
01:50:44.100 | - Yeah.
01:50:44.940 | - And yet they're real color experiences.
01:50:47.140 | And so in that sense, I'm saying,
01:50:49.140 | now take that little baby step,
01:50:51.260 | oh, there are women who have color experiences
01:50:53.280 | that I could never have.
01:50:54.120 | Well, that's shocking.
01:50:55.300 | Now take that infinite.
01:50:57.820 | There are consciousnesses where every aspect
01:51:00.580 | of their experiences is like that new color.
01:51:03.380 | It's something utterly alien to you.
01:51:05.740 | You have nothing like that.
01:51:07.820 | And yet these are all possible varieties
01:51:10.020 | of conscious experience.
01:51:11.300 | - When you say there's a lot of consciousnesses,
01:51:13.340 | is a singular consciousness
01:51:16.380 | basically the set of possible experiences you can have
01:51:19.900 | in that subjective way,
01:51:21.220 | as opposed to the underlying mechanism?
01:51:25.040 | 'Cause you say that having an extra color receptor,
01:51:30.860 | the ability to have new experiences,
01:51:34.100 | is somehow a different consciousness.
01:51:36.660 | Is there a way to see that as all the same consciousness,
01:51:39.660 | the subjectivity itself?
01:51:41.500 | - Right, because when we have two of these conscious agents
01:51:45.180 | interacting in the mathematics,
01:51:46.720 | they actually satisfy the definition of a conscious agent.
01:51:49.540 | So in fact, they are a single conscious agent.
01:51:52.220 | So in fact, one way to think about what I'm saying,
01:51:55.500 | I'm postulating with my colleagues,
01:51:57.340 | Chetan and Chris and others, Robert Prentner and so forth.
01:52:01.940 | There is one big conscious agent, infinitely complicated.
01:52:05.300 | But fortunately, we can, for analytic purposes,
01:52:08.320 | break it down all the way to, in some sense,
01:52:10.740 | the simplest conscious agent,
01:52:11.740 | which has one conscious experience, one.
01:52:13.960 | This one agent can experience red 35, that's it.
01:52:18.340 | That's what it experiences.
01:52:20.260 | You can get all the way down to that.
01:52:22.860 | - So you think it's possible that consciousness,
01:52:26.860 | whatever that is,
01:52:30.300 | is much more, is fundamental,
01:52:34.020 | or at least much more in the direction of the fundamental
01:52:37.500 | than is space-time as we perceive it?
01:52:40.500 | - That's the proposal.
01:52:41.660 | And therefore, what I have to do,
01:52:45.020 | in terms of the hard problem of consciousness,
01:52:47.540 | is to show how dynamical systems of conscious agents
01:52:51.660 | could lead to what we call space and time and neurons
01:52:55.180 | and brain activity.
01:52:56.700 | In other words, we have to show how you get space-time
01:53:00.380 | and physical objects
01:53:02.420 | entirely from a theory of conscious agents
01:53:06.260 | outside of space-time, with the dynamics
01:53:08.760 | outside of space-time.
01:53:09.940 | And I can tell you how we plan to do that,
01:53:13.940 | but that's the idea.
01:53:15.780 | - Okay, the magic of it, the chocolate is delicious.
01:53:18.940 | So there's a mathematical kind of thing
01:53:22.740 | that we could say here, how it can emerge
01:53:24.460 | within this system of networks of conscious agents,
01:53:27.780 | but is there going to be at the end of the proof
01:53:32.780 | why chocolate is so delicious?
01:53:35.980 | Or no?
01:53:38.300 | I guess I'm going to ask different kinds of dumb questions
01:53:41.980 | to try to sneak up.
01:53:43.220 | - Oh, well, that's the right question.
01:53:44.780 | And when I say that I took conscious experiences
01:53:47.020 | as fundamental, what that means is,
01:53:49.540 | in the current version of my theory,
01:53:51.580 | I'm not explaining conscious experiences
01:53:54.140 | where they came from.
01:53:55.700 | That's the miracle, that's one of the miracles.
01:53:58.160 | So I have two miracles in my theory.
01:53:59.620 | There are conscious experiences, like the taste of chocolate
01:54:02.300 | and that there's a probabilistic relationship.
01:54:06.180 | When certain conscious experiences occur,
01:54:08.660 | others are more likely to occur.
01:54:10.580 | Those are the two miracles that--
01:54:12.460 | - It's possible to get beyond that
01:54:17.460 | and somehow start to chip away
01:54:19.560 | at the miracle-ness of that miracle,
01:54:22.380 | that chocolate is delicious.
01:54:24.660 | - I hope so.
01:54:25.500 | I've got my hands full with what I'm doing right now,
01:54:27.620 | but I can just say at top level how I would think about that.
01:54:32.220 | That would get at this consciousness without form.
01:54:35.220 | This is really tough, because it's consciousness
01:54:46.140 | without form versus the various forms
01:54:48.040 | that consciousness takes for the experiences that it has.
01:54:52.080 | - Right, right.
01:54:55.080 | - So when I write down a probability space
01:55:00.080 | for these conscious experiences,
01:55:02.960 | I say, here's a probability space
01:55:04.280 | for the possible conscious experiences.
01:55:06.180 | It's just like when I write down a probability space
01:55:08.760 | for an experiment, like I'm gonna flip a coin twice,
01:55:12.480 | and I want to look at the probabilities of various outcomes.
01:55:15.200 | So I have to write down a probability space.
01:55:16.360 | There could be heads, heads, heads, tails,
01:55:18.600 | tails, heads, tails, tails.
01:55:20.400 | So, as in any class in probability,
01:55:23.400 | you're told: write down your probability space.
01:55:25.360 | If you don't write down your probability space,
01:55:26.720 | you can't get started.
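[Aside: the coin-flip probability space he mentions, written down explicitly. The uniform measure assumes a fair coin, which is just the standard textbook assumption.]

```python
from itertools import product

# Sample space for flipping a coin twice, with a fair-coin measure (an assumption).
outcomes = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
measure = {o: 1 / len(outcomes) for o in outcomes}
print(measure)                             # each of HH, HT, TH, TT gets probability 0.25
```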
01:55:28.160 | So here's my probability space for consciousness.
01:55:30.640 | How do I want to interpret that structure?
01:55:33.240 | The structure's just sitting there.
01:55:34.580 | There's gonna be a dynamics that happens on it, right?
01:55:37.520 | Experiences appear and then they disappear,
01:55:39.600 | just like heads appears and disappears.
01:55:42.000 | So one way to think about that fundamental probability space
01:55:47.000 | is that corresponds to consciousness without any content.
01:55:51.260 | The infinite consciousness
01:55:54.400 | that transcends any particular content.
01:55:58.840 | - Well, do you think of that as a mechanism, as a thing,
01:56:02.440 | like the rules that govern the dynamics of the thing
01:56:06.960 | outside of space-time?
01:56:08.540 | Isn't that, if you think consciousness is fundamental,
01:56:10.680 | isn't it essentially getting like,
01:56:12.760 | it is solving the hard problem,
01:56:14.920 | which is like, from where does this thing pop up,
01:56:19.920 | which is the mechanism of the thing popping up.
01:56:24.320 | Whatever the consciousness is, the different kinds,
01:56:27.360 | so on, that mechanism.
01:56:29.680 | And also, the question I want to ask is,
01:56:32.440 | how tricky do you think it is to solve that problem?
01:56:37.440 | You've solved a lot of difficult problems
01:56:40.280 | throughout the history of humanity.
01:56:42.080 | There's probably more problems to solve left
01:56:47.600 | than we've solved by like an infinity.
01:56:51.520 | But along that long journey of intelligent species,
01:56:57.160 | when will we solve this consciousness one?
01:57:01.080 | Just one way to measure the difficulty of the problem.
01:57:04.040 | - So I'll give two answers.
01:57:05.600 | There's one problem I think we can solve,
01:57:07.640 | but we haven't solved yet.
01:57:09.920 | And that is the reverse of what my colleagues
01:57:12.840 | call the hard problem.
01:57:13.940 | The problem of how do you start with conscious experiences
01:57:17.160 | in the way that I've just described them,
01:57:18.560 | and the dynamics, and build up space and time and brains,
01:57:21.840 | that I think is a tough technical problem,
01:57:25.120 | but it's in principle solvable.
01:57:26.240 | So I think we can solve that.
01:57:27.760 | So we would solve the hard problem,
01:57:29.400 | not by showing how brains create consciousness,
01:57:31.280 | but how networks of conscious agents
01:57:33.080 | create what we call the symbols that we call brains.
01:57:38.520 | So that, I think--
01:57:41.400 | - But does that allow you to, so that's interesting, that's an interesting idea.
01:57:43.480 | Consciousness creates the brain,
01:57:44.720 | not the brain creates consciousness.
01:57:46.440 | But does that allow you to build the thing?
01:57:49.440 | My guess is that it will enable unbelievable technologies.
01:57:53.480 | And I'll tell you why.
01:57:55.880 | I think it plugs into the work
01:57:57.760 | that the physicists are doing.
01:57:58.880 | So this theory of consciousness will be even deeper
01:58:01.920 | than the structures that the physicists are finding,
01:58:03.960 | like the amplituhedron.
01:58:05.120 | But the other answer to your question
01:58:08.100 | is less positive.
01:58:09.280 | As I said earlier, I think that there is no such thing
01:58:13.080 | as a theory of everything.
01:58:14.380 | So that I think that my,
01:58:19.200 | the theory that my team is working on,
01:58:21.720 | this conscious agent theory, is just a 1.0 theory.
01:58:25.440 | We're using probability spaces and Markovian kernels.
01:58:29.920 | I can easily see people now saying,
01:58:31.720 | "Well, we can do better if we go to category theory,
01:58:35.000 | "and we can get a deeper, perhaps more interesting."
01:58:38.900 | And then someone will say,
01:58:39.740 | "Well, now I'll go to topoi theory."
01:58:41.580 | And then there'll be,
01:58:42.460 | so I imagine that there'll be conscious agents,
01:58:45.620 | five, 10, 3 trillion, 0.0.
01:58:49.260 | But I think it will never end.
01:58:51.800 | I think ultimately, this question
01:58:54.400 | that we sort of put our fingers on,
01:58:55.940 | of how does the formless give birth to form?
01:59:03.100 | To the taste, the wonderful taste of chocolate.
01:59:05.820 | I think that we will always go deeper and deeper,
01:59:10.240 | but we will never solve that.
01:59:11.700 | That in some sense, that will be a primitive.
01:59:15.160 | I hope I'm wrong.
01:59:16.780 | Maybe it's just the limits of my current imagination.
01:59:21.260 | So I'll just say my imagination right now
01:59:26.040 | doesn't peer that deep.
01:59:29.280 | Hopefully, so I don't, by the way, I'm saying this,
01:59:31.680 | I don't want to discourage some brilliant 20-year-old
01:59:35.100 | who then later on proves me dead wrong.
01:59:37.940 | I hope to be proven dead wrong.
01:59:39.340 | - Just like you said, essentially from now,
01:59:41.060 | everything we're saying now, everything you're saying,
01:59:43.340 | all of your theories will be laughing stock.
01:59:45.580 | They will respect the puzzle-solving abilities
01:59:50.580 | and how much we were able to do with so little,
01:59:54.060 | but outside of that, it will all be just,
01:59:58.240 | the silliness will be entertainment for a teenager.
02:00:01.760 | - Especially the silliness when we thought
02:00:03.120 | that we were so smart and we knew it all.
02:00:05.120 | - So it would be interesting to explore your ideas
02:00:08.320 | by contrasting, you mentioned Annaka, Annaka Harris,
02:00:12.560 | you mentioned Philip Goff.
02:00:14.560 | So outside of, if you're not allowed to say
02:00:18.320 | the fundamental disagreement is the fact
02:00:21.640 | that space-time is fundamental,
02:00:23.320 | what are interesting distinctions
02:00:26.640 | between ideas of consciousness
02:00:28.480 | between you and Annika, for example?
02:00:30.080 | You guys have, you've been on a podcast together,
02:00:33.400 | I'm sure in private,
02:00:36.280 | you guys have some incredible conversations.
02:00:38.120 | So where are some interesting sticking points,
02:00:41.600 | some interesting disagreements,
02:00:44.040 | let's say with Annaka first?
02:00:45.700 | Maybe there'll be a few other people.
02:00:47.680 | - Well, Annaka and I just had a conversation this morning
02:00:49.720 | where we were talking about our ideas
02:00:51.560 | and what we discovered really in our conversation
02:00:53.840 | was that we're pretty much on the same page.
02:00:57.080 | It was really just--
02:00:58.680 | - About consciousness.
02:00:59.520 | - About consciousness, yeah.
02:01:00.960 | Our ideas about consciousness
02:01:02.100 | are pretty much on the same page.
02:01:04.080 | She rightly has cautioned me to,
02:01:07.880 | when I talk about conscious agents,
02:01:10.120 | to point out that the notion of agency
02:01:12.040 | is not fundamental in my theory.
02:01:15.000 | The notion of self is not fundamental
02:01:17.280 | and that's absolutely true.
02:01:18.920 | I can use this network of conscious agents,
02:01:22.560 | I now use as a technical term,
02:01:25.040 | conscious agents is a technical term
02:01:26.440 | for that probability space with the Markovian dynamics.
02:01:29.360 | I can use that to build models of a self
02:01:31.920 | and to build models of agency,
02:01:33.280 | but they're not fundamental.
02:01:35.260 | So she has really been very helpful
02:01:40.260 | in helping me to be a little bit clear about these ideas
02:01:43.640 | and not say things that are misleading.
02:01:45.920 | - Sure.
02:01:47.200 | I mean, this is the interesting thing
02:01:48.440 | about language actually,
02:01:50.040 | is that language, quite obviously,
02:01:52.320 | is an interface to truth.
02:01:56.680 | It's so fascinating that individual words
02:02:00.440 | can have so much ambiguity
02:02:05.320 | and the specific choices of a word
02:02:10.800 | within a particular sentence,
02:02:12.160 | within the context of a sentence,
02:02:13.640 | can have such a difference in meaning.
02:02:17.280 | It's quite fascinating,
02:02:18.640 | especially when you're talking about topics
02:02:20.400 | like consciousness,
02:02:22.280 | because it's a very loaded term.
02:02:23.960 | It means a lot of things to a lot of people
02:02:26.160 | and the entire concept is shrouded in mystery.
02:02:29.160 | So combination of the fact that it's a loaded term
02:02:32.480 | and that there's a lot of mystery,
02:02:34.340 | people can just interpret it in all kinds of ways.
02:02:36.960 | And so you have to be both precise
02:02:39.160 | and help them avoid getting stuck
02:02:43.640 | on some kind of side road of miscommunication,
02:02:48.120 | lost in translation because you used the wrong word.
02:02:50.840 | That's interesting.
02:02:51.680 | I mean, 'cause for a lot of people,
02:02:54.360 | consciousness is ultimately connected to a self.
02:02:59.100 | I mean, our experience of consciousness
02:03:04.920 | is very, it's connected to this ego.
02:03:08.840 | I mean, I just, I mean, what else could it possibly be?
02:03:12.840 | I can't even, how do you begin to comprehend,
02:03:15.760 | to visualize, to conceptualize a consciousness
02:03:19.000 | that's not connected to this particular organism?
02:03:22.400 | - I have a way of thinking about this whole problem now
02:03:26.500 | that comes out of this framework that's different.
02:03:29.680 | So we can imagine a dynamics of consciousness,
02:03:34.260 | not in space and time, just abstractly.
02:03:37.660 | It could be cooperative, for all we know.
02:03:40.560 | It could be very friendly, I don't know.
02:03:43.960 | And you can set up a dynamics, a Markovian dynamics
02:03:46.720 | that is so-called stationary.
02:03:48.820 | And that's a technical term,
02:03:50.160 | which means that the entropy effectively is not increasing.
02:03:53.400 | There is some entropy, but it's constant.
02:03:55.160 | So there's no increasing entropy.
02:03:56.960 | And in that sense, the dynamics is timeless.
02:04:00.340 | There is no entropic time.
02:04:02.980 | But it's a trivial theorem, three-line proof,
02:04:08.020 | that if you have a stationary Markovian dynamics,
02:04:12.080 | any projection that you make of that dynamics
02:04:14.520 | by conditional probability,
02:04:16.240 | and if you want, I can state a little bit more,
02:04:17.960 | even more mathematically precisely
02:04:19.360 | for some readers or listeners.
02:04:21.920 | But if any projection you take by conditional probability,
02:04:25.120 | the induced image of that Markov chain
02:04:28.980 | will have increasing entropy.
02:04:30.440 | You will have entropic time.
02:04:34.080 | So I'll be very, very precise.
02:04:36.160 | I'll have a Markov chain, X1, X2, through Xn,
02:04:40.720 | where n goes to infinity, right?
02:04:42.380 | The entropy H, capital H of Xn,
02:04:48.040 | is equal to the entropy H of Xn minus one for all n.
02:04:51.600 | So the entropy is the same.
02:04:54.180 | But it's a theorem that H of Xn, say, given X sub one,
02:05:00.660 | is greater than or equal to H of Xn minus one, given X1.
02:05:10.480 | - Sure, where does the greater come from?
02:05:13.600 | - Because, well, the three-line proof,
02:05:15.940 | H of Xn given X1 is greater than or equal to
02:05:22.240 | H of Xn given X1 and X2, because conditioning reduces entropy.
02:05:27.520 | But then H of Xn, given X1 and X2,
02:05:35.360 | is equal to H of Xn, given X2,
02:05:39.640 | by the Markov property.
02:05:43.080 | And then, because it's stationary,
02:05:46.220 | it's equal to H of Xn minus one, given X1.
02:05:51.220 | - Sure, sure.
02:05:54.080 | - I have to write it down.
02:05:54.920 | But anyway, there's a three-line proof.
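[Aside: the three lines written out. This is a reconstruction of the standard information-theoretic argument being sketched above, not a quotation: for a stationary Markov chain X1, X2, ..., the conditional entropy H(X_n | X_1) is non-decreasing in n, even though the unconditional entropy H(X_n) stays constant.]

```latex
\begin{align*}
H(X_n \mid X_1) &\ge H(X_n \mid X_1, X_2) && \text{conditioning cannot increase entropy} \\
                &=   H(X_n \mid X_2)       && \text{Markov property} \\
                &=   H(X_{n-1} \mid X_1)   && \text{stationarity (shift the chain by one step)}
\end{align*}
```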
02:05:56.800 | - Sure.
02:05:57.640 | But the assumption of stationarity,
02:06:00.940 | we're using a lot of terms that people won't understand.
02:06:04.040 | - Right, right. - Doesn't matter.
02:06:05.740 | So there's some kind of, Markovian dynamics
02:06:10.240 | is basically trying to model some kind of system
02:06:13.560 | with some probabilities, and there's agents,
02:06:15.720 | and they interact in some kind of way,
02:06:17.280 | and you can say something about that system
02:06:20.000 | as it evolves. Stationarity:
02:06:22.920 | a stationary system is one that has certain properties
02:06:27.920 | in terms of entropy. Very well.
02:06:30.880 | But we don't know if it's stationary or not.
02:06:33.040 | We don't know what the properties are.
02:06:35.520 | - Right.
02:06:36.480 | - So you have to kind of take assumptions and see,
02:06:38.560 | okay, well, what does the system behave like
02:06:42.040 | under these different properties?
02:06:43.560 | The more constraints, the more assumptions you take,
02:06:46.360 | the more interesting, powerful things you can say,
02:06:49.920 | but sometimes they're limiting.
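[Aside: a quick numerical sanity check of that theorem, in Python. The 3-state kernel below is arbitrary and purely illustrative; the point is only that H(X_n | X_1) comes out non-decreasing even though the chain itself is stationary.]

```python
import numpy as np

# Numerical check: for a stationary Markov chain, H(X_n | X_1) is non-decreasing in n.
K = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])

# Stationary distribution pi (left eigenvector of K for eigenvalue 1), so the chain is stationary.
evals, evecs = np.linalg.eig(K.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

def cond_entropy_given_x1(n):
    """H(X_n | X_1) in bits, with X_1 drawn from pi and n - 1 steps of the kernel."""
    Kn = np.linalg.matrix_power(K, n - 1)   # Kn[i, j] = P(X_n = j | X_1 = i); all entries > 0 here
    return float(-(pi[:, None] * Kn * np.log2(Kn)).sum())

print([round(cond_entropy_given_x1(n), 4) for n in range(2, 8)])
# The printed values are non-decreasing: entropic "time" shows up in the projected view.
```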
02:06:52.040 | That said, we're talking about consciousness here.
02:06:54.480 | How does that, you said cooperative, okay, competitive.
02:07:02.420 | It's just, I like chocolate.
02:07:04.140 | I'm sitting here, I have a brain, I'm wearing a suit.
02:07:08.500 | It sure as hell feels like I'm a self.
02:07:11.180 | What, am I tuning in, am I plugging into something?
02:07:16.800 | Am I a projection, a simple, trivial projection
02:07:20.700 | into space-time from some much larger organism
02:07:23.980 | that I can't possibly comprehend?
02:07:25.980 | How the hell, you're saying some,
02:07:28.020 | you're building up mathematical intuitions, fine, great,
02:07:31.500 | but I'm just, I'm having an existential crisis here
02:07:35.100 | and I'm gonna die soon.
02:07:36.700 | We'll all die pretty quickly,
02:07:37.940 | so I wanna figure out why chocolate's so delicious.
02:07:42.220 | So help me out here.
02:07:44.200 | So let's just keep sneaking up to this.
02:07:47.100 | - Right, so the whole technical thing was to say this.
02:07:52.100 | Even if the dynamics of consciousness is stationary
02:07:56.220 | so that there is no entropic time,
02:07:58.700 | any projection of it, any view of it
02:08:02.500 | will have the artifact of entropic time.
02:08:07.440 | That's a limited resource.
02:08:10.000 | Limited resources, so that the fundamental dynamics
02:08:14.220 | may have no limited resources whatsoever.
02:08:17.500 | Any projection will have, certainly time
02:08:19.980 | is a limited resource,
02:08:21.180 | and probably lots of other limited resources.
02:08:24.140 | Hence, we could get competition and evolution
02:08:28.140 | and nature red in tooth and claw
02:08:30.380 | as an artifact of a deeper system
02:08:32.700 | in which those aren't fundamental.
02:08:34.580 | And in fact, I take it as something
02:08:37.820 | that this theory must do at some point
02:08:41.020 | is to show how networks of conscious agents,
02:08:42.740 | even if they're not resource limited,
02:08:45.660 | give rise to evolution by natural selection
02:08:47.620 | via a projection.
02:08:49.340 | - Yeah, but you're saying,
02:08:51.100 | I'm trying to understand how the limited resources
02:08:53.260 | that give rise to, so first the thing gives rise to time,
02:08:57.940 | it gives rise to limited resource,
02:08:59.860 | it gives rise to evolution by natural selection,
02:09:03.140 | how that has to do with the fact
02:09:04.340 | that chocolate is delicious.
02:09:05.940 | - Well, it's not gonna do that directly,
02:09:08.060 | it's gonna get to this notion of self.
02:09:10.700 | - Oh, it's gonna give you--
02:09:12.140 | - The notion of self.
02:09:12.980 | - Evolution gives you the notion of self.
02:09:14.700 | - And also of a self separate from other selves.
02:09:18.420 | So the idea would be that--
02:09:20.180 | - That's competition, has life and death,
02:09:22.140 | all those kinds of things.
02:09:23.300 | - That's right, so it won't,
02:09:25.020 | I don't think, as I said,
02:09:27.220 | I don't think that I can tell you how the formless
02:09:29.780 | gives rise to the experience of chocolate.
02:09:32.380 | Right now my current theory says
02:09:33.660 | that's one of the miracles I'm assuming.
02:09:35.700 | - Yeah.
02:09:36.660 | - So my theory can't do it.
02:09:38.180 | And the reason my theory can't do it
02:09:39.260 | is 'cause Hoffman's brain can't do it right now.
02:09:41.260 | (laughing)
02:09:42.940 | But the notion of self, yes,
02:09:46.020 | the notion of self can be an artifact
02:09:49.100 | of the projection of it.
02:09:51.740 | So there's one conscious agent,
02:09:53.800 | 'cause anytime conscious agents interact,
02:09:56.820 | they form a new conscious agent.
02:09:57.900 | So there's one conscious agent.
02:09:59.460 | Any projection of that one conscious agent
02:10:02.700 | gives rise to time, even if there wasn't any time
02:10:06.220 | in that one conscious agent.
02:10:07.820 | And it gives rise, I want to,
02:10:10.140 | now I haven't proven this,
02:10:11.660 | so now this is me guessing where the theory's gonna go.
02:10:14.900 | I haven't done this, there's no paper on this yet.
02:10:16.940 | So now I'm speculating.
02:10:18.440 | My guess is I'll be able to show,
02:10:20.260 | or my brighter colleagues working with me
02:10:22.500 | will be able to show that we will get
02:10:25.340 | evolution of a natural selection,
02:10:26.700 | so notion of individual selves,
02:10:28.300 | individual physical objects and so forth
02:10:29.940 | coming out as a projection of this thing.
02:10:31.660 | And that the self, this then will be really interesting
02:10:35.900 | in terms of how it starts to interact
02:10:37.460 | with certain spiritual traditions, right?
02:10:41.020 | Where they will say that there is a notion of self
02:10:44.140 | that needs to be let go, which is this finite self
02:10:46.900 | that's competing with other selves
02:10:48.340 | to get more money and prestige and so forth.
02:10:52.060 | That self in some sense has to die,
02:10:54.400 | but there's a deeper self, which is the timeless being
02:10:58.700 | that precedes, not precludes,
02:11:04.380 | but precedes any particular conscious experiences,
02:11:08.100 | the ground of all experience.
02:11:10.420 | That there's that notion of a deep capital-S Self.
02:11:13.700 | But our little lowercase-s selves
02:11:18.020 | could be artifacts of projection.
02:11:20.840 | And it may be that what consciousness is doing
02:11:25.660 | in this framework is, right,
02:11:26.820 | it's projected itself down into a self
02:11:30.340 | that calls itself Don and a self that calls itself Lex.
02:11:33.940 | And through conversations like this,
02:11:36.060 | it's trying to find out about itself
02:11:37.940 | and eventually transcend the limits
02:11:41.220 | of the Don and Lex little icons that it's using
02:11:45.860 | and that little projection of itself.
02:11:49.060 | Through this conversation,
02:11:50.600 | somehow it's learning about itself.
02:11:53.680 | - So that thing dressed me up today
02:11:56.660 | in order to understand itself.
02:11:59.960 | - And in some sense, you and I are not separate
02:12:02.240 | from that thing and we're not separate from each other.
02:12:03.940 | - Yeah, well, I have to question
02:12:06.000 | the fashion choices on my end then.
02:12:08.480 | All right, so you mentioned you agree
02:12:10.920 | in terms of consciousness on a lot of things with Annaka.
02:12:15.480 | Is there somebody, friend or friendly foe
02:12:20.260 | that you disagree with in some nuanced, interesting way
02:12:25.260 | or some major way about consciousness,
02:12:28.580 | about these topics of reality that you return to often?
02:12:33.140 | It's like Christopher Hitchens with Rabbi David Wolpe
02:12:40.260 | have had interesting conversations through years
02:12:44.020 | that added to the complexity
02:12:46.240 | and the beauty of their friendship.
02:12:47.640 | Is there somebody like that that over the years
02:12:51.920 | has been a source of disagreement with you
02:12:54.520 | that's strengthened your ideas?
02:12:56.520 | - Hmm, my ideas have been really shaped by several things.
02:13:01.520 | One is the physicalist framework
02:13:06.560 | that my scientific colleagues, almost to a person,
02:13:10.420 | have adopted and that I adopted too.
02:13:12.780 | The reason I walked away from it was because
02:13:15.400 | it became clear that we couldn't start
02:13:20.360 | with unconscious ingredients and boot up consciousness.
02:13:22.840 | - Can you define physicalist in contrast to reductionist?
02:13:27.840 | - So a physicalist, I would say,
02:13:32.720 | is someone who takes space-time
02:13:33.920 | and the objects within space-time
02:13:35.880 | as ontologically fundamental.
02:13:38.860 | - Right, and then reductionist is saying
02:13:42.040 | the smaller, the more fundamental.
02:13:43.760 | - That's a methodological thing.
02:13:45.820 | That's saying within space-time,
02:13:47.940 | as you go to smaller and smaller scales in space,
02:13:50.540 | you get deeper and deeper laws,
02:13:51.940 | more and more fundamental laws.
02:13:54.380 | And the reduction of temperature to particle movement
02:13:59.380 | was an example of that, but I think that
02:14:02.160 | the reason that worked was almost an artifact
02:14:05.940 | of the nature of our interface.
02:14:07.980 | - That was for a long time, and your colleagues,
02:14:10.340 | including yourself, were physicalists,
02:14:12.080 | and now you broke away.
02:14:13.520 | - Broke away because I think you can't start
02:14:15.680 | with unconscious ingredients and boot up consciousness.
02:14:18.480 | And--
02:14:19.580 | - So even with Roger Penrose, where there's a gray area.
02:14:23.880 | - Right, and here's the challenge I would put
02:14:26.880 | to all of my friends and colleagues:
02:14:30.040 | give one specific conscious experience
02:14:34.120 | that you can boot up.
02:14:37.680 | So if you think that it's integrated information,
02:14:40.700 | and I've asked this of Giulio Tononi a couple times,
02:14:44.140 | back in the '90s and then just a couple years ago.
02:14:46.620 | I asked Giulio, okay, so great, integrated information.
02:14:49.020 | So we're all interested in explaining
02:14:51.540 | some specific conscious experiences,
02:14:53.180 | so pick one, the taste of chocolate.
02:14:56.860 | What is the integrated information,
02:14:58.660 | precise structure that we need for chocolate,
02:15:02.580 | and why does that structure have to be for chocolate,
02:15:05.820 | and why is it that it could not possibly be vanilla?
02:15:09.260 | Is there any, I asked him, is there any one specific
02:15:11.560 | conscious experience that you can account for?
02:15:13.500 | Because notice, they've set themselves the task
02:15:18.360 | of booting up conscious experiences from physical systems.
02:15:21.440 | That's the task they've set themselves.
02:15:22.920 | - But that doesn't mean they're,
02:15:24.520 | I understand your intuition,
02:15:26.840 | but that doesn't mean they're wrong
02:15:28.280 | just because they can't find a way to boot it up yet.
02:15:31.320 | - That's right, no, that doesn't mean that they're wrong,
02:15:32.680 | it just means that they haven't done it.
02:15:37.340 | I think it's principled, the reason is principled,
02:15:39.980 | but I'm happy that they're exploring it.
02:15:43.060 | But the fact is, the remarkable fact is
02:15:45.060 | there's not one theory, so integrated information theory,
02:15:48.400 | orchestrated collapse of microtubules,
02:15:51.480 | global workspace theory.
02:15:54.780 | - These are all theories of consciousness.
02:15:56.260 | - These are all theories of consciousness.
02:15:57.740 | There's not a single theory that can give you
02:16:01.020 | a specific conscious experience, that they say,
02:16:03.220 | here is the physical dynamics or the physical structure
02:16:06.680 | that must be the taste of chocolate
02:16:08.020 | or whatever one they want.
02:16:09.100 | - So you're saying it's impossible,
02:16:11.180 | they're saying it's just hard.
02:16:13.040 | - Yeah, my attitude is, okay, no one said
02:16:18.040 | you had to start with neurons or physical systems
02:16:20.540 | and boot up consciousness.
02:16:21.380 | - You guys are just taking that--
02:16:22.200 | - You chose that problem.
02:16:23.500 | So since you chose that problem,
02:16:26.080 | how much progress have you made?
02:16:27.340 | Well, when you've not been able to come up
02:16:30.060 | with a single specific conscious experience,
02:16:33.020 | and you've had these brilliant people
02:16:34.540 | working on it for decades now,
02:16:36.740 | that's not really good progress.
02:16:38.420 | - Let me ask you to play devil's advocate.
02:16:41.580 | Can you try to steel man, steel man meaning,
02:16:46.180 | argue the best possible case for reality,
02:16:49.880 | the opposite of your book title,
02:16:51.480 | or maybe just stick into consciousness,
02:16:54.400 | can you take the physicalist view?
02:16:57.020 | Can you steel man the physicalist view
02:16:58.820 | for a brief moment, playing devil's advocate too,
02:17:01.780 | or steel man the person you used to be?
02:17:05.700 | - Right, right. - Who was a physicalist.
02:17:07.220 | What's a good, like, saying that you might be wrong right now,
02:17:11.380 | what would be a convincing argument for that?
02:17:16.420 | - Well, I think the argument I would give
02:17:21.180 | and that I believed was, look,
02:17:23.860 | when you have very simple physical systems,
02:17:25.920 | like a piece of dirt,
02:17:27.560 | there's not much evidence of life for consciousness.
02:17:30.600 | It's only when you get really complicated physical systems
02:17:32.800 | like that have brains,
02:17:34.600 | and really the more complicated the brains,
02:17:36.400 | the more it looks like there's consciousness,
02:17:39.240 | and the more complicated that consciousness is.
02:17:41.200 | Surely that means that simple physical systems
02:17:45.220 | don't create much consciousness, or maybe not any,
02:17:49.000 | or maybe, if I'm a panpsychist,
02:17:51.000 | they create the most elementary kinds
02:17:52.320 | of simple conscious experiences.
02:17:54.120 | But you need more complicated physical systems
02:17:57.700 | to boot up, to create more complicated consciousnesses.
02:18:02.000 | I think that's the intuition
02:18:03.080 | that drives most of my colleagues.
02:18:04.920 | - And you're saying that this concept of complexity
02:18:09.000 | is ill-defined when you ground it to space-time.
02:18:13.200 | - I think it's well-defined
02:18:15.680 | within the framework of space-time, right?
02:18:17.440 | - No, it's ill-defined relative to what you need
02:18:21.100 | to actually understand consciousness
02:18:23.220 | because you're grounding complexity in space-time.
02:18:26.320 | - Oh, gotcha, right, right.
02:18:27.640 | Yeah, what I'm saying is,
02:18:30.440 | if it were true that space-time was fundamental,
02:18:35.560 | then I would have to agree
02:18:38.040 | that if there is such a thing as consciousness,
02:18:40.240 | given the data that we've got
02:18:41.400 | that complex brains have consciousness and dirt doesn't,
02:18:45.880 | that somehow it's the complexity of the dynamics
02:18:48.640 | or organization, the function of the physical system
02:18:52.120 | that somehow is creating the consciousness.
02:18:54.260 | So under those assumptions, yes.
02:18:58.260 | But when the physicists themselves are telling us
02:19:00.080 | that space-time is not fundamental,
02:19:02.620 | then I can understand.
02:19:03.700 | See, then the whole picture starts to come into focus.
02:19:07.020 | Why, my colleagues are brilliant, right?
02:19:10.020 | These are really smart people.
02:19:12.020 | I mean, Francis Crick worked on this
02:19:14.100 | for the last 20 years of his life.
02:19:15.800 | These are not stupid people.
02:19:17.860 | These are brilliant, brilliant people.
02:19:19.720 | The fact that we've come up
02:19:20.740 | with not a single specific conscious experience
02:19:22.980 | that we can explain, and no hope.
02:19:25.460 | There's no one that says, "I'm really close.
02:19:27.420 | "I'll have it for you in a year."
02:19:29.180 | No, there's just like, there's this fundamental gap.
02:19:33.280 | So much so that Steve Pinker, in one of his writings,
02:19:36.180 | says, look, he likes the global workspace theory,
02:19:39.860 | but he says the last dollop of the theory,
02:19:41.500 | in which there's something it's like to,
02:19:44.020 | he says we may have to just stipulate that as a brute fact.
02:19:48.740 | That's, I mean, Pinker is brilliant, right?
02:19:52.020 | He understands the state of play
02:19:54.980 | on this problem of the hard problem of consciousness,
02:19:57.340 | starting with physicalist assumptions,
02:20:00.860 | and then trying to boot up consciousness.
02:20:02.620 | So you've set yourself the problem.
02:20:04.480 | I'm starting with physical stuff that's not conscious.
02:20:07.860 | I'm trying to get the taste of chocolate out
02:20:11.340 | as maybe some kind of function of the dynamics of that.
02:20:14.860 | We've not been able to do that.
02:20:16.020 | And so Pinker is saying we may have to punt,
02:20:18.900 | we may have to just stipulate that last bit,
02:20:21.900 | he calls it the last dollop,
02:20:23.820 | and just stipulate it as a bare fact of nature
02:20:27.080 | that there is something it's like.
02:20:28.380 | Well, from my point of view as the,
02:20:30.540 | the whole point, the whole promise of the physicalist
02:20:33.020 | was we wouldn't have to stipulate.
02:20:34.500 | I was gonna start with the physical stuff
02:20:36.460 | and explain where the consciousness came from.
02:20:38.780 | If I'm going to stipulate consciousness,
02:20:40.700 | why don't I just stipulate consciousness
02:20:42.580 | and not stipulate all the physical stuff too?
02:20:45.020 | So I'm stipulating less.
02:20:46.860 | I'm saying, okay, I agree--
02:20:47.700 | - Which is the panpsychist perspective.
02:20:49.540 | - Well, it's actually what I call
02:20:51.140 | the conscious realist perspective.
02:20:52.620 | - Conscious realist.
02:20:53.780 | - Panpsychists are effectively dualists, right?
02:20:55.880 | They're saying there's physical stuff
02:20:57.180 | that really is fundamental, and then consciousness stuff.
02:21:00.020 | So I would go with Pinker and say,
02:21:01.260 | look, let's just stipulate the consciousness stuff.
02:21:04.100 | But I'm not gonna stipulate the physical stuff.
02:21:05.900 | I'm gonna actually now show how to boot up
02:21:08.100 | the physical stuff from just the consciousness stuff.
02:21:11.140 | So I'll stipulate less.
02:21:12.340 | - Is it possible, so if you stipulate less,
02:21:15.180 | is it possible for our limited brains to visualize reality
02:21:20.180 | as we delve deeper and deeper and deeper?
02:21:25.660 | Is it possible to visualize somehow?
02:21:27.940 | With the tools of math, with the tools of computers,
02:21:31.020 | with the tools of our mind, are we hopelessly lost?
02:21:34.620 | You said there's ways to intuit
02:21:39.320 | what's true using mathematics and probability
02:21:44.320 | and sort of Markovian dynamics, all that kind of stuff,
02:21:49.720 | but that's not visualizing.
02:21:51.920 | That's what's a kind of building intuition.
02:21:55.880 | But is it possible to visualize in the way we visualize
02:21:58.480 | so nicely in space-time in four dimensions,
02:22:02.320 | in three dimensions, sorry?
02:22:04.240 | Well, we really are looking through
02:22:06.560 | a two-dimensional screen into what we intuit
02:22:10.360 | to be a three-dimensional world
02:22:12.720 | and also inferring dynamic stuff, making it 4D.
02:22:17.320 | Anyway, is it possible to visualize some pretty pictures
02:22:20.540 | that give us a deeper sense of the truth of reality?
02:22:25.540 | - I think that we will incrementally be able to do that.
02:22:29.640 | I think that, for example, the picture that we have
02:22:33.680 | of electrons and photons interacting and scattering,
02:22:38.440 | it may have not been possible
02:22:43.360 | until Faraday did all of his experiments
02:22:45.320 | and then Maxwell wrote down his equations.
02:22:47.280 | And we were then sort of forced by his equations
02:22:50.160 | to think in a new way.
02:22:51.940 | And then when Planck in 1900,
02:22:55.920 | desperate to try to solve the problem
02:23:00.920 | of black-body radiation,
02:23:02.240 | what they call the ultraviolet catastrophe
02:23:03.840 | where Newtonian physics was predicting infinite energies,
02:23:08.320 | where there weren't infinite energies
02:23:09.800 | in black-body radiation.
02:23:11.460 | And he, in desperation, proposed packets of energy.
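For reference, the divergence being described here is the standard ultraviolet catastrophe: the classical Rayleigh-Jeans spectrum grows without bound at high frequency, and Planck's packets of energy E = hv tame it. A minimal sketch of the two textbook formulas, added for orientation and not specific to this conversation:

```latex
% Classical (Rayleigh-Jeans) spectral radiance: integrating over all
% frequencies diverges -- the "infinite energies" of the ultraviolet catastrophe.
B^{\text{classical}}_\nu(T) = \frac{2 \nu^2 k_B T}{c^2},
\qquad
\int_0^\infty B^{\text{classical}}_\nu(T)\, d\nu = \infty .

% Planck's fix: energy comes in packets E = h\nu, giving a finite spectrum.
B^{\text{Planck}}_\nu(T) = \frac{2 h \nu^3}{c^2}\,
\frac{1}{e^{h\nu / k_B T} - 1}.
```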
02:23:16.460 | Then once you've done that,
02:23:23.060 | and then you have an Einstein come along five years later
02:23:25.280 | and show how that explains the photoelectric effect.
02:23:29.360 | And then eventually in 1926, you get quantum theory.
02:23:33.560 | And then you get this whole new way of thinking
02:23:35.320 | that was, from the Newtonian point of view,
02:23:38.360 | completely contradictory and counterintuitive, certainly.
02:23:43.360 | And maybe if Gisin is right, not contradictory.
02:23:47.680 | Maybe if you use intuitionist math,
02:23:49.880 | they're not contradictory, but still.
02:23:52.400 | Certainly you wouldn't have gone there.
02:23:54.440 | And so here's a case where the experiments
02:23:57.740 | and then a desperate mathematical move,
02:24:00.600 | sort of we use those as a flashlight into the deep fog.
02:24:05.900 | And so that science may be the flashlight into the deep fog.
02:24:12.200 | - I wonder if it's still possible to visualize,
02:24:16.220 | we talk about consciousness from a self-perspective,
02:24:21.480 | experience it, hold that idea in our mind,
02:24:25.700 | where you can experience things directly.
02:24:27.820 | We've evolved to experience things in this 3D world.
02:24:32.260 | And that's a very rich experience.
02:24:35.960 | When you're thinking mathematically,
02:24:38.040 | you still, in the end of the day,
02:24:42.380 | have to project it down to a low dimensional space
02:24:47.380 | to make conclusions.
02:24:49.680 | Your conclusions will be a number,
02:24:51.920 | or a line, or a plot, or a visual.
02:24:56.060 | So I wonder how we can really touch some deep truth
02:25:00.140 | in a subjective way, like experience it,
02:25:03.740 | really feel the beauty of it,
02:25:05.900 | in the way that humans feel beauty.
02:25:08.540 | - Right, are we screwed?
02:25:10.960 | I don't think we're screwed.
02:25:11.800 | I think that we get little hints of it
02:25:14.420 | from psychedelic drugs and so forth.
02:25:17.540 | We get hints that there are certain interventions
02:25:19.680 | that we can take on our interface.
02:25:21.420 | I apply this chemical,
02:25:22.700 | which is just some element of my interface,
02:25:25.940 | to this other, to a brain, I ingest it.
02:25:29.720 | And all of a sudden, I seem like I've opened new portals
02:25:33.440 | into conscious experiences.
02:25:36.340 | Well, that's very, very suggestive.
02:25:38.320 | That's like the black body radiation
02:25:40.980 | doing something that we didn't expect, right?
02:25:42.700 | It doesn't go to infinity
02:25:44.040 | when we thought it was gonna go to infinity,
02:25:45.420 | and we're forced to propose these quanta.
02:25:49.220 | So once we have a theory of conscious agents,
02:25:52.220 | and its projection into space,
02:25:55.100 | and I should say, I should sketch
02:25:56.700 | what I think that projection is.
02:25:58.300 | But then I think we can then start
02:26:01.740 | to ask specific questions.
02:26:03.500 | When you're taking DMT,
02:26:06.820 | or you're taking LSD or something like that,
02:26:09.720 | now that we have this deep model,
02:26:12.940 | we've reverse engineered space and time
02:26:14.580 | and physical particles,
02:26:16.140 | we've pulled them back to this theory of conscious agents.
02:26:18.540 | Now we can ask ourselves in this idealized future,
02:26:22.020 | what are we doing to conscious agents
02:26:25.100 | when we apply 5-MeO DMT?
02:26:28.220 | What are we doing?
02:26:29.860 | Are we opening a new portal?
02:26:31.420 | So when I say that, I mean,
02:26:33.720 | I have a portal into consciousness
02:26:35.940 | that I call the body of Lex Friedman that I'm creating.
02:26:39.200 | And it's a genuine portal, not perfect,
02:26:41.820 | but it's a genuine portal.
02:26:42.960 | I'm definitely communicating with your consciousness.
02:26:45.980 | And we know that we have one technology
02:26:49.340 | for building new portals.
02:26:51.420 | We know one technology, and that is having kids.
02:26:53.820 | Having kids is how we build new portals into consciousness.
02:26:59.180 | It takes a long time.
02:27:01.380 | - Can you elaborate on that?
02:27:01.380 | Oh, oh, oh, you mean like-
02:27:03.980 | - Your son and your daughter didn't exist.
02:27:07.220 | That was a portal.
02:27:08.300 | You're having contact with consciousness
02:27:10.460 | that you never would have had before.
02:27:12.380 | But now you've got a son or a daughter.
02:27:14.740 | You went through this physical process.
02:27:16.940 | They were born, then you, there was all the-
02:27:19.980 | - But is that portal yours?
02:27:22.780 | So when you have kids, are you creating new portals
02:27:25.340 | that are completely distinct from the portals
02:27:27.340 | that you've created with other consciousness?
02:27:29.340 | Like, can you elaborate on that?
02:27:31.340 | To which degree are the consciousness of your kids
02:27:34.540 | a part of you?
02:27:37.380 | - Well, so every person that I see,
02:27:39.740 | that symbol that I see, the body that I see,
02:27:43.140 | is a portal, potentially,
02:27:45.940 | for me to interact with the consciousness.
02:27:48.260 | - Yeah.
02:27:49.100 | - And each consciousness has a unique character.
02:27:53.620 | We call it a personality, and so forth.
02:27:56.260 | So with each new kid that's born,
02:27:59.900 | we come in contact with a personality
02:28:01.980 | that we've never seen before,
02:28:03.700 | and a version of consciousness
02:28:05.700 | that we've never seen before.
02:28:06.980 | At a deeper level, as I said,
02:28:08.660 | the theory says there's one agent.
02:28:10.500 | So this is a different projection
02:28:12.780 | of that one agent.
02:28:14.660 | So that's what I mean by a portal is,
02:28:18.900 | within my own interface, my own projection,
02:28:22.980 | can I see other projections
02:28:26.180 | of that one consciousness?
02:28:29.780 | So can I get portals in that sense?
02:28:32.100 | So I think we will get a theory of that,
02:28:36.460 | that we will get a theory of portals,
02:28:38.220 | and then we can ask how the psychedelics are acting.
02:28:41.460 | Are they actually creating new portals, or not?
02:28:44.140 | If they're not, we should, nevertheless,
02:28:46.860 | then understand how we could create a new portal.
02:28:50.220 | Maybe we have to just study what happens
02:28:51.860 | when we have kids.
02:28:53.660 | We know that that technology creates new portals.
02:28:57.380 | So we have to reverse engineer that,
02:28:58.900 | and then say, okay, could we somehow
02:29:01.380 | create new portals de novo?
02:29:06.340 | With that view, once we understand--
02:29:07.180 | - With something like brain-computer interfaces,
02:29:10.860 | for example. - Yeah, or maybe just
02:29:11.900 | a chemical or something.
02:29:12.740 | It's probably more complicated than a chemical.
02:29:14.580 | That's why I think that the psychedelics may,
02:29:17.620 | because they might be affecting this portal
02:29:19.580 | in certain ways that it turns it around and opens up.
02:29:22.860 | In other words, maybe once we understand
02:29:24.580 | what this thing is, a portal, your body is a portal,
02:29:27.380 | and understand all of its complexities,
02:29:28.620 | maybe we'll realize that that portal can be shifted
02:29:31.500 | to different parts of the deeper consciousness,
02:29:34.340 | and give new windows on it.
02:29:36.420 | And so in that way, maybe, yes,
02:29:38.860 | psychedelics could open up new portals
02:29:41.180 | in the sense that they're taking something
02:29:42.300 | that's already a complex portal and just tweaking it a bit.
02:47:45.980 | - Well, but there's a very powerful difference
02:47:48.620 | between creating and morphing.
02:29:50.540 | - Right, right, tweaking versus creating, I agree.
02:29:53.900 | - But maybe it gives you intuition
02:29:55.780 | to at least the full space of the kinds of things
02:29:58.840 | that this particular system is capable of.
02:30:01.060 | I mean, the idea, the idea that consciousness
02:30:04.300 | creates brains, I mean, that breaks my brain,
02:30:07.460 | because, you know, I guess I'm still a physicalist
02:30:12.220 | in that sense, 'cause you could,
02:30:14.180 | it's just much easier to intuit the world.
02:30:19.100 | It's very, it's practical to think,
02:30:21.740 | there's a neural network, and what are the different ways
02:30:25.660 | fascinating capabilities can emerge
02:30:30.660 | from this neural network?
02:30:32.300 | - I agree, it's easier.
02:30:34.380 | And so you start to, and then present to yourself
02:30:37.700 | the problem of, okay, well, how does consciousness arise?
02:30:40.820 | How does intelligence arise?
02:30:42.900 | How does emotion arise?
02:30:46.540 | How does memory arise in the,
02:30:49.780 | how do we filter within this system
02:30:52.540 | all the incoming sensory information
02:30:54.940 | we're able to allocate attention
02:30:57.620 | in different, interesting ways?
02:30:58.740 | How do all those mechanisms arise?
02:31:02.020 | To say that there's other fundamental things
02:31:04.100 | we don't understand outside of space-time
02:31:06.540 | that are actually core to how this whole thing works
02:31:10.020 | is a bit paralyzing, because it's like,
02:31:14.060 | oh, we're not 10% done, we're like 0.001% done,
02:31:19.060 | is the immediate feeling.
02:31:22.700 | - I certainly understand that.
02:31:24.380 | My attitude about it is, if you look at the young physicists
02:31:29.740 | who are searching for these structures beyond space-time,
02:31:32.580 | like the amplituhedron and so forth, they're having a ball.
02:31:37.320 | Space-time, that's what the old folks did.
02:31:41.340 | That's what the older generation did.
02:31:44.340 | We're doing something that really is fun and new,
02:31:48.540 | and they're having a blast,
02:31:51.900 | and they're finding all these new structures.
02:31:53.860 | So I think that we're going to succeed
02:31:58.580 | in getting a new, deeper theory.
02:32:03.500 | I can just say what I'm hoping
02:32:04.540 | with the theory that I'm working on.
02:32:06.540 | I'm hoping to show that I could have
02:32:09.060 | this timeless dynamics of consciousness, no entropic time.
02:32:12.740 | I take a projection, and I show how this timeless dynamics
02:32:16.240 | looks like the Big Bang,
02:32:17.760 | and the entire evolution of space-time.
02:32:21.860 | In other words, I see how my whole space-time interface.
02:32:25.140 | - So not just the projection,
02:32:27.980 | doesn't just look like space-time.
02:32:29.780 | You can explain the whole,
02:32:32.660 | from the origin of the universe.
02:32:34.420 | - That's what we have to do,
02:32:35.740 | and that's what the physicists understand.
02:32:37.060 | When they go beyond space-time to the amplituhedron
02:32:39.220 | and the cosmological polytope,
02:32:40.980 | they ultimately know that they have to get back
02:32:43.180 | the Big Bang story and the whole evolution,
02:32:46.580 | that whole story where there were no living things.
02:32:49.380 | There was just a point, and then the explosion,
02:32:53.660 | and then just particles at high energy,
02:32:55.580 | and then eventually the cooling down
02:32:57.020 | and the differentiation, and finally matter condenses,
02:33:01.780 | and then life, and then consciousness.
02:33:03.520 | That whole story has to come out of something
02:33:05.780 | that's deeper and without time,
02:33:07.860 | and that's what we're up to.
02:33:09.660 | So the whole story that we've been telling ourselves
02:33:14.480 | about Big Bang and how brains evolve in consciousness
02:33:17.140 | will come out of a much deeper theory.
02:33:19.100 | For someone like me, it's a lot.
02:33:23.340 | I mean, but for the younger generation,
02:33:26.940 | this is like, oh, wow, all the low cherries aren't picked.
02:33:30.740 | This is really good stuff.
02:33:31.860 | This is really new, fundamental stuff that we can do.
02:33:35.340 | So I can't wait to read the papers
02:33:38.220 | of the younger generation, and I wanna see them.
02:33:41.900 | - Kids these days with their non-space-time assumptions.
02:33:48.820 | It's just interesting looking at the philosophical tradition
02:33:51.660 | of these difficult ideas you struggle with.
02:33:53.780 | If you look at somebody like Immanuel Kant,
02:33:57.660 | what are some interesting agreements and disagreements
02:34:00.300 | you have with a guy about the nature of reality?
02:34:04.900 | - So there's a lot in agreement.
02:34:06.900 | So Kant was an idealist, transcendental idealist,
02:34:10.980 | and he basically had the idea that
02:34:15.540 | we don't see nature as it is.
02:34:19.340 | We impose a structure on nature.
02:34:21.160 | And so in some sense, I'm saying something similar.
02:34:26.160 | I'm saying that, by the way, I don't call myself an idealist.
02:34:31.680 | I call myself a conscious realist
02:34:33.520 | because idealism has a long history.
02:34:35.640 | A lot of different ideas come under idealism,
02:34:38.440 | and there's a lot of debates and so forth.
02:34:40.560 | It tends to be identified with, in many cases,
02:34:44.560 | anti-science and anti-realism.
02:34:46.680 | And I don't want either connection with my ideas.
02:34:49.400 | And so I just called mine conscious realism
02:34:51.560 | with an emphasis on realism and not anti-realism.
02:34:55.500 | But one place where I would, of course, disagree with Kant
02:34:59.680 | was that he thought that Euclidean space-time was a priori.
02:35:04.680 | We just know that that's false.
02:35:07.260 | So he went too far on that.
02:35:11.920 | But in general, the idea that we don't start with space-time,
02:35:16.240 | that space and time is, in some sense,
02:35:17.800 | forms of our perceptions, yes, absolutely.
02:35:20.440 | And I would say that there's a lot in common
02:35:25.680 | with Berkeley in that regard.
02:35:27.320 | There's a lot of ingenious arguments in Berkeley.
02:35:31.760 | Leibniz, in his monadology, understood very clearly
02:35:36.760 | that the hard problem was not solvable.
02:35:39.160 | He posed the hard problem and basically dismissed it,
02:35:42.200 | and just said, "You can't do this."
02:35:44.040 | And so if he came here and saw where we are,
02:35:47.360 | he'd say, "Look, guys, I told you this 300 years ago."
02:35:50.360 | And he had his monadology.
02:35:51.560 | He was trying to do something like,
02:35:54.200 | it's different from what I'm doing,
02:35:57.220 | but he had these things that were not in space and time,
02:35:59.840 | these monads.
02:36:00.680 | He was trying to build something.
02:36:02.280 | I'm trying to build a theory of conscious agents.
02:36:05.580 | My guess is that if he came here,
02:36:08.480 | I could just, if he saw what I was doing,
02:36:10.480 | he would say, he would understand it
02:36:13.080 | and immediately take off with it
02:36:15.120 | and go places that I couldn't.
02:36:17.040 | He would have no problem with it.
02:36:19.760 | - Right, there would be overlap of the spirit of the ideas.
02:36:23.560 | - Absolutely. - It would be
02:36:24.400 | totally overlapping.
02:36:25.480 | - But his genius would then just run with it
02:36:26.880 | far faster than I could.
02:36:28.080 | - I love the humility here.
02:36:29.840 | So let me ask you about sort of practical implications
02:36:32.620 | of your ideas to our world, our complicated world.
02:36:36.440 | When you look at the big questions of humanity,
02:36:38.800 | of hate, war, what else is there?
02:36:46.800 | Evil, maybe there's the positive aspects of that,
02:36:51.480 | of meaning, of love.
02:36:53.340 | What is the fact that reality is an illusion?
02:36:59.120 | Perceived, what is the conscious realism
02:37:05.320 | when applied to daily life?
02:37:07.500 | What kind of impact does it have?
02:37:09.300 | - A lot, and it's sort of scary.
02:37:15.460 | We all know that life is ephemeral
02:37:17.120 | and spiritual traditions have said,
02:37:20.300 | wake up to the fact that anything that you do here
02:37:22.640 | is going to disappear.
02:37:24.480 | But it's even more ephemeral than perhaps we've thought.
02:37:27.960 | I see this bottle because I create it right now.
02:37:30.880 | As soon as I look away,
02:37:33.680 | that data structure has been garbage collected.
02:37:36.080 | That bottle, I have to recreate it every time I look.
02:37:38.880 | So I spend all my money and I buy this fancy car.
02:37:41.320 | That car, I have to keep recreating it
02:37:44.360 | every time I look at it.
02:37:45.200 | It's that ephemeral.
02:37:46.560 | So all the things that we invest ourselves in,
02:37:50.200 | we fight over, we kill each other over,
02:37:52.320 | we have wars over.
02:37:53.440 | These are all, it's just like people
02:37:56.560 | in a virtual reality simulation.
02:37:58.400 | And there's this Porsche and we all see the Porsche.
02:38:03.800 | Well, that Porsche exists when I look at it.
02:38:07.080 | I turn my headset and I look at it.
02:38:09.080 | And then if Joe turns his headset the right way,
02:38:12.080 | he'll see his Porsche.
02:38:13.840 | It's not even the same Porsche that I see.
02:38:15.280 | He's creating his own Porsche.
02:38:16.780 | So these things are exceedingly ephemeral.
02:38:20.480 | And now, just imagine saying that that's my Porsche.
02:38:25.480 | Well, you can agree to say that it's your Porsche,
02:38:29.320 | but really, the Porsche only exists as long as you look.
02:38:32.840 | So this all of a sudden,
02:38:34.600 | what the spiritual traditions have been saying
02:38:36.560 | for a long, long time,
02:38:38.200 | this gets cashed out in mathematically precise science.
02:38:41.360 | It's saying ephemeral, yes, in fact,
02:38:43.920 | it lasts for a few milliseconds,
02:38:45.840 | a few hundred milliseconds while you look at it,
02:38:47.440 | and then it's gone.
02:38:48.720 | So the whole idea,
02:38:51.560 | why are we fighting?
02:38:54.200 | Why do we hate?
02:38:55.060 | We fight over possessions
02:38:59.800 | because we think that we're small little objects
02:39:05.160 | inside this pre-existing space-time.
02:39:07.780 | We assume that that mansion and that car
02:39:11.200 | exists independent of us,
02:39:12.940 | and that somehow we, these little things,
02:39:16.040 | can have our sense of self and importance enhanced
02:39:20.000 | by having that special car or that special house
02:39:21.800 | or that special person.
02:39:23.700 | When in fact, it's just the opposite.
02:39:26.960 | You create that mansion every time you look.
02:39:29.160 | You're something far deeper than that mansion.
02:39:32.960 | You're the entity which can create that mansion on the fly.
02:39:37.100 | And there's nothing to the mansion
02:39:39.820 | except what you create in this moment.
02:39:41.620 | So all of a sudden, when you take this point of view,
02:39:45.560 | it has all sorts of implications
02:39:49.540 | for how we interact with each other,
02:39:51.580 | how we treat each other.
02:39:53.320 | And again, a lot of things
02:39:58.700 | that spiritual traditions have said,
02:40:00.660 | it's a mixed bag.
02:40:02.580 | Spiritual traditions are a mixed bag.
02:40:03.700 | So let me just be right up front about that.
02:40:05.060 | I'm not promoting any particular,
02:40:06.740 | but they do have some insights.
02:40:08.380 | - Yeah, they have wisdom.
02:40:09.500 | - They have certain wisdom.
02:40:11.100 | I can point to nonsense, I won't go into it,
02:40:12.780 | but I can also point to lots of nonsense.
02:40:14.460 | So the issue is to then to look for the key insights.
02:40:19.300 | And I think they have a lot of insights
02:40:21.600 | about the ephemeral nature of objects in space and time
02:40:25.540 | and not being attached to them, including our own bodies,
02:40:28.780 | and reversing that I'm not this little thing,
02:40:31.640 | a little consciousness trapped in the body.
02:40:33.660 | And the consciousness itself is only a product of the body.
02:40:36.060 | So when the body dies, the consciousness disappears.
02:40:38.900 | It turns completely around.
02:40:40.680 | The consciousness is fundamental.
02:40:42.380 | The body, my hand exists right now
02:40:46.420 | because I'm looking at it.
02:40:47.940 | My hand is gone.
02:40:48.820 | I have no hand.
02:40:50.660 | I have no brain.
02:40:52.140 | I have no heart.
02:40:53.220 | If you looked, you'd see a heart.
02:40:55.400 | Whatever I am is this really complicated thing
02:41:00.100 | in consciousness.
02:41:01.260 | That's what I am.
02:41:02.440 | All the stuff that I thought I was
02:41:05.660 | is something that I create on the fly and delete.
02:41:07.540 | So this is completely a radical restructuring
02:41:11.940 | of how we think about possessions,
02:41:15.020 | about identity, about survival of death, and so forth.
02:41:20.020 | This is completely transformative.
02:41:22.760 | But the nice thing is that this whole approach
02:41:24.900 | of conscious agents, unlike the spiritual traditions,
02:41:27.680 | which have said in some cases similar things,
02:41:30.680 | they've said it imprecisely.
02:41:32.080 | This is mathematics.
02:41:34.920 | We can actually now begin to state precisely,
02:41:38.260 | here's the mathematical model of consciousness,
02:41:40.180 | conscious agents.
02:41:41.140 | Here's how it maps onto space-time,
02:41:42.420 | which I should sketch really briefly.
02:41:44.620 | And here's why things are ephemeral.
02:41:50.360 | And here's why you shouldn't be worried
02:41:52.680 | about the ephemeral nature of things,
02:41:54.040 | because you're not a little tiny entity
02:41:57.300 | inside space and time.
02:41:59.180 | Quite the opposite, you're the author of space and time.
02:42:02.240 | The I and the am and the I am is all kind of emerging
02:42:06.100 | through this whole process of evolution and so on
02:42:09.040 | that's just surface waves,
02:42:12.360 | and there's a much deeper ocean
02:42:13.700 | that we're trying to figure out here.
02:42:15.280 | So how does, you said,
02:42:16.900 | you said some of the stuff you're thinking about
02:42:18.820 | maps to space-time.
02:42:19.840 | How does it map to space-time?
02:42:21.180 | - So just a very, very high level, and I'll keep it brief.
02:42:25.220 | The structures that the physicists are finding,
02:42:28.140 | like the Amplituhedron,
02:42:29.940 | it turns out they're just static structure.
02:42:32.100 | They're polytopes.
02:42:33.120 | But they, remarkably, most of the information in them
02:42:37.860 | is contained in permutation matrices.
02:42:40.440 | So it's a matrix, like an n by n matrix
02:42:45.720 | that just has zeros and ones.
02:42:47.180 | That contains almost all of the information.
02:42:51.940 | And you can, they have these plabic graphs and so forth
02:42:54.680 | that they use to boot up the scattering.
02:42:56.780 | You can compute those scattering amplitudes
02:42:59.020 | almost entirely from these permutation matrices.
02:43:02.100 | So that's just, now from my point of view,
02:43:07.200 | I have this conscious agent dynamics.
02:43:09.340 | It turns out that the stationary dynamics
02:43:12.020 | that I was talking about,
02:43:13.700 | where the entropy is increasing,
02:43:15.900 | all the stationary dynamics are sketched out
02:43:19.700 | by permutation matrices.
02:43:22.400 | So if you, there's the so-called Birkhoff polytope.
02:43:27.700 | All the vertices of this polytope,
02:43:29.780 | all the points are permutation matrices.
02:43:33.260 | All the internal points are Markovian kernels
02:43:37.460 | that have the uniform measure as a stationary measure.
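The facts being leaned on here are standard: the Birkhoff polytope is the set of n-by-n doubly stochastic matrices, its vertices are exactly the n! permutation matrices (the Birkhoff-von Neumann theorem), and any doubly stochastic Markov kernel leaves the uniform distribution stationary. Below is a minimal numerical sketch of those standard facts only, not of the conscious-agent dynamics itself; the matrix size and number of vertices are arbitrary choices for illustration.

```python
# Sketch: vertices of the Birkhoff polytope are permutation matrices,
# interior points are doubly stochastic kernels, and the uniform measure
# is stationary for any such kernel.
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Vertices: permutation matrices built from random permutations.
perm_matrices = [np.eye(n)[rng.permutation(n)] for _ in range(6)]

# An interior point: a random convex combination of those vertices.
weights = rng.random(len(perm_matrices))
weights /= weights.sum()
K = sum(w * P for w, P in zip(weights, perm_matrices))

# Doubly stochastic: every row and every column sums to 1.
assert np.allclose(K.sum(axis=0), 1.0)
assert np.allclose(K.sum(axis=1), 1.0)

# The uniform measure is stationary for any doubly stochastic kernel: u @ K == u.
u = np.full(n, 1.0 / n)
assert np.allclose(u @ K, u)
print("uniform measure is stationary under K:", u @ K)
```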
02:43:42.620 | - I need to intuit a little better
02:43:44.340 | what the heck you're talking about.
02:43:46.140 | But so basically, there's some complicated thing going on
02:43:51.140 | with the network of conscious agents,
02:43:54.580 | and that's mappable to this,
02:43:56.580 | you're saying a two-dimensional matrix
02:43:58.540 | that scattering has to do with what,
02:44:02.060 | with our perception, like that's like photon stuff.
02:44:05.300 | I mean, I don't know if it's useful
02:44:06.500 | to sort of dig into detail.
02:44:09.660 | - I'll do just a high-level thing.
02:44:11.020 | - Yes.
02:44:11.860 | - So the high level is the long-term behavior
02:44:15.540 | of the conscious agent dynamics.
02:44:17.000 | So that's the projection.
02:44:17.940 | Just looking at the long-term behavior,
02:44:20.660 | I'm hoping will give rise to the amplituhedron.
02:44:23.820 | The amplituhedron then gives rise to space-time.
02:44:27.700 | So then I can just use their link
02:44:29.860 | to go all the way from consciousness
02:44:31.140 | through its asymptotics through the amplituhedron
02:44:34.260 | into space-time and get the map
02:44:36.020 | all the way into our interface.
02:44:37.820 | - And that's why you mentioned the permutation matrix,
02:44:39.660 | 'cause it gives you a nice thing to try to generate.
02:44:42.420 | - That's right, it's the connection with the amplituhedron.
02:44:44.500 | The permutation matrices are the core of the amplituhedron,
02:44:47.420 | and it turns out they're the core
02:44:49.120 | of the asymptotic description of the conscious agents.
02:44:52.300 | - So not to sort of bring up the idea of a creator,
02:44:54.820 | but I like, first of all, I like video games,
02:44:57.820 | and you mentioned this kind of simulation idea.
02:45:01.140 | First of all, do you think of it as an interesting idea,
02:45:03.100 | this thought experiment that we live in a simulation?
02:45:05.860 | And in general, do you think we live in a simulation?
02:45:10.360 | - So Nick Bostrom's idea about the simulation
02:45:14.300 | is typically couched in a physicalist framework.
02:45:17.880 | - Yes.
02:45:18.800 | - So there is the bottom level.
02:45:20.980 | There's some programmer in a physical space time,
02:45:24.260 | and they have a computer that they've programmed
02:45:25.700 | really cleverly, where they've created conscious entities.
02:45:29.680 | So you have the hard problem of consciousness, right?
02:45:32.660 | The standard hard problem,
02:45:33.540 | how could a computer simulation create a consciousness?
02:45:36.640 | Which isn't explained by that simulation theory.
02:45:39.080 | But then the idea is that the next level,
02:45:41.660 | the entities that are created in the first level simulation
02:45:46.660 | then can write their own simulations,
02:45:48.060 | and you get this nesting.
02:45:50.420 | So the idea that this is a simulation is fine,
02:45:55.420 | but the idea that it starts with a physical space,
02:45:58.580 | I think, isn't fine.
02:46:00.060 | - Well, there's different properties here,
02:46:01.820 | the partial rendering.
02:46:03.700 | I mean, to me, that's the interesting idea
02:46:07.180 | is not whether the entirety of the universe is simulated,
02:46:11.300 | but how efficiently can you create interfaces
02:46:17.900 | that are convincing to all other entities
02:46:21.780 | that can appreciate such interfaces?
02:46:24.060 | How little does it take?
02:46:25.900 | 'Cause you said partial rendering,
02:46:27.900 | or temporal, ephemeral rendering of stuff.
02:46:31.780 | Only render the tree falling in the forest
02:46:33.900 | when there's somebody there to see it.
02:46:36.340 | It's interesting to think,
02:46:38.260 | how can you do that super efficiently
02:46:39.780 | without having to render everything?
02:46:41.380 | And that, to me, is one perspective on the simulation,
02:46:44.440 | just like it is with video games,
02:46:46.740 | where a video game doesn't have to render
02:46:48.260 | every single thing.
02:46:49.500 | It's just the thing that the observer is looking at.
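A toy sketch of the partial-rendering idea just described, in the spirit of the video-game analogy: build the detailed object only while an observer looks at it, and let it disappear afterwards. The class and method names below are invented for illustration and are not anyone's actual model.

```python
# Lazy, on-demand "rendering": only compact seeds persist; detailed
# objects exist only for the duration of a glance.
class LazyWorld:
    def __init__(self, seeds):
        self._seeds = seeds  # only compact descriptions persist

    def look_at(self, name):
        # The detailed object is created only when observed.
        seed = self._seeds[name]
        return {"object": name, "detail": f"rendered on demand from seed {seed}"}


world = LazyWorld({"tree": 42, "porsche": 7})
glance = world.look_at("porsche")   # the Porsche exists right now...
print(glance)
del glance                          # ...and is gone as soon as we look away
```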
02:46:52.300 | - Right.
02:46:53.140 | There is actually, that's a very nice question,
02:46:55.580 | and there's whole groups of researchers
02:46:58.060 | that are actually studying in virtual reality,
02:47:00.660 | what is the sort of minimal requirements on the system?
02:47:05.660 | How does it have to operate
02:47:07.760 | to give you an immersion experience,
02:47:09.820 | to give you the feeling that you have a body,
02:47:12.600 | to get you to take it as real?
02:47:14.300 | And there's actually a lot of really good work
02:47:15.980 | on that right now,
02:47:16.820 | and it turns out it doesn't take that much.
02:47:18.180 | You do need to get the perception action loop tight,
02:47:21.580 | and you have to give them the perceptions
02:47:25.100 | that they're expecting, if you want them to.
02:47:26.940 | But if you, you can lead them along.
02:47:30.080 | If you give them perceptions
02:47:31.140 | that are close to what they're expecting,
02:47:32.320 | you can then maybe move their reality around a bit.
02:47:35.460 | - Yeah, it's a tricky engineering problem,
02:47:36.940 | especially when you're trying to create a product
02:47:39.340 | that costs little, but that's,
02:47:41.260 | it feels like an engineering problem,
02:47:42.980 | not a deeply scientific problem.
02:47:45.500 | Or meaning, obviously, it's a scientific problem,
02:47:47.660 | but as a scientific problem,
02:47:49.060 | it's not that difficult to trick us descendants of apes.
02:47:53.300 | - But here's a case for just us,
02:47:55.660 | in our own, if this is a virtual reality
02:47:57.500 | that we're experiencing right now.
02:47:58.540 | So here's something you can try for yourself.
02:48:01.680 | If you just close your eyes
02:48:03.020 | and look at your experience in front of you,
02:48:08.740 | be aware of your experience in front of you,
02:48:09.900 | what you experience is just like a mottled dark gray,
02:48:14.020 | but there's all sort of, there's some dynamics to it,
02:48:15.940 | but it's just dark gray.
02:48:17.620 | But now I ask you, instead of having your attention forward,
02:48:21.740 | put your attention backward.
02:48:24.340 | What is it like behind you with your eyes closed?
02:48:26.880 | And there, it's like nothing.
02:48:32.600 | It's real.
02:48:35.380 | So what is going on here?
02:48:37.860 | What am I experiencing back there?
02:48:43.740 | Right?
02:48:44.580 | - I don't know if it's nothing.
02:48:47.140 | It's like, I guess it's the absence of,
02:48:49.780 | it's not even like darkness or something.
02:48:51.820 | - It's not even darkness.
02:48:54.180 | There's no qualia to it.
02:48:58.660 | And yet there is a sense of being.
02:49:01.140 | And that's the interesting thing.
02:49:02.100 | There's a sense of being back.
02:49:03.580 | So I close my, I put my attention forward,
02:49:06.780 | I have the qualia of a gray mottled thing.
02:49:08.900 | But when I put my attention backward,
02:49:10.380 | there's no qualia at all, but there is a sense of being.
02:49:13.460 | - Yeah.
02:49:14.380 | I personally, now you haven't been to that side of the room.
02:49:18.540 | I have been to that side of the room.
02:49:20.020 | So for me, memories, I start playing
02:49:25.020 | the engine of memory replay.
02:49:27.200 | Which is like, I take myself back in time
02:49:31.380 | and think about that place where I was,
02:49:33.420 | hanging out in that part.
02:49:34.500 | And that's what I see when I'm behind.
02:49:35.740 | So that's an interesting quirk of humans too,
02:49:38.660 | we're able to, we're collecting these experiences
02:49:41.180 | and we can replay them in interesting ways
02:49:43.060 | whenever we feel like it.
02:49:44.420 | And it's almost like being there,
02:49:46.620 | but not really, but almost.
02:49:49.140 | - That's right.
02:49:50.500 | And yet we can go our entire lives in this.
02:49:53.180 | You're talking about the minimal thing for VR.
02:49:54.740 | We can go our entire lives and not realize
02:49:56.740 | that all of my life, it's been like nothing behind me.
02:50:01.200 | We're not even aware that all of our lives,
02:50:06.300 | if you just pay attention, close your eyes,
02:50:09.420 | pay attention to what's behind me,
02:50:10.660 | we're like, oh, holy smoke.
02:50:12.060 | It's scary, I mean, it's like nothing.
02:50:14.740 | There's no quality there at all.
02:50:16.100 | How did I not notice that my entire life?
02:50:18.340 | We're so immersed in the simulation, we buy it so much.
02:50:21.300 | - Yeah, I mean, you could see this with children, right?
02:50:24.860 | Though with persistence, you could do the peekaboo game.
02:50:28.420 | You can hide from them and appear,
02:50:30.780 | and they're fully tricked.
02:50:32.100 | And in the same way, we're fully tricked.
02:50:34.820 | There's nothing behind us and we assume there is.
02:50:37.820 | And that's really interesting.
02:50:39.340 | These theories are pretty heavy.
02:50:42.280 | You as a human being, as a mortal human being,
02:50:46.640 | how have these theories been for you personally?
02:50:49.880 | Like, are there good days and bad days
02:50:51.720 | when you wake up and look in the mirror
02:50:54.080 | and the fact that you can't see anything behind you?
02:50:56.880 | The fact that it's rendered,
02:50:58.800 | like, is there interesting quirks?
02:51:00.680 | Nietzsche, if you gaze long into the abyss,
02:51:05.200 | the abyss gazes into you.
02:51:08.080 | How have these theories, these ideas changed you as a person?
02:51:12.040 | - It's been very, very difficult.
02:51:15.420 | This stuff is not just abstract theory building
02:51:19.720 | because it's about us.
02:51:21.720 | Sometimes I realize that there's this big division in me.
02:51:23.840 | My mind is doing all the science
02:51:26.560 | and coming up with these conclusions,
02:51:28.440 | and the rest of me is not integrating.
02:51:29.960 | I'm just like, I don't believe it.
02:51:31.520 | I just don't believe this.
02:51:33.560 | So as I start to take it seriously,
02:51:35.880 | I get scared myself.
02:51:36.960 | It's like, but it's very much,
02:51:41.000 | then I read these spiritual traditions
02:51:42.960 | and realize they're saying very, very similar things.
02:51:45.080 | It's like, there's a lot of convergence.
02:51:48.200 | So for me, I have,
02:51:51.320 | the first time I thought it might be possible
02:51:55.720 | that we're not seeing the truth was in 1986.
02:51:59.920 | It was from some mathematics we were doing.
02:52:02.640 | And when that hit me,
02:52:04.560 | it hit me like a ton of bricks I had to sit down.
02:52:06.640 | It was really, it was scary.
02:52:11.280 | It was really a shock to the system.
02:52:14.080 | And then to realize that everything
02:52:16.640 | that has been important to me,
02:52:18.160 | like getting a house, getting a car,
02:52:23.160 | getting a reputation and so forth.
02:52:25.440 | Well, that car is just like the car I see
02:52:28.840 | in the virtual reality.
02:52:29.680 | It's just there when you perceive it and it's not there.
02:52:32.680 | So the whole question of what am I doing and why?
02:52:36.600 | What's worthwhile doing in life?
02:52:39.500 | Clearly, getting a big house and getting a big car.
02:52:45.220 | I mean, we all know that we're gonna die.
02:52:48.200 | So we tend not to know that.
02:52:50.520 | We tend to hide it, especially when we're young.
02:52:52.720 | Before age 30, we don't believe we're gonna die.
02:52:54.680 | - But we factually maybe know
02:52:56.920 | that you kind of are supposed to, yeah.
02:52:59.480 | - But they'll figure something out
02:53:00.640 | and we'll be the generation that's the first one
02:53:02.960 | that doesn't have to die.
02:53:04.320 | That's the kind of thing.
02:53:05.200 | But when you really face the fact that you're going to die,
02:53:09.420 | and then when I start to look at it
02:53:12.720 | from this point of view that,
02:53:13.560 | well, this thing was an interface to begin with.
02:53:16.480 | So what I'm really gonna be doing,
02:53:20.200 | just taking off a headset.
02:53:21.600 | So I've been playing in a virtual reality game all day
02:53:24.160 | and I got lost in the game
02:53:25.840 | when I was fighting over a Porsche.
02:53:27.720 | And I shot some guys up and I punctured their tires
02:53:31.440 | and I got the Porsche.
02:53:33.000 | Now I take the headset off and what was that for?
02:53:35.160 | Nothing.
02:53:36.120 | It was just, it was a data structure
02:53:37.720 | and the data structure is gone.
02:53:39.080 | So all of the wars, the fighting and the reputations
02:53:42.960 | and all this stuff, it's just a headset.
02:53:46.960 | So my theory says that intellectually,
02:53:52.520 | my mind, my emotions rebel all over the place.
02:53:57.520 | This is like, and so I have to meditate, I meditate a lot.
02:54:03.480 | - What percent of the day would you say you spend
02:54:06.320 | as a physicalist sort of living life,
02:54:11.040 | pretending your car matters, your reputation matters?
02:54:16.040 | Like how much, what's that Tom Waits song?
02:54:19.680 | I like my town with a little drop of poison.
02:54:22.480 | How much poison do you allow yourself to have?
02:54:25.760 | - I think my default mode is physicalist.
02:54:27.680 | I think that that's just the default.
02:54:30.800 | When I'm not being conscious, consciously attentive.
02:54:35.800 | - Intellectually consciously attentive.
02:54:39.000 | 'Cause if you're just, you're still,
02:54:40.800 | if you're tasting coffee and not thinking
02:54:43.140 | or drinking or just taking in the sunset,
02:54:45.560 | you're not being intellectual,
02:54:47.440 | but you're still experiencing it.
02:54:49.440 | So it's when you turn on the introspective machine,
02:54:53.640 | that's when you can start.
02:54:54.960 | - And turn off the thinker.
02:54:56.840 | When I actually just start looking without thinking,
02:55:00.060 | - Huh.
02:55:00.980 | - So that's when I feel like I,
02:55:03.820 | all of a sudden I'm starting to see through.
02:55:06.820 | Sort of like, okay, part of the addiction to the interface
02:55:11.820 | is all the stories I'm telling about it.
02:55:14.780 | It's really important for me to get that,
02:55:15.980 | really important to do that.
02:55:18.060 | So I'm telling all these stories and so I'm all wrapped up.
02:55:21.780 | Almost all of the mind stuff that's going on in my head
02:55:24.420 | is about attachment to the interface.
02:55:28.740 | And so what I found is that the,
02:55:32.740 | essentially the only way to really detach
02:55:37.420 | from the interface is to literally let go
02:55:42.300 | of thoughts altogether.
02:55:44.360 | And then all of a sudden, even my identity,
02:55:49.180 | my whole history, my name, my education,
02:55:52.020 | and all this stuff is almost irrelevant
02:55:54.760 | because it's just now here is the present moment.
02:55:59.500 | And this is the reality right now.
02:56:03.860 | And all of that other stuff is an interface story.
02:56:07.160 | But this conscious experience right now,
02:56:09.520 | this is the only reality as far as I can tell.
02:56:14.260 | The rest of it's a story.
02:56:16.080 | And, but that is again, not my default.
02:56:20.620 | That is, I have to make a really conscious choice
02:56:25.320 | to say, okay, I know intellectually
02:56:28.400 | this is all an interface.
02:56:30.440 | I'm gonna take the headset off and so forth.
02:56:33.580 | And then immediately sink back into the game
02:56:36.540 | and just be out there playing the game and get lost.
02:56:39.760 | So I'm always lost in the game
02:56:41.520 | unless I literally consciously choose to stop thinking.
02:56:47.960 | - Isn't it terrifying to acknowledge
02:56:50.840 | that to look beyond the game?
02:56:55.220 | Isn't it--
02:56:57.680 | - Scares the hell out of me.
02:56:59.800 | It really is scary because I'm so attached.
02:57:03.400 | I'm attached to this body.
02:57:04.480 | I'm attached to the interface.
02:57:05.560 | - Are you ever worried about breaking your brain a bit?
02:57:09.840 | Meaning like, it's, I mean, some of these ideas
02:57:16.080 | when you think about reality, even with like Einstein,
02:57:18.900 | just realizing, you said interface,
02:57:22.400 | just realizing that light,
02:57:25.080 | you know, that there's a speed of light
02:57:28.120 | and you can't go faster than the speed of light,
02:57:29.840 | and like what kind of things black holes
02:57:32.120 | can do with light, even that can mess with your head.
02:57:37.040 | - Yes.
02:57:38.120 | - But that's still space time.
02:57:41.000 | - That's a big mess, but it's still just space time.
02:57:42.720 | It's still a property of our interface.
02:57:44.720 | That's right.
02:57:45.560 | But it's still like, even Einstein realized
02:57:49.880 | that this particular thing, some of the stories
02:57:52.280 | we tell ourselves is constructing interfaces
02:57:56.040 | that are oversimplifying the way things work
02:57:59.440 | because it's nice.
02:58:01.640 | The stories are nice.
02:58:03.640 | Stories are nice.
02:58:04.480 | That's what I mean, just like video games.
02:58:06.400 | They're nice.
02:58:07.880 | - Right, but Einstein was a realist, right?
02:58:10.200 | He was a famous realist in the sense
02:58:12.160 | that he was very explicit in a 1935 paper
02:58:15.160 | with Podolsky and Rosen, the EPR paper, right?
02:58:18.240 | He, they said, "If without in any way disturbing a system,
02:58:23.240 | "I can predict with probability one,
02:58:28.580 | "the outcome of a measurement,
02:58:30.080 | "then there exists in reality that element," right?
02:58:35.920 | That value that, and we now know from quantum theory
02:58:39.400 | that that's false.
02:58:41.880 | That Einstein's idea of local realism
02:58:44.440 | is strictly speaking false.
02:58:47.320 | - Yeah.
02:58:48.160 | - And so we can predict, we can set up,
02:58:50.920 | in quantum theory, you can set up,
02:58:52.960 | and there's a paper by Chris Fuchs,
02:58:55.080 | quantum Bayesianism, where he scouts this out.
02:58:58.680 | It was done by other people,
02:58:59.520 | but he gives a good presentation of this,
02:59:01.460 | where they have a sequence of something
02:59:02.960 | like nine different quantum measurements that you can make.
02:59:06.280 | And you can predict with probability one
02:59:08.800 | what a particular outcome will be,
02:59:10.640 | which you can actually prove that it's impossible
02:59:15.480 | that the value existed before you made the measurement.
02:59:18.760 | So you know with probability one what you're gonna get,
02:59:20.800 | but you also know with certainty
02:59:23.000 | that that value was not there
02:59:24.120 | until you made the measurement.
02:59:25.880 | So we know from quantum theory
02:59:28.040 | that the act of observation is an act of fact creation.
02:59:31.660 | And that is built into what I'm saying
02:59:35.760 | with this theory of consciousness.
02:59:37.240 | If consciousness is fundamental,
02:59:39.520 | space-time itself is an act of fact creation.
02:59:42.200 | It's an interface that we create,
02:59:43.760 | consciousness creates, plus all the objects in it.
02:59:46.000 | So local realism is not true.
02:59:48.880 | Quantum theory is established,
02:59:51.800 | also non-contextual realism is not true.
02:59:54.560 | And that fits in perfectly with this idea
02:59:57.160 | that consciousness is fundamental.
02:59:59.560 | These things are, these exist as data structures
03:00:01.800 | when we create them.
03:00:02.800 | As Chris Fuchs says, the act of observation
03:00:06.700 | is an act of fact creation.
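One standard way to see that local realism fails, simpler than the nine-measurement QBism construction mentioned above, is the CHSH inequality: any local-realist model bounds the correlation combination S by 2, while quantum mechanics for the spin singlet, with correlations E(a, b) = -cos(a - b), reaches 2*sqrt(2). A minimal sketch of that textbook calculation; the angles below are the usual optimal choices, not anything from the transcript.

```python
# CHSH check: quantum singlet correlations violate the local-realist bound |S| <= 2.
import numpy as np

def E(a, b):
    # Quantum prediction for spin-singlet correlations at analyzer angles a, b.
    return -np.cos(a - b)

a, a_prime = 0.0, np.pi / 2            # Alice's two measurement settings
b, b_prime = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"|S| = {abs(S):.3f}  (any local-realist model requires |S| <= 2)")  # ~2.828
```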
03:00:08.720 | But I must say on a personal level,
03:00:10.920 | I'm having to spend,
03:00:14.960 | I spend a couple hours a day
03:00:17.680 | just sitting in meditation on this
03:00:22.460 | and facing the rebellion in me
03:00:27.460 | that goes to the core,
03:00:28.720 | it feels like it goes to the core of my being,
03:00:30.340 | rebellion against these ideas.
03:00:31.920 | So here it's very, very interesting
03:00:33.540 | for me to look at this because,
03:00:34.840 | so here I'm a scientist and I'm a person.
03:00:37.280 | The science is really clear.
03:00:39.120 | Local realism is false, non-contextual realism is false.
03:00:42.180 | Space-time is doomed, it's very, very clear.
03:00:44.520 | It couldn't be clearer.
03:00:45.960 | And my emotions rebel left and right.
03:00:50.200 | When I sit there and say, okay,
03:00:52.000 | I am not something in space and time.
03:00:53.840 | Something inside of me says, you're crazy.
03:00:57.680 | Of course you are.
03:00:58.520 | And I'm completely attached to it.
03:01:00.120 | I'm completely attached to all this stuff.
03:01:01.960 | I'm attached to my body, I'm attached to the headset,
03:01:04.600 | I'm attached to my car, attached to people,
03:01:07.560 | I'm attached to all of it.
03:01:08.860 | And yet I know as an absolute fact,
03:01:12.800 | I'm gonna walk away from all of it.
03:01:14.680 | I'm gonna die.
03:01:15.520 | In fact, I almost died last year.
03:01:21.560 | COVID almost killed me.
03:01:22.900 | I sent a goodbye text to my wife.
03:01:26.600 | So I was, I thought I was--
03:01:28.360 | - You really did.
03:01:29.240 | - I sent her a goodbye.
03:01:30.500 | I was in the emergency room
03:01:32.720 | and it had attacked my heart
03:01:35.760 | and it'd been at 190 beats per minute for 36 hours.
03:01:39.280 | I couldn't last much longer.
03:01:41.600 | I knew I couldn't, they couldn't stop it.
03:01:44.800 | - That was it.
03:01:46.680 | - So that was it.
03:01:47.520 | So I texted her goodbye from the emergency room.
03:01:50.640 | - I love you, goodbye kind of thing.
03:01:52.440 | - Yeah, right.
03:01:53.800 | Yeah, that was it.
03:01:55.860 | - Were you afraid?
03:01:56.700 | - Yeah, it's scary, right?
03:01:59.320 | But there was, you're just feeling so bad anyway
03:02:02.800 | that all, you know, that sort of you're scared
03:02:05.700 | but you're just feeling so bad
03:02:07.040 | that in some sense you just want it to stop anyway.
03:02:09.600 | So I've been there and faced it just a year ago.
03:02:16.080 | - How did that change you, by the way?
03:02:19.040 | Having this intellectual reality
03:02:22.000 | that's so challenging that you meditate on,
03:02:24.240 | that it's just an interface.
03:02:25.700 | And one of the hardest things to come to terms with
03:02:29.040 | is that that means that, you know, it's gonna end.
03:02:32.980 | How did that change you having come so close
03:02:37.520 | to the reality of it?
03:02:38.360 | It's not just an intellectual reality,
03:02:40.040 | it's a reality of death.
03:02:43.800 | - It's forced, I've meditated for 20 years now.
03:02:46.840 | And I would say averaging three or four hours a day.
03:02:50.400 | But it's put a new urgency,
03:02:57.440 | but urgency is not the right word
03:02:59.280 | because it's riveted my attention, I'll put it that way.
03:03:04.280 | It's really riveted my attention
03:03:06.920 | and I've really paid, I spent a lot more time
03:03:11.520 | looking at what spiritual traditions say.
03:03:13.860 | I don't, by the way, again, not taking it with, you know,
03:03:19.800 | take it all with a grain of salt.
03:03:21.440 | But on the other hand, I think it's stupid for me
03:03:23.380 | to ignore it.
03:03:24.320 | So I try to listen to the best ideas
03:03:28.860 | and to sort out nonsense from,
03:03:32.200 | and it's just, we all have to do it for ourselves, right?
03:03:34.480 | It's not easy.
03:03:35.480 | So what makes sense, and I have the advantage of some science
03:03:39.240 | so I can look at what science says
03:03:40.760 | and try to compare with spiritual tradition.
03:03:43.040 | I try to sort it out for myself.
03:03:44.840 | But then I also look and realize
03:03:48.320 | that there's another aspect to me,
03:03:49.560 | which is this whole emotional aspect.
03:03:53.200 | I seem to be wired up.
03:03:55.280 | As evolutionary psychology says, I'm wired up, right?
03:04:01.600 | All these defensive mechanisms, you know,
03:04:04.120 | I'm inclined to lie if I need to.
03:04:06.920 | I'm inclined to be angry, to protect myself,
03:04:10.960 | to have an in-group and an out-group,
03:04:13.140 | to try to make my reputation as big as possible,
03:04:17.680 | to try to demean the out-group.
03:04:19.440 | There's all these things that evolutionary psychology
03:04:22.000 | is spot on, it's really brilliant about the human condition.
03:04:26.240 | And yet I think evolution, as I said, evolutionary theory
03:04:30.520 | is a projection of a deeper theory
03:04:32.080 | where there may be no competition.
03:04:33.780 | So I'm in this very interesting position
03:04:38.600 | where I feel like, okay, according to my own theory,
03:04:41.840 | I'm consciousness, and maybe this is what it means
03:04:44.560 | for consciousness to wake up.
03:04:47.280 | It's not easy.
03:04:51.160 | It's almost like I feel like I have real skin in the game.
03:04:54.720 | It really is scary.
03:04:55.820 | I really was scared when I was about to die.
03:04:58.760 | It really was hard to say goodbye to my wife.
03:05:02.360 | It really pained.
03:05:04.660 | And to then look at that and then look at the fact
03:05:09.400 | that I'm gonna walk away from this anyway,
03:05:11.600 | and it's just an interface.
03:05:12.800 | How do I?
03:05:13.640 | So it's trying to put all this stuff together
03:05:16.280 | and really grok it, so to speak,
03:05:19.340 | not just intellectually, but grok it at an emotional level.
03:05:22.920 | - Yeah, what are you afraid of, you silly evolved organism
03:05:26.680 | that's gotten way too attached to the interface?
03:05:30.020 | What are you really afraid of?
03:05:32.000 | - That's right.
03:05:32.840 | - Is there a--
03:05:34.880 | - Very personal, you know, it's very, very personal.
03:05:36.640 | - Yeah. - Yeah.
03:05:37.720 | - I mean, speaking of that text,
03:05:40.960 | what do you think is this whole love thing?
03:05:43.920 | What's the role of love in our human condition?
03:05:49.200 | This interface thing we have,
03:05:51.020 | is it somehow interweaved, interconnected with consciousness?
03:05:54.620 | This attachment we have to other humans,
03:05:56.580 | and this deep, like some quality to it
03:06:01.580 | that seems very interesting, peculiar.
03:06:07.420 | - Well, there are two levels I would think about that.
03:06:11.180 | There's love in the sexual sense,
03:06:12.860 | and then there's love in a deeper sense.
03:06:15.420 | And in the sexual sense,
03:06:16.980 | we can give an evolutionary account of that,
03:06:19.140 | and so forth, and I think that's pretty clear to people.
03:06:21.980 | In this deeper sense, right, so of course,
03:06:29.240 | a marriage, I love my wife in a sexual sense,
03:06:32.340 | but there is a deeper sense as well.
03:06:34.060 | When I was saying goodbye to her,
03:06:35.100 | there was a much deeper love that was really at play there.
03:06:38.420 | That's one place where I think that the mixed bag
03:06:42.100 | from spiritual traditions has something right.
03:06:44.100 | When they say, you know, love your neighbor as yourself,
03:06:46.260 | that in some sense, love is fundamental,
03:06:49.460 | I think that they're onto something,
03:06:51.900 | something very, very deep and profound.
03:06:53.820 | And every once in a while,
03:06:57.460 | I can get a personal glimpse of that,
03:06:58.940 | especially when I'm in the space with no thought.
03:07:03.780 | Like when I can really let go of thoughts,
03:07:05.980 | I get little glimpses of a love
03:07:10.240 | in the sense that I'm not separate.
03:07:11.940 | It's a love in the sense that I'm not different
03:07:16.140 | from that, you know?
03:07:17.860 | - Yeah.
03:07:18.700 | - If you and I are separate, then I can fight you.
03:07:21.300 | But if you and I are the same, if there's a union there.
03:07:25.180 | - The togetherness of it, yeah.
03:07:26.740 | What, who's God?
03:07:28.620 | All those gods, the stories that have been told
03:07:32.900 | throughout history, you said,
03:07:34.340 | through the spiritual traditions.
03:07:36.720 | What do you think that is?
03:07:37.860 | Is that us trying to find that common thing at the core?
03:07:44.340 | - Well, in many traditions, not all.
03:07:48.840 | The one I was raised in, so my dad was a Protestant minister.
03:07:53.780 | We tend to think of God as a being.
03:07:57.940 | But I think that that's not right.
03:08:02.820 | I think the closest way to think about God is being, period.
03:08:06.900 | Not a being, but being.
03:08:08.980 | The very ground of being itself is God.
03:08:12.340 | I think that's the deep, and from my point of view,
03:08:16.340 | that's the ground of consciousness.
03:08:17.620 | So the ground of conscious being is what we might call God.
03:08:22.620 | But the word God has always been, you know,
03:08:25.260 | for example, the God you believe in is not my God,
03:08:27.640 | so I'm gonna fight you.
03:08:29.220 | We'll have wars over it, because the being,
03:08:32.380 | the specific being that you call God
03:08:34.460 | is different from the being that I call God,
03:08:35.940 | and so we fight.
03:08:36.920 | Whereas if it's not a being, but just being,
03:08:40.980 | and you and I share being, then you and I are not separate,
03:08:45.060 | and there's no reason to fight.
03:08:46.920 | We're both part of that one being,
03:08:48.780 | and loving you is loving myself,
03:08:52.020 | 'cause we're all part of that one being.
03:08:54.060 | The spiritual traditions that point to that,
03:08:57.300 | I think are pointing in a very interesting direction,
03:09:01.340 | and that does seem to match with the mathematics
03:09:04.780 | of the conscious agent stuff
03:09:05.820 | that I've been working on as well.
03:09:07.420 | That it really fits with that,
03:09:09.820 | although that wasn't my goal.
03:09:11.620 | - You mentioned that the young physicists that you talk to,
03:09:19.580 | or whose work you follow, have quite a lot of fun
03:09:23.800 | breaking with the traditions of the past,
03:09:26.260 | the assumptions of the past.
03:09:27.860 | What advice would you give to young people today,
03:09:31.220 | in high school and college, not just physicists,
03:09:34.140 | but in general, how to have a career they can be proud of,
03:09:38.820 | how they can have a life they can be proud of,
03:09:41.460 | how to make their way in the world,
03:09:43.060 | from the lessons, from the wins and the losses
03:09:45.980 | in your own life, what little insights could you pull out?
03:09:50.260 | - I would say the universe is a lot more interesting
03:09:53.980 | than you might expect, and you are a lot more special
03:09:58.580 | and interesting than you might expect.
03:10:00.020 | You might think that you're just a little,
03:10:02.540 | tiny, irrelevant, 100-pound or 200-pound
03:10:08.540 | person in a vast space, billions of light years across,
03:10:12.740 | and that's not the case.
03:10:15.940 | You are, in some sense, the being that's creating
03:10:18.940 | that space all the time, every time you look.
03:10:21.540 | So, waking up to who you really are,
03:10:25.300 | outside of space and time, as the author of space and time,
03:10:29.340 | as the author of everything that you see.
03:10:32.060 | - The author of space and time, that's beautiful.
03:10:35.900 | - You're the author of space and time,
03:10:38.300 | and I'm the author of space and time,
03:10:40.100 | and space and time is just one little data structure.
03:10:42.660 | Many other consciousnesses are creating other data structures
03:10:45.500 | that are authors of various other things.
03:10:48.260 | So, realizing, and then realizing that,
03:10:51.020 | I had this feeling growing up, going to college,
03:10:54.940 | reading all these textbooks, oh man, it's all been done.
03:10:57.620 | If I'd just been there 50 years ago,
03:11:00.580 | I could've discovered this stuff,
03:11:01.580 | but it's all in the textbooks now.
03:11:03.820 | - Well, believe me, the textbooks are gonna look silly
03:11:07.260 | in 50 years, and it's your chance
03:11:10.180 | to write the new textbook.
03:11:11.380 | So, of course, study the current textbooks.
03:11:14.620 | You have to understand them.
03:11:15.940 | There's no way to progress until you understand
03:11:19.180 | what's been done, but then,
03:11:22.020 | the only limit is your imagination, frankly.
03:11:25.980 | That's the only limit.
03:11:26.820 | - The greatest books, the greatest textbooks ever written
03:11:29.920 | on Earth are yet to be written.
03:11:31.860 | - Exactly.
03:11:33.700 | - What do you think is the meaning of this whole thing?
03:11:36.900 | What's the meaning of life from your limited interface?
03:11:40.180 | Can you figure it all out?
03:11:42.340 | Like, why, so you said the universe is kinda trying
03:11:45.780 | to figure itself out through us.
03:11:51.180 | - Yeah, that's the closest I've come.
03:11:55.420 | So, I will say that I don't know,
03:12:00.420 | but here's my guess, right?
03:12:02.700 | - That's a good first sentence.
03:12:03.940 | That's a good starting point.
03:12:05.220 | - And maybe that's gonna be a profound part
03:12:08.620 | of the final answer, to start with the "I don't know."
03:12:10.740 | It's quite possible that it's really important
03:12:13.820 | to start with the "I don't know."
03:12:15.260 | My guess is that if consciousness is fundamental,
03:12:18.640 | and if Gödel's incompleteness theorem holds here,
03:12:23.020 | and there's infinite variety of structures
03:12:27.660 | for consciousness, in some sense, to explore,
03:12:29.980 | that maybe that's what it's about.
03:12:37.100 | This is something that Annika and I talked about
03:12:39.140 | a little bit, and she doesn't like this way
03:12:40.220 | of talking about it, and so I'm gonna have to talk
03:12:41.420 | with her some more about this way of talking.
03:12:43.940 | But right now, I'll just put it this way,
03:12:45.460 | and I'll have to talk with her more
03:12:46.500 | and see if I can say it more clearly.
03:12:48.940 | But the way I'm talking about it now is that
03:12:55.740 | there's a sense in which there's being,
03:12:59.300 | and then there's experiences or forms
03:13:03.860 | that come out of being.
03:13:05.700 | That's one deep, deep mystery.
03:13:07.360 | And the question that you asked, what is it all about?
03:13:13.700 | Somehow it's related to that.
03:13:15.260 | Why does being, why doesn't it just stay without any forms?
03:13:19.620 | Why do we have experiences?
03:13:23.100 | Why not just have that? When you close your eyes
03:13:26.700 | and you pay attention to what's behind you,
03:13:28.540 | there's nothing, but there's being.
03:13:30.940 | Why don't we just stop there?
03:13:35.980 | Why didn't we just stop there?
03:13:36.980 | Why did we create all the tables and chairs
03:13:39.420 | and the sun and moon and people?
03:13:41.700 | All this really complicated stuff, why?
03:13:43.840 | And all I can guess right now,
03:13:49.420 | and I'll probably kick myself in a couple years
03:13:51.860 | and say that was dumb, but all I can guess right now
03:13:53.940 | is that somehow consciousness wakes up to itself
03:13:57.860 | by knowing what it's not.
03:13:59.140 | So here I am, I'm not this body.
03:14:03.140 | And I sort of saw that, it was sort of in my face
03:14:06.420 | when I sent a text goodbye,
03:14:09.100 | but then as soon as I'm better, it's sort of like,
03:14:11.300 | okay, I sort of don't wanna go there, right?
03:14:14.220 | Okay, so I am my body.
03:14:18.340 | I go back to the standard, I am my body,
03:14:20.380 | and I want to get that car,
03:14:22.340 | and even though I was just about to die a year ago,
03:14:24.820 | so that comes rushing back.
03:14:26.780 | So consciousness immerses itself fully
03:14:31.540 | into a particular headset, gets lost in it,
03:14:37.460 | and then slowly wakes up.
03:14:40.420 | - Just so it can escape, and that is the waking up,
03:14:42.580 | but it needs to have--
03:14:43.780 | - It needs to know what it's not.
03:14:45.620 | It needs to know what you are.
03:14:48.380 | You have to say, oh, I'm not that, I'm not that.
03:14:50.260 | That wasn't important, that wasn't important.
03:14:53.260 | - That's really powerful.
03:14:54.340 | Don, let me just say that,
03:14:56.020 | because I've been a long-term fan of yours,
03:15:00.460 | and we're supposed to have a conversation
03:15:02.500 | during this very difficult moment in your life,
03:15:05.420 | let me just say you're a truly special person,
03:15:07.380 | and I, for one, and I know there's a lot of others
03:15:10.340 | that agree, I'm glad that you're still here
03:15:12.740 | with us on this earth, if only for a short time.
03:15:17.900 | So whatever the universe, whatever plan it has for you
03:15:22.900 | that brought you close to death
03:15:26.660 | to maybe enlighten you some kind of way,
03:15:28.660 | I think it has an interesting plan for you.
03:15:34.060 | You're one of the truly special humans,
03:15:35.980 | and it's a huge honor that you would sit
03:15:37.500 | and talk with me today.
03:15:38.820 | Thank you so much.
03:15:39.660 | - Thank you very much, Lex.
03:15:40.500 | I really appreciate that, thank you.
03:15:42.780 | - Thanks for listening to this conversation
03:15:44.220 | with Donald Hoffman.
03:15:45.500 | To support this podcast, please check out our sponsors
03:15:48.180 | in the description.
03:15:49.500 | And now, let me leave you with some words
03:15:51.540 | from Albert Einstein, relevant to the ideas
03:15:54.420 | discussed in this conversation.
03:15:56.020 | Time and space are modes by which we think,
03:16:01.220 | and not conditions in which we live.
03:16:03.660 | Thank you for listening, and hope to see you next time.
03:16:07.580 | (upbeat music)