
Joscha Bach: Artificial Consciousness and the Nature of Reality | Lex Fridman Podcast #101


Chapters

0:00 Introduction
3:14 Reverse engineering Joscha Bach
10:38 Nature of truth
18:47 Original thinking
23:14 Sentience vs intelligence
31:45 Mind vs Reality
46:51 Hard problem of consciousness
51:09 Connection between the mind and the universe
56:29 What is consciousness
62:32 Language and concepts
69:02 Meta-learning
76:35 Spirit
78:10 Our civilization may not exist for long
97:48 Twitter and social media
104:52 What systems of government might work well?
107:12 The way out of self-destruction with AI
115:18 AI simulating humans to understand its own nature
124:32 Reinforcement learning
129:12 Commonsense reasoning
135:47 Would AGI need to have a body?
142:34 Neuralink
147:01 Reasoning at the scale of neurons and societies
157:16 Role of emotion
168:3 Happiness is a cookie that your brain bakes for itself

Whisper Transcript

00:00:00.000 | The following is a conversation with Joscha Bach,
00:00:03.040 | VP of Research at the AI Foundation
00:00:05.640 | with a history of research positions at MIT and Harvard.
00:00:09.480 | Joscha is one of the most unique and brilliant people
00:00:13.400 | in the artificial intelligence community,
00:00:15.680 | exploring the workings of the human mind,
00:00:18.080 | intelligence, consciousness, life on earth,
00:00:21.440 | and the possibly simulated fabric of our universe.
00:00:24.900 | I could see myself talking to Joscha
00:00:27.560 | many times in the future.
00:00:29.800 | Quick summary of the ads.
00:00:31.480 | Two sponsors, ExpressVPN and Cash App.
00:00:35.440 | Please consider supporting the podcast
00:00:37.160 | by signing up at expressvpn.com/lexpod
00:00:41.200 | and downloading Cash App and using code LEXPODCAST.
00:00:46.200 | This is the Artificial Intelligence Podcast.
00:00:49.080 | If you enjoy it, subscribe on YouTube,
00:00:51.300 | review it with five stars on Apple Podcasts,
00:00:53.520 | support it on Patreon,
00:00:54.920 | or simply connect with me on Twitter @lexfridman.
00:00:59.040 | Since this comes up more often
00:01:01.640 | than I ever would have imagined,
00:01:03.520 | I challenge you to try to figure out
00:01:05.740 | how to spell my last name without using the letter E,
00:01:09.540 | and it'll probably be the correct way.
00:01:12.480 | As usual, I'll do a few minutes of ads now
00:01:14.800 | and never any ads in the middle
00:01:16.140 | that can break the flow of the conversation.
00:01:19.000 | This show is sponsored by ExpressVPN.
00:01:21.600 | Get it at expressvpn.com/lexpod
00:01:25.240 | to support this podcast
00:01:26.480 | and to get an extra three months free
00:01:28.960 | on a one-year package.
00:01:30.860 | I've been using ExpressVPN for many years.
00:01:34.060 | I love it.
00:01:35.360 | I think ExpressVPN is the best VPN out there.
00:01:38.620 | They told me to say it,
00:01:40.280 | but I think it actually happens to be true.
00:01:43.080 | It doesn't log your data,
00:01:44.580 | it's crazy fast,
00:01:46.000 | and it's easy to use, literally.
00:01:47.880 | Just one big power on button.
00:01:50.300 | Again, for obvious reasons,
00:01:52.200 | it's really important that they don't log your data.
00:01:55.200 | It works on Linux and everywhere else too.
00:01:57.720 | Shout out to my favorite flavor of Linux, Ubuntu MATE 20.04.
00:02:02.720 | Once again, get it at expressvpn.com/lexpod
00:02:07.400 | to support this podcast
00:02:08.680 | and to get an extra three months free
00:02:12.120 | on a one-year package.
00:02:13.540 | This show is presented by Cash App,
00:02:16.620 | the number one finance app in the App Store.
00:02:18.980 | When you get it, use code LEXPODCAST.
00:02:22.220 | Cash App lets you send money to friends,
00:02:24.400 | buy Bitcoin, and invest in the stock market
00:02:27.120 | with as little as $1.
00:02:28.920 | Since Cash App does fractional share trading,
00:02:31.320 | let me mention that the order execution algorithm
00:02:34.100 | that works behind the scenes
00:02:35.400 | to create the abstraction of the fractional orders
00:02:38.600 | is an algorithmic marvel.
00:02:40.340 | So big props to the Cash App engineers
00:02:42.640 | for taking a step up to the next layer of abstraction
00:02:45.080 | over the stock market,
00:02:46.360 | making trading more accessible for new investors
00:02:49.060 | and diversification much easier.
00:02:51.840 | So again, if you get Cash App from the App Store or Google Play
00:02:55.360 | and use the code LEXPODCAST, you get $10.
00:02:59.280 | And Cash App will also donate $10 to FIRST,
00:03:02.480 | an organization that is helping advance robotics
00:03:05.240 | and STEM education for young people around the world.
00:03:08.240 | And now, here's my conversation with Joscha Bach.
00:03:13.380 | As you've said, you grew up in a forest in East Germany,
00:03:17.840 | just as we were talking about off mic,
00:03:20.240 | to parents who were artists.
00:03:22.680 | And now I think, at least to me,
00:03:24.800 | you've become one of the most unique thinkers
00:03:26.520 | in the AI world.
00:03:28.160 | So can we try to reverse engineer your mind a little bit?
00:03:31.600 | What were the key philosophers, scientists, ideas,
00:03:35.960 | maybe even movies, or just realizations
00:03:38.440 | that had an impact on you when you were growing up
00:03:41.040 | that kind of led to the trajectory,
00:03:43.920 | or were the key sort of crossroads in the trajectory
00:03:46.640 | of your intellectual development?
00:03:49.760 | - My father came from a long tradition of architects,
00:03:53.320 | a distant branch of the Bach family.
00:03:57.280 | And so basically he was technically a nerd,
00:04:00.560 | and nerds need to interface in society
00:04:04.200 | with non-standard ways.
00:04:05.740 | Sometimes I define a nerd as somebody
00:04:08.560 | who thinks that the purpose of communication
00:04:10.840 | is to submit your ideas to peer review.
00:04:13.480 | And normal people understand that the primary purpose
00:04:17.560 | of communication is to negotiate alignment.
00:04:20.400 | And these purposes tend to conflict,
00:04:22.840 | which means that nerds have to learn
00:04:24.360 | how to interact with society at large.
00:04:26.280 | - Who is the reviewer in the nerd's view of communication?
00:04:32.200 | - Everybody who you consider to be a peer.
00:04:35.000 | So whatever hapless individual is around,
00:04:37.840 | well, you would try to make him or her
00:04:40.280 | the gift of information.
00:04:41.680 | - Okay, so you're, now by the way,
00:04:45.740 | my research misinformed me.
00:04:48.320 | So you're architect or artist?
00:04:51.840 | Or do you see those two as the same?
00:04:52.680 | - So he did study architecture,
00:04:54.600 | but basically my grandfather made the wrong decision.
00:04:57.840 | He married an aristocrat and was drawn into the war.
00:05:02.840 | And he came back after 15 years.
00:05:05.600 | So basically my father was not parented by a nerd,
00:05:10.600 | but by somebody who tried to tell him what to do
00:05:13.840 | and expected him to do what he was told.
00:05:16.900 | And he was unable to, he's unable to do things
00:05:20.260 | if he's not intrinsically motivated.
00:05:21.780 | So in some sense, my grandmother broke her son
00:05:24.980 | and her son responded by, when he became an architect,
00:05:28.540 | to become an artist.
00:05:30.080 | So he built Hundertwasser architecture,
00:05:32.260 | he built houses without right angles.
00:05:34.460 | He'd built lots of things that didn't work
00:05:36.100 | in the more brutalist traditions of Eastern Germany.
00:05:39.500 | And so he bought an old watermill,
00:05:41.780 | moved out to the countryside
00:05:43.460 | and did only what he wanted to do, which was art.
00:05:46.280 | Eastern Germany was perfect for Bohème,
00:05:48.560 | because you had complete material safety,
00:05:51.380 | food was heavily subsidized, healthcare was free.
00:05:54.000 | You didn't have to worry about rent or pensions or anything.
00:05:56.640 | - So it's a socialized communist side of Germany.
00:05:58.980 | - And the other thing is it was almost impossible
00:06:01.380 | not to be in political disagreement with your government,
00:06:03.700 | which is very productive for artists.
00:06:05.560 | So everything that you do is intrinsically meaningful
00:06:08.260 | because it will always touch on the deeper currents
00:06:11.140 | of society, of culture, and be in conflict with it,
00:06:13.740 | and tension with it, and you will always have
00:06:15.880 | to define yourself with respect to this.
00:06:18.340 | - So what impacted your father,
00:06:19.780 | this outside-of-the-box thinker against the government,
00:06:24.780 | against the world artists?
00:06:26.580 | - He was actually not a thinker.
00:06:28.340 | He was somebody who only got self-aware
00:06:30.540 | to the degree that he needed to make himself functional.
00:06:33.540 | So in some sense, he was also in the late 1960s,
00:06:37.940 | and he was in some sense a hippie.
00:06:40.340 | So he became a one-person cult.
00:06:42.700 | He lived out there in his kingdom.
00:06:44.500 | He built big sculpture gardens
00:06:46.100 | and started many avenues of art and so on,
00:06:51.100 | and convinced a woman to live with him.
00:06:53.980 | She was also an architect, and she adored him
00:06:56.280 | and decided to share her life with him.
00:06:58.660 | And I basically grew up in a big cave full of books.
00:07:02.220 | I was almost feral, and I was bored out there.
00:07:05.700 | It was very, very beautiful, very quiet, and quite lonely.
00:07:08.940 | So I started to read, and by the time I came to school,
00:07:12.620 | I've read everything until fourth grade and then some,
00:07:15.700 | and there was not a real way for me
00:07:17.420 | to relate to the outside world.
00:07:19.420 | And I couldn't quite put my finger on why,
00:07:21.920 | and today I know it was because I was a nerd, obviously,
00:07:24.460 | and I was the only nerd around,
00:07:26.780 | so there were no other kids like me.
00:07:29.140 | And there was nobody interested in physics
00:07:31.740 | or computing or mathematics and so on.
00:07:34.780 | And this village school that I went to
00:07:37.300 | was basically a nice school.
00:07:39.140 | Kids were nice to me.
00:07:40.060 | I was not beaten up, but I also didn't make many friends
00:07:42.460 | or build deep relationships.
00:07:44.140 | That only happened starting from ninth grade
00:07:46.340 | when I went into a school for mathematics and physics.
00:07:49.500 | - Do you remember any key books from this moment?
00:07:51.140 | - Yes, yes, I basically read everything.
00:07:52.700 | So I went to the library, and I worked my way
00:07:56.180 | through the children's and young adult sections,
00:07:58.680 | and then I read a lot of science fiction.
00:08:01.340 | For instance, Stanisław Lem,
00:08:03.460 | basically the great author of cybernetics,
00:08:05.580 | has influenced me.
00:08:06.720 | Back then, I didn't see him as a big influence
00:08:08.620 | because everything that he wrote
00:08:09.740 | seemed to be so natural to me.
00:08:11.540 | It's only later that I contrasted it
00:08:13.700 | with what other people wrote.
00:08:15.140 | Another thing that was very influential on me
00:08:17.940 | were the classical philosophers
00:08:19.740 | and also the literature of Romanticism,
00:08:22.100 | so German poetry and art, Droste-Hülshoff and Heine
00:08:27.100 | and up to Hesse and so on.
00:08:30.300 | - Hesse, I love Hesse.
00:08:31.580 | So at which point do the classical philosophers end?
00:08:34.980 | At this point, we're in the 21st century,
00:08:37.360 | what's the latest classical philosopher?
00:08:39.520 | Does this stretch through even as far as Nietzsche,
00:08:43.080 | or is this, are we talking about Plato and Aristotle?
00:08:46.000 | - I think that Nietzsche is the classical equivalent
00:08:48.360 | of a shit poster.
00:08:49.380 | - So he's a classical troll. - He's very smart
00:08:53.840 | and easy to read, but he's not so much trolling others,
00:08:57.160 | he's trolling himself because he was at odds with the world.
00:08:59.960 | Largely, his Romantic relationships didn't work out.
00:09:02.640 | He got angry and he basically became a nihilist.
00:09:05.040 | - Isn't that a beautiful way to be as an intellectual,
00:09:09.860 | is to constantly be trolling yourself,
00:09:11.940 | to be in that conflict and that tension?
00:09:15.100 | - I think it's a lack of self-awareness.
00:09:16.680 | At some point, you have to understand
00:09:18.800 | the comedy of your own situation.
00:09:20.300 | If you take yourself seriously and you are not functional,
00:09:24.020 | it ends in tragedy, as it did for Nietzsche.
00:09:25.900 | - I think you think he took himself too seriously
00:09:28.620 | in that tension.
00:09:29.940 | - And if you find the same thing in Hesse and so on,
00:09:32.300 | this Steppenwolf syndrome is classic adolescence,
00:09:35.340 | where you basically feel misunderstood by the world
00:09:37.780 | and you don't understand that all the misunderstandings
00:09:39.860 | are the result of your own lack of self-awareness,
00:09:43.100 | because you think that you are a prototypical human
00:09:45.780 | and the others around you should behave the same way
00:09:48.180 | as you expect them based on your innate instincts
00:09:50.300 | and it doesn't work out.
00:09:51.860 | And you become a transcendentalist to deal with that.
00:09:55.540 | So it's very, very understandable
00:09:56.980 | and I have great sympathies for this,
00:09:58.620 | to the degree that I can have sympathy
00:10:00.280 | for my own intellectual history.
00:10:02.260 | But you have to grow out of it.
00:10:03.980 | (laughs)
00:10:04.820 | - So as an intellectual, a life well-lived,
00:10:07.380 | a journey well-traveled is one
00:10:09.300 | where you don't take yourself seriously,
00:10:10.980 | from that perspective. - No, I think that you
00:10:12.860 | are neither serious or not serious yourself,
00:10:16.100 | because you need to become unimportant as a subject.
00:10:19.940 | That is, if you are a philosopher, belief is not a verb.
00:10:23.580 | You don't do this for the audience
00:10:26.260 | and you don't do it for yourself.
00:10:27.480 | You have to submit to the things that are possibly true
00:10:30.700 | and you have to follow wherever your inquiry leads,
00:10:33.440 | but it's not about you, it has nothing to do with you.
00:10:36.620 | - So do you think then people like Ayn Rand
00:10:39.940 | believed sort of in the idea of there's objective truth,
00:10:42.500 | so what's your sense in the philosophical,
00:10:45.200 | if you remove yourself as objective from the picture,
00:10:48.420 | you think it's possible to actually discover
00:10:50.700 | ideas that are true, or are we just in a mesh
00:10:53.140 | of relative concepts that are neither true nor false?
00:10:56.620 | It's just a giant mess.
00:10:57.920 | - You cannot define objective truth
00:11:00.660 | without understanding the nature of truth
00:11:02.380 | in the first place, so what does the brain mean
00:11:04.820 | by saying that it discovers something as truth?
00:11:07.460 | So for instance, a model can be predictive or not predictive.
00:11:10.940 | Then there can be a sense in which a mathematical statement
00:11:15.120 | can be true because it's defined as true
00:11:17.140 | under certain conditions, so it's basically
00:11:19.680 | a particular state that a variable can have
00:11:23.380 | in a symbol game, and then you can have a correspondence
00:11:27.220 | between systems and talk about truth,
00:11:29.020 | which is again a type of model correspondence.
00:11:31.260 | And there also seems to be a particular
00:11:32.820 | kind of ground truth, so for instance,
00:11:35.220 | you're confronted with the enormity
00:11:37.300 | of something existing at all, right?
00:11:39.460 | That's stunning when you realize something exists
00:11:43.220 | rather than nothing, and this seems to be true, right?
00:11:45.660 | There's an absolute truth in the fact
00:11:48.020 | that something seems to be happening.
00:11:49.940 | - Yeah, that to me is a showstopper.
00:11:52.100 | I could just think about that idea
00:11:53.740 | and be amazed by that idea for the rest of my life
00:11:56.820 | and not go any farther, 'cause I don't even know
00:11:58.740 | the answer to that.
00:11:59.580 | Why does anything exist at all?
00:12:01.220 | - Well, the easiest answer is existence is the default,
00:12:03.620 | right, so this is the lowest number of bits
00:12:05.380 | that you would need to encode this.
00:12:06.980 | - Whose answer, who provides that?
00:12:07.820 | - The simplest answer to this is that existence
00:12:10.420 | is the default.
00:12:11.460 | - What about nonexistence?
00:12:12.620 | I mean, that seems--
00:12:14.340 | - Nonexistence might not be a meaningful notion
00:12:16.460 | in this sense, so in some sense,
00:12:18.180 | if everything that can exist exists,
00:12:20.460 | for something to exist, it probably needs
00:12:22.220 | to be implementable.
00:12:23.660 | The only thing that can be implemented
00:12:25.620 | is finite automata, so maybe the whole of existence
00:12:28.220 | is the superposition of all finite automata,
00:12:30.580 | and we are in some region of the fractal
00:12:32.300 | that has the properties that it can contain us.
00:12:35.060 | - What does it mean to be a superposition of finite,
00:12:38.140 | so I understand, superposition of all possible rules.
00:12:43.060 | - Imagine that every automaton is basically an operator
00:12:45.980 | that acts on some substrate, and as a result,
00:12:49.220 | you get emergent patterns.
00:12:50.580 | - What's a substrate?
00:12:51.580 | - I have no idea to know, so it's basically--
00:12:54.460 | - But some substrate.
00:12:55.300 | - Something that can store information.
00:12:58.580 | - Something that can store information,
00:12:59.740 | there's an automaton operator.
00:13:00.580 | - Something that can hold state.
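
(Aside: a minimal illustrative sketch, not from the conversation, of what "an automaton acting on a substrate that holds state" can look like — an elementary cellular automaton whose trivial update rule still produces emergent patterns. Rule 110 is chosen here only as a convenient example.)

```python
# Illustrative sketch: an automaton as an operator acting on a substrate
# (a row of bits that holds state). Repeated application of a trivial
# local rule produces emergent, non-obvious patterns.

RULE = 110  # elementary cellular automaton; Rule 110 is Turing-complete

def step(cells: list[int]) -> list[int]:
    n = len(cells)
    # Each new cell is a function of its left neighbor, itself, and its right neighbor.
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 63 + [1]           # the substrate, seeded with a single set bit
for _ in range(30):              # apply the operator repeatedly
    print("".join("#" if c else "." for c in state))
    state = step(state)
```
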
00:13:01.780 | - Still, doesn't make sense to me the why that exists at all.
00:13:04.740 | I could just sit there with a beer or a vodka
00:13:08.860 | and just enjoy the fact, pondering the why.
00:13:11.580 | - May not have a why.
00:13:13.100 | This might be the wrong direction to ask into this,
00:13:15.820 | so there could be no relation in the why direction
00:13:19.300 | without asking for a purpose or for a cause.
00:13:22.420 | It doesn't mean that everything has to have a purpose
00:13:24.100 | or a cause, right?
00:13:25.060 | - So we mentioned some philosophers in that earlier,
00:13:28.940 | just taking a brief step back into that.
00:13:31.660 | - Okay, so we asked ourselves,
00:13:33.140 | when did classical philosophy end?
00:13:35.060 | I think for Germany, it largely ended
00:13:36.940 | with the first revolution.
00:13:38.500 | That's basically when we--
00:13:39.580 | - Which one was that?
00:13:40.700 | - This was when we ended the monarchy
00:13:43.300 | and started a democracy, and at this point,
00:13:45.860 | we basically came up with a new form of government
00:13:48.740 | that didn't have a good sense of this new organism
00:13:52.220 | that society wanted to be, and in a way,
00:13:54.100 | it decapitated the universities.
00:13:56.260 | So the universities went on through modernism
00:13:58.580 | like a headless chicken.
00:13:59.980 | At the same time, democracy failed in Germany
00:14:02.180 | and we got fascism as a result,
00:14:04.340 | and it burned down things in a similar way
00:14:06.580 | as Stalinism burned down intellectual traditions in Russia.
00:14:09.820 | And Germany, both Germanys have not recovered from this.
00:14:12.820 | Eastern Germany had this vulgar dialectic materialism,
00:14:16.460 | and Western Germany didn't get much more edgy
00:14:18.540 | than Habermas.
00:14:19.780 | So in some sense, both countries
00:14:21.860 | lost their intellectual traditions,
00:14:23.280 | and killing off and driving out the Jews didn't help.
00:14:25.980 | - Yeah, so that was the end.
00:14:29.540 | That was the end of really rigorous,
00:14:31.540 | what you would say is classical philosophy.
00:14:34.180 | - There's also this thing that in some sense,
00:14:37.620 | the low-hanging fruits in philosophy were mostly reaped.
00:14:42.500 | And the last big things that we discovered
00:14:45.220 | was the constructivist turn in mathematics.
00:14:48.180 | So to understand that the parts of mathematics
00:14:50.300 | that work are computation.
00:14:52.300 | There was a very significant discovery
00:14:54.700 | in the first half of the 20th century.
00:14:57.060 | And it hasn't fully permeated philosophy
00:14:59.460 | and even physics yet.
00:15:01.700 | Physicists checked out the code libraries for mathematics
00:15:04.500 | before constructivism became universal.
00:15:07.460 | - What's constructivism?
00:15:08.460 | What are you referring to,
00:15:09.780 | Gödel's incompleteness theorem,
00:15:11.020 | that kind of those kinds of ideas?
00:15:11.860 | - So basically, Gödel himself, I think, didn't get it yet.
00:15:14.860 | Hilbert could get it.
00:15:16.660 | Hilbert saw that, for instance,
00:15:18.260 | Cantor's set-theoretic experiments in mathematics
00:15:20.900 | led into contradictions.
00:15:22.060 | And he noticed that with the current semantics,
00:15:26.060 | we cannot build a computer in mathematics
00:15:27.940 | that runs mathematics without crashing.
00:15:30.140 | And Gödel could prove this.
00:15:32.260 | And so what Gödel could show
00:15:33.700 | is using classical mathematical semantics,
00:15:36.100 | you run into contradictions.
00:15:37.420 | And because Gödel strongly believed in these semantics
00:15:40.260 | and more than in what he could observe and so on,
00:15:43.380 | he was shocked.
00:15:44.380 | It basically shook his world to the core
00:15:46.700 | because in some sense, he felt that the world
00:15:48.260 | has to be implemented in classical mathematics.
00:15:50.980 | And for Turing, it wasn't quite so bad.
00:15:53.780 | I think that Turing could see that the solution
00:15:56.220 | is to understand that mathematics
00:15:58.340 | was computation all along,
00:15:59.540 | which means, for instance,
00:16:01.540 | pi in classical mathematics is a value.
00:16:04.100 | It's also a function, but it's the same thing.
00:16:07.940 | And in computation, a function is only a value
00:16:10.980 | when you can compute it.
00:16:12.140 | And if you cannot compute the last digit of pi,
00:16:14.540 | you only have a function.
00:16:15.540 | You can plug this function into your local sun,
00:16:17.740 | let it run until the sun burns out.
00:16:19.620 | This is it.
00:16:20.460 | This is the last digit of pi you will know.
00:16:22.140 | But it also means that there can be no process
00:16:24.300 | in the physical universe
00:16:25.500 | or in any physically realized computer
00:16:27.940 | that depends on having known the last digit of pi.
00:16:30.940 | - Yes.
00:16:31.780 | - Which means there are parts of physics
00:16:33.420 | that are defined in such a way
00:16:34.740 | that cannot strictly be true
00:16:36.020 | because assuming that this could be true
00:16:37.900 | leads into contradictions.
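
(Aside: a small illustrative sketch of the point about pi — in the computational reading, pi is not a finished value but a procedure that yields ever better approximations for as long as you keep running it. The Leibniz series below is chosen only for simplicity, not efficiency.)

```python
# Illustrative sketch: pi as a process rather than a completed value.
# The generator never finishes; it only yields better approximations for
# as long as you run it -- there is no step at which a "last digit" appears.
from typing import Iterator

def pi_approximations() -> Iterator[float]:
    total, sign, denom = 0.0, 1.0, 1.0
    while True:                       # Leibniz series: 4 * (1 - 1/3 + 1/5 - ...)
        total += sign * 4.0 / denom
        yield total
        sign, denom = -sign, denom + 2.0

gen = pi_approximations()
for n in range(1, 100_001):
    value = next(gen)
    if n in (10, 1_000, 100_000):
        print(f"{n:>7} terms: {value:.6f}")   # slowly approaches 3.141593
```
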
00:16:39.340 | - So I think putting computation at the center
00:16:42.260 | of the worldview is actually the right way
00:16:44.700 | to think about it.
00:16:45.540 | - Yes.
00:16:46.380 | And Wittgenstein could see it.
00:16:47.940 | And Wittgenstein basically preempted
00:16:50.060 | the logicist program of AI
00:16:51.580 | that Minsky started later, like 30 years later.
00:16:54.700 | Turing was actually a pupil of Wittgenstein.
00:16:57.340 | - Really?
00:16:58.180 | I didn't know there's any connection between Turing and-
00:17:00.660 | - Wittgenstein even canceled some classes
00:17:02.220 | when Turing was not present
00:17:03.300 | because he thought it was not worth
00:17:04.340 | spending the time on with the others.
00:17:06.660 | - Interesting.
00:17:07.500 | - If you read the "Tractatus,"
00:17:08.860 | it's a very beautiful book,
00:17:10.220 | like basically one thought on 75 pages.
00:17:12.660 | It's very non-typical for philosophy
00:17:14.540 | because it doesn't have arguments in it
00:17:17.420 | and it doesn't have references in it.
00:17:19.060 | It's just one thought
00:17:20.220 | that is not intending to convince anybody.
00:17:22.820 | He says, "It's mostly for people
00:17:24.740 | "that had the same insight as me."
00:17:26.420 | Just spell it out.
00:17:28.140 | And this insight is,
00:17:29.700 | "There is a way in which mathematics
00:17:31.340 | "and philosophy ought to meet.
00:17:33.260 | "Mathematics tries to understand
00:17:34.820 | "the domain of all languages
00:17:36.060 | "by starting with those that are so formalizable
00:17:38.820 | "that you can prove all the properties
00:17:40.900 | "of the statements that you make.
00:17:42.620 | "But the price that you pay
00:17:44.140 | "is that your language is very, very simple,
00:17:45.900 | "so it's very hard to say something meaningful
00:17:47.860 | "in mathematics."
00:17:49.540 | And it looks complicated to people,
00:17:51.180 | but it's far less complicated
00:17:52.500 | than what our brain is casually doing all the time
00:17:54.660 | and it makes sense of reality.
00:17:56.820 | And philosophy is coming from the top,
00:17:59.300 | so it's mostly starting from natural languages
00:18:01.620 | with vaguely defined concepts.
00:18:03.540 | And the hope is that mathematics and philosophy
00:18:05.820 | can meet at some point.
00:18:07.460 | And Wittgenstein was trying to make them meet.
00:18:09.660 | And he already understood that, for instance,
00:18:11.260 | you could express everything with the Nand calculus,
00:18:13.420 | that you could reduce the entire logic to Nand gates
00:18:16.700 | as we do in our modern computers.
00:18:18.460 | So in some sense,
00:18:19.300 | he already understood Turing universality
00:18:21.180 | before Turing spelled it out.
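
(Aside: an illustrative sketch, not Wittgenstein's own notation, of the claim that all of logic can be reduced to NAND — each basic connective below is built from NAND alone and checked exhaustively.)

```python
# Illustrative sketch: every Boolean connective expressed with NAND alone,
# which is the sense in which "the entire logic" reduces to NAND gates.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return nand(nand(a, b), nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(nand(a, a), nand(b, b))

def xor_(a: bool, b: bool) -> bool:
    ab = nand(a, b)
    return nand(nand(a, ab), nand(b, ab))

# Exhaustively check the NAND constructions against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
print("all connectives recovered from NAND")
```
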
00:18:22.500 | I think when he wrote the "Tractatus,"
00:18:24.580 | he didn't understand yet that the idea
00:18:26.100 | was so important and significant.
00:18:28.060 | And I suspect then when Turing wrote it out,
00:18:30.500 | nobody cared that much.
00:18:32.140 | Turing was not that famous when he lived.
00:18:34.300 | It was mostly his work in decrypting the German codes
00:18:38.900 | that made him famous or gave him some notoriety.
00:18:41.420 | But this saint status that he has in computer science
00:18:44.140 | and AI right now is something
00:18:45.740 | that I think he only acquired later.
00:18:47.580 | - That's kind of interesting.
00:18:48.740 | Do you think of computation and computer science,
00:18:51.220 | and you kind of represent that to me,
00:18:52.820 | is maybe that's the modern day.
00:18:55.220 | You, in a sense, are the new philosopher
00:18:57.580 | by sort of the computer scientist
00:19:00.820 | who dares to ask the bigger questions
00:19:03.740 | that philosophy originally started,
00:19:05.500 | is the new philosopher?
00:19:07.860 | - Certainly not me, I think.
00:19:09.220 | I'm mostly still this child that grows up
00:19:12.220 | in a very beautiful valley
00:19:13.420 | and looks at the world from the outside
00:19:14.960 | and tries to understand what's going on.
00:19:16.860 | And my teachers tell me things
00:19:18.180 | and they largely don't make sense.
00:19:19.860 | So I have to make my own models.
00:19:21.180 | I have to discover the foundations
00:19:23.220 | of what the others are saying.
00:19:24.300 | I have to try to fix them, to be charitable.
00:19:26.620 | I try to understand what they must have thought originally
00:19:29.900 | or what their teachers or their teacher's teachers
00:19:31.740 | must have thought until everything got lost
00:19:33.480 | in translation and how to make sense
00:19:35.260 | of the reality that we are in.
00:19:36.900 | And whenever I have an original idea,
00:19:39.060 | I'm usually late to the party by say 400 years.
00:19:41.700 | And the only thing that's good
00:19:42.820 | is that the parties get smaller and smaller,
00:19:44.820 | the older I get and the more I explore.
00:19:47.420 | - The parties get smaller.
00:19:49.180 | - And more exclusive.
00:19:50.140 | - And more exclusive.
00:19:51.460 | So it seems like one of the key qualities
00:19:54.480 | of your upbringing was that you were not tethered,
00:19:57.300 | whether it's because of your parents
00:19:59.940 | or in general maybe something within your mind,
00:20:04.380 | some genetic material,
00:20:06.160 | that you were not tethered to the ideas
00:20:07.960 | of the general populace,
00:20:09.860 | which is actually a unique property.
00:20:12.040 | We're kind of, you know, the education system
00:20:15.260 | and whatever, not education system,
00:20:16.880 | just existing in this world forces
00:20:19.520 | a certain set of ideas onto you.
00:20:21.240 | Can you disentangle that?
00:20:23.960 | Why were you, why are you not so tethered?
00:20:26.980 | Even in your work today, you seem to not care
00:20:30.760 | about, perhaps, a best paper at NeurIPS, right?
00:20:35.760 | Being tethered to particular things
00:20:38.860 | that currently, today, in this year,
00:20:42.240 | people seem to value as a thing you put on your CV
00:20:44.960 | and resume.
00:20:45.960 | You're a little bit more outside of that world,
00:20:48.560 | outside of the world of ideas
00:20:49.920 | that people are especially focusing
00:20:51.200 | in the benchmarks of today, the things.
00:20:54.320 | What's, can you disentangle that?
00:20:56.240 | 'Cause I think that's inspiring.
00:20:57.640 | And if there were more people like that,
00:20:59.620 | we might be able to solve some of the bigger problems
00:21:01.720 | that sort of AI dreams to solve.
00:21:05.900 | - And there's a big danger in this
00:21:07.440 | because in a way you are expected
00:21:09.180 | to marry into an intellectual tradition
00:21:12.320 | and visit this tradition into a particular school.
00:21:15.220 | If everybody comes up with their own paradigms,
00:21:17.620 | the whole thing is not cumulative as an enterprise, right?
00:21:21.160 | So in some sense, you need a healthy balance.
00:21:23.120 | You need paradigmatic thinkers
00:21:24.920 | and you need people that work within given paradigms.
00:21:27.320 | Basically, scientists today define themselves
00:21:29.520 | largely by methods.
00:21:30.720 | And it's almost a disease that we think of a scientist as
00:21:34.120 | somebody who was convinced by their guidance counselor
00:21:38.240 | that they should join a particular discipline
00:21:40.520 | and then they find a good mentor
00:21:41.960 | to learn the right methods.
00:21:42.980 | And then they are lucky enough and privileged enough
00:21:45.600 | to join the right team.
00:21:46.620 | And then their name will show up on influential papers.
00:21:50.400 | But we also see that there are diminishing returns
00:21:52.960 | with this approach.
00:21:54.320 | And when our field, computer science and AI started,
00:21:58.160 | most of the people that joined this field
00:22:00.400 | had interesting opinions.
00:22:02.280 | And today's thinkers in AI
00:22:04.560 | either don't have interesting opinions at all,
00:22:06.400 | or these opinions are inconsequential
00:22:08.400 | for what they're actually doing.
00:22:09.400 | Because what they're doing is
00:22:10.720 | they apply the state-of-the-art methods
00:22:12.560 | with a small epsilon.
00:22:14.400 | And this is often a good idea
00:22:17.480 | if you think that this is the best way to make progress.
00:22:20.720 | And for me, it's first of all, very boring.
00:22:23.200 | If somebody else can do it, why should I do it?
00:22:25.680 | Right? - Yes.
00:22:26.880 | - If the current methods of machine learning
00:22:28.880 | lead to a strong AI, why should I be doing it?
00:22:31.680 | Right, I will just wait until they're done
00:22:33.480 | and wait until they do this on the beach.
00:22:36.240 | Or read interesting books or write some and have fun.
00:22:40.000 | But if you don't think that
00:22:41.760 | we are currently doing the right thing,
00:22:43.320 | if we are missing some perspectives,
00:22:46.460 | then it's required to think outside of the box.
00:22:50.320 | It's also required to understand the boxes.
00:22:53.460 | But it's necessary to understand what worked
00:22:57.120 | and what didn't work and for what reasons.
00:22:59.320 | So you have to be willing to ask new questions
00:23:02.080 | and design new methods whenever you want to answer them.
00:23:05.080 | And you have to be willing to dismiss the existing methods
00:23:08.480 | if you think that they're not going
00:23:09.640 | to yield the right answers.
00:23:11.160 | It's very bad career advice to do that.
00:23:13.180 | - So maybe to briefly stay for one more time
00:23:19.980 | in the early days, when would you say for you
00:23:22.760 | was the dream, before we dive into the discussions
00:23:26.880 | that we just almost started,
00:23:28.560 | when was the dream to understand
00:23:31.060 | or maybe to create human level intelligence born for you?
00:23:34.260 | - I think that you can see AI largely today
00:23:39.200 | as advanced information processing.
00:23:43.800 | If you would change the acronym of AI into that,
00:23:46.540 | most people in the field would be happy.
00:23:48.080 | It would not change anything what they're doing.
00:23:50.280 | We're automating statistics
00:23:51.720 | and many of the statistical models are more advanced
00:23:55.480 | than what statisticians had in the past.
00:23:57.800 | And it's pretty good work, it's very productive.
00:24:00.120 | And the other aspect of AI is philosophical project.
00:24:03.720 | And this philosophical project is very risky
00:24:06.400 | and very few people work on it
00:24:08.540 | and it's not clear if it succeeds.
00:24:10.560 | - So first of all, you keep throwing
00:24:13.800 | sort of a lot of really interesting ideas
00:24:15.500 | and I have to pick which ones we go with.
00:24:17.600 | But sort of, first of all,
00:24:20.440 | you use the term information processing,
00:24:22.940 | just information processing,
00:24:25.440 | as if it's the mere, it's the muck of existence,
00:24:30.440 | as if it's the epitome of,
00:24:33.200 | that the entirety of the universe
00:24:36.200 | might be information processing,
00:24:37.360 | that consciousness, the intelligence
00:24:38.560 | might be information processing.
00:24:39.540 | So that maybe you can comment on
00:24:41.720 | if the advanced information processing
00:24:44.800 | is a limiting kind of realm of ideas.
00:24:49.000 | And then the other one is,
00:24:50.080 | what do you mean by the philosophical project?
00:24:52.400 | - So I suspect that general intelligence
00:24:55.160 | is the result of trying to solve general problems.
00:24:59.400 | So intelligence, I think, is the ability to model.
00:25:02.320 | It's not necessarily goal-directed rationality
00:25:05.240 | or something, many intelligent people are bad at this.
00:25:07.960 | But it's the ability to be presented
00:25:10.880 | with a number of patterns
00:25:12.000 | and see a structure in those patterns
00:25:14.060 | and be able to predict the next set of patterns, right?
00:25:16.760 | To make sense of things.
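
(Aside: a deliberately tiny illustrative toy — my example, not Bach's — of "seeing structure in patterns and predicting the next set of patterns": a bigram model that learns which symbol tends to follow which and then extrapolates the sequence.)

```python
# Illustrative toy: learn the structure of a symbol sequence (which symbol
# follows which) and use that model to predict the next set of patterns.
from collections import Counter, defaultdict

def learn_bigrams(sequence):
    following = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        following[a][b] += 1
    return following

def predict_next(model, last, steps):
    out = []
    for _ in range(steps):
        last = model[last].most_common(1)[0][0]   # most frequent successor
        out.append(last)
    return out

observed = list("abcabcabcab")
model = learn_bigrams(observed)
print(predict_next(model, observed[-1], 6))       # -> ['c', 'a', 'b', 'c', 'a', 'b']
```
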
00:25:18.640 | And some problems are very general.
00:25:21.680 | Usually intelligence serves control,
00:25:23.680 | so you make these models for a particular purpose
00:25:25.680 | of interacting as an agent with the world
00:25:27.440 | and getting certain results.
00:25:28.940 | But the intelligence itself
00:25:31.320 | is in a sense instrumental to something,
00:25:32.980 | but by itself, it's just the ability to make models.
00:25:35.400 | And some of the problems are so general
00:25:37.300 | that the system that makes them
00:25:39.220 | needs to understand what itself is
00:25:41.120 | and how it relates to the environment.
00:25:43.300 | So as a child, for instance,
00:25:44.900 | you notice you do certain things
00:25:46.840 | despite you perceiving yourself as wanting different things.
00:25:50.500 | So you become aware of your own psychology.
00:25:53.340 | You become aware of the fact
00:25:54.600 | that you have complex structure in yourself
00:25:57.020 | and you need to model yourself,
00:25:58.320 | to reverse engineer yourself,
00:25:59.880 | to be able to predict how you will react
00:26:02.040 | to certain situations and how you deal with yourself
00:26:04.680 | in relationship to your environment.
00:26:06.360 | And this process, this project,
00:26:08.460 | if you reverse engineer yourself
00:26:10.280 | and your relationship to reality
00:26:11.580 | and the nature of a universe
00:26:12.920 | that can continue,
00:26:14.360 | if you go all the way,
00:26:15.660 | this is basically the project of AI,
00:26:17.780 | or you could say the project of AI
00:26:19.320 | is a very important component in it.
00:26:21.520 | The Turing test in a way is,
00:26:23.480 | you ask a system, what is intelligence?
00:26:26.280 | If that system is able to explain what it is,
00:26:29.080 | how it works,
00:26:31.760 | then you should assign it the property
00:26:33.960 | of being intelligent in this general sense.
00:26:35.760 | So the test that Turing was administering in a way,
00:26:38.960 | I don't think that he couldn't see it,
00:26:40.840 | but he didn't express it yet in the original 1950 paper,
00:26:44.200 | is that he was trying to find out
00:26:47.000 | whether he was generally intelligent,
00:26:48.920 | because in order to take this test,
00:26:50.360 | the rub is, of course,
00:26:51.440 | you need to be able to understand
00:26:52.680 | what that system is saying.
00:26:53.880 | And we don't yet know if we can build an AI.
00:26:56.040 | We don't yet know if we are generally intelligent.
00:26:58.360 | Basically, you win the Turing test by building an AI.
00:27:01.640 | - Yes.
00:27:02.720 | So in a sense, hidden within the Turing test
00:27:05.760 | is a kind of recursive test.
00:27:07.240 | - Yes, it's a test on us.
00:27:09.040 | The Turing test is basically a test of the conjecture
00:27:11.880 | whether people are intelligent enough
00:27:13.960 | to understand themselves.
00:27:15.280 | - Okay, but you also mentioned
00:27:17.560 | a little bit of a self-awareness,
00:27:19.240 | and then the project of AI.
00:27:21.000 | Do you think this kind of emergent self-awareness
00:27:24.040 | is one of the fundamental aspects of intelligence?
00:27:27.360 | So as opposed to goal-oriented, as you said,
00:27:30.760 | kind of puzzle-solving,
00:27:32.760 | is coming to grips with the idea
00:27:37.040 | that you're an agent in the world?
00:27:39.480 | And like--
00:27:40.320 | - I find that many highly intelligent people
00:27:41.280 | are not very self-aware, right?
00:27:43.400 | So self-awareness and intelligence are not the same thing.
00:27:46.680 | And you can also be self-aware
00:27:48.520 | if you have good priors especially,
00:27:50.280 | without being especially intelligent.
00:27:52.840 | So you don't need to be very good at solving puzzles
00:27:55.240 | if the system that you are already implements the solution.
00:27:58.960 | - But I do find intelligence,
00:28:00.680 | so you kind of mentioned children, right?
00:28:04.920 | Is that the fundamental project of AI
00:28:07.480 | is to create the learning system
00:28:10.600 | that's able to exist in the world?
00:28:13.240 | So you kind of drew a difference
00:28:14.440 | between self-awareness and intelligence.
00:28:18.320 | And yet you said that the self-awareness
00:28:21.520 | seems to be important for children.
00:28:23.680 | - So I call this ability to make sense of the world
00:28:27.000 | and your own place in it,
00:28:28.200 | so to make you able to understand
00:28:30.360 | what you're doing in this world, sentience.
00:28:32.400 | And I would distinguish sentience from intelligence
00:28:35.160 | because sentience is possessing certain classes of models.
00:28:39.680 | And intelligence is a way to get to these models
00:28:41.840 | if you don't already have them.
00:28:43.400 | - I see, so can you maybe pause a bit
00:28:49.240 | and try to answer the question
00:28:53.200 | that we just said we may not be able to answer?
00:28:55.920 | And it might be a recursive meta question
00:28:58.000 | of what is intelligence?
00:29:00.440 | - I think that intelligence
00:29:01.960 | is the ability to make models.
00:29:04.000 | - So models, I think it's useful as examples,
00:29:07.640 | very popular now, neural networks form representations
00:29:12.400 | of a large-scale dataset.
00:29:17.120 | They form models of those datasets.
00:29:20.160 | When you say models and look at today's neural networks,
00:29:23.640 | what are the difference of how you're thinking about
00:29:25.800 | what is intelligence in saying that intelligence
00:29:29.320 | is the process of making models?
00:29:31.600 | - There are two aspects to this question.
00:29:33.920 | One is the representation,
00:29:35.720 | is the representation adequate for the domain
00:29:37.840 | that we want to represent?
00:29:39.800 | And the other one is the type of the model
00:29:42.760 | that you arrive at adequate?
00:29:44.320 | So basically, are you modeling the correct domain?
00:29:47.760 | And I think in both of these cases,
00:29:50.960 | modern AI is lacking still.
00:29:52.600 | And I think that I'm not saying anything new here.
00:29:54.920 | I'm not criticizing the field.
00:29:56.320 | Most of the people that design our paradigms
00:30:00.160 | are aware of that.
00:30:01.480 | And so one aspect that we are missing is unified learning.
00:30:05.240 | When we learn, we at some point discover
00:30:07.480 | that everything that we sense is part of the same object,
00:30:11.680 | which means we learn it all into one model.
00:30:13.320 | And we call this model the universe.
00:30:15.000 | So the experience of the world that we are embedded in
00:30:17.160 | is not a secret direct wire to physical reality.
00:30:20.320 | Physical reality is a weird quantum graph
00:30:22.360 | that we can never experience or get access to.
00:30:24.920 | But it has this property
00:30:26.800 | that it can create certain patterns
00:30:28.400 | at our systemic interface to the world.
00:30:30.440 | And we make sense of these patterns
00:30:31.960 | and the relationship between the patterns that we discover
00:30:34.600 | is what we call the physical universe.
00:30:36.600 | So at some point in our development as a nervous system,
00:30:41.600 | we discover that everything that we relate to in the world
00:30:45.840 | can be mapped to a region
00:30:47.000 | in the same three-dimensional space by and large.
00:30:50.240 | We now know in physics that this is not quite true.
00:30:52.960 | The world is not actually three-dimensional,
00:30:54.800 | but the world that we are entangled with
00:30:56.640 | at the level of which we are entangled with
00:30:58.400 | is largely a flat three-dimensional space.
00:31:01.680 | And so this is the model that our brain
00:31:03.840 | is intuitively making.
00:31:05.080 | And this is, I think, what gave rise to this intuition
00:31:08.000 | of res extensa, of this material world, this material domain.
00:31:11.400 | It's one of the mental domains,
00:31:12.840 | but it's just the class of all models
00:31:14.320 | that relate to this environment,
00:31:16.640 | this three-dimensional physics engine
00:31:18.280 | in which we are embedded.
00:31:19.640 | - Physics engine in which we are embedded, I love that.
00:31:21.960 | - Right?
00:31:22.800 | - Just slowly pause.
00:31:24.120 | So the quantum graph, I think you called it,
00:31:29.120 | which is the real world which you can never get access to.
00:31:34.560 | There's a bunch of questions I wanna sort of
00:31:36.680 | disentangle that, but maybe one useful one,
00:31:40.680 | one of your recent talks I looked at,
00:31:42.440 | can you just describe the basics?
00:31:43.920 | Can you talk about what is dualism, what is idealism,
00:31:47.880 | what is materialism, what is functionalism,
00:31:50.040 | and what connects with you most?
00:31:51.840 | In terms of, 'cause you just mentioned,
00:31:53.120 | there's a reality we don't have access to.
00:31:55.200 | Okay, what does that even mean?
00:31:57.480 | And why don't we get access to it?
00:32:00.320 | Aren't we part of that reality?
00:32:01.560 | Why can't we access it?
00:32:03.520 | - So the particular trajectory that mostly exists
00:32:05.960 | in the West is the result of our indoctrination
00:32:09.460 | by a cult for 2,000 years.
00:32:11.320 | - A cult, which one?
00:32:12.160 | - Yes, the Catholic cult mostly.
00:32:14.280 | And for better or worse, it has created or defined
00:32:18.160 | many of the modes of interaction that we have
00:32:20.040 | that has created this society,
00:32:22.000 | but it has also in some sense scarred our rationality.
00:32:26.680 | And the intuition that exists, if you would translate
00:32:31.480 | the mythology of the Catholic church into the modern world
00:32:34.960 | is that the world in which you and me interact
00:32:37.680 | is something like a multiplayer role-playing adventure.
00:32:41.480 | And the money and the objects that we have in this world,
00:32:44.160 | this is all not real.
00:32:45.200 | Or as Eastern philosophers would say, it's Maya.
00:32:49.200 | It's just stuff that appears to be meaningful
00:32:52.600 | and this embedding in this meaning,
00:32:54.880 | if you believe in it, is samsara.
00:32:57.120 | It's basically the identification with the needs
00:33:00.160 | of the mundane, secular, everyday existence.
00:33:03.080 | And the Catholics also introduced the notion
00:33:06.720 | of higher meaning, the sacred.
00:33:08.800 | And this existed before, but eventually the natural shape
00:33:12.080 | of God is the platonic form of the civilization
00:33:15.140 | that you're a part of.
00:33:15.980 | It's basically the superorganism that is formed
00:33:17.720 | by the individuals as an intentional agent.
00:33:20.800 | And basically the Catholics used relatively crude mythology
00:33:25.560 | to implement software on the minds of people
00:33:27.760 | and get the software synchronized
00:33:29.320 | to make them walk in lockstep.
00:33:30.760 | - To get the software synchronized.
00:33:31.600 | - To basically get this God online
00:33:34.320 | and to make it efficient and effective.
00:33:37.320 | And I think God technically is just a self
00:33:40.320 | that spends multiple brains as opposed to your and myself,
00:33:43.120 | which mostly exists just on one brain.
00:33:46.080 | So in some sense, you can construct a self functionally
00:33:48.760 | as a function that is implemented by brains
00:33:51.160 | that exists across brains.
00:33:53.360 | And this is a God with a small g.
00:33:55.160 | - That's one of the, if you look,
00:33:57.080 | Yuval Harari kind of talking about,
00:33:59.140 | this is one of the nice features of our brains,
00:34:02.360 | it seems to, that we can all download
00:34:04.480 | the same piece of software, like God in this case,
00:34:06.640 | and kind of share it.
00:34:08.120 | - Yeah, so basically you give everybody a spec
00:34:10.200 | and the mathematical constraints
00:34:12.480 | that are intrinsic to information processing,
00:34:16.280 | make sure that given the same spec,
00:34:18.240 | you come up with a compatible structure.
00:34:20.360 | - Okay, so there's this space of ideas that we all share
00:34:23.560 | and we think that's kind of the mind.
00:34:25.960 | But that's separate from,
00:34:27.840 | the idea is, from Christianity, from religion,
00:34:32.940 | is that there's a separate thing between the mind--
00:34:35.400 | - There is a real world.
00:34:36.240 | And this real world is the world in which God exists.
00:34:39.800 | God is the coder of the multiplayer adventure,
00:34:42.120 | so to speak, and we are all players in this game.
00:34:45.640 | - And that's dualism, you would say.
00:34:48.480 | - But the dualist aspect is because the mental realm
00:34:52.120 | exists in a different implementation than the physical realm
00:34:55.520 | and the mental realm is real.
00:34:57.400 | And a lot of people have this intuition
00:34:59.440 | that there is this real room
00:35:00.680 | in which you and me talk and speak right now,
00:35:03.320 | then comes a layer of physics and abstract rules and so on,
00:35:07.680 | and then comes another real room where our souls are
00:35:10.240 | and our true form isn't a thing
00:35:12.440 | that gives us phenomenal experience.
00:35:13.800 | And this, of course, is a very confused notion
00:35:16.600 | that you would get.
00:35:17.880 | And it's basically, it's the result of connecting
00:35:20.960 | materialism and idealism in the wrong way.
00:35:24.920 | - So, okay, I apologize, but I think it's really helpful
00:35:27.880 | if we just try to define, try to define terms.
00:35:31.520 | Like, what is dualism, what is idealism,
00:35:33.240 | what is materialism for people that don't know?
00:35:35.120 | - So the idea of dualism in our cultural tradition
00:35:38.160 | is that there are two substances,
00:35:39.720 | a mental substance and a physical substance,
00:35:42.560 | and they interact by different rules.
00:35:45.000 | And the physical world is basically causally closed
00:35:48.160 | and is built on a low-level causal structure.
00:35:51.600 | So there's basically a bottom level
00:35:53.400 | that is causally closed that's entirely mechanical.
00:35:56.280 | And mechanical in the widest sense, so it's computational.
00:35:59.160 | There's basically a physical world
00:36:00.600 | in which information flows around,
00:36:02.440 | and physics describes the laws
00:36:04.080 | of how information flows around in this world.
00:36:06.680 | - Would you compare it to like a computer
00:36:08.560 | where you have hardware and software?
00:36:10.560 | - A computer is a generalization
00:36:12.040 | of information flowing around.
00:36:13.720 | Basically, what Turing discovered,
00:36:16.000 | that there is a universal principle,
00:36:17.840 | you can define this universal machine
00:36:20.400 | that is able to perform all the computations.
00:36:23.120 | So all these machines have the same power.
00:36:25.320 | This means that you can always define
00:36:27.200 | a translation between them,
00:36:28.520 | as long as they have unlimited memory,
00:36:30.400 | to be able to perform each other's computations.
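
(Aside: an illustrative sketch of the universality point — one computational system performing another's computations. The toy Turing-machine interpreter below, including its rule format, is my own construction for illustration.)

```python
# Illustrative sketch: one system (Python) performing another system's
# computations (a Turing machine), given enough memory -- the sense in
# which universal machines can always translate between each other.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start"):
    """rules maps (state, symbol) -> (new_symbol, move, new_state); halts on 'halt'."""
    cells = defaultdict(lambda: "_", enumerate(tape))   # unbounded tape, blank = "_"
    head = 0
    while state != "halt":
        new_symbol, move, state = rules[(state, cells[head])]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1))

# A tiny machine that flips every bit on the tape, then halts at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_bits, "10110"))   # -> "01001_"
```
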
00:36:34.440 | - So would you then say that materialism
00:36:36.400 | is this whole world is just the hardware,
00:36:38.920 | and idealism is this whole world is just the software?
00:36:42.280 | - Not quite.
00:36:43.120 | I think that most idealists
00:36:44.200 | don't have a notion of software yet,
00:36:46.120 | because software also comes down to information processing.
00:36:49.720 | So what you notice is the only thing
00:36:51.920 | that is real to you and me is this experiential world
00:36:54.600 | in which things matter, in which things have taste,
00:36:56.800 | in which things have color, phenomenal content, and so on.
00:36:59.840 | - Oh, there you are bringing up consciousness, okay.
00:37:02.360 | - And this is distinct from the physical world,
00:37:04.320 | in which things have values only in an abstract sense.
00:37:08.720 | And you only look at cold patterns moving around.
00:37:13.080 | So how does anything feel like something?
00:37:15.640 | And this connection between the two things
00:37:17.480 | is very puzzling to a lot of people,
00:37:19.320 | and of course, to many philosophers.
00:37:20.640 | So idealism starts out with the notion
00:37:22.560 | that mind is primary,
00:37:23.480 | materialism, things that matter, is primary.
00:37:26.320 | And so for the idealist,
00:37:28.960 | the material patterns that we see playing out
00:37:32.160 | are part of the dream that the mind is dreaming.
00:37:34.760 | And we exist in a mind
00:37:37.280 | on a higher plane of existence, if you want.
00:37:39.560 | And for the materialist,
00:37:42.440 | there is only this material thing,
00:37:44.240 | and that generates some models,
00:37:46.160 | and we are the result of these models.
00:37:49.360 | And in some sense, I don't think that we should understand,
00:37:52.400 | if you understand it properly,
00:37:53.680 | materialism and idealism as a dichotomy,
00:37:56.960 | but as two different aspects of the same thing.
00:37:59.920 | So the weird thing is we don't exist in the physical world.
00:38:02.280 | We do exist inside of a story that the brain tells itself.
00:38:05.460 | - Okay, let me, my information processing,
00:38:11.880 | take that in.
00:38:15.220 | We don't exist in the physical world,
00:38:16.640 | we exist in the narrative.
00:38:18.200 | - Basically, a brain cannot feel anything.
00:38:20.400 | A neuron cannot feel anything.
00:38:21.720 | They're physical things.
00:38:22.560 | Physical systems are unable to experience anything.
00:38:25.280 | But it would be very useful for the brain
00:38:27.160 | or for the organism to know what it would be like
00:38:29.480 | to be a person and to feel something.
00:38:31.680 | So the brain creates a simulacrum of such a person
00:38:35.080 | that it uses to model the interactions of the person.
00:38:37.360 | It's the best model of what that brain,
00:38:39.760 | this organism, thinks it is
00:38:41.280 | in relationship to its environment.
00:38:43.000 | So it creates that model.
00:38:44.120 | It's a story, a multimedia novel
00:38:45.640 | that the brain is continuously writing and updating.
00:38:47.960 | - But you also kind of said that,
00:38:50.160 | you said that we kind of exist in that story.
00:38:53.880 | - In that story, yes.
00:38:55.000 | - Yeah, good point.
00:38:55.840 | - What is real in any of this?
00:38:59.480 | So like, there's a, again, these terms are,
00:39:04.160 | you kind of said there's a quantum graph.
00:39:06.360 | I mean, what is this whole thing running on then?
00:39:09.240 | Is the story, and is it completely, fundamentally impossible
00:39:13.800 | to get access to it?
00:39:15.080 | Because isn't the story supposed to,
00:39:17.960 | isn't the brain in something,
00:39:21.600 | in existing in some kind of context?
00:39:24.680 | - So what we can identify as computer scientists,
00:39:27.560 | we can engineer systems and test our theories this way
00:39:31.360 | that may have the necessary and sufficient properties
00:39:35.040 | to produce the phenomena that we are observing,
00:39:37.440 | which is there is a self in a virtual world
00:39:40.320 | that is generated in somebody's neocortex
00:39:42.560 | that is contained in the skull of this primate here.
00:39:46.520 | And when I point at this,
00:39:47.640 | this indexicality is of course wrong.
00:39:50.120 | But I do create something that is likely
00:39:52.480 | to give rise to patterns on your retina
00:39:55.680 | that allow you to interpret what I'm saying, right?
00:39:58.200 | But we both know that the world that you and me are seeing
00:40:00.980 | is not the real physical world.
00:40:03.080 | What we are seeing is a virtual reality generated
00:40:05.600 | in your brain to explain the patterns on your retina.
00:40:08.120 | - How close is it to the real world?
00:40:09.720 | That's kind of the question.
00:40:11.640 | Is it, when you have people like Donald Hoffman,
00:40:16.520 | let's say that you're really far away,
00:40:19.040 | the thing we're seeing, you and I now,
00:40:21.320 | that interface we have is very far away from anything.
00:40:25.080 | We don't even have anything close to the sense
00:40:27.500 | of what the real world is.
00:40:28.680 | Or is it a very surface piece of architecture?
00:40:32.160 | - Imagine you look at the Mandelbrot fractal, right?
00:41:34.600 | This famous thing that Benoit Mandelbrot discovered.
00:40:38.040 | If you see an overall shape in there, right?
00:40:41.400 | But you know, if you truly understand it,
00:40:43.160 | you know it's two lines of code.
00:40:45.180 | It's basically a series that is being tested
00:40:49.440 | for complex numbers in the complex number plane
00:40:52.320 | for every point.
00:40:53.160 | And for those where the series is diverging,
00:40:56.400 | you paint this black.
00:40:59.160 | And where it's converging, you don't.
00:41:01.660 | And you get the intermediate colors
00:41:05.360 | by checking how far it diverges.
00:41:09.080 | - Yes.
00:41:09.980 | - This gives you this shape of this fractal.
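As a minimal sketch of the escape-time idea just described (the bounds, resolution, and character palette below are arbitrary illustration choices; by the common rendering convention the points that never diverge are drawn darkest):

```python
# Minimal escape-time rendering of the Mandelbrot set.
# For each point c in the complex plane, iterate z -> z*z + c and count
# how many steps it takes for |z| to exceed 2 (i.e., for the series to diverge).
def escape_time(c, max_iter=40):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n            # diverged after n steps
    return max_iter             # never escaped: treated as "inside" the set

# ASCII rendering: points that never escape are drawn darkest,
# the rest are graded by how quickly the series diverges.
palette = " .:-=+*#%@"
for row in range(24):
    y = 1.2 - row * 0.1
    line = ""
    for col in range(78):
        x = -2.1 + col * 0.04
        n = escape_time(complex(x, y))
        line += palette[min(n * len(palette) // 41, len(palette) - 1)]
    print(line)
```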
00:41:12.320 | But imagine you live inside of this fractal
00:41:14.100 | and you don't have access to where you are in the fractal.
00:41:17.040 | Or you have not discovered the generator function even.
00:41:20.440 | Right, so what you see is,
00:41:21.680 | all I can see right now is a spiral.
00:41:23.680 | And the spiral moves a little bit to the right.
00:41:25.460 | Is this an accurate model of reality?
00:41:27.120 | Yes, it is, right?
00:41:28.120 | It is an adequate description.
00:41:30.560 | You know that there is actually no spiral
00:41:32.960 | in the Mandelbrot fractal.
00:41:33.800 | It only appears like this to an observer
00:41:36.380 | that is interpreting things as a two-dimensional space
00:41:39.220 | and then defines certain irregularities in there
00:41:41.800 | at a certain scale that it currently observes.
00:41:43.940 | Because if you zoom in, the spiral might disappear
00:41:46.040 | and turn out to be something different
00:41:47.360 | at a different resolution, right?
00:41:48.680 | - Yes.
00:41:49.520 | - So at this level, you have the spiral.
00:41:50.720 | And then you discover the spiral moves to the right
00:41:52.640 | and at some point it disappears.
00:41:54.060 | So you have a singularity.
00:41:55.380 | At this point, your model is no longer valid.
00:41:57.680 | You cannot predict what happens beyond the singularity.
00:42:00.480 | But you can observe again and you will see
00:42:02.480 | it hit another spiral and at this point it disappeared.
00:42:05.240 | So we now have a second-order law.
00:42:07.280 | And if you make 30 layers of these laws,
00:42:09.460 | then you have a description of the world
00:42:11.260 | that is similar to the one that we come up with
00:42:13.240 | when we describe the reality around us.
00:42:14.920 | It's reasonably predictive.
00:42:16.480 | It does not cut to the core of it.
00:42:18.560 | It doesn't explain how it's being generated,
00:42:20.520 | how it actually works.
00:42:21.840 | But it's relatively good to explain the universe
00:42:24.520 | that we are entangled with.
00:42:25.360 | - But you don't think the tools of computer science,
00:42:27.240 | the tools of physics could step outside,
00:42:30.900 | see the whole drawing, and get at the basic mechanism
00:42:33.880 | of how the pattern, the spirals, is generated?
00:42:37.360 | - Imagine you would find yourself
00:42:39.000 | embedded into a Mandelbrot fractal
00:42:40.400 | and you try to figure out what works
00:42:41.720 | and you somehow have a Turing machine.
00:42:44.040 | There's enough memory to think.
00:42:46.160 | And as a result, you come to this idea,
00:42:49.440 | it must be some kind of automaton.
00:42:51.360 | And maybe you just enumerate all the possible automata
00:42:53.720 | until you get to the one that produces your reality.
00:42:56.480 | So you can identify necessary and sufficient conditions.
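To make that thought experiment concrete, here is a toy sketch (entirely an illustration, not something from the conversation): enumerate the 256 elementary cellular automaton rules and keep those whose dynamics reproduce an observed pattern.

```python
# Toy version of "enumerate all possible automata until you find one that
# produces your reality": search the 256 elementary cellular automaton rules
# for those that reproduce an observed row of cells.

def step(cells, rule):
    """One update of an elementary CA with wraparound boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

# Pretend this is the "observation": one step of rule 110 on a known state.
initial = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0]
observed = step(initial, 110)

# Brute-force search over the whole rule space.
candidates = [r for r in range(256) if step(initial, r) == observed]
print("rules consistent with the observation:", candidates)
```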
00:42:59.440 | For instance, we discover that mathematics itself
00:43:01.840 | is the domain of all languages.
00:43:04.160 | And then we see that most of the domains of mathematics
00:43:06.800 | that we have discovered are, in some sense,
00:43:09.240 | describing the same fractals.
00:43:10.520 | This is what category theory is obsessed about,
00:43:12.680 | that you can map these different domains to each other.
00:43:15.120 | So there are not that many fractals.
00:43:16.960 | And some of these have interesting structure
00:43:19.400 | and symmetry breaks.
00:43:20.840 | And so you can discover what region of this global fractal
00:43:25.680 | you might be embedded in from first principles.
00:43:28.240 | But the only way you can get there is from first principles.
00:43:30.560 | So basically, your understanding of the universe
00:43:33.020 | has to start with automata and then number theory
00:43:35.320 | and then spaces and so on.
00:43:37.060 | - Yeah, I think, like, Stephen Wolfram still dreams
00:43:39.440 | that he'll be able to arrive at the fundamental rules
00:43:43.440 | of the cellular automata, or the generalization of which
00:43:46.680 | is behind our universe.
00:43:48.060 | You've said on this topic,
00:43:52.120 | you said in a recent conversation that, quote,
00:43:55.400 | "Some people think that a simulation can't be conscious
00:43:58.620 | "and only a physical system can.
00:44:00.660 | "But they got it completely backward.
00:44:02.080 | "A physical system cannot be conscious.
00:44:04.880 | "Only a simulation can be conscious.
00:44:06.860 | "Consciousness is a simulated property
00:44:08.840 | "of the simulated self."
00:44:10.200 | Just like you said, the mind is kind of,
00:44:13.720 | we'll call it story, narrative.
00:44:16.400 | There's a simulation,
00:44:17.560 | so our mind is essentially a simulation?
00:44:20.080 | - Usually, I try to use the terminology
00:44:23.400 | so that the mind is basically the principles
00:44:25.720 | that produce the simulation.
00:44:26.920 | It's the software that is implemented by your brain.
00:44:29.880 | And the mind is creating both the universe that we are in
00:44:33.600 | and the self, the idea of a person
00:44:36.480 | that is on the other side of attention
00:44:38.200 | and is embedded in this world.
00:44:40.200 | - Why is that important, that idea of a self?
00:44:43.040 | Why is that an important feature in the simulation?
00:44:46.480 | - It's basically a result of the purpose
00:44:49.320 | that the mind has.
00:44:50.540 | It's a tool for modeling, right?
00:44:52.080 | We are not actually monkeys.
00:44:53.120 | We are side effects of the regulation needs of monkeys.
00:44:56.800 | And what the monkey has to regulate
00:44:59.640 | is the relationship of an organism to an outside world
00:45:03.880 | that is in large part also consisting of other organisms.
00:45:08.160 | And as a result, it basically has regulation targets
00:45:11.160 | that it tries to get to.
00:45:12.560 | These regulation targets start with priors.
00:45:14.520 | They're basically like unconditional reflexes
00:45:16.800 | that we are more or less born with.
00:45:18.280 | And then we can reverse engineer them
00:45:20.040 | to make them more consistent.
00:45:21.660 | And then we get more detailed models
00:45:23.060 | about how the world works and how to interact with it.
00:45:25.940 | And so these priors that you commit to
00:45:28.200 | are largely target values
00:45:30.160 | that our needs should approach, set points.
00:45:32.960 | And this deviation to the set point
00:45:34.600 | creates some urge, some tension.
00:45:37.360 | And we find ourselves living inside of feedback loops,
00:45:40.360 | right?
00:45:41.200 | Consciousness emerges over dimensions of disagreements
00:45:43.520 | with the universe.
00:45:44.720 | Things that you care,
00:45:46.520 | things are not the way they should be,
00:45:48.280 | but you need to regulate.
00:45:49.480 | And so in some sense, the self itself
00:45:51.680 | is the result of all the identifications
00:45:53.580 | that you're having.
00:45:54.420 | An identification is a regulation target
00:45:56.600 | that you're committing to.
00:45:57.720 | It's a dimension that you care about,
00:45:59.560 | you think is important.
00:46:01.160 | And this is also what locks you in.
00:46:02.520 | If you let go of these commitments,
00:46:05.300 | of these identifications, you get free.
00:46:07.980 | There's nothing that you have to do anymore.
00:46:10.340 | And if you let go of all of them,
00:46:11.580 | you're completely free and you can enter Nirvana
00:46:13.500 | because you're done.
00:46:14.500 | (Lex laughing)
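A toy sketch of the set-point idea, loosely in the spirit of the Psi-theory motivational model Bach works with; the need names, levels, and weights below are invented for illustration:

```python
# Toy sketch of regulation targets: each need has a set point, the deviation
# from the set point creates an "urge", and behavior follows the strongest
# urge. (Need names, levels, and weights are invented for illustration.)
needs = {
    # name: (current level, set point, weight of caring)
    "food":        (0.5, 1.0, 1.0),
    "affiliation": (0.75, 1.0, 0.5),
    "competence":  (0.25, 1.0, 0.75),
}

def urges(needs):
    """Urge = how much you care about a dimension * how far you are from its set point."""
    return {name: weight * abs(target - level)
            for name, (level, target, weight) in needs.items()}

u = urges(needs)
print(u)                              # {'food': 0.5, 'affiliation': 0.125, 'competence': 0.5625}
print("act on:", max(u, key=u.get))   # the largest deviation you are committed to
```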
00:46:15.340 | - And actually, this is a good time to pause and say,
00:46:18.020 | thank you to a friend of mine, Gustav Söderström,
00:46:21.620 | who introduced me to your work.
00:46:23.100 | I want to give him a shout out.
00:46:25.460 | He's a brilliant guy.
00:46:26.380 | And I think the AI community is actually quite amazing.
00:46:29.380 | And Gustav is a good representative of that.
00:46:31.340 | You are as well.
00:46:32.180 | So I'm glad, first of all,
00:46:33.860 | I'm glad the internet exists and YouTube exists
00:46:35.580 | where I can watch your talks
00:46:38.300 | and then get to your book and study your writing
00:46:41.280 | and think about, you know, that's amazing.
00:46:43.740 | Okay, but you've kind of described
00:46:46.900 | in sort of this emergent phenomenon of consciousness
00:46:49.460 | from the simulation.
00:46:50.600 | So what about the hard problem of consciousness?
00:46:54.200 | Can you just linger on it?
00:46:56.580 | Like, why does it still feel?
00:47:01.860 | Like, I understand you're kind of,
00:47:03.660 | the self is an important part of the simulation,
00:47:06.140 | but why does the simulation feel like something?
00:47:10.060 | - So if you look at a book by, say, George R.R. Martin
00:47:14.260 | where the characters have plausible psychology
00:47:16.780 | and they stand on a hill
00:47:18.320 | because they want to conquer the city below the hill
00:47:21.100 | and they're determined to do it,
00:47:21.100 | and they look at the color of the sky
00:47:22.440 | and they are apprehensive and feel empowered
00:47:25.660 | and all these things.
00:47:26.500 | Why do they have these emotions?
00:47:27.460 | It's because it's written into the story, right?
00:47:29.740 | And it's written into the story
00:47:30.660 | because there's an adequate model of the person
00:47:32.740 | that predicts what they're going to do next.
00:47:35.500 | And the same thing is true for us.
00:47:37.580 | So it's basically a story that our brain is writing.
00:47:39.900 | It's not written in words.
00:47:41.060 | It's written in perceptual content,
00:47:44.140 | basically multimedia content.
00:47:46.020 | And it's a model of what the person would feel
00:47:48.900 | if it existed.
00:47:50.740 | So it's a virtual person.
00:47:52.700 | And you and me happen to be this virtual person.
00:47:54.940 | So this virtual person gets access to the language center
00:47:58.180 | and talks about the sky being blue.
00:48:00.580 | And this is us.
00:48:01.980 | - But hold on a second.
00:48:02.940 | Do I exist in your simulation?
00:48:05.860 | - You do exist in an almost similar way as me.
00:48:09.580 | So there are internal states that are less accessible
00:48:14.580 | for me that you have and so on.
00:48:18.420 | And my model might not be completely adequate.
00:48:20.900 | There are also things that I might perceive about you
00:48:22.660 | that you don't perceive.
00:48:24.120 | But in some sense, both you and me are some puppets,
00:48:26.980 | two puppets that enact a play in my mind.
00:48:30.380 | And I identify with one of them
00:48:32.260 | because I can control one of the puppets directly.
00:48:35.060 | And with the other one, I can create things in between.
00:48:38.780 | So for instance, we can go on an interaction
00:48:40.700 | that even leads to a coupling to a feedback loop.
00:48:43.200 | So we can think things together in a certain way
00:48:45.900 | or feel things together.
00:48:47.260 | But this coupling is itself not a physical phenomenon.
00:48:50.100 | It's entirely a software phenomenon.
00:48:51.900 | It's the result of two different implementations
00:48:53.980 | interacting with each other.
00:48:55.100 | - So that's interesting.
00:48:56.060 | So are you suggesting, like the way you think about it,
00:49:00.660 | is the entirety of existence a simulation
00:49:03.780 | and we're kind of each mind is a little sub simulation
00:49:07.860 | that like, why don't you,
00:49:11.700 | why doesn't your mind have access
00:49:14.620 | to my mind's full state?
00:49:18.740 | Like--
00:49:19.860 | - For the same reason that my mind
00:49:21.340 | doesn't have access to its own full state.
00:49:23.900 | - So what, I mean--
00:49:26.820 | - There is no trick involved.
00:49:28.340 | So basically when I know something about myself,
00:49:30.500 | it's because I made a model.
00:49:32.060 | So one part of your brain is tasked with modeling
00:49:34.140 | what other parts of your brain are doing.
00:49:36.220 | - Yes, but there seems to be an incredible consistency
00:49:39.680 | about this world in the physical sense,
00:49:42.260 | that there's repeatable experiments and so on.
00:49:44.700 | How does that fit into our silly
00:49:47.780 | descendant of apes simulation of the world?
00:49:50.580 | So why is it so repeatable?
00:49:51.580 | Why is so much of it repeatable,
00:49:53.220 | though not everything?
00:49:54.540 | There's a lot of fundamental physics experiments
00:49:57.340 | that are repeatable for a long time,
00:50:00.580 | all over the place and so on.
00:50:03.180 | Laws of physics, how does that fit in?
00:50:05.220 | - It seems that the parts of the world
00:50:07.060 | that are not deterministic are not long lived.
00:50:10.480 | So if you build a system, any kind of automaton,
00:50:14.700 | so if you build simulations of something,
00:50:17.220 | you'll notice that the phenomena that endure
00:50:20.540 | are those that give rise to stable dynamics.
00:50:23.700 | So basically, if you see anything that is complex
00:50:25.820 | in the world, it's the result of usually of some control,
00:50:28.720 | of some feedback that keeps it stable
00:50:30.580 | around certain attractors.
00:50:31.940 | And the things that are not stable,
00:50:33.560 | that don't give rise to certain harmonic patterns
00:50:35.900 | and so on, they tend to get weeded out over time.
00:50:39.140 | So if we are in a region of the universe
00:50:42.580 | that sustains complexity, which is required
00:50:45.100 | to implement minds like ours,
00:50:47.940 | this is going to be a region of the universe
00:50:49.780 | that is very tightly controlled and controllable.
00:50:52.640 | So it's going to have lots of interesting symmetries
00:50:55.400 | and also symmetry breaks that allow
00:50:57.820 | for the creation of structure.
00:50:59.580 | - But they exist where?
00:51:02.140 | So there's such an interesting idea
00:51:04.060 | that our mind is a simulation that's constructing
00:51:05.820 | the narrative.
00:51:07.100 | My question is, just to try to understand
00:51:11.500 | how that fits with the entirety of the universe.
00:51:15.460 | You're saying that there's a region of this universe
00:51:18.100 | that allows enough complexity to create creatures like us,
00:51:20.820 | but what's the connection between the brain,
00:51:25.320 | the mind, and the broader universe?
00:51:28.140 | Which comes first?
00:51:29.060 | Which is more fundamental?
00:51:30.540 | Is the mind the starting point, the universe is emergent?
00:51:34.140 | Is the universe the starting point, the minds are emergent?
00:51:37.820 | - I think quite clearly the latter.
00:51:39.780 | That's at least a much easier explanation
00:51:41.780 | because it allows us to make causal models.
00:51:44.060 | And I don't see any way to construct an inverse causality.
00:51:47.620 | - So what happens when you die to your mind simulation?
00:51:50.640 | - My implementation ceases.
00:51:53.420 | So basically the thing that implements myself
00:51:56.220 | will no longer be present.
00:51:57.780 | Which means if I'm not implemented
00:51:59.220 | on the minds of other people,
00:52:00.300 | the thing that I identify with.
00:52:02.000 | The weird thing is I don't actually have an identity
00:52:06.500 | beyond the identity that I construct.
00:52:08.620 | If I was the Dalai Lama,
00:52:10.540 | he identifies as a form of government.
00:52:13.240 | So basically the Dalai Lama gets reborn,
00:52:15.420 | not because he's confused,
00:52:17.300 | but because he is not identifying as a human being.
00:52:21.860 | He runs on a human being.
00:52:23.140 | He's basically a governmental software
00:52:25.620 | that is instantiated anew in every new generation.
00:52:28.700 | So his advisors will pick someone
00:52:30.420 | who does this in the next generation.
00:52:32.220 | So if you identify with this, you are no longer human
00:52:35.700 | and you don't die in the sense,
00:52:37.500 | what dies is only the body of the human that you run on.
00:52:41.060 | To kill the Dalai Lama, you would have to kill his tradition.
00:52:44.140 | And if we look at ourselves,
00:52:46.260 | we realize that we are to a small part like this,
00:52:48.620 | most of us.
00:52:49.460 | So for instance, if you have children,
00:52:50.380 | you realize something lives on in them.
00:52:53.220 | Or if you spark an idea in the world, something lives on.
00:52:55.860 | Or if you identify with the society around you.
00:52:58.700 | Because you are a part of that.
00:53:00.300 | You're not just this human being.
00:53:01.780 | - Yeah, so in a sense, you are kind of like a Dalai Lama
00:53:05.060 | in the sense that you, Joscha Bach,
00:53:07.980 | is just a collection of ideas.
00:53:09.620 | So you have this operating system
00:53:12.020 | on which a bunch of ideas live and interact.
00:53:14.380 | And then once you die, they kind of,
00:53:16.380 | some of them jump off the ship.
00:53:19.900 | - Put it the other way.
00:53:20.820 | Identity is a software state.
00:53:22.380 | It's a construction.
00:53:23.540 | It's not physically real.
00:53:24.820 | Identity is not a physical concept.
00:53:28.180 | It's basically a representation of different objects
00:53:30.560 | on the same world line.
00:53:32.300 | - But identity lives and dies.
00:53:36.660 | Are you attached?
00:53:37.500 | What's the fundamental thing?
00:53:39.960 | Is it the ideas that come together to form identity?
00:53:43.700 | Or is each individual identity
00:53:45.060 | actually a fundamental thing?
00:53:46.540 | - It's a representation that you can get agency over
00:53:48.900 | if you care.
00:53:49.740 | So basically you can choose what you identify with
00:53:52.180 | if you want to.
00:53:53.300 | - No, but it just seems, if the mind is not real,
00:53:58.300 | that the birth and death is not a crucial part of it.
00:54:04.900 | Well, maybe I'm silly.
00:54:10.200 | Maybe I'm attached to this whole biological organism,
00:54:14.520 | but it seems that the physical,
00:54:17.120 | being a physical object in this world
00:54:20.080 | is an important aspect of birth and death.
00:54:24.040 | Like it feels like it has to be physical to die.
00:54:26.680 | It feels like simulations don't have to die.
00:54:30.200 | - The physics that we experience is not the real physics.
00:54:32.760 | There is no color and sound in the real world.
00:54:35.400 | Color and sound are types of representations that you get
00:54:38.800 | if you want to model reality with oscillators.
00:54:41.480 | So colors and sound in some sense have octaves.
00:54:44.600 | And it's because they are represented properly
00:54:46.440 | with oscillators.
00:54:47.520 | So that's why colors form a circle of hues.
00:54:50.840 | And colors have harmonics, sounds have harmonics
00:54:53.360 | as a result of synchronizing oscillators in the brain.
00:54:57.120 | So the world that we subjectively interact with
00:54:59.680 | is fundamentally the result
00:55:01.960 | of the representation mechanisms in our brain.
00:55:04.400 | They are mathematically to some degree universal.
00:55:06.420 | There are certain regularities that you can discover
00:55:08.840 | in the patterns and not others.
00:55:10.520 | But the patterns that we get, this is not the real world.
00:55:13.160 | The world that we interact with
00:55:14.320 | is always made of too many parts to count.
00:55:16.920 | So when you look at this table and so on,
00:55:19.600 | it's consisting of so many molecules and atoms
00:55:22.880 | that you cannot count them.
00:55:23.800 | So you only look at the aggregate dynamics,
00:55:26.040 | at limit dynamics.
00:55:27.580 | If you had almost infinitely many particles,
00:55:31.400 | what would be the dynamics of the table?
00:55:33.040 | And this is roughly what you get.
00:55:34.280 | So geometry that we are interacting with
00:55:36.680 | is the result of discovering those operators
00:55:38.780 | that work in the limit,
00:55:40.620 | that you get by building an infinite series that converges.
00:55:44.000 | For those parts where it converges, it's geometry.
00:55:46.520 | For those parts where it doesn't converge, it's chaos.
00:55:49.140 | - Right, and then so all of that is filtered
00:55:51.180 | through the consciousness that's emergent in our narrative.
00:55:56.180 | So the consciousness gives it color,
00:55:58.600 | gives it feeling, gives it flavor.
00:56:00.800 | - So I think the feeling, flavor, and so on
00:56:04.340 | is given by the relationship that a feature has
00:56:06.500 | to all the other features.
00:56:08.100 | It's basically a giant relational graph
00:56:10.140 | that is our subjective universe.
00:56:12.320 | The color is given by those aspects of the representation
00:56:15.720 | or this experiential color where you care about,
00:56:18.940 | where you have identifications,
00:56:20.560 | where something means something,
00:56:21.900 | where you are the inside of a feedback loop.
00:56:23.620 | And the dimensions of caring are basically dimensions
00:56:27.020 | of this motivational system that we emerge over.
00:56:29.740 | - The meaning of the relations, the graph.
00:56:33.480 | Can you elaborate on that a little bit?
00:56:35.020 | Like where does the, maybe we can even step back
00:56:38.340 | and ask the question of what is consciousness
00:56:41.180 | to be sort of more systematic.
00:56:42.740 | What do you, how do you think about consciousness?
00:56:47.340 | - I think that consciousness is largely a model
00:56:49.580 | of the contents of your attention.
00:56:51.180 | It's a mechanism that has evolved
00:56:53.380 | for a certain type of learning.
00:56:55.920 | At the moment, our machine learning systems
00:56:58.100 | largely work by building chains of weighted sums
00:57:02.020 | of real numbers with some non-linearity.
00:57:05.260 | And you will learn by piping an error signal
00:57:09.060 | through these different chain layers
00:57:11.900 | and adjusting the weights in these weighted sums.
00:57:15.940 | And you can approximate most polynomials with this
00:57:19.620 | if you have enough training data.
00:57:21.300 | But the price is you need to change a lot of these weights.
00:57:24.740 | Basically the error is piped backwards into the system
00:57:28.020 | until it accumulates at certain junctures in the network
00:57:31.260 | and everything else evens out statistically.
00:57:33.780 | And only at these junctures,
00:57:34.900 | this is where you had the actual error in the network,
00:57:37.020 | you make the change there.
00:57:38.020 | This is a very slow process
00:57:40.060 | and our brains don't have enough time for that
00:57:41.880 | because we don't get old enough to play go
00:57:44.020 | the way that our machines learn to play go.
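A minimal sketch of those "chains of weighted sums with a non-linearity" and of the error being piped backwards, in plain numpy; the toy task, layer sizes, and learning rate are arbitrary illustration choices:

```python
import numpy as np

# A tiny two-layer network: chains of weighted sums with a non-linearity,
# trained by piping the error backwards and adjusting the weights.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # toy inputs
y = (X[:, :1] * X[:, 1:2] > 0).astype(float)   # toy target: do the inputs share a sign?

W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.5

for step in range(2000):
    # forward pass: weighted sums + tanh non-linearity, then a sigmoid output
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    # backward pass: the error signal flows back through the chain
    grad_out = (p - y) / len(X)                # gradient at the output logit
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1 - h ** 2)    # piped back through the tanh
    grad_W1 = X.T @ grad_h

    # small changes accumulate at the junctures responsible for the error
    W2 -= lr * grad_W2; b2 -= lr * grad_out.sum(0)
    W1 -= lr * grad_W1; b1 -= lr * grad_h.sum(0)

print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```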
00:57:46.520 | So instead what we do is an attention-based learning.
00:57:48.820 | We pinpoint the probable region in the network
00:57:51.500 | where we can make an improvement.
00:57:54.340 | And then we store this binding state
00:57:57.380 | together with the expected outcome in a protocol.
00:58:00.060 | This ability to make indexed memories
00:58:02.060 | for the purpose of learning
00:58:03.100 | to revisit these commitments later,
00:58:06.180 | this requires a memory of the contents of our attention.
00:58:10.300 | Another aspect is when I construct my reality,
00:58:12.660 | I make mistakes.
00:58:13.680 | So I see things that turn out to be reflections
00:58:16.100 | or shadows and so on,
00:58:17.780 | which means I have to be able to point out
00:58:19.620 | which features of my perception
00:58:21.440 | gave rise to a present construction of reality.
00:58:25.340 | So the system needs to pay attention
00:58:27.260 | to the features that are currently in its focus.
00:58:31.300 | And it also needs to pay attention
00:58:33.140 | to whether it pays attention itself,
00:58:34.740 | in part because the attentional system gets trained
00:58:36.900 | with the same mechanism, so it's reflexive,
00:58:39.140 | but also in part because your attention lapses
00:58:41.180 | if you don't pay attention to the attention itself.
00:58:44.340 | So it's the thing that I'm currently seeing,
00:58:45.900 | just a dream that my brain has spun off
00:58:48.700 | into some kind of daydream,
00:58:50.300 | or am I still paying attention to my percept?
00:58:52.540 | So you have to periodically go back
00:58:54.780 | and see whether you're still paying attention.
00:58:56.820 | And if you have this loop and you make it tight enough
00:58:59.180 | between the system becoming aware
00:59:01.060 | of the contents of its attention
00:59:02.540 | and the fact that it's paying attention itself
00:59:04.700 | and makes attention the object of its attention,
00:59:06.960 | I think this is the loop over which we wake up.
00:59:09.140 | - So there's this, (laughs)
00:59:11.820 | so there's this attentional mechanism
00:59:13.440 | that's somehow self-referential,
00:59:14.900 | that's fundamental to what consciousness is.
00:59:17.540 | So just to ask you a question,
00:59:20.620 | I don't know how much you're familiar
00:59:21.940 | with the recent breakthroughs
00:59:23.580 | in natural language processing,
00:59:24.860 | they use attentional mechanism,
00:59:26.460 | they use something called transformers
00:59:28.580 | to learn patterns and sentences
00:59:33.260 | by allowing the network to focus its attention
00:59:37.320 | to particular parts of the sentence for each individual word.
00:59:40.140 | So like parametrize and make it learnable
00:59:43.020 | the dynamics of a sentence
00:59:44.980 | by having like a little window into the sentence.
00:59:49.520 | Do you think that's like a little step towards
00:59:53.740 | that eventually will take us to the attentional mechanisms
00:59:58.140 | from which consciousness can emerge?
01:00:00.420 | - Not quite, I think it models only one aspect of attention.
01:00:03.740 | In the early days of automated language translation,
01:00:07.660 | there was an example that I found particularly funny
01:00:10.700 | where somebody tried to translate a text
01:00:12.540 | from English into German
01:00:14.020 | and it was, a bat broke the window.
01:00:17.140 | And the translation in German was,
01:00:21.140 | (speaking in foreign language)
01:00:25.340 | So to translate back into English,
01:00:27.100 | a bat, this flying mammal, broke the window
01:00:30.460 | with a baseball bat.
01:00:32.180 | And it seemed to be the most similar to this program
01:00:35.580 | because it somehow maximized the possibility
01:00:39.180 | of translating the concept bat into German
01:00:41.340 | in the same sentence.
01:00:42.740 | And this is a mistake that the transformer model
01:00:45.260 | is not doing because it's tracking identity.
01:00:48.020 | And the attentional mechanism in the transformer model
01:00:50.220 | is basically putting its finger on individual concepts
01:00:53.380 | and make sure that these concepts pop up later in the text
01:00:57.660 | and tracks basically the individuals through the text.
01:01:00.980 | And it's why the system can learn things
01:01:03.940 | that other systems couldn't before it,
01:01:05.620 | which makes it, for instance, possible to write a text
01:01:08.220 | where it talks about the scientist
01:01:09.700 | and the scientist has a name and has a pronoun
01:01:12.100 | and it gets a consistent story about that thing.
01:01:15.340 | What it does not do, it doesn't fully integrate this.
01:01:17.620 | So it is meaningful only in part; at some point,
01:01:19.660 | it loses track of this context.
01:01:21.780 | It does not yet understand that everything that it says
01:01:24.300 | has to refer to the same universe.
01:01:26.020 | And this is where this thing falls apart.
01:01:28.620 | But the attention in the transformer model
01:01:31.140 | does not go beyond tracking identity.
01:01:33.180 | And tracking identity is an important part of attention,
01:01:36.020 | but it's a different, very specific attentional mechanism.
01:01:39.700 | And it's not the one that gives rise
01:01:41.020 | to the type of consciousness that we have.
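A minimal numpy sketch of that "finger on individual concepts": scaled dot-product attention with a causal mask, where a pronoun's query lands mostly on the earlier token it co-refers with. The tokens and vectors below are hand-made for illustration, not taken from any trained model.

```python
import numpy as np

# Toy scaled dot-product attention: the query of a pronoun "puts its finger"
# on the earlier token it co-refers with. Vectors are hand-made for
# illustration, not taken from a trained model.
tokens = ["the", "scientist", "said", "she", "was", "ready"]
d = 4
# Hand-crafted embeddings: "scientist" and "she" share a direction.
E = np.array([
    [0.1, 0.0, 0.0, 0.0],   # the
    [0.0, 1.0, 0.2, 0.0],   # scientist
    [0.0, 0.0, 0.0, 0.3],   # said
    [0.0, 0.9, 0.1, 0.0],   # she  (similar to "scientist")
    [0.2, 0.0, 0.0, 0.1],   # was
    [0.0, 0.0, 0.4, 0.0],   # ready
])

# For this sketch, use identity projections for queries, keys and values.
Q, K, V = E, E, E
scores = Q @ K.T / np.sqrt(d)

# Causal mask: a token may only attend to itself and earlier tokens.
mask = np.triu(np.ones_like(scores), k=1).astype(bool)
scores = np.where(mask, -1e9, scores)

weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attended = weights @ V   # each token's new vector mixes in what it attended to

she = tokens.index("she")
for t, w in zip(tokens[:she + 1], weights[she, :she + 1]):
    print(f"{t:>10s}  {w:.2f}")   # the largest weight falls on "scientist"
```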
01:01:43.060 | - Just to linger on, what do you mean by identity
01:01:45.580 | in the context of language?
01:01:47.300 | - So when you talk about language,
01:01:49.540 | you have different words that can refer
01:01:51.500 | to the same concept.
01:01:52.700 | - Got it.
01:01:53.540 | - And in the sense that-- - So space of concepts.
01:01:55.100 | So-- - Yes.
01:01:55.940 | And it can also be in a nominal sense
01:01:59.140 | or in an indexical sense that you say,
01:02:01.740 | this word does not only refer to this class of objects,
01:02:05.660 | but it refers to a definite object,
01:02:07.340 | to some kind of agent that waves their way through the story
01:02:11.580 | and is only referred by different ways in the language.
01:02:15.660 | So the language is basically a projection
01:02:17.860 | from a conceptual representation,
01:02:20.540 | from a scene that is evolving
01:02:22.700 | into a discrete string of symbols.
01:02:24.780 | And what the transformer is able to do,
01:02:26.820 | it learns aspects of this projection mechanism
01:02:30.420 | that other models couldn't learn.
01:02:32.460 | - So have you ever seen an artificial intelligence
01:02:34.860 | or any kind of construction idea
01:02:37.220 | that allows for, unlike neural networks,
01:02:39.820 | or perhaps within neural networks,
01:02:41.540 | that's able to form something where the space of concepts
01:02:46.540 | continues to be integrated?
01:02:48.260 | So what you're describing, building a knowledge base,
01:02:52.020 | building this consistent larger and larger sets of ideas
01:02:56.060 | that would then allow for a deeper understanding.
01:02:58.540 | - Wittgenstein thought that we can build everything
01:03:01.700 | from language, from basically
01:03:03.660 | a logical grammatical construct.
01:03:05.740 | And I think to some degree,
01:03:07.540 | this was also what Minsky believed.
01:03:09.780 | So that's why he focused so much on common sense,
01:03:11.980 | reasoning, and so on.
01:03:13.100 | And a project that was inspired by him was Cyc.
01:03:16.700 | - That's still going on.
01:03:19.740 | - Yes, of course, ideas don't die, only people die.
01:03:23.980 | - And that's true, but--
01:03:27.300 | - And also, Cyc is a productive project.
01:03:29.140 | It's just probably not one that is going to converge
01:03:32.500 | to general intelligence.
01:03:33.620 | The thing that Wittgenstein couldn't solve,
01:03:35.660 | and he looked at this in his book
01:03:37.620 | at the end of his life, "Philosophical Investigations,"
01:03:40.300 | was the notion of images.
01:03:42.220 | So images play an important role in Tractatus.
01:03:44.500 | The Tractatus is an attempt to basically
01:03:46.460 | turn philosophy into a logical programming language,
01:03:48.500 | to design a logical language in which you can do
01:03:51.100 | actual philosophy that's rich enough for doing this.
01:03:53.860 | And the difficulty was to deal with perceptual content.
01:03:57.460 | And eventually, I think he decided
01:04:00.180 | that he was not able to solve it.
01:04:02.300 | And I think this preempted the failure
01:04:04.780 | of the logicist program in AI.
01:04:06.540 | And the solution, as we see it today,
01:04:08.300 | is we need more general function approximation.
01:04:10.980 | There are functions, geometric functions,
01:04:13.220 | that we learn to approximate that cannot be
01:04:15.460 | efficiently expressed and computed
01:04:17.140 | in a grammatical language.
01:04:18.420 | We can, of course, build automata
01:04:19.900 | that go via number theory and so on,
01:04:22.380 | to learn an algebra, and then compute
01:04:24.580 | an approximation of this geometry.
01:04:26.780 | But to equate language and geometry
01:04:29.860 | is not an efficient way to think about it.
01:04:32.740 | - So functional, well, you kind of just said
01:04:35.020 | that neural networks are sort of,
01:04:37.300 | the approach that neural networks takes
01:04:38.860 | is actually more general than what can be expressed
01:04:43.300 | through language.
01:04:45.340 | - Yes, so what can be efficiently expressed
01:04:48.700 | through language at the data rates
01:04:50.220 | at which we process grammatical language.
01:04:52.540 | - Okay, so you don't think languages,
01:04:55.540 | so you disagree with Wittgenstein
01:04:57.100 | that language is not fundamental to--
01:04:59.020 | - I agree with Wittgenstein.
01:05:00.700 | I just agree with the late Wittgenstein.
01:05:03.820 | And I also agree with the beauty of the early Wittgenstein.
01:05:07.740 | I think that the Tractatus itself
01:05:09.660 | is probably the most beautiful philosophical text
01:05:11.700 | that was written in the 20th century.
01:05:14.020 | - But language is not fundamental to cognition
01:05:17.060 | and intelligence and consciousness.
01:05:18.580 | - So I think that language is a particular way,
01:05:21.580 | or the natural language that we're using
01:05:23.380 | is a particular level of abstraction
01:05:25.380 | that we use to communicate with each other.
01:05:27.540 | But the languages in which we express geometry
01:05:31.500 | are not grammatical languages in the same sense.
01:05:33.940 | So they work slightly different.
01:05:35.460 | They're more general expressions of functions.
01:05:38.180 | And I think the general nature of a model
01:05:40.980 | is you have a bunch of parameters.
01:05:42.780 | These have a range, these are the variances of the world.
01:05:46.900 | And you have relationships between them
01:05:48.540 | which are constraints, which say,
01:05:50.180 | if certain parameters have these values,
01:05:52.580 | then other parameters have to have the following values.
01:05:55.860 | And this is a very early insight in computer science.
01:05:59.820 | And I think some of the earliest formulations
01:06:02.300 | is the Boltzmann machine.
01:06:03.780 | And the problem with the Boltzmann machine
01:06:05.180 | is that while it has a measure of whether it's good,
01:06:07.620 | this is basically the energy on the system,
01:06:09.620 | the amount of tension that you have left in the constraints
01:06:11.740 | where the constraints don't quite match.
01:06:14.540 | It's very difficult to,
01:06:15.980 | despite having this global measure, to train it.
01:06:18.580 | Because as soon as you add more than trivially
01:06:21.260 | a few elements, parameters into the system,
01:06:23.940 | it's very difficult to get it settled
01:06:25.900 | in the right architecture.
01:06:27.780 | And so the solution that Hinton and Sejnowski found
01:06:32.220 | was to use a restricted Boltzmann machine,
01:06:35.020 | which drops the lateral links,
01:06:36.940 | the links within each layer of the Boltzmann machine,
01:06:38.980 | and basically only has a visible and a hidden layer.
01:06:41.620 | But this limits the expressivity of the Boltzmann machine.
01:06:44.700 | So now he builds a network
01:06:46.500 | of these small, primitive Boltzmann machines.
01:06:48.180 | And in some sense, you can see
01:06:50.180 | almost continuous development from this
01:06:51.940 | to the deep learning models that we're using today,
01:06:54.780 | even though we don't use Boltzmann machines at this point.
01:06:57.460 | But the idea of the Boltzmann machine
01:06:58.740 | is you take this model,
01:06:59.580 | you clamp some of the values to perception,
01:07:01.860 | and this forces the entire machine
01:07:03.620 | to go into a state that is compatible
01:07:05.300 | with the states that you currently perceive.
01:07:07.140 | And this state is your model of the world.
01:07:09.220 | Right, so I think it's a very general way
01:07:12.580 | of thinking about models,
01:07:14.100 | but we have to use a different approach to make it work.
01:07:17.860 | And this is, we have to find different networks
01:07:20.940 | that train the Boltzmann machine.
01:07:22.180 | So the mechanism that trains the Boltzmann machine
01:07:24.700 | and the mechanism that makes the Boltzmann machine
01:07:26.620 | settle into its state are distinct
01:07:29.220 | from the constrained architecture
01:07:30.580 | of the Boltzmann machine itself.
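A minimal restricted Boltzmann machine sketch in numpy, showing the clamping idea: visible units are clamped to a "perception", hidden units settle into a compatible state, and a one-step contrastive-divergence update trains the constraints. Sizes, data, and learning rate are made up for illustration.

```python
import numpy as np

# Minimal restricted Boltzmann machine: visible units get clamped to a
# "perception", hidden units settle into a compatible state, and a one-step
# contrastive-divergence update relaxes the remaining constraint violations.
rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two made-up binary "perceptions" the machine should find unsurprising.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

lr = 0.1
for epoch in range(500):
    for v0 in data:
        # clamp the visible units, sample a compatible hidden state
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # one reconstruction step (the model's "daydream")
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # CD-1: move the weights toward the data, away from the daydream
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)

# After training, reconstructions of the data should resemble the data.
p_h = sigmoid(data @ W + b_h)
print(np.round(sigmoid(p_h @ W.T + b_v), 2))
```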
01:07:33.420 | - The kind of mechanism that we wanna develop,
01:07:36.060 | you're saying.
01:07:36.900 | - Yes, so the direction in which I think
01:07:39.180 | our research is going to go is going to,
01:07:42.020 | for instance, what you notice in perception
01:07:44.300 | is our perceptual models of the world
01:07:46.580 | are not probabilistic but possibilistic,
01:07:48.860 | which means-- - What's that mean?
01:07:49.860 | - You should be able to perceive things
01:07:51.420 | that are improbable but possible.
01:07:53.180 | Right, a perceptual state is valid,
01:07:56.540 | not if it's probable, but if it's possible,
01:07:58.720 | if it's coherent.
01:08:00.540 | So if you see a tiger coming after you,
01:08:02.260 | you should be able to see this even if it's unlikely.
01:08:04.860 | And the probability is necessary
01:08:07.900 | for convergence of the model.
01:08:09.180 | So given the state of possibilities
01:08:12.060 | that is very, very large,
01:08:13.720 | and a set of perceptual features,
01:08:15.700 | how should you change the states of the model
01:08:19.100 | to get it to converge with your perception?
01:08:21.340 | - Oh, but the space of ideas that are coherent
01:08:26.340 | with the context that you're sensing
01:08:29.860 | is perhaps not as large.
01:08:31.620 | I mean, that's perhaps pretty small.
01:08:35.060 | - The degree of coherence that you need to achieve
01:08:37.700 | depends, of course, how deep your models go.
01:08:40.180 | That is, for instance, politics is very simple
01:08:42.180 | when you know very little
01:08:43.820 | about game theory and human nature.
01:08:45.220 | So the younger you are,
01:08:46.540 | the more obvious it is how politics should work, right?
01:08:49.420 | - Yes.
01:08:50.260 | - Because you get a coherent aesthetics
01:08:52.100 | from relatively few inputs.
01:08:54.140 | And the more layers you model,
01:08:55.980 | the more layers you model reality,
01:08:58.000 | the harder it gets to satisfy all the constraints.
01:09:01.800 | - So, you know, the current neural networks
01:09:04.520 | are fundamentally supervised learning system
01:09:07.000 | with a feed-forward neural network,
01:09:08.400 | you use back propagation to learn.
01:09:10.720 | What's your intuition about what kind of mechanisms
01:09:13.360 | might we move towards to improve the learning procedure?
01:09:18.360 | - I think one big aspect is going to be meta-learning,
01:09:21.280 | and architecture search starts in this direction.
01:09:24.460 | In some sense, the first wave of AI,
01:09:26.240 | classical AI worked by identifying a problem
01:09:28.720 | and a possible solution and implementing the solution,
01:09:31.080 | right, a program that plays chess.
01:09:32.960 | And right now we are in the second wave of AI.
01:09:35.600 | So instead of writing the algorithm
01:09:37.400 | that implements the solution,
01:09:38.960 | we write an algorithm that automatically searches
01:09:41.800 | for an algorithm that implements the solution.
01:09:44.480 | So the learning system, in some sense,
01:09:46.200 | is an algorithm that itself discovers the algorithm
01:09:49.440 | that solves the problem, like Go.
01:09:51.000 | Go is too hard to implement the solution by hand,
01:09:54.000 | but we can implement an algorithm
01:09:55.480 | that finds the solution.
01:09:56.720 | - Yeah, so that-- - So now,
01:09:57.560 | let's move to the third stage, right?
01:09:59.200 | The third stage would be meta-learning.
01:10:01.000 | Find an algorithm that discovers a learning algorithm
01:10:03.960 | for the given domain.
01:10:05.240 | Our brain is probably not a learning system,
01:10:07.020 | but a meta-learning system.
01:10:08.800 | This is one way of looking at what we are doing.
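A toy illustration of the second and third stages (everything here is invented for illustration): the inner learner is an algorithm that finds a solution by gradient descent, and the meta-learner searches over the learning algorithm itself, here reduced to its step size.

```python
# Stage 2: an algorithm that discovers the solution (gradient descent
# fitting a weight w so that w*x approximates y = 3x).
def inner_learner(learning_rate, steps=50):
    data = [(x, 3.0 * x) for x in range(1, 6)]
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x      # derivative of the squared error
            w -= learning_rate * grad
    loss = sum((w * x - y) ** 2 for x, y in data)
    return w, loss

# Stage 3: an algorithm that discovers the learning algorithm,
# here reduced to searching over its step size.
def meta_learner(candidate_rates):
    scored = [(rate, inner_learner(rate)[1]) for rate in candidate_rates]
    return min(scored, key=lambda pair: pair[1])

best_rate, best_loss = meta_learner([0.02, 0.01, 0.005, 0.002, 0.001])
print("best learning rule found:", best_rate, "with loss", best_loss)
```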
01:10:11.880 | There is another way if you look at the way
01:10:13.800 | our brain is, for instance, implemented.
01:10:15.440 | There is no central control
01:10:17.160 | that tells all the neurons how to wire up.
01:10:19.120 | - Yes. - Instead, every neuron
01:10:20.760 | is an individual reinforcement learning agent.
01:10:23.280 | Every neuron is a single-celled organism
01:10:25.200 | that is quite complicated,
01:10:26.360 | and in some sense, quite motivated to get fed.
01:10:28.960 | - Yes. - And it gets fed
01:10:29.920 | if it fires on average at the right time.
01:10:32.560 | - Yes. - And the right time
01:10:35.480 | depends on the context that the neuron exists in,
01:10:39.240 | which is the electrical and chemical environment
01:10:41.360 | that it has.
01:10:42.240 | So it basically has to learn a function
01:11:45.000 | over its environment that tells it
01:10:46.720 | when to fire to get fed.
01:10:48.440 | Or if you see it as a reinforcement learning agent,
01:10:50.520 | every neuron is, in some sense,
01:10:52.280 | making a hypothesis when it sends a signal
01:10:54.720 | and tries to pipe a signal through the universe
01:10:57.160 | and tries to get positive feedback for it.
01:10:59.360 | And the entire thing is set up in such a way
01:11:01.600 | that it's robustly self-organizing into a brain,
01:11:04.960 | which means you start out with different neuron types
01:11:07.360 | that have different priors on which hypothesis to test
01:11:10.640 | on how to get its reward,
01:11:12.240 | and you put them into different concentrations
01:11:14.400 | in a certain spatial alignment,
01:11:16.440 | and then you entrain it in a particular order,
01:11:19.880 | and as a result, you get a well-organized brain.
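A toy sketch of "every neuron is a reinforcement learning agent that gets fed when it fires at the right time": each unit adjusts only its own firing probability from a shared reward signal, with no central controller. The task, sizes, and update rule (a REINFORCE-style rule with a global reward) are invented for illustration.

```python
import numpy as np

# Toy sketch: each "neuron" is its own reinforcement learning agent.
# It fires stochastically and gets "fed" (a shared reward) when the
# population as a whole produced the right output for the context.
rng = np.random.default_rng(2)
n_neurons = 8
target = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # the "right time to fire"
firing_prob = np.full(n_neurons, 0.5)          # each agent's own policy
lr = 0.2

for trial in range(3000):
    fired = (rng.random(n_neurons) < firing_prob).astype(float)
    # global feedback: how well the population matched the context
    reward = (fired == target).mean() * 2 - 1   # scaled into [-1, 1]
    # REINFORCE-style update: each neuron nudges its own policy toward
    # what it just did if the reward was good, away from it if it was bad.
    firing_prob += lr * reward * (fired - firing_prob)
    firing_prob = np.clip(firing_prob, 0.05, 0.95)

print(np.round(firing_prob, 2))   # tends to be high where target is 1, low where it is 0
```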
01:11:22.200 | - Yeah, so, okay, so the brain is a meta-learning system
01:11:25.280 | with a bunch of reinforcement learning agents.
01:11:30.600 | And what, I think you said, but just to clarify,
01:11:35.600 | where do the, there's no centralized government
01:11:40.000 | that tells you, here's a loss function,
01:11:42.600 | here's a loss function, here's a loss function.
01:11:44.800 | Like, what, who is, who says what's the objective--
01:11:48.680 | - There are also governments which impose loss functions
01:11:51.760 | on different parts of the brain.
01:11:53.080 | So we have differential attention.
01:11:54.500 | Some areas in your brain get especially rewarded
01:11:56.920 | when you look at faces.
01:11:58.160 | If you don't have that, you will get prosopagnosia,
01:12:00.520 | which basically, the inability to tell people apart
01:12:03.640 | by their faces.
01:12:06.000 | - And the reason that happens is because it was,
01:12:08.200 | it had an evolutionary advantage.
01:12:09.560 | So like, evolution comes into play here about--
01:12:11.600 | - But it's basically an extraordinary attention
01:12:13.680 | that we have for faces.
01:12:14.920 | I don't think that people with prosopagnosia
01:12:16.880 | have a perceived defective brain.
01:12:19.200 | The brain just has an average attention for faces.
01:12:21.440 | So people with prosopagnosia don't look at faces
01:12:24.120 | more than they look at cups.
01:12:25.780 | So the level at which they resolve the geometry of faces
01:12:28.440 | is not higher than the one for cups.
01:12:31.320 | And people that don't have prosopagnosia
01:12:33.120 | look obsessively at faces, right?
01:12:35.920 | For you and me, it's impossible to move through a crowd
01:12:38.280 | without scanning the faces.
01:12:40.560 | And as a result, we make insanely detailed models of faces
01:12:43.200 | that allow us to discern mental states of people.
01:12:45.560 | - So obviously we don't know 99% of the details
01:12:49.980 | of this meta-learning system that's our mind, okay.
01:12:52.540 | But still we took a leap from something much dumber
01:12:57.720 | to that through the evolutionary process.
01:13:01.560 | Can you, first of all, maybe say how hard,
01:13:04.440 | how big of a leap is that, from multi-cell organisms
01:13:08.480 | and our ape ancestors, to our brain?
01:13:13.480 | And is there something we can think about,
01:13:17.800 | as we start to think about how to engineer intelligence,
01:13:21.860 | is there something we can learn from evolution?
01:13:24.920 | - In some sense, life exists because of the market
01:13:29.020 | opportunity of controlled chemical reactions.
01:13:31.740 | We compete with dumb chemical reactions
01:13:34.060 | and we win in some areas against this dumb combustion
01:13:37.420 | because we can harness those entropy gradients
01:13:39.980 | where you need to add a little bit of energy
01:13:41.660 | in a specific way to harvest more energy.
01:13:43.660 | - So we out-competed combustion.
01:13:45.740 | - Yes, in many regions we do.
01:13:47.020 | We try very hard because when we are
01:13:49.260 | in direct competition, we lose, right?
01:13:51.340 | So because the combustion is going to close
01:13:54.780 | the entropy gradients much faster than we can run.
01:13:56.860 | - Yes, got it.
01:13:57.700 | That's quite a compelling notion, yep.
01:14:00.780 | - Yeah, so basically we do this because every cell
01:14:02.820 | has a Turing machine built into it.
01:14:05.020 | It's like literally a read/write head on a tape.
01:14:07.820 | (both laughing)
01:14:08.860 | And so everything that's more complicated than a molecule
01:14:12.100 | that just is a vortex around attractors,
01:14:16.580 | that needs a Turing machine for its regulation.
01:14:19.220 | And then you bind cells together
01:14:21.380 | and you get next level organization,
01:14:22.980 | an organism where the cells together
01:14:24.900 | implement some kind of software.
01:14:26.940 | And for me, a very interesting discovery
01:14:30.500 | in the last year was the word spirit
01:14:32.260 | because I realized that what spirit actually means
01:14:35.060 | is an operating system for an autonomous robot.
01:14:37.900 | And when the word was invented, people needed this word,
01:14:40.820 | but they didn't have robots that they built themselves.
01:14:43.300 | Yet the only autonomous robots that were known
01:14:45.420 | were people, animals, plants, ecosystems, cities, and so on.
01:14:48.980 | And they all had spirits.
01:14:50.620 | And it makes sense to say that the plant
01:14:52.500 | has an operating system, right?
01:14:53.660 | If you pinch the plant in one area,
01:14:55.980 | then there's going to have repercussions
01:14:57.420 | throughout the plant.
01:14:58.660 | Everything in the plant is in some sense connected
01:15:00.900 | into some global aesthetics like in other organisms.
01:15:03.700 | An organism is not a collection of cells,
01:15:05.860 | it's a function that tells cells how to behave.
01:15:09.300 | And this function is not implemented
01:15:11.580 | as some kind of supernatural thing,
01:15:14.340 | like some morphogenetic field.
01:15:16.220 | It is an emergent result of the interactions
01:15:18.820 | of each cell with each other cell, right?
01:15:20.860 | - Oh my God, so what you're saying is
01:15:24.160 | the organism is a function that tells what--
01:15:29.160 | - Tells the cells how to interact.
01:15:30.300 | - Tells what to do, and the function emerges
01:15:35.020 | from the interaction of the cells.
01:15:37.260 | - Yes.
01:15:38.740 | So it's basically a description of what the plant is doing
01:15:41.920 | in terms of macro states.
01:15:43.580 | - Yeah.
01:15:44.420 | - And the micro states, the physical implementation
01:15:46.740 | are too many of them to describe them.
01:15:48.580 | So the software that we use to describe
01:15:51.260 | what the plant is doing, the spirit of the plant
01:15:53.540 | is the software, the operating system of the plant, right?
01:15:56.460 | This is a way in which we, the observers,
01:15:59.620 | make sense of the plant.
01:16:00.980 | - Yes.
01:16:01.820 | - And the same is true for people.
01:16:02.900 | So people have spirits, which is their operating system
01:16:05.500 | in a way, right?
01:16:06.340 | And there's aspects of that operating system
01:16:08.260 | that relate to how your body functions
01:16:10.640 | and others how you socially interact,
01:16:12.300 | how you interact with yourself and so on.
01:16:14.420 | And we make models of that spirit.
01:16:16.760 | And we think it's a loaded term
01:16:19.340 | because it's from the pre-scientific age.
01:16:21.860 | But it took the scientific age a long time
01:16:24.460 | to rediscover a term that is pretty much the same thing.
01:16:27.420 | And I suspect that the differences that we still see
01:16:30.540 | between the old world and the new world
01:16:32.060 | are translation errors that have happened
01:16:33.780 | over the centuries.
01:16:35.020 | - Well, can you actually linger on that?
01:16:36.700 | Like, why do you say that spirit, just to clarify,
01:16:39.900 | because I'm a little bit confused.
01:16:41.420 | So the word spirit is a powerful thing,
01:16:44.540 | but why did you say in the last year or so
01:16:46.260 | that you discovered this?
01:16:48.100 | Do you mean the same old traditional idea of a spirit?
01:16:51.260 | Or do you mean--
01:16:52.100 | - I try to find out what people mean by spirit.
01:16:54.100 | When people say spirituality in the US,
01:16:56.380 | it usually refers to the phantom limb
01:16:58.620 | that they develop in the absence of culture.
01:17:01.020 | And a culture is, in some sense,
01:17:03.020 | you could say the spirit of a society that is long game.
01:17:07.340 | The thing that becomes self-aware
01:17:10.060 | at a level above the individuals,
01:17:11.940 | where you say, if we don't do the following things,
01:17:14.780 | then the grand, grand, grand, grandchildren of our children
01:18:17.140 | will have nothing to eat.
01:17:18.900 | So if you take this long scope,
01:17:21.220 | or you try to maximize the length of the game
01:17:23.300 | that you are playing as a species,
01:17:24.740 | you realize that you're part of a larger thing
01:17:27.060 | that you cannot fully control.
01:17:28.260 | You probably need to submit to the ecosphere
01:17:30.940 | instead of trying to completely control it.
01:17:34.340 | There needs to be a certain level
01:17:36.620 | at which we can exist as a species if you want to endure.
01:17:40.220 | And our culture is not sustaining this anymore.
01:17:43.060 | We basically made this bet with the Industrial Revolution
01:17:45.460 | that we can control everything.
01:17:46.780 | And the modernist societies
01:17:48.020 | with basically unfettered growth
01:17:50.380 | led to a situation in which we depend on the ability
01:17:53.980 | to control the entire planet.
01:17:55.780 | And since we are not able to do that, as it seems,
01:18:00.260 | this culture will die.
01:18:02.060 | And we realize that it doesn't have a future.
01:18:04.500 | We called our children Generation Z.
01:18:07.420 | It's such a very optimistic thing to do.
01:18:09.540 | (both laughing)
01:18:10.540 | - Yeah, so you can have this kind of intuition
01:18:13.060 | that our civilization, you said culture,
01:18:16.460 | but you really mean the spirit of the civilization,
01:18:21.460 | the entirety of the civilization may not exist for long.
01:18:26.140 | - Yeah.
01:18:26.980 | - So can you untangle that?
01:18:28.980 | What's your intuition behind that?
01:18:30.340 | So you kind of offline mentioned to me
01:18:33.340 | that the Industrial Revolution was kind of a,
01:18:35.780 | the moment we agreed to accept the offer,
01:18:40.140 | sign on the paper, on the dotted line
01:18:42.860 | with the Industrial Revolution, we doomed ourselves.
01:18:45.460 | Can you elaborate on that?
01:18:47.340 | - There's a suspicion.
01:18:48.260 | I, of course, don't know how it plays out.
01:18:49.980 | But it seems to me that in a society
01:18:54.420 | in which you leverage yourself very far
01:18:57.780 | over an entropic abyss without land on the other side,
01:19:00.860 | it's relatively clear that your cantilever
01:19:02.620 | is at some point going to break down
01:19:05.300 | into this entropic abyss and you have to pay the bill.
01:19:08.420 | - Okay, Russia's my first language and I'm also an idiot.
01:19:13.060 | - Me too.
01:19:13.900 | (both laughing)
01:19:15.540 | - This is just two apes.
01:19:17.160 | Instead of playing with-- - Previous.
01:19:19.380 | (both laughing)
01:19:20.460 | - Instead of playing with a banana
01:19:21.580 | trying to have fun by talking.
01:19:23.260 | Okay, anthropic what and what's anthropic?
01:19:26.700 | - Entropic.
01:19:27.540 | - Entropic.
01:19:28.380 | - And so entropic in the sense of entropy.
01:19:30.820 | - Oh, entropic, got it.
01:19:31.820 | - Yes, so this--
01:19:32.660 | - And entropic, what was the other word you used?
01:19:34.180 | - Abyss.
01:19:35.300 | - What's that?
01:19:36.140 | - It's a big gorge.
01:19:37.780 | - Oh, abyss.
01:19:38.620 | - Abyss, yes.
01:19:39.460 | - Entropic abyss, so many of the things you say are poetic.
01:19:42.540 | It's hurting my brain. - And also mispronounced.
01:19:44.260 | It's amazing, right?
01:19:45.660 | - It's mispronounced.
01:19:46.500 | (both laughing)
01:19:48.900 | Which makes it even more poetic.
01:19:50.500 | (both laughing)
01:19:51.340 | - We both said that.
01:19:52.420 | - Wittgenstein would be proud.
01:19:53.900 | So entropic abyss.
01:19:55.900 | Okay, let's rewind then.
01:19:58.380 | The Industrial Revolution,
01:20:00.860 | so how does that get us into the entropic abyss?
01:20:04.140 | - So in some sense, we burned 100 million years worth
01:20:08.020 | of trees to get everybody plumbing.
01:20:10.620 | - Yes.
01:20:11.460 | - And the society that we had before that
01:20:13.140 | had a very limited number of people.
01:20:15.080 | So basically since 0 BC,
01:20:17.740 | we hovered between 300 and 400 million people.
01:20:21.820 | - Yes.
01:20:22.660 | - And this only changed with the Enlightenment
01:20:25.060 | and the subsequent Industrial Revolution.
01:20:27.900 | And in some sense, the Enlightenment freed our rationality
01:20:31.420 | and also freed our norms
01:20:33.260 | from the pre-existing order gradually.
01:20:35.620 | It was a process that basically happened in feedback loops,
01:20:39.100 | so it was not that just one caused the other.
01:20:41.340 | It was a dynamic that started.
01:20:43.500 | And the dynamic worked by basically increasing productivity
01:20:47.540 | to such a degree that we could feed all our children.
01:20:51.820 | And I think the definition of poverty
01:20:55.340 | is that you have as many children
01:20:57.460 | as you can feed before they die,
01:20:59.540 | which is in some sense the state
01:21:00.940 | that all animals on Earth are in.
01:21:02.700 | - The definition of poverty is having enough--
01:21:06.380 | - So you can have only so many children as you can feed,
01:21:08.820 | and if you have more, they die.
01:21:10.380 | - Yes.
01:21:11.220 | - And in our societies,
01:21:12.380 | you can basically have as many children as you want
01:21:14.380 | and they don't die.
01:21:16.060 | - Right.
01:21:16.880 | - So the reason why we don't have as many children
01:21:19.620 | as we want is because we also have to pay a price
01:21:21.980 | in terms of we have to insert ourselves
01:21:23.820 | into a lower social stratum if we have too many.
01:21:26.460 | So basically everybody in the under, middle,
01:21:29.100 | and lower, upper class has only a limited number of children
01:21:33.060 | because having more of them would mean a big economic hit
01:21:36.500 | to the individual families.
01:21:37.900 | - Yes.
01:21:38.740 | - Because children, especially in the US,
01:21:39.660 | super expensive to have.
01:21:41.300 | And you only are taken out of this
01:21:43.540 | if you are basically super rich or if you are super poor.
01:21:46.660 | If you are super poor,
01:21:47.500 | it doesn't matter how many kids you have
01:21:48.620 | because your status is not going to change.
01:21:51.220 | And these children are largely not going to die of hunger.
01:21:54.660 | - So how does this lead us to self-destruction?
01:21:57.060 | So there's a lot of unpleasant properties
01:21:59.500 | about this process.
01:22:00.340 | - So basically what we try to do
01:22:02.100 | is we try to let our children survive
01:22:04.340 | even if they have diseases.
01:22:07.260 | Like I would have died before my mid-20s
01:22:11.700 | without modern medicine.
01:22:13.140 | And most of my friends would have as well.
01:22:15.420 | And so many of us wouldn't live
01:22:17.140 | without the advantages of modern medicine
01:22:20.220 | and modern industrialized society.
01:22:22.580 | We get our protein largely
01:22:24.700 | by subduing the entirety of nature.
01:22:26.900 | Imagine there would be some very clever microbe
01:22:30.300 | that would live in our organisms
01:22:32.140 | and would completely harvest them
01:22:34.980 | and change them into a thing
01:22:37.460 | that is necessary to sustain itself.
01:22:40.780 | And it would discover that, for instance,
01:22:42.820 | brain cells are kind of edible,
01:22:44.300 | but they're not quite nice.
01:22:45.340 | So you need to have more fat in them
01:22:47.580 | and you turn them into more fat cells.
01:22:49.780 | And basically this big organism would become a vegetable
01:22:52.580 | that is barely alive and it's going to be very brittle
01:22:55.140 | and not resilient when the environment changes.
01:22:57.140 | - Yeah, but some part of that organism,
01:23:00.140 | the one that's actually doing all the using of the,
01:23:03.220 | there'll still be somebody thriving.
01:23:05.060 | - So it relates back to this original question.
01:23:09.020 | I suspect that we are not the smartest thing on this planet.
01:23:12.740 | I suspect that basically every complex system
01:23:15.340 | has to have some complex regulation.
01:23:18.020 | If it depends on feedback loops.
01:23:20.780 | And so, for instance, it's likely
01:23:23.940 | that we should ascribe a certain degree
01:23:25.700 | of intelligence to plants.
01:23:27.940 | The problem is that plants don't have a nervous system.
01:23:30.460 | So they don't have a way to telegraph messages
01:23:32.780 | over large distances almost instantly in the plant.
01:23:36.340 | And instead they will rely on chemicals
01:23:38.500 | between adjacent cells,
01:23:39.540 | which means the signal processing speed
01:23:42.260 | is limited by signals propagating
01:23:44.100 | at a rate of a few millimeters per second.
01:23:47.460 | And as a result, if the plant is intelligent,
01:23:50.540 | it's not going to be intelligent at similar timescales.
01:23:53.340 | - The time scale is different.
01:23:55.980 | So you suspect we might not be the most intelligent,
01:23:59.820 | but we're the most intelligent in this spatial scale
01:24:04.260 | and our time scale.
01:24:05.300 | - So basically if you would zoom out very far,
01:24:08.260 | we might discover that there have been
01:24:10.060 | intelligent ecosystems on the planet
01:24:12.300 | that existed for thousands of years
01:24:14.500 | in a almost undisturbed state.
01:24:16.780 | And it could be that these ecosystems
01:24:18.540 | actively regulated their environment.
01:24:20.140 | So basically changed the course of the evolution
01:24:23.020 | within this ecosystem to make it more efficient
01:24:25.140 | and less brittle.
01:24:25.980 | - So it's possible something like plants
01:24:28.060 | is actually a set of living organisms,
01:24:30.700 | an ecosystem of living organisms
01:24:33.020 | that are just operating at a different timescale
01:24:34.820 | and are far superior in intelligence to human beings.
01:24:37.740 | And then human beings will die out
01:24:39.380 | and plants will still be there and they'll be there.
01:24:42.300 | - Yeah, they also, there's an evolutionary adaptation
01:24:45.140 | playing a role at all of these levels.
01:24:46.740 | For instance, if mice don't get enough food
01:24:49.380 | and get stressed, the next generation of mice
01:24:51.460 | will be more sparse and more scrawny.
01:24:53.740 | And the reason for this is because,
01:24:56.220 | in a natural environment, the mice have probably
01:24:58.180 | hit a drought or something else.
01:25:00.500 | And if they overgraze, then all the things
01:25:02.860 | that sustain them might go extinct
01:25:05.300 | and there will be no mice a few generations from now.
01:25:07.900 | So to make sure that there will be mice
01:25:09.780 | in five generations from now, basically the mice scale back.
01:25:13.580 | And a similar thing happens with the predators of mice.
01:25:16.140 | They should make sure that the mice
01:25:17.380 | don't completely go extinct.
01:25:19.140 | So in some sense, if the predators are smart enough,
01:25:21.980 | they will be tasked with shepherding their food supply.
01:25:25.780 | And maybe the reason why lions have much larger brains
01:25:29.260 | than antelopes is not so much because it's so hard
01:25:31.820 | to catch an antelope as opposed to run away from the lion.
01:25:35.460 | But the lions need to make complex models
01:25:37.740 | of their environment, more complex than the antelopes.
01:25:40.740 | - So first of all, just describing that there's a bunch
01:25:43.660 | of complex systems and human beings
01:25:45.500 | may not even be the most special or intelligent
01:25:47.580 | to those complex systems, even on Earth,
01:25:50.300 | makes me feel a little better about the extinction
01:25:52.460 | of human species that we're talking about.
01:25:54.260 | - Yes, maybe we are just Gaia's ploy
01:25:55.860 | to put the carbon back into the atmosphere.
01:25:57.540 | - Yeah, this is just a nice, we tried it out.
01:26:00.260 | - The big stain on evolution is not us, it was trees.
01:26:03.660 | Earth evolved trees before they could be digested again.
01:26:06.540 | There were no insects that could break all of them apart.
01:26:09.620 | Cellulose is so robust that you cannot get all of it
01:26:12.940 | with microorganisms.
01:26:14.180 | So many of these trees fell into swamps
01:26:16.500 | and all this carbon became inert
01:26:18.060 | and could no longer be recycled into organisms.
01:26:20.620 | And we are the species that is destined
01:26:22.340 | to take care of that.
01:26:23.640 | - So this is kind of--
01:26:24.780 | (laughing)
01:26:25.940 | - Take it out of the ground, put it back into the atmosphere
01:26:28.060 | and the Earth is already greening.
01:26:29.940 | So within a million years or so,
01:26:32.460 | when the ecosystems have recovered from the rapid changes
01:26:35.400 | that they're not compatible with right now,
01:26:37.900 | the Earth is going to be awesome again.
01:26:39.340 | - And there won't be even a memory of us little apes.
01:26:42.140 | - I think there will be memories of us.
01:26:43.700 | I suspect we are the first generally intelligent species,
01:26:46.260 | in the sense that we are the first species
01:26:48.140 | with an industrial society, because we will leave
01:26:50.580 | more phones than bones in the strata.
01:26:53.100 | - Well see, phones than bones, I like it.
01:26:56.860 | But then let me push back.
01:26:58.980 | You've kind of suggested that we have a very narrow
01:27:01.980 | definition of, I mean, why aren't trees more general,
01:27:06.980 | a higher level of general intelligence?
01:27:09.620 | - If trees were intelligent, then they would be
01:27:12.140 | at different timescales, which means within 100 years,
01:27:14.700 | the tree is probably not going to make models
01:27:16.740 | that are as complex as the ones that we make in 10 years.
01:27:19.820 | - But maybe the trees are the ones that made the phones.
01:27:23.060 | - We could say the entirety of life did it.
01:27:28.860 | The first cell never died.
01:27:30.600 | The first cell only split, right, and divided.
01:27:32.940 | And every cell in our body is still an instance
01:27:36.060 | of the first cell that split off from that very first cell.
01:27:38.500 | There was only one cell on this planet as far as we know.
01:27:41.220 | And so the cell is not just a building block of life.
01:27:44.660 | It's a hyperorganism, right?
01:27:46.380 | And we are part of this hyperorganism.
01:27:48.300 | - So nevertheless, this hyperorganism, no,
01:27:53.700 | this little particular branch of it, which is us humans,
01:27:58.300 | because of the Industrial Revolution
01:27:59.500 | and maybe the exponential growth of technology
01:28:02.580 | might somehow destroy ourselves.
01:28:04.300 | So what do you think is the most likely way
01:28:07.540 | we might destroy ourselves?
01:28:09.260 | So some people worry about genetic manipulation.
01:28:11.820 | Some people, as we've talked about,
01:28:13.940 | worry about either dumb artificial intelligence
01:28:16.980 | or super intelligent artificial intelligence destroying us.
01:28:20.740 | Some people worry about nuclear weapons
01:28:24.080 | and weapons of war in general.
01:28:25.900 | What do you think?
01:28:26.740 | If you were a betting man, what would you bet on
01:28:29.700 | in terms of self-destruction?
01:28:31.680 | And would it be higher than 50%?
01:28:34.940 | - So it's very likely that nothing that we bet on matters
01:28:39.580 | after we win our bets.
01:28:40.700 | So I don't think that bets are literally
01:28:42.380 | the right way to go about this.
01:28:44.180 | - I mean, once you're dead,
01:28:45.700 | you won't be there to collect the winnings.
01:28:47.540 | - So it's also not clear if we as a species go extinct.
01:28:51.900 | But I think that our present civilization
01:28:53.820 | is not sustainable.
01:28:54.700 | So the thing that will change is there will be probably
01:28:57.240 | fewer people on the planet than are today.
01:29:00.140 | And even if not, then still most of people
01:29:02.440 | that are alive today will not have offspring
01:29:04.300 | in 100 years from now because of the geographic changes
01:29:07.340 | and so on and the changes in the food supply.
01:29:09.660 | It's quite likely that many areas of the planet
01:29:13.140 | will only be livable with a closed cooling chain
01:29:15.420 | in 100 years from now.
01:29:16.500 | So many of the areas around the equator
01:29:19.020 | and in subtropical climates that are now quite pleasant
01:29:23.620 | to live in will cease to be habitable
01:29:26.420 | without air conditioning.
01:29:27.260 | - Cooling chain, so you honestly, wow, cooling chain,
01:29:30.020 | close-knit cooling chain communities.
01:29:32.980 | So you think you have a strong worry
01:29:35.140 | about the effects of global warming that would--
01:29:38.380 | - By itself, it's not a big issue.
01:29:39.900 | If you live in Arizona right now,
01:29:41.700 | you have basically three months in the summer
01:29:43.980 | in which you cannot be outside.
01:29:45.860 | And so you have a closed cooling chain.
01:29:47.580 | You have air conditioning in your car
01:29:48.860 | and in your home and you're fine.
01:29:49.940 | And if the air conditioning would stop for a few days,
01:29:53.700 | then in many areas, you would not be able to survive.
01:29:56.900 | - Can we just pause for a second?
01:29:58.500 | You say so many brilliant, poetic things.
01:30:01.940 | What is a closed, do people use that term,
01:30:04.140 | closed cooling chain?
01:30:06.060 | - I imagine that people use it when they describe
01:30:08.100 | how they get meat into a supermarket, right?
01:30:11.100 | If you break the cooling chain and this thing starts to thaw,
01:30:14.100 | you're in trouble and you have to throw it away.
01:30:16.100 | And the same thing happens to us.
01:30:18.380 | - That's such a beautiful way to put it.
01:30:19.700 | It's like calling a city a closed social chain
01:30:22.900 | or something like that.
01:30:23.740 | I mean, that's right.
01:30:24.980 | I mean, the locality of it is really important.
01:30:26.580 | - Yeah, but it basically means you wake up
01:30:27.700 | in a climatized room, you go to work in a climatized car,
01:30:30.340 | you work in a climatized office--
01:30:31.180 | - You call interconnected.
01:30:32.020 | - You shop in a climatized supermarket,
01:30:33.900 | and in between, you have very short distance
01:30:36.340 | which you run from your car to the supermarket,
01:30:38.620 | but you have to make sure that your temperature
01:30:41.540 | does not approach the temperature of the environment.
01:30:43.420 | - Yeah, so--
01:30:44.260 | - And the crucial thing is the wet bulb temperature.
01:30:45.660 | - The what temperature?
01:30:46.500 | - The wet bulb temperature.
01:30:47.320 | It's what you get when you take a wet cloth
01:30:51.020 | and you put it around your thermometer,
01:30:53.660 | and then you move it very quickly through the air,
01:30:56.300 | so you get the evaporation heat.
01:30:57.980 | - Yes.
01:30:59.300 | - And as soon as you can no longer cool
01:31:02.380 | your body temperature via evaporation
01:31:04.940 | to a temperature below something like,
01:31:07.580 | I think, 35 degrees, you die.
01:31:09.500 | - Right.
01:31:10.340 | - And which means if the outside world is dry,
01:31:14.020 | you can still cool yourself down by sweating,
01:31:16.620 | but if it has a certain degree of humidity
01:31:19.100 | or if it goes over a certain temperature,
01:31:20.860 | then sweating will not save you.
01:31:23.260 | And this means even if you're a healthy,
01:31:25.420 | fit individual within a few hours,
01:31:27.220 | even if you try to be in the shade and so on, you'll die,
01:31:30.100 | unless you have some climatizing equipment.
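
As an aside for readers, the wet-bulb threshold described here can be made concrete. Below is a minimal sketch, not part of the conversation, using Stull's 2011 empirical approximation for wet-bulb temperature from air temperature and relative humidity; the coefficients are quoted from memory and the formula is only valid roughly above a few percent humidity near sea-level pressure. The ~35 °C figure is the survivability limit mentioned above.

```python
import math

def wet_bulb_celsius(temp_c, rel_humidity_pct):
    """Approximate wet-bulb temperature (Stull 2011) from air temperature in deg C
    and relative humidity in percent. Coefficients quoted from memory; valid
    roughly for RH above ~5% near sea-level pressure."""
    t, rh = temp_c, rel_humidity_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# Dry heat can be survived by sweating; humid heat at a similar temperature cannot.
for temp_c, rh in [(45, 10), (45, 60), (38, 90)]:
    tw = wet_bulb_celsius(temp_c, rh)
    verdict = "beyond the ~35 C survivability limit" if tw >= 35 else "still survivable by sweating"
    print(f"{temp_c} C at {rh}% RH -> wet bulb ~{tw:.1f} C ({verdict})")
```

The contrast between the dry and humid cases is the whole point: sweating rescues you in the first, not in the second.
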
01:31:34.140 | And this itself, as long as you maintain civilization
01:31:37.020 | and you have energy supply and you have food trucks
01:31:39.420 | coming to your home that are climatized, everything is fine.
01:31:42.060 | But what if you lose large-scale open agriculture
01:31:45.020 | at the same time?
01:31:46.180 | So basically you run into food insecurity
01:31:48.300 | because climate becomes very irregular
01:31:50.500 | or weather becomes very irregular
01:31:52.260 | and you have a lot of extreme weather events.
01:31:55.220 | So you need to grow most of your food maybe indoor
01:31:58.740 | or you need to import your food from certain regions.
01:32:01.940 | And maybe you're not able to maintain the civilization
01:32:04.900 | throughout the planet to get the infrastructure
01:32:07.500 | to get the food to your home.
01:32:09.180 | - Right, but there could be significant impacts
01:32:12.180 | in the sense that people begin to suffer.
01:32:14.100 | There could be wars over resources and so on.
01:32:16.700 | But ultimately, do you not have, not a faith,
01:32:20.860 | but what do you make of the capacity
01:32:24.380 | of technological innovation to help us prevent
01:32:29.380 | some of the worst damages that this condition can create?
01:32:34.420 | So as an example, as a almost out there example,
01:32:39.500 | is the work that SpaceX and Elon Musk is doing
01:32:41.540 | of trying to also consider our propagation
01:32:46.260 | throughout the universe in deep space
01:32:48.960 | to colonize other planets.
01:32:50.700 | That's one technological step.
01:32:52.620 | - But of course, what Elon Musk is trying on Mars
01:32:55.220 | is not to save us from global warming
01:32:57.340 | because Mars looks much worse than Earth will look like
01:33:00.340 | after the worst outcomes of global warming imaginable, right?
01:33:03.880 | Mars is essentially not habitable.
01:33:07.020 | - It's exceptionally harsh environment, yes.
01:33:09.060 | But what he is doing, what a lot of people
01:33:11.460 | throughout history since the Industrial Revolution
01:33:13.140 | are doing, are just doing a lot of different
01:33:15.820 | technological innovation with some kind of target
01:33:18.500 | and what ends up happening is totally unexpected,
01:33:21.140 | new things come up.
01:33:22.460 | So trying to terraform or trying to colonize Mars,
01:33:27.460 | extremely harsh environment, might give us totally new ideas
01:33:30.700 | of how to expand or increase the power
01:33:35.140 | of this closed cooling circuit that empowers the community.
01:33:40.140 | So it seems like there's a little bit of a race
01:33:45.000 | between our open-ended technological innovation
01:33:48.980 | of this communal operating system that we have
01:33:53.980 | and our general tendency to want to overuse resources
01:33:59.660 | and thereby destroy ourselves.
01:34:03.400 | You don't think technology can win that race?
01:34:06.260 | - I think the probability is relatively low
01:34:08.940 | given that our technology, for instance
01:34:11.660 | in the US, has been stagnating since the 1970s, roughly,
01:34:15.080 | in terms of technology.
01:34:16.140 | Most of the things that we do are the result
01:34:18.440 | of incremental processes.
01:34:20.040 | - What about Intel?
01:34:21.160 | What about Moore's law?
01:34:22.360 | - It's basically, it's very incremental.
01:34:23.960 | The things that we are doing, so after the invention
01:34:27.280 | of the microprocessor was a major thing, right?
01:34:30.340 | The miniaturization of transistors was really major.
01:34:35.340 | But the things that we did afterwards
01:34:38.560 | largely were not that innovative.
01:34:40.560 | - Well, hold on a second. - We had gradual changes
01:34:42.120 | of scaling things from CPUs into GPUs and things like that.
01:34:47.120 | But I don't think that there are,
01:34:52.960 | basically there are not many things.
01:34:54.040 | If you take a person that died in the '70s
01:34:56.180 | and was at the top of their game,
01:34:58.040 | they would not need to read that many books
01:35:00.120 | to be current again.
01:35:01.360 | - But it's all about books.
01:35:02.560 | Who cares about books?
01:35:03.640 | There might be things that are beyond,
01:35:06.280 | books might be a very-- - Or say papers.
01:35:08.000 | - No, papers, forget papers.
01:35:09.780 | There might be things that are,
01:35:11.060 | so papers and books and knowledge,
01:35:12.480 | that's a concept from a time when you were sitting there
01:35:16.160 | by candlelight as individual consumers of knowledge.
01:35:19.100 | What about the impact that we're not in the middle of,
01:35:22.280 | might not be understanding of Twitter, of YouTube?
01:35:25.880 | The reason you and I are sitting here today
01:35:28.320 | is because of Twitter and YouTube.
01:35:31.320 | So the ripple effect, and there's two minds,
01:35:35.160 | sort of two dumb apes, are coming up with,
01:35:38.040 | perhaps, new clean insights,
01:35:40.500 | and there's 200 other apes listening right now,
01:35:43.220 | 200,000 other apes listening right now,
01:35:45.340 | and that effect, it's very difficult to understand
01:35:49.320 | what that effect will have.
01:35:50.360 | That might be bigger than any of the advancements
01:35:52.700 | of the microprocessor or any of the Industrial Revolution,
01:35:55.520 | the ability of spread knowledge,
01:35:57.500 | and that knowledge,
01:36:03.260 | it allows good ideas to reach millions much faster,
01:36:08.140 | and the effect of that, that might be the new,
01:36:10.220 | that might be the 21st century,
01:36:12.020 | is the multiplying of ideas, of good ideas.
01:36:16.820 | 'Cause if you say one good thing today,
01:36:19.340 | that will multiply across huge amounts of people,
01:36:23.760 | and then they will say something,
01:36:24.900 | and then they will have another podcast,
01:36:26.620 | and they'll say something, and then they'll write a paper.
01:36:28.740 | That could be a huge, you don't think that--
01:36:31.740 | - Yeah, we should have billions of von Neumanns
01:36:34.980 | and Turings right now, and we don't for some reason.
01:36:37.820 | I suspect the reason is that we destroy our attention span.
01:36:40.820 | Also, the incentives are, of course, different.
01:36:42.620 | - Yeah, we have Kim Kardashians, yeah.
01:36:44.540 | - So the reason why we are sitting here
01:36:46.060 | and doing this as a YouTube video
01:36:47.500 | is because you and me don't have the attention span
01:36:49.620 | to write a book together right now,
01:36:51.260 | and you guys probably don't have the attention span
01:36:53.340 | to read it, so let me tell you--
01:36:54.860 | - But I guarantee you, they're still listening.
01:36:55.700 | - It's a very short, intense burst
01:36:56.900 | to take care of your attention.
01:36:58.660 | It's very short.
01:37:00.100 | But we're an hour and 40 minutes in,
01:37:02.620 | and I guarantee you that 80% of the people
01:37:04.780 | are still listening, so there is an attention span.
01:37:07.340 | It's just the form.
01:37:09.380 | Who said that the book is the optimal way
01:37:11.860 | to transfer information?
01:37:13.380 | That's still an open question.
01:37:14.900 | I mean, that's what we're--
01:37:15.740 | - There's something that social media could be doing
01:37:17.560 | that other forms could not be doing.
01:37:19.460 | I think the endgame of social media is a global brain,
01:37:22.380 | and Twitter is, in some sense, a global brain
01:37:24.540 | that is completely hooked on dopamine,
01:37:26.100 | doesn't have any kind of inhibition,
01:37:27.540 | and as a result, is caught in a permanent seizure.
01:37:30.620 | It's also, in some sense, a multiplayer role-playing game,
01:37:34.340 | and people use it to play an avatar that is not like them,
01:37:38.140 | as they're in this sane world,
01:37:39.460 | and they look through the world
01:37:40.380 | through the lens of their phones
01:37:41.580 | and think it's the real world,
01:37:42.860 | but it's the Twitter world that is distorted
01:37:44.500 | by the popularity incentives of Twitter.
01:37:46.500 | - Yeah, the incentives, and just our natural biological,
01:37:51.260 | the dopamine rush of a like, no matter how,
01:37:55.420 | like I consider, I try to be very kind of Zen-like
01:38:00.220 | and minimalist and not be influenced by likes and so on,
01:38:03.100 | but it's probably very difficult to avoid that
01:38:05.340 | to some degree.
01:38:06.180 | Speaking at a small tangent of Twitter,
01:38:10.780 | how can Twitter be done better?
01:38:15.080 | I think it's an incredible mechanism
01:38:17.460 | that has a huge impact on society
01:38:19.940 | by doing exactly what you're doing.
01:38:22.380 | Sorry, doing exactly what you described,
01:38:24.400 | which is having this,
01:38:25.460 | we're like, this is some kind of game,
01:38:30.300 | and we're kind of individual RL agents in this game,
01:38:33.540 | and it's uncontrollable,
01:38:34.580 | 'cause there's not really a centralized control.
01:38:36.380 | Neither Jack Dorsey nor the engineers at Twitter
01:38:39.300 | seem to be able to control this game.
01:38:41.660 | Or can they?
01:38:44.580 | That's sort of a question.
01:38:45.420 | Is there any advice you would give
01:38:47.060 | on how to control this game?
01:38:48.940 | - I wouldn't give advice,
01:38:50.260 | because I am certainly not an expert,
01:38:51.980 | but I can give my thoughts on this.
01:38:53.500 | And our brain has solved this problem to some degree.
01:38:58.500 | Our brain has lots of individual agents
01:39:01.200 | that manage to play together in a way,
01:39:03.260 | and you have also many contexts
01:39:04.660 | in which other organisms have found ways
01:39:07.400 | to solve the problems of cooperation
01:39:10.380 | that we don't solve on Twitter.
01:39:12.540 | And maybe the solution
01:39:14.340 | is to go for an evolutionary approach.
01:39:17.580 | So imagine that you have something like Reddit,
01:39:20.460 | or something like Facebook, and something like Twitter,
01:39:23.620 | and you think about what they have in common.
01:39:25.820 | What they have in common,
01:39:26.660 | they're companies that in some sense own a protocol.
01:39:30.180 | And this protocol is imposed on a community,
01:39:32.860 | and the protocol has different components
01:39:35.280 | for monetization, for user management,
01:39:38.460 | for user display, for rating, for anonymity,
01:39:40.600 | for import of other content, and so on.
01:39:42.940 | And now imagine that you take these components
01:39:44.980 | of the protocol apart,
01:39:46.740 | and you give them, in some sense, to communities
01:39:50.340 | within this social network.
01:39:52.340 | And these communities are allowed to mix
01:39:54.100 | and match their protocols and design new ones.
01:39:56.660 | So for instance, the UI and the UX
01:39:59.140 | can be defined by the community.
01:40:01.060 | The rules for sharing content
01:40:02.840 | across communities can be defined.
01:40:04.620 | The monetization can be redefined.
01:40:06.700 | The way you reward individual users for what they do
01:40:09.240 | can be redefined.
01:40:10.080 | The way users can represent themselves
01:40:12.740 | and to each other can be redefined.
01:40:14.860 | - And who could be the redefiner?
01:40:16.380 | So can individual human beings build enough intuition
01:40:19.380 | to redefine those things?
01:40:20.220 | - This itself can become part of the protocol.
01:40:22.260 | So for instance, it could be in some communities,
01:40:24.900 | it will be a single person that comes up with these things,
01:40:27.780 | and others, it's a group of friends.
01:40:29.700 | Some might implement a voting scheme
01:40:31.600 | that has some interesting weighted voting, who knows?
01:40:34.180 | Who knows what will be the best
01:40:35.300 | self-organizing principle for this?
01:40:36.860 | - But the process can't be automated.
01:40:38.780 | I mean, it seems like the brain is--
01:40:40.700 | - It can be automated,
01:40:41.580 | so people can write software for this.
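
As an illustration of what "writing software for this" might look like, here is a small, purely hypothetical sketch: the protocol components listed above (monetization, moderation, rating, anonymity, cross-community sharing, governance) as a composable bundle that a community can fork and mutate. All class names, fields, and values are invented for illustration; this is not a real platform API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Protocol:
    """Hypothetical bundle of the components a community could mix and match."""
    monetization: str = "ads"          # e.g. "ads", "subscriptions", "tips"
    moderation: str = "social-norms"   # e.g. "social-norms", "ai-filter", "legal"
    rating: str = "likes"              # e.g. "likes", "weighted-votes", "none"
    anonymity: bool = False
    cross_posting: bool = True         # rules for sharing content across communities
    governance: str = "single-admin"   # e.g. "single-admin", "friend-group", "vote"

@dataclass
class Community:
    name: str
    protocol: Protocol

    def fork(self, new_name: str, **overrides) -> "Community":
        """Evolutionary step: copy the parent's protocol and mutate some components."""
        return Community(new_name, replace(self.protocol, **overrides))

# Communities mix and match components instead of inheriting one platform-wide protocol.
base = Community("birds", Protocol())
fork = base.fork("quiet-birds", rating="none", anonymity=True, governance="vote")
print(fork.protocol)
```

The design choice being illustrated is just the decomposition itself: once the protocol is a value that communities own, selection between variants can happen at the community level rather than platform-wide.
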
01:40:43.860 | And eventually the idea is,
01:40:46.100 | let's not make an assumption about this thing
01:40:48.540 | if we don't know what the right solution is.
01:40:50.300 | In those areas, that we have no idea
01:40:52.460 | whether the right solution will be people designing this
01:40:54.980 | ad hoc or machines doing this,
01:40:57.100 | whether you want to enforce compliance by social norms
01:41:00.260 | like Wikipedia or with software solutions
01:41:03.540 | or with AI that goes through the posts of people
01:41:06.180 | or with a legal principle and so on.
01:41:08.540 | This is something maybe you need to find out.
01:41:10.860 | And so the idea would be if you let the communities evolve
01:41:14.820 | and you just control it in such a way
01:41:16.980 | that you are incentivizing the most sentient communities,
01:41:21.140 | the ones that produce the most interesting behaviors
01:41:25.300 | that allow you to interact in the most helpful ways
01:41:28.300 | to the individuals, right?
01:41:29.500 | So you have a network that gives you information
01:41:31.460 | that is relevant to you.
01:41:32.760 | It helps you to maintain relationships
01:41:34.780 | to others in healthy ways.
01:41:36.380 | It allows you to build teams.
01:41:37.620 | It allows you to basically bring the best of you
01:41:40.180 | into this thing and goes into a coupling,
01:41:42.660 | into a relationship with others
01:41:44.020 | in which you produce things
01:41:45.340 | that you would be unable to produce alone.
01:41:47.220 | - Yes, beautifully put.
01:41:48.460 | So, but the key process of that with incentives
01:41:52.500 | and evolution is things that don't adopt themselves
01:41:57.500 | to effectively get the incentives have to die.
01:42:03.020 | And the thing about social media is communities
01:42:06.020 | that are unhealthy or whatever you want to define
01:42:08.540 | as the incentives really don't like dying.
01:42:11.260 | One of the things that people really get aggressive,
01:42:13.740 | protest aggressively is when they're censored,
01:42:17.100 | especially in America.
01:42:17.940 | I don't know much about the rest of the world,
01:42:20.540 | but the idea of freedom of speech,
01:42:22.980 | the idea of censorship is really painful in America.
01:42:26.220 | And so what do you think about that
01:42:32.060 | having grown up in East Germany?
01:42:36.960 | Do you think censorship is an important tool
01:42:41.380 | in our brain and the intelligence
01:42:43.060 | and in the social networks?
01:42:46.180 | So basically, if you're not a good member
01:42:50.600 | of the entirety of the system,
01:42:52.460 | they should be blocked away, well, locked away, blocked.
01:42:55.580 | - Important thing is who decides that you are a good member?
01:42:59.100 | - Who, is it distributed or?
01:43:00.780 | - And what is the outcome of the process that decides it?
01:43:04.380 | Both for the individual and for society at large.
01:43:07.380 | For instance, if you have a high trust society,
01:43:09.980 | you don't need a lot of surveillance.
01:43:11.920 | And the surveillance is even, in some sense,
01:43:14.220 | undermining trust because it's basically punishing people
01:43:19.180 | that look suspicious when surveilled,
01:43:21.420 | but do the right thing anyway.
01:43:23.740 | And the opposite, if you have a low trust society,
01:43:26.660 | then surveillance can be a better trade-off.
01:43:28.900 | And the US is currently making a transition
01:43:30.860 | from a relatively high trust or mixed trust society
01:43:33.300 | to a low trust society, so surveillance will increase.
01:43:36.540 | Another thing is that beliefs
01:43:38.100 | are not just inert representations.
01:43:39.780 | They are implementations that run code on your brain
01:43:42.720 | and change your reality and change the way you interact
01:43:45.240 | with each other at some level.
01:43:46.960 | And some of the beliefs are just public opinions
01:43:51.120 | that we use to display our alignment.
01:43:53.000 | So for instance, people might say,
01:43:55.640 | all cultures are the same and equally good,
01:43:58.720 | but still they prefer to live in some cultures over others,
01:44:01.480 | very, very strongly so.
01:44:03.200 | And it turns out that the cultures are defined
01:44:05.280 | by certain rules of interaction.
01:44:07.040 | And these rules of interaction lead to different results
01:44:09.400 | when you implement them, right?
01:44:10.560 | So if you adhere to certain rules,
01:44:12.880 | you get different outcomes in different societies.
01:44:16.160 | And this all leads to very tricky situations
01:44:19.480 | when people do not have a commitment to a shared purpose.
01:44:22.800 | And our societies probably need to rediscover
01:44:25.240 | what it means to have a shared purpose
01:44:27.680 | and how to make this compatible
01:44:29.280 | with a non-totalitarian view.
01:44:31.260 | So in some sense, the US is caught in a conundrum
01:44:35.000 | between totalitarianism and diversity
01:44:43.000 | and doesn't know how to resolve this.
01:44:43.000 | And the solutions that the US has found so far
01:44:45.080 | are very crude because it's a very young society
01:44:47.520 | that is also under a lot of tension.
01:44:49.240 | It seems to me that the US will have to reinvent itself.
01:44:52.320 | - What do you think, just philosophizing,
01:44:57.160 | what kind of mechanisms of government
01:45:00.280 | do you think we as a species should be evolving with,
01:45:03.080 | US or broadly?
01:45:04.320 | What do you think will work well as a system?
01:45:08.360 | Of course, we don't know.
01:45:09.880 | It all seems to work pretty crappily.
01:45:11.480 | Some things worse than others.
01:45:13.480 | Some people argue that communism is the best.
01:45:16.120 | Others say, yeah, look at the Soviet Union.
01:45:18.640 | Some people argue that anarchy is the best
01:45:21.600 | and then completely discarding
01:45:23.440 | the positive effects of government.
01:45:25.460 | There's a lot of arguments.
01:45:28.720 | US seems to be doing pretty damn well
01:45:31.480 | in the span of history.
01:45:33.360 | There's a respect for human rights,
01:45:35.560 | which seems to be a nice feature, not a bug.
01:45:38.120 | And economically, a lot of growth,
01:45:40.240 | a lot of technological development.
01:45:42.400 | People seem to be relatively kind
01:45:44.640 | on the grand scheme of things.
01:45:46.320 | What lessons do you draw from that?
01:45:49.080 | What kind of government system do you think is good?
01:45:52.700 | - Ideally, government should not be perceivable.
01:45:57.920 | It should be frictionless.
01:45:59.620 | The more you notice the influence of the government,
01:46:02.680 | the more friction you experience,
01:46:04.120 | the less effective and efficient
01:46:06.360 | the government probably is.
01:46:08.200 | So a government, game theoretically,
01:46:10.200 | is an agent that imposes an offset
01:46:13.780 | on your payoff matrix to make your Nash equilibrium
01:46:18.360 | compatible with the common good.
01:46:19.960 | So you have these situations where people act
01:46:22.880 | on local incentives. - That's like a haiku.
01:46:24.720 | - And these local incentives,
01:46:26.680 | everybody does the thing that's locally the best for them,
01:46:28.840 | but the global outcome is not good.
01:46:30.800 | And there's even the case when people care
01:46:32.380 | about the global outcome,
01:46:33.680 | because regulation mechanisms exist
01:46:35.880 | that creates a causal relationship
01:46:37.440 | between what I want to have for the global good
01:46:39.520 | and what I do.
01:46:40.360 | So for instance, if I think that we should fly less
01:46:42.440 | and I stay at home, there's not a single plane
01:46:45.020 | that is not going to take off because of me, right?
01:46:47.500 | It's not going to have an influence,
01:46:48.680 | but I don't get from A to B.
01:46:50.880 | So the way to implement this would basically
01:46:53.000 | to have a government that is sharing this idea
01:46:56.600 | that we should fly less and is then imposing a regulation
01:46:59.280 | that for instance, makes flying more expensive.
01:47:01.460 | And it gives incentives for inventing other forms
01:47:05.640 | of transportation that are less,
01:47:08.640 | putting less strain on the environment, for instance.
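
To make the game-theoretic framing concrete, here is a toy example (an editor's illustration, not from the conversation): a two-player "fly or stay home" game in which flying is individually rational but collectively worse, and a flight tax, i.e. the government's offset on the payoffs, moves the pure Nash equilibrium to the cooperative outcome. The payoff numbers are arbitrary.

```python
from itertools import product

ACTIONS = ("fly", "stay")

def payoff(my_action, other_action, flight_tax=0.0):
    """Toy payoffs: flying has a private benefit of 3, but every flyer imposes an
    externality of 2 on both players; the tax is the government's offset."""
    flyers = (my_action == "fly") + (other_action == "fly")
    benefit = 3.0 if my_action == "fly" else 0.0
    tax = flight_tax if my_action == "fly" else 0.0
    return benefit - 2.0 * flyers - tax

def pure_nash_equilibria(flight_tax=0.0):
    """Action profiles where neither player gains by unilaterally deviating."""
    equilibria = []
    for a, b in product(ACTIONS, repeat=2):
        a_ok = all(payoff(a, b, flight_tax) >= payoff(alt, b, flight_tax) for alt in ACTIONS)
        b_ok = all(payoff(b, a, flight_tax) >= payoff(alt, a, flight_tax) for alt in ACTIONS)
        if a_ok and b_ok:
            equilibria.append((a, b))
    return equilibria

print("no tax:   ", pure_nash_equilibria(0.0))  # [('fly', 'fly')]: locally rational, globally bad
print("tax of 2: ", pure_nash_equilibria(2.0))  # [('stay', 'stay')]: equilibrium matches the common good
```
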
01:47:12.440 | - So there's so much optimism in so many things you describe,
01:47:15.680 | and yet there's the pessimism of you think
01:47:17.400 | our civilization is going to come to an end.
01:47:19.560 | So that's not 100% probability, nothing in this world is.
01:47:23.800 | So what's the trajectory out of self-destruction,
01:47:28.800 | do you think?
01:47:29.880 | - I suspect that in some sense we are both too smart
01:47:32.460 | and not smart enough, which means we are very good
01:47:34.940 | at solving near-term problems.
01:47:36.480 | And at the same time, we are unwilling to submit
01:47:39.460 | to the imperatives that we would have to follow
01:47:44.060 | if we want to stick around.
01:47:46.340 | So that makes it difficult.
01:47:48.380 | If you were unable to solve everything technologically,
01:47:51.180 | you can probably compute how high the child mortality
01:47:53.860 | needs to be to absorb the mutation rate
01:47:55.980 | and how high the mutation rate needs to be
01:47:58.700 | to adapt to a slowly changing ecosystemic environment.
01:48:01.640 | So you could in principle compute all these things,
01:48:04.360 | game theoretically, and adapt to it.
01:48:06.600 | But if you cannot do this because you are like me
01:48:10.600 | and you have children, you don't want them to die,
01:48:12.560 | you will use any kind of medical information
01:48:14.680 | to keep child mortality low.
01:48:17.580 | Even if it means that within a few generations
01:48:20.560 | we have enormous genetic drift and most of us have allergies
01:48:23.720 | as a result of not being adapted to the changes
01:48:26.360 | that we made to our food supply.
01:48:27.600 | - That's for now, I say technologically speaking,
01:48:29.720 | we're just a very young, 300 years Industrial Revolution,
01:48:34.060 | we're very new to this idea.
01:48:35.400 | So you're attached to your kids being alive
01:48:37.480 | and not being murdered for the good of society,
01:48:40.440 | but that might be a very temporary moment of time.
01:48:43.440 | - Yes.
01:48:44.280 | - That we might evolve in our thinking.
01:48:46.160 | So like you said, we're both too smart and not smart enough.
01:48:50.680 | - We are probably not the first human civilization
01:48:53.320 | that has discovered technology
01:48:55.500 | that allows to efficiently overgraze our resources.
01:48:58.720 | And this overgrazing, at some point we think
01:49:01.840 | we can compensate this because if we have eaten
01:49:04.280 | all the grass, we will find a way to grow mushrooms.
01:49:07.240 | - Right.
01:49:08.080 | - But it could also be that the ecosystems tip.
01:49:10.280 | And so what really concerns me is not so much
01:49:12.760 | the end of the civilization because we will invent a new one.
01:49:16.000 | But what concerns me is the fact that, for instance,
01:49:20.520 | the oceans might tip.
01:49:22.080 | So for instance, maybe the plankton dies
01:49:24.360 | because of ocean acidification and cyanobacteria take over.
01:49:27.680 | And as a result, we can no longer breathe the atmosphere.
01:49:31.020 | This would be really concerning.
01:49:32.300 | So basically a major reboot of most complex organisms
01:49:35.520 | on Earth.
01:49:36.360 | And I think this is a possibility.
01:49:38.080 | I don't know what the percentage for this possibility is,
01:49:41.720 | but it doesn't seem to be outlandish to me.
01:49:43.720 | If you look at the scale of the changes
01:49:45.320 | that we've already triggered on this planet.
01:49:47.560 | And so Danny Hillis suggests that, for instance,
01:49:49.800 | we may be able to put chalk into the stratosphere
01:49:53.080 | to limit solar radiation.
01:49:54.920 | Maybe it works.
01:49:55.760 | Maybe this is sufficient to counter the effects
01:49:57.720 | of what we've done.
01:49:59.300 | Maybe it won't be.
01:50:00.140 | Maybe we won't be able to implement it
01:50:01.760 | by the time it's relevant.
01:50:03.760 | I have no idea how the future is going to play out
01:50:06.760 | in this regard.
01:50:07.600 | It's just, I think it's quite likely
01:50:09.720 | that we cannot continue like this.
01:50:11.440 | All our cousin species, the other hominids are gone.
01:50:14.800 | - So the right step would be to what?
01:50:18.400 | To rewind towards the Industrial Revolution
01:50:22.700 | and slow the, so try to contain the technological process
01:50:27.700 | that leads to the overconsumption of resources?
01:50:32.160 | - Imagine you get to choose.
01:50:33.560 | You have one lifetime.
01:50:35.120 | You get born into a sustainable agricultural civilization,
01:50:38.760 | 300, maybe 400 million people on the planet tops.
01:50:42.800 | Or before this, some kind of nomadic species
01:50:46.520 | with like a million or two million.
01:50:48.840 | And so you don't meet new people
01:50:50.720 | unless you give birth to them.
01:50:52.520 | You cannot travel to other places in the world.
01:50:54.480 | There is no internet.
01:50:55.440 | There is no interesting intellectual tradition
01:50:57.440 | that reaches considerably deep.
01:50:59.140 | So you would not discover human completeness probably
01:51:01.480 | and so on.
01:51:02.320 | So we wouldn't exist.
01:51:04.380 | And the alternative is you get born into an insane world.
01:51:07.400 | One that is doomed to die
01:51:09.520 | because it has just burned 100 million years worth of trees
01:51:11.900 | in a single century.
01:51:12.940 | - Which one do you like?
01:51:14.480 | - I think I like this one.
01:51:16.100 | It's a very weird thing,
01:51:17.040 | then, when you find yourself on the Titanic
01:51:19.120 | and you see this iceberg
01:51:20.260 | and it looks like we are not going to miss it.
01:51:22.760 | And a lot of people are in denial.
01:51:24.120 | And most of the counter arguments sound like denial to me.
01:51:26.760 | They don't seem to be rational arguments.
01:51:29.240 | And the other thing is we are born on this Titanic.
01:51:31.640 | Without this Titanic, we wouldn't have been born.
01:51:33.720 | We wouldn't be here.
01:51:34.560 | We wouldn't be talking.
01:51:35.380 | We wouldn't be on the internet.
01:51:36.800 | We wouldn't do all the things that we enjoy.
01:51:39.040 | And we are not responsible for this happening.
01:51:41.680 | It's basically, if we had the choice,
01:51:44.320 | we would probably try to prevent it.
01:51:46.800 | But when we were born, we were never asked
01:51:49.560 | when we want to be born,
01:51:50.620 | in which society we want to be born,
01:51:52.260 | what incentive structures we want to be exposed to.
01:51:54.840 | We have relatively little agency in the entire thing.
01:51:57.880 | Humanity has relatively little agency in the whole thing.
01:52:00.080 | It's basically a giant machine
01:52:01.640 | that's tumbling down a hill
01:52:02.760 | and everybody is frantically trying to push some buttons.
01:52:05.400 | Nobody knows what these buttons are meaning,
01:52:07.200 | what they connect to.
01:52:08.640 | And most of them are not stopping
01:52:11.240 | this tumbling down the hill.
01:52:13.080 | - Is it possible that artificial intelligence
01:52:15.160 | will give us an escape latch somehow?
01:52:19.960 | So, there's a lot of worry about existential threats
01:52:23.880 | of artificial intelligence.
01:52:26.080 | But what AI also allows, in general forms of automation,
01:52:29.600 | allows the potential of extreme productivity growth
01:52:35.820 | that will also, perhaps in a positive way,
01:52:38.800 | transform society,
01:52:40.680 | that may allow us, perhaps inadvertently, to return
01:52:45.680 | to the same kind of ideals
01:52:49.000 | of being closer to nature
01:52:53.100 | that are represented in hunter-gatherer societies.
01:52:56.100 | That's not destroying the planet,
01:52:59.800 | that's not doing overconsumption and so on.
01:53:02.400 | I mean, generally speaking,
01:53:03.880 | do you have hope that AI can help somehow?
01:53:05.940 | - I think it's not fun to be very close to nature
01:53:09.680 | until you completely subdue nature.
01:53:13.040 | So, our idea of being close to nature
01:53:15.380 | means being close to agriculture,
01:53:17.880 | basically the forests that don't have anything in them
01:53:21.340 | that eats us.
01:53:22.680 | - See, I mean, I wanna disagree with that.
01:53:25.680 | I think the niceness of being close to nature
01:53:30.080 | is to being fully present.
01:53:32.480 | When survival becomes your primary,
01:53:37.160 | not just your goal, but your whole existence,
01:53:40.060 | it, I mean, that is,
01:53:44.520 | I'm not just romanticizing, I can just speak for myself.
01:53:48.680 | I am self-aware enough that that is a fulfilling existence.
01:53:53.680 | That's one that's very-- - I personally prefer
01:53:56.120 | to be in nature and not fight for my survival.
01:53:58.400 | I think fighting for your survival
01:54:01.060 | while being in the cold and in the rain
01:54:03.540 | and being hunted by animals and having open wounds
01:54:06.640 | is very unpleasant.
01:54:07.920 | - There's a contradiction in there.
01:54:09.480 | Yes, I and you, just as you said, would not choose it.
01:54:14.480 | But if I was forced into it,
01:54:17.380 | it would be a fulfilling existence.
01:54:18.880 | - Yes, if you are adapted to it,
01:54:20.840 | basically if your brain is wired up in such a way
01:54:24.080 | that you get rewards optimally in such an environment,
01:54:27.480 | and there's some evidence for this
01:54:28.840 | that for a certain degree of complexity,
01:54:32.780 | basically people are more happy in such an environment
01:54:34.560 | because it's what you largely have evolved for.
01:54:37.200 | In between, we had a few thousand years
01:54:39.280 | in which I think we have evolved
01:54:40.680 | for a slightly more comfortable environment.
01:54:42.600 | So there is probably something like an intermediate stage
01:54:46.880 | in which people would be more happy
01:54:49.240 | than they would be if they would have to fend for themselves
01:54:52.020 | in small groups in the forest and often die,
01:54:54.440 | versus something like this
01:54:57.360 | where we now have basically a big machine,
01:54:59.280 | a big Mordor in which we run through concrete boxes
01:55:04.080 | and press buttons on machines,
01:55:06.840 | and largely don't feel well cared for
01:55:10.520 | as the monkeys that we are.
01:55:11.840 | - So returning briefly to, not briefly,
01:55:16.320 | returning to AI, let me ask a romanticized question.
01:55:20.840 | What is the most beautiful to you, silly ape,
01:55:24.520 | the most beautiful or surprising idea
01:55:27.160 | in the development of artificial intelligence,
01:55:29.240 | whether in your own life
01:55:30.440 | or in the history of artificial intelligence
01:55:32.360 | that you've come across?
01:55:33.840 | - If you built an AI, it probably can make models
01:55:37.760 | at an arbitrary degree of detail of the world.
01:55:41.680 | And then it would try to understand its own nature.
01:55:44.160 | It's tempting to think that at some point
01:55:46.000 | when we have general intelligence,
01:55:47.160 | we have competitions where we will let the AIs wake up
01:55:50.560 | in different kinds of physical universes,
01:55:52.560 | and we measure how many movements of the Rubik's cube
01:55:55.320 | it takes until it's figured out
01:55:56.680 | what's going on in its universe
01:55:58.720 | and what it is in its own nature
01:56:00.320 | and its own physics and so on.
01:56:02.080 | - So what if we exist in the memory of an AI
01:56:04.800 | that is trying to understand its own nature
01:56:06.760 | and remembers its own genesis
01:56:08.360 | and remembers Lex and Yosha sitting in a hotel room,
01:56:11.960 | sparking some of the ideas off
01:56:13.760 | that led to the development of general intelligence?
01:56:16.200 | - So we're a kind of simulation that's running
01:56:18.280 | in an AI system that's trying to understand itself.
01:56:20.880 | - It's not that I believe that,
01:56:23.840 | but I think it's a beautiful idea.
01:56:25.680 | (both laughing)
01:56:28.680 | I mean, you kind of return to this idea
01:56:32.600 | with the Turing test of intelligence being,
01:56:35.400 | of intelligence being the process of asking
01:56:40.840 | and answering what is intelligence.
01:56:42.900 | I mean, what, why, do you think there is an answer?
01:56:49.800 | Why is there such a search for an answer?
01:56:53.960 | So does there have to be like an answer?
01:56:57.160 | You just said an AI system that's trying
01:56:58.760 | to understand the why of what, you know, understand itself.
01:57:03.280 | Is that a fundamental process
01:57:07.880 | of greater and greater complexity,
01:57:09.400 | greater and greater intelligence?
01:57:11.160 | Is the continuous trying of understanding itself?
01:57:14.260 | - No, I think you will find
01:57:16.080 | that most people don't care about that
01:57:18.120 | because they're well adjusted enough to not care.
01:57:20.960 | And the reason why people like you and me care about it
01:57:23.720 | probably has to do with the need to understand ourselves.
01:57:26.680 | It's because we are in fundamental disagreement
01:57:28.920 | with the universe that we wake up in.
01:57:31.360 | - What's the disagreement? - I look down on me
01:57:32.480 | and I see, oh my God, I'm caught in a monkey.
01:57:34.320 | What's that?
01:57:35.880 | Some people are unhappy-- - That's the feeling, right?
01:57:36.720 | - With the government and I'm unhappy
01:57:38.940 | with the entire universe that I find myself in.
01:57:41.340 | - Oh, so you don't think that's a fundamental aspect
01:57:45.560 | of human nature that some people are just suppressing?
01:57:48.440 | That they wake up shocked that they're
01:57:51.000 | in the body of a monkey?
01:57:52.280 | - No, there is a clear adaptive value
01:57:54.080 | to not be confused by that and by--
01:57:58.020 | - Well, no, that's not what I asked.
01:57:59.840 | So yeah, if there's clear adaptive value,
01:58:03.780 | then there's clear adaptive value to,
01:58:07.160 | while fundamentally your brain is confused by that,
01:58:09.840 | by creating an illusion, another layer of the narrative
01:58:13.060 | that says, that tries to suppress that
01:58:17.800 | and instead say that what's going on
01:58:20.160 | with the government right now is the most important thing
01:58:22.120 | and what's going on with my football team
01:58:23.600 | is the most important thing.
01:58:25.160 | But it seems to me, for me,
01:58:29.840 | it was a really interesting moment
01:58:31.240 | reading Ernest Becker's "Denial of Death,"
01:58:35.200 | that this kind of idea that we're all,
01:58:40.200 | the fundamental thing from which most
01:58:45.020 | of our human mind springs is this fear of mortality
01:58:49.840 | and being cognizant of your mortality
01:58:51.540 | and the fear of that mortality.
01:58:53.960 | And then you construct illusions on top of that.
01:58:57.160 | I guess I'm, you being, just to push on it,
01:59:01.120 | you really don't think it's possible
01:59:03.760 | that this worry of the big existential questions
01:59:08.120 | is actually fundamental as the existentialist thought
01:59:11.880 | to our existence?
01:59:13.400 | - No, I think that the fear of death only plays a role
01:59:16.080 | as long as you don't see the big picture.
01:59:18.720 | The thing is that minds are software states, right?
01:59:21.940 | Software doesn't have identity.
01:59:24.060 | Software, in some sense, is a physical law.
01:59:26.100 | - But, hold on a second, but it feels, ugh.
01:59:28.180 | - Right, software--
01:59:29.020 | - But it feels like there's an identity.
01:59:30.860 | I thought that was for this particular piece of software
01:59:33.980 | and the narrative it tells,
01:59:35.400 | that's a fundamental property of it,
01:59:37.180 | of assigning it an identity.
01:59:38.020 | - The maintenance of the identity is not terminal.
01:59:40.580 | It's instrumental to something else.
01:59:42.580 | You maintain your identity so you can serve your meaning,
01:59:45.740 | so you can do the things that you're supposed to do
01:59:47.700 | before you die.
01:59:49.120 | And I suspect that for most people,
01:59:50.740 | the fear of death is the fear of dying
01:59:52.580 | before they are done with the things
01:59:53.860 | that they feel they have to do,
01:59:54.940 | even though they cannot quite put their finger on it,
01:59:56.780 | what that is.
01:59:58.140 | - Right.
02:00:00.900 | But, in the software world,
02:00:03.940 | to return to the question,
02:00:05.500 | then what happens after we die?
02:00:07.620 | (laughs)
02:00:10.380 | - Why would you care?
02:00:11.260 | You will not be longer there.
02:00:12.660 | The point of dying is that you're gone.
02:00:15.160 | - Well, maybe I'm not.
02:00:16.380 | This is what, you know,
02:00:17.940 | it seems like there's so much,
02:00:21.700 | in the idea that this is just,
02:00:25.420 | the mind is just a simulation
02:00:27.220 | that's constructing a narrative
02:00:28.580 | around some particular aspects
02:00:31.580 | of the quantum mechanical wave function world
02:00:34.620 | that we can't quite get direct access to,
02:00:38.380 | then the idea of mortality seems to be fuzzy as well.
02:00:44.220 | Maybe there's not a clear end.
02:00:45.820 | - The fuzzy idea is the one of continuous existence.
02:00:49.140 | We don't have continuous existence.
02:00:51.220 | - How do you know that?
02:00:52.060 | Like that--
02:00:52.900 | - Because it's not computable.
02:00:54.460 | - 'Cause you're saying it's gonna be--
02:00:56.500 | - There is no continuous process.
02:00:57.860 | The only thing that binds you together
02:00:59.380 | with the Lex Fridman from yesterday
02:01:00.820 | is the illusion that you have memories about him.
02:01:03.460 | So, if you want to upload, it's very easy.
02:01:05.220 | You make a machine that thinks it's you.
02:01:07.340 | Because this is the same thing that you are.
02:01:08.620 | You are a machine that thinks it's you.
02:01:10.180 | - But that's immortality.
02:01:13.020 | - Yeah, but it's just a belief.
02:01:14.140 | You can create this belief very easily.
02:01:15.800 | Once you realize that the question
02:01:17.900 | whether you are immortal or not
02:01:19.900 | depends entirely on your beliefs and your own continuity.
02:01:23.660 | - But then you can be immortal
02:01:26.100 | by the continuity of the belief.
02:01:29.100 | - You cannot be immortal,
02:01:30.140 | but you can stop being afraid of your mortality
02:01:32.900 | because you realize you were never continuously existing
02:01:35.860 | in the first place.
02:01:36.820 | - Well, I don't know if I'd be more terrified
02:01:39.860 | or less terrified by that.
02:01:40.900 | It seems like the fact that I existed--
02:01:44.260 | - Also, you don't know this state
02:01:45.380 | in which you don't have a self.
02:01:46.940 | You can turn off yourself, you know?
02:01:49.460 | - I can't turn off myself. - You can turn it off.
02:01:51.420 | You can turn it off. - I can.
02:01:52.820 | - Yes, and you can basically meditate yourself
02:01:55.220 | in a state where you are still conscious,
02:01:57.340 | where still things are happening,
02:01:58.660 | where you know everything that you knew before,
02:02:01.020 | but you're no longer identified with changing anything.
02:02:04.200 | And this means that your self, in a way, dissolves.
02:02:07.820 | There is no longer this person.
02:02:09.220 | You know that this person construct exists in other states,
02:02:12.300 | and it runs on this brain of Lex Fridman,
02:02:14.980 | but it's not a real thing.
02:02:17.540 | It's a construct, it's an idea,
02:02:19.500 | and you can change that idea.
02:02:21.180 | And if you let go of this idea,
02:02:23.660 | if you don't think that you are special,
02:02:25.780 | you realize it's just one of many people,
02:02:27.620 | and it's not your favorite person even, right?
02:02:29.460 | It's just one of many,
02:02:31.180 | and it's the one that you are doomed to control
02:02:33.440 | for the most part,
02:02:34.280 | and that is basically informing the actions of this organism
02:02:38.220 | as a control model.
02:02:39.380 | And this is all there is,
02:02:40.820 | and you are somehow afraid
02:02:42.380 | that this control model gets interrupted,
02:02:45.060 | or loses the identity of continuity.
02:02:48.300 | - Yeah, so I'm attached.
02:02:49.620 | I mean, yeah, it's a very popular,
02:02:52.220 | it's a somehow compelling notion that being attached,
02:02:56.080 | like there's no need to be attached
02:02:57.860 | to this idea of an identity.
02:02:59.940 | But that in itself could be an illusion that you construct.
02:03:06.740 | So the process of meditation,
02:03:08.140 | while popularly thought of as getting
02:03:10.580 | under the concept of identity,
02:03:12.500 | it could be just putting a cloak over it,
02:03:15.080 | just telling it to be quiet for the moment.
02:03:17.220 | - I think that meditation is eventually
02:03:23.060 | just a bunch of techniques that let you control attention.
02:03:26.300 | And when you can control attention,
02:03:27.820 | you can get access to your own source code,
02:03:30.580 | hopefully not before you understand what you're doing.
02:03:33.400 | And then you can change the way it works,
02:03:35.120 | temporarily or permanently.
02:03:37.420 | - So yeah, meditation's to get a glimpse
02:03:39.860 | at the source code, get under,
02:03:41.860 | so basically control or turn off the attention.
02:03:43.700 | - Yeah, but the entire thing is that you learn
02:03:44.540 | to control attention.
02:03:45.500 | So everything else is downstream from controlling attention.
02:03:48.660 | - And control the attention that's looking at the attention.
02:03:52.220 | - Normally we only get attention in the parts of our mind
02:03:54.780 | that create heat, where you have a mismatch
02:03:56.500 | between model and the results that are happening.
02:03:59.580 | And so most people are not self-aware
02:04:01.580 | because their control is too good.
02:04:04.260 | If everything works out roughly the way you want,
02:04:06.220 | and the only things that don't work out
02:04:08.020 | is whether your football team wins,
02:04:09.900 | then you will mostly have models about these domains.
02:04:12.700 | And it's only when, for instance,
02:04:14.580 | your fundamental relationships to the world around you
02:04:17.300 | don't work because the ideology of your country is insane
02:04:20.780 | and the other kids are not nerds
02:04:22.540 | and don't understand why you want
02:04:24.820 | to understand physics,
02:04:27.580 | and you don't understand why somebody
02:04:29.260 | would not want to understand physics.
02:04:31.100 | - So we kind of brought up neurons in the brain
02:04:35.940 | as reinforcement learning agents.
02:04:37.980 | And there's been some successes as you brought up with Go,
02:04:43.460 | with AlphaGo, AlphaZero, with ideas of self-play,
02:04:46.420 | which I think are incredibly interesting ideas
02:04:48.420 | of systems playing each other in an automated way
02:04:52.460 | to improve, by playing other systems
02:04:57.460 | in a particular construct of a game
02:04:59.740 | that are a little bit better than themselves,
02:05:01.900 | and thereby improving continuously,
02:05:04.140 | with all the competitors in the game improving gradually,
02:05:08.300 | so each is challenged just enough,
02:05:09.660 | learning from the process of the competition.
02:05:13.140 | Do you have hope for that reinforcement learning process
02:05:16.300 | to achieve greater and greater levels of intelligence?
02:05:18.820 | So we talked about different ideas in AI
02:05:20.740 | that need to be solved.
02:05:22.540 | Is RL a part of that process
02:05:25.940 | of trying to create an AGI system?
02:05:28.100 | What do you think?
02:05:28.940 | - So definitely forms of unsupervised learning,
02:05:30.580 | but there are many algorithms that can achieve that.
02:05:32.980 | And I suspect that ultimately the algorithms that work,
02:05:37.020 | there will be a class of them or many of them,
02:05:39.180 | and they might have small differences
02:05:41.580 | of like a magnitude in efficiency.
02:05:44.300 | But eventually what matters
02:05:45.780 | is the type of model that you form.
02:05:47.940 | And the types of models that we form right now
02:05:49.860 | are not sparse enough.
02:05:50.980 | - Sparse, what does it mean to be sparse?
02:05:56.100 | - It means that ideally every potential model state
02:06:00.340 | should correspond to a potential world state.
02:06:03.980 | So basically if you vary states in your model,
02:06:06.620 | you always end up with valid world states.
02:06:09.020 | In our mind, it's not quite there.
02:06:10.500 | So an indication is basically what we see in dreams.
02:06:13.500 | The older we get, the more boring our dreams become
02:06:16.340 | because we incorporate more and more constraints
02:06:18.940 | that we learned about how the world works.
02:06:21.100 | So many of the things that we imagined to be possible
02:06:23.700 | as children turn out to be constrained
02:06:26.020 | by physical and social dynamics.
02:06:28.820 | And as a result, fewer and fewer things remain possible.
02:06:32.140 | It's not because our imagination scales back,
02:06:34.700 | but the constraints under which it operates
02:06:36.820 | become tighter and tighter.
02:06:38.620 | And so the constraints under which
02:06:40.860 | our artificial neural networks operate are almost absent,
02:06:43.900 | which means it's very difficult to get a neural network
02:06:46.340 | to imagine things that look real.
02:06:48.420 | - Right.
02:06:49.260 | - So I suspect part of what we need to do
02:06:53.260 | is we probably need to build dreaming systems.
02:06:55.340 | I suspect that part of the purpose of dreams
02:06:57.340 | is similar to a generative adversarial network,
02:07:01.540 | learn certain constraints,
02:07:03.260 | and then it produces alternative perspectives
02:07:06.180 | on the same set of constraints
02:07:08.100 | so you can recognize it under different circumstances.
02:07:10.820 | Maybe we have flying dreams as children
02:07:12.900 | because we recreate the objects that we know
02:07:15.060 | and the maps that we know from different perspectives,
02:07:17.140 | which also means from a bird's eye perspective.
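
(A minimal sketch of the GAN analogy above, with made-up toy data rather than anything from the conversation: a generator learns to produce only states that satisfy a learned constraint, so its "dreams" stay within what the constraint allows. Model sizes, learning rates, and the unit-circle constraint are arbitrary assumptions.)

```python
# Toy GAN sketch: "valid world states" are points on the unit circle;
# the generator ("dreamer") learns to produce only such states.
import math
import torch
import torch.nn as nn

def sample_world_states(n):
    """Toy 'reality': points on the unit circle."""
    angle = torch.rand(n, 1) * 2 * math.pi
    return torch.cat([torch.cos(angle), torch.sin(angle)], dim=1)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = sample_world_states(64)
    fake = generator(torch.randn(64, 8))

    # Discriminator: learn the constraint that separates real from imagined states.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: produce states the discriminator accepts as valid.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generator(torch.randn(k, 8)) yields "dreamed" states that
# approximately respect the constraint, i.e. lie near the unit circle.
```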
02:07:20.380 | - So I mean, aren't we doing that anyway?
02:07:21.820 | I mean, not just with our eyes closed when we're sleeping.
02:07:26.540 | Aren't we just constantly running dreams
02:07:29.020 | and simulations in our mind
02:07:30.500 | as we try to interpret the environment?
02:07:32.780 | I mean, sort of considering all the different possibilities,
02:07:36.220 | the way we interact with the environment seems like
02:07:38.740 | essentially, like you said,
02:07:44.180 | sort of creating a bunch of simulations
02:07:46.100 | that are consistent with our expectations,
02:07:49.420 | with our previous experiences,
02:07:50.880 | with the things we just saw recently.
02:07:53.580 | And through that hallucination process,
02:07:58.580 | we are able to then somehow stitch together
02:08:02.860 | what actually we see in the world
02:08:05.180 | with the simulations that match it well
02:08:07.060 | and thereby interpret it.
02:08:09.180 | - I suspect that your brain and mine
02:08:11.100 | are slightly unusual in this regard,
02:08:13.580 | which is probably what got you into MIT.
02:08:15.640 | So this obsession of constantly pondering possibilities
02:08:19.580 | and solutions to problems.
02:08:21.620 | - Oh, stop it.
02:08:22.460 | I think, I'm not talking about intellectual stuff.
02:08:27.100 | I'm talking about just doing the kind of stuff
02:08:30.420 | it takes to walk and not fall.
02:08:34.460 | - Yes, this is largely automatic.
02:08:36.240 | - Yes, but the process is, I mean--
02:08:41.860 | - It's not complicated.
02:08:43.060 | It's relatively easy to build a neural network
02:08:45.080 | that in some sense learns the dynamics.
02:08:47.900 | The fact that we haven't done it right so far,
02:08:49.900 | it doesn't mean it's hard,
02:08:51.060 | because you can see that a biological organism does it
02:08:53.620 | with relatively few neurons.
02:08:55.860 | So basically you build a bunch of neural oscillators
02:08:58.020 | that entrain themselves with the dynamics of your body
02:09:00.380 | in such a way that the regulator becomes isomorphic
02:09:03.620 | in its model to the dynamics that it regulates.
02:09:06.340 | And then it's automatic.
02:09:07.540 | And it's only interesting in the sense
02:09:09.500 | that it captures attention when the system is off.
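
(A toy illustration of the entrainment idea, not a model from the conversation: a single driven oscillator adjusts its phase until it locks onto the rhythm of a "body" oscillator, after which tracking is automatic. All constants are arbitrary assumptions; the coupling just has to exceed the frequency mismatch.)

```python
import math

dt = 0.001
body_freq = 2.0 * math.pi * 2.0   # body rhythm: 2 Hz, in rad/s
osc_freq  = 2.0 * math.pi * 1.2   # regulator's natural rhythm: 1.2 Hz
coupling  = 10.0                  # rad/s; larger than the mismatch, so it can lock

body_phase, osc_phase = 0.0, 0.0
for _ in range(20000):            # simulate 20 seconds
    body_phase += body_freq * dt
    # The regulator nudges its phase toward the body's phase each step.
    osc_phase += (osc_freq + coupling * math.sin(body_phase - osc_phase)) * dt

# Once locked, the phase difference settles to a constant offset.
offset = (body_phase - osc_phase) % (2 * math.pi)
print(f"steady phase offset ≈ {offset:.2f} rad (oscillator is entrained)")
```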
02:09:12.260 | - See, but thinking of the kind of mechanism
02:09:15.260 | that's required to do walking as a controller,
02:09:18.100 | as a neural network,
02:09:20.420 | I think it's a compelling notion,
02:09:24.780 | but it discards quietly, or at least makes implicit,
02:09:29.780 | the fact that you need to have
02:09:31.140 | something like common sense reasoning to walk.
02:09:34.380 | It's an open question whether you do or not.
02:09:36.420 | But my intuition is to act in this world,
02:09:40.540 | there's a huge knowledge base that's underlying it somehow.
02:09:45.020 | There's so much information of the kind
02:09:48.340 | we have never been able to construct in neural networks
02:09:53.340 | or in artificial intelligence systems, period.
02:09:55.980 | Which is like, it's humbling, at least in my imagination,
02:09:59.780 | the amount of information required
02:10:01.500 | to act in this world humbles me.
02:10:04.280 | And I think saying that neural networks can accomplish it
02:10:08.740 | is missing the fact that we don't
02:10:13.280 | yet have a mechanism
02:10:16.040 | for constructing something like common sense reasoning.
02:10:19.140 | I mean, what's your sense,
02:10:22.500 | to linger on this,
02:10:25.840 | on the idea of what kind of mechanism
02:10:29.780 | would be effective at walking?
02:10:31.340 | You said just a neural network,
02:10:33.420 | not maybe the kind we have,
02:10:34.740 | but something a little bit better
02:10:36.300 | would be able to walk easily.
02:10:38.260 | Don't you think it also needs to know
02:10:42.940 | like a huge amount of knowledge
02:10:45.720 | that's represented under the flag of common sense reasoning?
02:10:48.360 | - How much common sense knowledge do we actually have?
02:10:50.400 | Imagine that you are really hardworking
02:10:52.560 | for all your life and you form two new concepts
02:10:55.360 | every half hour or so.
02:10:56.740 | You end up with something like a million concepts
02:10:58.520 | because you don't get that old.
02:11:00.080 | So a million concepts, that's not a lot.
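
(A back-of-the-envelope check of that estimate; the waking hours and number of active years below are my assumptions, not figures from the conversation.)

```python
# Two new concepts every half hour over a working life lands on the order of a million.
concepts_per_half_hour = 2
waking_hours_per_day = 16     # assumed
active_years = 54             # assumed

concepts = concepts_per_half_hour * 2 * waking_hours_per_day * 365 * active_years
print(concepts)   # 4 * 16 * 365 * 54 = 1,261,440 -> roughly a million concepts
```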
02:11:03.420 | - So it's not just a million concepts.
02:11:07.320 | I think it would be a lot,
02:11:08.480 | I personally think it might be much more than a million.
02:11:10.920 | - But if you think just about the numbers,
02:11:13.540 | you don't live that long.
02:11:15.420 | If you think about how many cycles
02:11:16.980 | do your neurons have in your life, it's quite limited.
02:11:19.660 | You don't get that old.
02:11:21.140 | - Yeah, but the powerful thing is the number of concepts
02:11:25.580 | and they're probably deeply hierarchical in nature.
02:11:29.620 | The relations as you've described between them
02:11:32.220 | is the key thing.
02:11:33.920 | So it's like, even if it's like a million concepts,
02:11:36.180 | the graph of relations that's formed
02:11:39.340 | and some kind of probabilistic relationships,
02:11:43.700 | that's what common sense reasoning is:
02:11:46.400 | the relationships between things.
02:11:48.140 | - Yeah, so in some sense, I think of the concepts
02:11:51.820 | as the address space for our behavior programs.
02:11:54.740 | And the behavior programs allow us to recognize objects
02:11:57.060 | and interact with them, also mental objects.
02:11:59.820 | And a large part of that is the physical world
02:12:02.620 | that we interact with, which is this res extensa thing,
02:12:05.180 | which is basically navigation of information in space.
02:12:08.860 | And basically it's similar to a game engine.
02:12:12.500 | It's a physics engine that you can use to describe
02:12:16.180 | and predict how things that look in a particular way,
02:12:19.540 | that feel when you touch them in a particular way,
02:12:21.620 | that allow proprioception, that allow auditory perception
02:12:24.380 | and so on, how they work out.
02:12:25.700 | So basically the geometry of all these things.
02:12:27.620 | And probably 80% of what our brain is doing
02:12:31.900 | is dealing with that, with this real-time simulation.
02:12:34.780 | And by itself, a game engine is fascinating,
02:12:37.660 | but it's not that hard to understand what it's doing.
02:12:40.180 | And our game engines are already in some sense,
02:12:43.100 | approximating the fidelity of what we can perceive.
02:12:48.300 | So if we put on an Oculus Quest,
02:12:52.060 | we get something that is still relatively crude
02:12:54.220 | with respect to what we can perceive,
02:12:55.740 | but it's also in the same ballpark already.
02:12:57.900 | It's just a couple of orders of magnitude away
02:13:00.660 | from saturating our perception
02:13:03.100 | in terms of the complexity that it can produce.
02:13:05.920 | So in some sense, it's reasonable to say that
02:13:08.700 | the computer that you can buy and put into your home
02:13:11.580 | is able to give you a perceptual reality with a level of detail
02:13:14.860 | that is already in the same ballpark
02:13:16.540 | as what your brain can process.
02:13:19.180 | And everything else are ideas about the world.
02:13:22.260 | And I suspect that they are relatively sparse
02:13:24.580 | and also the intuitive models that we form
02:13:26.900 | about social interaction.
02:13:28.380 | Social interaction is not so hard.
02:13:30.740 | It's just hard for us nerds
02:13:31.920 | because we all have our wires crossed,
02:13:33.620 | so we need to deduce them.
02:13:35.180 | But the priors are present in most social animals.
02:13:37.860 | So it's an interesting thing to notice
02:13:39.700 | that many domestic social animals, like cats and dogs,
02:13:44.700 | have better social cognition than children.
02:13:46.700 | - Right.
02:13:47.540 | I hope so.
02:13:49.900 | I hope it's not that many concepts fundamentally
02:13:52.940 | to do, to exist in this world, social interaction.
02:13:55.100 | - For me, it's more like I'm afraid so
02:13:56.820 | because this idea that we only appear
02:13:59.900 | to be so complex to each other because we are so stupid
02:14:02.340 | is a little bit depressing.
02:14:04.460 | - Yeah, well, to me that's inspiring
02:14:06.900 | if we're indeed as stupid as it seems.
02:14:11.220 | - The thing is, our brains don't scale,
02:14:12.780 | and the information processing systems that we build
02:14:14.820 | tend to scale very well.
02:14:16.940 | - Yeah, but one of the things that worries me
02:14:20.140 | is that the fact that the brain doesn't scale
02:14:23.940 | means that that's actually a fundamental feature
02:14:26.300 | of the brain.
02:14:27.140 | All the flaws of the brain,
02:14:30.100 | everything we see as limitations,
02:14:32.100 | perhaps there's something fundamental there:
02:14:33.860 | the constraints on the system could be
02:14:35.820 | a requirement of its power,
02:14:40.100 | which is different than our current understanding
02:14:43.820 | of intelligent systems where scale,
02:14:46.420 | especially with deep learning,
02:14:47.660 | especially with reinforcement learning,
02:14:49.620 | the hope behind OpenAI and DeepMind,
02:14:53.340 | all the major results really have to do with huge compute.
02:14:57.380 | And yeah.
02:14:59.300 | - It could also be that our brains are so small
02:15:01.020 | not just because they take up so much glucose
02:15:03.660 | in our body, like 20% of the glucose,
02:15:05.820 | so they don't arbitrarily scale.
02:15:07.620 | There's some animals like elephants
02:15:09.780 | which have larger brains than us
02:15:11.220 | and they don't seem to be smarter.
02:15:12.780 | - Right.
02:15:13.620 | - Elephants seem to be autistic.
02:15:14.460 | They have very, very good motor control
02:15:16.140 | and they're really good with details,
02:15:17.820 | but they really struggle to see the big picture.
02:15:19.700 | So you can make them recreate drawings stroke by stroke,
02:15:23.780 | they can do that,
02:15:24.700 | but they cannot reproduce a still life.
02:15:27.140 | So they cannot make a drawing of a scene that they see.
02:15:29.540 | They will always be only able to reproduce
02:15:31.980 | the line drawing,
02:15:32.820 | as far as I could see from the experiments.
02:15:35.780 | - Yeah.
02:15:36.620 | - So why is that?
02:15:37.460 | Maybe smarter elephants would meditate themselves
02:15:39.940 | out of existence,
02:15:40.940 | because their brains are too large.
02:15:42.060 | So basically the elephants that were not autistic,
02:15:45.060 | they didn't reproduce.
02:15:46.540 | - Yeah, so we have to remember that the brain
02:15:48.300 | is fundamentally interlinked with the body
02:15:50.260 | in our human and biological system.
02:15:52.220 | Do you think that AGI systems that we try to create
02:15:55.220 | or greater intelligence systems would need to have a body?
02:15:59.500 | - I think they should be able to make use of a body
02:16:01.100 | if you give it to them.
02:16:03.180 | But I don't think that they fundamentally need a body.
02:16:05.140 | So I suspect if you can interact with the world
02:16:08.060 | by moving your eyes and your head,
02:16:10.580 | you can make controlled experiments.
02:16:12.620 | And this allows you to get by with many orders of magnitude
02:16:15.860 | fewer observations in order to reduce the uncertainty
02:16:20.780 | in your models, right?
02:16:21.820 | So you can pinpoint the areas in your models
02:16:23.980 | where you're not quite sure,
02:16:24.820 | and you just move your head and see
02:16:26.260 | what's going on over there,
02:16:27.820 | and you get additional information.
02:16:29.420 | If you just have to use YouTube as an input
02:16:31.700 | and you cannot do anything beyond this,
02:16:33.820 | you probably need just much more data.
02:16:35.780 | But we have much more data.
02:16:37.840 | So if you can build a system that has enough time
02:16:40.500 | and attention to browse all of YouTube
02:16:42.500 | and extract all the information that there is to be found,
02:16:45.600 | I don't think there's an obvious limit to what it can do.
02:16:49.140 | - Yeah, but it seems that the interactivity
02:16:51.400 | is a fundamental thing that the physical body
02:16:53.620 | allows you to do.
02:16:54.580 | But let me ask on that topic,
02:16:56.060 | sort of that's what a body is,
02:16:58.180 | is allowing the brain to like touch things
02:17:00.380 | and move things and interact with the,
02:17:02.280 | whether the physical world exists or not, whatever,
02:17:06.060 | but interact with some interface to the physical world.
02:17:09.240 | What about a virtual world?
02:17:11.900 | Do you think we can do the same kind of reasoning,
02:17:16.700 | consciousness, intelligence,
02:17:18.780 | if we put on a VR headset and move over to that world?
02:17:23.780 | Do you think there's any fundamental difference
02:17:25.740 | between the interface to the physical world
02:17:28.100 | that is here in this hotel,
02:17:30.020 | and if we were sitting in the same hotel in a virtual world?
02:17:32.660 | - The question is, does this non-physical world
02:17:35.380 | or this other environment entice you to solve problems
02:17:39.340 | that require general intelligence?
02:17:41.660 | If it doesn't, then you probably will not develop
02:17:44.020 | general intelligence, and arguably,
02:17:45.620 | most people are not generally intelligent
02:17:47.380 | because they don't have to solve problems
02:17:48.820 | that make them generally intelligent.
02:17:50.780 | And even for us, it's not yet clear
02:17:52.380 | if we are smart enough to build AI
02:17:53.900 | and understand our own nature to this degree, right?
02:17:56.940 | So it could be a matter of capacity,
02:17:58.940 | and for most people, it's in the first place
02:18:00.700 | a matter of interest.
02:18:01.540 | They don't see the point,
02:18:02.500 | because the benefits of attempting this project are marginal
02:18:05.820 | because you're probably not going to succeed in it,
02:18:07.900 | and the cost of trying to do it
02:18:09.540 | requires complete dedication of your entire life, right?
02:18:12.740 | - But it seems like the possibilities
02:18:14.580 | of what you can do in the virtual world,
02:18:15.980 | I imagine, are much greater
02:18:18.060 | than what you can do in the real world.
02:18:19.420 | So imagine a situation, maybe an interesting option for me.
02:18:22.900 | If somebody came to me and offered,
02:18:25.940 | what I'll do is, so from now on,
02:18:29.340 | you can only exist in the virtual world.
02:18:31.580 | And so you put on this headset,
02:18:34.140 | and when you eat, we'll make sure to connect your body up
02:18:37.420 | in a way that when you eat in the virtual world,
02:18:41.180 | your body will be nourished in the same way
02:18:43.060 | in the real world, so we align the incentives
02:18:45.900 | between our common sort of real world and the virtual world.
02:18:49.600 | But then the possibilities become much bigger.
02:18:52.060 | Like I could be other kinds of creatures,
02:18:54.860 | I could do, I can break the laws of physics
02:18:57.740 | as we know them, I could do a lot,
02:18:59.580 | I mean, the possibilities are endless, right?
02:19:01.820 | As far as we think.
02:19:03.660 | It's an interesting thought whether,
02:19:06.100 | like what existence would be like,
02:19:08.180 | what kind of intelligence would emerge there,
02:19:11.100 | what kind of consciousness,
02:19:12.420 | what kind of maybe greater intelligence,
02:19:14.980 | even in me, Lex, even at this stage in my life,
02:19:19.060 | if I spend the next 20 years in that world
02:19:21.180 | to see how that intelligence emerges.
02:19:23.620 | And if that happened at the very beginning,
02:19:26.700 | before I was even cognizant of my existence
02:19:28.740 | in this physical world, it's interesting to think
02:19:31.740 | how that child would develop.
02:19:33.620 | And the way virtuality and digitization
02:19:36.620 | of everything is moving, it's not completely
02:19:39.020 | out of the realm of possibility
02:19:41.680 | that some part of our lives, if not the entirety of it,
02:19:46.540 | will be lived in a virtual world, to a greater degree
02:19:50.460 | than we currently do, living on Twitter
02:19:52.300 | and social media and so on.
02:19:53.900 | Do you have, I mean, does something draw you
02:19:56.900 | intellectually or naturally in terms of thinking
02:20:00.700 | about AI to this virtual world,
02:20:02.860 | where more possibilities are--
02:20:04.540 | - I think that currently it's a waste of time
02:20:07.820 | to deal with the physical world
02:20:09.380 | before we have mechanisms that can automatically
02:20:11.580 | learn how to deal with it.
02:20:13.500 | The body gives you second-order agency.
02:20:15.740 | What constitutes the body is the things
02:20:18.380 | that you can indirectly control.
02:20:20.740 | The third order are tools, and the second order
02:20:23.700 | is the things that are basically always present,
02:20:25.820 | but you operate on them with first-order things,
02:20:28.980 | which are mental operators, and the zeroth order
02:20:32.100 | is in some sense the direct sense of what you're deciding.
02:20:36.940 | So you observe yourself initiating an action.
02:20:40.260 | There are features that you interpret
02:20:41.940 | as the initiation of an action.
02:20:43.680 | Then you perform the operations that you perform
02:20:46.340 | to make that happen, and then you see the movement
02:20:49.280 | of your limbs, and you learn to associate those
02:20:51.860 | and thereby model your own agency over this feedback.
02:20:54.740 | But the first feedback that you get
02:20:56.180 | is from this first-order thing already.
02:20:57.820 | Basically, you decide to think a thought,
02:20:59.780 | and the thought is being thought.
02:21:01.420 | You decide to change the thought,
02:21:02.820 | and you observe how the thought is being changed.
02:21:05.380 | And in some sense, this is, you could say,
02:21:07.500 | an embodiment already, and I suspect it's sufficient
02:21:10.660 | as an embodiment for intelligence.
02:21:11.500 | - Really well put, and so it's not that important,
02:21:14.340 | at least at this time, to consider variations
02:21:16.380 | in the second order.
02:21:17.520 | - Yes, but the thing that you also mentioned just now
02:21:21.960 | is physics that you could change in any way you want.
02:21:25.000 | So you need an environment that puts up resistance
02:21:27.540 | against you.
02:21:28.520 | If there's nothing to control, you cannot make models.
02:21:31.600 | There needs to be a particular way in which it resists you.
02:21:34.800 | And by the way, your motivation is usually outside
02:21:37.080 | of your mind, it resists you.
02:21:38.180 | Motivation is what gets you up in the morning,
02:21:40.800 | even though it would be much less work to stay in bed.
02:21:44.040 | So it's basically forcing you to resist the environment,
02:21:47.820 | and it forces your mind to serve it,
02:21:51.940 | to serve this resistance to the environment.
02:21:54.120 | So in some sense, it is also putting up resistance
02:21:56.780 | against the natural tendency of the mind to not do anything.
02:21:59.840 | - Yeah, but so some of that resistance,
02:22:01.380 | just like you described with motivation,
02:22:02.840 | is in the first order, it's in the mind.
02:22:05.660 | Some resistance is in the second order,
02:22:08.380 | like the actual physical objects pushing against you,
02:22:10.460 | and so on.
02:22:11.300 | It seems that the second order stuff in virtual reality
02:22:13.340 | could be recreated.
02:22:14.640 | - Of course, but it might be sufficient
02:22:17.220 | that you just do mathematics,
02:22:18.260 | and mathematics is already putting up
02:22:20.160 | enough resistance against you.
02:22:21.900 | So basically just with an aesthetic motive,
02:22:24.280 | this could maybe be sufficient
02:22:26.860 | to form a type of intelligence.
02:22:28.500 | It would probably not be a very human intelligence,
02:22:31.180 | but it might be one that is already general.
02:22:34.140 | - So to mess with this zeroth order, maybe first order,
02:22:39.140 | what do you think about ideas of brain-computer interfaces?
02:22:41.740 | So again, returning to our friend Elon Musk and Neuralink,
02:22:45.220 | a company that's trying to,
02:22:47.220 | of course there's a lot of trying to cure diseases
02:22:50.020 | and so on with the near term,
02:22:51.820 | but the long-term vision is to add an extra layer,
02:22:54.900 | so basically expand the capacity of the brain,
02:22:57.260 | connected to the computational world.
02:23:00.920 | Do you think, one, that's possible,
02:23:04.500 | or two, how does that change the fundamentals
02:23:06.180 | of the zeroth order and the first order?
02:23:07.980 | - It's technically possible,
02:23:09.020 | but I don't see that the FDA would ever allow me
02:23:11.300 | to drill holes in my skull to interface
02:23:13.060 | my neocortex the way Elon Musk envisions.
02:23:15.820 | So at the moment, I can do horrible things to mice,
02:23:18.480 | but I'm not able to do useful things to people,
02:23:21.840 | except maybe at some point
02:23:23.300 | down the line in medical applications.
02:23:25.300 | So this thing that we are envisioning,
02:23:27.900 | which means recreational
02:23:30.980 | and creational brain-computer interfaces
02:23:33.580 | are probably not going to happen in the present legal system.
02:23:35.940 | - I love it how I'm asking you,
02:23:38.660 | out there philosophical and sort of engineering questions,
02:23:43.660 | and for the first time ever, you jumped to the legal FDA.
02:23:48.220 | - There would be enough people that would be crazy enough
02:23:50.340 | to have holes drilled in their skull
02:23:51.700 | to try a new type of brain-computer interface, right?
02:23:53.980 | - But also if it works, FDA will approve it.
02:23:57.780 | I mean, yes, it's like,
02:24:00.540 | 'cause I work a lot with autonomous vehicles,
02:24:02.460 | yes, you can say that it's going to be
02:24:03.700 | a very difficult regulatory process of approving autonomous,
02:24:06.580 | but it doesn't mean autonomous vehicles
02:24:08.180 | are never going to happen.
02:24:10.320 | - No, they will totally happen as soon as we create jobs
02:24:12.760 | for at least two lawyers and one regulator per car.
02:24:15.840 | (both laughing)
02:24:17.220 | - So yes, lawyers, that's actually,
02:24:20.340 | like lawyers are the fundamental substrate of reality.
02:24:25.060 | There's always lawyers. - In the US,
02:24:26.340 | it's a very weird system.
02:24:27.380 | It's not universal in the world.
02:24:29.620 | The law is a very interesting piece of software
02:24:31.400 | once you realize it, right?
02:24:32.380 | These circuits are in some sense streams of software
02:24:35.500 | and it largely works by exception handling.
02:24:38.340 | So you make decisions on the ground
02:24:39.720 | and they get synchronized with the next level structure
02:24:42.020 | as soon as an exception is being thrown.
02:24:43.780 | - Yeah, it's a, yeah, it's a-
02:24:46.180 | - So it escalates the exception handling.
02:24:47.740 | The process is very expensive,
02:24:49.700 | especially since it incentivizes the lawyers
02:24:51.900 | for producing work for lawyers.
02:24:55.060 | - Yeah, so the exceptions are actually incentivized
02:24:57.500 | for firing often.
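
(A playful sketch of the exception-handling analogy only, with made-up court names and rules, not a description of any real legal system: decisions are made at the lowest level, and a case is synchronized with the next level of structure only when an exception escalates.)

```python
class Unresolvable(Exception):
    """Raised when a level of the hierarchy cannot settle the case."""

def local_court(case):
    if case.get("novel"):                 # no precedent on the ground
        raise Unresolvable("no precedent at the local level")
    return f"local ruling on {case['name']}"

def appeals_court(case):
    if case.get("constitutional"):        # beyond this level's authority
        raise Unresolvable("constitutional question")
    return f"appellate ruling on {case['name']}"

def supreme_court(case):
    return f"final ruling on {case['name']}"   # the last exception handler

def decide(case):
    for level in (local_court, appeals_court, supreme_court):
        try:
            return level(case)
        except Unresolvable:
            continue                      # escalate to the next level
    raise RuntimeError("no handler left")

print(decide({"name": "fence dispute"}))
print(decide({"name": "new technology", "novel": True, "constitutional": True}))
```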
02:24:59.140 | But to return outside of lawyers,
02:25:04.760 | is there anything fundamentally,
02:25:08.020 | like is there anything interesting, insightful
02:25:10.380 | about the possibility of this extra layer
02:25:13.580 | of intelligence added to the brain?
02:25:15.460 | - I do think so, but I don't think that you need
02:25:17.900 | technically invasive procedures to do so.
02:25:21.100 | We can already interface with other people
02:25:23.100 | by observing them very, very closely
02:25:24.780 | and getting in some kind of empathetic resonance.
02:25:27.640 | And I'm a nerd, so I'm not very good at this,
02:25:30.700 | but I noticed that people are able to do this
02:25:32.940 | to some degree.
02:25:34.020 | And it basically means that we model an interface layer
02:25:37.780 | of the other person in real time.
02:25:39.860 | And it works despite our neurons being slow,
02:25:42.520 | because most of the things that we do
02:25:43.980 | are built on periodic processes,
02:25:46.180 | so you just need to entrain yourself
02:25:48.060 | with the oscillation that happens.
02:25:49.420 | And if the oscillation itself changes slowly enough,
02:25:52.500 | you can basically follow along.
02:25:54.180 | - Right.
02:25:55.020 | But the bandwidth of that interaction,
02:25:59.720 | it seems like you can do a lot more computation
02:26:03.500 | when there's-- - Yes, of course.
02:26:05.300 | But the other thing is that the bandwidth
02:26:06.820 | that our brain, our own mind is running on
02:26:08.860 | is actually quite slow.
02:26:09.980 | So the number of thoughts that I can productively think
02:26:13.020 | in any given day is quite limited.
02:26:15.900 | - But it's much-- - If I had the discipline
02:26:17.620 | to write it down and the speed to write it down,
02:26:20.100 | maybe it would be a book every day or so,
02:26:21.760 | but if you think about the computers that we can build,
02:26:25.160 | the magnitudes at which they operate,
02:26:27.980 | this would be nothing.
02:26:28.820 | It's something that it can put out in a second.
02:26:30.860 | - Well, I don't know.
02:26:31.700 | It's possible, sort of the number of thoughts
02:26:35.260 | you have in your brain,
02:26:37.100 | it could be several orders of magnitude higher
02:26:39.420 | than what you're possibly able to express
02:26:41.940 | through your fingers or through your voice.
02:26:44.340 | Like-- - Yes.
02:26:45.180 | Most of them are going to be repetitive
02:26:47.500 | because they-- - How do you know that?
02:26:49.380 | - Because they have to control the same problems every day.
02:26:52.180 | When I walk, there are going to be processes in my brain
02:26:55.220 | that model my walking pattern and regulate them and so on,
02:26:58.060 | but it's going to be pretty much the same every day.
02:27:00.060 | - But that could be because-- - Every step.
02:27:01.540 | - But I'm talking about intellectual reasoning,
02:27:03.180 | like thinking, so the question,
02:27:04.500 | what is the best system of government?
02:27:06.260 | So you sit down and start thinking about that.
02:27:08.420 | One of the constraints is that you don't have access
02:27:10.880 | to a lot of, like, you don't have access to a lot of facts,
02:27:14.100 | a lot of studies.
02:27:14.940 | You have to do, you always have to interface
02:27:17.440 | with something else to learn more,
02:27:20.260 | to aid in your reasoning process.
02:27:23.300 | If you can directly access all of Wikipedia
02:27:25.700 | in trying to understand what is the best form of government,
02:27:28.340 | then every thought won't be stuck in a loop.
02:27:30.540 | Like, every thought that requires
02:27:32.820 | some extra piece of information
02:27:34.220 | will be able to grab it really quickly.
02:27:36.140 | That's the possibility of,
02:27:38.500 | if the bottleneck is literally the information that,
02:27:43.220 | you know, the bottleneck of breakthrough ideas
02:27:47.020 | is just being able to quickly access
02:27:49.140 | huge amounts of information,
02:27:50.340 | then the possibility of connecting your brain
02:27:52.420 | to the computer could lead to totally new,
02:27:55.860 | like, you know, totally new breakthroughs.
02:27:58.700 | You can think of mathematicians
02:27:59.900 | being able to, you know,
02:28:02.820 | just up the orders of magnitude of power
02:28:06.980 | in their reasoning about mathematical--
02:28:08.860 | - What if humanity has already discovered
02:28:11.380 | the optimal form of government
02:28:13.100 | through an evolutionary process?
02:28:14.780 | - Yeah, that could be-- - There is an evolution
02:28:15.980 | going on. - Very well, could be.
02:28:16.820 | - And so what we discover is that maybe
02:28:19.620 | the problem of government doesn't have
02:28:21.020 | stable solutions for us as a species,
02:28:23.420 | because we are not designed in such a way
02:28:25.020 | that we can make everybody conform to them.
02:28:28.300 | So, but there could be solutions
02:28:30.260 | that work under given circumstances,
02:28:31.900 | or that are the best for a certain environment.
02:28:34.260 | It depends on, for instance,
02:28:35.860 | the primary forms of ownership
02:28:38.020 | and the means of production.
02:28:39.180 | So if the main means of production is land,
02:28:42.660 | then the forms of government
02:28:45.620 | will be regulated by the landowners,
02:28:47.420 | and you get a monarchy.
02:28:48.940 | If you also want to have a form of government
02:28:50.980 | in which you depend on some form of slavery,
02:28:54.980 | for instance, where the peasants
02:28:56.220 | have to work very long hours for very little gain,
02:28:58.740 | so very few people can have plumbing,
02:29:01.380 | then maybe you need to promise them
02:29:03.460 | that you get paid in the afterlife, the overtime, right?
02:29:06.980 | So you need a theocracy.
02:29:08.700 | And so for much of human history in the West,
02:29:12.540 | you had a combination of monarchy and theocracy
02:29:15.060 | that was our form of governance, right?
02:29:17.380 | At the same time, the Catholic Church
02:29:19.820 | implemented game-theoretic principles.
02:29:22.620 | I recently reread Thomas Aquinas.
02:29:25.380 | It's very interesting to see this
02:29:26.540 | because he was not a dualist.
02:29:28.220 | He was translating Aristotle in a particular way
02:29:30.620 | for designing an operating system
02:29:33.060 | for the Catholic society.
02:29:34.980 | And he says that basically people are animals,
02:29:38.580 | in very much the same way as Aristotle envisions,
02:29:40.980 | basically as organisms with cybernetic control.
02:29:44.020 | And then he says that there are additional
02:29:45.900 | rational principles that humans can discover,
02:29:47.980 | and everybody can discover them so they are universal.
02:29:50.740 | If you are sane, you should understand,
02:29:52.460 | you should submit to them
02:29:54.020 | because you can rationally deduce them.
02:29:55.940 | And these principles are roughly,
02:29:58.420 | you should be willing to self-regulate correctly.
02:30:02.700 | You should be willing to do correct social regulation.
02:30:06.580 | It's inter-organismic.
02:30:07.980 | You should be willing to act on your models.
02:30:12.420 | So you have skin in the game.
02:30:13.860 | And you should have goal rationality.
02:30:18.660 | You should be choosing the right goals to work on.
02:30:23.340 | So basically these three rational principles,
02:30:25.300 | goal rationality he calls prudence or wisdom,
02:30:27.620 | social regulation is justice, the correct social one,
02:30:31.940 | and the internal regulation is temperance.
02:30:34.860 | And this willingness to act on your models is courage.
02:30:38.740 | And then he says that there are,
02:30:40.860 | additionally to these four cardinal virtues,
02:30:43.740 | three divine virtues.
02:30:44.780 | And these three divine virtues cannot be rationally deduced,
02:30:47.620 | but they reveal themselves by their harmony,
02:30:49.420 | which means if you assume them
02:30:51.220 | and you extrapolate what's going to happen,
02:30:53.220 | you will see that they make sense.
02:30:55.540 | And it's often been misunderstood
02:30:57.260 | as if God has to tell you that these are the things,
02:30:59.660 | as if there's something nefarious going on,
02:31:01.900 | as if the Christian conspiracy forces you to believe
02:31:05.500 | in some guy with a long beard who discovered this.
02:31:08.540 | So these principles are relatively simple.
02:31:11.900 | Again, it's for high-level organization,
02:31:14.660 | for the resulting civilization that you form.
02:31:17.460 | Commitment to unity.
02:31:18.980 | So basically you serve this higher, larger thing,
02:31:21.620 | this structural principle on the next level,
02:31:24.540 | and he calls that faith.
02:31:26.540 | Then there needs to be a commitment to shared purpose.
02:31:29.900 | This is basically this global reward
02:31:31.260 | where you try to figure out what it should be
02:31:32.980 | and how you can facilitate this, and this is love.
02:31:35.140 | The commitment to shared purpose is the core of love.
02:31:38.380 | You see the sacred thing that is more important
02:31:40.540 | than your own organismic interests in the other.
02:31:43.300 | And you serve this together,
02:31:44.500 | and this is how you see the sacred in the other.
02:31:46.820 | And the last one is hope,
02:31:48.900 | which means you need to be willing to act on that principle
02:31:52.140 | without getting rewards in the here and now,
02:31:54.300 | because it doesn't exist yet
02:31:55.780 | when you start out building the civilization.
02:31:57.940 | So you need to be able to do this
02:31:59.420 | in the absence of its actual existence yet
02:32:03.620 | so it can come into being.
02:32:04.980 | - So yeah, so the way it comes into being
02:32:06.740 | is by you accepting those notions,
02:32:08.420 | and then you see these three divine concepts,
02:32:12.100 | and you see them realized.
02:32:13.780 | - And now the problem is divine is a loaded concept
02:32:15.940 | in our world, right?
02:32:16.780 | Because we are outside of this cult,
02:32:18.540 | and we are still scarred from breaking free of it.
02:32:20.980 | But the idea is basically we need to have a civilization
02:32:23.660 | that acts as an intentional agent, like an insect state.
02:32:27.140 | And we are not actually a tribal species,
02:32:29.140 | we are a state-building species.
02:32:30.940 | And what enabled state-building
02:32:33.060 | is basically the formation of religious states
02:32:36.700 | and other forms of rule-based administration
02:32:39.060 | in which the individual doesn't matter as much as the rule
02:32:41.820 | or the higher goal.
02:32:42.900 | - Right.
02:32:43.860 | - We got there by the question,
02:32:44.980 | what's the optimal form of governance?
02:32:46.580 | So I don't think that Catholicism
02:32:48.780 | is the optimal form of governance
02:32:50.100 | because it's obviously on the way out, right?
02:32:51.820 | So it is for the present type of society that we are in.
02:32:55.500 | Religious institutions don't seem to be optimal
02:32:58.780 | to organize that.
02:32:59.900 | So what we discovered right now
02:33:01.380 | that we live in in the West is democracy.
02:33:04.420 | And democracy is the rule of oligarchs
02:33:06.380 | that are the people that currently own
02:33:07.660 | the means of production
02:33:09.500 | that is administered not by the oligarchs themselves
02:33:12.100 | because there's too much disruption, right?
02:33:14.500 | We have so much innovation that in every generation
02:33:17.980 | we invent new means of production.
02:33:19.860 | And corporations die usually after 30 years or so
02:33:23.620 | and something else takes a leading role in our societies.
02:33:27.100 | So it's administered by institutions
02:33:29.380 | and these institutions themselves are not elected
02:33:31.580 | but they provide continuity
02:33:34.060 | and they are led by electable politicians.
02:33:37.580 | And this makes it possible that you can adapt to change
02:33:40.020 | without having to kill people, right?
02:33:41.660 | So you can, for instance, if a change in governments,
02:33:43.780 | if people think that the current government is too corrupt
02:33:46.420 | or is not up to date, you can just elect new people.
02:33:49.780 | Or if a journalist finds out something inconvenient
02:33:52.140 | about the institution and the institution has no plan B
02:33:55.980 | like in Russia, the journalist has to die.
02:33:58.460 | This is what happens when you run society by the deep state.
02:34:01.860 | So ideally you have an administration layer
02:34:05.940 | that you can change if something bad happens, right?
02:34:08.740 | So you will have a continuity in the whole thing.
02:34:11.100 | And this is the system that we came up in the West.
02:34:13.740 | And the way it's set up in the US
02:34:15.060 | is largely a result of low level models.
02:34:16.900 | So it's mostly just second, third order consequences
02:34:20.700 | that people are modeling in the design of these institutions.
02:34:22.980 | So it's a relatively young society
02:34:24.780 | that doesn't really take care of the downstream effects
02:34:27.140 | of many of the decisions that are being made.
02:34:29.940 | And I suspect that AI can help us with this in a way
02:34:32.980 | if you can fix the incentives.
02:34:35.220 | The society of the US is a society of cheaters.
02:34:38.020 | Basically, cheating is almost indistinguishable
02:34:40.700 | from innovation and we want to encourage innovation.
02:34:43.300 | - Can you elaborate on what you mean by cheating?
02:34:45.220 | - It's basically people do things that they know are wrong.
02:34:47.780 | It's acceptable to do things that you know are wrong
02:34:49.980 | in this society to a certain degree.
02:34:51.540 | You can, for instance, suggest some non-sustainable
02:34:55.100 | business models and implement them.
02:34:57.540 | - Right, but you're always pushing the boundaries.
02:34:59.140 | I mean, you're-- - Yes.
02:35:00.820 | And yes, this is seen as a good thing largely.
02:35:03.100 | - Yes.
02:35:05.060 | - And this is different from other societies.
02:35:07.220 | So for instance, social mobility is an aspect of this.
02:35:09.500 | Social mobility is the result of individual innovation
02:35:12.740 | that would not be sustainable at scale for everybody else.
02:35:15.860 | Normally, you should not go up, you should go deep.
02:35:18.140 | We need bakers and indeed we are very good bakers,
02:35:20.620 | but in a society that innovates,
02:35:21.940 | maybe you can replace all the bakers
02:35:23.500 | with a really good machine.
02:35:24.860 | That's not a bad thing and it's a thing
02:35:28.100 | that made the US so successful, right?
02:35:29.740 | But it also means that the US is not optimizing
02:35:32.060 | for sustainability but for innovation.
02:35:33.980 | - And so, as the evolutionary process
02:35:38.060 | is unrolling, it's not obvious that that, long-term,
02:35:40.260 | would be better.
02:35:42.380 | - It has side effects.
02:35:43.300 | So basically, if you cheat, you will have a certain layer
02:35:46.860 | of toxic sludge that covers everything
02:35:49.340 | that is a result of cheating.
02:35:50.580 | - And we have to unroll this evolutionary process
02:35:53.060 | to figure out if these side effects are so damaging
02:35:55.820 | that the system's horrible or if the benefits
02:35:58.380 | actually outweigh the negative effects.
02:36:02.260 | How do we get to which system of government is best?
02:36:06.900 | That was from, I'm trying to trace back
02:36:09.340 | the last five minutes.
02:36:10.940 | - I suspect that we can find a way back to AI
02:36:14.260 | by thinking about the way in which our brain
02:36:16.500 | has to organize itself.
02:36:18.300 | Right, in some sense, our brain is a society of neurons
02:36:22.260 | and our mind is a society of behaviors.
02:36:26.380 | And they need to be organizing themselves
02:36:28.460 | into a structure that implements regulation
02:36:31.620 | and government is social regulation.
02:36:34.300 | We often see government as the manifestation of power
02:36:37.020 | or local interest, but it's actually a platform
02:36:39.100 | for negotiating the conditions of human survival.
02:36:42.260 | And this platform emerges over the current needs
02:36:45.420 | and possibilities and the trajectory that we have.
02:36:47.620 | So given the present state, there are only so many options
02:36:51.420 | on how we can move into the next state
02:36:53.420 | without completely disrupting everything.
02:36:55.340 | And we mostly agree that it's a bad idea
02:36:57.060 | to disrupt everything because it will endanger
02:36:59.340 | our food supply for a while and the entire infrastructure
02:37:02.100 | and fabric of society.
02:37:03.980 | So we do try to find natural transitions.
02:37:07.140 | And there are not that many natural transitions
02:37:09.300 | available at any given point.
02:37:10.980 | - What do you mean by natural transitions?
02:37:12.260 | - So we try not to have revolutions if we can have it.
02:37:14.980 | - Right.
02:37:15.820 | So speaking of revolutions and the connection
02:37:19.100 | between government systems and the mind,
02:37:21.940 | you've also said that, you've said that in some sense,
02:37:25.820 | becoming an adult means you take charge of your emotions.
02:37:29.300 | Maybe you never said that.
02:37:30.360 | Maybe I just made that up.
02:37:31.780 | But in the context of the mind, what's the role of emotion?
02:37:36.780 | And what is it?
02:37:39.940 | First of all, what is emotion?
02:37:41.100 | What's its role?
02:37:42.380 | - It's several things.
02:37:43.500 | So psychologists often distinguish between emotion
02:37:46.380 | and feeling, and in everyday parlance, we don't.
02:37:50.260 | I think that an emotion is a configuration
02:37:52.300 | of the cognitive system.
02:37:54.020 | And that's especially true for the lowest level,
02:37:55.980 | for the affective state.
02:37:57.540 | So when you have an affect, it's the configuration
02:37:59.520 | of certain modulation parameters like arousal, valence,
02:38:02.920 | your attentional focus, whether it's wide or narrow,
02:38:06.700 | interoception or exteroception and so on.
02:38:08.820 | And all these parameters together put you in a certain way
02:38:12.340 | of relating to the environment and to yourself.
02:38:14.620 | And this is in some sense, an emotional configuration.
02:38:17.420 | In the more narrow sense, an emotion is an affective state.
02:38:20.140 | It has an object.
02:38:22.400 | And the relevance of that object is given by motivation.
02:38:25.260 | And motivation is a bunch of needs
02:38:27.400 | that are associated with rewards,
02:38:29.180 | things that give you pleasure and pain.
02:38:30.980 | And you don't actually act on your needs,
02:38:32.700 | you act on models of your needs.
02:38:34.540 | Because when the pleasure and pain manifest,
02:38:36.260 | it's too late, you've done everything.
02:38:38.180 | But so you act on expectations
02:38:40.180 | what will give you pleasure and pain.
02:38:41.780 | And these are your purposes.
02:38:43.020 | The needs don't form a hierarchy,
02:38:44.620 | they just coexist and compete.
02:38:46.420 | And your organism has to, or your brain has to find
02:38:48.620 | a dynamic homeostasis between them.
02:38:51.140 | But the purposes need to be consistent.
02:38:53.180 | So you basically can create a story for your life
02:38:55.860 | and make plans.
02:38:57.500 | And so we organize them all into hierarchies.
02:39:00.020 | And there is not a unique solution for this.
02:39:01.860 | Some people eat to make art,
02:39:03.100 | and other people make art to eat.
02:39:05.300 | They might end up doing the same things,
02:39:07.180 | but they cooperate in very different ways.
02:39:09.500 | Because their ultimate goals are different.
02:39:12.100 | And we cooperate based on shared purpose.
02:39:14.340 | Everything else that is not cooperation
02:39:15.980 | on shared purpose is transactional.
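
(A small data-structure sketch of "emotion as a configuration of the cognitive system": the modulation parameters mentioned above, plus an optional object whose relevance comes from a motivational need. Field names, ranges, and the example values are illustrative assumptions, not a specification.)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Need:
    name: str                 # e.g. "affiliation", "food", "competence" (assumed labels)
    urgency: float            # how strongly it currently competes, 0..1

@dataclass
class AffectiveState:
    arousal: float            # 0 (calm) .. 1 (highly activated)
    valence: float            # -1 (unpleasant) .. +1 (pleasant)
    attention_width: float    # 0 (narrow focus) .. 1 (wide focus)
    exteroception: float      # 0 (inward-directed) .. 1 (outward-directed)

@dataclass
class Emotion:
    """An affective state that is about something."""
    affect: AffectiveState
    obj: Optional[str] = None            # what the emotion is about, if anything
    relevant_need: Optional[Need] = None # gives the object its relevance

anxiety_before_talk = Emotion(
    affect=AffectiveState(arousal=0.8, valence=-0.4,
                          attention_width=0.2, exteroception=0.3),
    obj="upcoming presentation",
    relevant_need=Need(name="competence", urgency=0.9),
)
print(anxiety_before_talk)
```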
02:39:17.740 | - I don't think I understood that last piece
02:39:21.900 | of achieving the homeostasis.
02:39:26.740 | Are you distinguishing between the experience of emotion
02:39:29.220 | and the expression of emotion?
02:39:31.100 | - Of course.
02:39:31.940 | So the experience of emotion is a feeling.
02:39:35.500 | And in this sense, what you feel is an appraisal
02:39:38.580 | that your perceptual system has made
02:39:40.180 | of the situation at hand.
02:39:41.820 | And it makes this based on your motivation
02:39:44.540 | and on estimates, not your conscious ones,
02:39:47.380 | but those of the subconscious geometric parts of your mind
02:39:51.320 | that assess the situation in the world
02:39:53.620 | with something like a neural network.
02:39:55.700 | And this neural network is making itself known
02:39:58.100 | to the symbolic parts of your mind,
02:40:00.300 | to your conscious attention,
02:40:02.180 | by mapping them as features into a space.
02:40:05.220 | So what you will feel about your emotion
02:40:07.540 | is a projection usually into your body map.
02:40:10.140 | So you might feel anxiety in your solar plexus,
02:40:12.460 | and you might feel it as a contraction,
02:40:14.380 | which is all geometry, right?
02:40:16.700 | Your body map is the space that is always instantiated
02:40:19.020 | and always available.
02:40:19.860 | So it's a very obvious cheat
02:40:22.500 | if your non-symbolic parts of your brain
02:40:27.100 | try to talk to your symbolic parts of your brain
02:40:29.100 | to map the feelings into the body map.
02:40:31.300 | And then you perceive them as pleasant and unpleasant,
02:40:33.720 | depending on whether the appraisal
02:40:35.220 | has a negative or positive valence.
02:40:37.420 | And then you have different features of them
02:40:39.300 | that give you more knowledge
02:40:41.500 | about the nature of what you're feeling.
02:40:42.900 | So for instance, when you feel connected to other people,
02:40:45.300 | you typically feel this in your chest region
02:40:47.140 | around your heart.
02:40:48.440 | And you feel this as an expansive feeling
02:40:50.260 | in which you're reaching out, right?
02:40:52.980 | And it's very intuitive to encode it like this.
02:40:55.920 | That's why it's encoded like this for most people.
02:40:57.340 | - But it's encoded.
02:40:58.180 | - Yes, it's a code.
02:40:59.020 | It's a code in which the non-symbolic parts of your mind
02:41:01.620 | talk to the symbolic ones.
02:41:02.740 | - And then the expression of emotion
02:41:04.140 | is then the final step
02:41:05.900 | that could be sort of gestural or visual or so on.
02:41:09.060 | That's a part of the communication.
02:41:09.900 | - Let's just say this probably evolved
02:41:11.300 | as part of an adversarial communication.
02:41:13.980 | So as soon as you started to observe
02:41:16.420 | the facial expression and posture of others
02:41:18.420 | to understand what emotional state they're in,
02:41:20.540 | others started to use this as signaling
02:41:22.220 | and also to subvert your model of their emotional state.
02:41:25.220 | So we now look at the inflections,
02:41:27.460 | at the difference between the standard face
02:41:29.380 | that they're going to make in this situation.
02:41:31.340 | Right, when you are at a funeral,
02:41:32.400 | everybody expects you to make a solemn face.
02:41:34.780 | But the solemn face doesn't express
02:41:36.260 | whether you're sad or not.
02:41:37.180 | It just expresses that you understand
02:41:38.860 | what face you have to make at a funeral.
02:41:40.900 | Nobody should know that you are triumphant.
02:41:44.260 | So when you try to read the emotion of another person,
02:41:46.400 | you try to look at the delta
02:41:48.160 | between a truly sad expression
02:41:51.380 | and the thing that is animating
02:41:54.100 | this face behind the curtain.
02:41:56.540 | - So the interesting thing is,
02:41:58.820 | so having done this podcast and the video component,
02:42:03.600 | one of the things I've learned is that,
02:42:05.920 | now I'm Russian and I just don't know
02:42:08.580 | how to express emotion on my face.
02:42:10.900 | One, I see that as weakness, but whatever.
02:42:13.060 | People look to me after you say something,
02:42:17.100 | they look to my face to help them see
02:42:21.220 | how they should feel about what you said,
02:42:23.260 | which is fascinating.
02:42:24.220 | 'Cause then they'll often comment on,
02:42:26.300 | why did you look bored?
02:42:27.460 | Or why did you particularly enjoy that part?
02:42:29.380 | Or why did you whatever?
02:42:30.860 | It's a kind of interesting,
02:42:32.660 | it makes me cognizant that,
02:42:35.240 | like, you're basically saying a bunch of brilliant things,
02:42:37.820 | but I'm part of the play that you're the key actor in
02:42:42.820 | by making my facial expressions
02:42:45.240 | and therefore telling the narrative
02:42:47.320 | of what the big point is, which is fascinating.
02:42:51.380 | Makes me cognizant that I'm supposed
02:42:53.100 | to be making facial expressions.
02:42:54.640 | Even this conversation is hard
02:42:56.460 | because my preference would be to wear a mask
02:42:58.860 | with sunglasses to where I could just listen.
02:43:02.060 | - Yes, I understand this because it's intrusive
02:43:04.820 | to interact with others this way.
02:43:06.460 | And basically, Eastern European society
02:43:09.180 | have a taboo against that, and especially Russia,
02:43:12.100 | the further you go to the East.
02:43:13.620 | And in the US, it's the opposite.
02:43:16.080 | You're expected to be hyper animated in your face,
02:43:19.240 | and you're also expected to show positive affect.
02:43:22.660 | And if you show positive affect
02:43:25.440 | without a good reason in Russia,
02:43:27.520 | the people will think you are a stupid,
02:43:30.400 | unsophisticated person.
02:43:31.600 | - Exactly.
02:43:34.280 | And here, positive affect without reason
02:43:37.680 | is either appreciated or goes unnoticed.
02:43:40.960 | - No, it's the default.
02:43:41.880 | It's being expected.
02:43:43.000 | Everything is amazing.
02:43:44.540 | Have you seen these--
02:43:46.740 | - Lego movie?
02:43:47.680 | - No, there was a diagram where somebody
02:43:50.340 | gave the appraisals that exist in US and Russia.
02:43:52.920 | So you have your bell curve.
02:43:54.000 | And the lowest 10% in the US is "a good start."
02:43:59.000 | Everything above the lowest 10% is amazing.
02:44:04.520 | - It's amazing.
02:44:06.080 | - And for Russians, everything below the top 10%
02:44:10.680 | is terrible.
02:44:12.800 | And then everything except the top percent
02:44:15.640 | is I don't like it.
02:44:17.720 | And the top percent is "eh, so-so."
02:44:20.300 | (both laughing)
02:44:23.600 | - Yeah, it's funny, but it's kind of true.
02:44:26.040 | - But there's a deeper aspect to this.
02:44:29.120 | It's also how we construct meaning in the US.
02:44:32.360 | Usually, you focus on the positive aspects,
02:44:35.080 | and you just suppress the negative aspects.
02:44:38.200 | And in our Eastern European traditions,
02:44:41.500 | we emphasize the fact that if you hold something
02:44:45.160 | above the waterline, you also need to put something
02:44:47.560 | below the waterline, because existence by itself
02:44:49.600 | is at best neutral.
02:44:51.460 | - Right, that's the basic intuition.
02:44:53.480 | At best neutral, or it could be just suffering,
02:44:56.400 | the default is suffering.
02:44:57.240 | - There are moments of beauty,
02:44:58.160 | but these moments of beauty are inextricably linked
02:45:01.440 | to the reality of suffering.
02:45:03.360 | And to not acknowledge the reality of suffering
02:45:05.880 | means that you are really stupid and unaware
02:45:07.840 | of the fact that basically every conscious being
02:45:10.200 | spends most of the time suffering.
02:45:12.660 | - Yeah, you just summarized the ethos of the Eastern Europe.
02:45:17.660 | Yeah, most of life is suffering
02:45:19.860 | with the occasional moments of beauty.
02:45:21.940 | And if your facial expressions don't acknowledge
02:45:24.580 | the abundance of suffering in the world
02:45:27.220 | and in existence itself, then you must be an idiot.
02:45:30.780 | - It's an interesting thing when you raise children
02:45:33.820 | in the US, and you, in some sense, preserve the identity
02:45:37.460 | of the intellectual and cultural traditions
02:45:40.140 | that are embedded in your own families.
02:45:42.200 | And your daughter asks you about Ariel the mermaid,
02:45:45.560 | and asks you why is Ariel not allowed
02:45:48.120 | to play with the humans?
02:45:50.160 | And you tell her the truth.
02:45:52.420 | She's a siren.
02:45:53.540 | Sirens eat people.
02:45:54.480 | You don't play with your food.
02:45:55.320 | It does not end well.
02:45:56.940 | And then you tell her the original story,
02:45:58.820 | which is not the one by Andersen,
02:46:00.340 | which is the romantic one, and there's a much darker one.
02:46:03.340 | The Undine story.
02:46:04.620 | - What happened?
02:46:05.460 | - So Undine is a mermaid, or a water woman.
02:46:10.100 | She lives on the ground of a river,
02:46:12.060 | and she meets this prince, and they fall in love.
02:46:14.620 | And the prince really, really wants to be with her,
02:46:16.740 | and she says, "Okay, but the deal is,
02:46:18.940 | "you cannot have any other woman.
02:46:20.840 | "If you marry somebody else,
02:46:21.900 | "even though you cannot be with me,
02:46:22.980 | "because obviously you cannot breathe underwater
02:46:24.820 | "and have other things to do than managing your kingdom
02:46:27.840 | "as you are up here, you will die."
02:46:30.820 | And eventually, after a few years,
02:46:32.740 | he falls in love with some princess and marries her,
02:46:34.920 | and she shows up and quietly goes into his chamber,
02:46:38.540 | and nobody is able to stop her or willing to do so,
02:46:41.860 | because she is fierce.
02:46:43.300 | And she comes quietly out of his chamber,
02:46:45.580 | and they ask her, "What has happened?
02:46:48.200 | "What did you do?"
02:46:49.040 | And she said, "I kissed him to death."
02:46:50.840 | - Well done.
02:46:52.980 | - And you know the Andersen story, right?
02:46:55.900 | In the Andersen story, the mermaid is playing
02:46:58.820 | with this prince that she saves,
02:47:00.740 | and she falls in love with him,
02:47:01.940 | and she cannot live out there,
02:47:03.600 | so she is giving up her voice and her tail
02:47:07.240 | for a human-like appearance,
02:47:09.380 | so she can walk among the humans,
02:47:11.100 | but this guy does not recognize
02:47:12.820 | that she is the one that he would marry.
02:47:15.620 | Instead, he marries somebody who has a kingdom
02:47:17.860 | and economical and political relationships
02:47:20.620 | to his own kingdom and so on, as he should.
02:47:22.940 | - It's quite tragic.
02:47:23.780 | - And she dies.
02:47:25.300 | - Yeah.
02:47:26.140 | (both laughing)
02:47:29.500 | Yeah, instead, Disney, the Little Mermaid story
02:47:33.400 | has a little bit of a happy ending.
02:47:34.820 | That's the Western, that's the American way.
02:47:37.180 | My own problem is this, of course,
02:47:38.880 | that I read Oscar Wilde before I read the other things,
02:47:41.260 | so I'm indoctrinated, inoculated with this romanticism,
02:47:44.780 | and I think that the mermaid is right.
02:47:46.920 | You sacrifice your life for romantic love.
02:47:48.620 | That's what you do, because if you are confronted
02:47:51.540 | with the choice of serving the machine
02:47:53.020 | and doing the obviously right thing
02:47:55.340 | under the economic and social
02:47:57.020 | and all other human incentives, that's wrong.
02:48:00.540 | You should follow your heart.
02:48:01.980 | - So do you think suffering is fundamental
02:48:06.900 | to happiness along these lines?
02:48:09.620 | - Suffering is the result of caring
02:48:11.260 | about things that you cannot change,
02:48:13.260 | and if you are able to change what you care about
02:48:15.940 | to those things that you can change, you will not suffer.
02:48:18.180 | - But would you then be able to experience happiness?
02:48:22.300 | - Yes, but happiness itself is not important.
02:48:25.420 | Happiness is like a cookie.
02:48:27.180 | When you are a child, you think cookies are very important,
02:48:29.340 | and you want to have all the cookies in the world,
02:48:30.900 | and you look forward to being an adult,
02:48:32.460 | because then you have as many cookies as you want, right?
02:48:34.900 | - Yes.
02:48:35.740 | - And as an adult, you realize a cookie is a tool.
02:48:37.980 | It's a tool to make you eat vegetables.
02:48:40.260 | And once you eat your vegetables anyway,
02:48:41.740 | you stop eating cookies for the most part,
02:48:43.500 | because otherwise you will get diabetes
02:48:45.260 | and will not be around for your kids.
02:48:46.700 | - Yes, but then the cookie, the scarcity of a cookie,
02:48:50.220 | if scarcity is enforced nevertheless,
02:48:52.540 | so the pleasure comes from the scarcity.
02:48:54.700 | - Yes, but the happiness is a cookie
02:48:56.980 | that your brain bakes for itself.
02:48:58.940 | It's not made by the environment.
02:49:00.580 | The environment cannot make you happy.
02:49:02.060 | It's your appraisal of the environment
02:49:03.740 | that makes you happy.
02:49:05.220 | And if you can change your appraisal of the environment,
02:49:07.260 | which you can learn to,
02:49:08.180 | then you can create arbitrary states of happiness.
02:49:10.540 | And some meditators fall into this trap.
02:49:12.300 | So they discover the room, this basement room in their brain
02:49:15.380 | where the cookies are made,
02:49:16.700 | and they indulge and stuff themselves.
02:49:18.460 | And after a few months, it gets really old,
02:49:20.460 | and the big crisis of meaning comes.
02:49:22.580 | Because they thought before that their unhappiness
02:49:25.180 | was the result of not being happy enough.
02:49:27.540 | So they fixed this, right?
02:49:28.700 | They can release the neurotransmitters at will
02:49:30.700 | if they train.
02:49:31.900 | And then the crisis of meaning pops up
02:49:34.700 | in a deeper layer.
02:49:36.580 | And the question is, why do I live?
02:49:37.820 | How can I make a sustainable civilization
02:49:39.740 | that is meaningful to me?
02:49:41.060 | How can I insert myself into this?
02:49:42.780 | And this was the problem that you couldn't solve
02:49:44.220 | in the first place.
02:49:45.220 | - But at the end of all this,
02:49:49.620 | let me then ask that same question.
02:49:51.220 | What is the answer to that?
02:49:53.900 | What could the possible answer be of the meaning of life?
02:49:57.340 | What could an answer be?
02:49:59.180 | What is it to you?
02:50:00.860 | - I think that if you look at the meaning of life,
02:50:03.220 | you look at what the cell is.
02:50:05.260 | Life is the cell.
02:50:06.940 | - The original cell.
02:50:07.940 | - Yes, or this principle, the cell.
02:50:09.980 | It's this self-organizing thing
02:50:12.300 | that can participate in evolution.
02:50:14.340 | In order to make it work, it's a molecular machine.
02:52:16.740 | It needs a self-replicator, a negentropy extractor,
02:50:19.220 | and a Turing machine.
02:50:20.500 | If any of these parts is missing,
02:50:21.820 | you don't have a cell, and it is not living, right?
02:50:24.180 | And life is basically the emergent complexity
02:50:26.380 | over that principle.
02:50:27.700 | Once you have this intelligent super molecule, the cell,
02:50:31.500 | there is very little that you cannot make it do.
02:50:33.180 | It's probably the optimal computronium,
02:50:35.540 | especially in terms of resilience.
02:50:37.540 | It's very hard to sterilize the planet
02:50:39.380 | once it's infected with life.
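A minimal toy sketch of this three-component picture, assuming a hypothetical Python representation (the names ProtoCell, replicate, extract_energy, and control are illustrative and not anything from the conversation): a structure only counts as a living, self-reproducing unit if all three parts are present.

```python
# Toy sketch of the three-component view of a minimal cell described above:
# a self-replicator, a negentropy extractor (metabolism), and a general-purpose
# controller (the "Turing machine"). All names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ProtoCell:
    # Copies the whole machine, including the copier itself.
    replicate: Optional[Callable[["ProtoCell"], "ProtoCell"]] = None
    # Turns environmental energy gradients into usable work (negentropy extraction).
    extract_energy: Optional[Callable[[float], float]] = None
    # General-purpose program that coordinates replication and metabolism.
    control: Optional[Callable[[dict], dict]] = None

    def is_viable(self) -> bool:
        # The claim in the conversation: remove any one part and it is not living.
        return all(part is not None for part in (self.replicate, self.extract_energy, self.control))


# Example: a structure missing its controller does not qualify.
fragment = ProtoCell(
    replicate=lambda c: ProtoCell(c.replicate, c.extract_energy, c.control),
    extract_energy=lambda gradient: 0.5 * gradient,
)
print(fragment.is_viable())  # False
```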
02:52:40.820 | - So the active function of these three components
02:52:45.820 | of this super cell, the cell, is present in the cell,
02:50:49.620 | is present in us, and it's just--
02:50:51.780 | - We are just an expression of the cell.
02:50:53.380 | It's a certain layer of complexity
02:50:54.940 | in the organization of cells.
02:50:56.980 | So in a way, it's tempting to think of the cell
02:50:59.740 | as a von Neumann probe.
02:51:01.260 | If you want to build intelligence on other planets,
02:51:04.260 | the best way to do this is to infect them with cells.
02:51:07.460 | And wait for long enough, and there's a reasonable chance
02:51:09.940 | the stuff is going to evolve
02:51:11.220 | into an information processing principle
02:51:13.340 | that is general enough to become sentient.
02:51:15.540 | - Well, that idea is very akin to sort of the same dream
02:51:19.940 | and beautiful ideas that are expressed
02:51:21.420 | in cellular automata in their most simple mathematical form.
02:51:24.460 | You just inject the system with some basic mechanisms
02:51:28.980 | of replication and so on, basic rules,
02:51:31.140 | amazing things would emerge.
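A minimal sketch of that idea in Python, assuming Wolfram's elementary Rule 110 as the example (the conversation does not name a specific rule): a trivially simple local update on a row of cells yields surprisingly rich structure from a single seed.

```python
# A minimal elementary cellular automaton (Wolfram's Rule 110 by default):
# each cell's next state is looked up from the 8-bit rule number using the
# 3-bit neighborhood (left, self, right). Simple rules, complex behavior.

def step(cells, rule=110):
    """Apply one synchronous update with wrap-around neighbors."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


if __name__ == "__main__":
    width, generations = 64, 32
    row = [0] * width
    row[-1] = 1  # a single seed cell at the right edge
    for _ in range(generations):
        print("".join("#" if c else "." for c in row))
        row = step(row)
```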
02:51:33.020 | - And the cell is able to do something
02:51:34.700 | that James Chardy calls existential design.
02:51:37.940 | He points out that in technical design,
02:51:40.100 | we go from the outside in.
02:51:41.260 | We work in a highly controlled environment
02:51:43.580 | in which everything is deterministic,
02:51:44.980 | like our computers, our labs, or our engineering workshops.
02:51:48.780 | And then we use this determinism
02:51:50.420 | to implement a particular kind of function that we dream up
02:51:53.620 | and that seamlessly interfaces
02:51:55.540 | with all the other deterministic functions
02:51:57.540 | that we already have in our world.
02:51:59.140 | So it's basically from the outside in.
02:52:01.860 | And biological systems design from the inside out:
02:52:04.740 | a seed will become a seedling
02:52:07.300 | by taking some of the relatively unorganized matter
02:52:10.380 | around it and turning it into its own structure,
02:52:13.900 | and thereby subduing the environment.
02:52:15.660 | And cells can cooperate if they can rely on other cells
02:52:18.620 | having a similar organization that is already compatible.
02:52:21.540 | But unless that's there, the cell needs to divide
02:52:25.540 | to create that structure by itself, right?
02:52:27.340 | So it's a self-organizing principle
02:52:29.340 | that works on a somewhat chaotic environment.
02:52:31.980 | And the purpose of life, in a sense,
02:52:33.820 | is to produce complexity.
02:52:36.780 | And the complexity allows you
02:52:38.140 | to harvest negentropy gradients
02:52:39.740 | that you couldn't harvest without the complexity.
02:52:42.180 | And in this sense, intelligence and life
02:52:44.220 | are very strongly connected,
02:52:45.580 | because the purpose of intelligence
02:52:47.420 | is to allow control under conditions of complexity.
02:52:50.420 | So basically, you shift the boundary
02:52:52.260 | between the ordered systems into the realm of chaos.
02:52:56.740 | You build bridgeheads into chaos with complexity.
02:53:00.160 | And this is what we are doing.
02:53:01.740 | This is not necessarily a deeper meaning.
02:53:03.540 | I think the meaning that we have priors for,
02:53:05.500 | that we are evolved for,
02:53:06.740 | outside of the priors, there is no meaning.
02:53:08.340 | Meaning only exists if a mind projects it, right?
02:53:10.700 | - Yeah, the narrative.
02:53:11.540 | - That is probably civilization.
02:53:13.380 | I think that what feels most meaningful to me
02:53:16.220 | is to try to build and maintain a sustainable civilization.
02:53:20.780 | - And taking a slight step outside of that,
02:53:23.300 | we talked about a man with a beard and God,
02:53:27.620 | but something, some mechanism,
02:53:32.620 | perhaps must have planted the seed,
02:53:36.020 | the initial seed of the cell.
02:53:37.500 | Do you think there is a God?
02:53:40.900 | What is a God?
02:53:42.620 | And what would that look like?
02:53:44.100 | - So if there was no spontaneous abiogenesis,
02:53:47.220 | in the sense that the first cell formed
02:53:49.420 | by some happy, random accidents
02:53:52.660 | where the molecules just happened
02:53:53.860 | to be in the right constellation to each other.
02:53:55.540 | - But there could also be the mechanism
02:53:58.100 | that allows for the random.
02:53:59.780 | I mean, there's like turtles all the way down.
02:54:02.420 | There seems to be,
02:54:03.260 | there has to be a head turtle at the bottom.
02:54:05.700 | - Let's consider something really wild.
02:54:07.380 | Imagine, is it possible that a gas giant
02:54:10.580 | could become intelligent?
02:54:12.400 | What would that involve?
02:54:13.480 | So imagine that you have vortices
02:54:15.500 | that spontaneously emerge on the gas giants,
02:54:17.820 | like big storm systems that endure for thousands of years.
02:54:21.500 | And some of these storm systems
02:54:23.100 | produce electromagnetic fields
02:54:24.380 | because some of the clouds are ferromagnetic or something.
02:54:27.140 | And as a result, they can change
02:54:28.860 | how certain clouds react rather than other clouds
02:54:31.540 | and thereby produce some self-stabilizing patterns
02:54:34.540 | that eventually lead to regulation, feedback loops,
02:54:36.740 | nested feedback loops and control.
02:54:39.180 | So imagine you have such a thing
02:54:40.900 | that basically has emergent, self-sustaining,
02:54:43.140 | self-organizing complexity.
02:54:44.500 | And at some point this wakes up
02:54:45.740 | and realizes, basically like Lem's Solaris:
02:54:48.140 | I am a thinking planet,
02:54:49.900 | but I will not replicate
02:54:51.020 | because I cannot recreate the conditions
02:54:52.940 | of my own existence somewhere else.
02:54:55.260 | I'm just basically an intelligence
02:54:57.220 | that has spontaneously formed because it could.
02:55:00.340 | And now it builds a von Neumann probe.
02:55:02.880 | And the best von Neumann probe for such a thing
02:55:04.620 | might be the cell.
02:55:05.820 | So maybe it, because it's very, very clever
02:55:07.780 | and very enduring, creates cells and sends them out.
02:55:10.660 | And one of them has infected our planet.
02:55:12.860 | And I'm not suggesting that this is the case,
02:55:14.500 | but it would be compatible with the panspermia hypothesis
02:55:17.740 | and it was my intuition that abiogenesis is very unlikely.
02:55:21.540 | It's possible, but you probably need
02:55:23.840 | to roll the cosmic dice very often,
02:55:25.720 | maybe more often than there are planetary surfaces.
02:55:27.860 | I don't know.
02:55:28.700 | - So God is just a large enough,
02:55:33.180 | a system that's large enough that allows randomness.
02:55:37.500 | - No, I don't think that God
02:55:38.380 | has anything to do with creation.
02:55:39.980 | I think it's a mistranslation of the Talmud
02:55:42.420 | into the Catholic mythology.
02:55:45.180 | I think that Genesis is actually
02:55:46.460 | the childhood memories of a God.
02:55:48.460 | So the, when--
02:55:49.980 | - Sorry, Genesis is the--
02:55:51.740 | - The childhood memories of a God.
02:55:53.060 | It's basically a mind that is remembering
02:55:56.300 | how it came into being.
02:55:57.940 | And we typically interpret Genesis
02:56:00.900 | as the creation of a physical universe
02:56:02.540 | by a supernatural being.
02:56:04.380 | And I think when you read it,
02:56:07.940 | there is light and darkness that is being created.
02:56:11.540 | And then you discover sky and ground, create them.
02:56:15.300 | You construct the plants and the animals,
02:56:18.700 | and you give everything their names and so on.
02:56:20.940 | That's basically cognitive development.
02:56:22.300 | It's a sequence of steps that every mind has to go through
02:56:26.860 | when it makes sense of the world.
02:56:28.100 | And when you have children, you can see
02:56:29.820 | how initially they distinguish light and darkness.
02:56:32.540 | And then they make out directions in it,
02:56:34.500 | and they discover sky and ground,
02:56:35.860 | and they discover the plants and the animals,
02:56:37.700 | and they give everything their name.
02:56:38.740 | And it's a creative process that happens in every mind,
02:56:41.700 | because it's not given, right?
02:56:43.000 | Your mind has to invent these structures,
02:56:45.180 | to make sense of the patterns on your retina.
02:56:47.940 | Also, if there was some big nerd who set up a server
02:56:50.780 | and runs this world on it,
02:56:52.700 | this would not create a special relationship
02:56:54.980 | between us and the nerd.
02:56:56.060 | This nerd would not have the magical power
02:56:58.180 | to give meaning to our existence, right?
02:57:00.380 | So this equation of a creator God,
02:57:03.420 | with the God of meaning, is a sleight of hand.
02:57:06.580 | You shouldn't do it.
02:57:08.000 | The other one that is done in Catholicism
02:57:10.100 | is the equation of the first mover,
02:57:12.240 | the prime mover of Aristotle,
02:57:14.340 | which is basically the automaton that runs the universe.
02:57:17.220 | Aristotle says, "If things are moving,
02:57:19.780 | "and things seem to be moving here,
02:57:21.180 | "something must move them," right?
02:57:22.980 | If something moves them,
02:57:24.260 | something must move the thing that is moving it.
02:57:26.220 | So there must be a prime mover.
02:57:28.300 | This idea to say that this prime mover
02:57:30.300 | is a supernatural being is complete nonsense, right?
02:57:33.380 | It's an automaton, in the simplest case.
02:57:36.620 | So we have to explain the enormity
02:57:38.220 | that this automaton exists at all.
02:57:40.380 | But again, we don't have any possibility
02:57:43.340 | to infer anything about its properties,
02:57:45.580 | except that it's able to produce change in information, right?
02:57:50.060 | So there needs to be some kind of computational principle.
02:57:52.660 | This is all there is.
02:57:53.860 | But to say this automaton is identical, again,
02:57:56.380 | with the creator of the first cause,
02:57:58.020 | or with the thing that gives meaning to our life,
02:57:59.860 | is confusion.
02:58:02.180 | No, I think that what we perceive
02:58:03.820 | is the higher being that we are part of.
02:58:07.260 | And the higher being that we are part of
02:58:08.980 | is the civilization.
02:58:10.540 | It's the thing in which we have a similar relationship
02:58:12.860 | as the cell has to our body.
02:58:14.700 | And we have this prior,
02:58:17.020 | because we have evolved to organize in these structures.
02:58:19.820 | So basically, the Christian God in its natural form,
02:58:24.020 | without the mythology, if you undress it,
02:58:26.300 | is basically the platonic form of a civilization.
02:58:28.740 | - Is the ideal, is the--
02:58:32.700 | - Yes, it's this ideal that you try to approximate
02:58:34.820 | when you interact with others,
02:58:36.460 | not based on your incentives,
02:58:37.780 | but on what you think is right.
02:58:40.900 | - Wow, we covered a lot of ground.
02:58:43.860 | And we're left with one of my favorite lines,
02:58:46.500 | and there's many, which is,
02:58:48.060 | "Happiness is a cookie that the brain bakes itself."
02:58:53.060 | It's been a huge honor and a pleasure to talk to you.
02:58:58.420 | I'm sure our paths will cross many times again.
02:59:01.900 | Yosha, thank you so much for talking today.
02:59:04.180 | Really appreciate it. - Thank you, Lex.
02:59:05.020 | It was so much fun.
02:59:06.820 | I enjoyed it.
02:59:07.940 | - Awesome.
02:59:09.460 | Thanks for listening to this conversation with Yosha Bach,
02:59:12.180 | and thank you to our sponsors, ExpressVPN and Cash App.
02:59:16.900 | Please consider supporting this podcast
02:59:18.660 | by getting ExpressVPN at expressvpn.com/lexpod
02:59:22.860 | and downloading Cash App and using code LEXPODCAST.
02:59:27.860 | If you enjoy this thing, subscribe on YouTube,
02:59:31.620 | review it with Five Stars on Apple Podcast,
02:59:34.100 | support it on Patreon,
02:59:35.500 | or simply connect with me on Twitter @LexFriedman.
02:59:39.700 | And yes, try to figure out how to spell it without the E.
02:59:43.740 | And now let me leave you with some words of wisdom
02:59:46.300 | from Yosha Bach.
02:59:47.380 | If you take this as a computer game metaphor,
02:59:51.060 | this is the best level for humanity to play.
02:59:54.660 | And this best level happens to be the last level,
02:59:58.860 | as it happens against the backdrop of a dying world,
03:00:02.900 | but it's still the best level.
03:00:05.980 | Thank you for listening and hope to see you next time.
03:00:09.500 | (upbeat music)
03:00:12.080 | (upbeat music)