Joscha Bach: Artificial Consciousness and the Nature of Reality | Lex Fridman Podcast #101
Chapters
0:00 Introduction
3:14 Reverse engineering Joscha Bach
10:38 Nature of truth
18:47 Original thinking
23:14 Sentience vs intelligence
31:45 Mind vs Reality
46:51 Hard problem of consciousness
51:09 Connection between the mind and the universe
56:29 What is consciousness
62:32 Language and concepts
69:02 Meta-learning
76:35 Spirit
78:10 Our civilization may not exist for long
97:48 Twitter and social media
104:52 What systems of government might work well?
107:12 The way out of self-destruction with AI
115:18 AI simulating humans to understand its own nature
124:32 Reinforcement learning
129:12 Commonsense reasoning
135:47 Would AGI need to have a body?
142:34 Neuralink
147:01 Reasoning at the scale of neurons and societies
157:16 Role of emotion
168:03 Happiness is a cookie that your brain bakes for itself
00:00:00.000 |
The following is a conversation with Joscha Bach, 00:00:05.640 |
with a history of research positions at MIT and Harvard. 00:00:09.480 |
Joscha is one of the most unique and brilliant people 00:00:21.440 |
and the possibly simulated fabric of our universe. 00:00:41.200 |
and downloading Cash App and using code LEXPODCAST. 00:00:54.920 |
or simply connect with me on Twitter @lexfridman. 00:01:05.740 |
how to spell my last name without using the letter E, 00:01:35.360 |
I think ExpressVPN is the best VPN out there. 00:01:52.200 |
it's really important that they don't log your data. 00:01:57.720 |
Shout out to my favorite flavor of Linux, Ubuntu MATE 20.04. 00:02:28.920 |
Since Cash App does fractional share trading, 00:02:31.320 |
let me mention that the order execution algorithm 00:02:35.400 |
to create the abstraction of the fractional orders 00:02:42.640 |
for taking a step up to the next layer of abstraction 00:02:46.360 |
making trading more accessible for new investors 00:02:51.840 |
So again, if you get Cash App from the App Store or Google Play 00:03:02.480 |
an organization that is helping advance robotics 00:03:05.240 |
and STEM education for young people around the world. 00:03:08.240 |
And now, here's my conversation with Joscha Bach. 00:03:13.380 |
As you've said, you grew up in a forest in East Germany, 00:03:24.800 |
you've become one of the most unique thinkers 00:03:28.160 |
So can we try to reverse engineer your mind a little bit? 00:03:31.600 |
What were the key philosophers, scientists, ideas, 00:03:38.440 |
that had an impact on you when you were growing up 00:03:43.920 |
or were the key sort of crossroads in the trajectory 00:03:49.760 |
- My father came from a long tradition of architects, 00:04:13.480 |
And normal people understand that the primary purpose 00:04:26.280 |
- Who is the reviewer in the nerd's view of communication? 00:04:54.600 |
but basically my grandfather made the wrong decision. 00:04:57.840 |
He married an aristocrat and was drawn into the war. 00:05:05.600 |
So basically my father was not parented by a nerd, 00:05:10.600 |
but by somebody who tried to tell him what to do 00:05:16.900 |
And he was unable to, he's unable to do things 00:05:21.780 |
So in some sense, my grandmother broke her son 00:05:24.980 |
and her son responded by, when he became an architect, 00:05:36.100 |
in the more brutalist traditions of Eastern Germany. 00:05:43.460 |
and did only what he wanted to do, which was art. 00:05:51.380 |
Food was heavily subsidized, healthcare was free. 00:05:54.000 |
You didn't have to worry about rent or pensions or anything. 00:05:56.640 |
- So it's a socialized communist side of Germany. 00:05:58.980 |
- And the other thing is it was almost impossible 00:06:01.380 |
not to be in political disagreement with your government, 00:06:05.560 |
So everything that you do is intrinsically meaningful 00:06:08.260 |
because it will always touch on the deeper currents 00:06:11.140 |
of society, of culture, and be in conflict with it, 00:06:13.740 |
and tension with it, and you will always have 00:06:19.780 |
this outside-of-the-box thinker against the government, 00:06:30.540 |
to the degree that he needed to make himself functional. 00:06:33.540 |
So in some sense, he was also in the late 1960s, 00:06:53.980 |
She was also an architect, and she adored him 00:06:58.660 |
And I basically grew up in a big cave full of books. 00:07:05.700 |
It was very, very beautiful, very quiet, and quite lonely. 00:07:08.940 |
So I started to read, and by the time I came to school, 00:07:12.620 |
I've read everything until fourth grade and then some, 00:07:21.920 |
and today I know it was because I was a nerd, obviously, 00:07:40.060 |
I was not beaten up, but I also didn't make many friends 00:07:46.340 |
when I went into a school for mathematics and physics. 00:07:49.500 |
- Do you remember any key books from this moment? 00:07:52.700 |
So I went to the library, and I worked my way 00:07:56.180 |
through the children's and young adult sections, 00:08:06.720 |
Back then, I didn't see him as a big influence 00:08:15.140 |
Another thing that was very influential on me 00:08:22.100 |
so German poetry and art, Droste-Hülshoff and Heine 00:08:31.580 |
So at which point do the classical philosophers end? 00:08:39.520 |
Does this stretch through even as far as Nietzsche, 00:08:43.080 |
or is this, are we talking about Plato and Aristotle? 00:08:46.000 |
- I think that Nietzsche is the classical equivalent 00:08:49.380 |
- So he's a classical troll. - He's very smart 00:08:53.840 |
and easy to read, but he's not so much trolling others, 00:08:57.160 |
he's trolling himself because he was at odds with the world. 00:08:59.960 |
Largely, his Romantic relationships didn't work out. 00:09:02.640 |
He got angry and he basically became a nihilist. 00:09:05.040 |
- Isn't that a beautiful way to be as an intellectual, 00:09:20.300 |
If you take yourself seriously and you are not functional, 00:09:25.900 |
- I think you think he took himself too seriously 00:09:29.940 |
- And if you find the same thing in Hesse and so on, 00:09:32.300 |
this Steppenwolf syndrome is classic adolescence, 00:09:35.340 |
where you basically feel misunderstood by the world 00:09:37.780 |
and you don't understand that all the misunderstandings 00:09:39.860 |
are the result of your own lack of self-awareness, 00:09:43.100 |
because you think that you are a prototypical human 00:09:45.780 |
and the others around you should behave the same way 00:09:48.180 |
as you expect them based on your innate instincts 00:09:51.860 |
And you become a transcendentalist to deal with that. 00:10:10.980 |
from that perspective. - No, I think that you 00:10:16.100 |
because you need to become unimportant as a subject. 00:10:19.940 |
That is, if you are a philosopher, belief is not a verb. 00:10:27.480 |
You have to submit to the things that are possibly true 00:10:30.700 |
and you have to follow wherever your inquiry leads, 00:10:33.440 |
but it's not about you, it has nothing to do with you. 00:10:39.940 |
believed sort of in the idea of there's objective truth, 00:10:45.200 |
if you remove yourself as objective from the picture, 00:10:50.700 |
ideas that are true, or are we just in a mesh 00:10:53.140 |
of relative concepts that are neither true nor false? 00:11:02.380 |
in the first place, so what does the brain mean 00:11:04.820 |
by saying that it discovers something as truth? 00:11:07.460 |
So for instance, a model can be predictive or not predictive. 00:11:10.940 |
Then there can be a sense in which a mathematical statement 00:11:23.380 |
in a symbol game, and then you can have a correspondence 00:11:29.020 |
which is again a type of model correspondence. 00:11:39.460 |
That's stunning when you realize something exists 00:11:43.220 |
rather than nothing, and this seems to be true, right? 00:11:53.740 |
and be amazed by that idea for the rest of my life 00:11:56.820 |
and not go any farther, 'cause I don't even know 00:12:01.220 |
- Well, the easiest answer is existence is the default, 00:12:07.820 |
- The simplest answer to this is that existence 00:12:14.340 |
- Nonexistence might not be a meaningful notion 00:12:25.620 |
is finite automata, so maybe the whole of existence 00:12:32.300 |
that has the properties that it can contain us. 00:12:35.060 |
- What does it mean to be a superposition of finite, 00:12:38.140 |
so I understand, superposition of all possible rules. 00:12:43.060 |
- Imagine that every automaton is basically an operator 00:12:45.980 |
that acts on some substrate, and as a result, 00:12:51.580 |
- I have no idea to know, so it's basically-- 00:13:01.780 |
- Still, doesn't make sense to me the why that exists at all. 00:13:04.740 |
I could just sit there with a beer or a vodka 00:13:13.100 |
This might be the wrong direction to ask into this, 00:13:15.820 |
so there could be no relation in the why direction 00:13:22.420 |
It doesn't mean that everything has to have a purpose 00:13:25.060 |
- So we mentioned some philosophers in that earlier, 00:13:45.860 |
we basically came up with a new form of government 00:13:48.740 |
that didn't have a good sense of this new organism 00:13:56.260 |
So the universities went on through modernism 00:13:59.980 |
At the same time, democracy failed in Germany 00:14:06.580 |
as Stalinism burned down intellectual traditions in Russia. 00:14:09.820 |
And Germany, both Germanys have not recovered from this. 00:14:12.820 |
Eastern Germany had this vulgar dialectic materialism, 00:14:16.460 |
and Western Germany didn't get much more edgy 00:14:23.280 |
and killing off and driving out the Jews didn't help. 00:14:34.180 |
- There's also this thing that in some sense, 00:14:37.620 |
the low-hanging fruits in philosophy were mostly reaped. 00:14:48.180 |
So to understand that the parts of mathematics 00:15:01.700 |
Physicists checked out the code libraries for mathematics 00:15:11.860 |
- So basically, Gödel himself, I think, didn't get it yet. 00:15:18.260 |
Cantor's set-theoretic experiments in mathematics 00:15:22.060 |
And he noticed that with the current semantics, 00:15:37.420 |
And because Gödel strongly believed in these semantics 00:15:40.260 |
and more than in what he could observe and so on, 00:15:46.700 |
because in some sense, he felt that the world 00:15:48.260 |
has to be implemented in classical mathematics. 00:15:53.780 |
I think that Turing could see that the solution 00:16:04.100 |
It's also a function, but it's the same thing. 00:16:07.940 |
And in computation, a function is only a value 00:16:12.140 |
And if you cannot compute the last digit of pi, 00:16:15.540 |
You can plug this function into your local sun, 00:16:22.140 |
But it also means that there can be no process 00:16:27.940 |
that depends on having known the last digit of pi. 00:16:39.340 |
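To make that point concrete — that in computation a function like pi is only a value to the degree it has been computed — here is a minimal Python sketch, purely illustrative and not part of the conversation, that exposes pi as a stream of rational partial sums rather than a finished number:

```python
from fractions import Fraction

# Pi exists here only as a stream of ever-better rational approximations
# (partial sums of the Leibniz series 4 * (1 - 1/3 + 1/5 - ...)).
# No finite amount of computation ever yields a "last digit".
def pi_approximations():
    total, sign, k = Fraction(0), 1, 0
    while True:
        total += Fraction(sign, 2 * k + 1)
        sign, k = -sign, k + 1
        yield 4 * total

gen = pi_approximations()
first_five = [float(next(gen)) for _ in range(5)]  # 4.0, 2.666..., 3.466..., 2.895..., 3.339...
```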
- So I think putting computation at the center 00:16:51.580 |
that Minsky started later, like 30 years later. 00:16:58.180 |
I didn't know there's any connection between Turing and- 00:17:36.060 |
"by starting with those that are so formalizable 00:17:45.900 |
"so it's very hard to say something meaningful 00:17:52.500 |
than what our brain is casually doing all the time 00:17:59.300 |
so it's mostly starting from natural languages 00:18:03.540 |
And the hope is that mathematics and philosophy 00:18:07.460 |
And Wittgenstein was trying to make them meet. 00:18:09.660 |
And he already understood that, for instance, 00:18:11.260 |
you could express everything with the NAND calculus, 00:18:13.420 |
that you could reduce the entire logic to NAND gates 00:18:34.300 |
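A minimal Python sketch of the NAND-completeness point being credited to Wittgenstein here: every other propositional connective can be rebuilt from a single NAND gate (illustrative code, not from the conversation):

```python
# Everything in propositional logic can be rebuilt from a single NAND gate.
def nand(a, b):
    return not (a and b)

def not_(a):        return nand(a, a)
def and_(a, b):     return not_(nand(a, b))
def or_(a, b):      return nand(not_(a), not_(b))
def implies(a, b):  return or_(not_(a), b)

# Sanity check over all truth assignments.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```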
It was mostly his work in decrypting the German codes 00:18:38.900 |
that made him famous or gave him some notoriety. 00:18:41.420 |
But this saint status that he has to computer science 00:18:48.740 |
Do you think of computation and computer science, 00:19:26.620 |
I try to understand what they must have thought originally 00:19:29.900 |
or what their teachers or their teacher's teachers 00:19:39.060 |
I'm usually late to the party by say 400 years. 00:19:54.480 |
of your upbringing was that you were not tethered, 00:19:59.940 |
or in general maybe something within your mind, 00:20:12.040 |
We're kind of, you know, the education system 00:20:26.980 |
Even in your work today, you seem to not care 00:20:42.240 |
people seem to value as a thing you put on your CV 00:20:45.960 |
You're a little bit more outside of that world, 00:20:59.620 |
we might be able to solve some of the bigger problems 00:21:12.320 |
and visit this tradition into a particular school. 00:21:15.220 |
If everybody comes up with their own paradigms, 00:21:17.620 |
the whole thing is not cumulative as an enterprise, right? 00:21:21.160 |
So in some sense, you need a healthy balance. 00:21:24.920 |
and you need people that work within given paradigms. 00:21:27.320 |
Basically, scientists today define themselves 00:21:30.720 |
And it's almost a disease that we think as a scientist, 00:21:34.120 |
somebody who was convinced by their guidance counselor 00:21:38.240 |
that they should join a particular discipline 00:21:42.980 |
And then they are lucky enough and privileged enough 00:21:46.620 |
And then their name will show up on influential papers. 00:21:50.400 |
But we also see that there are diminishing returns 00:21:54.320 |
And when our field, computer science and AI started, 00:22:04.560 |
either don't have interesting opinions at all, 00:22:17.480 |
if you think that this is the best way to make progress. 00:22:23.200 |
If somebody else can do it, why should I do it? 00:22:28.880 |
lead to a strong AI, why should I be doing it? 00:22:36.240 |
Or read interesting books or write some and have fun. 00:22:46.460 |
then it's required to think outside of the box. 00:22:59.320 |
So you have to be willing to ask new questions 00:23:02.080 |
and design new methods whenever you want to answer them. 00:23:05.080 |
And you have to be willing to dismiss the existing methods 00:23:19.980 |
in the early days, when would you say for you 00:23:22.760 |
was the dream, before we dive into the discussions 00:23:31.060 |
or maybe to create human level intelligence born for you? 00:23:43.800 |
If you would change the acronym of AI into that, 00:23:48.080 |
It would not change anything what they're doing. 00:23:51.720 |
and many of the statistical models are more advanced 00:23:57.800 |
And it's pretty good work, it's very productive. 00:24:00.120 |
And the other aspect of AI is philosophical project. 00:24:25.440 |
as if it's the mere, it's the muck of existence, 00:24:50.080 |
what do you mean by the philosophical project? 00:24:55.160 |
is the result of trying to solve general problems. 00:24:59.400 |
So intelligence, I think, is the ability to model. 00:25:02.320 |
It's not necessarily goal-directed rationality 00:25:05.240 |
or something, many intelligent people are bad at this. 00:25:14.060 |
and be able to predict the next set of patterns, right? 00:25:23.680 |
so you make these models for a particular purpose 00:25:32.980 |
but by itself, it's just the ability to make models. 00:25:46.840 |
despite you perceiving yourself as wanting different things. 00:26:02.040 |
to certain situations and how you deal with yourself 00:26:26.280 |
If that system is able to explain what it is, 00:26:35.760 |
So the test that Turing was administering in a way, 00:26:40.840 |
but he didn't express it yet in the original 1950 paper, 00:26:56.040 |
We don't yet know if we are generally intelligent. 00:26:58.360 |
Basically, you win the Turing test by building an AI. 00:27:09.040 |
The Turing test is basically a test of the conjecture 00:27:21.000 |
Do you think this kind of emergent self-awareness 00:27:24.040 |
is one of the fundamental aspects of intelligence? 00:27:43.400 |
So self-awareness and intelligence are not the same thing. 00:27:52.840 |
So you don't need to be very good at solving puzzles 00:27:55.240 |
if the system that you are already implements the solution. 00:28:23.680 |
- So I call this ability to make sense of the world 00:28:32.400 |
And I would distinguish sentience from intelligence 00:28:35.160 |
because sentience is possessing certain classes of models. 00:28:39.680 |
And intelligence is a way to get to these models 00:28:53.200 |
that we just said we may not be able to answer? 00:29:04.000 |
- So models, I think it's useful as examples, 00:29:07.640 |
very popular now, neural networks form representations 00:29:20.160 |
When you say models and look at today's neural networks, 00:29:23.640 |
what are the difference of how you're thinking about 00:29:25.800 |
what is intelligence in saying that intelligence 00:29:35.720 |
is the representation adequate for the domain 00:29:44.320 |
So basically, are you modeling the correct domain? 00:29:52.600 |
And I think that I'm not saying anything new here. 00:30:01.480 |
And so one aspect that we are missing is unified learning. 00:30:07.480 |
that everything that we sense is part of the same object, 00:30:15.000 |
So the experience of the world that we are embedded in 00:30:17.160 |
is not a secret direct wire to physical reality. 00:30:22.360 |
that we can never experience or get access to. 00:30:31.960 |
and the relationship between the patterns that we discover 00:30:36.600 |
So at some point in our development as a nervous system, 00:30:41.600 |
we discover that everything that we relate to in the world 00:30:47.000 |
in the same three-dimensional space by and large. 00:30:50.240 |
We now know in physics that this is not quite true. 00:31:05.080 |
And this is, I think, what gave rise to this intuition 00:31:08.000 |
of res extensa, of this material world, this material domain. 00:31:19.640 |
- Physics engine in which we are embedded, I love that. 00:31:29.120 |
which is the real world which you can never get access to. 00:31:43.920 |
Can you talk about what is dualism, what is idealism, 00:32:03.520 |
- So the particular trajectory that mostly exists 00:32:05.960 |
in the West is the result of our indoctrination 00:32:14.280 |
And for better or worse, it has created or defined 00:32:18.160 |
many of the modes of interaction that we have 00:32:22.000 |
but it has also in some sense scarred our rationality. 00:32:26.680 |
And the intuition that exists, if you would translate 00:32:31.480 |
the mythology of the Catholic church into the modern world 00:32:34.960 |
is that the world in which you and me interact 00:32:37.680 |
is something like a multiplayer role-playing adventure. 00:32:41.480 |
And the money and the objects that we have in this world, 00:32:45.200 |
Or as Eastern philosophers would say, it's Maya. 00:32:49.200 |
It's just stuff that appears to be meaningful 00:32:57.120 |
It's basically the identification with the needs 00:33:08.800 |
And this existed before, but eventually the natural shape 00:33:12.080 |
of God is the platonic form of the civilization 00:33:15.980 |
It's basically the superorganism that is formed 00:33:20.800 |
And basically the Catholics used relatively crude mythology 00:33:40.320 |
that spans multiple brains, as opposed to you and myself, 00:33:46.080 |
So in some sense, you can construct a self functionally 00:33:59.140 |
this is one of the nice features of our brains, 00:34:04.480 |
the same piece of software, like God in this case, 00:34:08.120 |
- Yeah, so basically you give everybody a spec 00:34:12.480 |
that are intrinsic to information processing, 00:34:20.360 |
- Okay, so there's this space of ideas that we all share 00:34:27.840 |
the idea is, from Christianity, from religion, 00:34:32.940 |
is that there's a separate thing between the mind-- 00:34:36.240 |
And this real world is the world in which God exists. 00:34:39.800 |
God is the coder of the multiplayer adventure, 00:34:42.120 |
so to speak, and we are all players in this game. 00:34:48.480 |
- But the dualist aspect is because the mental realm 00:34:52.120 |
exists in a different implementation than the physical realm 00:35:00.680 |
in which you and me talk and speak right now, 00:35:03.320 |
then comes a layer of physics and abstract rules and so on, 00:35:07.680 |
and then comes another realm where our souls are 00:35:13.800 |
And this, of course, is a very confused notion 00:35:17.880 |
And it's basically, it's the result of connecting 00:35:24.920 |
- So, okay, I apologize, but I think it's really helpful 00:35:27.880 |
if we just try to define, try to define terms. 00:35:33.240 |
what is materialism for people that don't know? 00:35:35.120 |
- So the idea of dualism in our cultural tradition 00:35:45.000 |
And the physical world is basically causally closed 00:35:48.160 |
and is built on a low-level causal structure. 00:35:53.400 |
that is causally closed that's entirely mechanical. 00:35:56.280 |
And mechanical in the widest sense, so it's computational. 00:36:04.080 |
of how information flows around in this world. 00:36:20.400 |
that is able to perform all the computations. 00:36:30.400 |
to be able to perform each other's computations. 00:36:38.920 |
and idealism is this whole world is just the software? 00:36:46.120 |
because software also comes down to information processing. 00:36:51.920 |
that is real to you and me is this experiential world 00:36:54.600 |
in which things matter, in which things have taste, 00:36:56.800 |
in which things have color, phenomenal content, and so on. 00:36:59.840 |
- Oh, there you are bringing up consciousness, okay. 00:37:02.360 |
- And this is distinct from the physical world, 00:37:04.320 |
in which things have values only in an abstract sense. 00:37:08.720 |
And you only look at cold patterns moving around. 00:37:28.960 |
the material patterns that we see playing out 00:37:32.160 |
are part of the dream that the mind is dreaming. 00:37:49.360 |
And in some sense, I don't think that we should understand, 00:37:56.960 |
but as two different aspects of the same thing. 00:37:59.920 |
So the weird thing is we don't exist in the physical world. 00:38:02.280 |
We do exist inside of a story that the brain tells itself. 00:38:22.560 |
Physical systems are unable to experience anything. 00:38:27.160 |
or for the organism to know what it would be like 00:38:31.680 |
So the brain creates a simulacrum of such a person 00:38:35.080 |
that it uses to model the interactions of the person. 00:38:45.640 |
that the brain is continuously writing and updating. 00:38:50.160 |
you said that we kind of exist in that story. 00:39:06.360 |
I mean, what is this whole thing running on then? 00:39:09.240 |
Is the story, and is it completely, fundamentally impossible 00:39:24.680 |
- So what we can identify as computer scientists, 00:39:27.560 |
we can engineer systems and test our theories this way 00:39:31.360 |
that may have the necessary and sufficient properties 00:39:35.040 |
to produce the phenomena that we are observing, 00:39:42.560 |
that is contained in the skull of this primate here. 00:39:55.680 |
that allow you to interpret what I'm saying, right? 00:39:58.200 |
But we both know that the world that you and me are seeing 00:40:03.080 |
What we are seeing is a virtual reality generated 00:40:05.600 |
in your brain to explain the patterns on your retina. 00:40:11.640 |
Is it, when you have people like Donald Hoffman, 00:40:21.320 |
that interface we have is very far away from anything. 00:40:25.080 |
We don't even have anything close to the sense 00:40:28.680 |
Or is it a very surface piece of architecture? 00:40:32.160 |
- Imagine you look at the Mandelbrot fractal, right? 00:40:34.600 |
This famous thing that Benoit Mandelbrot discovered. 00:40:49.440 |
for complex numbers in the complex number plane 00:41:14.100 |
and you don't have access to where you are in the fractal. 00:41:17.040 |
Or you have not discovered the generator function even. 00:41:23.680 |
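For reference, the generator function being alluded to is tiny: iterate z → z² + c and ask whether the orbit escapes. A rough Python sketch (illustrative only, not part of the conversation) that tests membership and prints a crude ASCII view of the set:

```python
# The generator is tiny: iterate z -> z**2 + c and ask whether the orbit escapes.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # escaped: c is outside the set
            return False
    return True                 # still bounded after max_iter steps

# Crude ASCII rendering of the set in the complex plane.
for im in [y / 10 for y in range(10, -11, -1)]:
    row = ""
    for re in [x / 30 for x in range(-60, 21)]:
        row += "#" if in_mandelbrot(complex(re, im)) else " "
    print(row)
```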
And the spiral moves a little bit to the right. 00:41:36.380 |
that is interpreting things as a two-dimensional space 00:41:39.220 |
and then defines certain irregularities in there 00:41:41.800 |
at a certain scale that it currently observes. 00:41:43.940 |
Because if you zoom in, the spiral might disappear 00:41:50.720 |
And then you discover the spiral moves to the right 00:41:55.380 |
At this point, your model is no longer valid. 00:41:57.680 |
You cannot predict what happens beyond the singularity. 00:42:02.480 |
it hit another spiral and at this point it disappeared. 00:42:11.260 |
that is similar to the one that we come up with 00:42:21.840 |
But it's relatively good to explain the universe 00:42:25.360 |
- But you don't think the tools of computer science, 00:42:30.900 |
see the whole drawing, and get at the basic mechanism 00:42:33.880 |
of how the pattern, the spirals, is generated? 00:42:51.360 |
And maybe you just enumerate all the possible automata 00:42:53.720 |
until you get to the one that produces your reality. 00:42:56.480 |
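A toy Python illustration of "enumerate all the possible automata" at the smallest scale — Wolfram's 256 elementary cellular automaton rules. This is only a sketch of the idea; the "interesting" filter and all parameters are chosen for illustration:

```python
# Brute-force enumeration of Wolfram's 256 elementary cellular automata:
# each rule number encodes the update table for a cell and its two neighbours.
def step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1           # single seed cell
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

# "Enumerate all possible automata" at toy scale: walk through every rule
# and keep the ones whose behaviour is neither all-dead nor all-alive.
interesting = [r for r in range(256)
               if any(0 < sum(row) < len(row) for row in run(r))]
```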
So you can identify necessary and sufficient condition. 00:42:59.440 |
For instance, we discover that mathematics itself 00:43:04.160 |
And then we see that most of the domains of mathematics 00:43:10.520 |
This is what category theory is obsessed about, 00:43:12.680 |
that you can map these different domains to each other. 00:43:20.840 |
And so you can discover what region of this global fractal 00:43:25.680 |
you might be embedded in from first principles. 00:43:28.240 |
But the only way you can get there is from first principles. 00:43:30.560 |
So basically, your understanding of the universe 00:43:33.020 |
has to start with automata and then number theory 00:43:37.060 |
- Yeah, I think, like, Stephen Wolfram still dreams 00:43:39.440 |
that he'll be able to arrive at the fundamental rules 00:43:43.440 |
of the cellular automata, or the generalization of which 00:43:52.120 |
you said in a recent conversation that, quote, 00:43:55.400 |
"Some people think that a simulation can't be conscious 00:44:26.920 |
It's the software that is implemented by your brain. 00:44:29.880 |
And the mind is creating both the universe that we are in 00:44:40.200 |
- Why is that important, that idea of a self? 00:44:43.040 |
Why is that an important feature in the simulation? 00:44:53.120 |
We are side effects of the regulation needs of monkeys. 00:44:59.640 |
is the relationship of an organism to an outside world 00:45:03.880 |
that is in large part also consisting of other organisms. 00:45:08.160 |
And as a result, it basically has regulation targets 00:45:14.520 |
They're basically like unconditional reflexes 00:45:23.060 |
about how the world works and how to interact with it. 00:45:37.360 |
And we find ourselves living inside of feedback loops, 00:45:41.200 |
Consciousness emerges over dimensions of disagreements 00:46:11.580 |
you're completely free and you can enter Nirvana 00:46:15.340 |
- And actually, this is a good time to pause and say, 00:46:18.020 |
thank you to a friend of mine, Gustav Söderström, 00:46:26.380 |
And I think the AI community is actually quite amazing. 00:46:33.860 |
I'm glad the internet exists and YouTube exists 00:46:38.300 |
and then get to your book and study your writing 00:46:46.900 |
in sort of this emergent phenomenon of consciousness 00:46:50.600 |
So what about the hard problem of consciousness? 00:47:03.660 |
the self is an important part of the simulation, 00:47:06.140 |
but why does the simulation feel like something? 00:47:10.060 |
- So if you look at a book by, say, George R.R. Martin 00:47:14.260 |
where the characters have plausible psychology 00:47:18.320 |
because they want to conquer the city below the hill 00:47:27.460 |
It's because it's written into the story, right? 00:47:30.660 |
because there's an adequate model of the person 00:47:37.580 |
So it's basically a story that our brain is writing. 00:47:46.020 |
And it's a model of what the person would feel 00:47:52.700 |
And you and me happen to be this virtual person. 00:47:54.940 |
So this virtual person gets access to the language center 00:48:05.860 |
- You do exist in an almost similar way as me. 00:48:09.580 |
So there are internal states that are less accessible 00:48:18.420 |
And my model might not be completely adequate. 00:48:20.900 |
There are also things that I might perceive about you 00:48:24.120 |
But in some sense, both you and me are some puppets, 00:48:32.260 |
because I can control one of the puppet directly. 00:48:35.060 |
And with the other one, I can create things in between. 00:48:40.700 |
that even leads to a coupling to a feedback loop. 00:48:43.200 |
So we can think things together in a certain way 00:48:47.260 |
But this coupling is itself not a physical phenomenon. 00:48:51.900 |
It's the result of two different implementations 00:48:56.060 |
So are you suggesting, like the way you think about it, 00:49:03.780 |
and we're kind of each mind is a little sub simulation 00:49:28.340 |
So basically when I know something about myself, 00:49:32.060 |
So one part of your brain is tasked with modeling 00:49:36.220 |
- Yes, but there seems to be an incredible consistency 00:49:42.260 |
that there's repeatable experiments and so on. 00:49:54.540 |
There's a lot of fundamental physics experiments 00:50:07.060 |
that are not deterministic are not long lived. 00:50:10.480 |
So if you build a system, any kind of automaton, 00:50:23.700 |
So basically, if you see anything that is complex 00:50:25.820 |
in the world, it's the result of usually of some control, 00:50:33.560 |
that don't give rise to certain harmonic patterns 00:50:35.900 |
and so on, they tend to get weeded out over time. 00:50:49.780 |
that is very tightly controlled and controllable. 00:50:52.640 |
So it's going to have lots of interesting symmetries 00:51:04.060 |
that our mind is simulation that's constructing 00:51:11.500 |
how that fits with the entirety of the universe. 00:51:15.460 |
You're saying that there's a region of this universe 00:51:18.100 |
that allows enough complexity to create creatures like us, 00:51:30.540 |
Is the mind the starting point, the universe is emergent? 00:51:34.140 |
Is the universe the starting point, the minds are emergent? 00:51:44.060 |
And I don't see any way to construct an inverse causality. 00:51:47.620 |
- So what happens when you die to your mind simulation? 00:51:53.420 |
So basically the thing that implements myself 00:52:02.000 |
The weird thing is I don't actually have an identity 00:52:17.300 |
but because he is not identifying as a human being. 00:52:25.620 |
that is instantiated in every new generation and you. 00:52:32.220 |
So if you identify with this, you are no longer human 00:52:37.500 |
what dies is only the body of the human that you run on. 00:52:41.060 |
To kill the Dalai Lama, you would have to kill his tradition. 00:52:46.260 |
we realize that we are to a small part like this, 00:52:53.220 |
Or if you spark an idea in the world, something lives on. 00:52:55.860 |
Or if you identify with the society around you. 00:53:01.780 |
- Yeah, so in a sense, you are kind of like a Dalai Lama 00:53:28.180 |
It's basically a representation of different objects 00:53:39.960 |
Is it the ideas that come together to form identity? 00:53:46.540 |
- It's a representation that you can get agency over 00:53:49.740 |
So basically you can choose what you identify with 00:53:53.300 |
- No, but it just seems, if the mind is not real, 00:53:58.300 |
that the birth and death is not a crucial part of it. 00:54:10.200 |
Maybe I'm attached to this whole biological organism, 00:54:24.040 |
Like it feels like it has to be physical to die. 00:54:30.200 |
- The physics that we experience is not the real physics. 00:54:32.760 |
There is no color and sound in the real world. 00:54:35.400 |
Color and sound are types of representations that you get 00:54:38.800 |
if you want to model reality with oscillators. 00:54:41.480 |
So colors and sound in some sense have octaves. 00:54:44.600 |
And it's because they are represented properly 00:54:50.840 |
And colors have harmonics, sounds have harmonics 00:54:53.360 |
as a result of synchronizing oscillators in the brain. 00:54:57.120 |
So the world that we subjectively interact with 00:55:01.960 |
of the representation mechanisms in our brain. 00:55:04.400 |
They are mathematically to some degree universal. 00:55:06.420 |
There are certain regularities that you can discover 00:55:10.520 |
But the patterns that we get, this is not the real world. 00:55:19.600 |
it's consisting of so many molecules and atoms 00:55:40.620 |
that you get by building an infinite series that converges. 00:55:44.000 |
For those parts where it converges, it's geometry. 00:55:46.520 |
For those parts where it doesn't converge, it's chaos. 00:55:51.180 |
through the consciousness that's emergent in our narrative. 00:56:04.340 |
is given by the relationship that a feature has 00:56:12.320 |
The color is given by those aspects of the representation 00:56:15.720 |
or this experiential color where you care about, 00:56:23.620 |
And the dimensions of caring are basically dimensions 00:56:27.020 |
of this motivational system that we emerge over. 00:56:35.020 |
Like where does the, maybe we can even step back 00:56:38.340 |
and ask the question of what is consciousness 00:56:42.740 |
What do you, how do you think about consciousness? 00:56:47.340 |
- I think that consciousness is largely a model 00:56:58.100 |
largely work by building chains of weighted sums 00:57:11.900 |
and adjusting the weights in these weighted sums. 00:57:15.940 |
And you can approximate most polynomials with this 00:57:21.300 |
But the price is you need to change a lot of these weights. 00:57:24.740 |
Basically the error is piped backwards into the system 00:57:28.020 |
until it accumulates at certain junctures in the network 00:57:34.900 |
this is where you had the actual error in the network, 00:57:40.060 |
and our brains don't have enough time for that 00:57:46.520 |
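As a concrete picture of the "chains of weighted sums with the error piped backwards" being described, here is a minimal two-layer network trained by plain backpropagation in NumPy. This is an illustrative sketch of the standard technique, not a claim about how brains do it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit a toy function with a two-layer network: two "weighted sums" with a
# nonlinearity in between, trained by piping the output error backwards.
X = rng.uniform(-1, 1, (256, 1))
y = np.sin(3 * X)

W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(X @ W1 + b1)      # first weighted sum + nonlinearity
    pred = h @ W2 + b2            # second weighted sum
    err = pred - y                # error at the output
    # Backpropagate: accumulate the error at each junction and nudge the weights.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```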
So instead what we do is an attention-based learning. 00:57:48.820 |
We pinpoint the probable region in the network 00:57:57.380 |
together with the expected outcome in a protocol. 00:58:06.180 |
this requires a memory of the contents of our attention. 00:58:10.300 |
Another aspect is when I construct my reality, 00:58:13.680 |
So I see things that turn out to be reflections 00:58:21.440 |
gave rise to a present construction of reality. 00:58:27.260 |
to the features that are currently in its focus. 00:58:34.740 |
in part because the attentional system gets trained 00:58:39.140 |
but also in part because your attention lapses 00:58:41.180 |
if you don't pay attention to the attention itself. 00:58:50.300 |
or am I still paying attention to my percept? 00:58:54.780 |
and see whether you're still paying attention. 00:58:56.820 |
And if you have this loop and you make it tight enough 00:59:02.540 |
and the fact that it's paying attention itself 00:59:04.700 |
and makes attention the object of its attention, 00:59:06.960 |
I think this is the loop over which we wake up. 00:59:33.260 |
by allowing the network to focus its attention 00:59:37.320 |
to particular parts of the sentence of each individual. 00:59:44.980 |
by having like a little window into the sentence. 00:59:49.520 |
Do you think that's like a little step towards 00:59:53.740 |
that eventually will take us to the attentional mechanisms 01:00:00.420 |
- Not quite, I think it models only one aspect of attention. 01:00:03.740 |
In the early days of automated language translation, 01:00:07.660 |
there was an example that I found particularly funny 01:00:32.180 |
And it seemed to be the most similar to this program 01:00:42.740 |
And this is a mistake that the transformer model 01:00:48.020 |
And the attentional mechanism in the transformer model 01:00:50.220 |
is basically putting its finger on individual concepts 01:00:53.380 |
and make sure that these concepts pop up later in the text 01:00:57.660 |
and tracks basically the individuals through the text. 01:01:05.620 |
which makes it, for instance, possible to write a text 01:01:09.700 |
and the scientist has a name and has a pronoun 01:01:12.100 |
and it gets a consistent story about that thing. 01:01:15.340 |
What it does not do, it doesn't fully integrate this. 01:01:21.780 |
It does not yet understand that everything that it says 01:01:33.180 |
And tracking identity is an important part of attention, 01:01:36.020 |
but it's a different, very specific attentional mechanism. 01:01:43.060 |
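For comparison, the attentional mechanism in the transformer being discussed is scaled dot-product attention. The minimal NumPy sketch below shows only the generic weighting mechanism, not the identity tracking Bach describes:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query position puts weight on the
    positions whose keys it matches and reads out a mixture of their values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V

# Toy self-attention over a sequence of 5 token vectors of width 4.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 4))
contextualized = attention(tokens, tokens, tokens)
```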
- Just to linger on, what do you mean by identity 01:01:53.540 |
- And in the sense that-- - So space of concepts. 01:02:01.740 |
this word does not only refer to this class of objects, 01:02:07.340 |
to some kind of agent that waves their way through the story 01:02:11.580 |
and is only referred by different ways in the language. 01:02:26.820 |
it learns aspects of this projection mechanism 01:02:32.460 |
- So have you ever seen an artificial intelligence 01:02:41.540 |
that's able to form something where the space of concepts 01:02:48.260 |
So what you're describing, building a knowledge base, 01:02:52.020 |
building this consistent larger and larger sets of ideas 01:02:56.060 |
that would then allow for a deeper understanding. 01:02:58.540 |
- Wittgenstein thought that we can build everything 01:03:09.780 |
So that's why he focused so much on common sense, 01:03:13.100 |
And a project that was inspired by him was Cyc. 01:03:19.740 |
- Yes, of course, ideas don't die, only people die. 01:03:29.140 |
It's just probably not one that is going to converge 01:03:37.620 |
at the end of his life, "Philosophical Investigations," 01:03:42.220 |
So images play an important role in Tractatus. 01:03:46.460 |
turn philosophy into logical programming language, 01:03:48.500 |
to design a logical language in which you can do 01:03:51.100 |
actual philosophy that's rich enough for doing this. 01:03:53.860 |
And the difficulty was to deal with perceptual content. 01:04:08.300 |
is we need more general function approximation. 01:04:38.860 |
is actually more general than what can be expressed 01:05:03.820 |
And I also agree with the beauty of the early Wittgenstein. 01:05:09.660 |
is probably the most beautiful philosophical text 01:05:14.020 |
- But language is not fundamental to cognition 01:05:18.580 |
- So I think that language is a particular way, 01:05:27.540 |
But the languages in which we express geometry 01:05:31.500 |
are not grammatical languages in the same sense. 01:05:35.460 |
They're more general expressions of functions. 01:05:42.780 |
These have a range, these are the variances of the world. 01:05:52.580 |
then other parameters have to have the following values. 01:05:55.860 |
And this is a very early insight in computer science. 01:05:59.820 |
And I think some of the earliest formulations 01:06:05.180 |
is that while it has a measure of whether it's good, 01:06:09.620 |
the amount of tension that you have left in the constraints 01:06:15.980 |
despite having this global measure, to train it. 01:06:18.580 |
Because as soon as you add more than trivially 01:06:27.780 |
And so the solution that Hinton and Sejnowski found 01:06:38.980 |
and only has basically input and output layer. 01:06:41.620 |
But this limits the expressivity of the Boltzmann machine. 01:06:51.940 |
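A minimal sketch of a restricted Boltzmann machine — a visible and a hidden layer only, with no connections within a layer — trained here with one-step contrastive divergence. That training recipe is a later, common one rather than necessarily the original Hinton–Sejnowski procedure, and the toy data and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Restricted Boltzmann machine: one visible and one hidden layer."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_h(self, v):
        p = self._sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = self._sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        ph0, h0 = self.sample_h(v0)        # clamp the data ("wake" phase)
        pv1, v1 = self.sample_v(h0)        # one Gibbs step ("dream" phase)
        ph1, _ = self.sample_h(v1)
        n = len(v0)
        # Data correlations minus model correlations.
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Toy data: two prototype 6-bit patterns the hidden units should discover.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 0, 1, 1, 1]], dtype=float)
rbm = RBM(n_visible=6, n_hidden=2)
for _ in range(2000):
    rbm.cd1_step(data)
```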
to the deep learning models that we're using today, 01:06:54.780 |
even though we don't use Boltzmann machines at this point. 01:07:14.100 |
but we have to use a different approach to make it work. 01:07:17.860 |
And this is, we have to find different networks 01:07:22.180 |
So the mechanism that trains the Boltzmann machine 01:07:24.700 |
and the mechanism that makes the Boltzmann machine 01:07:33.420 |
- The kind of mechanism that we wanna develop, 01:08:02.260 |
you should be able to see this even if it's unlikely. 01:08:15.700 |
how should you change the states of the model 01:08:21.340 |
- Oh, but the space of ideas that are coherent 01:08:35.060 |
- The degree of coherence that you need to achieve 01:08:40.180 |
That is, for instance, politics is very simple 01:08:46.540 |
the more obvious it is how politics should work, right? 01:08:58.000 |
the harder it gets to satisfy all the constraints. 01:09:10.720 |
What's your intuition about what kind of mechanisms 01:09:13.360 |
might we move towards to improve the learning procedure? 01:09:18.360 |
- I think one big aspect is going to be meta-learning, 01:09:21.280 |
and architecture search starts in this direction. 01:09:28.720 |
and a possible solution and implementing the solution, 01:09:32.960 |
And right now we are in the second wave of AI. 01:09:38.960 |
we write an algorithm that automatically searches 01:09:41.800 |
for an algorithm that implements the solution. 01:09:46.200 |
is an algorithm that itself discovers the algorithm 01:09:51.000 |
Go is too hard to implement the solution by hand, 01:10:01.000 |
Find an algorithm that discovers a learning algorithm 01:10:08.800 |
This is one way of looking at what we are doing. 01:10:20.760 |
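A Python caricature of "an algorithm that searches for the algorithm that implements the solution": enumerate candidate learners and keep the one that generalizes best on held-out data. All choices here (polynomial fits, the toy task) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task.
X = rng.uniform(-1, 1, 200)
y = np.sin(3 * X) + rng.normal(0, 0.1, 200)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

def make_learner(degree):
    """Each 'candidate algorithm' here is just a polynomial fit of a given degree."""
    def learn(xs, ys):
        coeffs = np.polyfit(xs, ys, degree)
        return lambda x: np.polyval(coeffs, x)
    return learn

# The outer algorithm searches over candidate learning algorithms
# and keeps whichever one solves the task best on held-out data.
candidates = [make_learner(d) for d in range(1, 12)]
scores = [np.mean((learn(X_train, y_train)(X_val) - y_val) ** 2) for learn in candidates]
best = candidates[int(np.argmin(scores))]
```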
is an individual reinforcement learning agent. 01:10:26.360 |
and in some sense, quite motivated to get fed. 01:10:35.480 |
depends on the context that the neuron exists in, 01:10:39.240 |
which is the electrical and chemical environment 01:10:48.440 |
Or if you see it as a reinforcement learning agent, 01:10:54.720 |
and tries to pipe a signal through the universe 01:11:01.600 |
that it's robustly self-organizing into a brain, 01:11:04.960 |
which means you start out with different neuron types 01:11:07.360 |
that have different priors on which hypothesis to test 01:11:12.240 |
and you put them into different concentrations 01:11:16.440 |
and then you entrain it in a particular order, 01:11:19.880 |
and as a result, you get a well-organized brain. 01:11:22.200 |
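A toy caricature of the "each neuron is a little reinforcement learning agent" picture: a single stochastic unit whose weights get a REINFORCE-style nudge whenever a global reward ("being fed") follows its spike. This is an illustrative sketch, not a biological model, and the task and parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

class NeuronAgent:
    """A single stochastic unit treated as a tiny reinforcement learner."""
    def __init__(self, n_inputs, lr=0.05):
        self.w = rng.normal(0, 0.1, n_inputs)
        self.lr = lr

    def act(self, x):
        p_fire = 1.0 / (1.0 + np.exp(-self.w @ x))
        fired = bool(rng.random() < p_fire)
        return fired, p_fire

    def learn(self, x, fired, p_fire, reward):
        # REINFORCE-style update: strengthen whatever made rewarded spikes more likely.
        self.w += self.lr * reward * (float(fired) - p_fire) * x

# Toy environment: the neuron "gets fed" when it fires exactly when input 0 is active.
neuron = NeuronAgent(n_inputs=3)
for _ in range(5000):
    x = rng.integers(0, 2, 3).astype(float)
    fired, p = neuron.act(x)
    reward = 1.0 if fired == bool(x[0]) else -1.0
    neuron.learn(x, fired, p, reward)
```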
- Yeah, so, okay, so the brain is a meta-learning system 01:11:25.280 |
with a bunch of reinforcement learning agents. 01:11:30.600 |
And what, I think you said, but just to clarify, 01:11:35.600 |
where do the, there's no centralized government 01:11:42.600 |
here's a loss function, here's a loss function. 01:11:44.800 |
Like, what, who is, who says what's the objective-- 01:11:48.680 |
- There are also governments which impose loss functions 01:11:54.500 |
Some areas in your brain get especially rewarded 01:11:58.160 |
If you don't have that, you will get prosopagnosia, 01:12:00.520 |
which basically, the inability to tell people apart 01:12:06.000 |
- And the reason that happens is because it was, 01:12:09.560 |
So like, evolution comes into play here about-- 01:12:11.600 |
- But it's basically an extraordinary attention 01:12:19.200 |
The brain just has an average attention for faces. 01:12:21.440 |
So people with prosopagnosia don't look at faces 01:12:25.780 |
So the level at which they resolve the geometry of faces 01:12:28.440 |
is not higher than the one that, than for cups. 01:12:35.920 |
For you and me, it's impossible to move through a crowd 01:12:40.560 |
And as a result, we make insanely detailed models of faces 01:12:43.200 |
that allow us to discern mental states of people. 01:12:45.560 |
- So obviously we don't know 99% of the details 01:12:49.980 |
of this meta-learning system that's our mind, okay. 01:12:52.540 |
But still we took a leap from something much dumber 01:13:08.480 |
from our ape ancestors to multi-cell organisms? 01:13:17.800 |
as we start to think about how to engineer intelligence, 01:13:21.860 |
is there something we can learn from evolution? 01:13:24.920 |
- In some sense, life exists because of the market 01:13:29.020 |
opportunity of controlled chemical reactions. 01:13:34.060 |
and we win in some areas against this dumb combustion 01:13:37.420 |
because we can harness those entropy gradients 01:13:54.780 |
the entropy gradients much faster than we can run. 01:14:00.780 |
- Yeah, so basically we do this because every cell 01:14:05.020 |
It's like literally a read/write head on a tape. 01:14:08.860 |
And so everything that's more complicated than a molecule 01:14:16.580 |
that needs a Turing machine for its regulation. 01:14:32.260 |
because I realized that what spirit actually means 01:14:35.060 |
is an operating system for an autonomous robot. 01:14:37.900 |
And when the word was invented, people needed this word, 01:14:40.820 |
but they didn't have robots that they built themselves yet. 01:14:43.300 |
The only autonomous robots that were known 01:14:45.420 |
were people, animals, plants, ecosystems, cities, and so on. 01:14:58.660 |
Everything in the plant is in some sense connected 01:15:00.900 |
into some global aesthetics like in other organisms. 01:15:05.860 |
it's a function that tells cells how to behave. 01:15:38.740 |
So it's basically a description of what the plant is doing 01:15:44.420 |
- And the micro states, the physical implementation 01:15:51.260 |
what the plant is doing, the spirit of the plant 01:15:53.540 |
is the software, the operating system of the plant, right? 01:16:02.900 |
So people have spirits, which is their operating system 01:16:24.460 |
to rediscover a term that is pretty much the same thing. 01:16:27.420 |
And I suspect that the differences that we still see 01:16:36.700 |
Like, why do you say that spirit, just to clarify, 01:16:48.100 |
Do you mean the same old traditional idea of a spirit? 01:16:52.100 |
- I try to find out what people mean by spirit. 01:17:03.020 |
you could say the spirit of a society that is long game. 01:17:11.940 |
where you say, if we don't do the following things, 01:17:14.780 |
then the grand, grand, grand, grandchildren of our children 01:17:21.220 |
or you try to maximize the length of the game 01:17:24.740 |
you realize that you're part of a larger thing 01:17:36.620 |
at which we can exist as a species if you want to endure. 01:17:40.220 |
And our culture is not sustaining this anymore. 01:17:43.060 |
We basically made this bet with the Industrial Revolution 01:17:50.380 |
led to a situation in which we depend on the ability 01:17:55.780 |
And since we are not able to do that, as it seems, 01:18:02.060 |
And we realize that it doesn't have a future. 01:18:10.540 |
- Yeah, so you can have this kind of intuition 01:18:16.460 |
but you really mean the spirit of the civilization, 01:18:21.460 |
the entirety of the civilization may not exist for long. 01:18:33.340 |
that the Industrial Revolution was kind of a, 01:18:42.860 |
with the Industrial Revolution, we doomed ourselves. 01:18:57.780 |
over an entropic abyss without land on the other side, 01:19:05.300 |
into this entropic abyss and you have to pay the bill. 01:19:08.420 |
- Okay, Russian is my first language and I'm also an idiot. 01:19:32.660 |
- And entropic, what was the other word you used? 01:19:39.460 |
- Entropic abyss, so many of the things you say are poetic. 01:19:42.540 |
It's hurting my brain. - And also mispronounced. 01:20:00.860 |
so how does that get us into the entropic abyss? 01:20:04.140 |
- So in some sense, we burned 100 million years worth 01:20:17.740 |
we hovered between 300 and 400 million people. 01:20:22.660 |
- And this only changed with the Enlightenment 01:20:27.900 |
And in some sense, the Enlightenment freed our rationality 01:20:35.620 |
It was a process that basically happened in feedback loops, 01:20:39.100 |
so it was not that just one caused the other. 01:20:43.500 |
And the dynamic worked by basically increasing productivity 01:20:47.540 |
to such a degree that we could feed all our children. 01:21:02.700 |
- The definition of poverty is having enough-- 01:21:06.380 |
- So you can have only so many children as you can feed, 01:21:12.380 |
you can basically have as many children as you want 01:21:16.880 |
- So the reason why we don't have as many children 01:21:19.620 |
as we want is because we also have to pay a price 01:21:23.820 |
in the lower source of threat if we have too many. 01:21:29.100 |
and lower, upper class has only a limited number of children 01:21:33.060 |
because having more of them would mean a big economic hit 01:21:43.540 |
if you are basically super rich or if you are super poor. 01:21:51.220 |
And these children are largely not going to die of hunger. 01:21:54.660 |
- So how does this lead us to self-destruction? 01:22:26.900 |
Imagine there would be some very clever microbe 01:22:49.780 |
And basically this big organism would become a vegetable 01:22:52.580 |
that is barely alive and it's going to be very brittle 01:22:55.140 |
and not resilient when the environment changes. 01:23:00.140 |
the one that's actually doing all the using of the, 01:23:05.060 |
- So it relates back to this original question. 01:23:09.020 |
I suspect that we are not the smartest thing on this planet. 01:23:12.740 |
I suspect that basically every complex system 01:23:27.940 |
The problem is that plants don't have a nervous system. 01:23:30.460 |
So they don't have a way to telegraph messages 01:23:32.780 |
over large distances almost instantly in the plant. 01:23:47.460 |
And as a result, if the plant is intelligent, 01:23:50.540 |
it's not going to be intelligent at similar timescales. 01:23:55.980 |
So you suspect we might not be the most intelligent, 01:23:59.820 |
but we're the most intelligent in this spatial scale 01:24:05.300 |
- So basically if you would zoom out very far, 01:24:20.140 |
So basically changed the course of the evolution 01:24:23.020 |
within this ecosystem to make it more efficient 01:24:33.020 |
that are just operating at different timescale 01:24:34.820 |
and are far superior in intelligence than human beings. 01:24:39.380 |
and plants will still be there and they'll be there. 01:24:42.300 |
- Yeah, they also, there's an evolutionary adaptation 01:24:49.380 |
and get stressed, the next generation of mice 01:24:56.220 |
in a natural environment, the mice have probably 01:25:05.300 |
and there will be no mice a few generations from now. 01:25:09.780 |
in five generations from now, basically the mice scale back. 01:25:13.580 |
And a similar thing happens with the predators of mice. 01:25:19.140 |
So in some sense, if the predators are smart enough, 01:25:21.980 |
they will be tasked with shepherding their food supply. 01:25:25.780 |
And maybe the reason why lions have much larger brains 01:25:29.260 |
than antelopes is not so much because it's so hard 01:25:31.820 |
to catch an antelope as opposed to run away from the lion. 01:25:37.740 |
of their environment, more complex than the antelopes. 01:25:40.740 |
- So first of all, just describing that there's a bunch 01:25:45.500 |
may not even be the most special or intelligent 01:25:50.300 |
makes me feel a little better about the extinction 01:25:57.540 |
- Yeah, this is just a nice, we tried it out. 01:26:00.260 |
- The big stain on evolution is not us, it was trees. 01:26:03.660 |
Earth evolved trees before they could be digested again. 01:26:06.540 |
There were no insects that could break all of them apart. 01:26:09.620 |
Cellulose is so robust that you cannot get all of it 01:26:18.060 |
and could no longer be recycled into organisms. 01:26:25.940 |
- Take it out of the ground, put it back into the atmosphere 01:26:32.460 |
when the ecosystems have recovered from the rapid changes 01:26:39.340 |
- And there won't be even a memory of us little apes. 01:26:43.700 |
I suspect we are the first generally intelligent species 01:26:48.140 |
within industrial society because we will leave 01:26:58.980 |
You've kind of suggested that we have a very narrow 01:27:01.980 |
definition of, I mean, why aren't trees more general, 01:27:09.620 |
- If trees were intelligent, then they would be 01:27:12.140 |
at different timescales, which means within 100 years, 01:27:14.700 |
the tree is probably not going to make models 01:27:16.740 |
that are as complex as the ones that we make in 10 years. 01:27:19.820 |
- But maybe the trees are the ones that made the phones. 01:27:30.600 |
The first cell only split, right, and kept dividing. 01:27:32.940 |
And every cell in our body is still an instance 01:27:36.060 |
of the first cell that split off from that very first cell. 01:27:38.500 |
There was only one cell on this planet as far as we know. 01:27:41.220 |
And so the cell is not just a building block of life. 01:27:53.700 |
this little particular branch of it, which is us humans, 01:27:59.500 |
and maybe the exponential growth of technology 01:28:09.260 |
So some people worry about genetic manipulation. 01:28:13.940 |
worry about either dumb artificial intelligence 01:28:16.980 |
or super intelligent artificial intelligence destroying us. 01:28:26.740 |
If you were a betting man, what would you bet on 01:28:34.940 |
- So it's very likely that nothing that we bet on matters 01:28:47.540 |
- So it's also not clear if we as a species go extinct. 01:28:54.700 |
So the thing that will change is there will be probably 01:29:04.300 |
in 100 years from now because of the geographic changes 01:29:07.340 |
and so on and the changes in the food supply. 01:29:09.660 |
It's quite likely that many areas of the planet 01:29:13.140 |
will only be livable with a closed cooling chain 01:29:19.020 |
and in subtropical climates that are now quite pleasant 01:29:27.260 |
- Cooling chain, so you honestly, wow, cooling chain, 01:29:35.140 |
about the effects of global warming that would-- 01:29:41.700 |
you have basically three months in the summer 01:29:49.940 |
And if the air conditioning would stop for a few days, 01:29:53.700 |
then in many areas, you would not be able to survive. 01:30:06.060 |
- I imagine that people use it when they describe 01:30:11.100 |
If you break the cooling chain and this thing starts to thaw, 01:30:14.100 |
you're in trouble and you have to throw it away. 01:30:19.700 |
It's like calling a city a closed social chain 01:30:24.980 |
I mean, the locality of it is really important. 01:30:27.700 |
in a climatized room, you go to work in a climatized car, 01:30:36.340 |
which you run from your car to the supermarket, 01:30:38.620 |
but you have to make sure that your temperature 01:30:41.540 |
does not approach the temperature of the environment. 01:30:44.260 |
- And the crucial thing is the wet bulb temperature. 01:30:47.320 |
It's what you get when you take a wet clothes 01:30:53.660 |
and then you move it very quickly through the air, 01:31:10.340 |
- And which means if the outside world is dry, 01:31:14.020 |
you can still cool yourself down by sweating, 01:31:27.220 |
even if you try to be in the shade and so on, you'll die, 01:31:34.140 |
And this itself, as long as you maintain civilization 01:31:37.020 |
and you have energy supply and you have food trucks 01:31:39.420 |
coming to your home that are climatized, everything is fine. 01:31:42.060 |
But what if you lose large-scale open agriculture 01:31:52.260 |
and you have a lot of extreme weather events. 01:31:55.220 |
So you need to grow most of your food maybe indoor 01:31:58.740 |
or you need to import your food from certain regions. 01:32:01.940 |
And maybe you're not able to maintain the civilization 01:32:04.900 |
throughout the planet to get the infrastructure 01:32:09.180 |
- Right, but there could be significant impacts 01:32:14.100 |
There could be wars over resources and so on. 01:32:16.700 |
But ultimately, do you not have, not a faith, 01:32:24.380 |
of technological innovation to help us prevent 01:32:29.380 |
some of the worst damages that this condition can create? 01:32:34.420 |
So as an example, as a almost out there example, 01:32:39.500 |
is the work that SpaceX and Elon Musk is doing 01:32:52.620 |
- But of course, what Elon Musk is trying on Mars 01:32:57.340 |
because Mars looks much worse than Earth will look like 01:33:00.340 |
after the worst outcomes of global warming imaginable, right? 01:33:11.460 |
throughout history since the Industrial Revolution 01:33:15.820 |
technological innovation with some kind of target 01:33:18.500 |
and what ends up happening is totally unexpected, 01:33:22.460 |
So trying to terraform or trying to colonize Mars, 01:33:27.460 |
extremely harsh environment, might give us totally new ideas 01:33:35.140 |
of this closed cooling circuit that empowers the community. 01:33:40.140 |
So it seems like there's a little bit of a race 01:33:45.000 |
between our open-ended technological innovation 01:33:48.980 |
of this communal operating system that we have 01:33:53.980 |
and our general tendency to want to overuse resources 01:34:03.400 |
You don't think technology can win that race? 01:34:11.660 |
the US is stagnating since the 1970s, roughly, 01:34:23.960 |
The things that we are doing, so after the invention 01:34:27.280 |
of the microprocessor was a major thing, right? 01:34:30.340 |
The miniaturization of transistors was really major. 01:34:40.560 |
- Well, hold on a second. - We had gradual changes 01:34:42.120 |
of scaling things from CPUs into GPUs and things like that. 01:35:12.480 |
that's a concept of a time when you were sitting there 01:35:16.160 |
by candlelight as individual consumers of knowledge. 01:35:19.100 |
What about the impact that we're now in the middle of 01:35:22.280 |
and might not be understanding, of Twitter, of YouTube? 01:35:35.160 |
sort of two dumb apes, are coming up with a new, 01:35:40.500 |
and there's 200 other apes listening right now, 01:35:45.340 |
and that effect, it's very difficult to understand 01:35:50.360 |
That might be bigger than any of the advancements 01:35:52.700 |
of the microprocessor or any of the Industrial Revolution, 01:36:03.260 |
it allows good ideas to reach millions much faster, 01:36:08.140 |
and the effect of that, that might be the new, 01:36:19.340 |
that will multiply across huge amounts of people, 01:36:26.620 |
and they'll say something, and then they'll write a paper. 01:36:31.740 |
- Yeah, we should have billions of von Neumanns 01:36:34.980 |
right now, and Turings, and we don't for some reason. 01:36:37.820 |
I suspect the reason is that we destroy our attention span. 01:36:40.820 |
Also, the incentives are, of course, different. 01:36:47.500 |
is because you and me don't have the attention span 01:36:51.260 |
and you guys probably don't have the attention span 01:36:54.860 |
- But I guarantee you, they're still listening. 01:37:04.780 |
are still listening, so there is an attention span. 01:37:15.740 |
- There's something that social media could be doing 01:37:19.460 |
I think the endgame of social media is a global brain, 01:37:22.380 |
and Twitter is, in some sense, a global brain 01:37:27.540 |
and as a result, is caught in a permanent seizure. 01:37:30.620 |
It's also, in some sense, a multiplayer role-playing game, 01:37:34.340 |
and people use it to play an avatar that is not like them, 01:37:46.500 |
- Yeah, the incentives, and just our natural biological, 01:37:55.420 |
like I consider, I try to be very kind of Zen-like 01:38:00.220 |
and minimalist and not be influenced by likes and so on, 01:38:03.100 |
but it's probably very difficult to avoid that 01:38:30.300 |
and we're kind of individual RL agents in this game, 01:38:34.580 |
'cause there's not really a centralized control. 01:38:36.380 |
Neither Jack Dorsey nor the engineers at Twitter 01:38:53.500 |
And our brain has solved this problem to some degree. 01:39:17.580 |
So imagine that you have something like Reddit, 01:39:20.460 |
or something like Facebook, and something like Twitter, 01:39:23.620 |
and you think about what they have in common. 01:39:26.660 |
they're companies that in some sense own a protocol. 01:39:42.940 |
And now imagine that you take these components 01:39:46.740 |
and you do it in some sense like communities, 01:39:54.100 |
and match their protocols and design new ones. 01:40:16.380 |
So can individual human beings build enough intuition 01:40:20.220 |
- This itself can become part of the protocol. 01:40:22.260 |
So for instance, it could be in some communities, 01:40:24.900 |
it will be a single person that comes up with these things, 01:40:31.600 |
that has some interesting weighted voting, who knows? 01:40:46.100 |
let's not make an assumption about this thing 01:40:52.460 |
whether the right solution will be people designing this 01:40:57.100 |
whether you want to enforce compliance by social norms 01:41:03.540 |
or with AI that goes through the posts of people 01:41:08.540 |
This is something maybe you need to find out. 01:41:10.860 |
And so the idea would be if you let the communities evolve 01:41:16.980 |
that you are incentivizing the most sentient communities, 01:41:21.140 |
the ones that produce the most interesting behaviors 01:41:25.300 |
that allow you to interact in the most helpful ways 01:41:29.500 |
So you have a network that gives you information 01:41:37.620 |
It allows you to basically bring the best of you 01:41:48.460 |
So, but the key process of that with incentives 01:41:52.500 |
and evolution is things that don't adapt themselves 01:41:57.500 |
to effectively get the incentives have to die. 01:42:03.020 |
And the thing about social media is communities 01:42:06.020 |
that are unhealthy or whatever you want to define 01:42:11.260 |
One of the things that people really get aggressive about, 01:42:13.740 |
protest aggressively, is when they're censored, 01:42:17.940 |
I don't know much about the rest of the world, 01:42:22.980 |
the idea of censorship is really painful in America. 01:42:52.460 |
they should be blocked away, well, locked away, blocked. 01:42:55.580 |
- Important thing is who decides that you are a good member? 01:43:00.780 |
- And what is the outcome of the process that decides it? 01:43:04.380 |
Both for the individual and for society at large. 01:43:07.380 |
For instance, if you have a high trust society, 01:43:14.220 |
undermining trust because it's basically punishing people 01:43:23.740 |
And the opposite, if you have a low trust society, 01:43:30.860 |
from a relatively high trust or mixed trust society 01:43:33.300 |
to a low trust society, so surveillance will increase. 01:43:39.780 |
They are implementations that run code on your brain 01:43:42.720 |
and change your reality and change the way you interact 01:43:46.960 |
And some of the beliefs are just public opinions 01:43:58.720 |
but still they prefer to live in some cultures over others, 01:44:03.200 |
And it turns out that the cultures are defined 01:44:07.040 |
And these rules of interaction lead to different results 01:44:12.880 |
you get different outcomes in different societies. 01:44:19.480 |
when people do not have a commitment to a shared purpose. 01:44:22.800 |
And our societies probably need to rediscover 01:44:31.260 |
So in some sense, the US is caught in a conundrum 01:44:43.000 |
And the solutions that the US has found so far 01:44:45.080 |
are very crude because it's a very young society 01:44:49.240 |
It seems to me that the US will have to reinvent itself. 01:45:00.280 |
do you think we as a species should be evolving with, 01:45:04.320 |
What do you think will work well as a system? 01:45:13.480 |
Some people argue that communism is the best. 01:45:49.080 |
What kind of government system do you think is good? 01:45:52.700 |
- Ideally, government should not be perceivable. 01:45:59.620 |
The more you notice the influence of the government, 01:46:13.780 |
on your payout metrics to make your Nash equilibrium 01:46:19.960 |
So you have these situations where people act 01:46:26.680 |
everybody does the thing that's locally the best for them, 01:46:37.440 |
between what I want to have for the global good 01:46:40.360 |
So for instance, if I think that we should fly less 01:46:42.440 |
and I stay at home, there's not a single plane 01:46:45.020 |
that is going to not start because of me, right? 01:46:53.000 |
to have a government that is sharing this idea 01:46:56.600 |
that we should fly less and is then imposing a regulation 01:46:59.280 |
that for instance, makes flying more expensive. 01:47:01.460 |
And it gives incentives for inventing other forms 01:47:08.640 |
putting less strain on the environment, for instance. 01:47:12.440 |
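Joscha's point about a government imposing offsets on payout metrics can be made concrete with a toy symmetric game: flying is privately convenient but imposes a shared cost, so the unregulated Nash equilibrium is "everybody flies"; a levy on flying moves the equilibrium to "everybody stays". All numbers here are invented purely for illustration:

```python
from itertools import product

ACTIONS = ("fly", "stay")

def payoff(my_action, other_action, flight_tax=0.0):
    """Toy payoffs: flying is privately convenient but imposes a shared
    environmental cost on both players."""
    base = {"fly": 4.0, "stay": 2.0}
    fliers = [my_action, other_action].count("fly")
    p = base[my_action] - 1.5 * fliers
    if my_action == "fly":
        p -= flight_tax
    return p

def pure_nash(flight_tax=0.0):
    """Return all pure-strategy Nash equilibria of the symmetric 2x2 game."""
    eqs = []
    for a, b in product(ACTIONS, repeat=2):
        a_ok = all(payoff(a, b, flight_tax) >= payoff(alt, b, flight_tax) for alt in ACTIONS)
        b_ok = all(payoff(b, a, flight_tax) >= payoff(alt, a, flight_tax) for alt in ACTIONS)
        if a_ok and b_ok:
            eqs.append((a, b))
    return eqs

print(pure_nash(flight_tax=0.0))  # [('fly', 'fly')]: individually rational, collectively worse
print(pure_nash(flight_tax=1.0))  # [('stay', 'stay')]: the offset moves the equilibrium
```

The point of the sketch is only that changing payoffs, rather than preferences, changes which behavior is individually rational.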
- So there's so much optimism in so many things you describe, 01:47:19.560 |
So that's not 100% probability, nothing in this world is. 01:47:23.800 |
So what's the trajectory out of self-destruction, 01:47:29.880 |
- I suspect that in some sense we are both too smart 01:47:32.460 |
and not smart enough, which means we are very good 01:47:36.480 |
And at the same time, we are unwilling to submit 01:47:39.460 |
to the imperatives that we would have to follow 01:47:48.380 |
If you were unable to solve everything technologically, 01:47:51.180 |
you can probably understand how high the child mortality 01:47:58.700 |
to adapt to a slowly changing ecosystemic environment. 01:48:01.640 |
So you could in principle compute all these things, 01:48:06.600 |
But if you cannot do this because you are like me 01:48:10.600 |
and you have children, you don't want them to die, 01:48:17.580 |
Even if it means that within a few generations 01:48:20.560 |
we have enormous genetic drift and most of us have allergies 01:48:23.720 |
as a result of not being adapted to the changes 01:48:27.600 |
- That's for now, I say technologically speaking, 01:48:29.720 |
we're just very young, 300 years into the Industrial Revolution, 01:48:37.480 |
and not being murdered for the good of society, 01:48:40.440 |
but that might be a very temporary moment of time. 01:48:46.160 |
So like you said, we're both too smart and not smart enough. 01:48:50.680 |
- We are probably not the first human civilization 01:48:55.500 |
that allows us to efficiently overgraze our resources. 01:49:01.840 |
we can compensate this because if we have eaten 01:49:04.280 |
all the grass, we will find a way to grow mushrooms. 01:49:08.080 |
- But it could also be that the ecosystems tip. 01:49:10.280 |
And so what really concerns me is not so much 01:49:12.760 |
the end of the civilization because we will invent a new one. 01:49:16.000 |
But what concerns me is the fact that, for instance, 01:49:24.360 |
because of ocean acidification and cyanobacteria take over. 01:49:27.680 |
And as a result, we can no longer breathe the atmosphere. 01:49:32.300 |
So basically a major reboot of most complex organisms 01:49:38.080 |
I don't know what the percentage for this possibility is, 01:49:47.560 |
And so Danny Hillis suggests that, for instance, 01:49:49.800 |
we may be able to put chalk into the stratosphere 01:49:55.760 |
Maybe this is sufficient to counter the effects 01:50:03.760 |
I have no idea how the future is going to play out 01:50:11.440 |
All our cousin species, the other hominids are gone. 01:50:22.700 |
and slow the, so try to contain the technological process 01:50:27.700 |
that leads to the overconsumption of resources? 01:50:35.120 |
You get born into a sustainable agricultural civilization, 01:50:38.760 |
300, maybe 400 million people on the planet tops. 01:50:52.520 |
You cannot travel to other places in the world. 01:50:55.440 |
There is no interesting intellectual tradition 01:50:59.140 |
So you would not discover human completeness probably 01:51:04.380 |
And the alternative is you get born into an insane world. 01:51:09.520 |
because it has just burned 100 million years worth of trees 01:51:20.260 |
and it looks like we are not going to miss it. 01:51:24.120 |
And most of the counter arguments sound like denial to me. 01:51:29.240 |
And the other thing is we are born on this Titanic. 01:51:31.640 |
Without this Titanic, we wouldn't have been born. 01:51:39.040 |
And we are not responsible for this happening. 01:51:52.260 |
what incentive structures we want to be exposed to. 01:51:54.840 |
We have relatively little agency in the entire thing. 01:51:57.880 |
Humanity has relatively little agency in the whole thing. 01:52:02.760 |
and everybody is frantically trying to push some buttons. 01:52:13.080 |
- Is it possible that artificial intelligence 01:52:19.960 |
So, there's a lot of worry about existential threats 01:52:26.080 |
But what AI also allows, in general forms of automation, 01:52:29.600 |
allows the potential of extreme productivity growth 01:52:49.000 |
to the same kind of ideals of closer to nature 01:52:53.100 |
that's represented in hunter-gatherer societies. 01:53:05.940 |
- I think it's not fun to be very close to nature 01:53:17.880 |
basically the forests that don't have anything in them 01:53:25.680 |
I think the niceness of being close to nature 01:53:37.160 |
not just your goal, but your whole existence, 01:53:44.520 |
I'm not just romanticizing, I can just speak for myself. 01:53:48.680 |
I am self-aware enough that that is a fulfilling existence. 01:53:53.680 |
That's one that's very-- - I personally prefer 01:53:56.120 |
to be in nature and not fight for my survival. 01:54:03.540 |
and being hunted by animals and having open wounds 01:54:09.480 |
Yes, I and you, just as you said, would not choose it. 01:54:20.840 |
basically if your brain is wired up in such a way 01:54:24.080 |
that you get rewards optimally in such an environment, 01:54:32.780 |
basically people are more happy in such an environment 01:54:34.560 |
because it's what you largely have evolved for. 01:54:42.600 |
So there is probably something like an intermediate stage 01:54:49.240 |
than they would be if they would have to fend for themselves 01:54:59.280 |
a big Mordor in which we run through concrete boxes 01:55:16.320 |
returning to AI, let me ask a romanticized question. 01:55:20.840 |
What is the most beautiful to you, silly ape, 01:55:27.160 |
in the development of artificial intelligence, 01:55:33.840 |
- If you built an AI, it probably can make models 01:55:37.760 |
at an arbitrary degree of detail of the world. 01:55:41.680 |
And then it would try to understand its own nature. 01:55:47.160 |
we have competitions where we will let the AIs wake up 01:55:52.560 |
and we measure how many movements of the Rubik's cube 01:56:08.360 |
and remembers Lex and Yosha sitting in a hotel room, 01:56:13.760 |
that led to the development of general intelligence? 01:56:16.200 |
- So we're a kind of simulation that's running 01:56:18.280 |
in an AI system that's trying to understand itself. 01:56:42.900 |
I mean, what, why, do you think there is an answer? 01:56:58.760 |
to understand the why of what, you know, understand itself. 01:57:11.160 |
Is the continuous trying of understanding itself? 01:57:18.120 |
because they're well adjusted enough to not care. 01:57:20.960 |
And the reason why people like you and me care about it 01:57:23.720 |
probably has to do with the need to understand ourselves. 01:57:26.680 |
It's because we are in fundamental disagreement 01:57:31.360 |
- What's the disagreement? - They look down on me 01:57:32.480 |
and they see, oh my God, I'm caught in a monkey. 01:57:35.880 |
Some people are unhappy-- - That's the feeling, right? 01:57:38.940 |
with the entire universe that I find myself in. 01:57:41.340 |
- Oh, so you don't think that's a fundamental aspect 01:57:45.560 |
of human nature that some people are just suppressing? 01:58:07.160 |
while fundamentally your brain is confused by that, 01:58:09.840 |
by creating an illusion, another layer of the narrative 01:58:20.160 |
with the government right now is the most important thing 01:58:45.020 |
of our human mind springs is this fear of mortality 01:58:53.960 |
And then you construct illusions on top of that. 01:59:03.760 |
that this worry of the big existential questions 01:59:08.120 |
is actually fundamental as the existentialist thought 01:59:13.400 |
- No, I think that the fear of death only plays a role 01:59:18.720 |
The thing is that minds are software states, right? 01:59:30.860 |
I thought that was for this particular piece of software 01:59:38.020 |
- The maintenance of the identity is not terminal. 01:59:42.580 |
You maintain your identity so you can serve your meaning, 01:59:45.740 |
so you can do the things that you're supposed to do 01:59:54.940 |
even though they cannot quite put their finger on it, 02:00:31.580 |
of the quantum mechanical wave function world 02:00:38.380 |
then the idea of mortality seems to be fuzzy as well. 02:00:45.820 |
- The fuzzy idea is the one of continuous existence. 02:01:00.820 |
is the illusion that you have memories about him. 02:01:19.900 |
depends entirely on your beliefs and your own continuity. 02:01:30.140 |
but you can stop being afraid of your mortality 02:01:32.900 |
because you realize you were never continuously existing 02:01:36.820 |
- Well, I don't know if I'd be more terrified 02:01:49.460 |
- I can't turn off myself. - You can turn it off. 02:01:52.820 |
- Yes, and you can basically meditate yourself 02:01:58.660 |
where you know everything that you knew before, 02:02:01.020 |
but you're no longer identified with changing anything. 02:02:04.200 |
And this means that your self, in a way, dissolves. 02:02:09.220 |
You know that this person construct exists in other states, 02:02:27.620 |
and it's not your favorite person even, right? 02:02:31.180 |
and it's the one that you are doomed to control 02:02:34.280 |
and that is basically informing the actions of this organism 02:02:52.220 |
it's a somehow compelling notion that being attached, 02:02:59.940 |
But that in itself could be an illusion that you construct. 02:03:23.060 |
just a bunch of techniques that let you control attention. 02:03:30.580 |
hopefully not before you understand what you're doing. 02:03:41.860 |
so basically control or turn off the attention. 02:03:43.700 |
- Yeah, but the entire thing is that you learn 02:03:45.500 |
So everything else is downstream from controlling attention. 02:03:48.660 |
- And control the attention that's looking at the attention. 02:03:52.220 |
- Normally we only get attention in the parts of our mind 02:03:56.500 |
between model and the results that are happening. 02:04:04.260 |
If everything works out roughly the way you want, 02:04:09.900 |
then you will mostly have models about these domains. 02:04:14.580 |
your fundamental relationships to the world around you 02:04:17.300 |
don't work because the ideology of your country is insane 02:04:22.540 |
and don't understand why you understand physics 02:04:24.820 |
and you don't, why you want to understand physics 02:04:31.100 |
- So we kind of brought up neurons in the brain 02:04:37.980 |
And there's been some successes as you brought up with Go, 02:04:43.460 |
with AlphaGo, AlphaZero, with ideas of self-play, 02:04:46.420 |
which I think are incredibly interesting ideas 02:04:48.420 |
of systems playing each other in an automated way 02:05:04.140 |
all the competitors in the game are improving gradually. 02:05:09.660 |
and from learning from the process of the competition. 02:05:13.140 |
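The self-play idea Lex describes, copies of the same learner improving by playing each other, can be sketched with fictitious play on rock-paper-scissors; the game and update rule are my own toy choice, not anything discussed here. Each side best-responds to the other's empirical mix, and the empirical strategies drift toward the equilibrium:

```python
import numpy as np

# Payoff matrix for the row player in rock-paper-scissors.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def best_response(opponent_counts: np.ndarray) -> int:
    """Pick the action that scores best against the opponent's empirical mix."""
    mix = opponent_counts / opponent_counts.sum()
    return int(np.argmax(PAYOFF @ mix))

counts_a = np.ones(3)   # pseudo-counts of observed actions, one entry per action
counts_b = np.ones(3)

for _ in range(10_000):              # both competitors improve against each other
    a = best_response(counts_b)
    b = best_response(counts_a)
    counts_a[a] += 1
    counts_b[b] += 1

# Empirical play drifts toward the uniform equilibrium strategy (~1/3 each).
print(np.round(counts_a / counts_a.sum(), 3))
```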
Do you have hope for that reinforcement learning process 02:05:16.300 |
to achieve greater and greater level of intelligence? 02:05:28.940 |
- So definitely forms of unsupervised learning, 02:05:30.580 |
but there are many algorithms that can achieve that. 02:05:32.980 |
And I suspect that ultimately the algorithms that work, 02:05:37.020 |
there will be a class of them or many of them, 02:05:47.940 |
And the types of models that we form right now 02:05:56.100 |
- It means that ideally every potential model state 02:06:00.340 |
should correspond to a potential world state. 02:06:03.980 |
So basically if you vary states in your model, 02:06:10.500 |
So an indication is basically what we see in dreams. 02:06:13.500 |
The older we get, the more boring our dreams become 02:06:16.340 |
because we incorporate more and more constraints 02:06:21.100 |
So many of the things that we imagined to be possible 02:06:28.820 |
And as a result, fewer and fewer things remain possible. 02:06:32.140 |
It's not because our imagination scales back, 02:06:40.860 |
our neural networks operate are almost limitless, 02:06:43.900 |
which means it's very difficult to get a neural network 02:06:53.260 |
is we probably need to build dreaming systems. 02:06:57.340 |
is similar to a generative adversarial network, 02:07:03.260 |
and then it produces alternative perspectives 02:07:08.100 |
so you can recognize it under different circumstances. 02:07:15.060 |
and the maps that we know from different perspectives, 02:07:17.140 |
which also means from a bird's eye perspective. 02:07:21.820 |
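The generative-adversarial analogy Joscha draws maps onto the standard GAN training loop: a generator proposes ("dreams") candidate percepts and a discriminator judges them against real ones, each improving against the other. A minimal PyTorch-flavored sketch; the framework choice, layer sizes, and the stand-in random data are my own assumptions, purely illustrative:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64      # illustrative sizes

generator = nn.Sequential(         # "dreams up" candidate percepts from noise
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim))

discriminator = nn.Sequential(     # judges dreamed percepts against real ones
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: real percepts should score 1, dreamed ones 0.
    fake = generator(torch.randn(batch, latent_dim))
    d_loss = loss(discriminator(real_batch), ones) + loss(discriminator(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce dreams the discriminator accepts as real.
    g_loss = loss(discriminator(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(32, data_dim)))   # one step on stand-in "real" data
```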
I mean, not with our eyes closed when we're sleeping. 02:07:32.780 |
I mean, sort of considering all the different possibilities, 02:07:36.220 |
the way we interact with the environment seems like 02:08:15.640 |
So this obsession of constantly pondering possibilities 02:08:22.460 |
I think, I'm not talking about intellectual stuff. 02:08:27.100 |
I'm talking about just doing the kind of stuff 02:08:43.060 |
It's relatively easy to build a neural network 02:08:47.900 |
The fact that we haven't done it right so far, 02:08:51.060 |
because you can see that a biological organism does it 02:08:55.860 |
So basically you build a bunch of neural oscillators 02:08:58.020 |
that entrain themselves with the dynamics of your body 02:09:00.380 |
in such a way that the regulator becomes isomorphic 02:09:03.620 |
and it's modeled to the dynamics that it regulates. 02:09:09.500 |
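The "bunch of neural oscillators that entrain themselves" is close in spirit to a central pattern generator, and a Kuramoto-style model shows the entrainment part in a few lines. This is only a sketch of phase-locking with invented parameters, not a walking controller:

```python
import numpy as np

n, steps, dt = 4, 2000, 0.01
coupling = 4.0                                               # strong enough to lock
natural_freq = np.array([1.0, 1.1, 0.9, 1.05]) * 2 * np.pi   # slightly detuned, rad/s
phase = np.random.default_rng(0).uniform(0, 2 * np.pi, n)

for _ in range(steps):
    # Each oscillator is pulled toward the phases of the others (entrainment).
    coupling_term = np.sum(np.sin(phase[None, :] - phase[:, None]), axis=1)
    phase = (phase + dt * (natural_freq + coupling / n * coupling_term)) % (2 * np.pi)

# Order parameter near 1 means the oscillators have locked into a shared rhythm,
# the kind of self-stabilizing pattern a gait regulator could ride on.
print(round(float(np.abs(np.mean(np.exp(1j * phase)))), 3))
```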
that it captures attention when the system is off. 02:09:15.260 |
that's required to do walking as a controller, 02:09:24.780 |
but it discards quietly, or at least makes implicit, 02:09:31.140 |
something like common sense reasoning to walk. 02:09:40.540 |
there's a huge knowledge base that's underlying it somehow. 02:09:48.340 |
we have never been able to construct in neural networks 02:09:53.340 |
or in artificial intelligence systems, period. 02:09:55.980 |
Which is like, it's humbling, at least in my imagination, 02:10:04.280 |
And I think saying that neural networks can accomplish it 02:10:16.040 |
for constructing something like common sense reasoning. 02:10:25.840 |
to linger on the idea of what kind of mechanism 02:10:45.720 |
that's represented under the flag of common sense reasoning? 02:10:48.360 |
- How much common sense knowledge do we actually have? 02:10:52.560 |
for all your life and you form two new concepts 02:10:56.740 |
You end up with something like a million concepts 02:11:08.480 |
I personally think it might be much more than a million. 02:11:16.980 |
do your neurons have in your life, it's quite limited. 02:11:21.140 |
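The per-unit rate behind the "two new concepts" figure is cut off in the transcript; assuming it means roughly two per waking hour, which is an assumption on my part just to check the order of magnitude, the arithmetic does land near a million:

```latex
% Back-of-the-envelope, assuming ~2 new concepts per waking hour over a lifetime:
2\ \tfrac{\text{concepts}}{\text{h}} \times 16\ \tfrac{\text{h}}{\text{day}}
\times 365\ \tfrac{\text{days}}{\text{yr}} \times 80\ \text{yr}
\approx 9.3 \times 10^{5} \approx 10^{6}\ \text{concepts}
```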
- Yeah, but the powerful thing is the number of concepts 02:11:25.580 |
and they're probably deeply hierarchical in nature. 02:11:29.620 |
The relations as you've described between them 02:11:33.920 |
So it's like, even if it's like a million concepts, 02:11:39.340 |
and some kind of probabilistic relationships, 02:11:48.140 |
- Yeah, so in some sense, I think of the concepts 02:11:51.820 |
as the address space for our behavior programs. 02:11:54.740 |
And the behavior programs allow us to recognize objects 02:11:59.820 |
And a large part of that is the physical world 02:12:02.620 |
that we interact with, which is this res extensa thing, 02:12:05.180 |
which is basically navigation of information in space. 02:12:12.500 |
It's a physics engine that you can use to describe 02:12:16.180 |
and predict how things that look in a particular way, 02:12:19.540 |
that feel when you touch them in a particular way, 02:12:21.620 |
that allow proprioception, that allow auditory perception 02:12:25.700 |
So basically the geometry of all these things. 02:12:27.620 |
And this is probably 80% of what our brain is doing 02:12:31.900 |
is dealing with that, with this real-time simulation. 02:12:37.660 |
but it's not that hard to understand what it's doing. 02:12:40.180 |
And our game engines are already in some sense, 02:12:43.100 |
approximating the fidelity of what we can perceive. 02:12:52.060 |
we get something that is still relatively crude 02:12:57.900 |
It's just a couple of orders of magnitude away 02:13:03.100 |
in terms of the complexity that it can produce. 02:13:05.920 |
So in some sense, it's reasonable to say that our, 02:13:08.700 |
the computer that you can buy and put into your home 02:13:11.580 |
is able to give a perceptual reality that has a detail 02:13:19.180 |
And everything else are ideas about the world. 02:13:22.260 |
And I suspect that they are relatively sparse 02:13:35.180 |
But the priors are present in most social animals. 02:13:39.700 |
that many domestic social animals, like cats and dogs, 02:13:49.900 |
I hope it's not that many concepts fundamentally 02:13:52.940 |
to do, to exist in this world, social interaction. 02:13:59.900 |
to be so complex to each other because we are so stupid 02:14:16.940 |
- Yeah, but one of the things that worries me 02:14:20.140 |
is that the fact that the brain doesn't scale 02:14:23.940 |
means that that's actually a fundamental feature 02:14:40.100 |
which is different than our current understanding 02:14:53.340 |
all the major results really have to do with huge compute. 02:14:59.300 |
- It could also be that our brains are so small 02:15:01.020 |
not just because they take up so much glucose 02:15:17.820 |
but they really struggle to see the big picture. 02:15:19.700 |
So you can make them recreate drawings stroke by stroke, 02:15:27.140 |
So they cannot make a drawing of a scene that they see. 02:15:32.820 |
as far from what I could see in the experiments. 02:15:37.460 |
Maybe smarter elephants would meditate themselves 02:15:42.060 |
So basically the elephants that were not autistic, 02:15:46.540 |
- Yeah, so we have to remember that the brain 02:15:52.220 |
Do you think that AGI systems that we try to create 02:15:55.220 |
or greater intelligence systems would need to have a body? 02:15:59.500 |
- I think they should be able to make use of a body 02:16:03.180 |
But I don't think that they fundamentally need a body. 02:16:05.140 |
So I suspect if you can interact with the world 02:16:15.860 |
fewer observations in order to reduce the uncertainty 02:16:37.840 |
So if you can build a system that has enough time 02:16:42.500 |
and extract all the information that there is to be found, 02:16:45.600 |
I don't think there's an obvious limit to what it can do. 02:16:51.400 |
is a fundamental thing that the physical body 02:17:02.280 |
whether the physical world exists or not, whatever, 02:17:06.060 |
but interact with some interface to the physical world. 02:17:11.900 |
Do you think we can do the same kind of reasoning, 02:17:18.780 |
if we put on a VR headset and move over to that world? 02:17:23.780 |
Do you think there's any fundamental difference 02:17:30.020 |
and if we were sitting in the same hotel in a virtual world? 02:17:32.660 |
- The question is, does this non-physical world 02:17:35.380 |
or this other environment entice you to solve problems 02:17:41.660 |
If it doesn't, then you probably will not develop 02:17:53.900 |
and understand our own nature to this degree, right? 02:18:02.500 |
because the benefit of attempting this project are marginal 02:18:05.820 |
because you're probably not going to succeed in it, 02:18:09.540 |
requires complete dedication of your entire life, right? 02:18:19.420 |
So imagine a situation, maybe interesting option for me. 02:18:34.140 |
and when you eat, we'll make sure to connect your body up 02:18:37.420 |
in a way that when you eat in the virtual world, 02:18:43.060 |
in the virtual world, so align the incentives 02:18:45.900 |
between our common sort of real world and the virtual world. 02:18:49.600 |
But then the possibilities become much bigger. 02:18:59.580 |
I mean, the possibilities are endless, right? 02:19:08.180 |
what kind of intelligence would emerge there, 02:19:14.980 |
even in me, Lex, even at this stage in my life, 02:19:28.740 |
in this physical world, it's interesting to think 02:19:39.020 |
out of the realm of possibility that we're all, 02:19:41.680 |
that some part of our lives will, if not entirety of it, 02:19:46.540 |
will live in a virtual world, to a greater degree 02:19:56.900 |
intellectually or naturally in terms of thinking 02:20:04.540 |
- I think that currently it's a waste of time 02:20:09.380 |
before we have mechanisms that can automatically 02:20:20.740 |
The third order are tools, and the second order 02:20:23.700 |
is the things that are basically always present, 02:20:25.820 |
but you operate on them with first-order things, 02:20:28.980 |
which are mental operators, and the zero order 02:20:32.100 |
is in some sense the direct sense of what you're deciding. 02:20:36.940 |
So you observe yourself initiating an action. 02:20:43.680 |
Then you perform the operations that you perform 02:20:46.340 |
to make that happen, and then you see the movement 02:20:49.280 |
of your limbs, and you learn to associate those 02:20:51.860 |
and thereby model your own agency over this feedback. 02:21:02.820 |
and you observe how the thought is being changed. 02:21:07.500 |
an embodiment already, and I suspect it's sufficient 02:21:11.500 |
- Really well put, and so it's not that important, 02:21:14.340 |
at least at this time, to consider variations 02:21:17.520 |
- Yes, but the thing that you also mentioned just now 02:21:21.960 |
is physics that you could change in any way you want. 02:21:25.000 |
So you need an environment that puts up resistance 02:21:28.520 |
If there's nothing to control, you cannot make models. 02:21:31.600 |
There needs to be a particular way that resists you. 02:21:34.800 |
And by the way, your motivation is usually outside 02:21:38.180 |
Motivation is what gets you up in the morning, 02:21:40.800 |
even though it would be much less work to stay in bed. 02:21:44.040 |
So it's basically forcing you to resist the environment, 02:21:54.120 |
So in some sense, it is also putting up resistance 02:21:56.780 |
against the natural tendency of the mind to not do anything. 02:22:08.380 |
like the actual physical objects pushing against you, 02:22:11.300 |
It seems that the second order stuff in virtual reality 02:22:28.500 |
It would probably not be a very human intelligence, 02:22:34.140 |
- So to mess with this zeroth order, maybe first order, 02:22:39.140 |
what do you think about ideas of brain-computer interfaces? 02:22:41.740 |
So again, returning to our friend Elon Musk and Neuralink, 02:22:47.220 |
of course there's a lot of trying to cure diseases 02:22:51.820 |
but the long-term vision is to add an extra layer, 02:22:54.900 |
so basically expand the capacity of the brain, 02:23:04.500 |
or two, how does that change the fundamentals 02:23:09.020 |
but I don't see that the FDA would ever allow me 02:23:15.820 |
So at the moment, I can do horrible things to mice, 02:23:18.480 |
but I'm not able to do useful things to people, 02:23:33.580 |
are probably not going to happen in the present legal system. 02:23:38.660 |
out there philosophical and sort of engineering questions, 02:23:43.660 |
and for the first time ever, you jumped to the legal FDA. 02:23:48.220 |
- There would be enough people that would be crazy enough 02:23:51.700 |
to try a new type of brain-computer interface, right? 02:24:00.540 |
'cause I work a lot with autonomous vehicles, 02:24:03.700 |
a very difficult regulatory process of approving autonomous, 02:24:10.320 |
- No, they will totally happen as soon as we create jobs 02:24:12.760 |
for at least two lawyers and one regulator per car. 02:24:20.340 |
like lawyers are the fundamental substrate of reality. 02:24:32.380 |
These circuits are in some sense streams of software 02:24:35.500 |
and this largely works by exception handling. 02:24:39.720 |
and they get synchronized with the next level structure 02:24:55.060 |
- Yeah, so the exceptions are actually incentivized 02:25:08.020 |
like is there anything interesting, insightful 02:25:15.460 |
- I do think so, but I don't think that you need 02:25:24.780 |
and getting in some kind of empathetic resonance. 02:25:27.640 |
And I'm a nerd, so I'm not very good at this, 02:25:30.700 |
but I noticed that people are able to do this 02:25:34.020 |
And it basically means that we model an interface layer 02:25:49.420 |
And if the oscillation itself changes slowly enough, 02:25:59.720 |
it seems like you can do a lot more computation 02:26:09.980 |
So the number of thoughts that I can productively think 02:26:15.900 |
- But it's much-- - If they had the discipline 02:26:17.620 |
to write it down and the speed to write it down, 02:26:21.760 |
but if you think about the computers that we can build, 02:26:28.820 |
It's something that it can put out in a second. 02:26:31.700 |
It's possible, sort of the number of thoughts 02:26:37.100 |
it could be several orders of magnitude higher 02:26:49.380 |
- Because they have to control the same problems every day. 02:26:52.180 |
When I walk, there are going to be processes in my brain 02:26:55.220 |
that model my walking pattern and regulate them and so on, 02:26:58.060 |
but it's going to be pretty much the same every day. 02:27:01.540 |
- But I'm talking about intellectual reasoning, 02:27:06.260 |
So you sit down and start thinking about that. 02:27:08.420 |
One of the constraints is that you don't have access 02:27:10.880 |
to a lot of, like, you don't have access to a lot of facts, 02:27:25.700 |
in trying to understand what is the best form of government, 02:27:38.500 |
if the bottleneck is literally the information that, 02:27:43.220 |
you know, the bottleneck of breakthrough ideas 02:27:50.340 |
then the possibility of connecting your brain 02:28:14.780 |
- Yeah, that could be-- - There is an evolution 02:28:31.900 |
or that are the best for a certain environment. 02:28:48.940 |
If you also want to have a form of government 02:28:56.220 |
have to work very long hours for very little gain, 02:29:03.460 |
that you get paid in the afterlife, the overtime, right? 02:29:08.700 |
And so for much of human history in the West, 02:29:12.540 |
you had a combination of monarchy and theocracy 02:29:28.220 |
He was translating Aristotle in a particular way 02:29:34.980 |
And he says that basically people are animals, 02:29:38.580 |
and it's very much the same way as Aristotle envisions, 02:29:40.980 |
which is basically organisms with cybernetic control. 02:29:45.900 |
rational principles that humans can discover, 02:29:47.980 |
and everybody can discover them so they are universal. 02:29:58.420 |
you should be willing to self-regulate correctly. 02:30:02.700 |
You should be willing to do correct social regulation. 02:30:18.660 |
You should be choosing the right goals to work on. 02:30:23.340 |
So basically these three rational principles, 02:30:25.300 |
goal rationality he calls prudence or wisdom, 02:30:27.620 |
social regulation is justice, the correct social one, 02:30:34.860 |
And this willingness to act on your models is courage. 02:30:44.780 |
And these three divine virtues cannot be rationally deduced, 02:30:57.260 |
as God has to tell you that these are the things. 02:31:01.900 |
The Christian conspiracy forces you to believe 02:31:05.500 |
some guy with a long beard that they discovered this. 02:31:14.660 |
for the resulting civilization that you form. 02:31:18.980 |
So basically you serve this higher, larger thing, 02:31:26.540 |
Then there needs to be a commitment to shared purpose. 02:31:31.260 |
that you try to figure out what that should be 02:31:32.980 |
and how you can facilitate this, and this is love. 02:31:35.140 |
The commitment to shared purpose is the core of love. 02:31:38.380 |
You see the sacred thing that is more important 02:31:40.540 |
than your own organismic interests in the other. 02:31:44.500 |
and this is how you see the sacred in the other. 02:31:48.900 |
which means you need to be willing to act on that principle 02:31:55.780 |
when you start out building the civilization. 02:32:08.420 |
and then you see these three divine concepts, 02:32:13.780 |
- And now the problem is divine is a loaded concept 02:32:18.540 |
and we are still scarred from breaking free of it. 02:32:20.980 |
But the idea is basically we need to have a civilization 02:32:23.660 |
that acts as an intentional agent, like an insect state. 02:32:33.060 |
is basically the formation of religious states 02:32:39.060 |
in which the individual doesn't matter as much as the rule 02:32:50.100 |
because it's obviously on the way out, right? 02:32:51.820 |
So it is for the present type of society that we are in. 02:32:55.500 |
Religious institutions don't seem to be optimal 02:33:09.500 |
that is administered not by the oligarchs themselves 02:33:14.500 |
We have so much innovation that we have in every generation 02:33:19.860 |
And corporations die usually after 30 years or so 02:33:23.620 |
and something other takes a leading role in our societies. 02:33:29.380 |
and these institutions themselves are not elected 02:33:37.580 |
And this makes it possible that you can adapt to change 02:33:41.660 |
So you can, for instance, if a change in governments, 02:33:43.780 |
if people think that the current government is too corrupt 02:33:46.420 |
or is not up to date, you can just elect new people. 02:33:49.780 |
Or if a journalist finds out something inconvenient 02:33:52.140 |
about the institution and the institution has no plan B 02:33:58.460 |
This is what, when you run society by the deep state. 02:34:05.940 |
that you can change if something bad happens, right? 02:34:08.740 |
So you will have a continuity in the whole thing. 02:34:11.100 |
And this is the system that we came up in the West. 02:34:16.900 |
So it's mostly just second, third order consequences 02:34:20.700 |
that people are modeling in the design of these institutions. 02:34:24.780 |
that doesn't really take care of the downstream effects 02:34:27.140 |
of many of the decisions that are being made. 02:34:29.940 |
And I suspect that AI can help us with this in a way 02:34:35.220 |
The society of the US is a society of cheaters. 02:34:38.020 |
It's basically cheating is so indistinguishable 02:34:40.700 |
from innovation and we want to encourage innovation. 02:34:43.300 |
- Can you elaborate on what you mean by cheating? 02:34:45.220 |
- It's basically people do things that they know are wrong. 02:34:47.780 |
It's acceptable to do things that you know are wrong 02:34:51.540 |
You can, for instance, suggest some non-sustainable 02:34:57.540 |
- Right, but you're always pushing the boundaries. 02:35:00.820 |
And yes, this is seen as a good thing largely. 02:35:05.060 |
- And this is different from other societies. 02:35:07.220 |
So for instance, social mobility is an aspect of this. 02:35:09.500 |
Social mobility is the result of individual innovation 02:35:12.740 |
that would not be sustainable at scale for everybody else. 02:35:15.860 |
Normally, you should not go up, you should go deep. 02:35:18.140 |
We need bakers and indeed we are very good bakers, 02:35:29.740 |
But it also means that the US is not optimizing 02:35:33.980 |
- And so it's not obvious as the evolutionary process 02:35:38.060 |
is unrolling, it's not obvious that that long-term 02:35:43.300 |
So basically, if you cheat, you will have a certain layer 02:35:50.580 |
- And we have to unroll this evolutionary process 02:35:53.060 |
to figure out if these side effects are so damaging 02:35:55.820 |
that the system's horrible or if the benefits 02:36:02.260 |
How do we get to which system of government is best? 02:36:10.940 |
- I suspect that we can find a way back to AI 02:36:18.300 |
Right, in some sense, our brain is a society of neurons 02:36:34.300 |
We often see government as the manifestation of power 02:36:37.020 |
or local interest, but it's actually a platform 02:36:39.100 |
for negotiating the conditions of human survival. 02:36:42.260 |
And this platform emerges over the current needs 02:36:45.420 |
and possibilities and the trajectory that we have. 02:36:47.620 |
So given the present state, there are only so many options 02:36:57.060 |
to disrupt everything because it will endanger 02:36:59.340 |
our food supply for a while and the entire infrastructure 02:37:07.140 |
And there are not that many natural transitions 02:37:12.260 |
- So we try not to have revolutions if we can have it. 02:37:15.820 |
So speaking of revolutions and the connection 02:37:21.940 |
you've also said that, you've said that in some sense, 02:37:25.820 |
becoming an adult means you take charge of your emotions. 02:37:31.780 |
But in the context of the mind, what's the role of emotion? 02:37:43.500 |
So psychologists often distinguish between emotion 02:37:46.380 |
and feeling, and in everyday parlance, we don't. 02:37:54.020 |
And that's especially true for the lowest level, 02:37:57.540 |
So when you have an affect, it's the configuration 02:37:59.520 |
of certain modulation parameters like arousal, valence, 02:38:02.920 |
your attentional focus, whether it's wide or narrow, 02:38:08.820 |
And all these parameters together put you in a certain way 02:38:12.340 |
to, you relate to the environment and to yourself. 02:38:14.620 |
And this is in some sense, an emotional configuration. 02:38:17.420 |
In the more narrow sense, an emotion is an affective state. 02:38:22.400 |
And the relevance of that object is given by motivation. 02:38:46.420 |
And your organism has to, or your brain has to find 02:38:53.180 |
So you basically can create a story for your life 02:38:57.500 |
And so we organize them all into hierarchies. 02:39:26.740 |
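Joscha's description of an affect as a configuration of modulation parameters can be written down as a plain data structure. The fields mirror the parameters he names, arousal, valence, and the width of the attentional focus; the numeric ranges and example values are my own illustrative guesses, and his full parameter list is longer than this:

```python
from dataclasses import dataclass

@dataclass
class Affect:
    """A low-level affective configuration; ranges are illustrative assumptions."""
    arousal: float          # 0.0 (calm) .. 1.0 (highly activated)
    valence: float          # -1.0 (unpleasant) .. 1.0 (pleasant)
    attention_width: float  # 0.0 (narrow focus) .. 1.0 (wide focus)

# An emotion, in the narrower sense used here, would be such an affective state
# directed at an object whose relevance is supplied by motivation (a need).
anxiety = Affect(arousal=0.8, valence=-0.6, attention_width=0.2)
print(anxiety)
```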
Are you distinguishing between the experience of emotion 02:39:35.500 |
And in this sense, what you feel is an appraisal 02:39:47.380 |
but of the subconscious geometric parts of your mind 02:39:55.700 |
And this neural network is making itself known 02:40:10.140 |
So you might feel anxiety in your solar plexus, 02:40:16.700 |
Your body map is the space that is always instantiated 02:40:27.100 |
try to talk to your symbolic parts of your brain 02:40:31.300 |
And then you perceive them as pleasant and unpleasant, 02:40:42.900 |
So for instance, when you feel connected to other people, 02:40:52.980 |
And it's very intuitive to encode it like this. 02:40:55.920 |
That's why it's encoded like this for most people. 02:40:59.020 |
It's a code in which the non-symbolic parts of your mind 02:41:05.900 |
that could be sort of gestural or visual or so on. 02:41:18.420 |
to understand what emotional state they're in, 02:41:22.220 |
and also to subvert your model of their emotional state. 02:41:29.380 |
that they're going to make in this situation. 02:41:44.260 |
So when you try to read the emotion of another person, 02:41:58.820 |
so having done this podcast and the video component, 02:42:35.240 |
like you're basically saying a bunch of brilliant things, 02:42:37.820 |
but I'm part of the play that you're the key actor in 02:42:47.320 |
of what the big point is, which is fascinating. 02:42:56.460 |
because my preference would be to wear a mask 02:42:58.860 |
with sunglasses to where I could just listen. 02:43:02.060 |
- Yes, I understand this because it's intrusive 02:43:09.180 |
have a taboo against that, and especially Russia, 02:43:16.080 |
You're expected to be hyper animated in your face, 02:43:19.240 |
and you're also expected to show positive affect. 02:43:50.340 |
gave the appraisals that exist in US and Russia. 02:44:06.080 |
- And for Russians, everything below the top 10% 02:44:29.120 |
It's also how we construct meaning in the US. 02:44:41.500 |
we emphasize the fact that if you hold something 02:44:45.160 |
above the waterline, you also need to put something 02:44:47.560 |
below the waterline, because existence by itself 02:44:53.480 |
At best neutral, or it could be just suffering, 02:44:58.160 |
but these moments of beauty are inextricably linked 02:45:03.360 |
And to not acknowledge the reality of suffering 02:45:07.840 |
of the fact that basically every conscious being 02:45:12.660 |
- Yeah, you just summarized the ethos of the Eastern Europe. 02:45:21.940 |
And if your facial expressions don't acknowledge 02:45:27.220 |
and in existence itself, then you must be an idiot. 02:45:30.780 |
- It's an interesting thing when you raise children 02:45:33.820 |
in the US, and you, in some sense, preserve the identity 02:45:42.200 |
And your daughter asks you about Ariel the mermaid, 02:46:00.340 |
which is the romantic one, and there's a much darker one. 02:46:12.060 |
and she meets this prince, and they fall in love. 02:46:14.620 |
And the prince really, really wants to be with her, 02:46:22.980 |
"because obviously you cannot breathe underwater 02:46:24.820 |
"and have other things to do than managing your kingdom 02:46:32.740 |
he falls in love with some princess and marries her, 02:46:34.920 |
and she shows up and quietly goes into his chamber, 02:46:38.540 |
and nobody is able to stop her or willing to do so, 02:46:43.300 |
And she comes quietly and sad out of his chamber, 02:46:55.900 |
In the Andersen story, the mermaid is playing 02:47:15.620 |
Instead, he marries somebody who has a kingdom 02:47:29.500 |
Yeah, instead, Disney, the Little Mermaid story 02:47:38.880 |
that I read Oscar Wilde before I read the other things, 02:47:41.260 |
so I'm indoctrinated, inoculated with this romanticism, 02:47:48.620 |
That's what you do, because if you are confronted 02:47:57.020 |
and all other human incentives, that's wrong. 02:48:13.260 |
and if you are able to change what you care about 02:48:15.940 |
to those things that you can change, you will not suffer. 02:48:18.180 |
- But would you then be able to experience happiness? 02:48:22.300 |
- Yes, but happiness itself is not important. 02:48:27.180 |
When you are a child, you think cookies are very important, 02:48:29.340 |
and you want to have all the cookies in the world, 02:48:32.460 |
because then you have as many cookies as you want, right? 02:48:35.740 |
- And as an adult, you realize a cookie is a tool. 02:48:46.700 |
- Yes, but then the cookie, the scarcity of a cookie, 02:49:05.220 |
And if you can change your appraisal of the environment, 02:49:08.180 |
then you can create arbitrary states of happiness. 02:49:12.300 |
So they discover the room, this basement room in their brain 02:49:12.300 |
Because they thought before that their unhappiness 02:49:28.700 |
They can release the neurotransmitters at will 02:49:42.780 |
And this was the problem that you couldn't solve 02:49:53.900 |
What could the possible answer be of the meaning of life? 02:50:00.860 |
- I think that if you look at the meaning of life, 02:50:14.340 |
In order to make it work, it's a molecular machine. 02:50:16.740 |
It needs a self-replicator, a negentropy extractor, 02:50:16.740 |
you don't have a cell, and it is not living, right? 02:50:24.180 |
And life is basically the emergent complexity 02:50:27.700 |
Once you have this intelligent super molecule, the cell, 02:50:31.500 |
there is very little that you cannot make it do. 02:50:40.820 |
- So it's active function of these three components 02:50:45.820 |
or this super cell, a cell, is present in the cell, 02:50:56.980 |
So in a way, it's tempting to think of the cell 02:51:01.260 |
If you want to build intelligence on other planets, 02:51:04.260 |
the best way to do this is to infect them with cells. 02:51:07.460 |
And wait for long enough, and there's a reasonable chance 02:51:15.540 |
- Well, that idea is very akin to sort of the same dream 02:51:21.420 |
in cellular automata in their most simple mathematical form. 02:51:24.460 |
You just inject the system with some basic mechanisms 02:51:44.980 |
like our computers, our labs, or our engineering workshops. 02:51:50.420 |
to implement a particular kind of function that we dream up 02:52:01.860 |
And biological systems are designed from the inside out, 02:52:01.860 |
by taking some of the relatively unorganized matter 02:52:10.380 |
around it and turn it into its own structure, 02:52:15.660 |
And cells can cooperate if they can rely on other cells 02:52:18.620 |
having a similar organization that is already compatible. 02:52:21.540 |
But unless that's there, the cell needs to divide 02:52:29.340 |
that works on a somewhat chaotic environment. 02:52:39.740 |
that you couldn't harvest without the complexity. 02:52:47.420 |
is to allow control under conditions of complexity. 02:52:52.260 |
between the ordered systems into the realm of chaos. 02:52:56.740 |
You build bridge heads into chaos with complexity. 02:53:08.340 |
Meaning only exists if a mind projects it, right? 02:53:13.380 |
I think that what feels most meaningful to me 02:53:16.220 |
is to try to build and maintain a sustainable civilization. 02:53:44.100 |
- So if there was no spontaneous abiogenesis, 02:53:53.860 |
to be in the right constellation to each other. 02:53:59.780 |
I mean, there's like turtles all the way down. 02:54:17.820 |
like big storm systems that endure for thousands of years. 02:54:24.380 |
because some of the clouds are ferromagnetic or something. 02:54:28.860 |
how certain clouds react rather than other clouds 02:54:31.540 |
and thereby produce some self-stabilizing patterns 02:54:34.540 |
that eventually to regulation feedback loops, 02:54:40.900 |
that basically has emergent, self-sustaining, 02:54:57.220 |
that has spontaneously formed because it could. 02:55:02.880 |
And the best von Neumann probe for such a thing 02:55:07.780 |
and very enduring, creates cells and sends them out. 02:55:12.860 |
And I'm not suggesting that this is the case, 02:55:14.500 |
but it would be compatible with the panspermia hypothesis 02:55:14.500 |
and with my intuition that abiogenesis is very unlikely. 02:55:17.740 |
maybe more often than there are planetary surfaces. 02:55:33.180 |
a system that's large enough that allows randomness. 02:56:07.940 |
there is light and darkness that is being created. 02:56:11.540 |
And then you discover sky and ground, create them. 02:56:18.700 |
and you give everything their names and so on. 02:56:22.300 |
It's a sequence of steps that every mind has to go through 02:56:29.820 |
how initially they distinguish light and darkness. 02:56:35.860 |
and they discover the plants and the animals, 02:56:38.740 |
And it's a creative process that happens in every mind, 02:56:45.180 |
to make sense of the patterns on your retina. 02:56:47.940 |
Also, if there was some big nerd who set up a server 02:57:14.340 |
which is basically the automaton that runs the universe. 02:57:24.260 |
something must move the thing that is moving it. 02:57:30.300 |
is a supernatural being is complete nonsense, right? 02:57:45.580 |
except that it's able to produce change in information, right? 02:57:50.060 |
So there needs to be some kind of computational principle. 02:57:53.860 |
But to say this automaton is identical, again, 02:57:58.020 |
or with the thing that gives meaning to our life, 02:58:10.540 |
It's the thing in which we have a similar relationship 02:58:17.020 |
because we have evolved to organize in these structures. 02:58:19.820 |
So basically, the Christian God in its natural form, 02:58:26.300 |
is basically the platonic form of a civilization. 02:58:32.700 |
- Yes, it's this ideal that you try to approximate 02:58:43.860 |
And we're left with one of my favorite lines, 02:58:48.060 |
"Happiness is a cookie that the brain bakes itself." 02:58:53.060 |
It's been a huge honor and a pleasure to talk to you. 02:58:58.420 |
I'm sure our paths will cross many times again. 02:59:09.460 |
Thanks for listening to this conversation with Joscha Bach, 02:59:09.460 |
and thank you to our sponsors, ExpressVPN and Cash App. 02:59:18.660 |
by getting ExpressVPN at expressvpn.com/lexpod 02:59:22.860 |
and downloading Cash App and using code LEXPODCAST. 02:59:27.860 |
If you enjoy this thing, subscribe on YouTube, 02:59:35.500 |
or simply connect with me on Twitter @LexFriedman. 02:59:39.700 |
And yes, try to figure out how to spell it without the E. 02:59:43.740 |
And now let me leave you with some words of wisdom 02:59:47.380 |
If you take this as a computer game metaphor, 02:59:54.660 |
And this best level happens to be the last level, 02:59:58.860 |
as it happens against the backdrop of a dying world, 03:00:05.980 |
Thank you for listening and hope to see you next time.