Joscha Bach: Nature of Reality, Dreams, and Consciousness | Lex Fridman Podcast #212
Chapters
0:00 Introduction
0:33 Life is hard
2:56 Consciousness
9:42 What is life?
19:51 Free will
33:56 Simulation
36:06 Base layer of reality
51:42 Boston Dynamics
60:01 Engineering consciousness
70:30 Suffering
79:24 Postmodernism
83:43 Psychedelics
96:57 GPT-3
105:40 GPT-4
112:05 OpenAI Codex
114:20 Humans vs AI: Who is more dangerous?
131:04 Hitler
136:01 Autonomous weapon systems
143:29 Mark Zuckerberg
149:04 Love
163:18 Michael Malice and anarchism
180:15 Love
184:23 Advice for young people
189:00 Meaning of life
00:00:00.000 |
The following is a conversation with Joscha Bach, 00:00:04.960 |
Joscha is one of the most fascinating minds in the world, 00:00:14.520 |
To support this podcast, please check out our sponsors, 00:00:17.720 |
Coinbase, Codecademy, Linode, NetSuite, and ExpressVPN. 00:00:38.200 |
and sticking to the theme of a Russian program. 00:00:48.400 |
You wrote that, quote, "When life feels unbearable, 00:00:56.520 |
"I'm a piece of software running on the brain 00:01:03.400 |
Have you experienced low points in your life? 00:01:09.720 |
- Of course, we all experience low points in our life, 00:01:17.000 |
We might get desperate about our lack of self-regulation, 00:01:27.920 |
nobody gets through their life without low points, 00:01:30.720 |
and without moments where they're despairing. 00:01:33.720 |
And I thought that, let's capture this state, 00:01:42.520 |
you realize that when you stop taking things personally, 00:01:44.880 |
when you realize that this notion of a person is a fiction, 00:01:50.720 |
where the robots realize that their memories and desires 00:01:55.840 |
and they don't have to act on those memories and desires, 00:01:59.120 |
that our memories and expectations are what make us unhappy. 00:02:04.200 |
The day in which we are, for the most part, is okay, right? 00:02:08.320 |
When we are sitting here, right here, right now, 00:02:13.080 |
And the thing that affects us is the expectation 00:02:24.120 |
And once we basically zoom out from all this, 00:02:29.000 |
What's left is this state of being conscious, 00:02:56.400 |
- So you're like a leaf floating down the river. 00:02:59.120 |
You just have to accept that there's a river, 00:03:09.520 |
What part of that is actually under your control? 00:03:15.320 |
is largely a control model for our own attention. 00:03:39.360 |
And we might have the illusion that we are the elephant, 00:03:49.080 |
It just is the situation that we find ourselves in. 00:03:52.640 |
- How much prodding can we actually do of the elephant? 00:04:03.040 |
- Is the elephant consciousness in this metaphor? 00:04:14.360 |
that is actually providing the interface to everything 00:04:18.720 |
I think is the tool that directs the attention 00:04:21.880 |
of that system, which means it singles out features 00:04:35.920 |
- So everything outside of that consciousness 00:04:43.080 |
but it's also society that's outside of your... 00:04:48.320 |
So there is an environment in which the agent is stomping 00:04:51.320 |
and you are influencing a little part of that agent. 00:04:55.120 |
- So can you, is the agent a single human being? 00:05:06.160 |
is that it's a controller with a set point generator. 00:05:09.680 |
The notion of a controller comes from cybernetics 00:05:20.920 |
and the deviation of that value from a set point. 00:05:24.040 |
And it has a sensor that measures the system's deviation 00:05:32.680 |
So the controller tells the effector to do a certain thing. 00:05:38.560 |
between the set point and the current value of the system. 00:05:40.960 |
And there's environment which disturbs the regulated system, 00:05:55.880 |
And if you want to minimize the set point deviation 00:05:58.800 |
over a longer time span, you need to integrate it. 00:06:05.760 |
that your set point is to be comfortable in life, 00:06:08.320 |
maybe you need to make yourself uncomfortable first. 00:06:14.120 |
The task of the controller is to use its sensors 00:06:34.920 |
then the task of the controller is to make a model 00:06:39.160 |
the conditions under which it exists and of itself. 00:06:45.760 |
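(A minimal sketch of the control loop just described, for readers who want it concrete: a set point, a sensor reading the deviation, an effector acting on it, a disturbance from the environment, and an integral term accumulating deviation over time. The gains and numbers are invented; this is just a generic proportional-integral controller, not anything specified in the conversation.)

```python
# Toy cybernetic controller: a set point generator, a sensor that measures the
# deviation of the regulated value from the set point, an effector that acts on
# it, and an environment that keeps disturbing the system. The integral term
# accumulates the deviation over time, as described above. All numbers made up.

def run_controller(set_point=20.0, steps=50, kp=0.5, ki=0.1):
    value = 15.0       # current state of the regulated system
    integral = 0.0     # accumulated set-point deviation over time
    for t in range(steps):
        deviation = set_point - value              # sensor: how far off are we?
        integral += deviation                      # integrate deviation over time
        action = kp * deviation + ki * integral    # controller tells the effector what to do
        disturbance = 0.3 * ((-1) ** t)            # the environment pushes back
        value += action + disturbance              # effector acts, world disturbs
    return value

print(run_controller())   # ends up close to the set point despite the disturbances
```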
And an agent is not necessarily a thing in the universe. 00:06:54.520 |
And when we notice the environment around us, 00:07:14.640 |
We are the agent that is using our own control model 00:07:23.440 |
And this is how we discover the idea that we have a body, 00:07:31.120 |
- Still don't understand what's the best way to think 00:07:34.960 |
of which object has agency with respect to human beings. 00:07:43.400 |
Is it the contents of the brain that has agency? 00:07:46.000 |
Like what's the actuators that you're referring to? 00:07:49.000 |
What is the controller and where does it reside? 00:07:54.080 |
'Cause I keep trying to ground it to space-time, 00:07:57.720 |
the three-dimensional space and the one dimension of time. 00:08:06.000 |
It depends on the way in which you're looking at the thing 00:08:16.640 |
Then you could say that Germany is the agent. 00:08:29.680 |
that basically affect the behavior of that nation state. 00:08:37.440 |
with, I think you were playfully mocking Jeff Hawkins 00:08:49.000 |
It's agents made up of agents made up of agents? 00:08:56.520 |
and the people are themselves agents in some kind of context 00:09:01.040 |
and then the people are made up of cells, each individual. 00:09:12.880 |
Most of the complexity that we are looking at, 00:09:15.600 |
everything in life is about self-organization. 00:09:18.480 |
So I think up from the level of life, you have agents. 00:09:33.720 |
but they're not that interesting agents that make models. 00:09:36.640 |
And because to make an interesting model of the world, 00:09:39.560 |
you typically need a system that is Turing complete. 00:09:52.280 |
So where do you think in this emerging complexity, 00:09:55.760 |
at which point does the thing start being living 00:09:59.100 |
- Personally, I think that the simplest answer 00:10:12.160 |
It's modular stuff that consists out of basically 00:10:17.160 |
this DNA tape with a read/write head on top of it 00:10:20.480 |
that is able to perform arbitrary computations 00:10:27.680 |
that insulates the cell from its environment. 00:10:30.840 |
And there are chemical reactions inside of the cell 00:10:41.780 |
And if the cell goes into an equilibrium state, it dies. 00:10:46.560 |
And it requires something like a negentropy extractor 00:10:52.200 |
So it's able to harvest negentropy from its environment 00:10:58.120 |
- Yeah, so there's information and there's a wall 00:11:11.000 |
You could say that there are probably other things 00:11:13.380 |
in the universe that are cell-like and life-like, 00:11:21.520 |
to find an agreement of how to use the terms. 00:11:24.120 |
I like cells because it's completely coextensional 00:11:34.480 |
and this is very different from the non-animate stuff, 00:11:40.380 |
And it's mostly whether the cells are working or not. 00:11:43.080 |
And also this boundary of life where we say that, 00:11:46.120 |
for instance, a virus is basically an information packet 00:11:49.080 |
that is subverting the cell and not life by itself. 00:12:06.660 |
but this is eventually just how you want to use the word. 00:12:12.820 |
but is it somehow fundamental to the universe? 00:12:19.500 |
to eventually be drawn between life and non-life, 00:12:25.680 |
but there's nothing magical that is happening. 00:12:28.320 |
Living systems are a certain type of machine. 00:12:36.200 |
but the question is at which point is a system able 00:12:47.020 |
And of course, we can also build non-living things 00:12:49.200 |
that can do this, but we don't know anything in nature 00:12:52.420 |
that is not a cell and is not created by cellular life 00:13:03.080 |
I don't think we have the tools to see otherwise. 00:13:06.160 |
I always worry that we look at the world too narrowly. 00:13:11.160 |
Like there could be life of a very different kind 00:13:15.100 |
right under our noses that we're just not seeing 00:13:35.240 |
And I suspect that many of us ask ourselves since childhood, 00:13:40.760 |
What kind of systems and interconnections exist 00:13:51.320 |
and physics doesn't have much room at the moment 00:13:55.320 |
for opening up something that would not violate 00:13:59.760 |
the conservation of information as we know it. 00:14:02.060 |
- Yeah, but I wonder about time scale and scale, 00:14:07.040 |
spatial scale, whether we just need to open up our idea 00:14:15.480 |
It could be operating at a much slower time scale, 00:14:20.240 |
And it's almost sad to think that there's all this life 00:14:25.520 |
because we're just not thinking in terms of the right scale, 00:14:59.280 |
So complexity seems to be a necessary property of life. 00:15:11.940 |
- It seems to me that life is the main source 00:15:33.800 |
And this means that you can harvest negentropy 00:15:40.180 |
In some sense, the purpose of life is to create complexity. 00:15:46.840 |
I mean, there seems to be some kind of universal drive 00:16:00.040 |
I don't know if it's a property of the universe 00:16:02.360 |
or it's just a consequence of the way the universe works. 00:16:08.720 |
of emergent complexity that builds on top of each other 00:16:11.440 |
and starts having like greater and greater complexity 00:16:18.000 |
Little organisms building up a little society 00:16:20.760 |
that then operates almost as an individual organism itself. 00:16:24.080 |
And all of a sudden you have Germany and Merkel. 00:16:28.880 |
Everything that goes up has to come down at some point. 00:16:32.320 |
Right, so if you see this big exponential curve somewhere, 00:16:41.480 |
and the S-curve is the beginning of some kind of bump 00:16:45.560 |
And there is just this thing that when you are 00:16:58.920 |
And during that happening, you see an increase in complexity 00:17:02.960 |
because life forms are competing with each other 00:17:04.840 |
to get into more and more, and finer and finer corners 00:17:11.160 |
- But I feel like that's a gradual, beautiful process 00:17:14.000 |
that almost follows a process akin to evolution. 00:17:18.000 |
And the way it comes down is not the same way it came up. 00:17:23.000 |
The way it comes down is usually harshly and quickly. 00:17:26.560 |
So usually there's some kind of catastrophic event. 00:17:42.280 |
that could be fed has decreased dramatically. 00:17:44.840 |
And you could see that the quality of the art decreased 00:17:53.360 |
when they look at the history of the United States 00:18:11.280 |
Or are we at the downslope of the United States empire? 00:18:15.800 |
- It's very hard to say from a single human perspective, 00:18:18.520 |
but it seems to me that we are probably at the peak. 00:18:29.640 |
So my nature of optimism is I think we're on the rise. 00:18:35.920 |
But I think this is just all a matter of perspective. 00:18:47.440 |
in order to make that up thing actually work. 00:18:50.960 |
And so I tend to be on the side of the optimists. 00:18:53.600 |
- I think that we are basically a species of grasshoppers 00:19:00.720 |
you see an amazing rise of population numbers 00:19:08.760 |
But it's ultimately the question is, is it sustainable? 00:19:12.840 |
- See, I think we're a bunch of lions and tigers 00:19:21.400 |
And so I'm not exactly sure we're so destructive, 00:19:29.840 |
And if you look at the monkeys, they are very busy. 00:19:33.560 |
- The ones that have a lot of sex, those monkeys? 00:19:38.920 |
a discontent species that always needs to meddle. 00:19:58.720 |
And there's some prodding that the monkey gets to do. 00:20:12.920 |
Is this with Sam Harris or something like that? 00:20:20.480 |
you made a bunch of big debate points about free will. 00:20:27.760 |
where in terms of the monkey and the elephant, 00:20:31.680 |
do you think we land in terms of the illusion of free will? 00:20:37.240 |
- We have to think about what the free will is 00:20:44.400 |
We are not the thing that is making the decisions. 00:20:46.800 |
We are a model of that decision-making process. 00:20:56.120 |
And that difference is the first person perspective. 00:21:09.600 |
is that we often don't know what the best thing is. 00:21:15.560 |
We make informed bets using a betting algorithm 00:21:23.920 |
We don't know the mechanism by which we estimate 00:21:34.840 |
and the future, and then some kind of possibility, 00:21:41.640 |
And that's informed bet that the system is making. 00:21:46.440 |
the representation of that is what we call free will. 00:21:56.520 |
And yet if it was indeterministic, it would be random. 00:21:59.240 |
And it cannot be random because if it was random, 00:22:03.360 |
if just dice were being thrown in the universe 00:22:05.280 |
randomly forces you to do things, it would be meaningless. 00:22:18.520 |
you wouldn't experience it as a free will decision. 00:22:25.560 |
And you see this continuum between the free will 00:22:33.200 |
So for instance, when you are observing your own children, 00:22:40.040 |
where you have an agent with a set point generator. 00:22:47.360 |
And it might be confused and sometimes impulsive or whatever, 00:22:55.400 |
in the mind of the child, you see that it's automatic. 00:23:02.320 |
that will lead the child to making exactly the decision 00:23:19.680 |
that this individual can have at that moment. 00:23:24.680 |
because it's no longer decision-making under uncertainty. 00:23:35.040 |
So is this akin to systems like cellular automata 00:23:47.880 |
it starts to look like there's agents making decisions 00:24:03.080 |
that make the system evolve in deterministic ways, 00:24:08.080 |
it looks like there's organisms making decisions. 00:24:11.560 |
Is that where the illusion of free will emerges, 00:24:26.480 |
and you try to find some higher level regularity. 00:24:31.600 |
that you project into the world to make sense of it. 00:24:37.040 |
You have all these cells that interact with each other, 00:24:40.240 |
and the cells in our body are set up in such a way 00:24:42.720 |
that they benefit if their behavior is coherent, 00:24:49.720 |
And that means that they will evolve regulation mechanisms 00:24:52.840 |
that act as if they were serving a common goal. 00:24:55.840 |
And now you can make sense of all these cells 00:25:00.480 |
- Right, so for you then, free will is an illusion. 00:25:09.920 |
and it's the best model that it can come up with 00:25:11.960 |
under the circumstances, and it can get replaced 00:25:14.480 |
by a different model, which is automatic behavior, 00:25:30.280 |
and is there such a thing as you having control? 00:25:34.000 |
So like, are you manifesting your evolution as an entity? 00:25:39.000 |
- In some sense, the you is the model of the system 00:25:51.160 |
- And the contents of that model are being used 00:26:00.480 |
and the system creates that story like a loom, 00:26:16.200 |
or rather, we're not the writers of the story. 00:26:26.720 |
- I think that's mostly a confusion about concepts. 00:26:29.280 |
The conceptual illusion in our culture comes from the idea 00:26:40.040 |
- And then you have this dualist interpretation 00:26:48.960 |
and res cogitans, which is the world of ideas. 00:26:51.640 |
And in fact, both of them are mental representations. 00:26:54.560 |
One is the representations of the world as a game engine 00:27:01.080 |
And the other one's-- - That's the physical world? 00:27:02.240 |
- Yes, that's what we perceive as the physical world. 00:27:11.320 |
The world that you and me perceive is a game engine. 00:27:14.920 |
- And there are no colors and sounds in the physical world. 00:27:17.160 |
They only exist in the game engine generated by your brain. 00:27:20.080 |
And then you have ideas that cannot be mapped 00:27:29.520 |
and the objects that don't have a physical extension 00:27:52.840 |
but you're still seeing the rendering of that. 00:28:00.720 |
whether to shoot to turn left or to turn right 00:28:07.120 |
and Elder Scrolls and walking around in beautiful nature 00:28:17.200 |
in terms of perception to the bits, to the zeros and ones, 00:28:24.900 |
and your decisions actually feel like they're being applied 00:28:33.820 |
even though you don't have direct access to reality. 00:28:36.560 |
So there is basically a special character in the video game 00:28:39.560 |
that is being created by the video game engine. 00:28:42.640 |
- And this character is serving the aesthetics 00:28:47.080 |
- Yes, but I feel like I have control inside the video game. 00:28:57.760 |
it doesn't really matter that there's zeros and ones. 00:29:01.720 |
You don't care about the nature of the CPU that it runs on. 00:29:04.520 |
What you care about are the properties of the game 00:29:10.920 |
- And a similar thing happens when we interact with physics. 00:29:13.360 |
The world that you and me are in is not the physical world. 00:29:16.040 |
The world that you and me are in is a dream world. 00:29:25.080 |
but we know that the dynamics of the dream world 00:29:31.920 |
- But the causal structure of the dream world is different. 00:29:39.460 |
There's only water molecules that have forces 00:29:42.440 |
between the molecules that are the result of electrons 00:29:47.360 |
in the molecules interacting with each other. 00:29:52.120 |
We're just seeing a very crude approximation. 00:29:59.320 |
Like to the point of being mapped directly one-to-one 00:30:07.680 |
This is like where you have like Donald Trump. 00:30:22.680 |
and our actions have impact in the real world, 00:30:28.740 |
- Yes, but it's basically like accepting the fact 00:30:34.560 |
is generated by something outside of this world 00:30:49.000 |
Free will is the monkey being able to steer the elephant. 00:30:58.080 |
Basically in the same way as you are modeling 00:31:02.340 |
that engulf your feet when you are walking on the beach 00:31:15.300 |
there is a certain abstraction that happens here. 00:31:22.100 |
in such a way that your brain can deal with it, 00:31:24.260 |
temporarily and spatially in terms of resources 00:31:31.200 |
whether your feet are going to get wet or not. 00:31:33.380 |
- But it's a really good interface and approximation. 00:31:44.760 |
So to me, waves is a really nice approximation 00:31:49.380 |
of what's all the complexity that's happening underneath. 00:31:53.160 |
that is constantly tuned to minimize surprises. 00:31:55.580 |
So it basically tries to predict as well as it can 00:32:06.740 |
Dream world is the result of the machine learning process 00:32:15.900 |
is not a different type of model or it's a different type, 00:32:19.460 |
but not different as in its model-like nature 00:32:25.580 |
Some things are oceans, some things are agents. 00:32:28.300 |
And one of these agents is using your own control model, 00:32:32.780 |
the things that you perceive yourself as doing. 00:32:38.220 |
- What about the fact that like when you're standing 00:32:56.580 |
and then maybe you have like friends or a loved one with you 00:33:02.740 |
- Yes, it's all happening inside of the dream. 00:33:06.860 |
But see, the word dream makes it seem like it's not real. 00:33:16.540 |
but the physical universe is incomprehensible 00:33:26.660 |
this is the best model of reality that I have. 00:33:30.840 |
is the thing that's happening at the very base of reality, 00:33:47.860 |
to say that there are models that are being experienced. 00:34:08.660 |
And the idea of physicalism is that we are in that layer, 00:34:13.480 |
Every alternative to physicalism is a simulation theory, 00:34:19.500 |
and the real world needs to be a parent universe of that, 00:34:24.420 |
And when you look at the ocean in your own mind, 00:34:32.900 |
- Yes, but a simulation generated by our own brains. 00:34:36.780 |
- And this simulation is different from the physical reality 00:34:39.700 |
because the causal structure that is being produced, 00:34:42.940 |
is different from the causal structure of physics. 00:34:52.260 |
because your behavior will be inconsistent, right? 00:34:58.500 |
with an accurately predictive model of reality. 00:35:06.220 |
- So what do you think about Donald Hoffman's argument 00:35:12.780 |
the dream world to what he calls like the interface 00:35:23.100 |
which is like it could be an evolutionary advantage 00:35:26.500 |
to have the dream world drift away from physical reality. 00:35:30.980 |
- I think that only works if you have tenure. 00:35:32.820 |
As long as you're still interacting with the ground truth, 00:35:40.660 |
humans have achieved a kind of tenure in the animal kingdom. 00:35:45.140 |
- Yeah, and at some point we became too big to fail, 00:36:02.500 |
but eventually reality is going to come bite you in the ass 00:36:09.140 |
of what is that base layer of physical reality? 00:36:12.620 |
You have these attempts at the theories of everything, 00:36:21.140 |
or what Stephen Wolfram talks about with hypergraphs. 00:36:25.420 |
These are these tiny, tiny, tiny, tiny objects. 00:36:28.540 |
And then there is more like quantum mechanics 00:36:31.660 |
that's talking about objects that are much larger, 00:36:36.780 |
Do you have a sense of where the tiniest thing is 00:36:45.620 |
- I don't think that you can talk about where it is 00:36:48.580 |
because space is emergent over the activity of these things. 00:36:58.820 |
And so you could, in some sense, abstract it into locations 00:37:06.900 |
And this is how we construct our notion of space. 00:37:10.380 |
And physicists usually have a notion of space 00:37:20.980 |
who are very skeptical of the geometric notions. 00:37:34.220 |
which is in some sense what Gödel and Turing discovered 00:37:44.020 |
but if you have a language that talks about infinity, 00:37:46.820 |
at some point the language is going to contradict itself, 00:37:51.660 |
In order to deal with infinities in mathematics, 00:37:54.020 |
you have to postulate their existence initially. 00:38:06.020 |
you only look at the dynamics of too many parts to count. 00:38:09.060 |
And usually these numbers are not that large. 00:38:15.140 |
The infinities that we are dealing with in our universe 00:38:18.540 |
are mathematically speaking, relatively small integers. 00:38:39.260 |
And these convergent dynamics, these operators, 00:38:41.380 |
this is what we deal with when we are doing the geometry. 00:38:45.060 |
Geometry is stuff where we can pretend that it's continuous, 00:38:48.420 |
because if we subdivide the space sufficiently fine-grained, 00:38:56.140 |
And this approach dynamic, that is what we mean by it. 00:39:01.740 |
So to say that you would know the last digit of pi, 00:39:19.900 |
- No, the issue is that everything that we think about 00:39:22.940 |
needs to be expressed in some kind of mental language, 00:39:40.540 |
which means that such a language is no longer valid. 00:39:43.620 |
And I suspect this is what made Pythagoras so unhappy 00:39:46.780 |
when somebody came up with the notion of irrational numbers 00:39:50.420 |
There's this myth that he had this person killed 00:40:02.380 |
That has confused mathematicians very seriously 00:40:06.060 |
because these numbers are not values, they are functions. 00:40:13.260 |
but you cannot pretend that pi has actually a value. 00:40:17.060 |
Pi is a function that would approach this value 00:40:28.620 |
between discrete and continuous for you to get to the bottom? 00:40:37.500 |
of the theory of everything, there's a few on the table. 00:40:41.140 |
So there's string theory, there's particular, 00:41:01.260 |
Eric Weinstein and a bunch of people throughout history. 00:41:06.660 |
who I think is one of the only people doing a discrete approach. 00:41:17.700 |
And digital physics is something that is, I think, 00:41:24.460 |
But the main reason why this is interesting is 00:41:29.460 |
because it's important sometimes to settle disagreements. 00:41:35.580 |
I don't think that you need infinities at all 00:41:46.140 |
You can build your computer algebra systems just as well 00:41:49.220 |
without believing in infinity in the first place. 00:41:52.820 |
- Yeah, so basically a limit means that something 00:42:02.460 |
and at some point the difference becomes negligible 00:42:09.740 |
if you have an n-gon which has enough corners, 00:42:12.860 |
then it's going to behave like a circle at some point. 00:42:15.220 |
And it's only going to be in some kind of esoteric thing 00:42:21.100 |
that you would be talking about this perfect circle. 00:42:23.860 |
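(The n-gon point can be checked numerically: the perimeter of a regular n-gon inscribed in a unit circle is n · 2·sin(π/n), and the gap to 2π shrinks toward negligible as n grows. A small sketch:)

```python
# Perimeter of a regular n-gon inscribed in a unit circle: n * 2*sin(pi/n).
# As n grows, the difference from the circle's circumference 2*pi becomes
# negligible, which is all a limit means operationally.
import math

for n in (6, 60, 600, 6000):
    perimeter = n * 2 * math.sin(math.pi / n)
    print(n, perimeter, 2 * math.pi - perimeter)
```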
And now it turns out that it also wouldn't work 00:42:25.940 |
in mathematics because you cannot construct mathematics 00:42:36.260 |
It's just a thing that some people thought we could. 00:42:40.820 |
So for instance, Roger Penrose uses this as an argument 00:42:46.180 |
that mathematicians can do dealing with infinities. 00:42:55.220 |
- Yeah, he talks about that there's the human mind 00:43:15.620 |
in the mathematical mind and in pure mathematics 00:43:24.100 |
that can be constructed in the physical universe. 00:43:31.700 |
cannot explain operations that happen in our mind. 00:43:36.900 |
So let's leave his discussion of consciousness aside 00:43:42.820 |
what he's basically referring to as intelligence? 00:43:46.100 |
So is the human mind fundamentally more capable 00:43:50.820 |
as a thinking machine than a universal Turing machine? 00:43:58.740 |
- So our mind is actually less than a Turing machine. 00:44:02.100 |
because it's defined as having an infinite tape. 00:44:08.100 |
- Our minds can only perform finitely many operations. 00:44:14.660 |
- And that's because he thinks that our minds 00:44:16.660 |
can do operations that have infinite resolution 00:44:23.260 |
Our minds are just able to discover these limit operators 00:44:37.460 |
So it's more than something that a Turing machine 00:44:42.100 |
So again, saying that there's something special 00:44:44.540 |
about our mind that cannot be replicated in the machine. 00:45:01.460 |
there's a human experience that includes intelligence, 00:45:09.420 |
that includes the hard problem of consciousness. 00:45:12.980 |
And the question is, can that be fully simulated 00:45:16.860 |
in the computer, in the mathematical model of the computer 00:45:36.500 |
What is the specific thing that cannot be modeled? 00:45:45.920 |
the section that he writes in the introduction 00:45:53.240 |
is the way in which human minds deal with infinities. 00:45:56.640 |
And that itself can, I think, easily be deconstructed. 00:46:11.080 |
And I concur, our experience is not mechanical. 00:46:28.640 |
as far as you understand them as physical systems. 00:46:31.620 |
What can be conscious is the story of the system 00:46:36.220 |
in the world where you write all these things 00:46:48.220 |
And it's not a story that is written in a natural language. 00:46:52.500 |
in this multimedia language of the game engine. 00:46:55.380 |
And in there, you write in what kind of experience you have 00:46:59.340 |
and what this means for the behavior of the system, 00:47:01.460 |
for your behavior tendencies, for your focus, 00:47:03.700 |
for your attention, for your experience of valence, 00:47:06.420 |
And this is being used to inform the behavior of the system 00:47:10.740 |
And then the story updates with the reactions of the system 00:47:19.340 |
You don't live inside of the physical reality. 00:47:26.880 |
like you see, okay, it's in the perceptual language, 00:47:34.900 |
That's what consciousness is within that model, 00:47:42.660 |
When you play a video game, you can turn left 00:47:48.540 |
So in that dream world, how much control do you, 00:48:01.200 |
everybody's NPCs, and then there's the main character, 00:48:08.720 |
Is there a main character that you're controlling? 00:48:10.920 |
I'm getting to the point of the free will point. 00:48:14.560 |
- Imagine that you are building a robot that plays soccer. 00:48:30.760 |
and the world is disturbing him in trying to do this. 00:48:33.280 |
So he has to control many variables to make that happen 00:48:35.640 |
and to project itself and the ball into the future 00:48:51.360 |
And you could say that this robot does have agency 00:48:58.380 |
And the model is going to be a control model. 00:49:10.820 |
They don't have a unified model of the universe. 00:49:13.100 |
But there's not a reason why we shouldn't be getting there 00:49:25.940 |
So the robot will experience itself playing soccer 00:49:32.000 |
to construct a model of the locations of its legs 00:49:39.360 |
And it's not going to be at the level of the molecules. 00:49:42.200 |
It will be an abstraction that is exactly at the level 00:49:48.880 |
Right, it's going to be a high-level abstraction, 00:49:56.560 |
there is a model of the agency of that system. 00:50:03.020 |
that the contents of the model are going to be driving 00:50:06.040 |
the behavior of the robot in the immediate future. 00:50:08.860 |
- But there's the hard problem of consciousness, 00:50:14.320 |
there's a subjective experience of free will as well, 00:50:26.220 |
as it gets more and more and more sophisticated, 00:50:29.000 |
the agency comes from the programmer of the robot still, 00:50:35.780 |
- You could probably do an end-to-end learning system. 00:50:40.280 |
so you nudge the architecture in the right direction 00:50:44.320 |
But ultimately discovering the suitable hyper parameters 00:50:47.960 |
of the architecture is also only a search process, right? 00:51:10.960 |
You have to remove the human completely from the picture. 00:51:20.340 |
'Cause you have to go, you can't just shortcut evolution, 00:51:29.580 |
and that makes it seem like the robot is cheating, 00:51:35.960 |
- And you are looking at the current Boston Dynamics robots, 00:51:38.280 |
it doesn't look as if there is somebody pulling the strings, 00:51:44.840 |
So obviously with the case of Boston Dynamics, 00:51:49.760 |
it's always either hard-coded or remote-controlled. 00:51:59.040 |
but what I've been told about the previous ones 00:52:02.020 |
was that it's basically all cybernetic control, 00:52:05.260 |
which means you still have feedback mechanisms and so on, 00:52:13.200 |
It's for the most part just identifying a control hierarchy 00:52:29.400 |
that's just what I've been told about how they work. 00:52:31.400 |
- We have to separate several levels of discussions here. 00:52:34.980 |
So the only thing they do is pretty sophisticated control 00:52:40.920 |
in order to maintain balance or to right itself. 00:52:45.920 |
It's a control problem in terms of using the actuators 00:52:49.360 |
to when it's pushed or when it steps on a thing 00:52:52.420 |
that's uneven, how to always maintain balance. 00:52:55.400 |
And there's a tricky set of heuristics around that, 00:53:18.160 |
Dancing is even worse because dancing is hard coded in. 00:53:22.440 |
It's choreographed by humans, it's choreography software. 00:53:27.360 |
So there is no, of all that high level movement, 00:53:41.060 |
And yet we humans immediately project agency onto them, 00:53:48.900 |
- So the gap here is it doesn't necessarily have agency. 00:53:55.300 |
And the cybernetic control means you have a hierarchy 00:53:59.740 |
in certain boundaries so the robot doesn't fall over 00:54:06.680 |
with motion capture because the robot would fall over 00:54:10.640 |
the weight distribution and so on is different 00:54:12.800 |
from the weight distribution in the human body. 00:54:15.360 |
So if you were using the directly motion captured movements 00:54:19.560 |
of a human body to project it into this robot, 00:54:24.120 |
it will look a little bit off, but who cares. 00:54:35.860 |
to approximate a solution that makes it possible 00:54:44.780 |
and there's probably going to be some regression necessary 00:54:47.580 |
to get the control architecture to make these movements. 00:54:56.180 |
of how you should move and where you should move 00:54:59.900 |
- Yes, so I expect that the control level of these robots 00:55:07.860 |
But it's a relatively smart motor architecture. 00:55:10.340 |
It's just that there is no high level deliberation 00:55:14.420 |
- But see, it doesn't feel like free will or consciousness. 00:55:17.860 |
- No, no, that was not where I was trying to get to. 00:55:20.580 |
I think that in our own body, we have that too. 00:55:43.740 |
And this work already existed in the '80s and '90s. 00:55:46.980 |
People were starting to search for control architectures 00:55:51.140 |
and just use reinforcement learning architectures 00:55:57.740 |
the cybernetic control architecture already inside of you. 00:56:11.800 |
And now you add more and more control layers to this. 00:56:22.460 |
the pursuit of very different conflicting goals. 00:56:32.220 |
of the different set point violations that you have, 00:56:41.140 |
and eventually you need to come up with a strategy 00:56:46.020 |
And you don't need just to do this alone by yourself, 00:57:01.000 |
or even into ecosystemic intelligence on the planet, 00:57:10.020 |
And we have a number of priors built into us. 00:57:19.860 |
we can reverse engineer the goals that we're acting on, 00:57:34.580 |
- Yeah, I just don't know how big of a leap it is 00:57:38.500 |
to start creating a system that's able to tell stories 00:57:48.260 |
or any robot that's operating in the physical space. 00:57:56.220 |
if it requires to solve the hard problem of consciousness, 00:58:01.620 |
- I suspect that consciousness itself is relatively simple. 00:58:07.300 |
and the interface between perception and reasoning. 00:58:09.900 |
That's for instance, the idea of the consciousness prior 00:58:14.700 |
that would be built into such a system by Yoshua Bengio. 00:58:18.740 |
And what he describes, and I think that's accurate, 00:58:27.260 |
can be described through something like an energy function. 00:58:29.820 |
The energy function is modeling the contradictions 00:58:32.700 |
that exist within the model at any given point. 00:58:34.840 |
And you try to minimize these contradictions, 00:58:38.340 |
And to do this, you need to sometimes test things. 00:58:41.380 |
You need to conditionally disambiguate figure and ground. 00:58:49.540 |
but you will need to manually depress a few points 00:58:52.340 |
in your model to let it snap into a state that makes sense. 00:58:55.620 |
And this function that tries to get the biggest dip 00:58:59.660 |
according to Yoshua Bengio, is related to consciousness. 00:59:04.640 |
that tries to maximize this dip in the energy function. 00:59:08.260 |
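(A loose, toy illustration of the energy-function idea as described here, not Bengio's actual consciousness prior: two coupled interpretation variables settle into a mutually consistent, low-energy state under gradient descent, and clamping one of them makes the free one snap into the matching interpretation. The energy function and numbers are invented for the sketch.)

```python
# Toy "minimize the contradictions" energy model: two interpretation variables
# x[0], x[1] have low energy when they agree (their product is near 1). Plain
# gradient descent lets them settle into a consistent state; clamping one
# variable ("manually depressing a point in the model") flips which state the
# free variable snaps into. Invented energy function, purely illustrative.
import numpy as np

def energy_grad(x):
    p = x[0] * x[1]
    return np.array([2 * (p - 1) * x[1] + 0.2 * x[0],
                     2 * (p - 1) * x[0] + 0.2 * x[1]])

def settle(x, clamp=None, lr=0.1, steps=300):
    x = x.astype(float).copy()
    for _ in range(steps):
        x -= lr * energy_grad(x)
        if clamp is not None:
            x[0] = clamp            # hold one variable fixed
    return x

print(settle(np.array([0.2, 0.1])))              # settles near (+0.95, +0.95)
print(settle(np.array([0.2, 0.1]), clamp=-1.0))  # free variable flips to about -0.9
```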
- Yeah, I think I would need to dig into details 00:59:13.340 |
because I think the way he uses the word consciousness 00:59:20.860 |
as opposed to the subjective experience, the hard problem. 00:59:23.700 |
- No, it's not even the self is in the world. 00:59:26.580 |
The self is the agent, and you don't need to be aware 00:59:31.100 |
The self is just a particular content that you can have, 00:59:35.980 |
But you can be conscious in, for instance, a dream at night 00:59:39.700 |
or during a meditation state, but you don't have a self. 00:59:43.780 |
- You're just aware of the fact that you are aware. 00:59:45.620 |
And what we mean by consciousness in the colloquial sense 00:59:59.180 |
- We are the thing that pays attention, right. 01:00:02.000 |
I don't see where the awareness that we're aware, 01:00:07.000 |
the hard problem doesn't feel like it's solved. 01:00:10.620 |
I mean, it's called a hard problem for a reason 01:00:14.900 |
because it seems like there needs to be a major leap. 01:00:19.340 |
- Yeah, I think the major leap is to understand 01:00:25.260 |
that a physical system is able to create a representation 01:00:33.960 |
But once you accept the fact that you are not in physics, 01:00:44.140 |
- Consciousness is being written into the story. 01:00:48.860 |
You ask yourself, is this real what I'm seeing? 01:00:51.300 |
And your brain writes into the story, yes, it's real. 01:00:53.860 |
- So what about the perception of consciousness? 01:01:12.860 |
and maybe you can tell me if they're neighboring ideas. 01:01:18.900 |
and the other is make it appear conscious to others. 01:01:27.420 |
What would it take to make you not conscious? 01:01:30.100 |
So when you are thinking about how you perceive the world, 01:01:35.220 |
can you decide to switch from looking at qualia 01:01:44.900 |
There is a particular way in which you can look at the world 01:01:48.380 |
and recognize its machine nature, including your own. 01:01:51.460 |
And in that state, you don't have that conscious experience 01:02:05.420 |
this is typically what we mean with enlightenment states. 01:02:11.740 |
but you can also do this on the experiential level, 01:02:16.260 |
- See, but then I can come back to a conscious state. 01:02:34.200 |
It's a nice thing to know that they're conscious, 01:02:38.340 |
and they can, I don't know how fundamental consciousness 01:02:43.940 |
but it seems like to be at least an important part. 01:02:48.060 |
And I ask that in the same kind of way for robots. 01:02:58.420 |
it feels like there needs to be elements of consciousness 01:03:11.420 |
that we are both acting on models of our own awareness. 01:03:14.900 |
- The question is how hard is it for the robot, 01:03:24.400 |
- Yes, so the issue for me is currently not so much 01:03:27.340 |
on how to build a system that creates a story 01:03:32.900 |
but to make an adequate representation of the world. 01:03:36.580 |
And the model that you and me have is a unified one. 01:03:40.260 |
It's one where you basically make sense of everything 01:03:45.020 |
Every feature in the world that enters your perception 01:03:51.820 |
And we don't have an AI that is able to construct 01:03:55.460 |
- So you need that unified model to do the party trick? 01:04:23.900 |
and predict the next frame and the sensory data 01:04:37.620 |
And this means you build lots and lots of functions 01:04:39.820 |
that take all the blips that you feel on your skin 01:04:42.180 |
and that you see on your retina, or that you hear, 01:04:48.140 |
that allows you to predict what kind of sensory data, 01:05:01.940 |
- You build a very accurate prediction mechanism 01:05:11.740 |
- And you have to do two things to make that happen. 01:05:13.820 |
One is you have to build a network of relationships 01:05:24.500 |
that is connected with relationships to other variables. 01:05:27.980 |
And these relationships are computable functions 01:05:36.100 |
there should be a face nearby that has the same direction. 01:05:42.540 |
because it's probably not a nose that you're looking at. 01:05:48.620 |
until you get to a point where your model converges. 01:06:01.140 |
And accommodation is the change of the models 01:06:12.380 |
that's able to do prediction and perception correct 01:06:21.500 |
is we want to minimize the contradictions in the model. 01:06:24.740 |
And of course, it's very easy to make a model 01:06:35.940 |
But you also want to reduce the degrees of freedom 01:06:47.860 |
between minimizing contradictions and reducing uncertainty. 01:06:59.340 |
So you need to assign value to what you observe. 01:07:05.180 |
that is estimating what you should be looking at 01:07:13.020 |
So you need to have something like convergence links 01:07:16.020 |
that tell you how to get from the present state of the model 01:07:25.620 |
And you need to have some kind of motivational system 01:07:30.820 |
So now we have a second agent next to the perceptual agent. 01:07:40.540 |
and that interacts with the perceptual system 01:07:49.060 |
what is it, a higher level narrative over some lower level? 01:08:05.740 |
and some cognitive needs and some social needs, 01:08:10.260 |
in your nervous system as the motivational system. 01:08:12.700 |
But they're basically cybernetic feedback loops. 01:08:23.140 |
or that makes your worm go to eat food and so on. 01:08:30.620 |
so it's able to solve that control problem to some degree. 01:08:33.660 |
And now what we learned is that it's very hard 01:08:39.380 |
to see what kind of relationships could exist between them. 01:08:48.380 |
if I would put the following things together? 01:08:51.060 |
Sometimes you find a gradient for that, right? 01:08:54.260 |
you don't need to remember where you came from. 01:08:59.420 |
But if you have a world where the problems are discontinuous 01:09:04.340 |
you need to retain memory of what you explored. 01:09:07.380 |
You need to construct a plan of what to explore next. 01:09:25.660 |
this attention agent is required for consciousness 01:09:31.460 |
So it's the index memories that this thing retains 01:09:36.220 |
when it manipulates the perceptual representations 01:09:39.220 |
to maximize the value and minimize the conflicts 01:09:44.860 |
So the purpose of consciousness is to create coherence 01:09:54.140 |
so you can coordinate your actions and so on. 01:09:56.380 |
And in order to do this, it needs to form memories. 01:10:04.140 |
that are being revisited later on to backtrack, 01:10:07.140 |
to undo certain states, to look for alternatives. 01:10:10.180 |
And these index memories that you can recall, 01:10:13.060 |
that is what you perceive as your stream of consciousness. 01:10:19.460 |
If you could not remember what you paid attention to, 01:10:22.940 |
- So consciousness is the index in the memory database. 01:10:30.020 |
But let me sneak up to the questions of consciousness 01:10:37.220 |
So we usually relate suffering to consciousness. 01:10:44.380 |
I think to me, that's a really strong sign of consciousness, 01:10:55.980 |
And like in your model, what you just described, 01:11:01.580 |
and what is the coherence with the perception, 01:11:05.120 |
with this predictive thing that's going on in the perception, 01:11:12.700 |
You know, the higher level suffering that humans do? 01:11:27.940 |
sends to another part of the mind to regulate its behavior, 01:11:30.860 |
to tell it the behavior that you're currently exhibiting 01:11:37.100 |
to move away from what you're currently doing 01:12:01.420 |
or you're mismodeling the dynamics of the world. 01:12:04.940 |
that cannot be improved by generating more pain. 01:12:12.380 |
What do you do if something doesn't get better, 01:12:20.980 |
without a change inside, this is what we call suffering. 01:12:36.460 |
the orchestra doesn't need much of a conductor, 01:12:42.060 |
or something is consistently producing disharmony 01:12:45.020 |
and mismatches, then the conductor becomes alert 01:12:49.020 |
So suffering attracts the activity of our consciousness. 01:13:08.820 |
We get some consciousness above our pay grade, maybe, 01:13:17.060 |
And trauma means that you are suffering an injury 01:13:27.940 |
And this means that the behavior of the system 01:13:33.460 |
in a way that some mismatch exists now in the regulation, 01:13:39.100 |
by following the pain in the direction which it hurts, 01:13:41.860 |
the situation doesn't improve, but get worse. 01:13:44.380 |
And so what needs to happen is that you grow up. 01:13:46.940 |
And that part that has grown up is able to deal 01:13:50.460 |
with the part that is stuck in this earlier phase. 01:13:54.620 |
you're adding extra layers to your cognition. 01:13:58.040 |
Let me ask you then, 'cause I gotta stick on suffering, 01:14:03.900 |
So not our consciousness, but the consciousness of others. 01:14:16.300 |
The amount of suffering on earth would be unthinkable." 01:14:53.980 |
and the darkest side of that, which is suffering, 01:15:00.340 |
And so I started thinking, how much responsibility 01:15:13.060 |
Like having to come up with a definition of consciousness 01:15:40.840 |
It's like these, you don't have to use the word consciousness 01:15:50.100 |
And these are the things that don't matter to me. 01:15:52.340 |
- Yeah, but when one of his commanders failed him, 01:15:54.580 |
he broke his spine and let him die in a horrible way. 01:15:59.140 |
And so in some sense, I think he was indifferent 01:16:02.620 |
to suffering or he was not indifferent in the sense 01:16:05.820 |
that he didn't see it as useful if he inflicted suffering, 01:16:09.440 |
but he did not see it as something that had to be avoided. 01:16:18.860 |
and the infliction of suffering to reach my goals 01:16:23.900 |
- I see, so like different societies throughout history 01:16:31.580 |
- But also even the objective of avoiding suffering. 01:16:37.540 |
I mean, this is where like religious belief really helps 01:16:40.740 |
that afterlife, that doesn't matter that you suffer or die, 01:17:02.180 |
And I don't think that religion has to be superstitious, 01:17:04.620 |
otherwise it should be condemned in all cases. 01:17:06.860 |
- You're somebody who's saying we live in a dream world, 01:17:12.540 |
There are limits to what languages can be constructed. 01:17:16.060 |
Mathematics provides solid evidence for its own structure. 01:17:19.500 |
And once we have some idea of what languages exist 01:17:24.460 |
and what learning itself is in the first place, 01:17:26.580 |
and so on, we can begin to realize that our intuitions 01:17:31.580 |
that we are able to learn about the regularities 01:17:46.860 |
doesn't mean mathematics can't give us a consistent glimpse 01:17:54.980 |
- We can basically distinguish useful encodings 01:17:59.020 |
And when we apply our truth-seeking to the world, 01:18:07.500 |
What we typically do is we take the state vector 01:18:10.100 |
of the universe, separate it into separate objects 01:18:12.140 |
that interact with each other, so interfaces. 01:18:21.180 |
that we can apply to our models of the universe. 01:18:32.060 |
that are somehow discrete and interacting with each other 01:18:38.440 |
are projected into the world, not arbitrarily projected, 01:18:46.260 |
And we sometimes notice that we run into contradictions 01:18:51.020 |
like economic aspects of the world and so on, 01:18:53.980 |
or political aspects or psychological aspects 01:18:58.320 |
And the objects that we are using to separate the world 01:19:21.000 |
should correspond to the evidence that you have. 01:19:26.320 |
to talk about your favorite set of ideas and people, 01:19:41.640 |
And why to you is it not a useful framework of thought? 01:19:53.060 |
And postmodernism is a set of philosophical ideas 01:19:59.680 |
that is characterized by some useful thinkers, 01:20:08.540 |
because I think that it's not leading me anywhere 01:20:14.080 |
It's mostly, I think, born out of the insight 01:20:16.540 |
that the ontologies that we impose on the world 01:20:21.480 |
and that we can often get to a different interpretation 01:20:32.200 |
and a set of stories that are arbitrary, I think is wrong. 01:20:36.380 |
And the people that are engaging in this type of philosophy 01:21:02.280 |
But there is a very strong influence of this on ideology, 01:21:25.240 |
because maybe my separation of the world into objects 01:21:33.560 |
But it mostly exists to dismiss the ideas of other people. 01:21:37.360 |
- It becomes, yeah, it becomes a political weapon of sorts. 01:21:51.840 |
that truth is something that is completely negotiable 01:22:10.520 |
the ideological part of any movement, actually, 01:22:17.600 |
And to me, an ideology is basically a viral memeplex 01:22:26.120 |
It gets warped in such a way that you're being cut off 01:22:35.920 |
- Right, so, I mean, there's certain properties 01:22:39.480 |
One of them is that dogmatism of just certainty, 01:22:50.320 |
It's very interesting to look at the type of model 01:23:00.040 |
the evidence for this is actually just much weaker 01:23:02.280 |
than you thought, and look here at some studies. 01:23:06.200 |
It's usually normative, which means some thoughts 01:23:09.360 |
are unthinkable because they would change your identity 01:23:16.360 |
And this cuts you off from considering an alternative, 01:23:23.280 |
to lock people into a certain mode of thought, 01:23:25.760 |
and this removes agency over your own thoughts, 01:23:28.720 |
It's basically not just a process of domestication, 01:23:32.660 |
but it's actually an intellectual castration that happens. 01:23:40.900 |
- Can I ask you about substances, chemical substances 01:23:53.160 |
So psychedelics that increasingly have been getting 01:23:58.860 |
So in general, psychedelics, psilocybin, MDMA, 01:24:02.660 |
but also a really interesting one, the big one, 01:24:06.300 |
What and where are the places that these substances take 01:24:12.160 |
the mind that is operating in the dream world? 01:24:15.380 |
Do you have an interesting sense how this throws a wrinkle 01:24:28.920 |
- I suspect that a way to look at psychedelics 01:24:41.620 |
are being severed in your mind, are no longer active. 01:24:45.320 |
Your mind basically gets free to move in a certain direction 01:24:48.880 |
because some inhibition, some particular inhibition 01:24:52.760 |
And as a result, you might stop having a self, 01:24:55.360 |
or you might stop perceiving the world as three-dimensional. 01:25:04.520 |
And I suppose that for every state that can be induced 01:25:07.600 |
with psychedelics, there are people that are naturally 01:25:11.000 |
So sometimes psychedelics shift you through a range 01:25:14.040 |
of possible mental states, and they can also shift you 01:25:17.060 |
out of the range of permissible mental states, 01:25:19.120 |
that is where you can make predictive models of reality. 01:25:22.660 |
And what I observe in people that use psychedelics a lot 01:25:29.600 |
Overfitting means that you are using more bits 01:25:34.560 |
for modeling the dynamics of a function than you should. 01:25:38.080 |
And so you can fit your curve to extremely detailed things 01:25:41.920 |
in the past, but this model is no longer predictive 01:25:45.880 |
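(Overfitting in exactly this curve-fitting sense can be shown in a few lines: a high-degree polynomial matches the noisy past samples almost perfectly but generally predicts fresh points from the underlying function worse than a simpler fit. Numbers and the toy function are invented for illustration.)

```python
# Curve fitting with too many bits: a degree-9 polynomial through 12 noisy
# samples matches the past almost exactly, but usually predicts fresh points
# from the underlying function worse than a simple degree-3 fit.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.standard_normal(12)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, float(train_err), float(test_err))
```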
- What is it about psychedelics that forces that? 01:25:59.360 |
So it feels like psychedelics expansion of the mind, 01:26:03.280 |
like taking you outside of, like forcing your model 01:26:14.400 |
what I would say is psychedelics are akin to is traveling 01:26:19.880 |
Like going, if you've never been to like India 01:26:22.040 |
or something like that from the United States, 01:26:24.280 |
very different set of people, different culture, 01:26:33.600 |
teleport people into a universe that is hyperbolic, 01:26:37.880 |
which means that if you imagine a room that you're in, 01:26:44.720 |
You need to go 720 degrees to go full circle. 01:26:48.120 |
- So the things that people learn in that state 01:26:50.880 |
cannot be easily transferred in this universe 01:27:00.360 |
of their spatial cognition has been desynchronized 01:27:08.680 |
So you learn something interesting about your brain. 01:27:11.000 |
It's difficult to understand what exactly happened, 01:27:13.200 |
but we get a pretty good idea once we understand 01:27:17.800 |
- Yeah, but doesn't give you a fresh perspective 01:27:30.240 |
- Well, there is no sound outside of your mind, 01:27:39.760 |
- Yeah, in the physical reality, there's sound waves 01:27:51.840 |
So, don't psychedelics give you a fresh perspective 01:28:24.160 |
you have seen things from certain perspectives, 01:28:31.560 |
which means they can learn to recognize them later 01:28:35.200 |
And I suspect that's the reason that many of us 01:28:47.840 |
and then it fluidly turns this into a flying dream 01:28:55.800 |
And similar things can happen with semantic relationships. 01:29:22.000 |
the way in which dreams are induced in the brain. 01:29:30.560 |
and you no longer get enough data from your eyes, 01:29:33.920 |
but there is a particular type of neurotransmitter 01:29:37.120 |
that is saturating your brain during these phases, 01:29:44.720 |
And psychedelics are linking into these mechanisms, 01:29:49.840 |
- So isn't that another trickier form of data augmentation? 01:30:00.920 |
So basically people are overclocking their brains, 01:30:17.840 |
which I just think that doesn't lead to overfitting, right? 01:30:26.320 |
my experiences with people that have done psychedelics 01:30:40.200 |
He genuinely believed, he writes in his manifestos, 01:30:46.280 |
because it's so much more efficient and so much better. 01:30:49.000 |
And he gave LSD to children in this community 01:30:52.640 |
of a few thousand people that he had near San Francisco. 01:30:55.760 |
And basically he was losing touch with reality. 01:31:10.720 |
What happened was that he got in a euphoric state. 01:31:13.520 |
That euphoric state happened because he was overfitting. 01:31:30.760 |
- I understand what you mean by overfitting now. 01:31:35.440 |
to the term overfitting in this case, but I got you. 01:31:42.720 |
from a lot of actions that he shouldn't have been doing. 01:31:46.600 |
who was studying dolphin languages and aliens and so on, 01:31:51.600 |
a lot of people that use psychedelics became very loopy. 01:31:58.680 |
when people are on psychedelics is that they are in a state 01:32:00.960 |
where they feel that everything can be explained now. 01:32:12.080 |
Very often these connections are over-interpretations. 01:32:23.360 |
or if it's more the social, like being the outsider 01:32:35.560 |
that could have a much stronger effect of overfitting 01:32:38.200 |
than do psychedelics themselves, the actual substances, 01:32:43.360 |
So it could be that as opposed to the actual substance. 01:32:46.520 |
If you're a boring person who wears a suit and tie 01:32:59.640 |
that the people you referenced are already weirdos. 01:33:10.960 |
started out as squares and were liberating themselves 01:33:17.920 |
of their own self-model, of their relationship to the world. 01:33:23.160 |
They basically saw and experienced a space of possibilities. 01:33:26.680 |
They experienced what it would be like to be another person. 01:33:37.480 |
I mean, I love the metaphor of data augmentation 01:33:44.880 |
of self-supervised learning in the computer vision domain 01:33:53.080 |
like chemically induced data augmentation in the human mind. 01:33:58.080 |
- There's also a very interesting effect that I noticed. 01:34:10.920 |
So severe cluster headaches or migraines 01:34:24.120 |
And there are no studies on this for that reason. 01:34:29.960 |
that it basically can reset the serotonergic system. 01:34:39.120 |
And as a result, it needs to find a new equilibrium. 01:34:41.920 |
And in some people, that equilibrium is better. 01:34:44.200 |
But it also follows that in other people, it might be worse. 01:34:47.120 |
So if you have a brain that is already teetering 01:34:52.840 |
it can be permanently pushed over that boundary. 01:34:55.560 |
- Well, that's why you have to do good science, 01:34:59.600 |
of how well it actually works for the different conditions 01:35:01.640 |
like MDMA seems to help with PTSD, same with psilocybin. 01:35:11.560 |
- Yeah, so based on the existing studies with MDMA, 01:35:14.680 |
it seems that if you look at Rick Doblin's work 01:35:18.120 |
and what he has published about this and talks about, 01:35:21.400 |
MDMA seems to be a psychologically relatively safe drug, 01:35:34.440 |
which a lot of kids do in party settings during raves 01:35:42.280 |
And this means that it's probably something that is best 01:35:45.400 |
and most productively used in a clinical setting 01:35:48.400 |
by people who really know what they're doing. 01:35:50.080 |
And I suspect that's also true for the other psychedelics. 01:35:59.520 |
the effects on the psyche can be much more profound and lasting. 01:36:08.240 |
as far as I know in terms of the studies they're running, 01:36:19.000 |
So they could do like huge doses in a clinical setting 01:36:25.200 |
- Yeah, it seems that most of the psychedelics 01:36:29.320 |
which means that the effect on the rest of the body 01:36:36.200 |
Maybe ketamine can be dangerous in larger doses 01:36:41.320 |
But the LSD and psilocybin work in very, very small doses, 01:36:47.880 |
of psilocybin and LSD is only the active part. 01:36:54.160 |
on your mental wiring can be very dangerous, I think. 01:37:00.600 |
What are your thoughts about GPT-3 and language models 01:37:21.160 |
who realized I was bored in class and put me in his lab. 01:37:25.240 |
And he gave me the task to discover grammatical structure 01:37:30.120 |
And the unknown language that I picked was English 01:37:38.000 |
And he gave me the largest computer at the whole university. 01:37:42.000 |
It had two gigabytes of RAM, which was amazing. 01:37:45.400 |
with some in-memory compression to do statistics 01:37:49.360 |
And I first would create a dictionary of all the words, 01:37:53.960 |
which basically tokenizes everything and compresses things 01:37:57.320 |
so that I don't need to store the whole word, 01:38:02.320 |
And then I was taking this all apart in sentences 01:38:05.920 |
and I was trying to find all the relationships between the words. 01:38:25.400 |
But it's not possible to enumerate all the possibilities that can exist, 01:38:28.080 |
at least not with the resources that we had back then. 01:38:35.200 |
So I wrote something that was pretty much a hack 01:38:38.600 |
that did this for at least first order relationships. 01:38:42.360 |
And I came up with some kind of mutual information graph 01:38:47.520 |
that looks exactly like the grammatical structure 01:38:49.400 |
of the sentence, just by trying to encode the sentence 01:38:52.600 |
in such a way that the words would be written with as few bits as possible. 01:38:58.040 |
And what I also found is that we would be able 01:39:06.560 |
to reproduce grammatically correct sentences 01:39:11.960 |
by just having more bits in these relationships. 01:39:18.680 |
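To make this kind of first-order statistics concrete, here is a minimal sketch in Python: tokenize a tiny corpus, count adjacent word pairs, and score each pair by pointwise mutual information, so that pairs which co-occur more than chance stand out. The toy corpus and the scoring choice are illustrative assumptions, not the actual program described in the conversation.

```python
import math
from collections import Counter

# Toy corpus standing in for the "unknown language" text (illustrative only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigrams = Counter(corpus)                    # word frequencies (the "dictionary")
bigrams = Counter(zip(corpus, corpus[1:]))    # adjacent word pairs (first-order relationships)
total_uni = sum(unigrams.values())
total_bi = sum(bigrams.values())

def pmi(w1, w2):
    # Pointwise mutual information: how much more often the pair occurs
    # together than independence would predict.
    p_joint = bigrams[(w1, w2)] / total_bi
    p1, p2 = unigrams[w1] / total_uni, unigrams[w2] / total_uni
    return math.log(p_joint / (p1 * p2))

# High-PMI pairs hint at grammatical structure: they let you encode the
# sentence with fewer bits than treating the words as independent.
for w1, w2 in bigrams:
    print(f"{w1:>4} {w2:<4} PMI = {pmi(w1, w2):+.2f}")
```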
And I didn't know how to make higher order models back then 01:39:27.680 |
And this problem, that we cannot look at the relationships between all the words, 01:39:27.680 |
is being solved in different domains in different ways. 01:39:35.720 |
So in computer graphics and computer vision, 01:39:35.720 |
Convolutional neural networks are hierarchies of filters 01:39:46.600 |
that exploit the fact that neighboring pixels in images are related, 01:39:46.600 |
and group features that are next to each other hierarchically together. 01:39:57.640 |
In language, related words can be far apart in a sentence. So how can you learn the topology of language? 01:40:18.120 |
And I think it's for this reason, because this difficulty existed, 01:40:22.600 |
that the transformer was invented in natural language processing, not in vision. 01:40:28.680 |
where every layer learns what to pay attention to 01:41:26.600 |
And so every word is basically a set of coordinates in a semantic space, 01:41:26.600 |
and there is a trick to also encode the order of the words in a sentence. 01:41:33.120 |
The context is limited, but 2048 tokens is about a couple pages of text, 01:41:37.840 |
And so they managed to do pretty exhaustive statistics 01:41:49.160 |
between two pages of text, which is tremendous, right? 01:41:55.040 |
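As a rough sketch of the attention idea being described, the snippet below implements one head of scaled dot-product self-attention over a short sequence: each token is a vector of coordinates, a sinusoidal signal encodes word order, and the attention weights say which tokens each position looks at. The sizes and random weights are placeholders; a GPT-3-scale model stacks many such layers over roughly 2048 tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16                      # a real model uses ~2048 tokens and larger vectors

tokens = rng.normal(size=(seq_len, d_model))  # stand-in word embeddings ("coordinates")
positions = np.stack(                          # sinusoidal signal encoding word order
    [np.sin(np.arange(seq_len) / 10_000 ** (i / d_model)) for i in range(d_model)],
    axis=1)
x = tokens + positions

Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d_model)            # relevance of every token to every other token
scores -= scores.max(axis=-1, keepdims=True)   # numerical stability for the softmax
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
attended = weights @ V                         # each position becomes a mix of what it attends to

print(weights.round(2))                        # attention pattern: which tokens each position looks at
```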
and I was only looking for first order relationships 01:42:15.200 |
that they're not only able to reproduce style, 01:42:30.240 |
So the results that GPT-3 got, I think were amazing. 01:42:34.080 |
- By the way, I actually didn't check carefully 01:42:34.080 |
how well it couples semantics to something like multiplication. 01:42:40.560 |
Is it able to do some basic math on two digit numbers? 01:42:53.120 |
- Yeah, it basically fails if you take larger digit numbers. 01:43:05.000 |
And this could be an issue of the training set, 01:43:19.440 |
- Yeah, and you're not writing a lot about it. 01:43:22.400 |
And the other thing is that the loss function 01:43:24.760 |
that is being used is only minimizing surprises. 01:43:27.040 |
So it's predicting what comes next in a typical text. 01:43:29.600 |
It's not trying to go for causal closure first as we do. 01:43:34.760 |
- But the fact that that kind of prediction works 01:43:51.920 |
So the problem is that it loses coherence at some point. 01:43:57.120 |
that GPT-3 is unable to deal with semantics at all, 01:44:01.360 |
because you ask it to perform certain transformations 01:44:04.080 |
in text and it performs these transformation in text. 01:44:09.200 |
to perform are transformations in text, right? 01:44:16.440 |
There was a paper that was generating lots and lots 01:44:16.440 |
of mathematical problems and solving ones that, according to the authors, Mathematica could not. 01:44:32.480 |
To which some of the people in Mathematica responded 01:44:39.880 |
that they were not using Mathematica in the right way 01:44:43.560 |
I have not really followed the resolution of this conflict. 01:44:48.720 |
What I really don't like in machine learning papers 01:44:48.720 |
is this kind of specific use of Mathematica to demonstrate, 01:44:58.880 |
look, here's, they'll show successes and failures, 01:45:01.160 |
but they won't have a very clear representation 01:45:04.160 |
The point is that the authors could get better results from this 01:45:15.480 |
in their experiments than they could get from the way in 01:45:19.840 |
which they were using computer algebra systems. 01:45:23.480 |
And it's able to perform substantially better 01:45:29.120 |
when given large amounts of training data, using the same underlying algorithm. 01:45:35.680 |
So I'm using your tweets as if this is like Plato, right? 01:45:47.080 |
As if this is well thought out novels that you've written. 01:46:00.280 |
what are the limitations of GPT-3 when it scales? 01:46:04.200 |
So what do you think will be the capabilities of GPT-4, 01:46:11.760 |
- So obviously when we are writing things right now, 01:46:18.000 |
for the next generation of machine learning models. 01:46:23.080 |
And I think the tweet is already a little bit older 01:46:25.600 |
and we now have Wu Dao and we have a number of other systems 01:46:25.600 |
Don't know what OpenAI's plans are in this regard. 01:46:39.040 |
So one is obviously everything you put on the internet 01:46:51.640 |
I read it as almost like GPT-4 is intelligent enough 01:46:58.240 |
So not only did a programmer tell it to collect this data 01:47:06.200 |
which is like it has achieved AGI kind of thing. 01:47:15.240 |
- So GPT-4 is listening and GPT-5 actually constructing 01:47:22.840 |
what everybody is trying to do right now in AI 01:47:25.000 |
is to extend the transformer to be able to deal with video. 01:47:28.000 |
And there are very promising extensions, right? 01:47:32.360 |
There's a paper by Google that is called Perceiver, 01:47:32.360 |
and that is overcoming some of the limitations 01:47:39.760 |
of the transformer by letting it learn the topology 01:47:45.360 |
and by training it to find better input features. 01:47:50.080 |
So the basically feature abstractions that are being used 01:47:52.560 |
by this successor to GPT-3 are chosen in such a way 01:47:52.560 |
So one of the limitations of GPT-3 is that it's amnesiac. 01:48:07.240 |
So it forgets everything beyond the two pages 01:48:10.000 |
that it currently reads, also during generation, 01:48:18.680 |
Can you just make a bigger, bigger, bigger input? 01:48:21.320 |
- No, I don't think that our own working memory is that much larger. 01:48:21.320 |
The difference is that GPT-3 is locked onto the two pages it currently reads, and it's not allowed to focus on anything else 01:48:37.040 |
We might get up and take another book from the shelf. 01:49:02.840 |
and we can edit our working memory in any way 01:49:13.080 |
So this ability to perform experiments on the world 01:49:22.200 |
to achieve a certain aesthetic of your modeling, 01:49:24.840 |
that is something that eventually needs to be done. 01:49:28.280 |
And at the moment, we are skirting this in some sense 01:49:31.080 |
by building systems that are larger and faster 01:49:33.400 |
so they can use dramatically larger resources 01:49:36.080 |
and much more training data than human beings can. 01:49:36.080 |
- So do you think sort of making the systems like, 01:49:51.880 |
So like some of the language models are focused on two pages, 01:50:08.680 |
So it's like stacks, it's a GPT-3s all the way down. 01:50:13.720 |
So it's not necessarily a contiguous two years with no gaps. 01:50:13.720 |
It's things out of two years or out of 20 years 01:50:24.600 |
that are predicted to be the most useful ones 01:50:29.720 |
And this prediction itself requires a very complicated model 01:50:32.800 |
and that's the actual model that you need to be making. 01:50:34.760 |
It's not just that you are trying to understand 01:50:54.280 |
So it starts out with the fact that you possibly 01:50:57.400 |
don't just want to have a feed-forward model, 01:51:04.520 |
you probably need to loop it back into itself 01:51:08.240 |
Once you do this, when you are predicting the next frame 01:51:12.040 |
and your internal next frame in every moment, 01:51:17.520 |
it means that signals can travel from the output 01:51:21.320 |
of the network into the middle of the network 01:51:25.920 |
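A minimal sketch of that feedback idea, with made-up sizes and weights: the model's predicted next frame is fed back in together with the recurrent state, so information from the output keeps flowing into the middle of the system over time, instead of a single feed-forward pass.

```python
import numpy as np

rng = np.random.default_rng(1)
d_frame, d_hidden = 4, 8                       # toy dimensions, purely illustrative

W_in = rng.normal(size=(d_frame + d_hidden, d_hidden)) * 0.1
W_out = rng.normal(size=(d_hidden, d_frame)) * 0.1

state = np.zeros(d_hidden)                     # internal state ("middle of the network")
frame = rng.normal(size=d_frame)               # current observed frame

for t in range(5):
    x = np.concatenate([frame, state])         # observation plus fed-back internal state
    state = np.tanh(x @ W_in)                  # update the internal state
    prediction = state @ W_out                 # predict the next frame
    frame = prediction                         # loop the output back into the input
    print(f"t={t}  predicted next frame: {prediction.round(3)}")
```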
- Do you think it could still be differentiable? 01:51:28.800 |
Do you think it still could be a neural network? 01:51:37.240 |
And when you want to deal with non-differentiable ones, 01:51:46.680 |
You need to be able to perform program synthesis. 01:51:49.360 |
You need to be able to backtrack in these operations 01:51:54.080 |
And this thing needs a model of what it's currently doing. 01:52:05.440 |
So let me ask you, it's not quite program synthesis, 01:52:21.240 |
I don't know if you got a chance to look at it, 01:52:22.800 |
but it's the system that's able to generate code 01:52:30.080 |
Like the header of a function with some comments. 01:52:34.880 |
It does not do a perfect job, which is very important, 01:52:34.880 |
but an incredibly good job of generating functions. 01:52:59.600 |
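As a hypothetical illustration of that workflow: the programmer writes only the function header and a docstring, and a code-generation model fills in the body. The completion below is written by hand to show the shape of the result; it is not actual Codex output, and the function name is made up.

```python
def moving_average(values, window):
    """Return the averages over each sliding window of `values`.

    Example: moving_average([1, 2, 3, 4], 2) -> [1.5, 2.5, 3.5]
    """
    # Everything below is the kind of body such a model would generate
    # from the header and docstring above.
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average([1, 2, 3, 4], 2))   # [1.5, 2.5, 3.5]
```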
And that's because the majority of programming tasks 01:53:01.800 |
that are being done in the industry right now 01:53:06.560 |
- People are writing code that other people have written, 01:53:11.600 |
And a lot of the work that programmers do in practice 01:53:20.960 |
- How to copy and paste from Stack Overflow, that's right. 01:53:23.400 |
- Yes, and so of course we can automate that. 01:53:30.880 |
- Yes, but it's not just copying and pasting, 01:53:32.800 |
it's also basically learning which parts you need to modify 01:53:46.840 |
the semantics of what you're doing to some degree. 01:53:48.720 |
- Yeah, and you can automate some of those things. 01:53:51.600 |
The thing that makes people nervous, of course, 01:54:00.040 |
on the actual final operation of that program. 01:54:05.400 |
which in the space of language doesn't really matter, 01:54:08.800 |
but in the space of programs can matter a lot. 01:54:32.280 |
- But it's scarier when a program is doing it because, 01:54:48.060 |
a human at least has the sense to know when stuff is important to not mess up. 01:54:48.060 |
I mean, okay, if I give you code generated by 01:55:08.020 |
GitHub Copilot and code generated by a human, 01:55:08.020 |
which do you select, today and in the next 10 years? 01:55:15.980 |
Wouldn't you still be comfortable with the human? 01:55:24.260 |
- At the moment, when you go to Stanford to get an MRI, 01:55:29.540 |
they will write a bill to the insurance over $20,000. 01:55:34.540 |
And of this, maybe half of that gets paid by the insurance 01:55:40.540 |
And the MRI cost them $600 to make, maybe, probably less. 01:55:47.660 |
So do you trust the human that writes the software and deploys this process? 01:55:47.660 |
It's very difficult for me to say whether I trust people. 01:56:01.940 |
where somebody is trying to serve an abstract greater whole 01:56:15.500 |
There's a lot of bad people, whether incompetent 01:56:29.580 |
the more resistance you have in your own human heart. 01:56:34.580 |
- But don't explain with malevolence or stupidity what can be explained by incentives. 01:56:34.580 |
So what happens in Stanford is not that somebody is evil. 01:56:45.100 |
It's just that they do what they're being paid for. 01:57:01.540 |
it's not absolute malevolence, but it's a small amount. 01:57:07.480 |
I mean, when you see there's something wrong with the world, 01:57:10.580 |
it's either incompetence and you're not able to see it, 01:57:15.100 |
or it's cowardice that you're not able to stand up, 01:57:17.780 |
not necessarily in a big way, but in a small way. 01:57:27.660 |
is a good example of that. - So the question is, 01:57:36.620 |
is going to crash, why would you try to save dollars? 01:57:39.540 |
If you don't think that humanity will be around 01:57:49.500 |
So the question is, is there an overarching aesthetics 01:57:53.980 |
that is projecting you and the world into the future, 01:58:16.420 |
we need to go beyond the insane bias discussions and so on, 01:58:22.040 |
where people argue over the fit of a statistic to their preferred current world model. 01:58:22.040 |
I was a little confused by the previous thing, 01:58:39.820 |
to having an optimism that human civilization 01:58:50.060 |
that it's a good thing for us to keep living. 01:58:54.060 |
- This morality itself is not an end in itself. 01:58:56.880 |
It's instrumental to people living in 100 years from now. 01:59:03.100 |
So it's only justifiable if you actually think 01:59:08.580 |
or increase the probability of people being around 01:59:12.500 |
And a lot of people don't actually believe that, 01:59:26.980 |
is for a lot of people no longer sustainable, 01:59:37.380 |
I think the leading cause of personal bankruptcy in the US is medical bills. 01:59:48.820 |
The story would be that we pay all this money and are achieving a much, much longer life as a result. 01:59:51.540 |
That's not actually the story that is happening 01:59:53.700 |
because you can compare it to other countries. 01:59:55.380 |
And life expectancy in the US is currently not increasing 02:00:01.760 |
So some industrialized countries are doing better 02:00:06.340 |
And what you can see is, for instance, administrative load. 02:00:10.060 |
The healthcare system has maybe to some degree deliberately 02:00:10.060 |
been set up as a job placement program, to allow people to have jobs 02:00:14.780 |
despite not having a useful use case in productivity. 02:00:21.020 |
And the number of administrators in the healthcare system has grown enormously. 02:00:29.440 |
And this is something that you have to pay for, right? 02:00:37.860 |
And also, the reason the revenues that are being generated 02:00:37.860 |
in the healthcare system are relatively large 02:00:46.660 |
is because market mechanisms are not working. 02:01:00.820 |
- So this is a thing that has to do with values. 02:01:03.420 |
And this is not because people are malicious on all levels. 02:01:06.500 |
It's because they are not incentivized to act 02:01:09.140 |
on a greater whole, on this idea that you treat somebody else the way you would want to be treated. 02:01:23.120 |
But I think there's been, continued throughout history, 02:01:25.920 |
a kind of mechanism design, of trying to design incentives 02:01:29.360 |
in such a way that these systems behave better. 02:01:32.760 |
I mean, it's a very difficult thing to operate 02:01:46.740 |
of what we are doing are predictive of the future 02:02:05.160 |
you probably make an estimate of what is the thing 02:02:09.420 |
What is it that I should change about my own policies? 02:02:18.460 |
Or I would change it into something different. 02:02:35.180 |
to operate your life is you need to always get sleep. 02:02:39.060 |
is totally the wrong way to operate in your life. 02:02:43.060 |
Like you should have gotten all your shit done in time 02:02:46.460 |
and gotten to sleep because sleep is very important 02:02:52.500 |
Look, the medical, the healthcare system is operating poorly. 02:02:52.500 |
especially in the capitalist society we operate, 02:03:02.700 |
we keep running into trouble and last minute, 02:03:10.760 |
You have a lot of people that ultimately are trying 02:03:13.380 |
to build a better world and get urgency about them 02:03:18.380 |
when the problem becomes more and more imminent. 02:03:24.380 |
But if you look at the history, the long arc of history, 02:03:29.380 |
it seems like operating on deadlines produces progress 02:03:29.380 |
should have engaged in mask production in January, 2020. 02:03:44.060 |
And that we should have shut down the airports early on 02:03:57.940 |
and then coming in and infecting people in the nursing homes 02:04:03.940 |
And that is something that was, I think, visible back then. 02:04:17.620 |
Have the people that made the decision to not protect the nursing homes been punished? 02:04:17.620 |
Have the people that made the wrong decisions 02:04:23.180 |
with respect to testing that prevented the development 02:04:26.780 |
of testing by startup companies and the importing of tests 02:04:39.620 |
- No, just make sure that this doesn't happen again. 02:04:44.700 |
Yes, they're being held responsible by many voices, 02:04:50.740 |
that are going to rise to the top in 10 years. 02:04:50.740 |
This moves slower than, there's obviously a lot 02:05:11.340 |
in the previous year is reverberating throughout the world. 02:05:29.140 |
And in this modernist time, the US felt actively threatened 02:05:35.740 |
The US was worried about possibility of failure. 02:05:38.740 |
And this imminence of possible failure led to decisions. 02:05:44.620 |
There was a time when the government would listen to scientists. 02:05:53.620 |
So they would be writing letters to the government. 02:06:04.060 |
I don't think such a discussion would take place today. 02:06:12.660 |
I think the virus was not sufficiently deadly. 02:06:22.020 |
The masks, this is what I realized with masks early on: 02:06:22.020 |
they very quickly became seen not as a solution, 02:06:25.340 |
but they became a thing that politicians used 02:06:33.980 |
So the same things happened with vaccines, same thing. 02:06:38.820 |
people weren't talking about solutions to this problem 02:06:41.180 |
because I don't think the problem was bad enough. 02:06:47.980 |
I think in the developed world, things are too good 02:06:57.540 |
existential threats are faced, that's when we step up 02:07:03.020 |
Now, I don't, that's sort of my argument here, 02:07:10.700 |
I was hoping that it was actually sufficiently dangerous 02:07:14.940 |
for us to step up because especially in the early days, 02:07:30.700 |
so the masks point is a tricky one because to me, 02:07:35.700 |
the manufacture of masks isn't even the problem. 02:07:42.700 |
I have not seen good science done on whether masks work or not. 02:07:45.900 |
Like there still has not been a large scale study. 02:07:49.440 |
To me, that should be, there should be large scale studies 02:07:55.180 |
in the same way that the vaccine development was aggressive. 02:08:06.020 |
there should be aggressive studies on that to understand. 02:08:12.180 |
there's still a lot of uncertainty about that. 02:08:14.180 |
Nobody wants to see this as an engineering problem 02:08:27.860 |
because our society in some sense perceives itself 02:08:37.960 |
That basically put us into the postmodernist mode. 02:08:45.260 |
the difference between the postmodern society 02:08:47.980 |
and the modern society is that the modernist society deals with substance, 02:08:47.980 |
and the postmodernist society has to deal with appearances. 02:09:02.260 |
and the media evaluates itself via other media, right? 02:09:36.340 |
And hopefully, I mean, this is where charismatic leaders 02:09:47.900 |
that will break through this postmodernist idea 02:09:55.460 |
and the drama on Twitter and all that kind of stuff. 02:10:02.600 |
to deal with the economic crisis that Germany was facing, 02:10:29.540 |
basically an option to completely expropriate 02:10:48.040 |
And the one that the Germans picked led to a catastrophe 02:10:54.380 |
And I'm not sure if the US has an immune response 02:10:58.080 |
I think that the far right is currently very weak in the US, 02:11:03.040 |
- Do you think from a historical perspective, 02:11:08.840 |
Hitler could have been stopped from within Germany 02:11:17.880 |
whether you want to focus on Stalin or Hitler, 02:11:22.460 |
as a political movement that could have been stopped. 02:11:32.420 |
It was a number of industrialists who supported him 02:11:47.940 |
and would act as a bulwark against Bolshevism, 02:11:56.260 |
And then many of the things that he was going to do, 02:12:01.660 |
was something where people thought this is rhetoric. 02:12:18.960 |
I want to carefully use this term, but uniquely evil. 02:12:29.100 |
So like, just thinking about the progress of history, 02:12:54.780 |
to do as destructive of the things that he did. 02:13:11.720 |
- It also depends on the context of the country 02:13:16.540 |
If you tell the Germans that they have a historical destiny 02:13:27.220 |
But Stalin has killed a few more people than Hitler did. 02:13:45.140 |
or if they were harmful to his racist project. 02:13:49.260 |
He basically felt that the Jews would be too cosmopolitan 02:13:57.500 |
to fit into his idea of society and an ethnostate in this way, 02:13:57.500 |
especially since they played such an important role 02:14:23.400 |
He basically, the Stalinist purges were such a random thing 02:14:26.140 |
where he said that there's a certain probability that a given group 02:14:26.140 |
has a number of German collaborators or something, 02:14:40.660 |
the number of people that were killed in absolute numbers 02:14:44.260 |
were much higher under Mao than they were under Stalin. 02:14:49.540 |
The other thing is that you look at Genghis Khan and so on, 02:15:05.940 |
And it's very difficult to eventually measure it 02:15:09.540 |
because what's happening is basically evolution 02:15:15.020 |
where one monkey figures out a way to become viral 02:15:26.580 |
And what we find so abhorrent about these changes 02:15:29.920 |
is the complexity that is being destroyed by this. 02:15:33.740 |
that burns out a lot of the existing culture and structure 02:15:38.140 |
- Yeah, and it all just starts with one monkey, 02:15:44.500 |
and there's a bunch of them throughout history. 02:15:48.000 |
It's basically similar to wildfires in California, right? 02:15:51.140 |
The temperature is rising, there is less rain falling, 02:15:55.580 |
and then suddenly a single spark can have an effect 02:16:13.640 |
The argument was about morality of AI versus human. 02:16:18.320 |
And specifically in the context of writing programs, 02:16:48.640 |
- So I'm not talking about self-directed systems 02:16:52.660 |
that are making their own goals at a global scale. 02:17:01.140 |
to see order and patterns and use this as control models 02:17:11.120 |
to set the correct incentives for these systems, 02:17:14.360 |
- But so humans versus AI, let me give you an example. 02:17:22.180 |
Let's say there's a city somewhere in the Middle East 02:17:33.300 |
with drone technology is you have information 02:17:36.300 |
about the location of a particular terrorist, 02:17:40.620 |
you have a bombing of that particular building. 02:17:47.980 |
and also at the deployment of individual bombs 02:17:52.580 |
everything is done by human except the final targeting. 02:17:56.720 |
And it's like with Spot, a similar thing, 02:17:56.720 |
Okay, what if you give AI control and saying, 02:18:16.820 |
all the bombing you do is constrained to the city. 02:18:19.460 |
Make sure it's precision based, but you take care of it. 02:18:22.880 |
So you do one level of abstraction out and saying, 02:18:31.440 |
the humans or the JavaScript GPT-3 generated code 02:18:38.240 |
I mean, that's, this is the kind of question I'm asking, 02:18:42.360 |
is the kind of bugs that we see in human nature, 02:18:47.120 |
are they better or worse than the kind of bugs we see in AI? 02:18:52.480 |
There is an issue that if people are creating 02:19:07.520 |
It's not because the computation is too expensive, 02:19:07.520 |
it's that the system ends up with a wrong model of the world, because the AI does not understand 02:19:17.000 |
the context that it's operating in in the right way. 02:19:19.320 |
And this is something that already happens with Excel. 02:19:24.840 |
Right, you don't need to have an AI system to do this. 02:19:30.440 |
where humans decide using automated criteria, 02:19:43.360 |
to some automatic criterion by people, right? 02:19:49.080 |
The issue is not the AI, it's the automation. 02:19:52.360 |
- So there's something about, right, it's automation. 02:20:00.760 |
where you give control to AI to do the automation. 02:20:07.240 |
that it feels like the scale of bug and scale of mistake 02:20:07.240 |
and scale of destruction that could be achieved by AI is much larger, 02:20:10.880 |
like wiping out an entire country accidentally, versus humans. 02:20:19.760 |
It feels like the more civilians die as a result 02:20:22.720 |
or suffer as the consequences of your decisions, 02:20:35.640 |
And so like, it becomes more and more unlikely 02:20:47.160 |
- In a way, the AI that we're currently building 02:21:04.360 |
And I think that the main issue is not on the side 02:21:07.360 |
of the AI, it's on the side of the human command hierarchy 02:21:12.360 |
- So the question is, how hard is it to encode, 02:21:15.800 |
to properly encode the right incentives into the AI? 02:21:21.480 |
what happens if we let our airplanes be flown 02:21:21.480 |
by AI systems, and the neural network is a black box, 02:21:24.520 |
They are function approximators using linear algebra, 02:21:32.360 |
and there are performing things that we can understand. 02:21:40.080 |
But we can also, instead of letting the neural network fly 02:21:43.400 |
the airplane, use the neural network to generate a provably correct program that flies the airplane. 02:21:43.400 |
And so we can use our AI, by combining different technologies, to build systems that are more reliable 02:21:51.880 |
than the systems that a human being could create. 02:21:57.720 |
And so in this sense, I would say that the danger is if you use AI 02:22:00.480 |
not to build such systems, but just to hack something together, because you can. 02:22:11.400 |
And if people are acting under these incentives 02:22:17.240 |
that they get away with delivering shoddy work 02:22:20.420 |
more cheaply using AI, there's less human oversight 02:22:25.160 |
- The thing is though, AI is still going to be unreliable, 02:22:37.240 |
and it's something that we can figure out and work with. 02:22:45.400 |
and the social systems that we can build and maintain 02:22:56.340 |
- Well, and also who creates the AI, who controls it, 02:23:00.160 |
who makes money from it, because it's ultimately humans. 02:23:11.200 |
I think that the story of a human being is somewhat random. 02:23:29.420 |
It's nice for those incentives to be transparent. 02:23:36.120 |
There seems to be a significant distrust of tech, 02:23:44.400 |
or people that run, for example, social media companies, 02:23:48.960 |
There's not a complete transparency of incentives 02:23:53.120 |
under which that particular human being operates. 02:24:00.760 |
or what the marketing team says for a company, 02:24:04.280 |
And that becomes a problem when the algorithms 02:24:08.280 |
and the systems created by him and other people 02:24:12.800 |
in that company start having more and more impact on society. 02:24:21.960 |
If the definition and the explainability of the incentives 02:24:21.960 |
was decentralized such that nobody can manipulate it, 02:24:26.040 |
and if honest communication of how these systems actually operate could be done, 02:24:34.240 |
then yes, I think AI could achieve much fairer, 02:24:59.840 |
the communication of how the system actually works, 02:25:02.440 |
that feels like you can run into a lot of trouble. 02:25:05.320 |
And that's why there's currently a lot of distrust 02:25:12.660 |
- I suspect what happened traditionally in the US 02:25:16.880 |
was that since our decision-making is much more decentralized 02:25:16.880 |
than in an authoritarian state, right? 02:25:19.840 |
The traditional media created coherence and cohesion in society by controlling what people thought. 02:25:30.280 |
It's not, I think, so much Russian influence or something. 02:25:47.840 |
It's that anybody can come up with a conspiracy theory and disrupt what people think. 02:25:47.840 |
And if that conspiracy theory is more compelling 02:25:58.200 |
than the public conspiracy theory that we give people as a default, 02:25:58.200 |
You suddenly have the situation that a single individual 02:26:05.960 |
somewhere on a farm in Texas has more listeners than CNN. 02:26:10.060 |
- Which particular farmer are you referring to in Texas? 02:26:19.200 |
- Yes, I had dinner with him a couple of times. 02:26:22.240 |
It's an interesting situation because you cannot get 02:26:39.520 |
him to take responsibility for the long-term effects of projecting these theories. 02:26:39.520 |
And now there is a push toward making social media 02:26:43.920 |
more like traditional media, which means that the opinions being expressed get curated. 02:26:46.960 |
With the goal of getting society into safe waters 02:26:58.400 |
and increase the stability and cohesion of society again, 02:27:08.360 |
And the incentives that people are under when they do this 02:27:11.440 |
are in such a way that the AI ethics that we would need 02:27:17.160 |
becomes very often something like AI politics, 02:27:26.160 |
where whatever one side proposes, another side is going to be disagreeing with. 02:27:26.160 |
As soon as one political side tells people to get vaccinated, it will mean that the people 02:27:35.700 |
that don't like you will not want to get vaccinated. 02:27:41.040 |
And as soon as you have this partisan discourse, 02:27:43.600 |
it's going to be very hard to make the right decisions 02:27:47.120 |
because the incentives get to be the wrong ones. 02:27:51.160 |
It needs to be done by people who do statistics all the time 02:27:54.240 |
and have extremely boring, long-winded discussions 02:27:59.640 |
because they are too complicated, but that are dead serious. 02:28:02.540 |
These people need to be able to be better at statistics 02:28:05.840 |
than the leading machine learning researchers. 02:28:07.920 |
And at the moment, the AI ethics debate is one where whoever has the loudest opinion 02:28:16.840 |
and is able to signal that opinion in the right way- 02:28:24.360 |
because the field is so crucially important to our future. 02:28:28.280 |
but the only qualification you currently need 02:28:31.880 |
is to be outraged by the injustice in the world. 02:28:56.340 |
and we are around in a few hundred years from now, 02:29:00.480 |
preferably with a comfortable technological civilization 02:29:04.880 |
- I generally have a very foggy view of that world, 02:29:19.040 |
And whenever I see different policies or algorithms 02:29:24.560 |
obviously that's the ones that kind of resist. 02:29:27.980 |
- So the thing that terrifies me about this notion 02:29:30.800 |
is I think that German fascism was driven by love. 02:29:45.540 |
You're talking to the wrong person in this way about love. 02:29:52.600 |
And I think that love is the discovery of shared purpose. 02:29:56.020 |
It's the recognition of the sacred and the other. 02:29:59.720 |
And this enables non-transactional interactions. 02:30:14.760 |
like deep appreciation of the world around you fully, 02:30:19.760 |
like including the people that are very different than you, 02:30:26.000 |
the people that disagree with you completely, 02:30:33.520 |
And it's like appreciation of the full mess of it. 02:30:53.480 |
And now if you scale it up, what you recognize is a shared purpose 02:30:53.480 |
with a next level agency, up to the highest level agency 02:30:59.200 |
or beyond that, where you could say intelligent complexity 02:31:10.920 |
in the universe that you try to maximize in a certain way. 02:31:22.640 |
And once you project an aesthetic into the future, 02:31:25.400 |
you can see that there are some which defect from it, 02:31:33.840 |
You and me would probably agree that Hitler was evil 02:31:37.080 |
because the aesthetic of the world that he wanted 02:31:40.000 |
is in conflict with the aesthetic of the world 02:32:02.400 |
- No, it was just that there was no consensus 02:32:04.560 |
that the aesthetics that he had in mind were unacceptable. 02:32:09.560 |
Love is complicated because you can't just be so open-minded 02:32:31.840 |
having a certainty of what is and wasn't evil, 02:32:34.840 |
like always drawing lines of good versus evil. 02:32:43.720 |
like hard stances standing up against what is wrong, 02:32:43.720 |
and at the same time, empathy and open-mindedness 02:32:51.320 |
towards not knowing what is right and wrong, 02:32:55.400 |
- I found that when I watched the Miyazaki movies 02:33:03.600 |
that there is nobody who captures my spirituality 02:33:17.120 |
not only an answer to Disney's simplistic notion of Mowgli, 02:33:24.960 |
and as soon as he sees people, realizes that he's one of them 02:33:27.760 |
and the way in which the moral life and nature 02:33:32.760 |
is simplified and romanticized and turned into kitsch. 02:33:45.320 |
and who cannot be socialized because she cannot be tamed, 02:33:52.400 |
it's something that is very, very complicated. 02:33:54.200 |
You see people extracting resources and destroying nature, 02:34:01.240 |
but to be able to have a life that is free from, 02:34:15.160 |
You see this moment when nature is turned into a garden 02:34:24.320 |
And to these questions, there is no easy answer. 02:34:26.800 |
So he just turns it into something that is being observed 02:34:31.160 |
And that happens with a certain degree of inevitability. 02:34:41.280 |
It's this little girl that is basically Heidi. 02:34:45.760 |
And I suspect that happened because when he did field work 02:34:55.680 |
he traveled to Switzerland and Southeastern Europe 02:35:04.280 |
and a certain way of life that informed his future thinking. 02:35:08.120 |
And Heidi has a very interesting relationship 02:35:15.920 |
She is in a way fearless because she is committed 02:35:20.800 |
Basically, she is completely committed to serving God. 02:35:26.320 |
It has nothing to do with the Roman Catholic Church 02:35:42.040 |
because she is not a girl boss or something like this. 02:35:48.640 |
She is the justification for the men in the audience 02:35:52.440 |
to protect her, to build a civilization around her 02:35:59.200 |
who is innocent and therefore nailed to the cross. 02:36:04.040 |
She is being protected by everybody around her 02:36:16.320 |
And this notion of innocence is not universal. 02:36:21.440 |
His idea of Germany was not that there is an innocence 02:36:26.840 |
There was a predator that was going to triumph. 02:36:32.240 |
There are many religions which don't care about innocence. 02:36:34.800 |
They might care about increasing the status of something. 02:36:39.800 |
And that's a very interesting notion that is quite unique 02:36:51.760 |
It turns Miyazaki into the most relevant Protestant philosopher today. 02:36:51.760 |
And so maybe it's a natural convergence point 02:37:34.000 |
and our individual role as ants in this very large society. 02:37:42.080 |
Solzhenitsyn wrote that the line between good and evil runs through the heart of every man. 02:37:42.080 |
Do you think all of us are capable of good and evil? 02:38:04.280 |
or whatever the highest ideal for a society you want, 02:38:15.840 |
to what we're able to do in terms of good and evil? 02:38:18.760 |
- So there is a certain way, if you are not terrible, 02:38:24.040 |
if you are committed to some kind of civilizational agency, 02:38:33.120 |
In the eyes of that transcendental principle, 02:38:39.000 |
otherwise you have just individual aesthetics. 02:38:43.200 |
The cat is not evil, because the cat does not envision a world 02:38:43.200 |
where there is no violence and nobody is suffering. 02:39:02.720 |
- No, but within, I guess the question is within the aesthetic, 02:39:12.120 |
it seems like we're still able to commit evil. 02:39:20.880 |
you are not necessarily this next level agent, right? 02:39:20.880 |
like a cell does to its organism, its hyperorganism. 02:39:31.400 |
that it's being implemented by you and others. 02:39:34.640 |
And that means that you're not completely fully serving it. 02:39:40.400 |
whether you are acting on your impulses and local incentives 02:39:58.880 |
And this is the line between good and evil, right? 02:40:05.760 |
And here I'm acting on what I consider to be sacred. 02:40:22.600 |
It's not an immortal thing that is intrinsically valuable. 02:40:27.560 |
that you project to understand what's happening. 02:40:29.640 |
Somebody is serving this transcendental sacredness 02:40:33.300 |
If you don't have this soul, you cannot be evil. 02:40:39.720 |
- So if you look at life, like starting today 02:40:42.280 |
or starting tomorrow, when we leave here today, 02:40:48.280 |
that you can take through life, maybe countless. 02:40:59.840 |
some of those trajectories are the ideal life? 02:41:04.360 |
A life that if you were to be the hero of your life story, 02:41:10.960 |
Like, is there some Joscha Bach that you're striving to be? 02:41:10.960 |
as an individual trying to make a better world 02:41:24.740 |
And how much am I responsible for the failure to do so? 02:41:35.760 |
- In my own world view, I'm not very important. 02:41:38.320 |
So I don't have place for me as a hero in my own world. 02:41:48.080 |
And so it's not important for me to have status 02:41:57.400 |
if a few people can see me, that can be my friends. 02:42:09.720 |
but more in private, in the quiet of your own mind. 02:42:16.080 |
and would consider it a failure if you don't become that? 02:42:23.400 |
I don't perceive myself as having such an identity. 02:42:32.360 |
but it's basically a lack of having this notion 02:42:45.000 |
I mean, it's the leaf floating down the river. 02:43:13.640 |
Or the other way, I forgot which way it goes. 02:43:17.260 |
Can I ask you, I don't know if you know who Michael Malice is 02:43:21.720 |
but in terms of constructing systems of incentives, 02:43:29.560 |
I don't think I've talked to you about this before. 02:43:42.960 |
to collaborations between human beings thriving. 02:43:50.560 |
What's the role of government in a society that thrives? 02:43:56.920 |
Is anarchism at all compelling to you as a system? 02:44:00.600 |
So like not just small government, but no government at all. 02:44:07.960 |
The government is an agent that imposes an offset 02:44:12.720 |
on your reward function, on your payout metrics. 02:44:15.600 |
So your behavior becomes compatible with the common good. 02:44:19.540 |
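A toy sketch of that "offset on your payout metrics" idea, using a two-player prisoner's dilemma: with the raw payoffs, defecting is the individually best move; once an authority subtracts a penalty from defection, cooperating becomes the best response. All the payoff numbers here are made up for illustration.

```python
# Payoffs for (my move, their move) from my point of view; arbitrary numbers.
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move, penalty_on_defect=0.0):
    # Adjusted score: the "government" subtracts a penalty whenever I defect.
    scores = {m: payoff[(m, their_move)] - (penalty_on_defect if m == "D" else 0)
              for m in ("C", "D")}
    return max(scores, key=scores.get), scores

for penalty in (0, 3):
    move, scores = best_response("C", penalty_on_defect=penalty)
    print(f"penalty={penalty}: best response to a cooperator is {move}  {scores}")
```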
- So the argument there is that you can have collectives 02:44:25.680 |
like governing organizations, but not government. 02:44:28.680 |
Like where you're born on a particular set of land 02:44:32.600 |
and therefore you must follow this rule or else. 02:44:44.940 |
So with government, the key aspect of government 02:44:48.200 |
is it protects you from the rest of the world with an army 02:45:00.080 |
It's the only one that's able to do violence. 02:45:15.720 |
by starting your own army because the government 02:45:17.760 |
will come down on you and destroy you if you try to do that. 02:45:20.960 |
And in countries where you can build your own army 02:45:23.320 |
and get away with it, some people will do it, right? 02:45:25.720 |
These countries are what we call failed countries, 02:45:25.720 |
the point is not to appeal to the moral intentions of people, 02:45:33.520 |
because some people, who feel a particular kind of pull toward violence, will use it if they can get ahead with it. 02:45:39.220 |
So you need to destroy that ecological niche. 02:45:45.280 |
And if effective government has a monopoly on violence, 02:45:50.080 |
it can create a world where nobody is able to use violence 02:45:54.820 |
So you want to use that monopoly on violence, 02:45:57.080 |
not to exert violence, but to make violence impossible, 02:46:02.160 |
So people need to get ahead with nonviolent means. 02:46:06.100 |
- So the idea is that you might be able to achieve that 02:46:12.200 |
So with the forces of capitalism, you create security companies, and 02:46:22.520 |
it would be a much better representative of the people 02:46:48.360 |
until they are having a monopoly on violence. 02:46:53.920 |
So it's basically converging to the same thing. 02:47:00.000 |
I feel like it always converges towards government at scale. 02:47:03.060 |
But I think the idea is you can have a lot of collectives 02:47:06.100 |
that are, you basically never let anything scale too big. 02:47:11.100 |
So one of the problems with governments is it gets too big 02:47:26.000 |
So a successful company like Amazon or Facebook, 02:47:41.080 |
But there is something about the abuses of power, 02:47:51.920 |
- So the question is how can you set the incentives 02:47:56.400 |
And this mostly applies at the highest levels of government. 02:47:59.960 |
And because we haven't found a way to set them correctly, 02:48:02.960 |
we made the highest levels of government relatively weak. 02:48:08.600 |
why we had difficulty to coordinate the pandemic response. 02:48:22.720 |
And that's basically what happens in the next generation. 02:48:31.480 |
And maybe we don't agree on this, but if we did, 02:48:35.080 |
how can we make sure that this stays like this? 02:48:54.760 |
that the regulation should be happening, right? 02:48:59.800 |
and the regulator would be properly incentivized 02:49:03.760 |
and change the payout metrics of everything below it 02:49:06.340 |
in such a way that the local prisoners' dilemmas 02:49:24.940 |
the parts of government that don't work well currently 02:49:37.300 |
It's basically, it hasn't caught up in terms of technology. 02:49:46.100 |
of being able to have a lot of access to data, 02:49:48.420 |
be able to vote on different ideas at a local level, 02:49:52.060 |
at all levels, at the optimal level, like you're saying, 02:50:06.240 |
I feel like that's where government could operate that well 02:50:10.340 |
and can also break apart the inefficient bureaucracies 02:50:23.020 |
and evolutionary competition of modes of government 02:50:25.660 |
and of individual governments is in these modes. 02:50:29.900 |
Each country is some kind of organism that has found different solutions 02:50:29.900 |
And you could look at all these different models 02:50:54.860 |
with the ugliness of the real existing solutions 02:51:00.980 |
And I suspect that communism originally was not a utopia. 02:51:04.540 |
I think that in the same way as original Christianity, 02:51:15.300 |
in which humans can coexist at scale without coercion. 02:51:20.300 |
The same way as we do in a healthy family, right? 02:51:24.580 |
you don't terrorize each other into compliance, 02:51:32.280 |
and what the intended future of the whole thing is. 02:51:35.340 |
And everybody coordinates their behavior in the right way 02:51:42.600 |
are instrumental to making that happen, right? 02:51:53.400 |
or other forms of terror to make that happen. 02:52:01.220 |
replaced a part of the economic terror with moral terror. 02:52:04.900 |
So we were told to do the right thing for moral reasons. 02:52:11.680 |
And the moral terror had actual real cost, right? 02:52:17.900 |
And the other thing is that the idea of communism 02:52:24.860 |
So it basically was projected into the afterlife. 02:52:35.540 |
that was presently wrong with society morally. 02:52:42.020 |
because it was too ideal and too far in the future 02:52:48.540 |
And the same thing happened with Christianity, right? 02:52:55.380 |
And I think this was just the idea of God's kingdom, 02:53:00.220 |
the next level transcendental agent in the perfect form. 02:53:03.000 |
So everything goes smoothly and without violence 02:53:05.680 |
and without conflict and without this human messiness 02:53:08.980 |
on this economic messiness and the terror and coercion 02:53:14.860 |
And the idea of whether humans can exist at scale 02:53:17.900 |
in a harmonious way and non-coercively is untested. 02:53:27.580 |
all the good things without any of the bad things. 02:53:30.740 |
And you are, I think, very susceptible to believing in this 02:53:30.740 |
if you don't understand that everything has to happen 02:53:35.240 |
in causal patterns, that there's always feedback loops 02:53:42.480 |
There's nothing that just happens because it's good or bad. 02:53:47.200 |
They only exist with respect to larger systems. 02:53:50.640 |
- So can you intuit why utopias fail as systems? 02:54:02.900 |
so it's not only because it's impossible to achieve utopias, 02:54:34.560 |
- That's a bit like saying, why is my garden not perfect? 02:54:37.260 |
It's because some evil weeds are overgrowing it 02:54:45.520 |
and requires minimal interactions by the gardener. 02:54:56.400 |
not just the implementation of the desired functionality, 02:54:58.880 |
but the next level design, also in biological systems. 02:55:01.920 |
You need to create a system that wants to converge 02:55:06.200 |
And so instead of just creating an institution 02:55:08.760 |
like the FDA that is performing a particular kind of role 02:55:11.680 |
in society, you need to make sure that the FDA 02:55:19.240 |
to do it optimally and then makes the performance 02:55:24.240 |
instrumental to that thing, that actual goal, right? 02:55:27.740 |
And that is much harder to design and to achieve. 02:55:32.560 |
I mean, listen, communism also was quote unquote 02:55:43.560 |
It's just, it wasn't working given human nature. 02:55:45.920 |
The incentives were not correct given human nature. 02:56:04.640 |
There's only so much status to give for that. 02:56:06.920 |
And most people will not fall for this, right? 02:56:09.360 |
Or you can pay them and you probably have to pay them 02:56:12.960 |
in an asymmetric way because if you pay everybody the same 02:56:25.860 |
So capitalism is the present solution to the system. 02:56:28.640 |
And what we also noticed that I think that Marx was correct 02:56:32.160 |
in saying that capitalism is prone to crisis, 02:56:35.160 |
that capitalism is a system that in its dynamics 02:56:42.920 |
And that eventually it produces an enormous potential 02:56:47.440 |
for productivity, but it also is systematically 02:56:52.200 |
So a lot of people cannot participate in the production 02:56:58.480 |
We observe that the middle class in the US is tiny. 02:57:01.480 |
A lot of people think that they're middle class, 02:57:08.680 |
Every class is a magnitude smaller than the previous class. 02:57:16.960 |
- I think about classes, it's really like airline classes. 02:57:27.800 |
- Business class and very few are first class 02:57:37.000 |
probably I would push back against that definition 02:57:40.000 |
It does feel like the middle class is pretty large, 02:57:41.520 |
but yes, there's a discrepancy in terms of wealth. 02:57:46.640 |
- So if you think about it in terms of the productivity of our society, 02:57:46.640 |
there is no reason for anybody to fly economy. 02:57:54.040 |
We would be able to let everybody travel in style. 02:57:57.960 |
- Well, but also some people like to be frugal 02:58:07.320 |
but you also don't need to be tortured, right? 02:58:24.400 |
So that, but that has nothing to do with the calm 02:58:30.240 |
- Yeah, I have two kids and sometimes I have to go back 02:58:35.000 |
And that means going from the West Coast to Germany 02:58:42.680 |
- Is it true that sort of when you're a father, 02:58:45.320 |
you grow immune to the crying and all that kind of stuff? 02:58:52.280 |
it can be other people's kids can be quite annoying 02:58:59.600 |
in the default natural way, you're lucky in this regard, 02:59:11.280 |
that in a given situation, they cannot do anything 02:59:21.080 |
my son is typically acting on pure experience 02:59:35.080 |
where he was just immediately expressing what he felt. 02:59:37.600 |
And if you cannot regulate this from the outside, 02:59:40.000 |
there's no point to be upset about it, right? 02:59:42.280 |
It's like dealing with weather or something like this. 02:59:58.960 |
at the top of her lungs and you almost get into an accident. 02:59:58.960 |
What should I have done to make you stop screaming? 03:00:10.120 |
- I think that's like a cat versus dog discussion. 03:00:21.320 |
What in this monkey riding an elephant in a dream world, 03:00:27.240 |
what role does love play in the human condition? 03:00:43.280 |
They go beyond the particular organism that you are 03:00:51.760 |
When a relationship is transactional, it means that you are expecting something in return for what you give. 03:00:51.760 |
You expect a fair value for the money that you sent them 03:01:04.400 |
and vice versa because you don't know that person. 03:01:07.080 |
You don't have any kind of relationship to them. 03:01:09.480 |
But when you know this person a little bit better 03:01:13.240 |
and you understand what they're trying to achieve 03:01:14.840 |
in their life and you approve because you realize 03:01:30.520 |
is a kind of benefit, is a kind of transaction. 03:01:38.920 |
It's the reinforcement signal that your brain sends to you 03:01:48.960 |
This is the way in which we out-competed other hominins. 03:01:59.200 |
There was a population bottleneck for human society 03:02:11.720 |
that basically tribes that are not that far distant 03:02:19.280 |
And it's because basically the out-of-Africa population 03:02:28.120 |
And what probably happened is not that at any time 03:02:31.360 |
the number of people shrunk below a few hundred thousand. 03:02:35.120 |
What probably happened is that there was a small group 03:02:37.920 |
that had a decisive mutation that produced an advantage. 03:02:40.840 |
And this group multiplied and killed everybody else. 03:02:46.160 |
- Yeah, I wonder what the peculiar characteristics 03:02:55.480 |
- We can only just listen to the echoes in our, 03:03:01.680 |
- So I suspect what eventually made a big difference 03:03:17.440 |
that we no longer were groups of a few hundred individuals 03:03:20.720 |
and acted on direct reputation systems transactionally, 03:03:29.840 |
To form collectives outside of the direct collectives. 03:03:37.760 |
became committed to serving something outside 03:04:01.220 |
- We didn't have the same strong love as we did. 03:04:04.020 |
Right, that's why I mentioned this thing with fascism 03:04:14.380 |
There's this big, oh my God, be a part of something 03:04:53.440 |
- It basically means that you try to figure out 03:05:24.300 |
of what's true based on the high status people 03:05:26.340 |
of your in-group, that does not protect me from fascism. 03:05:29.740 |
The only way to protect yourself from fascism 03:05:31.880 |
is to decide it's the world that is being built here, 03:05:37.100 |
In some sense, try to make your behavior sustainable, 03:05:41.740 |
act in such a way that you would feel comfortable 03:05:46.420 |
Realize that everybody is you in a different timeline, 03:05:48.900 |
but is seeing things differently and has reasons to do so. 03:06:12.140 |
And what integrity looks like is not going on Twitter 03:06:14.940 |
and tweeting about it, but not participating quietly, 03:06:24.060 |
but actually living your, what you think is right. 03:06:32.220 |
So imagine the possibility that some of the people 03:06:35.660 |
around you are space aliens that only look human. 03:06:38.120 |
So they don't have the same priors as you do. 03:06:38.120 |
There's a large diversity in these basic impulses 03:07:00.740 |
You just make it up as you go along like everybody else. 03:07:05.700 |
what it means that you are a full human being, 03:07:17.300 |
in the sense that if you do this, you're not good enough. 03:07:20.940 |
Because if you are acting on these incentives of integrity, 03:07:25.100 |
That's the way in which you can recognize each other. 03:07:28.380 |
There is a particular place where you can meet 03:07:35.420 |
because you realize that they act with integrity 03:07:40.300 |
So in some sense, you are safe if you do that. 03:07:47.100 |
and that are bad actors in a way that it's hard to imagine 03:07:52.820 |
But there is also people which will try to protect you. 03:08:15.700 |
and you will find happiness there and safety there. 03:08:26.460 |
So you can do everything right and you still can fail 03:08:29.460 |
and you can still horrible things happening to you 03:08:35.140 |
and you have to be grateful if it doesn't happen. 03:08:37.580 |
- And ultimately be grateful no matter what happens 03:08:43.020 |
'cause even just being alive is pretty damn nice. 03:08:49.660 |
The gratefulness in some sense is also just generated 03:09:19.420 |
I therefore conclude that the meaning of life 03:09:33.860 |
- I don't think that there's a single answer to this. 03:09:37.940 |
Nothing makes sense unless the mind makes it so. 03:10:05.660 |
Do you find meaning in projecting an aesthetic 03:10:18.340 |
- I kind of enjoy the idea that you just create 03:10:28.700 |
given your environment, given your set of skills. 03:10:44.460 |
they'll pause and be like, "Uh, that's weird." 03:10:50.540 |
but of course it's still motivated reasoning. 03:10:55.620 |
You're obviously acting on your incentives here. 03:10:57.740 |
- It's still a story we tell ourselves within a dream 03:11:03.820 |
- It's definitely a good strategy if you are a podcaster. 03:11:08.540 |
- And a human, which I'm still trying to figure out if I am. 03:11:13.020 |
- Yeah, there's a mutual relationship somehow. 03:11:16.100 |
- Joscha, you're one of the most incredible people I know. 03:11:39.060 |
- Thanks for listening to this conversation with Joscha Bach. 03:11:48.540 |
Check them out in the description to support this podcast. 03:11:52.060 |
Now, let me leave you with some words from Carl Jung. 03:11:55.700 |
"People will do anything, no matter how absurd, 03:12:09.300 |
Thank you for listening, and hope to see you next time.