
Joscha Bach: Nature of Reality, Dreams, and Consciousness | Lex Fridman Podcast #212


Chapters

0:00 Introduction
0:33 Life is hard
2:56 Consciousness
9:42 What is life?
19:51 Free will
33:56 Simulation
36:06 Base layer of reality
51:42 Boston Dynamics
60:01 Engineering consciousness
70:30 Suffering
79:24 Postmodernism
83:43 Psychedelics
96:57 GPT-3
105:40 GPT-4
112:05 OpenAI Codex
114:20 Humans vs AI: Who is more dangerous?
131:04 Hitler
136:01 Autonomous weapon systems
143:29 Mark Zuckerberg
149:04 Love
163:18 Michael Malice and anarchism
180:15 Love
184:23 Advice for young people
189:00 Meaning of life

Whisper Transcript

00:00:00.000 | The following is a conversation with Joscha Bach,
00:00:02.720 | his second time on the podcast.
00:00:04.960 | Joscha is one of the most fascinating minds in the world,
00:00:08.560 | exploring the nature of intelligence,
00:00:10.640 | cognition, computation, and consciousness.
00:00:14.520 | To support this podcast, please check out our sponsors,
00:00:17.720 | Coinbase, Codecademy, Linode, NetSuite, and ExpressVPN.
00:00:22.720 | Their links are in the description.
00:00:26.760 | This is the Lex Fridman Podcast,
00:00:29.000 | and here is my conversation with Joscha Bach.
00:00:32.400 | Thank you for once again coming on
00:00:35.160 | to this particular Russian program,
00:00:38.200 | and sticking to the theme of a Russian program.
00:00:40.760 | Let's start with the darkest of topics.
00:00:43.120 | - Привет. ("Hello.")
00:00:43.960 | (both laughing)
00:00:45.200 | - So this is inspired by one of your tweets.
00:00:48.400 | You wrote that, quote, "When life feels unbearable,
00:00:52.680 | "I remind myself that I'm not a person.
00:00:56.520 | "I'm a piece of software running on the brain
00:00:58.800 | "of a random ape for a few decades.
00:01:01.360 | "It's not the worst brain to run on."
00:01:03.400 | Have you experienced low points in your life?
00:01:07.680 | Have you experienced depression?
00:01:09.720 | - Of course, we all experience low points in our life,
00:01:12.080 | and we get appalled by the things,
00:01:15.280 | by the ugliness of stuff around us.
00:01:17.000 | We might get desperate about our lack of self-regulation,
00:01:21.240 | and sometimes life is hard,
00:01:24.600 | and I suspect you don't get through your life,
00:01:27.920 | nobody gets through their life, without low points
00:01:30.720 | and without moments where they're despairing.
00:01:33.720 | And I thought that, let's capture this state,
00:01:37.800 | and how to deal with that state.
00:01:40.160 | And I found that very often,
00:01:42.520 | you realize that when you stop taking things personally,
00:01:44.880 | when you realize that this notion of a person is a fiction,
00:01:49.000 | similar as it is in Westworld,
00:01:50.720 | where the robots realize that their memories and desires
00:01:53.320 | are the stuff that keeps them in the loop,
00:01:55.840 | and they don't have to act on those memories and desires,
00:01:59.120 | that our memories and expectations are what make us unhappy.
00:02:02.560 | And the present rarely does.
00:02:04.200 | The day in which we are, for the most part, it's okay, right?
00:02:08.320 | When we are sitting here, right here, right now,
00:02:11.240 | we can choose how we feel.
00:02:13.080 | And the thing that affects us is the expectation
00:02:16.720 | that something is going to be different
00:02:18.760 | from what we want it to be,
00:02:19.920 | or the memory that something was different
00:02:21.840 | from what you wanted it to be.
00:02:24.120 | And once we basically zoom out from all this,
00:02:27.320 | what's left is not a person.
00:02:29.000 | What's left is this state of being conscious,
00:02:32.320 | which is a software state.
00:02:33.600 | And software doesn't have an identity.
00:02:35.680 | It's a physical law.
00:02:36.760 | And it's a law that acts in all of us,
00:02:39.800 | and it's embedded in a suitable substrate.
00:02:42.280 | And we didn't pick that substrate, right?
00:02:43.800 | We are mostly randomly instantiated on it.
00:02:46.920 | And there are all these individuals,
00:02:48.920 | and everybody has to be one of them.
00:02:51.760 | And eventually you're stuck on one of them,
00:02:54.240 | and have to deal with that.
00:02:56.400 | - So you're like a leaf floating down the river.
00:02:59.120 | You just have to accept that there's a river,
00:03:01.400 | and you just float wherever it takes you.
00:03:03.880 | - You don't have to do this.
00:03:04.720 | The thing is that the illusion
00:03:06.360 | that you are an agent is a construct.
00:03:09.520 | What part of that is actually under your control?
00:03:13.160 | And I think that our consciousness
00:03:15.320 | is largely a control model for our own attention.
00:03:18.520 | So we notice where we are looking,
00:03:21.200 | and we can influence what we are looking at,
00:03:22.760 | how we are disambiguating things,
00:03:24.200 | how we put things together in our mind.
00:03:26.600 | And the whole system that runs us
00:03:28.800 | is this big cybernetic motivational system.
00:03:30.960 | So we're basically like a little monkey
00:03:32.960 | sitting on top of an elephant.
00:03:34.960 | And we can prod this elephant here and there
00:03:37.520 | to go this way or that way.
00:03:39.360 | And we might have the illusion that we are the elephant,
00:03:42.000 | or that we are telling it what to do.
00:03:43.480 | And sometimes we notice that it walks
00:03:45.640 | into a completely different direction.
00:03:47.480 | And we didn't set this thing up.
00:03:49.080 | It just is the situation that we find ourselves in.
00:03:52.640 | - How much prodding can we actually do of the elephant?
00:03:55.400 | - A lot.
00:03:57.400 | But I think that our consciousness
00:04:00.720 | cannot create the motive force.
00:04:03.040 | - Is the elephant consciousness in this metaphor?
00:04:05.360 | - No, the monkey is the consciousness.
00:04:07.960 | The monkey is the attentional system
00:04:09.360 | that is observing things.
00:04:10.480 | There is a large perceptual system
00:04:12.400 | combined with a motivational system
00:04:14.360 | that is actually providing the interface to everything
00:04:17.280 | and our own consciousness,
00:04:18.720 | I think is the tool that directs the attention
00:04:21.880 | of that system, which means it singles out features
00:04:24.800 | and performs conditional operations
00:04:27.040 | for which it needs an index memory.
00:04:28.960 | But this index memory is what we perceive
00:04:31.480 | as our stream of consciousness.
00:04:32.800 | But the consciousness is not in charge.
00:04:34.920 | That's an illusion.
00:04:35.920 | - So everything outside of that consciousness
00:04:40.360 | is the elephant.
00:04:41.400 | So it's the physics of the universe,
00:04:43.080 | but it's also society that's outside of your...
00:04:46.160 | - I would say the elephant is the agent.
00:04:48.320 | So there is an environment to which the agent is stomping
00:04:51.320 | and you are influencing a little part of that agent.
00:04:55.120 | - So can you, is the agent a single human being?
00:04:59.000 | What's, which object has agency?
00:05:02.360 | - That's an interesting question.
00:05:03.840 | I think a way to think about an agent
00:05:06.160 | is that it's a controller with a set point generator.
00:05:09.680 | The notion of a controller comes from cybernetics
00:05:13.040 | and control theory.
00:05:14.400 | A control system consists of a system
00:05:17.840 | that is regulating some value
00:05:20.920 | and the deviation of that value from a set point.
00:05:24.040 | And it has a sensor that measures the system's deviation
00:05:27.560 | from that set point and an effector
00:05:30.120 | that can be parametrized by the controller.
00:05:32.680 | So the controller tells the effector to do a certain thing.
00:05:35.600 | And the goal is to reduce the distance
00:05:38.560 | between the set point and the current value of the system.
00:05:40.960 | And there's environment which disturbs the regulated system,
00:05:43.640 | which brings it away from that set point.
00:05:45.640 | So simplest case is a thermostat.
00:05:47.920 | The thermostat is really simple
00:05:49.160 | because it doesn't have a model.
00:05:50.320 | The thermostat is only trying to minimize
00:05:52.480 | the set point deviation in the next moment.
00:05:55.880 | And if you want to minimize the set point deviation
00:05:58.800 | over a longer time span, you need to integrate it.
00:06:00.920 | You need to model what is going to happen.
00:06:03.760 | So for instance, when you think about
00:06:05.760 | that your set point is to be comfortable in life,
00:06:08.320 | maybe you need to make yourself uncomfortable first.
00:06:11.320 | Right, so you need to make a model
00:06:12.840 | of what's going to happen when.
00:06:14.120 | The task of the controller is to use its sensors
00:06:18.000 | to measure the state of the environment
00:06:20.520 | and the system that is being regulated
00:06:22.880 | and figure out what to do.
00:06:24.880 | And if the task is complex enough,
00:06:27.600 | the set points are complicated enough.
00:06:30.080 | And if the controller has enough capacity
00:06:32.480 | and enough sensor feedback,
00:06:34.920 | then the task of the controller is to make a model
00:06:37.320 | of the entire universe that it's in,
00:06:39.160 | the conditions under which it exists and of itself.
00:06:42.240 | And this is a very complex agent.
00:06:43.880 | And we are in that category.
00:06:45.760 | And an agent is not necessarily a thing in the universe.
00:06:49.400 | It's a class of models that we use
00:06:51.600 | to interpret aspects of the universe.
00:06:54.520 | And when we notice the environment around us,
00:06:57.760 | a lot of things only make sense
00:06:59.400 | at the level that we're entangled with them
00:07:00.960 | if we interpret them as control systems
00:07:03.280 | that make models of the world
00:07:04.600 | and try to minimize their own set points.
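As a rough, purely illustrative sketch of the controller-with-a-set-point idea described above (a thermostat-style controller that only minimizes the next-moment deviation), here is a minimal Python example; all names, gains, and numbers are hypothetical:

```python
# Minimal sketch of a controller regulating a value toward a set point.
# The environment disturbs the regulated system; the effector counteracts it.

def thermostat_step(current_temp, set_point, gain=0.5):
    """Memoryless controller: reduce the set point deviation in the next moment only."""
    error = set_point - current_temp   # deviation from the set point (sensor reading)
    return gain * error                # effector output (heating or cooling)

def simulate(steps=20, set_point=21.0, start_temp=15.0, disturbance=-0.3):
    temp = start_temp
    for t in range(steps):
        action = thermostat_step(temp, set_point)
        temp = temp + action + disturbance   # regulated system plus disturbance
        print(f"t={t:2d}  temp={temp:5.2f}  deviation={set_point - temp:+.2f}")

if __name__ == "__main__":
    simulate()
```

A model-based agent in the sense described here would go further and predict future deviations instead of only reacting to the current one.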
00:07:07.280 | - So but the models are the agents.
00:07:10.480 | - The agent is a class of model.
00:07:12.520 | And we notice that we are an agent ourself.
00:07:14.640 | We are the agent that is using our own control model
00:07:17.760 | to perform actions.
00:07:18.800 | We notice we produce a change in the model
00:07:22.080 | and things in the world change.
00:07:23.440 | And this is how we discover the idea that we have a body,
00:07:26.720 | that we are situated in an environment,
00:07:28.240 | and that we have a first-person perspective.
00:07:31.120 | - Still don't understand what's the best way to think
00:07:34.960 | of which object has agency with respect to human beings.
00:07:39.800 | Is it the body?
00:07:41.520 | Is it the brain?
00:07:43.400 | Is it the contents of the brain that has agency?
00:07:46.000 | Like what's the actuators that you're referring to?
00:07:49.000 | What is the controller and where does it reside?
00:07:52.080 | Or is it these impossible things?
00:07:54.080 | 'Cause I keep trying to ground it to space-time,
00:07:57.720 | the three-dimensional space and the one dimension of time.
00:08:01.560 | What's the agent in that for humans?
00:08:04.560 | - There is not just one.
00:08:06.000 | It depends on the way in which you're looking at the thing
00:08:08.760 | and which you're framing it.
00:08:10.040 | Imagine that you are, say, Angela Merkel,
00:08:13.520 | and you are acting on behalf of Germany.
00:08:16.640 | Then you could say that Germany is the agent.
00:08:19.800 | And in the mind of Angela Merkel,
00:08:21.560 | she is Germany to some extent,
00:08:23.520 | because in the way in which she acts,
00:08:25.640 | the destiny of Germany changes.
00:08:28.040 | There are things that she can change
00:08:29.680 | that basically affect the behavior of that nation state.
00:08:33.560 | - Okay, so it's hierarchies of...
00:08:35.080 | To go to another one of your tweets
00:08:37.440 | with, I think you were playfully mocking Jeff Hawkins
00:08:42.360 | with saying it's brains all the way down.
00:08:44.440 | So it's like, it's agents all the way down?
00:08:49.000 | It's agents made up of agents made up of agents?
00:08:51.760 | Like if Angela Merkel's Germany
00:08:54.680 | and Germany's made up of a bunch of people
00:08:56.520 | and the people are themselves agents in some kind of context
00:09:01.040 | and then the people are made up of cells, each individual.
00:09:04.880 | So is it agents all the way down?
00:09:07.200 | - I suspect that has to be like this
00:09:08.880 | in a world where things are self-organizing.
00:09:12.880 | Most of the complexity that we are looking at,
00:09:15.600 | everything in life is about self-organization.
00:09:18.480 | So I think up from the level of life, you have agents.
00:09:23.480 | And below life, you rarely have agents
00:09:27.440 | because sometimes you have control systems
00:09:30.000 | that emerge randomly in nature
00:09:31.560 | and try to achieve a set point,
00:09:33.720 | but they're not that interesting agents that make models.
00:09:36.640 | And because to make an interesting model of the world,
00:09:39.560 | you typically need a system that is Turing complete.
00:09:42.360 | - Can I ask you a personal question?
00:09:44.160 | What's the line between life and non-life?
00:09:48.760 | It's personal because you're a life form.
00:09:52.280 | So where do you think in this emerging complexity,
00:09:55.760 | at which point does the thing start being living
00:09:58.000 | and have agency?
00:09:59.100 | - Personally, I think that the simplest answer
00:10:01.880 | is that life is cells, because--
00:10:04.560 | - Life is what?
00:10:05.400 | - Cells.
00:10:06.240 | - Cells.
00:10:07.060 | - Biological cells.
00:10:07.900 | So it's a particular kind of principle
00:10:09.720 | that we have discovered to exist in nature.
00:10:12.160 | It's modular stuff that consists of basically
00:10:17.160 | this DNA tape with a read/write head on top of it
00:10:20.480 | that is able to perform arbitrary computations
00:10:23.560 | and state transitions within the cell.
00:10:25.600 | And it's combined with a membrane
00:10:27.680 | that insulates the cell from its environment.
00:10:30.840 | And there are chemical reactions inside of the cell
00:10:35.080 | that are in disequilibrium.
00:10:36.600 | And the cell is running in such a way
00:10:39.040 | that this disequilibrium doesn't disappear.
00:10:41.780 | And if the cell goes into an equilibrium state, it dies.
00:10:46.560 | And it requires something like a negentropy extractor
00:10:50.400 | to maintain this disequilibrium.
00:10:52.200 | So it's able to harvest negentropy from its environment
00:10:56.000 | and keep itself running.
00:10:58.120 | - Yeah, so there's information and there's a wall
00:11:00.860 | to maintain this disequilibrium.
00:11:04.400 | But isn't this very earth-centric?
00:11:07.000 | Like what you're referring to as-
00:11:09.120 | - I'm not making a normative notion.
00:11:11.000 | You could say that there are probably other things
00:11:13.380 | in the universe that are cell-like and life-like,
00:11:16.680 | and you could also call them life,
00:11:18.140 | but eventually it's just a willingness
00:11:21.520 | to find an agreement of how to use the terms.
00:11:24.120 | I like cells because it's completely coextensional
00:11:26.840 | with the way that we use the word
00:11:28.780 | even before we knew about cells.
00:11:30.600 | So people were pointing at some stuff
00:11:32.760 | and saying this is somehow animate,
00:11:34.480 | and this is very different from the non-animate stuff,
00:11:36.840 | and what's the difference between the living
00:11:39.120 | and the dead stuff?
00:11:40.380 | And it's mostly whether the cells are working or not.
00:11:43.080 | And also this boundary of life where we say that,
00:11:46.120 | for instance, a virus is basically an information packet
00:11:49.080 | that is subverting the cell and not life by itself.
00:11:51.700 | That makes sense to me.
00:11:54.320 | And it's somewhat arbitrary.
00:11:56.080 | You could, of course, say that systems
00:11:58.160 | that permanently maintain a disequilibrium
00:12:00.360 | and can self-replicate are always life.
00:12:03.520 | And maybe that's a useful definition too,
00:12:06.660 | but this is eventually just how you want to use the word.
00:12:10.640 | - Is it so useful for conversation,
00:12:12.820 | but is it somehow fundamental to the universe?
00:12:17.520 | Do you think there's an actual line
00:12:19.500 | to eventually be drawn between life and non-life,
00:12:22.060 | or is it all a kind of continuum?
00:12:24.480 | - I don't think it's a continuum,
00:12:25.680 | but there's nothing magical that is happening.
00:12:28.320 | Living systems are a certain type of machine.
00:12:31.360 | - What about non-living systems?
00:12:33.200 | Is it also a machine?
00:12:34.520 | - There are non-living machines,
00:12:36.200 | but the question is at which point is a system able
00:12:39.160 | to perform arbitrary state transitions
00:12:43.320 | to make representations?
00:12:45.160 | And living things can do this.
00:12:47.020 | And of course, we can also build non-living things
00:12:49.200 | that can do this, but we don't know anything in nature
00:12:52.420 | that is not a cell and is not created by cellular life
00:12:56.400 | that is able to do that.
00:12:58.600 | Not only do we not know,
00:13:03.080 | I don't think we have the tools to see otherwise.
00:13:06.160 | I always worry that we look at the world too narrowly.
00:13:11.160 | Like there could be life of a very different kind
00:13:15.100 | right under our noses that we're just not seeing
00:13:19.040 | because of either limitations
00:13:21.880 | of our cognitive capacity,
00:13:23.440 | or we're just not open-minded enough,
00:13:27.160 | either with the tools of science
00:13:28.540 | or just the tools of our mind.
00:13:31.260 | - Yeah, that's possible.
00:13:33.200 | I find this thought very fascinating.
00:13:35.240 | And I suspect that many of us ask ourselves since childhood,
00:13:39.200 | what are the things that we are missing?
00:13:40.760 | What kind of systems and interconnections exist
00:13:43.840 | that are outside of our gaze?
00:13:46.540 | But we are looking for it,
00:13:51.320 | and physics doesn't have much room at the moment
00:13:55.320 | for opening up something that would not violate
00:13:59.760 | the conservation of information as we know it.
00:14:02.060 | - Yeah, but I wonder about time scale and scale,
00:14:07.040 | spatial scale, whether we just need to open up our idea
00:14:12.040 | of how life presents itself.
00:14:15.480 | It could be operating at a much slower time scale,
00:14:18.080 | a much faster time scale.
00:14:20.240 | And it's almost sad to think that there's all this life
00:14:24.120 | around us that we're not seeing,
00:14:25.520 | because we're just not thinking in terms of the right scale,
00:14:30.520 | both time and space.
00:14:34.560 | - What is your definition of life?
00:14:36.240 | What do you understand as life?
00:14:37.800 | - Entities of sufficiently high complexity
00:14:44.640 | that are full of surprises.
00:14:46.320 | (both laughing)
00:14:51.880 | I don't know, I don't have a free will,
00:14:54.080 | so that just came out of my mouth.
00:14:55.720 | I'm not sure that even makes sense.
00:14:57.400 | There's certain characteristics.
00:14:59.280 | So complexity seems to be a necessary property of life.
00:15:04.240 | And I almost want to say it has ability
00:15:09.240 | to do something unexpected.
00:15:11.940 | - It seems to me that life is the main source
00:15:15.560 | of complexity on earth.
00:15:17.000 | - Yes.
00:15:19.600 | - And complexity is basically a bridgehead
00:15:22.120 | that order builds into chaos by modeling,
00:15:27.120 | by processing information in such a way
00:15:29.040 | that you can perform reactions
00:15:31.240 | that would not be possible for dumb systems.
00:15:33.800 | And this means that you can harvest negentropy
00:15:36.080 | that dumb systems cannot harvest.
00:15:37.840 | And this is what complexity is mostly about.
00:15:40.180 | In some sense, the purpose of life is to create complexity.
00:15:45.100 | - Yeah, increasing.
00:15:46.840 | I mean, there seems to be some kind of universal drive
00:15:51.840 | towards increasing pockets of complexity.
00:15:54.840 | I don't know what that is.
00:15:57.640 | That seems to be like a fundamental,
00:16:00.040 | I don't know if it's a property of the universe
00:16:02.360 | or it's just a consequence of the way the universe works.
00:16:05.880 | But there seems to be this small pockets
00:16:08.720 | of emergent complexity that builds on top of each other
00:16:11.440 | and starts having like greater and greater complexity
00:16:15.440 | by having like a hierarchy of complexity.
00:16:18.000 | Little organisms building up a little society
00:16:20.760 | that then operates almost as an individual organism itself.
00:16:24.080 | And all of a sudden you have Germany and Merkel.
00:16:27.680 | - But that's not obvious to me.
00:16:28.880 | Everything that goes up has to come down at some point.
00:16:32.320 | Right, so if you see this big exponential curve somewhere,
00:16:36.520 | it's usually the beginning of an S-curve.
00:16:39.440 | Or something eventually reaches saturation
00:16:41.480 | and the S-curve is the beginning of some kind of bump
00:16:43.840 | that goes down again.
00:16:45.560 | And there is just this thing that when you are
00:16:49.200 | inside of an evolution of life,
00:16:53.260 | you are on top of a puddle of negentropy
00:16:55.880 | that is being sucked dry by life.
00:16:58.920 | And during that happening, you see an increase in complexity
00:17:02.960 | because life forms are competing with each other
00:17:04.840 | to get at more and more, and finer and finer, corners
00:17:09.000 | of that negentropy extraction.
00:17:11.160 | - But I feel like that's a gradual, beautiful process
00:17:14.000 | that almost follows a process akin to evolution.
00:17:18.000 | And the way it comes down is not the same way it came up.
00:17:23.000 | The way it comes down is usually harshly and quickly.
00:17:26.560 | So usually there's some kind of catastrophic event.
00:17:30.600 | - Well, the Roman Empire took a long time.
00:17:32.700 | - But would you classify that
00:17:37.840 | as a decrease in complexity though?
00:17:39.520 | - Yes, I think that the size of the cities
00:17:42.280 | that could be fed has decreased dramatically.
00:17:44.840 | And you could see that the quality of the art decreased
00:17:47.920 | and it did so gradually.
00:17:50.040 | And maybe future generations,
00:17:53.360 | when they look at the history of the United States
00:17:55.760 | in the 21st century,
00:17:57.440 | will also talk about the gradual decline,
00:17:59.200 | not something that suddenly happens.
00:18:01.000 | - Do you have a sense of where we are?
00:18:07.760 | Are we on the exponential rise?
00:18:09.800 | Are we at the peak?
00:18:11.280 | Or are we at the downslope of the United States empire?
00:18:15.800 | - It's very hard to say from a single human perspective,
00:18:18.520 | but it seems to me that we are probably at the peak.
00:18:23.520 | - I think that's probably the definition
00:18:26.960 | of like optimism and cynicism.
00:18:29.640 | So my nature of optimism is I think we're on the rise.
00:18:33.000 | (both laughing)
00:18:35.920 | But I think this is just all a matter of perspective.
00:18:39.320 | Nobody knows, but I do think that erring
00:18:41.520 | on the side of optimism,
00:18:43.280 | like you need a sufficient number,
00:18:45.440 | you need a minimum number of optimists
00:18:47.440 | in order to make that up thing actually work.
00:18:50.960 | And so I tend to be on the side of the optimists.
00:18:53.600 | - I think that we are basically a species of grasshoppers
00:18:56.520 | that have turned into locusts.
00:18:58.600 | And when you are in that locust mode,
00:19:00.720 | you see an amazing rise of population numbers
00:19:04.080 | and of the complexity of the interactions
00:19:07.000 | between the individuals.
00:19:08.760 | But it's ultimately the question is, is it sustainable?
00:19:12.840 | - See, I think we're a bunch of lions and tigers
00:19:16.120 | that have become domesticated cats,
00:19:18.800 | to use a different metaphor.
00:19:21.400 | And so I'm not exactly sure we're so destructive,
00:19:24.280 | we're just softer and nicer and lazier.
00:19:27.760 | - I think we are monkeys and not cats.
00:19:29.840 | And if you look at the monkeys, they are very busy.
00:19:33.560 | - The ones that have a lot of sex, those monkeys?
00:19:35.800 | - Not just the bonobos.
00:19:37.160 | I think that all the monkeys are basically
00:19:38.920 | a discontent species that always needs to meddle.
00:19:41.360 | - Well, the gorillas seem to have
00:19:44.160 | a little bit more of a structure,
00:19:45.840 | but it's a different part of the tree.
00:19:48.240 | Okay, you mentioned the elephant
00:19:52.880 | and the monkey riding the elephant.
00:19:55.680 | And consciousness is the monkey.
00:19:58.720 | And there's some prodding that the monkey gets to do.
00:20:03.160 | And sometimes the elephant listens.
00:20:05.160 | I heard you got into some contentious,
00:20:08.920 | maybe you can correct me,
00:20:09.800 | but I heard you got into some contentious
00:20:11.520 | free will discussions.
00:20:12.920 | Is this with Sam Harris or something like that?
00:20:16.120 | - Not that I know of.
00:20:17.320 | - Some people on Clubhouse told me
00:20:20.480 | you made a bunch of big debate points about free will.
00:20:25.480 | Well, let me just then ask you,
00:20:27.760 | where in terms of the monkey and the elephant,
00:20:31.680 | do you think we land in terms of the illusion of free will?
00:20:35.280 | How much control does the monkey have?
00:20:37.240 | - We have to think about what the free will is
00:20:41.440 | in the first place.
00:20:43.240 | We are not the machine.
00:20:44.400 | We are not the thing that is making the decisions.
00:20:46.800 | We are a model of that decision-making process.
00:20:49.840 | And there is a difference between
00:20:52.880 | making your own decisions
00:20:54.160 | and predicting your own decisions.
00:20:56.120 | And that difference is the first person perspective.
00:20:59.840 | And what basically makes decision-making
00:21:04.800 | under conditions of free will distinct
00:21:06.600 | from just automatically doing the best thing
00:21:09.600 | is that we often don't know what the best thing is.
00:21:13.280 | We make decisions under uncertainty.
00:21:15.560 | We make informed bets using a betting algorithm
00:21:17.920 | that we don't yet understand
00:21:19.160 | because we haven't reverse engineered
00:21:20.920 | our own minds sufficiently.
00:21:22.360 | We don't know the expected rewards.
00:21:23.920 | We don't know the mechanism by which we estimate
00:21:25.920 | the rewards and so on.
00:21:27.200 | - But there is an algorithm.
00:21:28.040 | - We observe ourselves performing
00:21:30.600 | where we see that we weigh facts and factors
00:21:34.840 | and the future, and then some kind of possibility,
00:21:39.320 | some motive gets raised to an intention.
00:21:41.640 | And that's informed bet that the system is making.
00:21:44.520 | And that making of the informed bet,
00:21:46.440 | the representation of that is what we call free will.
00:21:49.480 | And it seems to be paradoxical
00:21:51.600 | because we think that's the crucial thing
00:21:53.600 | is that it's somehow indeterministic.
00:21:56.520 | And yet if it was indeterministic, it would be random.
00:21:59.240 | And it cannot be random because if it was random,
00:22:03.360 | if dice were just being thrown in the universe
00:22:05.280 | that randomly forced you to do things, it would be meaningless.
00:22:08.200 | So the important part of the decisions
00:22:10.400 | is always the deterministic stuff.
00:22:12.680 | But it appears to be indeterministic to you
00:22:15.200 | because it's unpredictable.
00:22:16.840 | Because if it was predictable,
00:22:18.520 | you wouldn't experience it as a free will decision.
00:22:21.440 | You would experience it as just doing
00:22:23.240 | the necessary right thing.
00:22:25.560 | And you see this continuum between the free will
00:22:28.720 | and the execution of automatic behavior
00:22:31.720 | when you're observing other people.
00:22:33.200 | So for instance, when you are observing your own children,
00:22:36.200 | if you don't understand them,
00:22:37.560 | you will use this agent model
00:22:40.040 | where you have an agent with a set point generator.
00:22:43.240 | And the agent is doing the best it can
00:22:45.360 | to minimize the difference to the set point.
00:22:47.360 | And it might be confused and sometimes impulsive or whatever,
00:22:51.160 | but it's acting on its own free will.
00:22:53.320 | And when you understand what happens
00:22:55.400 | in the mind of the child, you see that it's automatic.
00:22:58.560 | And you can outmodel the child,
00:23:00.320 | you can build things around the child
00:23:02.320 | that will lead the child to making exactly the decision
00:23:05.240 | that you are predicting.
00:23:06.760 | And under these circumstances,
00:23:10.520 | like when you are a stage magician
00:23:10.520 | or somebody who is dealing with people
00:23:13.440 | that you sell a car to,
00:23:15.280 | and you completely understand the psychology
00:23:17.360 | and the impulses and the space of thoughts
00:23:19.680 | that this individual can have at that moment.
00:23:21.600 | Under these circumstances,
00:23:22.680 | it makes no sense to attribute free will
00:23:24.680 | because it's no longer decision-making under uncertainty.
00:23:28.280 | You are already certain.
00:23:29.240 | For them, there's uncertainty,
00:23:30.520 | but you already know what they're doing.
00:23:32.520 | - But what about for you?
00:23:35.040 | So is this akin to systems like cellular automata
00:23:40.640 | where it's deterministic,
00:23:44.360 | but when you squint your eyes a little bit,
00:23:47.880 | it starts to look like there's agents making decisions
00:23:51.720 | at the higher, sort of when you zoom out
00:23:54.760 | and look at the entities that are composed
00:23:57.680 | by the individual cells.
00:23:59.440 | Even though there's underlying simple rules
00:24:03.080 | that make the system evolve in deterministic ways,
00:24:08.080 | it looks like there's organisms making decisions.
00:24:11.560 | Is that where the illusion of free will emerges,
00:24:15.240 | that jump in scale?
00:24:17.480 | - It's a particular type of model,
00:24:19.200 | but this jump in scale is crucial.
00:24:21.400 | The jump in scale happens whenever
00:24:23.080 | you have too many parts to count
00:24:24.520 | and you cannot make a model at that level,
00:24:26.480 | and you try to find some higher level regularity.
00:24:29.480 | And the higher level regularity is a pattern
00:24:31.600 | that you project into the world to make sense of it.
00:24:35.240 | And agency is one of these patterns, right?
00:24:37.040 | You have all these cells that interact with each other,
00:24:40.240 | and the cells in our body are set up in such a way
00:24:42.720 | that they benefit if their behavior is coherent,
00:24:45.600 | which means that they act as if
00:24:47.600 | they were serving a common goal.
00:24:49.720 | And that means that they will evolve regulation mechanisms
00:24:52.840 | that act as if they were serving a common goal.
00:24:55.840 | And now you can make sense of all these cells
00:24:58.160 | by projecting the common goal into them.
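As a toy illustration of the cellular-automaton point above (deterministic local rules producing patterns that read like agents at a larger scale), here is a minimal Conway's Game of Life sketch in Python; the grid size and starting pattern are arbitrary choices:

```python
# Deterministic local rules (Conway's Game of Life); the "glider" pattern that
# emerges looks like a little agent crawling across the grid, even though no
# individual cell decides anything.

def step(cells, width, height):
    """One deterministic update of the whole grid; cells is a set of live (x, y)."""
    new = set()
    for x in range(width):
        for y in range(height):
            n = sum((nx, ny) in cells
                    for nx in (x - 1, x, x + 1)
                    for ny in (y - 1, y, y + 1)
                    if (nx, ny) != (x, y))
            if n == 3 or (n == 2 and (x, y) in cells):
                new.add((x, y))
    return new

def show(cells, width, height):
    for y in range(height):
        print("".join("#" if (x, y) in cells else "." for x in range(width)))
    print()

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
for generation in range(4):
    show(cells, 8, 8)
    cells = step(cells, 8, 8)
```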
00:25:00.480 | - Right, so for you then, free will is an illusion.
00:25:03.840 | - No, it's a model, and it's a construct.
00:25:06.960 | It's basically a model that the system
00:25:08.480 | is making of its own behavior,
00:25:09.920 | and it's the best model that it can come up with
00:25:11.960 | under the circumstances, and it can get replaced
00:25:14.480 | by a different model, which is automatic behavior,
00:25:16.920 | when you fully understand the mechanism
00:25:18.480 | under which you are acting.
00:25:19.680 | - Yeah, but another word for model is what?
00:25:22.960 | Story.
00:25:24.360 | So it's the story you're telling.
00:25:25.760 | I mean, do you actually have control?
00:25:27.840 | Is there such a thing as a you,
00:25:30.280 | and is there such a thing as you having control?
00:25:34.000 | So like, are you manifesting your evolution as an entity?
00:25:39.000 | - In some sense, the you is the model of the system
00:25:44.360 | that is in control.
00:25:45.680 | It's a story that the system tells itself
00:25:47.840 | about somebody who is in control.
00:25:50.320 | - Yeah.
00:25:51.160 | - And the contents of that model are being used
00:25:53.080 | to inform the behavior of the system.
00:25:54.960 | - Okay.
00:25:57.760 | - So the system is completely mechanical,
00:26:00.480 | and the system creates that story like a loom,
00:26:03.280 | and then it uses the contents of that story
00:26:06.000 | to inform its actions and writes the results
00:26:09.080 | of that action into the story.
00:26:11.200 | - So how is that not an illusion?
00:26:13.360 | The story is written then,
00:26:16.200 | or rather, we're not the writers of the story.
00:26:19.740 | - Yes, but we always knew that.
00:26:22.840 | - No, we don't know that.
00:26:25.280 | When did we know that?
00:26:26.720 | - I think that's mostly a confusion about concepts.
00:26:29.280 | The conceptual illusion in our culture comes from the idea
00:26:33.680 | that we live in physical reality,
00:26:35.700 | and that we experience physical reality,
00:26:37.440 | and that we have ideas about it.
00:26:39.200 | - Yep.
00:26:40.040 | - And then you have this dualist interpretation
00:26:41.640 | where you have two substances, res extensa,
00:26:45.040 | the world that you can touch
00:26:46.920 | and that is made of extended things,
00:26:48.960 | and res cogitans, which is the world of ideas.
00:26:51.640 | And in fact, both of them are mental representations.
00:26:54.560 | One is the representations of the world as a game engine
00:26:57.900 | that your mind generates to make sense
00:26:59.600 | of the perceptual data.
00:27:01.080 | And the other one's-- - That's the physical world?
00:27:02.240 | - Yes, that's what we perceive as the physical world.
00:27:04.440 | But we already know that the physical world
00:27:05.920 | is nothing like that, right?
00:27:07.000 | Quantum mechanics is very different
00:27:08.840 | from what you and me perceive as the world.
00:27:11.320 | The world that you and me perceive is a game engine.
00:27:14.080 | - Yeah.
00:27:14.920 | - And there are no colors and sounds in the physical world.
00:27:17.160 | They only exist in the game engine generated by your brain.
00:27:20.080 | And then you have ideas that cannot be mapped
00:27:22.840 | onto extended regions, right?
00:27:24.720 | So the objects that have a spatial extension
00:27:26.920 | in the game engine are res extensa,
00:27:29.520 | and the objects that don't have a physical extension
00:27:31.480 | in the game engine are ideas.
00:27:34.560 | And they both interact in our mind
00:27:36.160 | to produce models of the world.
00:27:38.200 | - Yep, but when you play video games,
00:27:41.760 | I understand that what's actually happening
00:27:45.040 | is zeros and ones inside of a computer,
00:27:50.040 | inside of a CPU and a GPU,
00:27:52.840 | but you're still seeing the rendering of that.
00:27:57.840 | And you're still making decisions
00:28:00.720 | whether to shoot to turn left or to turn right
00:28:03.800 | if you're playing a shooter.
00:28:05.480 | Every time I start thinking about Skyrim
00:28:07.120 | and Elder Scrolls and walking around in beautiful nature
00:28:09.840 | and swinging a sword.
00:28:10.920 | But it feels like you're making decisions
00:28:13.080 | inside that video game.
00:28:15.040 | So even though you don't have direct access
00:28:17.200 | in terms of perception to the bits, to the zeros and ones,
00:28:22.680 | it still feels like you're making decisions
00:28:24.900 | and your decisions actually feels like they're being applied
00:28:29.160 | all the way down to the zeros and ones.
00:28:32.160 | - Yes.
00:28:33.000 | - So it feels like you have control
00:28:33.820 | even though you don't have direct access to reality.
00:28:36.560 | So there is basically a special character in the video game
00:28:39.560 | that is being created by the video game engine.
00:28:41.800 | - Yeah.
00:28:42.640 | - And this character is serving the aesthetics
00:28:43.880 | of the video game.
00:28:45.680 | And that is you.
00:28:47.080 | - Yes, but I feel like I have control inside the video game.
00:28:50.920 | Like all those 12 year olds
00:28:53.080 | that kick my ass on the internet.
00:28:55.440 | - So when you play the video game,
00:28:57.760 | it doesn't really matter that there's zeros and ones.
00:28:59.960 | You don't care about the width of the bus.
00:29:01.720 | You don't care about the nature of the CPU that it runs on.
00:29:04.520 | What you care about are the properties of the game
00:29:06.720 | that you're playing.
00:29:07.800 | And you hope that the CPU is good enough.
00:29:10.080 | - Yes.
00:29:10.920 | - And a similar thing happens when we interact with physics.
00:29:13.360 | The world that you and me are in is not the physical world.
00:29:16.040 | The world that you and me are in is a dream world.
00:29:18.480 | - How close is it to the real world though?
00:29:21.740 | - We know that it's not very close,
00:29:25.080 | but we know that the dynamics of the dream world
00:29:27.520 | match the dynamics of the physical world
00:29:29.360 | to a certain degree of resolution.
00:29:31.080 | - Right.
00:29:31.920 | - But the causal structure of the dream world is different.
00:29:34.920 | - So you see, for instance,
00:29:36.360 | waves crashing on your feet, right?
00:29:38.200 | But there are no waves in the ocean.
00:29:39.460 | There are only water molecules that have bonds
00:29:42.440 | between the molecules that are the result of electrons
00:29:47.360 | in the molecules interacting with each other.
00:29:50.080 | - Aren't they like very consistent?
00:29:52.120 | We're just seeing a very crude approximation.
00:29:55.680 | Isn't our dream world very consistent?
00:29:59.320 | Like to the point of being mapped directly one-to-one
00:30:02.960 | to the actual physical world,
00:30:04.240 | as opposed to us being completely tricked.
00:30:07.680 | This is like where you have like Donald Trump.
00:30:09.240 | - It's not a trick.
00:30:10.080 | That's my point.
00:30:10.900 | It's not an illusion.
00:30:11.840 | It's a form of data compression.
00:30:13.480 | - Yeah, yeah.
00:30:14.320 | - It's an attempt to deal with the dynamics
00:30:15.420 | of too many parts to count
00:30:16.960 | at the level at which we're entangled
00:30:18.680 | with the best model that you can find.
00:30:20.720 | - Yeah, so we can act in that dream world
00:30:22.680 | and our actions have impact in the real world,
00:30:26.120 | in the physical world.
00:30:27.080 | - Yes.
00:30:27.920 | - To which we don't have access.
00:30:28.740 | - Yes, but it's basically like accepting the fact
00:30:31.860 | that the software that we live in,
00:30:33.160 | the dream that we live in,
00:30:34.560 | is generated by something outside of this world
00:30:37.080 | that you and me are in.
00:30:38.040 | - So is the software deterministic
00:30:40.040 | and do we not have any control?
00:30:42.240 | Do we have...
00:30:43.080 | So free will is having a conscious being.
00:30:49.000 | Free will is the monkey being able to steer the elephant.
00:30:52.820 | - No, it's slightly different.
00:30:58.080 | Basically in the same way as you are modeling
00:31:00.520 | the water molecules in the ocean
00:31:02.340 | that engulf your feet when you are walking on the beach
00:31:05.060 | as waves and there are no waves,
00:31:07.380 | but only the atoms or more complicated stuff
00:31:09.780 | underneath the atoms and so on.
00:31:11.820 | And you know that, right?
00:31:13.060 | You would accept, yes,
00:31:15.300 | there is a certain abstraction that happens here.
00:31:17.660 | It's a simplification of what happens.
00:31:19.420 | And the simplification is designed
00:31:22.100 | in such a way that your brain can deal with it,
00:31:24.260 | temporally and spatially, in terms of resources,
00:31:27.000 | and tuned for predictive value.
00:31:28.740 | So you can predict with some accuracy
00:31:31.200 | whether your feet are going to get wet or not.
00:31:33.380 | - But it's a really good interface and approximation.
00:31:37.620 | - Yes.
00:31:38.460 | - It's like E equals mc squared is a good...
00:31:40.340 | Equations are a good approximation
00:31:42.100 | for things that have much better approximations underneath.
00:31:44.760 | So to me, waves is a really nice approximation
00:31:49.380 | of what's all the complexity that's happening underneath.
00:31:51.940 | - Basically it's a machine learning model
00:31:53.160 | that is constantly tuned to minimize surprises.
00:31:55.580 | So it basically tries to predict as well as it can
00:31:58.560 | what you're going to perceive next.
00:31:59.780 | - Are we talking about...
00:32:01.260 | Which is the machine learning,
00:32:02.620 | our perception system or the dream world?
00:32:05.700 | - The dream world is the result
00:32:06.740 | of the machine learning process
00:32:10.180 | of the perception system.
00:32:11.220 | - That's doing the compression.
00:32:12.220 | - Yes.
00:32:13.100 | And the model of you as an agent
00:32:15.900 | is not a different type of model or it's a different type,
00:32:19.460 | but not different as in its model-like nature
00:32:23.180 | from the model of the ocean, right?
00:32:25.580 | Some things are oceans, some things are agents.
00:32:28.300 | And one of these agents is using your own control model,
00:32:31.640 | the output of your model,
00:32:32.780 | the things that you perceive yourself as doing.
00:32:36.320 | And that is you.
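A minimal sketch of the "model constantly tuned to minimize surprises" idea described above, as a toy online predictor in Python; the learning rate and the fake sensory stream are arbitrary illustrations, not anything from the conversation:

```python
import random

def online_predictor(observations, learning_rate=0.2):
    """Nudge the running estimate toward each observation by a fraction of the error."""
    prediction = 0.0
    history = []
    for obs in observations:
        surprise = obs - prediction              # prediction error ("surprise")
        prediction += learning_rate * surprise   # tune the model to shrink future surprise
        history.append((obs, prediction, abs(surprise)))
    return history

# toy sensory stream: a slowly drifting quantity plus noise
stream = [10 + 0.1 * t + random.gauss(0, 0.5) for t in range(50)]
for obs, pred, surprise in online_predictor(stream)[-3:]:
    print(f"observed {obs:6.2f}  predicted {pred:6.2f}  surprise {surprise:5.2f}")
```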
00:32:38.220 | - What about the fact that like when you're standing
00:32:42.040 | with the water on your feet
00:32:45.940 | and you're looking out into the vast,
00:32:48.620 | like open water of the ocean,
00:32:52.020 | and then there's a beautiful sunset.
00:32:54.520 | And it...
00:32:55.480 | The fact that it's beautiful
00:32:56.580 | and then maybe you have like friends or a loved one with you
00:32:59.220 | and like you feel love.
00:33:00.380 | What is that?
00:33:01.220 | As the dream world or what is that?
00:33:02.740 | - Yes, it's all happening inside of the dream.
00:33:05.620 | - Okay.
00:33:06.860 | But see, the word dream makes it seem like it's not real.
00:33:11.380 | - No, of course it's not real.
00:33:12.880 | The physical universe is real,
00:33:16.540 | but the physical universe is incomprehensible
00:33:18.660 | and it doesn't have any feeling of realness.
00:33:21.100 | The feeling of realness that you experience
00:33:22.940 | gets attached to certain representations
00:33:25.460 | where your brain assesses,
00:33:26.660 | this is the best model of reality that I have.
00:33:28.540 | - So the only thing that's real to you
00:33:30.840 | is the thing that's happening at the very base of reality,
00:33:34.720 | like the...
00:33:36.380 | - Yeah, for something to be real,
00:33:37.660 | it needs to be implemented.
00:33:39.020 | So the model that you have of reality
00:33:42.340 | is real in as far as it is a model, right?
00:33:45.300 | It's an appropriate description of the world
00:33:47.860 | to say that there are models that are being experienced.
00:33:51.500 | But the world that you experience
00:33:54.740 | is not necessarily implemented.
00:33:56.900 | There is a difference between a reality,
00:33:59.380 | a simulation, and a simulacrum.
00:34:01.320 | The reality that we're talking about
00:34:04.500 | is something that fully emerges
00:34:06.100 | over a causally closed lowest layer.
00:34:08.660 | And the idea of physicalism is that we are in that layer,
00:34:11.320 | that basically our world emerges over that.
00:34:13.480 | Every alternative to physicalism is a simulation theory,
00:34:16.100 | which basically says that we are
00:34:18.020 | in some kind of simulation universe
00:34:19.500 | and the real world needs to be a parent universe of that,
00:34:22.140 | where the actual causal structure is, right?
00:34:24.420 | And when you look at the ocean in your own mind,
00:34:27.680 | you are looking at a simulation
00:34:28.940 | that explains what you're going to see next.
00:34:31.500 | - So we are living in a simulation.
00:34:32.900 | - Yes, but a simulation generated by our own brains.
00:34:35.940 | - Yeah.
00:34:36.780 | - And this simulation is different from the physical reality
00:34:39.700 | because the causal structure that is being produced,
00:34:42.100 | what you are seeing,
00:34:42.940 | is different from the causal structure of physics.
00:34:44.980 | - But consistent.
00:34:45.940 | - Hopefully.
00:34:47.620 | If not, then you are going to end up
00:34:49.780 | in some kind of institution
00:34:51.060 | where people will take care of you
00:34:52.260 | because your behavior will be inconsistent, right?
00:34:54.620 | Your behavior needs to work in such a way
00:34:57.260 | that it's interacting
00:34:58.500 | with an accurately predictive model of reality.
00:35:01.020 | And if your brain is unable
00:35:02.680 | to make your model of reality predictive,
00:35:05.380 | you will need help.
00:35:06.220 | - So what do you think about Donald Hoffman's argument
00:35:10.300 | that it doesn't have to be consistent,
00:35:12.780 | the dream world to what he calls like the interface
00:35:17.820 | to the actual physical reality,
00:35:19.540 | where there could be evolution.
00:35:20.700 | I think he makes an evolutionary argument,
00:35:23.100 | which is like it could be an evolutionary advantage
00:35:26.500 | to have the dream world drift away from physical reality.
00:35:30.980 | - I think that only works if you have tenure.
00:35:32.820 | As long as you're still interacting with the ground truth,
00:35:35.300 | your model needs to be somewhat predictive.
00:35:37.660 | - Well, in some sense,
00:35:40.660 | humans have achieved a kind of tenure in the animal kingdom.
00:35:45.140 | - Yeah, and at some point we became too big to fail,
00:35:47.660 | so we became post-modernist.
00:35:49.340 | (laughing)
00:35:51.460 | - It all makes sense now.
00:35:52.300 | - It's a version of reality that we like.
00:35:55.020 | - Oh man, okay.
00:35:57.420 | - Yeah, but basically you can do magic.
00:36:00.220 | You can change your assessment of reality,
00:36:02.500 | but eventually reality is going to come bite you in the ass
00:36:05.620 | if it's not predictive.
00:36:06.860 | - Do you have a sense
00:36:09.140 | of what is that base layer of physical reality?
00:36:12.620 | You have these attempts at the theories of everything,
00:36:17.620 | the very, very small of like string theory,
00:36:21.140 | or what Stephen Wolfram talks about with hypergraphs.
00:36:25.420 | These are these tiny, tiny, tiny, tiny objects.
00:36:28.540 | And then there is more like quantum mechanics
00:36:31.660 | that's talking about objects that are much larger,
00:36:34.900 | but still very, very, very tiny.
00:36:36.780 | Do you have a sense of where the tiniest thing is
00:36:40.060 | that is like at the lowest level,
00:36:42.900 | the turtle at the very bottom?
00:36:44.780 | Do you have a sense of what that turtle is?
00:36:45.620 | - I don't think that you can talk about where it is
00:36:48.580 | because space is emergent over the activity of these things.
00:36:51.620 | So space, the coordinates only exist
00:36:55.540 | in relation to the other things.
00:36:58.820 | And so you could, in some sense, abstract it into locations
00:37:01.740 | that can hold information and trajectories
00:37:04.300 | that the information can take
00:37:05.540 | between the different locations.
00:37:06.900 | And this is how we construct our notion of space.
00:37:10.380 | And physicists usually have a notion of space
00:37:14.100 | that is continuous.
00:37:15.700 | And this is a point where I tend to agree
00:37:19.140 | with people like Stephen Wolfram,
00:37:20.980 | who are very skeptical of the geometric notions.
00:37:23.820 | I think that geometry is the dynamics
00:37:25.980 | of too many parts to count.
00:37:27.300 | And there are no infinities.
00:37:30.820 | If there were true infinities,
00:37:32.500 | you would be running into contradictions,
00:37:34.220 | which is in some sense what Gödel and Turing discovered
00:37:37.780 | in response to Hilbert's call.
00:37:39.780 | - So there are no infinities.
00:37:41.340 | - There are no infinities.
00:37:42.180 | - Infinity is fake.
00:37:43.020 | - There is unboundedness,
00:37:44.020 | but if you have a language that talks about infinity,
00:37:46.820 | at some point the language is going to contradict itself,
00:37:49.580 | which means it's no longer valid.
00:37:51.660 | In order to deal with infinities in mathematics,
00:37:54.020 | you have to postulate their existence initially.
00:37:57.580 | You cannot construct the infinities.
00:37:59.180 | And that's an issue, right?
00:38:00.180 | You cannot build up an infinity from zero.
00:38:02.700 | But in practice, you never do this, right?
00:38:04.700 | When you perform calculations,
00:38:06.020 | you only look at the dynamics of too many parts to count.
00:38:09.060 | And usually these numbers are not that large.
00:38:13.420 | They're not googols or something.
00:38:15.140 | The infinities that we are dealing with in our universe
00:38:18.540 | are mathematically speaking, relatively small integers.
00:38:22.300 | And still, what we're looking at is dynamics
00:38:26.540 | where a trillion things behave similar
00:38:30.660 | to a hundred trillion things,
00:38:32.620 | or something that is very, very large,
00:38:37.620 | because they're converging.
00:38:39.260 | And these convergent dynamics, these operators,
00:38:41.380 | this is what we deal with when we are doing the geometry.
00:38:45.060 | Geometry is stuff where we can pretend that it's continuous,
00:38:48.420 | because if we subdivide the space sufficiently fine-grained,
00:38:53.100 | these things approach a certain dynamic.
00:38:56.140 | And this approach dynamic, that is what we mean by it.
00:38:59.260 | But I don't think that infinity would work.
00:39:01.740 | So to speak that you would know the last digit of pi,
00:39:05.100 | and that you have a physical process
00:39:06.580 | that rests on knowing the last digit of pi.
00:39:09.460 | - Yeah, that could be just a peculiar quirk
00:39:12.020 | of human cognition that we like discrete.
00:39:15.100 | Discrete makes sense to us.
00:39:16.660 | Infinity doesn't.
00:39:18.260 | So in terms of our intuitions.
00:39:19.900 | - No, the issue is that everything that we think about
00:39:22.940 | needs to be expressed in some kind of mental language,
00:39:25.660 | not necessarily a natural language,
00:39:27.740 | but some kind of mathematical language
00:39:29.860 | that your neurons can speak,
00:39:31.700 | that refers to something in the world.
00:39:34.140 | And what we have discovered is that
00:39:36.940 | we cannot construct a notion of infinity
00:39:39.020 | without running into contradictions,
00:39:40.540 | which means that such a language is no longer valid.
00:39:43.620 | And I suspect this is what made Pythagoras so unhappy
00:39:46.780 | when somebody came up with the notion of irrational numbers
00:39:49.380 | before it was time, right?
00:39:50.420 | There's this myth that he had this person killed
00:39:52.700 | when he blurted out the secret,
00:39:54.140 | that not everything can be expressed
00:39:55.740 | as a ratio between two numbers,
00:39:57.300 | but there are numbers between the ratios.
00:39:59.740 | The world was not ready for this.
00:40:01.060 | And I think he was right.
00:40:02.380 | That has confused mathematicians very seriously
00:40:06.060 | because these numbers are not values, they are functions.
00:40:09.660 | And so you can calculate these functions
00:40:11.580 | to a certain degree of approximation,
00:40:13.260 | but you cannot pretend that pi has actually a value.
00:40:17.060 | Pi is a function that would approach this value
00:40:20.020 | to some degree, but nothing in the world
00:40:22.700 | rests on knowing pi.
00:40:26.300 | - How important is this distinction
00:40:28.620 | between discrete and continuous for you to get to the bottom?
00:40:32.180 | 'Cause there's a, I mean,
00:40:33.980 | in discussion of your favorite flavor
00:40:37.500 | of the theory of everything, there's a few on the table.
00:40:41.140 | So there's string theory,
00:40:44.400 | there's loop quantum gravity,
00:40:48.180 | which focuses on one particular unification.
00:40:51.660 | There's just a bunch of favorite flavors
00:40:56.140 | of different people trying to propose
00:40:59.460 | a theory of everything.
00:41:01.260 | Eric Weinstein and a bunch of people throughout history.
00:41:04.780 | And then of course, Stephen Wolfram,
00:41:06.660 | who I think is one of the only people doing a discrete.
00:41:10.860 | - No, no, there's a bunch of physicists
00:41:12.620 | who do this right now.
00:41:13.700 | And like Toffoli and Tomasello.
00:41:17.700 | And digital physics is something that is, I think,
00:41:22.540 | growing in popularity.
00:41:24.460 | But the main reason why this is interesting is
00:41:29.460 | because it's important sometimes to settle disagreements.
00:41:35.580 | I don't think that you need infinities at all
00:41:37.900 | and you never needed them.
00:41:39.820 | You can always deal with very large numbers
00:41:41.860 | and you can deal with limits, right?
00:41:43.180 | We are fine with doing that.
00:41:44.620 | You don't need any kind of infinity.
00:41:46.140 | You can build your computer algebra systems just as well
00:41:49.220 | without believing in infinity in the first place.
00:41:51.140 | - So you're okay with limits?
00:41:52.820 | - Yeah, so basically a limit means that something
00:41:55.300 | is behaving pretty much the same
00:41:58.060 | if you make the number larger.
00:41:59.820 | Because it's converging to a certain value
00:42:02.460 | and at some point the difference becomes negligible
00:42:04.820 | and you can no longer measure it.
00:42:06.660 | And in this sense, you have things that,
00:42:09.740 | if you have an n-gon which has enough corners,
00:42:12.860 | then it's going to behave like a circle at some point.
00:42:15.220 | And it's only going to be in some kind of esoteric thing
00:42:18.420 | that cannot exist in the physical universe
00:42:21.100 | that you would be talking about this perfect circle.
00:42:23.860 | And now it turns out that it also wouldn't work
00:42:25.940 | in mathematics because you cannot construct mathematics
00:42:28.420 | that has infinite resolution
00:42:30.060 | without running into contradictions.
00:42:31.860 | So that is itself not that important
00:42:35.060 | because we never did that, right?
00:42:36.260 | It's just a thing that some people thought we could.
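A small numeric illustration of the two points above (an n-gon with enough corners behaves like a circle, and pi is only ever computed to a finite approximation), using an Archimedes-style side-doubling in Python; the starting hexagon and the number of doublings are arbitrary choices:

```python
import math

def doubled_side(s):
    """Side length of an inscribed 2n-gon, given the side length s of the n-gon
    (both inscribed in the unit circle)."""
    return math.sqrt(2 - math.sqrt(4 - s * s))

n, side = 6, 1.0   # regular hexagon inscribed in the unit circle
for _ in range(12):
    half_perimeter = n * side / 2   # approaches the half-circumference, i.e. pi
    print(f"{n:6d}-gon: half-perimeter = {half_perimeter:.10f}")
    n, side = 2 * n, doubled_side(side)
print(f"math.pi          = {math.pi:.10f}")
```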
00:42:39.060 | And this leads to confusion.
00:42:40.820 | So for instance, Roger Penrose uses this as an argument
00:42:43.620 | to say that there are certain things
00:42:46.180 | that mathematicians can do dealing with infinities.
00:42:50.620 | And by extension, our mind can do
00:42:53.260 | that computers cannot do.
00:42:55.220 | - Yeah, he talks about that there's the human mind
00:42:58.460 | can do certain mathematical things
00:43:00.820 | that the computer as defined
00:43:02.940 | by the universal Turing machine cannot.
00:43:06.220 | - Yes.
00:43:07.220 | - So that it has to do with infinity.
00:43:08.940 | - Yes, it's one of the things.
00:43:10.300 | So he is basically pointing at the fact
00:43:13.140 | that there are things that are possible
00:43:15.620 | in the mathematical mind and in pure mathematics
00:43:20.620 | that are not possible in machines
00:43:24.100 | that can be constructed in the physical universe.
00:43:27.140 | And because he's an honest guy,
00:43:29.180 | he thinks this means that present physics
00:43:31.700 | cannot explain operations that happen in our mind.
00:43:34.900 | - Do you think he's right on the...
00:43:36.900 | So let's leave his discussion of consciousness aside
00:43:39.860 | for the moment.
00:43:40.820 | Do you think he's right about just
00:43:42.820 | what he's basically referring to as intelligence?
00:43:46.100 | So is the human mind fundamentally more capable
00:43:50.820 | as a thinking machine than a universal Turing machine?
00:43:53.940 | - No.
00:43:54.780 | - So he's suggesting that, right?
00:43:58.740 | - So our mind is actually less than a Turing machine.
00:44:01.020 | There can be no Turing machine
00:44:02.100 | because it's defined as having an infinite tape.
00:44:05.140 | And we always only have a finite tape.
00:44:07.260 | - But he's saying it's better.
00:44:08.100 | - Our minds can only perform finitely many operations.
00:44:10.140 | Yes, he thinks so.
00:44:11.180 | - It can do the kind of computation
00:44:13.140 | that a Turing machine cannot.
00:44:14.660 | - And that's because he thinks that our minds
00:44:16.660 | can do operations that have infinite resolution
00:44:19.500 | in some sense.
00:44:21.060 | And I don't think that's the case.
00:44:23.260 | Our minds are just able to discover these limit operators
00:44:26.340 | over too many parts to count.
00:44:27.780 | - I see.
00:44:28.620 | What about his idea that consciousness
00:44:32.780 | is more than a computation?
00:44:37.460 | So it's more than something that a Turing machine
00:44:40.540 | can do.
00:44:42.100 | So again, saying that there's something special
00:44:44.540 | about our mind that cannot be replicated in the machine.
00:44:47.780 | - The issue is that I don't even know
00:44:51.380 | how to construct a language
00:44:52.740 | to express this statement correctly.
00:44:56.460 | - Well, the basic statement is,
00:45:01.460 | there's a human experience that includes intelligence,
00:45:08.240 | that includes self-awareness,
00:45:09.420 | that includes the hard problem of consciousness.
00:45:12.980 | And the question is, can that be fully simulated
00:45:16.860 | in the computer, in the mathematical model of the computer
00:45:20.940 | as we understand it today?
00:45:22.300 | Roger Penrose says no.
00:45:25.040 | So the universal Turing machine
00:45:30.220 | cannot simulate the universe.
00:45:32.460 | - So the interesting question is,
00:45:34.420 | and you have to ask him this, is why not?
00:45:36.500 | What is the specific thing that cannot be modeled?
00:45:39.900 | And when I looked at his writings,
00:45:42.320 | and I haven't read all of it,
00:45:43.520 | but when I read, for instance,
00:45:45.920 | the section that he writes in the introduction
00:45:49.040 | to "A Road to Infinity,"
00:45:51.040 | the thing that he specifically refers to
00:45:53.240 | is the way in which human minds deal with infinities.
00:45:56.640 | And that itself can, I think, easily be deconstructed.
00:46:02.020 | A lot of people feel that our experience
00:46:05.560 | cannot be explained in a mechanical way,
00:46:07.680 | and therefore it needs to be different.
00:46:11.080 | And I concur, our experience is not mechanical.
00:46:14.520 | Our experience is simulated.
00:46:16.720 | It exists only in a simulation.
00:46:18.440 | Only a simulation can be conscious.
00:46:19.980 | Physical systems cannot be conscious
00:46:21.600 | because they're only mechanical.
00:46:23.040 | Cells cannot be conscious.
00:46:25.080 | Neurons cannot be conscious.
00:46:26.320 | Brains cannot be conscious.
00:46:27.440 | People cannot be conscious,
00:46:28.640 | insofar as you understand them as physical systems.
00:46:31.620 | What can be conscious is the story of the system
00:46:36.220 | in the world where you write all these things
00:46:37.980 | into the story.
00:46:39.420 | You have experiences for the same reason
00:46:41.420 | that a character in a novel has experiences,
00:46:43.240 | because it's written into the story.
00:46:45.780 | And now the system is acting on that story.
00:46:48.220 | And it's not a story that is written in a natural language.
00:46:50.680 | It's written in a perceptual language,
00:46:52.500 | in this multimedia language of the game engine.
00:46:55.380 | And in there, you write in what kind of experience you have
00:46:59.340 | and what this means for the behavior of the system,
00:47:01.460 | for your behavior tendencies, for your focus,
00:47:03.700 | for your attention, for your experience of valence,
00:47:05.460 | and so on.
00:47:06.420 | And this is being used to inform the behavior of the system
00:47:09.600 | in the next step.
00:47:10.740 | And then the story updates with the reactions of the system
00:47:15.740 | and the changes in the world and so on.
00:47:17.780 | And you live inside of that model.
00:47:19.340 | You don't live inside of the physical reality.
00:47:21.640 | - And, I mean, just to linger on it,
00:47:26.880 | like you see, okay, it's in the perceptual language,
00:47:30.840 | the multimodal perceptual language.
00:47:33.300 | That's the experience.
00:47:34.900 | That's what consciousness is within that model,
00:47:38.880 | within that story.
00:47:40.860 | But do you have agency?
00:47:42.660 | When you play a video game, you can turn left
00:47:46.020 | and you can turn right in that story.
00:47:48.540 | So in that dream world, how much control do you,
00:47:54.220 | is there such a thing as you in that story?
00:47:58.560 | Is it right to say the main character,
00:48:01.200 | everybody's NPCs, and then there's the main character,
00:48:04.400 | and you're controlling the main character?
00:48:07.040 | Or is that an illusion?
00:48:08.720 | Is there a main character that you're controlling?
00:48:10.920 | I'm getting to the point of the free will point.
00:48:14.560 | - Imagine that you are building a robot that plays soccer.
00:48:17.800 | And you've been to MIT computer science,
00:48:19.880 | you basically know how to do that.
00:48:22.080 | And so you would say the robot is an agent
00:48:25.320 | that solves a control problem.
00:48:27.800 | How to get the ball into the goal.
00:48:29.280 | And it needs to perceive the world,
00:48:30.760 | and the world is disturbing it in trying to do this.
00:48:33.280 | So it has to control many variables to make that happen
00:48:35.640 | and to project itself and the ball into the future
00:48:38.840 | and understand its position on the field
00:48:40.680 | relative to the ball and so on,
00:48:42.120 | and the position of its limbs
00:48:44.600 | in the space around it and so on.
00:48:46.920 | So it needs to have an adequate model
00:48:48.440 | that abstracts reality in a useful way.
00:48:51.360 | And you could say that this robot does have agency
00:48:55.880 | over what it's doing in some sense.
00:48:58.380 | And the model is going to be a control model.
00:49:01.460 | And inside of that control model,
00:49:03.020 | you can possibly get to a point
00:49:05.740 | where this thing is sufficiently abstract
00:49:07.780 | to discover its own agency.
00:49:09.500 | Our current robots don't do that.
00:49:10.820 | They don't have a unified model of the universe.
00:49:13.100 | But there's not a reason why we shouldn't be getting there
00:49:16.100 | at some point in the not too distant future.
00:49:18.620 | And once that happens,
00:49:20.020 | you will notice that the robot tells a story
00:49:23.180 | about a robot playing soccer.
00:49:25.940 | So the robot will experience itself playing soccer
00:49:29.400 | in a simulation of the world that it uses
00:49:32.000 | to construct a model of the locations of its legs
00:49:35.320 | and limbs in space on the field
00:49:38.160 | with relationship to the ball.
00:49:39.360 | And it's not going to be at the level of the molecules.
00:49:42.200 | It will be an abstraction that is exactly at the level
00:49:45.240 | that is most suitable for path planning
00:49:47.380 | of the movements of the robot.
00:49:48.880 | Right, it's going to be a high-level abstraction,
00:49:51.380 | but a very useful one that is as predictive
00:49:53.680 | as you can make it.
00:49:55.120 | And inside of that story,
00:49:56.560 | there is a model of the agency of that system.
00:49:58.740 | So this model can accurately predict
00:50:03.020 | that the contents of the model are going to be driving
00:50:06.040 | the behavior of the robot in the immediate future.
00:50:08.860 | - But there's the hard problem of consciousness,
00:50:12.240 | which I would also,
00:50:14.320 | there's a subjective experience of free will as well,
00:50:18.020 | that I'm not sure where the robot gets that,
00:50:20.720 | where that little leap is.
00:50:22.600 | Because for me right now,
00:50:24.220 | everything I imagine with that robot,
00:50:26.220 | as it gets more and more and more sophisticated,
00:50:29.000 | the agency comes from the programmer of the robot still,
00:50:33.480 | of what was programmed in.
00:50:35.780 | - You could probably do an end-to-end learning system.
00:50:38.440 | You maybe need to give it a few priors,
00:50:40.280 | so you nudge the architecture in the right direction
00:50:42.460 | that it converges more quickly.
00:50:44.320 | But ultimately discovering the suitable hyper parameters
00:50:47.960 | of the architecture is also only a search process, right?
00:50:50.320 | And as the search process was evolution,
00:50:52.720 | that has informed our brain architecture,
00:50:55.320 | so we can converge in a single lifetime
00:50:57.360 | on useful interaction with the world.
00:50:59.560 | - See, the problem is,
00:51:01.100 | if we define hyper parameters broadly,
00:51:03.520 | so it's not just the parameters that control
00:51:06.800 | this end-to-end learning system,
00:51:08.700 | but the entirety of the design of the robot.
00:51:10.960 | You have to remove the human completely from the picture.
00:51:15.780 | And then in order to build the robot,
00:51:17.320 | you have to create an entire universe.
00:51:20.340 | 'Cause you have to go, you can't just shortcut evolution,
00:51:22.640 | you have to go from the very beginning.
00:51:24.640 | In order for it to have,
00:51:25.860 | 'cause I feel like there's always a human
00:51:28.040 | pulling the strings,
00:51:29.580 | and that makes it seem like the robot is cheating,
00:51:33.920 | it's getting a shortcut to consciousness.
00:51:35.960 | - And when you are looking at the current Boston Dynamics robots,
00:51:38.280 | it doesn't look as if there is somebody pulling the strings,
00:51:40.880 | it doesn't look like cheating anymore.
00:51:42.400 | - Okay, so let's go there,
00:51:43.400 | 'cause I got to talk to you about this.
00:51:44.840 | So obviously with the case of Boston Dynamics,
00:51:47.720 | as you may or may not know,
00:51:49.760 | it's always either hard-coded or remote-controlled.
00:51:54.080 | There's no intelligence.
00:51:55.240 | - I don't know how the current generation
00:51:57.460 | of Boston Dynamics robots works,
00:51:59.040 | but what I've been told about the previous ones
00:52:02.020 | was that it's basically all cybernetic control,
00:52:05.260 | which means you still have feedback mechanisms and so on,
00:52:08.640 | but it's not deep learning for the most part
00:52:11.560 | as it's currently done.
00:52:13.200 | It's for the most part just identifying a control hierarchy
00:52:16.920 | that is congruent to the limbs that exist
00:52:19.800 | and the parameters that need to be optimized
00:52:21.440 | for the movement of these limbs,
00:52:22.580 | and then there is a convergence process.
00:52:24.500 | So it's basically just regression
00:52:26.200 | that you would need to control this.
00:52:27.880 | But again, I don't know whether that's true,
00:52:29.400 | that's just what I've been told about how they work.
00:52:31.400 | - We have to separate several levels of discussions here.
00:52:34.980 | So the only thing they do is pretty sophisticated control
00:52:39.280 | with no machine learning
00:52:40.920 | in order to maintain balance or to right itself.
00:52:45.920 | It's a control problem in terms of using the actuators
00:52:49.360 | to when it's pushed or when it steps on a thing
00:52:52.420 | that's uneven, how to always maintain balance.
00:52:55.400 | And there's a tricky set of heuristics around that,
00:52:57.960 | but that's the only goal.
00:53:00.480 | Everything you see Boston Dynamics doing
00:53:02.640 | in terms of that to us humans is compelling,
00:53:06.120 | which is any kind of higher order movement,
00:53:09.420 | like turning, wiggling its butt,
00:53:12.200 | like jumping back on its two feet, dancing.
00:53:18.160 | Dancing is even worse because dancing is hard coded in.
00:53:22.440 | It's choreographed by humans, it's choreography software.
00:53:27.360 | So there is no, of all that high level movement,
00:53:30.880 | there's no anything that you can call,
00:53:34.200 | certainly can't call AI,
00:53:35.940 | there's no even like basic heuristics,
00:53:39.500 | it's all hard coded in.
00:53:41.060 | And yet we humans immediately project agency onto them,
00:53:46.060 | which is fascinating.
00:53:48.900 | - So the gap here is it doesn't necessarily have agency.
00:53:53.140 | What it has is cybernetic control.
00:53:55.300 | And the cybernetic control means you have a hierarchy
00:53:57.420 | of feedback loops that keep the behavior
00:53:59.740 | in certain boundaries so the robot doesn't fall over
00:54:02.300 | and it's able to perform the movements.
00:54:04.140 | And the choreography cannot really happen
00:54:06.680 | with motion capture because the robot would fall over
00:54:09.200 | because the physics of the robot,
00:54:10.640 | the weight distribution and so on is different
00:54:12.800 | from the weight distribution in the human body.
00:54:15.360 | So if you were using the directly motion captured movements
00:54:19.560 | of a human body to project it into this robot,
00:54:21.720 | it wouldn't work.
00:54:22.560 | You can do this with a computer animation,
00:54:24.120 | it will look a little bit off, but who cares.
00:54:26.120 | But if you want to correct for the physics,
00:54:29.080 | you need to basically tell the robot
00:54:31.520 | where it should move its limbs,
00:54:33.760 | and then the control algorithm is going
00:54:35.860 | to approximate a solution that makes it possible
00:54:38.980 | within the physics of the robot.
00:54:41.020 | And you have to find the basic solution
00:54:43.880 | for making that happen,
00:54:44.780 | and there's probably going to be some regression necessary
00:54:47.580 | to get the control architecture to make these movements.
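As a rough sketch of that two-layer idea (illustrative only, not Boston Dynamics' actual stack; the gain and limit values are invented), a choreography layer can hand the controller a desired joint angle each tick, and the low-level feedback loop approximates it as well as the actuator limits allow:

```python
# Minimal sketch: a choreography layer supplies desired joint angles; the
# low-level loop can only approximate them within the actuator's physical limits.
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def track(targets, max_step=0.1, gain=0.5):
    angle, trajectory = 0.0, []
    for target in targets:
        command = gain * (target - angle)              # proportional feedback
        command = clamp(command, -max_step, max_step)  # actuator / physics limit
        angle += command
        trajectory.append(round(angle, 3))
    return trajectory

# A "choreographed" move that jumps faster than the joint can follow:
desired = [0.0, 0.8, 0.8, 0.8, 0.0, 0.0, 0.0, 0.0]
print(track(desired))  # the joint converges toward the targets as fast as its limits allow
```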
00:54:51.220 | - But those two layers are separate.
00:54:52.660 | - Yes.
00:54:53.500 | - So the thing, the higher level instruction
00:54:56.180 | of how you should move and where you should move
00:54:59.060 | is a higher level.
00:54:59.900 | - Yes, so I expect that the control level of these robots
00:55:02.640 | at some level is dumb.
00:55:03.660 | This is just the physical control movement,
00:55:06.180 | the motor architecture.
00:55:07.860 | But it's a relatively smart motor architecture.
00:55:10.340 | It's just that there is no high level deliberation
00:55:12.500 | about what decisions to make necessarily.
00:55:14.420 | - But see, it doesn't feel like free will or consciousness.
00:55:17.860 | - No, no, that was not where I was trying to get to.
00:55:20.580 | I think that in our own body, we have that too.
00:55:24.540 | So we have a certain thing that is basically
00:55:26.920 | just a cybernetic control architecture
00:55:29.540 | that is moving our limbs.
00:55:31.300 | And deep learning can help in discovering
00:55:34.300 | such an architecture if you don't have it
00:55:35.940 | in the first place.
00:55:37.220 | If you already know your hardware,
00:55:38.660 | you can maybe handcraft it.
00:55:40.700 | But if you don't know your hardware,
00:55:41.900 | you can search for such an architecture.
00:55:43.740 | And this work already existed in the '80s and '90s.
00:55:46.980 | People were starting to search for control architectures
00:55:49.820 | by motor babbling and so on,
00:55:51.140 | and just use reinforcement learning architectures
00:55:53.900 | to discover such a thing.
00:55:55.580 | And now imagine that you have
00:55:57.740 | the cybernetic control architecture already inside of you.
00:56:01.540 | And you extend this a little bit.
00:56:03.700 | So you are seeking out food, for instance,
00:56:06.460 | or rest, and so on.
00:56:08.300 | And you get to have a baby at some point.
00:56:11.800 | And now you add more and more control layers to this.
00:56:15.700 | And the system is reverse engineering
00:56:17.760 | its own control architecture,
00:56:19.600 | and builds a high level model to synchronize
00:56:22.460 | the pursuit of very different conflicting goals.
00:56:26.380 | And this is how I think you get to purposes.
00:56:28.180 | Purposes are models of your goals.
00:56:30.100 | Your goals may be intrinsic as the result
00:56:32.220 | of the different set point violations that you have,
00:56:34.700 | hunger and thirst for very different things,
00:56:37.140 | and rest and pain avoidance and so on.
00:56:39.420 | And you put all these things together,
00:56:41.140 | and eventually you need to come up with a strategy
00:56:44.220 | to synchronize them all.
00:56:46.020 | And you don't need just to do this alone by yourself,
00:56:49.340 | because we are state building organisms.
00:56:51.380 | We cannot function in isolation,
00:56:53.740 | the way that Homo sapiens is set up.
00:56:55.860 | So our own behavior only makes sense
00:56:58.140 | when you zoom out very far into a society,
00:57:01.000 | or even into ecosystemic intelligence on the planet,
00:57:04.900 | and our place in it.
00:57:06.500 | So the individual behavior only makes sense
00:57:08.500 | in these larger contexts.
00:57:10.020 | And we have a number of priors built into us.
00:57:11.860 | So we are behaving as if we were acting
00:57:14.700 | on these high level goals,
00:57:15.860 | pretty much right from the start.
00:57:17.940 | And eventually in the course of our life,
00:57:19.860 | we can reverse engineer the goals that we're acting on,
00:57:22.700 | what actually are our higher level purposes.
00:57:25.820 | And the more we understand that,
00:57:27.100 | the more our behavior makes sense.
00:57:28.660 | But this is all at this point,
00:57:30.380 | complex stories within stories
00:57:32.420 | that are driving our behavior.
00:57:34.580 | - Yeah, I just don't know how big of a leap it is
00:57:38.500 | to start creating a system that's able to tell stories
00:57:41.980 | within stories.
00:57:42.960 | Like how big of a leap that is
00:57:45.580 | from where currently Boston Dynamics is,
00:57:48.260 | or any robot that's operating in the physical space.
00:57:53.820 | And that leap might be big
00:57:56.220 | if it requires to solve the hard problem of consciousness,
00:57:59.380 | which is telling a hell of a good story.
00:58:01.620 | - I suspect that consciousness itself is relatively simple.
00:58:05.260 | What's hard is perception,
00:58:07.300 | and the interface between perception and reasoning.
00:58:09.900 | That's for instance, the idea of the consciousness prior
00:58:14.700 | that would be built into such a system by Yoshua Bengio.
00:58:18.740 | And what he describes, and I think that's accurate,
00:58:22.260 | is that our own model of the world
00:58:27.260 | can be described through something like an energy function.
00:58:29.820 | The energy function is modeling the contradictions
00:58:32.700 | that exist within the model at any given point.
00:58:34.840 | And you try to minimize these contradictions,
00:58:36.620 | the tensions in the model.
00:58:38.340 | And to do this, you need to sometimes test things.
00:58:41.380 | You need to conditionally disambiguate figure and ground.
00:58:43.740 | You need to distinguish whether this is true
00:58:46.540 | or that is true, and so on.
00:58:47.980 | Eventually you get to an interpretation,
00:58:49.540 | but you will need to manually depress a few points
00:58:52.340 | in your model to let it snap into a state that makes sense.
00:58:55.620 | And this function that tries to get the biggest dip
00:58:57.780 | in the energy function in your model,
00:58:59.660 | according to Yoshua Bengio, is related to consciousness.
00:59:02.380 | It's a low dimensional discrete function
00:59:04.640 | that tries to maximize this dip in the energy function.
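A toy illustration of that idea, not Yoshua Bengio's actual formulation (the variables and constraints below are invented): represent an interpretation as a few discrete variables, let an energy function count the contradictions between them, and take the single discrete step that produces the biggest dip in that energy.

```python
# Illustrative toy only: a few binary interpretation variables, an energy
# function that counts contradictions between them, and a greedy step that
# picks the single flip producing the biggest dip in that energy.
VARS = ["nose_present", "face_present", "face_points_left"]

def energy(state):
    """Count constraint violations ("contradictions") in the interpretation."""
    e = 0.0
    if state["nose_present"] and not state["face_present"]:
        e += 1.0  # a nose without a nearby face is contradictory
    if state["face_points_left"] and not state["face_present"]:
        e += 1.0  # an orientation is meaningless without a face
    return e

def best_single_flip(state):
    """Flip the one variable that lowers the energy the most, if any does."""
    candidates = []
    for var in VARS:
        flipped = dict(state, **{var: not state[var]})
        candidates.append((energy(flipped), flipped))
    best_e, best_state = min(candidates, key=lambda c: c[0])
    return best_state if best_e < energy(state) else state

state = {"nose_present": True, "face_present": False, "face_points_left": True}
while True:
    new_state = best_single_flip(state)
    if new_state == state:
        break
    state = new_state
print(state, energy(state))  # the interpretation snaps into a contradiction-free state
```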
00:59:08.260 | - Yeah, I think I would need to dig into details
00:59:13.340 | because I think the way he uses the word consciousness
00:59:15.580 | is more akin to self-awareness,
00:59:17.760 | like modeling yourself within the world,
00:59:20.860 | as opposed to the subjective experience, the hard problem.
00:59:23.700 | - No, it's not even that the self is in the world.
00:59:26.580 | The self is the agent, and you don't need to be aware
00:59:28.820 | of yourself in order to be conscious.
00:59:31.100 | The self is just a particular content that you can have,
00:59:34.400 | but you don't have to have.
00:59:35.980 | But you can be conscious in, for instance, a dream at night
00:59:39.700 | or during a meditation state, but you don't have a self.
00:59:42.940 | - Right.
00:59:43.780 | - You're just aware of the fact that you are aware.
00:59:45.620 | And what we mean by consciousness in the colloquial sense
00:59:49.880 | is largely this reflexive self-awareness,
00:59:53.820 | that we become aware of the fact
00:59:55.220 | that we are paying attention,
00:59:57.300 | that we are the thing that pays attention.
00:59:59.180 | - We are the thing that pays attention, right.
01:00:02.000 | I don't see where the awareness that we're aware,
01:00:07.000 | the hard problem doesn't feel like it's solved.
01:00:10.620 | I mean, it's called a hard problem for a reason
01:00:14.900 | because it seems like there needs to be a major leap.
01:00:19.340 | - Yeah, I think the major leap is to understand
01:00:21.620 | how it is possible that a machine can dream,
01:00:25.260 | that a physical system is able to create a representation
01:00:29.500 | that the physical system is acting on,
01:00:31.220 | and that is spun forth and so on.
01:00:33.960 | But once you accept the fact that you are not in physics,
01:00:36.660 | but that you exist inside of the story,
01:00:39.180 | I think the mystery disappears.
01:00:40.580 | Everything is possible in a story.
01:00:42.100 | - You exist inside the story.
01:00:43.300 | Okay, so the machine--
01:00:44.140 | - Consciousness is being written into the story.
01:00:45.820 | The fact that you experience things
01:00:47.340 | is written to the story.
01:00:48.860 | You ask yourself, is this real what I'm seeing?
01:00:51.300 | And your brain writes into the story, yes, it's real.
01:00:53.860 | - So what about the perception of consciousness?
01:00:56.300 | So to me, you look conscious.
01:00:59.520 | So the illusion of consciousness,
01:01:02.500 | the demonstration of consciousness,
01:01:04.340 | I ask for the legged robot,
01:01:07.720 | how do we make this legged robot conscious?
01:01:10.620 | So there's two things,
01:01:12.860 | and maybe you can tell me if they're neighboring ideas.
01:01:16.380 | One is actually make it conscious,
01:01:18.900 | and the other is make it appear conscious to others.
01:01:21.720 | Are those related?
01:01:23.740 | - Let's ask it from the other direction.
01:01:27.420 | What would it take to make you not conscious?
01:01:30.100 | So when you are thinking about how you perceive the world,
01:01:35.220 | can you decide to switch from looking at qualia
01:01:39.900 | to looking at representational states?
01:01:42.860 | - And it turns out you can.
01:01:44.900 | There is a particular way in which you can look at the world
01:01:48.380 | and recognize its machine nature, including your own.
01:01:51.460 | And in that state, you don't have that conscious experience
01:01:54.280 | in this way anymore.
01:01:55.780 | It becomes apparent as a representation.
01:01:59.700 | Everything becomes opaque.
01:02:01.620 | And I think this thing that you recognize
01:02:04.060 | everything as a representation,
01:02:05.420 | this is typically what we mean by enlightenment states.
01:02:08.260 | And it can happen at the motivational level,
01:02:11.740 | but you can also do this on the experiential level,
01:02:14.820 | on the perceptual level.
01:02:16.260 | - See, but then I can come back to a conscious state.
01:02:18.900 | Okay, I particularly,
01:02:22.200 | I'm referring to the social aspect,
01:02:26.980 | that the demonstration of consciousness
01:02:30.140 | is a really nice thing at a party
01:02:32.200 | when you're trying to meet a new person.
01:02:34.200 | It's a nice thing to know that they're conscious,
01:02:38.340 | and they can, I don't know how fundamental consciousness
01:02:42.740 | is in human interaction,
01:02:43.940 | but it seems like to be at least an important part.
01:02:48.060 | And I ask that in the same kind of way for robots.
01:02:53.060 | You know, in order to create a rich,
01:02:55.380 | compelling human robot interaction,
01:02:58.420 | it feels like there needs to be elements of consciousness
01:03:00.760 | within that interaction.
01:03:02.700 | - And my cat is obviously conscious.
01:03:04.940 | And so my cat can do this party trick.
01:03:07.420 | She also knows that I am conscious,
01:03:09.260 | and we're able to have feedback about the fact
01:03:11.420 | that we are both acting on models of our own awareness.
01:03:14.900 | - The question is how hard is it for the robot,
01:03:19.700 | artificially created robot,
01:03:21.060 | to achieve cat-level and party tricks?
01:03:24.400 | - Yes, so the issue for me is currently not so much
01:03:27.340 | on how to build a system that creates a story
01:03:30.340 | about a robot that lives in the world,
01:03:32.900 | but to make an adequate representation of the world.
01:03:36.580 | And the model that you and me have is a unified one.
01:03:40.260 | It's one where you basically make sense of everything
01:03:44.100 | that you can perceive.
01:03:45.020 | Every feature in the world that enters your perception
01:03:47.980 | can be relationally mapped
01:03:49.500 | to a unified model of everything.
01:03:51.820 | And we don't have an AI that is able to construct
01:03:54.100 | such a unified model yet.
01:03:55.460 | - So you need that unified model to do the party trick?
01:03:58.860 | - Yes, I think that it doesn't make sense
01:04:01.820 | if this thing is conscious,
01:04:03.100 | but not in the same universe as you,
01:04:04.700 | because you could not relate to each other.
01:04:06.820 | - So what's the process, would you say,
01:04:09.020 | of engineering consciousness in a machine?
01:04:12.100 | Like, what are the ideas here?
01:04:14.620 | - So you probably want to have
01:04:16.780 | some kind of perceptual system.
01:04:19.100 | This perceptual system is a processing agent
01:04:21.340 | that is able to track sensory data
01:04:23.900 | and predict the next frame and the sensory data
01:04:26.780 | from the previous frames of the sensory data
01:04:29.820 | and the current state of the system.
01:04:31.780 | So the current state of the system is,
01:04:33.860 | in perception, instrumental
01:04:35.380 | to predicting what happens next.
01:04:37.620 | And this means you build lots and lots of functions
01:04:39.820 | that take all the blips that you feel on your skin
01:04:42.180 | and that you see on your retina, or that you hear,
01:04:45.580 | and puts them into a set of relationships
01:04:48.140 | that allows you to predict what kind of sensory data,
01:04:51.220 | what kind of sensor of blips,
01:04:52.900 | vector of blips you're going to perceive
01:04:54.900 | in the next frame, right?
01:04:56.100 | This is tuned, and it's constantly tuned
01:04:59.220 | until it gets as accurate as it can.
01:05:01.940 | - You build a very accurate prediction mechanism
01:05:05.100 | that is step one of the perception.
01:05:08.060 | So first you predict, then you perceive
01:05:09.900 | and see the error in your prediction.
01:05:11.740 | - And you have to do two things to make that happen.
01:05:13.820 | One is you have to build a network of relationships
01:05:16.900 | that are constraints,
01:05:18.460 | that take all the variance in the world,
01:05:21.060 | put each of the variances into a variable
01:05:24.500 | that is connected with relationships to other variables.
01:05:27.980 | And these relationships are computable functions
01:05:30.060 | that constrain each other.
01:05:31.140 | So when you see a nose
01:05:32.260 | that points at a certain direction in space,
01:05:34.900 | you have a constraint that says
01:05:36.100 | there should be a face nearby that has the same direction.
01:05:39.100 | And if that is not the case,
01:05:40.380 | you have some kind of contradiction
01:05:41.700 | that you need to resolve
01:05:42.540 | because it's probably not a nose what you're looking at.
01:05:44.620 | It just looks like one.
01:05:45.940 | So you have to reinterpret the data
01:05:48.620 | until you get to a point where your model converges.
01:05:52.460 | And this process of making the sensory data
01:05:54.940 | fit into your model structure
01:05:56.700 | is what Piaget calls the assimilation.
01:06:01.140 | And accommodation is the change of the models
01:06:04.060 | where you change your model in such a way
01:06:05.700 | that you can assimilate everything.
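A minimal sketch of that predict-then-tune loop (purely illustrative; it assumes a linear world and treats the frames as independent snippets rather than a full sensory stream): the agent predicts the next vector of blips, observes what actually arrives, and adjusts its model to shrink the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown (to the agent) dynamics: the next frame of "blips" is a linear
# transform of the current one plus a little sensor noise.
true_W = rng.normal(size=(8, 8)) * 0.3

W_hat = np.zeros((8, 8))   # the agent's model of those dynamics
lr = 0.05
for step in range(5000):
    frame = rng.normal(size=8)                                   # current blip vector
    observed = true_W @ frame + rng.normal(scale=0.01, size=8)   # what actually arrives next
    prediction = W_hat @ frame                                   # the agent's guess beforehand
    error = observed - prediction                                # prediction error
    W_hat += lr * np.outer(error, frame)                         # tune the model to reduce it

print("remaining model error:", np.linalg.norm(true_W - W_hat))
```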
01:06:08.140 | - So you're talking about building
01:06:09.860 | a hell of an awesome perception system
01:06:12.380 | that's able to do prediction and perception correctly
01:06:15.100 | and keep improving.
01:06:15.940 | - No, wait, just figure it out.
01:06:17.780 | - Wait, there's more.
01:06:18.660 | - Yes, there's more.
01:06:19.580 | So the first thing that we wanted to do
01:06:21.500 | is we want to minimize the contradictions in the model.
01:06:24.740 | And of course, it's very easy to make a model
01:06:26.780 | in which you minimize the contradictions
01:06:28.300 | just by allowing that it can be
01:06:29.780 | in many, many possible states, right?
01:06:31.580 | So if you increase degrees of freedom,
01:06:34.060 | you will have fewer contradictions.
01:06:35.940 | But you also want to reduce the degrees of freedom
01:06:37.900 | because degrees of freedom mean uncertainty.
01:06:40.340 | You want your model to reduce uncertainty
01:06:42.500 | as much as possible.
01:06:44.460 | But reducing uncertainty is expensive.
01:06:46.620 | So you have to have a trade-off
01:06:47.860 | between minimizing contradictions and reducing uncertainty.
01:06:52.500 | And you have only a finite amount of compute
01:06:54.700 | and experimental time and effort available
01:06:57.580 | to reduce uncertainty in the world.
01:06:59.340 | So you need to assign value to what you observe.
01:07:02.820 | So you need some kind of motivational system
01:07:05.180 | that is estimating what you should be looking at
01:07:07.780 | and what you should be thinking about it,
01:07:09.260 | how you should be applying your resources
01:07:10.980 | to model what that is, right?
01:07:13.020 | So you need to have something like convergence links
01:07:16.020 | that tell you how to get from the present state of the model
01:07:18.020 | to the next one.
01:07:19.100 | You need to have these compatibility links
01:07:20.740 | that tell you which constraints exist
01:07:23.620 | and which constraint violations exist.
01:07:25.620 | And you need to have some kind of motivational system
01:07:29.020 | that tells you what to pay attention to.
01:07:30.820 | So now we have a second agent next to the perceptual agent.
01:07:33.100 | We have a motivational agent.
01:07:34.980 | This is a cybernetic system
01:07:36.380 | that is modeling what the system needs,
01:07:38.860 | what's important for the system,
01:07:40.540 | and that interacts with the perceptual system
01:07:42.220 | to maximize the expected reward.
01:07:44.660 | - And you're saying the motivational system
01:07:46.140 | is some kind of, like,
01:07:49.060 | what is it, a higher level narrative over some lower level?
01:07:52.580 | - No, it's just your brainstem stuff,
01:07:54.020 | the limbic system stuff that tells you,
01:07:55.700 | okay, now you should get something to eat
01:07:57.700 | because I've just measured your blood sugar.
01:07:59.380 | - So you mean like motivational system,
01:08:00.980 | like the lower level stuff, like hungry?
01:08:03.100 | - Yes, there's basically physiological needs
01:08:05.740 | and some cognitive needs and some social needs,
01:08:07.540 | and they all interact.
01:08:08.460 | And they're all implemented in different parts
01:08:10.260 | of your nervous system as the motivational system.
01:08:12.700 | But they're basically cybernetic feedback loops.
01:08:14.740 | It's not that complicated.
01:08:16.460 | It's just a lot of code.
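A minimal sketch of such a motivational layer (the needs, set points, and weights are invented for illustration): each need is a small feedback loop around a set point, and the largest deviation is what gets nominated for attention.

```python
from dataclasses import dataclass

@dataclass
class Need:
    """One cybernetic feedback loop: a sensed value regulated toward a set point."""
    name: str
    set_point: float
    value: float
    weight: float = 1.0   # how much this need matters relative to the others

    def urgency(self) -> float:
        # The urge grows with the deviation from the set point.
        return self.weight * abs(self.set_point - self.value)

# Toy physiological / cognitive / social needs (names are illustrative only).
needs = [
    Need("blood_sugar", set_point=1.0, value=0.4, weight=2.0),
    Need("rest",        set_point=1.0, value=0.8),
    Need("novelty",     set_point=0.6, value=0.6),
    Need("affiliation", set_point=1.0, value=0.7),
]

# The motivational system nominates the most urgent need; the perceptual and
# attention systems would then be biased toward stimuli relevant to it.
most_urgent = max(needs, key=lambda n: n.urgency())
print("attend to:", most_urgent.name, "urgency:", most_urgent.urgency())
```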
01:08:18.300 | And so you now have a motivational agent
01:08:21.460 | that makes your robot go for the ball,
01:08:23.140 | or that makes your worm go to eat food and so on.
01:08:27.620 | And you have the perceptual system
01:08:29.180 | that lets it predict the environment,
01:08:30.620 | so it's able to solve that control problem to some degree.
01:08:33.660 | And now what we learned is that it's very hard
01:08:35.860 | to build a machine learning system
01:08:37.260 | that looks at all the data simultaneously
01:08:39.380 | to see what kind of relationships could exist between them.
01:08:43.300 | So you need to selectively model the world.
01:08:45.620 | You need to figure out,
01:08:46.900 | where can I make the biggest difference
01:08:48.380 | if I would put the following things together?
01:08:51.060 | Sometimes you find a gradient for that, right?
01:08:53.060 | When you have a gradient,
01:08:54.260 | you don't need to remember where you came from.
01:08:56.540 | You just follow the gradient
01:08:57.620 | until it doesn't get any better.
01:08:59.420 | But if you have a world where the problems are discontinuous
01:09:02.220 | and the search spaces are discontinuous,
01:09:04.340 | you need to retain memory of what you explored.
01:09:07.380 | You need to construct a plan of what to explore next.
01:09:10.620 | And this thing means that you have,
01:09:13.100 | next to this perceptual construction system
01:09:15.420 | and the motivational cybernetics,
01:09:17.700 | an agent that is paying attention
01:09:20.300 | to what it should select at any given moment
01:09:22.740 | to maximize reward.
01:09:24.340 | And this scanning system,
01:09:25.660 | this attention agent is required for consciousness
01:09:28.940 | and consciousness is its control model.
01:09:31.460 | So it's the index memories that this thing retains
01:09:36.220 | when it manipulates the perceptual representations
01:09:39.220 | to maximize the value and minimize the conflicts
01:09:43.060 | and to increase coherence.
01:09:44.860 | So the purpose of consciousness is to create coherence
01:09:47.780 | in your perceptual representations,
01:09:49.540 | remove conflicts, predict the future,
01:09:52.180 | construct counterfactual representations
01:09:54.140 | so you can coordinate your actions and so on.
01:09:56.380 | And in order to do this, it needs to form memories.
01:10:00.260 | These memories are partial binding states
01:10:02.380 | of the working memory contents
01:10:04.140 | that are being revisited later on to backtrack,
01:10:07.140 | to undo certain states, to look for alternatives.
01:10:10.180 | And these index memories that you can recall,
01:10:13.060 | that is what you perceive as your stream of consciousness.
01:10:15.980 | And being able to recall these memories,
01:10:17.900 | this is what makes you conscious.
01:10:19.460 | If you could not remember what you paid attention to,
01:10:21.700 | you wouldn't be conscious.
01:10:22.940 | - So consciousness is the index in the memory database.
01:10:29.180 | Okay.
01:10:30.020 | But let me sneak up to the questions of consciousness
01:10:35.540 | a little further.
01:10:37.220 | So we usually relate suffering to consciousness.
01:10:42.700 | So the capacity to suffer.
01:10:44.380 | I think to me, that's a really strong sign of consciousness,
01:10:49.740 | is a thing that can suffer.
01:10:51.220 | How is that useful?
01:10:54.140 | Suffering.
01:10:55.980 | And like in your model, what you just described,
01:10:59.580 | which is indexing of memories,
01:11:01.580 | and what is the coherence with the perception,
01:11:05.120 | with this predictive thing that's going on in the perception,
01:11:09.300 | how does suffering relate to any of that?
01:11:12.700 | You know, the higher level suffering that humans do?
01:11:15.300 | - Basically pain is a reinforcement signal.
01:11:20.020 | Pain is a signal that one part of your brain
01:11:23.380 | sends to another part of your brain,
01:11:25.140 | or in an abstract sense, part of your mind
01:11:27.940 | sends to another part of the mind to regulate its behavior,
01:11:30.860 | to tell it the behavior that you're currently exhibiting
01:11:33.540 | should be improved.
01:11:34.940 | And this is the signal that I tell you
01:11:37.100 | to move away from what you're currently doing
01:11:40.180 | and push into a different direction.
01:11:42.300 | So pain gives part of you an impulse
01:11:46.060 | to do something differently.
01:11:47.940 | But sometimes this doesn't work,
01:11:49.940 | because the training part of your brain
01:11:52.140 | is talking to the wrong region,
01:11:54.180 | or because it has the wrong model
01:11:55.860 | of the relationships in the world.
01:11:57.180 | Maybe you're mismodeling yourself,
01:11:58.580 | or you're mismodeling the relationship
01:12:00.260 | of yourself to the world,
01:12:01.420 | or you're mismodeling the dynamics of the world.
01:12:03.500 | So you're trying to improve something
01:12:04.940 | that cannot be improved by generating more pain.
01:12:07.940 | But the system doesn't have any alternative.
01:12:10.420 | So it doesn't get better.
01:12:12.380 | What do you do if something doesn't get better,
01:12:14.260 | and you want it to get better?
01:12:15.580 | You increase the strength of the signal.
01:12:17.980 | And when the signal becomes chronic,
01:12:19.620 | when it becomes permanent,
01:12:20.980 | without a change in sight, this is what we call suffering.
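A toy sketch of that escalation logic (the coupling parameter and numbers are invented): a regulator sends a signal proportional to the error and turns it up whenever nothing improves; if the signal actually reaches the right place, the error shrinks and the signal fades, but if the coupling is mismodeled the signal just keeps escalating.

```python
def regulate(coupling, steps=8):
    """One part of the mind pushes 'behavior' toward a target with a pain-like
    signal, and turns the signal up whenever the error does not improve."""
    target, behavior, gain = 0.0, 1.0, 0.5
    signals, prev_error = [], None
    for _ in range(steps):
        error = abs(behavior - target)
        if prev_error is not None and error >= prev_error - 1e-9:
            gain *= 2.0                     # nothing improved, so escalate the signal
        signal = gain * error
        behavior -= coupling * signal       # the signal's actual effect on the behavior
        signals.append(round(signal, 2))
        prev_error = error
    return signals

print(regulate(coupling=0.3))  # well coupled: the error shrinks and the signal fades
print(regulate(coupling=0.0))  # mismodeled coupling: the signal keeps escalating
```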
01:12:24.340 | And the purpose of consciousness
01:12:26.460 | is to deal with contradictions,
01:12:28.220 | with things that cannot be resolved.
01:12:30.340 | The purpose of consciousness, I think,
01:12:32.100 | is similar to a conductor in an orchestra.
01:12:35.060 | When everything works well,
01:12:36.460 | the orchestra doesn't need much of a conductor,
01:12:38.620 | as long as it's coherent.
01:12:40.300 | But when there is a lack of coherence,
01:12:42.060 | or something is consistently producing disharmony
01:12:45.020 | and mismatches, then the conductor becomes alert
01:12:48.020 | and interacts with it.
01:12:49.020 | So suffering attracts the activity of our consciousness.
01:12:52.660 | And the purpose of that is ideally
01:12:54.780 | that we bring new layers online,
01:12:56.660 | new layers of modeling that are able
01:12:59.300 | to create a model of the dysregulation,
01:13:02.460 | so we can deal with it.
01:13:04.500 | And this means that we typically get
01:13:06.860 | higher level consciousness, so to speak.
01:13:08.820 | We get some consciousness above our pay grade, maybe,
01:13:11.420 | if we have some suffering early in our life.
01:13:13.260 | Most of the interesting people
01:13:14.860 | had trauma early on in their childhood.
01:13:17.060 | And trauma means that you are suffering an injury
01:13:20.940 | for which the system is not prepared,
01:13:23.100 | which it cannot deal with,
01:13:24.380 | which it cannot insulate itself from.
01:13:26.260 | So something breaks.
01:13:27.940 | And this means that the behavior of the system
01:13:29.860 | is permanently disturbed
01:13:33.460 | in a way that some mismatch exists now in the regulation,
01:13:37.500 | that just by following your impulses,
01:13:39.100 | by following the pain in the direction which it hurts,
01:13:41.860 | the situation doesn't improve, but gets worse.
01:13:44.380 | And so what needs to happen is that you grow up.
01:13:46.940 | And that part that has grown up is able to deal
01:13:50.460 | with the part that is stuck in this earlier phase.
01:13:53.340 | - Yeah, so it leads to growth,
01:13:54.620 | you're adding extra layers to your cognition.
01:13:58.040 | Let me ask you then, 'cause I gotta stick on suffering,
01:14:02.380 | the ethics of the whole thing.
01:14:03.900 | So not our consciousness, but the consciousness of others.
01:14:08.940 | You've tweeted, "One of my biggest fears
01:14:13.380 | is that insects could be conscious.
01:14:16.300 | The amount of suffering on earth would be unthinkable."
01:14:19.100 | So when we think of other conscious beings,
01:14:24.440 | is suffering a property of consciousness
01:14:30.300 | that we're most concerned about?
01:14:32.660 | So I'm still thinking about robots,
01:14:37.660 | how to make sense of other non-human things
01:14:43.140 | that appear to have the depth of experience
01:14:48.340 | that humans have.
01:14:49.420 | And to me, that means consciousness
01:14:53.980 | and the darkest side of that, which is suffering,
01:14:57.420 | the capacity to suffer.
01:15:00.340 | And so I started thinking, how much responsibility
01:15:03.540 | do we have for those other conscious beings?
01:15:06.580 | That's where the definition of consciousness
01:15:10.940 | becomes most urgent.
01:15:13.060 | Like having to come up with a definition of consciousness
01:15:15.100 | becomes most urgent, is who should we,
01:15:19.640 | and should we not be torturing?
01:15:21.300 | - There's no general answer to this.
01:15:26.300 | Was Genghis Khan doing anything wrong?
01:15:29.100 | It depends on how you look at it.
01:15:31.900 | - Well, he drew a line somewhere
01:15:35.340 | where this is us and that's them.
01:15:38.820 | It's the circle of empathy.
01:15:40.840 | It's like these, you don't have to use the word consciousness
01:15:44.860 | but these are the things that matter to me
01:15:48.980 | if they suffer or not.
01:15:50.100 | And these are the things that don't matter to me.
01:15:52.340 | - Yeah, but when one of his commanders failed him,
01:15:54.580 | he broke his spine and let him die in a horrible way.
01:15:59.140 | And so in some sense, I think he was indifferent
01:16:02.620 | to suffering or he was not indifferent in the sense
01:16:05.820 | that he didn't see it as useful if he inflicted suffering,
01:16:09.440 | but he did not see it as something that had to be avoided.
01:16:14.100 | That was not the goal.
01:16:15.460 | The question was, how can I use suffering
01:16:18.860 | and the infliction of suffering to reach my goals
01:16:21.260 | from his perspective?
01:16:23.900 | - I see, so like different societies throughout history
01:16:26.700 | put different value on the--
01:16:29.900 | - Different individuals, different psyches.
01:16:31.580 | - But also even the objective of avoiding suffering.
01:16:35.100 | Like some societies probably,
01:16:37.540 | I mean, this is where like religious belief really helps
01:16:40.740 | that afterlife, that doesn't matter that you suffer or die,
01:16:45.740 | what matters is you suffer honorably, right?
01:16:49.300 | So that you enter the afterlife.
01:16:52.300 | - It seems to be superstitious to me,
01:16:53.860 | basically beliefs that assert things
01:16:57.620 | for which no evidence exists,
01:16:59.980 | are incompatible with sound epistemology.
01:17:02.180 | And I don't think that religion has to be superstitious,
01:17:04.620 | otherwise it should be condemned in all cases.
01:17:06.860 | - You're somebody who's saying we live in a dream world,
01:17:09.140 | we have zero evidence for anything.
01:17:11.340 | - So it's not the case.
01:17:12.540 | There are limits to what languages can be constructed.
01:17:16.060 | Mathematics brings solid evidence for its own structure.
01:17:19.500 | And once we have some idea of what languages exist
01:17:23.260 | and how a system can learn
01:17:24.460 | and what learning itself is in the first place,
01:17:26.580 | and so on, we can begin to realize that our intuitions
01:17:31.580 | that we are able to learn about the regularities
01:17:34.620 | of the world and minimize surprisal
01:17:36.300 | and understand the nature of our own agency
01:17:38.920 | to some degree of abstraction,
01:17:40.700 | that's not an illusion.
01:17:42.140 | It's a useful approximation.
01:17:44.140 | - Just because we live in a dream world
01:17:46.860 | doesn't mean mathematics can't give us a consistent glimpse
01:17:51.780 | of physical, of objective reality.
01:17:54.980 | - We can basically distinguish useful encodings
01:17:57.340 | from useless encodings.
01:17:59.020 | And when we apply our truth-seeking to the world,
01:18:03.500 | we know we usually cannot find out
01:18:05.500 | whether a certain thing is true.
01:18:07.500 | What we typically do is we take the state vector
01:18:10.100 | of the universe, separate it into separate objects
01:18:12.140 | that interact with each other, so interfaces.
01:18:14.460 | And this distinction that we are making
01:18:16.200 | is not completely arbitrary.
01:18:17.480 | It's done to optimize the compression
01:18:21.180 | that we can apply to our models of the universe.
01:18:23.400 | So we can predict what's happening
01:18:25.700 | with our limited resources.
01:18:27.340 | In this sense, it's not arbitrary.
01:18:29.280 | But the separation of the world into objects
01:18:32.060 | that are somehow discrete and interacting with each other
01:18:34.980 | is not the true reality, right?
01:18:36.900 | The boundaries between the objects
01:18:38.440 | are projected into the world, not arbitrarily projected,
01:18:41.700 | but still, it's only an approximation
01:18:44.040 | of what's actually the case.
01:18:46.260 | And we sometimes notice that we run into contradictions
01:18:49.020 | when we try to understand high-level things
01:18:51.020 | like economic aspects of the world and so on,
01:18:53.980 | or political aspects or psychological aspects
01:18:57.020 | where we make simplifications.
01:18:58.320 | And the objects that we are using to separate the world
01:19:00.860 | are just one of many possible projections
01:19:03.180 | of what's going on.
01:19:04.660 | And so it's not in this postmodernist sense
01:19:07.140 | completely arbitrary and you're free to pick
01:19:09.220 | what you want or dismiss what you don't like
01:19:11.060 | because it's all stories.
01:19:12.220 | No, that's not true.
01:19:13.620 | You have to show for every model
01:19:15.360 | how well it predicts the world.
01:19:17.280 | So the confidence that you should have
01:19:19.200 | in the entities of your models
01:19:21.000 | should correspond to the evidence that you have.
01:19:23.440 | - Can I ask you on a small tangent
01:19:26.320 | to talk about your favorite set of ideas and people,
01:19:32.640 | which is postmodernism?
01:19:35.000 | - What? (laughs)
01:19:38.520 | - What is postmodernism?
01:19:40.680 | How would you define it?
01:19:41.640 | And why to you is it not a useful framework of thought?
01:19:46.640 | - Postmodernism is something
01:19:51.180 | that I'm really not an expert on.
01:19:53.060 | And postmodernism is a set of philosophical ideas
01:19:57.900 | that is difficult to lump together,
01:19:59.680 | that is characterized by some useful thinkers,
01:20:04.740 | some of them poststructuralist and so on.
01:20:06.980 | And I'm mostly not interested in it
01:20:08.540 | because I think that it's not leading me anywhere
01:20:11.420 | that I find particularly useful.
01:20:14.080 | It's mostly, I think, born out of the insight
01:20:16.540 | that the ontologies that we impose on the world
01:20:20.140 | are not literally true,
01:20:21.480 | and that we can often get to a different interpretation
01:20:23.680 | of the world by using a different ontology,
01:20:25.440 | that is, a different separation of the world
01:20:27.720 | into interacting objects.
01:20:29.900 | But the idea that this makes the world
01:20:32.200 | just a set of stories that are arbitrary, I think, is wrong.
01:20:36.380 | And the people that are engaging in this type of philosophy
01:20:40.640 | are working in an area
01:20:42.440 | that I largely don't find productive.
01:20:44.000 | There's nothing useful coming out of this.
01:20:46.240 | So this idea that truth is relative
01:20:48.200 | is not something that has, in some sense,
01:20:50.200 | informed physics or the theory of relativity.
01:20:52.800 | And there is no feedback between those.
01:20:54.720 | There is no meaningful influence
01:20:57.080 | of this type of philosophy on the sciences
01:20:59.840 | or in engineering or in politics.
01:21:02.280 | But there is a very strong influence of this on ideology,
01:21:07.280 | because it basically has become an ideology
01:21:10.560 | that is justifying itself by the notion
01:21:14.040 | that truth is a relative concept.
01:21:16.360 | And it's not being used in such a way
01:21:18.560 | that the philosophers or sociologists
01:21:21.480 | that take up these ideas say,
01:21:23.280 | oh, I should doubt my own ideas
01:21:25.240 | because maybe my separation of the world into objects
01:21:27.520 | is not completely valid,
01:21:28.560 | and I should maybe use a different one
01:21:30.360 | and be open to a pluralism of ideas.
01:21:33.560 | But it mostly exists to dismiss the ideas of other people.
01:21:37.360 | - It becomes, yeah, it becomes a political weapon of sorts.
01:21:40.440 | - Yes. - To achieve power.
01:21:42.120 | - Basically, there's nothing wrong, I think,
01:21:45.320 | with developing a philosophy around this,
01:21:48.920 | but to develop norms around the idea
01:21:51.840 | that truth is something that is completely negotiable
01:21:55.360 | is incompatible with the scientific project.
01:21:58.480 | And I think if the academia has no defense
01:22:02.200 | against the ideological parts
01:22:04.680 | of the postmodernist movement, it's doomed.
01:22:09.040 | - Right, you have to acknowledge
01:22:10.520 | the ideological part of any movement, actually,
01:22:13.800 | including postmodernism.
01:22:15.520 | - Well, the question is what an ideology is.
01:22:17.600 | And to me, an ideology is basically a viral memeplex
01:22:21.200 | that is changing your mind in such a way
01:22:24.000 | that reality gets warped.
01:22:26.120 | It gets warped in such a way that you're being cut off
01:22:28.280 | from the rest of human thought space,
01:22:29.640 | and you cannot consider things outside
01:22:32.400 | of the range of ideas of your own ideology
01:22:35.000 | as possibly true.
01:22:35.920 | - Right, so, I mean, there's certain properties
01:22:37.800 | to an ideology that make it harmful.
01:22:39.480 | One of them is that dogmatism of just certainty,
01:22:44.160 | dogged certainty in that you're right,
01:22:46.760 | you have the truth, and nobody else does.
01:22:48.720 | - Yeah, but what is creating the certainty?
01:22:50.320 | It's very interesting to look at the type of model
01:22:53.160 | that is being produced.
01:22:54.220 | Is it basically just a strong prior?
01:22:56.660 | And you tell people, oh, this idea
01:22:58.600 | that you consider to be very true,
01:23:00.040 | the evidence for this is actually just much weaker
01:23:02.280 | than you thought, and look here at some studies.
01:23:04.460 | No, this is not how it works.
01:23:06.200 | It's usually normative, which means some thoughts
01:23:09.360 | are unthinkable because they would change your identity
01:23:13.880 | into something that is no longer acceptable.
01:23:16.360 | And this cuts you off from considering an alternative,
01:23:20.160 | and many de facto religions use this trick
01:23:23.280 | to lock people into a certain mode of thought,
01:23:25.760 | and this removes agency over your own thoughts,
01:23:27.800 | and it's very ugly to me.
01:23:28.720 | It's basically not just a process of domestication,
01:23:32.660 | but it's actually an intellectual castration that happens.
01:23:36.280 | It's an inability to think creatively
01:23:39.200 | and to bring forth new thoughts.
01:23:40.900 | - Can I ask you about substances, chemical substances
01:23:48.360 | that affect the video game, the dream world?
01:23:53.160 | So psychedelics that increasingly have been getting
01:23:57.160 | a lot of research done on them.
01:23:58.860 | So in general, psychedelics, psilocybin, MDMA,
01:24:02.660 | but also a really interesting one, the big one,
01:24:05.240 | which is DMT.
01:24:06.300 | What and where are the places that these substances take
01:24:12.160 | the mind that is operating in the dream world?
01:24:15.380 | Do you have an interesting sense how this throws a wrinkle
01:24:20.360 | into the prediction model?
01:24:22.280 | Is it just some weird little quirk,
01:24:24.500 | or is there some fundamental expansion
01:24:27.840 | of the mind going on?
01:24:28.920 | - I suspect that a way to look at psychedelics
01:24:34.120 | is that they induce particular types
01:24:36.440 | of lucid dreaming states.
01:24:38.560 | So it's a state in which certain connections
01:24:41.620 | are being severed in your mind, are no longer active.
01:24:45.320 | Your mind basically gets free to move in a certain direction
01:24:48.880 | because some inhibition, some particular inhibition
01:24:51.080 | doesn't work anymore.
01:24:52.760 | And as a result, you might stop having a self,
01:24:55.360 | or you might stop perceiving the world as three-dimensional.
01:25:00.840 | And you can explore that state.
01:25:04.520 | And I suppose that for every state that can be induced
01:25:07.600 | with psychedelics, there are people that are naturally
01:25:09.440 | in that state.
01:25:11.000 | So sometimes psychedelics shift you through a range
01:25:14.040 | of possible mental states, and they can also shift you
01:25:17.060 | out of the range of permissible mental states,
01:25:19.120 | that is where you can make predictive models of reality.
01:25:22.660 | And what I observe in people that use psychedelics a lot
01:25:27.000 | is that they tend to be overfitting.
01:25:29.600 | Overfitting means that you are using more bits
01:25:34.560 | for modeling the dynamics of a function than you should.
01:25:38.080 | And so you can fit your curve to extremely detailed things
01:25:41.920 | in the past, but this model is no longer predictive
01:25:44.440 | for the future.
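A standard illustration of that point about overfitting (the data below is synthetic): a polynomial with too many coefficients, too many "bits", fits the past observations almost perfectly but predicts the future much worse than a simpler model does.

```python
import numpy as np

rng = np.random.default_rng(1)

def process(x):
    return np.sin(x)                      # the underlying dynamics to be modeled

x_past = np.linspace(0, 3, 10)
y_past = process(x_past) + rng.normal(scale=0.1, size=x_past.size)   # noisy past data
x_future = np.linspace(3, 4, 10)
y_future = process(x_future)                                         # the future to predict

for degree in (2, 9):                     # degree 9 = one coefficient per data point
    coeffs = np.polyfit(x_past, y_past, degree)
    fit_error = np.mean((np.polyval(coeffs, x_past) - y_past) ** 2)
    prediction_error = np.mean((np.polyval(coeffs, x_future) - y_future) ** 2)
    print(f"degree {degree}: fit {fit_error:.4f}, prediction {prediction_error:.2f}")
```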
01:25:45.880 | - What is it about psychedelics that forces that?
01:25:48.480 | I thought it would be the opposite.
01:25:51.080 | I thought that it's a good mechanism
01:25:54.720 | for generalization, for regularization.
01:25:59.360 | So it feels like psychedelics expansion of the mind,
01:26:03.280 | like taking you outside of, like forcing your model
01:26:06.040 | to be non-predictive is a good thing.
01:26:09.840 | Meaning like, it's almost like, okay,
01:26:14.400 | what I would say psychedelics are akin to is traveling
01:26:17.400 | to a totally different environment.
01:26:19.880 | Like going, if you've never been to like India
01:26:22.040 | or something like that from the United States,
01:26:24.280 | very different set of people, different culture,
01:26:26.240 | different food, different roads and values
01:26:30.400 | and all those kinds of things.
01:26:31.480 | - Yeah, so psychedelics can, for instance,
01:26:33.600 | teleport people into a universe that is hyperbolic,
01:26:37.880 | which means that if you imagine a room that you're in,
01:26:41.360 | you can turn around 360 degrees
01:26:43.640 | and you didn't go full circle.
01:26:44.720 | You need to go 720 degrees to go full circle.
01:26:47.280 | - Exactly.
01:26:48.120 | - So the things that people learn in that state
01:26:50.880 | cannot be easily transferred in this universe
01:26:53.040 | that we are in.
01:26:54.320 | It could be that if they're able to abstract
01:26:56.480 | and understand what happened to them,
01:26:58.320 | that they understand that some part
01:27:00.360 | of their spatial cognition has been desynchronized
01:27:03.560 | and has found a different synchronization.
01:27:05.720 | And this different synchronization
01:27:06.920 | happens to be a hyperbolic one, right?
01:27:08.680 | So you learn something interesting about your brain.
01:27:11.000 | It's difficult to understand what exactly happened,
01:27:13.200 | but we get a pretty good idea once we understand
01:27:15.480 | how the brain is representing geometry.
01:27:17.800 | - Yeah, but doesn't it give you a fresh perspective
01:27:20.240 | on the physical reality?
01:27:21.760 | (knocking)
01:27:24.000 | Who's making that sound?
01:27:27.880 | Is it inside my head or is it external?
01:27:30.240 | - Well, there is no sound outside of your mind,
01:27:33.280 | but it's making sense of phenomena in physics.
01:27:37.920 | (laughing)
01:27:39.760 | - Yeah, in the physical reality, there's sound waves
01:27:42.920 | traveling through air.
01:27:45.960 | Okay.
01:27:47.160 | - That's our model of what's happened.
01:27:48.680 | - That's our model of what happened, right.
01:27:51.840 | So, don't psychedelics give you a fresh perspective
01:27:56.840 | on this physical reality?
01:27:58.840 | Not this physical reality, but this more,
01:28:03.000 | what do you call the dream world?
01:28:08.360 | That's mapped directly to--
01:28:09.960 | - The purpose of dreaming at night, I think,
01:28:11.600 | is data augmentation.
01:28:13.720 | - Well, exactly.
01:28:14.920 | So that's very different.
01:28:16.320 | That's very similar to psychedelics.
01:28:17.520 | - So you basically change parameters
01:28:19.200 | about the things that you have learned.
01:28:21.680 | And for instance, when you are young,
01:28:24.160 | you have seen things from certain perspectives,
01:28:26.080 | but not from others.
01:28:27.320 | So your brain is generating new perspectives
01:28:29.640 | of objects that you already know,
01:28:31.560 | which means they can learn to recognize them later
01:28:34.120 | from different perspectives.
01:28:35.200 | And I suspect that's the reason that many of us
01:28:37.680 | remember to have flying dreams as children,
01:28:39.720 | because it's just different perspectives
01:28:41.320 | of the world that you already know.
01:28:43.000 | And that it starts to generate
01:28:45.080 | these different perspective changes,
01:28:47.840 | and then it fluidly turns this into a flying dream
01:28:50.520 | to make sense of what's happening, right?
01:28:52.240 | So you fill in the gaps,
01:28:53.560 | and suddenly you see yourself flying.
01:28:55.800 | And similar things can happen with semantic relationships.
01:28:58.800 | So it's not just spatial relationships,
01:29:00.520 | but it can also be the relationships
01:29:02.640 | between ideas that are being changed.
01:29:05.160 | And it seems that the mechanisms
01:29:06.880 | that make that happen during dreaming
01:29:09.000 | are interacting with these same receptors
01:29:14.280 | that are being stimulated by psychedelics.
01:29:17.160 | So I suspect that there is a thing
01:29:19.760 | that I haven't read really about,
01:29:22.000 | the way in which dreams are induced in the brain.
01:29:24.360 | It's not just that the activity of the brain
01:29:27.560 | gets tuned down because your eyes are closed
01:29:30.560 | and you no longer get enough data from your eyes,
01:29:33.920 | but there is a particular type of neurotransmitter
01:29:37.120 | that is saturating your brain during these phases,
01:29:40.120 | during the REM phases,
01:29:41.200 | and you produce controlled hallucinations.
01:29:44.720 | And psychedelics are linking into these mechanisms,
01:29:48.680 | I suspect.
01:29:49.840 | - So isn't that another trickier form of data augmentation?
01:29:54.040 | - Yes, but it's also data augmentation
01:29:57.720 | that can happen outside of the specification
01:29:59.840 | that your brain is tuned to.
01:30:00.920 | So basically people are overclocking their brains,
01:30:03.400 | and that produces states
01:30:05.760 | that are subjectively extremely interesting.
01:30:09.240 | - Yeah, I just-
01:30:10.520 | - But from the outside, very suspicious.
01:30:12.800 | - So I think I'm over applying the metaphor
01:30:15.600 | of a neural network in my own mind,
01:30:17.840 | which I just think that doesn't lead to overfitting, right?
01:30:22.400 | But you were just sort of anecdotally saying
01:30:26.320 | my experiences with people that have done psychedelics
01:30:28.600 | are that kind of quality.
01:30:30.440 | - I think it typically happens.
01:30:31.560 | So if you look at people like Timothy Leary,
01:30:34.400 | and he has written beautiful manifestos
01:30:36.640 | about the effect of LSD on people.
01:30:40.200 | He genuinely believed, he writes in his manifestos,
01:30:42.760 | that in the future, science and art
01:30:44.840 | will only be done on psychedelics
01:30:46.280 | because it's so much more efficient and so much better.
01:30:49.000 | And he gave LSD to children in this community
01:30:52.640 | of a few thousand people that he had near San Francisco.
01:30:55.760 | And basically he was losing touch with reality.
01:31:00.480 | He did not understand the effects,
01:31:02.200 | the things that he was doing would have
01:31:04.840 | on the reception of psychedelics by society,
01:31:07.880 | because he was unable to think critically
01:31:09.880 | about what happened.
01:31:10.720 | What happened was that he got in a euphoric state.
01:31:13.520 | That euphoric state happened because he was overfitting.
01:31:16.600 | He was taking this sense of euphoria
01:31:19.440 | and translating it into a model
01:31:21.480 | of actual success in the world, right?
01:31:23.640 | He was feeling better.
01:31:25.240 | Limitations that he had experienced
01:31:26.920 | to be existing had disappeared,
01:31:29.560 | but he didn't get superpowers.
01:31:30.760 | - I understand what you mean by overfitting now.
01:31:33.840 | There's a lot of interpretation
01:31:35.440 | to the term overfitting in this case, but I got you.
01:31:38.640 | So he was getting positive rewards
01:31:42.720 | from a lot of actions that he shouldn't have been doing.
01:31:44.400 | - But not just this.
01:31:45.240 | So if you take, for instance, John Lilly,
01:31:46.600 | who was studying dolphin languages and aliens and so on,
01:31:51.600 | a lot of people that use psychedelics became very loopy.
01:31:54.960 | And the typical thing that you notice
01:31:58.680 | when people are on psychedelics is that they are in a state
01:32:00.960 | where they feel that everything can be explained now.
01:32:03.660 | Everything is clear.
01:32:04.880 | Everything is obvious.
01:32:06.600 | And sometimes they have indeed discovered
01:32:09.640 | a useful connection, but not always.
01:32:12.080 | Very often these connections are over-interpretations.
01:32:15.360 | - I wonder, you know, there's a question
01:32:17.720 | of correlation versus causation.
01:32:21.080 | And also I wonder if it's the psychedelics
01:32:23.360 | or if it's more the social, like being the outsider
01:32:27.080 | and having a strong community of outsiders
01:32:31.160 | and having a leadership position
01:32:32.840 | in an outsider cult-like community,
01:32:35.560 | that could have a much stronger effect of overfitting
01:32:38.200 | than do psychedelics themselves, the actual substances,
01:32:41.620 | because it's a counterculture thing.
01:32:43.360 | So it could be that as opposed to the actual substance.
01:32:46.520 | If you're a boring person who wears a suit and tie
01:32:49.720 | and works at a bank and takes psychedelics,
01:32:53.240 | that could be a very different effect
01:32:55.160 | of psychedelics on your mind.
01:32:57.800 | I'm just sort of raising the point
01:32:59.640 | that the people you referenced are already weirdos.
01:33:02.880 | I'm not sure exactly.
01:33:04.160 | - No, not necessarily.
01:33:05.200 | A lot of the people that tell me
01:33:07.520 | that they use psychedelics in a useful way
01:33:10.960 | started out as squares and were liberating themselves
01:33:14.520 | because they were stuck.
01:33:16.040 | They were basically stuck in local optimum
01:33:17.920 | of their own self-model, of their relationship to the world.
01:33:20.960 | And suddenly they had data augmentation.
01:33:23.160 | They basically saw and experienced a space of possibilities.
01:33:26.680 | They experienced what it would be like to be another person.
01:33:29.760 | And they took important lessons
01:33:32.240 | from that experience back home.
01:33:34.480 | (inhales deeply)
01:33:36.640 | - Yeah.
01:33:37.480 | I mean, I love the metaphor of data augmentation
01:33:40.640 | because that's been the primary driver
01:33:44.880 | of self-supervised learning in the computer vision domain
01:33:48.920 | is data augmentation.
01:33:50.080 | So it's funny to think of data augment,
01:33:53.080 | like chemically induced data augmentation in the human mind.
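As a concrete gloss on the data-augmentation metaphor, here is a minimal sketch of what augmentation means in vision training: generating new views (flips, rotations, crops) of images the model already knows, so it can later recognize the same content from perspectives it never literally saw. The transforms and sizes below are arbitrary toy choices.

```python
# A minimal sketch of data augmentation as used in (self-supervised) vision
# training: produce randomly transformed views of a known image.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Return a randomly transformed view of a square image (H, W, C)."""
    view = image
    if rng.random() < 0.5:                    # random horizontal flip
        view = view[:, ::-1]
    k = rng.integers(0, 4)                    # random 90-degree rotation
    view = np.rot90(view, k)
    h, w = view.shape[:2]                     # random crop, then resize back (nearest)
    ch, cw = int(h * 0.8), int(w * 0.8)
    top, left = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    crop = view[top:top + ch, left:left + cw]
    rows = np.linspace(0, ch - 1, h).astype(int)
    cols = np.linspace(0, cw - 1, w).astype(int)
    return crop[rows][:, cols]

image = rng.random((32, 32, 3))               # stand-in for a real photo
views = [augment(image) for _ in range(4)]    # several "dreamed" perspectives
print([v.shape for v in views])
```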
01:33:58.080 | - There's also a very interesting effect that I noticed.
01:34:03.200 | I know several people who swear to me
01:34:07.080 | that LSD has cured their migraines.
01:34:10.920 | So severe cluster headaches or migraines
01:34:14.080 | that didn't respond to standard medication
01:34:16.960 | that disappeared after a single dose.
01:34:19.200 | And I don't recommend anybody doing this,
01:34:21.800 | especially not in the US where it's illegal.
01:34:24.120 | And there are no studies on this for that reason.
01:34:27.360 | But it seems that anecdotally
01:34:29.960 | that it basically can reset the serotonergic system.
01:34:34.360 | So it's basically pushing them
01:34:37.160 | outside of their normal boundaries.
01:34:39.120 | And as a result, it needs to find a new equilibrium.
01:34:41.920 | And in some people, that equilibrium is better.
01:34:44.200 | But it also follows that in other people, it might be worse.
01:34:47.120 | So if you have a brain that is already teetering
01:34:50.360 | on the boundary to psychosis,
01:34:52.840 | it can be permanently pushed over that boundary.
01:34:55.560 | - Well, that's why you have to do good science,
01:34:57.160 | which they're starting to do
01:34:58.040 | on all these different substances
01:34:59.600 | of how well it actually works for the different conditions
01:35:01.640 | like MDMA seems to help with PTSD, same with psilocybin.
01:35:06.640 | That you need to do good science,
01:35:09.040 | meaning large studies of large N.
01:35:11.560 | - Yeah, so based on the existing studies with MDMA,
01:35:14.680 | it seems that if you look at Rick Doblin's work
01:35:18.120 | and what he has published about this and talks about,
01:35:21.400 | MDMA seems to be a psychologically relatively safe drug,
01:35:24.800 | but it's physiologically not very safe.
01:35:26.800 | That is, there is neurotoxicity
01:35:30.120 | if you would use too large dose.
01:35:31.840 | And if you combine this with alcohol,
01:35:34.440 | which a lot of kids do in party settings during raves
01:35:37.600 | and so on, it's very hepatotoxic.
01:35:40.320 | So basically you can kill your liver.
01:35:42.280 | And this means that it's probably something that is best
01:35:45.400 | and most productively used in a clinical setting
01:35:48.400 | by people who really know what they're doing.
01:35:50.080 | And I suspect that's also true for the other psychedelics.
01:35:53.640 | That is, while the other psychedelics
01:35:56.120 | are probably not as toxic as say alcohol,
01:35:59.520 | the effects on the psyche can be much more profound and lasting.
01:36:03.520 | - Yeah, well, as far as I know, psilocybin,
01:36:06.000 | so mushrooms, magic mushrooms,
01:36:08.240 | as far as I know in terms of the studies they're running,
01:36:11.820 | I think have no, like they're allowed to do
01:36:15.080 | what they're calling heroic doses.
01:36:17.120 | So that one does not have a toxicity.
01:36:19.000 | So they could do like huge doses in a clinical setting
01:36:21.760 | when they're doing study on psilocybin,
01:36:23.680 | which is kind of fun.
01:36:25.200 | - Yeah, it seems that most of the psychedelics
01:36:27.160 | work in extremely small doses,
01:36:29.320 | which means that the effect on the rest of the body
01:36:32.220 | is relatively low.
01:36:33.720 | And MDMA is probably the exception.
01:36:36.200 | Maybe ketamine can be dangerous in larger doses
01:36:38.360 | because it can depress breathing and so on.
01:36:41.320 | But the LSD and psilocybin work in very, very small doses,
01:36:46.000 | at least the active part of them,
01:36:47.880 | the active part of psilocybin and LSD.
01:36:50.640 | But the effect that it can have
01:36:54.160 | on your mental wiring can be very dangerous, I think.
01:36:57.120 | - Let's talk about AI a little bit.
01:37:00.600 | What are your thoughts about GPT-3 and language models
01:37:05.360 | trained with self-supervised learning?
01:37:07.280 | It came out quite a bit ago,
01:37:11.480 | but I wanted to get your thoughts on it.
01:37:13.240 | - Yeah.
01:37:14.640 | In the '90s, I was in New Zealand
01:37:16.960 | and I had an amazing professor, Ian Witten,
01:37:21.160 | who realized I was bored in class and put me in his lab.
01:37:25.240 | And he gave me the task to discover grammatical structure
01:37:28.840 | in an unknown language.
01:37:30.120 | And the unknown language that I picked was English
01:37:33.800 | because it was the easiest one to find
01:37:36.040 | a corpus for, to construct one.
01:37:38.000 | And he gave me the largest computer at the whole university.
01:37:42.000 | It had two gigabytes of RAM, which was amazing.
01:37:44.160 | And I wrote everything in C
01:37:45.400 | with some in-memory compression to do statistics
01:37:47.760 | over the language.
01:37:49.360 | And I first would create a dictionary of all the words,
01:37:53.960 | which basically tokenizes everything and compresses things
01:37:57.320 | so that I don't need to store the whole word,
01:37:58.840 | but just a code for every word.
01:38:02.320 | And then I was taking this all apart in sentences
01:38:05.920 | and I was trying to find all the relationships
01:38:09.160 | between all the words in the sentences
01:38:10.880 | and do statistics over them.
01:38:12.960 | And that proved to be impossible
01:38:15.200 | because the complexity is just too large.
01:38:18.040 | So if you want to discover the relationship
01:38:20.480 | between an article and a noun,
01:38:21.880 | and there are three adjectives in between,
01:38:23.880 | you cannot do N-gram statistics
01:38:25.400 | and look at all the possibilities that can exist,
01:38:28.080 | at least not with the resources that we had back then.
01:38:30.760 | So I realized I need to make some statistics
01:38:33.280 | over what I need to make statistics over.
01:38:35.200 | So I wrote something that was pretty much a hack
01:38:38.600 | that did this for at least first order relationships.
01:38:42.360 | And I came up with some kind of mutual information graph
01:38:45.080 | that was indeed discovering something
01:38:47.520 | that looks exactly like the grammatical structure
01:38:49.400 | of the sentence, just by trying to encode the sentence
01:38:52.600 | in such a way that the words would be written
01:38:54.680 | in the optimal order inside of the model.
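A rough sketch of the kind of first-order statistics described here, on a toy corpus rather than the original C code: build a dictionary of word ids, count how often word pairs co-occur within a sentence, and score pairs by pointwise mutual information; the highest-scoring links tend to trace grammatical and semantic structure.

```python
# A toy reconstruction of first-order mutual-information statistics over words.
import math
from collections import Counter
from itertools import combinations

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a dog",
]
sentences = [s.split() for s in corpus]

# Step 1: a dictionary of word ids (the tokenization / compression step).
vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s}))}
print(f"{len(vocab)} word ids in the dictionary")

# Step 2: count single words and within-sentence word pairs.
word_counts = Counter(w for s in sentences for w in s)
pair_counts = Counter()
for s in sentences:
    for a, b in combinations(s, 2):
        pair_counts[tuple(sorted((a, b)))] += 1

total_words = sum(word_counts.values())
total_pairs = sum(pair_counts.values())

# Step 3: pointwise mutual information between two words.
def pmi(a: str, b: str) -> float:
    p_ab = pair_counts[tuple(sorted((a, b)))] / total_pairs
    p_a, p_b = word_counts[a] / total_words, word_counts[b] / total_words
    return math.log2(p_ab / (p_a * p_b))

top = sorted(pair_counts, key=lambda p: pmi(*p), reverse=True)[:5]
print([(a, b, round(pmi(a, b), 2)) for a, b in top])
```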
01:38:58.040 | And what I also found is that if we would be able
01:39:02.080 | to increase the resolution of that
01:39:03.760 | and not just use this model
01:39:06.560 | to reproduce grammatically correct sentences,
01:38:09.000 | we would also be able to produce
01:39:10.440 | stylistically correct sentences
01:39:11.960 | by just having more bits in these relationships.
01:39:14.520 | And if we wanted to have meaning,
01:39:16.240 | we would have to go much higher order.
01:39:18.680 | And I didn't know how to make higher order models back then
01:39:21.400 | without spending way more years in research
01:39:23.800 | on how to make the statistics
01:39:25.520 | over what we need to make statistics over.
01:39:27.680 | And this thing that we cannot look at the relationships
01:39:31.480 | between all the bits in your input
01:39:33.200 | is being solved in different domains in different ways.
01:39:35.720 | So in computer graphics, in computer vision,
01:39:39.320 | the standard method for many years now
01:39:41.320 | is convolutional neural networks.
01:39:43.560 | Convolutional neural networks are hierarchies of filters
01:39:46.600 | that exploit the fact that neighboring pixels in images
01:39:49.520 | are usually semantically related
01:39:51.040 | and distant pixels in images
01:39:53.000 | are usually not semantically related.
01:39:55.440 | So you can just by grouping the pixels
01:39:57.640 | that are next to each other hierarchically together,
01:40:00.320 | reconstruct the shape of objects.
01:40:02.720 | And this is an important prior
01:40:04.560 | that we built into these models
01:40:06.080 | so they can converge quickly.
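A minimal sketch of the locality prior being described: a convolution computes each output value only from a small neighborhood of pixels, and stacking such filters hierarchically is what lets these networks build up object shape. The toy image and edge filter below are purely illustrative.

```python
# A minimal 2-D convolution: each output value depends only on a small
# neighborhood of the input -- the locality prior of CNNs.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution of a single-channel image with a small filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4:] = 1.0                            # a vertical edge

edge_filter = np.array([[-1.0, 0.0, 1.0],     # responds to left-to-right contrast
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

response = conv2d(image, edge_filter)
print(response)                               # strong responses only near the edge
```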
01:40:08.400 | But this doesn't work in language
01:40:09.800 | for the reason that adjacent words are often
01:40:12.880 | but not always related
01:40:14.120 | and distant words are sometimes related
01:40:16.360 | while the words in between are not.
01:40:18.120 | So how can you learn the topology of language?
01:40:22.600 | And I think for this reason that this difficulty existed,
01:40:26.400 | the transformer was invented
01:40:28.680 | in natural language processing, not in vision.
01:40:32.760 | And what the transformer is doing,
01:40:34.880 | it's a hierarchy of layers
01:40:36.640 | where every layer learns what to pay attention to
01:40:39.800 | in the given context in the previous layer.
01:40:42.800 | So what to make the statistics over.
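A minimal sketch of one such attention layer (toy sizes, nothing like GPT-3's): every position computes weights over all other positions in its context, which is exactly the "what to make the statistics over" decision, and then mixes their values accordingly.

```python
# A minimal scaled dot-product attention layer over a short toy sequence.
import numpy as np

def attention(x: np.ndarray, w_q: np.ndarray, w_k: np.ndarray, w_v: np.ndarray) -> np.ndarray:
    q, k, v = x @ w_q, x @ w_k, x @ w_v                  # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])              # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the context
    return weights @ v                                   # attention-weighted mixture

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8                                  # toy sizes
x = rng.normal(size=(seq_len, d_model))                  # one layer's representations
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(attention(x, w_q, w_k, w_v).shape)                 # (6, 8)
```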
01:40:44.840 | - And the context is significantly larger
01:40:49.560 | than the adjacent word.
01:40:51.120 | - Yes.
01:40:51.960 | So the context that GPT-3 has been using,
01:40:55.560 | the transformer itself is from 2017
01:40:58.080 | and it wasn't using that large of a context.
01:41:01.800 | OpenAI has basically scaled up this idea
01:41:04.640 | as far as they could at the time.
01:41:06.560 | And the context is about 2048 symbols,
01:41:10.920 | tokens in the language.
01:41:12.600 | These symbols are not characters,
01:41:15.080 | but they take the words and project them
01:41:17.040 | into a vector space where words
01:41:20.120 | that are statistically co-occurring a lot
01:41:22.040 | are neighbors already.
01:41:23.240 | So it's already a simplification
01:41:24.800 | of the problem a little bit.
01:41:26.600 | And so every word is basically a set of coordinates
01:41:29.280 | in a high dimensional space.
01:41:31.080 | And then they use some kind of trick
01:41:33.120 | to also encode the order of the words in a sentence
01:41:36.360 | or not just in the sentence,
01:41:37.840 | but 2048 tokens is about a couple pages of text
01:41:41.800 | or two and a half pages of text.
01:41:43.600 | And so they managed to do pretty exhaustive statistics
01:41:46.880 | over the potential relationships
01:41:49.160 | between two pages of text, which is tremendous, right?
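A minimal sketch of the input side described here: token ids are mapped to coordinates in a vector space, and a positional code is added so the model also knows where each token sits in the (up to 2048-token) context. The sketch uses the sinusoidal scheme from the original transformer paper; GPT-3 itself uses learned position embeddings, and all sizes and token ids below are toy values.

```python
# Token embeddings plus a positional code: the input representation sketched above.
import numpy as np

def sinusoidal_positions(context_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal position encoding from the original transformer paper."""
    pos = np.arange(context_len)[:, None]
    dim = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (dim // 2)) / d_model)
    return np.where(dim % 2 == 0, np.sin(angle), np.cos(angle))

rng = np.random.default_rng(0)
vocab_size, d_model, context_len = 50_000, 64, 2048      # toy embedding width
embedding = rng.normal(scale=0.02, size=(vocab_size, d_model))
positions = sinusoidal_positions(context_len, d_model)

token_ids = np.array([17, 4023, 9, 9, 310])              # a hypothetical tokenized prompt
inputs = embedding[token_ids] + positions[: len(token_ids)]
print(inputs.shape)                                      # (5, 64): one vector per token
```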
01:41:51.720 | I was just using a single sentence back then
01:41:55.040 | and I was only looking for first order relationships
01:41:58.760 | and they were really looking for
01:42:01.000 | much, much higher level relationships.
01:42:02.760 | And what they discover after they fed this
01:42:05.280 | with an enormous amount of training data,
01:42:07.120 | pretty much the written internet
01:42:09.000 | or a subset of it that had some quality,
01:42:12.160 | but a substantial portion of the Common Crawl,
01:42:15.200 | that they're not only able to reproduce style,
01:42:18.200 | but they're also able to reproduce
01:42:19.880 | some pretty detailed semantics,
01:42:21.680 | like being able to add three digit numbers
01:42:24.720 | and multiply two digit numbers
01:42:26.280 | or to translate between programming languages
01:42:28.840 | and things like that.
01:42:30.240 | So the results that GPT-3 got, I think were amazing.
01:42:34.080 | - By the way, I actually didn't check carefully.
01:42:38.600 | It's funny you just mentioned
01:42:40.560 | how you coupled semantics to the multiplication.
01:42:42.960 | Is it able to do some basic math on two digit numbers?
01:42:46.720 | - Yes.
01:42:47.960 | - Okay, interesting.
01:42:48.840 | I thought there's a lot of failure cases.
01:42:53.120 | - Yeah, it basically fails if you take larger digit numbers.
01:42:56.160 | So four digit numbers and so on
01:42:58.480 | makes carrying mistakes and so on.
01:43:00.560 | And if you take larger numbers,
01:43:02.560 | you don't get useful results at all.
01:43:05.000 | And this could be an issue of the training set,
01:43:09.240 | where there are not many examples
01:43:10.960 | of successful long form addition
01:43:13.320 | in standard human-written text.
01:43:15.320 | - And humans aren't very good
01:43:16.800 | at doing three digit numbers either.
01:43:19.440 | - Yeah, and you're not writing a lot about it.
01:43:22.400 | And the other thing is that the loss function
01:43:24.760 | that is being used is only minimizing surprises.
01:43:27.040 | So it's predicting what comes next in a typical text.
01:43:29.600 | It's not trying to go for causal closure first as we do.
01:43:33.000 | - Yeah.
01:43:34.760 | - But the fact that that kind of prediction works
01:43:39.640 | to generate text that's semantically rich
01:43:42.720 | and consistent is interesting.
01:43:45.000 | - Yeah.
01:43:45.840 | - So yeah, so it's amazing that it's able
01:43:47.200 | to generate semantically consistent text.
01:43:50.920 | - It's not consistent.
01:43:51.920 | So the problem is that it loses coherence at some point.
01:43:54.680 | But it's also, I think, not correct to say
01:43:57.120 | that GPT-3 is unable to deal with semantics at all,
01:44:01.360 | because you ask it to perform certain transformations
01:44:04.080 | in text and it performs these transformations in text.
01:44:07.200 | And the kind of additions that it's able
01:44:09.200 | to perform are transformations in text, right?
01:44:12.560 | And there are proper semantics involved.
01:44:15.360 | You can also do more.
01:44:16.440 | There was a paper that was generating lots and lots
01:44:20.240 | of mathematically correct text
01:44:24.160 | and was feeding this into a transformer.
01:44:26.360 | And as a result, it was able to learn how
01:44:29.560 | to do differentiation and integration in ways
01:44:32.480 | that, according to the authors, Mathematica could not.
01:44:35.120 | To which some of the people in Mathematica responded
01:44:39.880 | that they were not using Mathematica in the right way
01:44:42.720 | and so on.
01:44:43.560 | I have not really followed the resolution of this conflict.
01:44:46.400 | - This part, as a small tangent,
01:44:48.720 | I really don't like in machine learning papers,
01:44:51.520 | which they often do anecdotal evidence.
01:44:56.520 | They'll find like one example in some kind
01:44:58.880 | of specific use of Mathematica and demonstrate,
01:45:01.160 | look, here's, they'll show successes and failures,
01:45:04.160 | but they won't have a very clear representation
01:45:07.640 | of how many cases this actually represents.
01:45:09.440 | - Yes, but I think as a first paper,
01:45:11.240 | this is a pretty good start.
01:45:12.640 | And so the take home message, I think,
01:45:15.480 | is that the authors could get better results from this
01:45:19.840 | in their experiments than they could get from the way
01:45:23.480 | in which they were using computer algebra systems,
01:45:25.960 | which means that was not nothing.
01:45:29.120 | And it's able to perform substantially better
01:45:32.360 | than GPT-3 can based on a much larger amount
01:45:35.680 | of training data using the same underlying algorithm.
01:45:38.960 | - Well, let me ask again.
01:45:41.320 | So I'm using your tweets as if this is like Plato, right?
01:45:44.880 | (both laughing)
01:45:47.080 | As if this is well thought out novels that you've written.
01:45:51.800 | You tweeted, "GPT-4 is listening to us now."
01:45:58.640 | This is one way of asking,
01:46:00.280 | what are the limitations of GPT-3 when it scales?
01:46:04.200 | So what do you think will be the capabilities of GPT-4,
01:46:07.880 | GPT-5, and so on?
01:46:10.240 | What are the limits of this approach?
01:46:11.760 | - So obviously when we are writing things right now,
01:46:15.080 | everything that we are writing now
01:46:16.440 | is going to be training data
01:46:18.000 | for the next generation of machine learning models.
01:46:20.080 | So yes, of course, GPT-4 is listening to us.
01:46:23.080 | And I think the tweet is already a little bit older
01:46:25.600 | and we now have WUDAO and we have a number of other systems
01:46:30.080 | that basically are placeholders for GPT-4.
01:46:33.560 | Don't know what OpenAI's plans are in this regard.
01:46:35.920 | - I read that tweet in several ways.
01:46:39.040 | So one is obviously everything you put on the internet
01:46:42.680 | is used as training data.
01:46:44.640 | But in a second way, I read it is in a,
01:46:49.560 | we talked about agency.
01:46:51.640 | I read it as almost like GPT-4 is intelligent enough
01:46:55.440 | to be choosing to listen.
01:46:58.240 | So not only did a programmer tell it to collect this data
01:47:02.080 | and use it for training,
01:47:03.680 | I almost saw the humorous angle,
01:47:06.200 | which is like it has achieved AGI kind of thing.
01:47:09.080 | - Well, the thing is,
01:47:10.760 | could we already be living in GPT-5?
01:47:13.200 | (both laughing)
01:47:15.240 | - So GPT-4 is listening and GPT-5 actually constructing
01:47:18.960 | the entirety of the reality.
01:47:20.920 | - Of course, in some sense,
01:47:22.840 | what everybody is trying to do right now in AI
01:47:25.000 | is to extend the transformer to be able to deal with video.
01:47:28.000 | And there are very promising extensions, right?
01:47:32.360 | There's a model by Google that is called Perceiver,
01:47:36.520 | and that is overcoming some of the limitations
01:47:39.760 | of the transformer by letting it learn the topology
01:47:42.440 | of the different modalities separately,
01:47:45.360 | and by training it to find better input features.
01:47:50.080 | So basically the feature abstractions that are being used
01:47:52.560 | by this successor to GPT-3 are chosen in such a way
01:47:57.560 | that it's able to deal with video input.
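A rough sketch of the core Perceiver idea as I understand it (not the published implementation): a small, fixed-size latent array cross-attends to a very long raw input such as video, so the expensive attention no longer scales with the square of the input length.

```python
# Cross-attention from a small latent array to a long raw input -- a rough
# sketch of the Perceiver-style bottleneck, not the published architecture.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_inputs, n_latents, d = 10_000, 128, 32      # long input, small latent bottleneck
inputs = rng.normal(size=(n_inputs, d))       # raw input features (plus position codes)
latents = rng.normal(size=(n_latents, d))     # learned latent array

# Latents are the queries; the raw input provides keys and values.
scores = latents @ inputs.T / np.sqrt(d)      # (128, 10000), not (10000, 10000)
latents = softmax(scores) @ inputs            # the latent array summarizes the input
print(latents.shape)                          # (128, 32)
```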
01:48:00.800 | And there is more to be done.
01:48:02.240 | So one of the limitations of GPT-3 is that it's amnesiac.
01:48:07.240 | So it forgets everything beyond the two pages
01:48:10.000 | that it currently reads, also during generation,
01:48:12.360 | not just during learning.
01:48:14.440 | - Do you think that's fixable
01:48:16.600 | within the space of deep learning?
01:48:18.680 | Can you just make a bigger, bigger, bigger input?
01:48:21.320 | - No, I don't think that our own working memory
01:48:24.480 | is infinitely large.
01:48:25.600 | It's probably also just a few thousand bits.
01:48:28.000 | But what you can do is you can structure
01:48:31.040 | this working memory.
01:48:31.880 | So instead of just force-feeding this thing
01:48:34.960 | a certain thing that it has to focus on,
01:48:37.040 | and not allowing it to focus on anything else
01:48:39.440 | with its network, you allow it to construct
01:48:42.680 | its own working memory, as we do, right?
01:48:44.840 | When we are reading a book,
01:48:46.600 | it's not that we are focusing our attention
01:48:48.680 | in such a way that we can only remember
01:48:50.680 | the current page.
01:48:52.360 | We will also try to remember other pages
01:48:54.640 | and try to undo what we learned from them
01:48:56.840 | or modify what we learned from them.
01:48:58.640 | We might get up and take another book from the shelf.
01:49:01.000 | We might go out and ask somebody,
01:49:02.840 | and we can edit our working memory in any way
01:49:06.000 | that is useful to put a context together
01:49:08.640 | that allows us to draw the right inferences
01:49:11.040 | and to learn the right things.
01:49:13.080 | So this ability to perform experiments on the world
01:49:16.320 | based on an attempt to become fully coherent
01:49:20.400 | and to achieve causal closure,
01:49:22.200 | to achieve a certain aesthetic of your modeling,
01:49:24.840 | that is something that eventually needs to be done.
01:49:28.280 | And at the moment, we are skirting this in some sense
01:49:31.080 | by building systems that are larger and faster
01:49:33.400 | so they can use dramatically larger resources
01:49:36.080 | and much more training data than human beings can,
01:49:38.680 | to get to models that in some sense
01:49:40.360 | are already very superhuman,
01:49:42.320 | and in other ways are laughingly incoherent.
01:49:45.480 | - So do you think sort of making the systems like,
01:49:50.040 | what would you say, multi-resolutional?
01:49:51.880 | So like some of the language models are focused on two pages,
01:49:56.880 | some are focused on two books,
01:50:03.360 | some are focused on two years of reading,
01:50:06.560 | some are focused on a lifetime.
01:50:08.680 | So it's like stacks, it's a GPT-3s all the way down.
01:50:11.880 | - You want to have gaps in between them.
01:50:13.720 | So it's not necessarily two years with no gaps.
01:50:17.040 | It's things out of two years or out of 20 years
01:50:19.960 | or 2,000 years or 2 billion years
01:50:22.240 | where you are just selecting those bits
01:50:24.600 | that are predicted to be the most useful ones
01:50:27.520 | to understand what you're currently doing.
01:50:29.720 | And this prediction itself requires a very complicated model
01:50:32.800 | and that's the actual model that you need to be making.
01:50:34.760 | It's not just that you are trying to understand
01:50:36.960 | the relationships between things,
01:50:38.360 | but what you need to make relationships,
01:50:40.760 | discover relationships over.
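A loose illustration of that selection step, my gloss rather than anything proposed in the conversation: score stored memory chunks by their similarity to the current context and keep only the most useful ones, instead of attending to a fixed contiguous window.

```python
# Selecting "the most useful bits" from a large memory by similarity to the
# current context -- an illustrative retrieval step, not a specific proposal.
import numpy as np

rng = np.random.default_rng(0)
d = 16
memory = rng.normal(size=(1000, d))            # embeddings of things read long ago
current_context = rng.normal(size=d)           # embedding of what we're doing now

def top_k_memories(memory: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    # Cosine similarity between the query and every stored chunk.
    sims = memory @ query / (np.linalg.norm(memory, axis=1) * np.linalg.norm(query))
    return np.argsort(sims)[-k:][::-1]          # indices of the k best matches

selected = top_k_memories(memory, current_context, k=8)
print(selected)                                 # these chunks enter the working context
```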
01:50:42.520 | - I wonder what that thing looks like,
01:50:45.480 | what the architecture for the thing
01:50:47.800 | that's able to have that kind of model.
01:50:49.760 | - I think it needs more degrees of freedom
01:50:52.440 | than the current models have.
01:50:54.280 | So it starts out with the fact that you possibly
01:50:57.400 | don't just want to have a feed-forward model,
01:50:59.680 | but you want it to be fully recurrent.
01:51:02.680 | And to make it fully recurrent,
01:51:04.520 | you probably need to loop it back into itself
01:51:06.680 | and allow it to skip connections.
01:51:08.240 | Once you do this, when you are predicting the next frame
01:51:12.040 | and your internal next frame in every moment,
01:51:15.200 | and you are able to skip connection,
01:51:17.520 | it means that signals can travel from the output
01:51:21.320 | of the network into the middle of the network
01:51:24.320 | faster than the inputs do.
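A minimal sketch, again my illustration, of a recurrent network whose previous output is fed back into a middle layer through a skip connection, so that output signals reach the middle of the network before the next input has propagated forward. All weights and sizes are toy values.

```python
# A tiny recurrent network with a feedback skip connection from output to the
# middle layer, unrolled over a few time steps.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 4
w_in = rng.normal(scale=0.3, size=(d_in, d_hidden))
w_mid = rng.normal(scale=0.3, size=(d_hidden, d_hidden))
w_out = rng.normal(scale=0.3, size=(d_hidden, d_out))
w_feedback = rng.normal(scale=0.3, size=(d_out, d_hidden))   # output -> middle skip

hidden = np.zeros(d_hidden)
output = np.zeros(d_out)

for t in range(5):
    x = rng.normal(size=d_in)                       # the next input frame
    # The middle layer sees the new input, its own previous state,
    # and the previous output (the feedback skip connection).
    hidden = np.tanh(x @ w_in + hidden @ w_mid + output @ w_feedback)
    output = np.tanh(hidden @ w_out)                # prediction of the next frame
    print(t, np.round(output, 2))
```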
01:51:25.920 | - Do you think it could still be differentiable?
01:51:28.800 | Do you think it still could be a neural network?
01:51:30.680 | - Sometimes it can, and sometimes it cannot.
01:51:33.000 | So it can still be a neural network,
01:51:35.520 | but not a fully differentiable one.
01:51:37.240 | And when you want to deal with non-differentiable ones,
01:51:40.920 | you need to have an attention system
01:51:42.800 | that is discrete and two-dimensional
01:51:44.800 | and can perform grammatical operations.
01:51:46.680 | You need to be able to perform program synthesis.
01:51:49.360 | You need to be able to backtrack in these operations
01:51:52.360 | that you perform on this thing.
01:51:54.080 | And this thing needs a model of what it's currently doing.
01:51:56.480 | And I think this is exactly the purpose
01:51:58.480 | of our own consciousness.
01:51:59.920 | - Yeah, the program things
01:52:03.080 | that trick you on your own networks.
01:52:05.440 | So let me ask you, it's not quite program synthesis,
01:52:09.040 | but the application of these language models
01:52:12.080 | to generation, to program synthesis,
01:52:15.120 | but generation of programs.
01:52:16.600 | So if you look at GitHub Copilot,
01:52:19.200 | which is based on OpenAI's Codex,
01:52:21.240 | I don't know if you got a chance to look at it,
01:52:22.800 | but it's the system that's able to generate code
01:52:26.200 | once you prompt it with, what is it?
01:52:30.080 | Like the header of a function with some comments.
01:52:32.720 | It seems to do an incredibly good job,
01:52:34.880 | or not a perfect job, which is very important,
01:52:39.240 | but an incredibly good job of generating functions.
01:52:42.920 | What do you make of that?
01:52:44.280 | Are you, is this exciting,
01:52:45.520 | or is this just a party trick, a demo?
01:52:49.000 | Or is this revolutionary?
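The kind of prompt being described is just a function header plus a comment; in the sketch below, the signature and docstring are the prompt, and the body is a hand-written stand-in for the sort of completion a Codex-style model typically produces (it is not actual model output).

```python
# Prompt: a signature and a docstring. Completion: the body below the marker,
# written by hand here to illustrate the workflow, not generated by a model.

def moving_average(values: list[float], window: int) -> list[float]:
    """Return the moving average of `values` over a sliding window."""
    # --- everything below this line is what the model would be asked to generate ---
    if window <= 0:
        raise ValueError("window must be positive")
    result = []
    for i in range(len(values) - window + 1):
        result.append(sum(values[i:i + window]) / window)
    return result

print(moving_average([1.0, 2.0, 3.0, 4.0, 5.0], window=2))   # [1.5, 2.5, 3.5, 4.5]
```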
01:52:50.480 | - I haven't worked with this yet,
01:52:53.000 | so it's difficult for me to judge it,
01:52:55.160 | but I would not be surprised
01:52:57.160 | if it turns out to be revolutionary.
01:52:59.600 | And that's because the majority of programming tasks
01:53:01.800 | that are being done in the industry right now
01:53:04.280 | are not creative.
01:53:05.720 | - Yeah.
01:53:06.560 | - People are writing code that other people have written,
01:53:08.520 | or they're putting things together
01:53:09.720 | from code fragments that others have had.
01:53:11.600 | And a lot of the work that programmers do in practice
01:53:14.320 | is to figure out how to overcome the gaps
01:53:17.800 | in their current knowledge,
01:53:19.600 | in the things that people have already done.
01:53:20.960 | - How to copy and paste from Stack Overflow, that's right.
01:53:23.400 | - Yes, and so of course we can automate that.
01:53:26.440 | - Yeah, to make it much faster
01:53:29.120 | to copy and paste from Stack Overflow.
01:53:30.880 | - Yes, but it's not just copying and pasting,
01:53:32.800 | it's also basically learning which parts you need to modify
01:53:36.560 | to make them fit together.
01:53:38.160 | - Yeah, like literally sometimes as simple
01:53:41.160 | as just changing the variable names
01:53:43.360 | so it fits into the rest of your code.
01:53:45.040 | - Yes, but this requires that you understand
01:53:46.840 | the semantics of what you're doing to some degree.
01:53:48.720 | - Yeah, and you can automate some of those things.
01:53:51.600 | The thing that makes people nervous, of course,
01:53:53.520 | is that a little bit wrong in a program
01:53:57.760 | can have a dramatic effect
01:54:00.040 | on the actual final operation of that program.
01:54:03.520 | So that's one little error,
01:54:05.400 | which in the space of language doesn't really matter,
01:54:08.800 | but in the space of programs can matter a lot.
01:54:12.000 | - Yes, but this is already what is happening
01:54:14.160 | when humans program code.
01:54:15.880 | - Yeah, this is--
01:54:16.880 | - So we have a technology to deal with this.
01:54:20.280 | - Somehow it becomes scarier
01:54:22.720 | when you know that a program generated code
01:54:25.160 | that's running a nuclear power plant.
01:54:27.000 | It becomes scarier.
01:54:29.160 | You know humans have errors too.
01:54:31.440 | - Exactly.
01:54:32.280 | - But it's scarier when a program is doing it because,
01:54:35.880 | why, why?
01:54:38.740 | - I mean, there's a fear that a program,
01:54:43.700 | like a program may not be as good as humans
01:54:48.060 | to know when stuff is important to not mess up.
01:54:51.380 | Like there's a misalignment of priorities,
01:54:58.340 | of values, that's potential.
01:55:01.300 | Maybe that's the source of the worry.
01:55:03.540 | I mean, okay, if I give you code generated by
01:55:08.020 | GitHub Copilot and code generated by a human
01:55:12.500 | and say, here, use one of these,
01:55:15.980 | which, how do you select today and in the next 10 years,
01:55:20.340 | which code to use?
01:55:21.820 | Wouldn't you still be comfortable with the human?
01:55:24.260 | - At the moment, when you go to Stanford to get an MRI,
01:55:29.540 | they will write a bill to the insurance over $20,000.
01:55:34.540 | And of this, maybe half of that gets paid by the insurance
01:55:38.260 | and a quarter gets paid by you.
01:55:40.540 | And the MRI cost them $600 to make, maybe, probably less.
01:55:44.900 | And what are the values of the person
01:55:47.660 | that writes the software and deploys this process?
01:55:50.520 | It's very difficult for me to say whether I trust people.
01:55:56.220 | I think that what happens there is a mixture
01:55:58.700 | of proper Anglo-Saxon Protestant values
01:56:01.940 | where somebody is trying to serve an abstract greater whole
01:56:04.860 | and organized crime.
01:56:06.300 | - Well, that's a very harsh,
01:56:08.020 | I think that's a harsh view of humanity.
01:56:15.500 | There's a lot of bad people, whether incompetent
01:56:18.780 | or just malevolent in this world, yes.
01:56:21.700 | But it feels like the more malevolent,
01:56:25.820 | so the more damage you do to the world,
01:56:29.580 | the more resistance you have in your own human heart.
01:56:34.580 | - But don't explain with malevolence or stupidity
01:56:37.140 | what can be explained by just people
01:56:38.780 | acting on their incentives.
01:56:40.140 | So what happens in Stanford is not that somebody is evil.
01:56:45.100 | It's just that they do what they're being paid for.
01:56:48.740 | - No, it's not evil.
01:56:50.700 | I tend to, no, I see that as malevolence.
01:56:53.740 | I see as I, even like being a good German,
01:56:58.780 | as I told you offline, is some,
01:57:01.540 | it's not absolute malevolence, but it's a small amount.
01:57:06.140 | It's cowardice.
01:57:07.480 | I mean, when you see there's something wrong with the world,
01:57:10.580 | it's either incompetence and you're not able to see it,
01:57:15.100 | or it's cowardice that you're not able to stand up,
01:57:17.780 | not necessarily in a big way, but in a small way.
01:57:21.620 | So I do think that is a bit of malevolence.
01:57:25.780 | I'm not sure the example you're describing
01:57:27.660 | is a good example of that. - So the question is,
01:57:28.740 | what is it that you are aiming for?
01:57:31.220 | And if you don't believe in the future,
01:57:34.900 | if you, for instance, think that the dollar
01:57:36.620 | is going to crash, why would you try to save dollars?
01:57:39.540 | If you don't think that humanity will be around
01:57:42.580 | in 100 years from now because global warming
01:57:45.460 | will wipe out civilization,
01:57:47.500 | why would you need to act as if it were?
01:57:49.500 | So the question is, is there an overarching aesthetics
01:57:53.980 | that is projecting you and the world into the future,
01:57:56.900 | which I think is the basic idea of religion,
01:57:59.020 | that you understand the interactions
01:58:01.200 | that we have with each other
01:58:02.340 | as some kind of civilization level agent
01:58:04.760 | that is projecting itself into the future.
01:58:07.180 | If you don't have that shared purpose,
01:58:09.260 | what is there to be ethical for?
01:58:12.940 | So I think when we talk about ethics and AI,
01:58:16.420 | we need to go beyond the insane bias discussions and so on,
01:58:20.020 | where people are just measuring the distance
01:58:22.040 | between a statistic and their preferred current world model.
01:58:27.040 | - The optimism, wait, wait, wait,
01:58:29.360 | I was a little confused by the previous thing,
01:58:31.180 | just to clarify.
01:58:32.280 | There is a kind of underlying morality
01:58:39.820 | to having an optimism that human civilization
01:58:43.620 | will persist for longer than 100 years.
01:58:45.780 | I think a lot of people believe
01:58:50.060 | that it's a good thing for us to keep living.
01:58:53.220 | - Yeah, of course. - And thriving.
01:58:54.060 | - This morality itself is not an end in itself.
01:58:56.880 | It's instrumental to people living in 100 years from now.
01:59:00.940 | - Right. - Or 500 years from now.
01:59:03.100 | So it's only justifiable if you actually think
01:59:06.560 | that it will lead to people
01:59:08.580 | or increase the probability of people being around
01:59:10.900 | in that timeframe.
01:59:12.500 | And a lot of people don't actually believe that,
01:59:14.980 | at least not actively.
01:59:16.080 | - But believe what exactly?
01:59:18.020 | So I was-- - Most people don't believe
01:59:20.660 | that they can afford to act on such a model.
01:59:23.540 | Basically what happens in the US
01:59:25.380 | is I think that the healthcare system
01:59:26.980 | is for a lot of people no longer sustainable,
01:59:29.020 | which means that if they need the help
01:59:30.660 | of the healthcare system,
01:59:31.640 | they're often not able to afford it.
01:59:33.540 | And when they cannot help it,
01:59:35.140 | they are often going bankrupt.
01:59:37.380 | I think the leading cause of personal bankruptcy
01:59:40.300 | in the US is the healthcare system.
01:59:42.540 | - Yeah. - And that would not
01:59:44.080 | be necessary.
01:59:44.920 | It's not because people are consuming
01:59:46.740 | more and more medical services
01:59:48.820 | and are achieving a much, much longer life as a result.
01:59:51.540 | That's not actually the story that is happening
01:59:53.700 | because you can compare it to other countries.
01:59:55.380 | And life expectancy in the US is currently not increasing
01:59:58.500 | and it's not as high as in all the other
02:00:00.480 | industrialized countries.
02:00:01.760 | So some industrialized countries are doing better
02:00:03.880 | with a much cheaper healthcare system.
02:00:06.340 | And what you can see is, for instance, administrative load.
02:00:10.060 | The healthcare system has maybe to some degree deliberately
02:00:14.780 | been set up as a job placement program to allow people
02:00:18.940 | to continue living a middle-class existence,
02:00:21.020 | despite not having a useful use case in productivity.
02:00:26.020 | So they are being paid to push paper around.
02:00:29.440 | And the number of administrators in the healthcare system
02:00:32.300 | has been increasing much faster
02:00:33.960 | than the number of practitioners.
02:00:35.820 | And this is something that you have to pay for, right?
02:00:37.860 | And also the revenues that are being generated
02:00:41.260 | in the healthcare system are relatively large
02:00:42.980 | and somebody has to pay for them.
02:00:44.480 | And the reason why they are so large
02:00:46.660 | is because market mechanisms are not working.
02:00:49.140 | The FDA is largely not protecting people
02:00:52.620 | from malpractice of healthcare providers.
02:00:56.100 | The FDA is protecting healthcare providers
02:00:58.700 | from competition.
02:00:59.980 | - Right, okay, okay.
02:01:00.820 | - So this is a thing that has to do with values.
02:01:03.420 | And this is not because people are malicious on all levels.
02:01:06.500 | It's because they are not incentivized to act
02:01:09.140 | on a greater whole, on this idea that you treat somebody
02:01:12.780 | who comes to you as a patient
02:01:14.360 | like you would treat a family member.
02:01:15.680 | - Yeah, yeah, but we're trying, I mean,
02:01:18.040 | you're highlighting a lot of the flaws
02:01:20.040 | of the different institutions
02:01:21.240 | the systems we're operating under.
02:01:23.120 | But I think there's a continued throughout history
02:01:25.920 | mechanism design of trying to design incentives
02:01:29.360 | in such a way that these systems behave better
02:01:31.680 | and better and better.
02:01:32.760 | I mean, it's a very difficult thing to operate
02:01:35.320 | a society of hundreds of millions of people
02:01:38.160 | effectively with--
02:01:39.200 | - Yes, so do we live in a society
02:01:41.400 | that is ever correcting?
02:01:42.820 | Is this, do we observe that our models
02:01:46.740 | of what we are doing are predictive of the future
02:01:49.420 | and when they are not, we improve them.
02:01:51.540 | Are our laws adjudicated with clauses
02:01:54.780 | that you put into every law,
02:01:56.020 | what is meant to be achieved by that law
02:01:57.900 | and the law will be automatically repealed
02:02:00.060 | if it's not achieving that, right?
02:02:01.360 | If you are optimizing your own laws,
02:02:03.260 | if you're writing your own source code,
02:02:05.160 | you probably make an estimate of what is the thing
02:02:08.180 | that's currently wrong in my life?
02:02:09.420 | What is it that I should change about my own policies?
02:02:12.200 | What is the expected outcome?
02:02:14.100 | And if that outcome doesn't manifest,
02:02:16.580 | I will change the policy back, right?
02:02:18.460 | Or I would change it into something different.
02:02:20.280 | Are we doing this on a societal level?
02:02:22.220 | - I think so.
02:02:23.060 | I think it's easy to sort of highlight the,
02:02:25.580 | I think we're doing it in the way that,
02:02:27.960 | like I operate my current life.
02:02:30.380 | I didn't sleep much last night.
02:02:32.580 | You would say that, Lex, the way you need
02:02:35.180 | to operate your life is you need to always get sleep.
02:02:37.340 | The fact that you didn't sleep last night
02:02:39.060 | is totally the wrong way to operate in your life.
02:02:43.060 | Like you should have gotten all your shit done in time
02:02:46.460 | and gotten to sleep because sleep is very important
02:02:48.940 | for health and you're highlighting,
02:02:50.540 | look, this person is not sleeping.
02:02:52.500 | Look, the medical, the healthcare system is operating poorly.
02:02:56.380 | But the point is that we just,
02:02:59.140 | it seems like this is the way,
02:03:00.460 | especially in the capitalist society we operate,
02:03:02.700 | we keep running into trouble and last minute,
02:03:06.100 | we try to get our way out through innovation
02:03:09.260 | and it seems to work.
02:03:10.760 | You have a lot of people that ultimately are trying
02:03:13.380 | to build a better world and get urgency about them
02:03:18.380 | when the problem becomes more and more imminent.
02:03:22.900 | And that's the way this operates.
02:03:24.380 | But if you look at the history, the long arc of history,
02:03:29.380 | it seems like that operating on deadlines produces progress
02:03:35.540 | and builds better and better systems.
02:03:36.980 | - You probably agree with me that the US
02:03:39.060 | should have engaged in mask production in January, 2020.
02:03:44.060 | And that we should have shut down the airports early on
02:03:47.900 | and that we should have made it mandatory
02:03:50.980 | that the people that work in nursing homes
02:03:53.340 | live on campus, rather than living at home
02:03:57.940 | and then coming in and infecting people in the nursing homes
02:04:01.480 | that had no immune response to COVID.
02:04:03.940 | And that is something that was, I think, visible back then.
02:04:08.180 | The correct decisions haven't been made.
02:04:10.580 | We would have the same situation again.
02:04:12.580 | How do we know that these wrong decisions
02:04:14.340 | are not being made again?
02:04:15.780 | Have the people that made the decisions
02:04:17.620 | to not protect the nursing homes been punished?
02:04:20.580 | Have the people that made the wrong decisions
02:04:23.180 | with respect to testing that prevented the development
02:04:26.780 | of testing by startup companies and the importing of tests
02:04:30.180 | from countries that already had them,
02:04:32.140 | have these people been held responsible?
02:04:34.420 | - First of all, so what do you want to put
02:04:37.380 | before the firing squad?
02:04:38.780 | I think they are being held responsible.
02:04:39.620 | - No, just make sure that this doesn't happen again.
02:04:41.780 | - No, but it's not that.
02:04:44.700 | Yes, they're being held responsible by many voices,
02:04:47.120 | by people being frustrated.
02:04:48.820 | There's new leaders being born now
02:04:50.740 | that are going to rise to the top in 10 years.
02:04:54.200 | This moves slower than, there's obviously a lot
02:04:57.740 | of older incompetence and bureaucracy
02:05:01.220 | and these systems move slowly.
02:05:03.660 | They move like science, one death at a time.
02:05:06.860 | So yes, I think the pain that's been felt
02:05:11.340 | in the previous year is reverberating throughout the world.
02:05:15.540 | - Maybe I'm getting old.
02:05:16.700 | I suspect that every generation in the US
02:05:19.180 | after the war has lost the plot even more.
02:05:21.520 | I don't see this development.
02:05:23.180 | - The war, World War II?
02:05:24.720 | - Yes, so basically there was a time
02:05:26.740 | when we were modernist.
02:05:29.140 | And in this modernist time, the US felt actively threatened
02:05:33.660 | by the things that happened in the world.
02:05:35.740 | The US was worried about possibility of failure.
02:05:38.740 | And this imminence of possible failure led to decisions.
02:05:44.620 | There was a time when the government would listen
02:05:47.380 | to physicists about how to do things.
02:05:50.580 | And the physicists were actually concerned
02:05:52.140 | about what the government should be doing.
02:05:53.620 | So they would be writing letters to the government.
02:05:56.140 | And so for instance, the decision
02:05:57.780 | for the Manhattan Project was something
02:05:59.380 | that was driven in a conversation
02:06:01.760 | between physicists and the government.
02:06:04.060 | I don't think such a discussion would take place today.
02:06:06.940 | - I disagree.
02:06:07.980 | I think if the virus was much deadlier,
02:06:10.540 | we would see a very different response.
02:06:12.660 | I think the virus was not sufficiently deadly.
02:06:14.960 | And instead, because it wasn't very deadly,
02:06:17.460 | what happened is the current system
02:06:20.460 | started to politicize it.
02:06:22.020 | The mask, this is what I realized with masks early on.
02:06:25.340 | They very quickly became not a solution,
02:06:29.620 | but they became a thing that politicians used
02:06:32.660 | to divide the country.
02:06:33.980 | So the same things happened with vaccines, same thing.
02:06:37.020 | So like nobody's really,
02:06:38.820 | people weren't talking about solutions to this problem
02:06:41.180 | because I don't think the problem was bad enough.
02:06:43.140 | When you talk about the war,
02:06:45.100 | I think our lives are too comfortable.
02:06:47.980 | I think in the developed world, things are too good
02:06:52.300 | and we have not faced severe dangers.
02:06:54.980 | When the danger, the severe dangers,
02:06:57.540 | existential threats are faced, that's when we step up
02:07:00.780 | on a small scale and a large scale.
02:07:03.020 | Now, I don't, that's sort of my argument here,
02:07:08.020 | but I did think the virus,
02:07:10.700 | I was hoping that it was actually sufficiently dangerous
02:07:14.940 | for us to step up because especially in the early days,
02:07:18.660 | it was unclear.
02:07:19.720 | It still is unclear because of mutations,
02:07:23.260 | how bad it might be, right?
02:07:25.780 | And so I thought we would step up and even,
02:07:30.700 | so the masks point is a tricky one because to me,
02:07:35.700 | the manufacture of masks isn't even the problem.
02:07:38.820 | I'm still to this day and I was involved
02:07:41.060 | with a bunch of this work,
02:07:42.700 | have not seen good science done on whether masks work or not.
02:07:45.900 | Like there still has not been a large scale study.
02:07:49.440 | To me, that should be, there should be large scale studies
02:07:51.860 | on every possible solution, like aggressive,
02:07:55.180 | in the same way that the vaccine development was aggressive.
02:07:57.780 | There should be masks, which tests,
02:07:59.900 | what kind of tests work really well,
02:08:02.420 | what kind of, like even the question
02:08:04.700 | of how the virus spreads,
02:08:06.020 | there should be aggressive studies on that to understand.
02:08:09.820 | I'm still, as far as I know,
02:08:12.180 | there's still a lot of uncertainty about that.
02:08:14.180 | Nobody wants to see this as an engineering problem
02:08:17.100 | that needs to be solved.
02:08:18.540 | It's that I was surprised about,
02:08:21.740 | but I would-- - I find that our views
02:08:23.220 | are largely convergent, but not completely.
02:08:25.480 | So I agree with the thing that,
02:08:27.860 | because our society in some sense perceives itself
02:08:30.920 | as too big to fail.
02:08:32.620 | - Right. - And the virus
02:08:34.220 | did not alert people to the fact
02:08:35.980 | that we are facing possible failure.
02:08:37.960 | That basically put us into the postmodernist mode.
02:08:41.580 | And I don't mean in a philosophical sense,
02:08:43.300 | but in a societal sense,
02:08:45.260 | the difference between the postmodern society
02:08:47.980 | and the modern society is that the modernist society
02:08:50.580 | has to deal with the ground truths,
02:08:52.340 | and the postmodernist society has to deal with appearances.
02:08:55.540 | Politics becomes a performance,
02:08:57.860 | and the performance is done for an audience,
02:08:59.820 | and the organized audience is the media,
02:09:02.260 | and the media evaluates itself via other media, right?
02:09:05.380 | So you have an audience of critics
02:09:07.280 | that evaluate themselves.
02:09:09.100 | And I don't think it's so much the failure
02:09:10.700 | of the politicians, because to get in power
02:09:12.820 | and to stay in power, you need to be able
02:09:15.720 | to deal with the published opinion.
02:09:17.580 | - Well, I think it goes in cycles,
02:09:19.220 | because what's going to happen
02:09:21.580 | is all of the small business owners,
02:09:24.260 | all the people who truly are suffering
02:09:26.620 | and will suffer more because of the effects
02:09:29.500 | of the closure of the economy
02:09:31.980 | and the lack of solutions to the virus,
02:09:34.260 | they're going to rise up.
02:09:36.340 | And hopefully, I mean, this is where charismatic leaders
02:09:40.200 | can get the world in trouble.
02:09:42.660 | But hopefully, we'll elect great leaders
02:09:47.900 | that will break through this postmodernist idea
02:09:51.180 | of the media and the perception
02:09:55.460 | and the drama on Twitter and all that kind of stuff.
02:09:57.700 | - But you know this can go either way.
02:09:59.380 | - Yeah.
02:10:00.340 | - When the Weimar Republic was unable
02:10:02.600 | to deal with the economic crisis that Germany was facing,
02:10:07.600 | there was an option to go back.
02:10:09.180 | And there were people which thought,
02:10:11.700 | let's get back to a constitutional monarchy
02:10:14.500 | and let's get this to work,
02:10:16.640 | because democracy doesn't work.
02:10:18.860 | And eventually, there was no way back.
02:10:21.820 | People decided there was no way back.
02:10:23.380 | They needed to go forward.
02:10:24.500 | And the only options for going forward
02:10:26.780 | was to become a Stalinist communist,
02:10:29.540 | basically an option to completely expropriate
02:10:34.320 | the factories and so on and nationalize them
02:10:36.900 | and to reorganize Germany in communist terms
02:10:40.620 | and ally itself with Stalin, or fascism.
02:10:44.740 | And both options were obviously very bad.
02:10:48.040 | And the one that the Germans picked led to a catastrophe
02:10:51.360 | that devastated Europe.
02:10:54.380 | And I'm not sure if the US has an immune response
02:10:57.240 | against that.
02:10:58.080 | I think that the far right is currently very weak in the US,
02:11:01.440 | but this can easily change.
02:11:03.040 | - Do you think from a historical perspective,
02:11:08.840 | Hitler could have been stopped from within Germany
02:11:12.320 | or from outside or this?
02:11:15.040 | Well, depends on who you want to focus,
02:11:17.880 | whether you want to focus on Stalin or Hitler,
02:11:20.300 | but it feels like Hitler was the one
02:11:22.460 | as a political movement that could have been stopped.
02:11:25.260 | - I think that the point was that
02:11:28.120 | a lot of people wanted Hitler.
02:11:29.840 | So he got support from a lot of quarters.
02:11:32.420 | It was a number of industrialists who supported him
02:11:35.140 | because they thought that the democracy
02:11:36.820 | is obviously not working and unstable
02:11:38.520 | and you need a strong man.
02:11:40.680 | And he was willing to play that part.
02:11:43.280 | There were also people in the US
02:11:45.060 | who thought that Hitler would stop Stalin
02:11:47.940 | and would act as a bulwark against Bolshevism,
02:11:51.520 | which he probably would have done, right?
02:11:54.220 | But at which cost?
02:11:56.260 | And then many of the things that he was going to do,
02:11:59.780 | like the Holocaust,
02:12:01.660 | was something where people thought this is rhetoric.
02:12:04.660 | He's not actually going to do this.
02:12:07.180 | Especially many of the Jews themselves,
02:12:09.120 | who were humanists.
02:12:10.060 | And for them, this was outside of the scope
02:12:12.340 | that was thinkable.
02:12:13.300 | - Right.
02:12:14.300 | I mean, I wonder if Hitler is uniquely,
02:12:18.960 | I want to carefully use this term, but uniquely evil.
02:12:23.520 | So if Hitler was never born,
02:12:26.420 | if somebody else would come in his place.
02:12:29.100 | So like, just thinking about the progress of history,
02:12:33.820 | how important are those singular figures
02:12:36.740 | that lead to mass destruction and cruelty?
02:12:41.020 | Because my sense is Hitler was unique.
02:12:45.280 | It wasn't just about the environment
02:12:49.460 | and the context that gave rise to him.
02:12:50.980 | Another person would not have come in his place
02:12:54.780 | to do things as destructive as the things that he did.
02:12:58.220 | There was a combination of charisma,
02:13:01.780 | of madness, of psychopathy, of just ego,
02:13:06.220 | all of those things,
02:13:07.220 | which are very unlikely to come together
02:13:09.580 | in one person in the right time.
02:13:11.720 | - It also depends on the context of the country
02:13:14.700 | that you're operating in.
02:13:16.540 | If you tell the Germans that they have a historical destiny
02:13:21.140 | in this romantic country,
02:13:23.860 | the effect is probably different
02:13:25.540 | than it is in other countries.
02:13:27.220 | But Stalin has killed a few more people than Hitler did.
02:13:32.220 | And if you look at the probability
02:13:35.860 | that you survived under Stalin versus under Hitler:
02:13:37.580 | Hitler killed people if he thought
02:13:43.140 | they were not worth living,
02:13:45.140 | or if they were harmful to his racist project.
02:13:49.260 | He basically felt that the Jews would be too cosmopolitan
02:13:52.580 | and would not be willing to participate
02:13:55.140 | in the racist redefinition of society
02:13:57.500 | and the value of society and an ethnostate in this way,
02:14:01.420 | as he wanted to have it.
02:14:03.240 | So he saw them as a harmful danger,
02:14:06.980 | especially since they played such an important role
02:14:09.480 | in the economy and culture of Germany.
02:14:13.300 | And so he basically had some radical,
02:14:18.040 | but rational reason to murder them.
02:14:20.780 | And Stalin just killed everyone.
02:14:23.400 | He basically, the Stalinist purges were such a random thing
02:14:26.140 | where he said that there's a certain possibility
02:14:31.580 | that this particular part of the population
02:14:34.660 | has a number of German collaborators or something,
02:14:36.740 | and we just kill them all.
02:14:38.820 | Or if you look at what Mao did,
02:14:40.660 | the number of people that were killed in absolute numbers
02:14:44.260 | were much higher under Mao than they were under Stalin.
02:14:47.700 | So it's super hard to say.
02:14:49.540 | The other thing is that you look at Genghis Khan and so on,
02:14:53.580 | how many people he killed.
02:14:55.080 | You see there are a number of things
02:14:58.940 | that happened in human history
02:14:59.940 | that actually really put a substantial dent
02:15:02.540 | in the existing population, or Napoleon.
02:15:05.940 | And it's very difficult to eventually measure it
02:15:09.540 | because what's happening is basically evolution
02:15:12.060 | on a human scale,
02:15:15.020 | where one monkey figures out a way to become viral
02:15:19.340 | and is using this viral technology
02:15:22.420 | to change the patterns of society
02:15:24.580 | at the very, very large scale.
02:15:26.580 | And what we find so abhorrent about these changes
02:15:29.920 | is the complexity that is being destroyed by this.
02:15:32.380 | That it's basically like a big fire
02:15:33.740 | that burns out a lot of the existing culture and structure
02:15:36.840 | that existed before.
02:15:38.140 | - Yeah, and it all just starts with one monkey,
02:15:42.640 | one charismatic ape,
02:15:44.500 | and there's a bunch of them throughout history.
02:15:46.100 | - Yeah, but it's in a given environment.
02:15:48.000 | It's basically similar to wildfires in California, right?
02:15:51.140 | The temperature is rising, there is less rain falling,
02:15:55.580 | and then suddenly a single spark can have an effect
02:15:57.980 | that in other times would be contained.
02:15:59.980 | - Okay, speaking of which,
02:16:03.400 | I love how we went to Hitler and Stalin
02:16:05.860 | from 20, 30 minutes ago,
02:16:09.060 | GPT-3 generating, doing program synthesis.
02:16:13.640 | The argument was about morality of AI versus human.
02:16:18.320 | And specifically in the context of writing programs,
02:16:26.240 | specifically in the context of programs
02:16:28.560 | that can be destructive.
02:16:29.960 | So running nuclear power plants
02:16:31.840 | or autonomous weapons systems, for example.
02:16:35.120 | And I think your inclination was to say
02:16:39.180 | that it's not so obvious
02:16:40.740 | that AI would be less moral than humans,
02:16:43.480 | or less effective at making a world
02:16:46.720 | that would make humans happy.
02:16:48.640 | - So I'm not talking about self-directed systems
02:16:52.660 | that are making their own goals at a global scale.
02:16:57.300 | If you just talk about the deployment
02:16:59.140 | of technological systems that are able
02:17:01.140 | to see order and patterns and use this as control models
02:17:05.540 | to act on the goals that we give them,
02:17:08.400 | then if you have the correct incentives
02:17:11.120 | to set the correct incentives for these systems,
02:17:13.180 | I'm quite optimistic.
02:17:14.360 | - But so humans versus AI, let me give you an example.
02:17:20.660 | Autonomous weapons systems.
02:17:22.180 | Let's say there's a city somewhere in the Middle East
02:17:26.900 | that has a number of terrorists.
02:17:30.380 | And the question is, what's currently done
02:17:33.300 | with drone technology is you have information
02:17:36.300 | about the location of a particular terrorist,
02:17:38.540 | and you have a targeted attack,
02:17:40.620 | you have a bombing of that particular building.
02:17:42.980 | And that's all directed by humans
02:17:45.920 | at the high level strategy,
02:17:47.980 | and also at the deployment of individual bombs
02:17:50.120 | and missiles, like the actual,
02:17:52.580 | everything is done by a human except the final targeting.
02:17:56.720 | And it's like with Spot, a similar thing,
02:17:59.740 | like controlling the flight.
02:18:01.860 | Okay, what if you give AI control and saying,
02:18:05.780 | write a program that says,
02:18:10.340 | here's the best information I have available
02:18:12.220 | about the location of these five terrorists.
02:18:14.820 | Here's the city, make sure
02:18:16.820 | all the bombing you do is constrained to the city.
02:18:19.460 | Make sure it's precision-based, but you take care of it.
02:18:22.880 | So you do one level of abstraction out and saying,
02:18:26.820 | take care of the terrorists in the city.
02:18:29.600 | Which are you more comfortable with,
02:18:31.440 | the humans or the JavaScript GPT-3 generated code
02:18:35.720 | that's doing the deployment?
02:18:38.240 | I mean, that's, this is the kind of question I'm asking,
02:18:42.360 | is the kind of bugs that we see in human nature,
02:18:47.120 | are they better or worse than the kind of bugs we see in AI?
02:18:51.240 | - They're different bugs.
02:18:52.480 | There is an issue if people are creating
02:18:56.040 | an imperfect automation of a process
02:18:59.960 | that normally requires a moral judgment.
02:19:02.920 | And this moral judgment is the reason why
02:19:06.160 | it often cannot be automated.
02:19:07.520 | It's not because the computation is too expensive,
02:19:12.240 | but because the model that you give the AI
02:19:14.360 | is not an adequate model of the dynamics
02:19:17.000 | of the world because the AI does not understand
02:19:19.320 | the context that it's operating in the right way.
02:19:22.000 | And this is something that already happens with Excel.
02:19:24.840 | Right, you don't need to have an AI system to do this.
02:19:27.920 | If you have an automated process in place
02:19:30.440 | where humans decide using automated criteria,
02:19:33.280 | whom to kill when and whom to target when,
02:19:36.120 | which already happens, right?
02:19:38.280 | And you have no way to get off the kill list
02:19:40.360 | once that happens.
02:19:41.640 | Once you have been targeted according
02:19:43.360 | to some automatic criterion by people, right?
02:19:45.800 | In a bureaucracy.
02:19:46.840 | That is the issue.
02:19:49.080 | The issue is not the AI, it's the automation.
02:19:52.360 | - So there's something about, right, it's automation.
02:19:56.480 | But there's something about the,
02:19:58.920 | there's a certain level of abstraction
02:20:00.760 | where you give control to AI to do the automation.
02:20:04.440 | There's a scale that could be achieved
02:20:07.240 | that it feels like the scale of bug and scale of mistake
02:20:10.880 | and scale of destruction that could be achieved
02:20:14.720 | of the kind that humans cannot achieve.
02:20:17.000 | So AI is much more able to destroy
02:20:19.760 | an entire country accidentally versus humans.
02:20:22.720 | It feels like the more civilians die as a result
02:20:27.320 | or suffer as the consequences of your decisions,
02:20:31.040 | the more weight there is on the human mind
02:20:34.400 | to make that decision.
02:20:35.640 | And so like, it becomes more and more unlikely
02:20:39.200 | to make that decision for humans.
02:20:41.600 | For AI, it feels like it's harder
02:20:43.840 | to encode that kind of weight.
02:20:47.160 | - In a way, the AI that we're currently building
02:20:49.760 | is automating statistics, right?
02:20:52.040 | Intelligence is the ability to make models
02:20:54.000 | so you can act on them.
02:20:55.360 | And AI is the tool to make better models.
02:20:57.560 | So in principle, if you're using AI wisely,
02:21:01.680 | you're able to prevent more harm.
02:21:04.360 | And I think that the main issue is not on the side
02:21:07.360 | of the AI, it's on the side of the human command hierarchy
02:21:10.040 | that is using technology irresponsibly.
02:21:12.360 | - So the question is, how hard is it to encode,
02:21:15.800 | to properly encode the right incentives into the AI?
02:21:19.120 | - So for instance, there's this idea,
02:21:21.480 | what happens if we let our airplanes being flown
02:21:24.520 | with AI systems and then neural network is a black box
02:21:27.680 | and so on.
02:21:28.520 | And it turns out our neural networks
02:21:30.240 | are actually not black boxes anymore.
02:21:32.360 | They are function approximators using linear algebra
02:21:36.680 | and they are performing things that we can understand.
02:21:40.080 | But we can also, instead of letting the neural network fly
02:21:43.400 | the airplane, use the neural network to generate a proof
02:21:46.120 | of the correct program.
02:21:47.480 | There's a degree of accuracy of the proof
02:21:49.960 | that a human could not achieve.
02:21:51.880 | And so we can use our AI by combining different technologies
02:21:55.600 | to build systems that are much more reliable
02:21:57.720 | than the systems that a human being could create.
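A minimal sketch of the "function approximator using linear algebra" point above: the toy example below fits a tiny two-layer network to a simple curve using nothing but matrix multiplications, an elementwise nonlinearity, and hand-written gradients, so every step is inspectable arithmetic. The layer sizes, learning rate, and target function are illustrative assumptions, not anything specified in the conversation, and this is not a claim about how aircraft-grade verification would actually be done.

```python
# Illustrative sketch: a two-layer neural network as a function
# approximator built from plain linear algebra (matrix multiplies
# plus a tanh nonlinearity). All sizes and constants are made up.
import numpy as np

rng = np.random.default_rng(0)

# Training data: approximate y = sin(x) on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# Parameters of a 1 -> 32 -> 1 network
W1 = rng.normal(0.0, 0.5, (1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    # Forward pass: linear map, tanh, linear map
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass: gradients of mean squared error, written out by hand
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    # Plain gradient descent update
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

print("final mean squared error:", float((err ** 2).mean()))
```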
02:22:00.480 | And so in this sense, I would say that if you use
02:22:04.480 | an early stage of technology to save labor
02:22:08.400 | and don't employ competent people,
02:22:14.240 | but just hack something together, because you can,
02:22:14.240 | that is very dangerous.
02:22:15.320 | And if people are acting under these incentives
02:22:17.240 | that they get away with delivering shoddy work
02:22:20.420 | more cheaply using AI, there's less human oversight
02:22:23.160 | than before, that's very dangerous.
02:22:25.160 | - The thing is though, AI is still going to be unreliable,
02:22:29.000 | perhaps less so than humans,
02:22:30.440 | but it will be unreliable in novel ways.
02:22:35.360 | - Yeah, but this is an empirical question
02:22:37.240 | and it's something that we can figure out and work with.
02:22:39.920 | So the issue is, do we trust the systems,
02:22:43.180 | the social systems that we have in place
02:22:45.400 | and the social systems that we can build and maintain
02:22:48.100 | that they're able to use AI responsibly?
02:22:50.480 | If they can, then AI is good news.
02:22:53.040 | If they cannot, then it's going to make
02:22:54.860 | the existing problems worse.
02:22:56.340 | - Well, and also who creates the AI, who controls it,
02:23:00.160 | who makes money from it, because it's ultimately humans.
02:23:03.220 | And then you start talking about
02:23:05.120 | how much you trust the humans.
02:23:07.000 | - So the question is, what does who mean?
02:23:08.800 | I don't think that we have identity per se.
02:23:11.200 | I think that the story of a human being is somewhat random.
02:23:15.580 | What happens is more or less that everybody
02:23:17.880 | is acting on their local incentives,
02:23:19.880 | what they perceive to be their incentives.
02:23:22.080 | And the question is, what are the incentives
02:23:24.720 | that the one that is pressing the button
02:23:27.480 | is operating under?
02:23:28.600 | - Yeah.
02:23:29.420 | It's nice for those incentives to be transparent.
02:23:32.680 | So for example, I'll give you an example.
02:23:36.120 | There seems to be a significant distrust of tech,
02:23:39.840 | like entrepreneurs in the tech space,
02:23:44.400 | or people that run, for example, social media companies,
02:23:47.440 | like Mark Zuckerberg.
02:23:48.960 | There's not a complete transparency of incentives
02:23:53.120 | under which that particular human being operates.
02:23:56.780 | We can listen to the words he says,
02:24:00.760 | or what the marketing team says for a company,
02:24:03.040 | but we don't know.
02:24:04.280 | And that becomes a problem when the algorithms
02:24:08.280 | and the systems created by him and other people
02:24:12.800 | in that company start having more and more impact on society.
02:24:16.680 | If the incentives were somehow,
02:24:21.960 | the definition and the explainability of the incentives
02:24:26.040 | was decentralized such that nobody can manipulate it,
02:24:30.880 | no propaganda type manipulation
02:24:34.240 | of how these systems actually operate could be done,
02:24:38.040 | then yes, I think AI could achieve much fairer,
02:24:43.040 | much more effective solutions
02:24:50.360 | to difficult ethical problems.
02:24:53.280 | But when there's humans in the loop
02:24:55.800 | manipulating the dissemination,
02:24:59.840 | the communication of how the system actually works,
02:25:02.440 | that feels like you can run into a lot of trouble.
02:25:05.320 | And that's why there's currently a lot of distrust
02:25:07.720 | for people at the heads of companies
02:25:10.160 | that have increasingly powerful AI systems.
02:25:12.660 | - I suspect what happened traditionally in the US
02:25:16.880 | was that since our decision-making is much more
02:25:19.840 | decentralized than in an authoritarian state, right?
02:25:23.000 | People are making decisions autonomously
02:25:24.800 | at many, many levels in a society.
02:25:26.960 | What happened was that we created coherence
02:25:30.280 | and cohesion in society by controlling what people thought
02:25:33.960 | and what information they had.
02:25:35.720 | The media synchronized public opinion
02:25:38.720 | and social media have disrupted this.
02:25:40.360 | It's not, I think, so much Russian influence or something.
02:25:43.800 | It's everybody's influence.
02:25:45.480 | It's that a random person can come up
02:25:47.840 | with a conspiracy theory and disrupt what people think.
02:25:52.440 | And if that conspiracy theory is more compelling
02:25:55.460 | or more attractive than the standardized
02:25:58.200 | public conspiracy theory that we give people as a default,
02:26:01.880 | then it might get more traction, right?
02:26:03.440 | You suddenly have the situation that a single individual
02:26:05.960 | somewhere on a farm in Texas has more listeners than CNN.
02:26:10.060 | - Which particular farmer are you referring to in Texas?
02:26:13.880 | (both laughing)
02:26:16.540 | - Probably no one.
02:26:19.200 | - Yes, I had dinner with him a couple of times.
02:26:20.880 | - Okay. - Right.
02:26:22.240 | It's an interesting situation because you cannot get
02:26:24.520 | to be an anchor in CNN if you don't go
02:26:26.840 | through a complicated gatekeeping process.
02:26:30.400 | And suddenly you have random people
02:26:32.480 | without that gatekeeping process
02:26:34.880 | just optimizing for attention.
02:26:36.960 | Not necessarily with a lot of responsibility
02:26:39.520 | for the long-term effects of projecting these theories
02:26:42.680 | into the public.
02:26:43.920 | And now there is a push of making social media
02:26:46.960 | more like traditional media, which means that the opinion
02:26:50.040 | that is being projected in social media
02:26:52.160 | is more limited to an acceptable range.
02:26:54.640 | With the goal of getting society into safe waters
02:26:58.400 | and increasing the stability and cohesion of society again,
02:27:00.800 | which I think is a laudable goal.
02:27:03.160 | But of course it also is an opportunity
02:27:05.100 | to seize the means of indoctrination.
02:27:08.360 | And the incentives that people are under when they do this
02:27:11.440 | are in such a way that the AI ethics that we would need
02:27:17.160 | becomes very often something like AI politics,
02:27:20.640 | which is basically partisan and ideological.
02:27:23.360 | And this means that whatever one side says,
02:27:26.160 | another side is going to be disagreeing with.
02:27:28.440 | In the same way as when you turn masks
02:27:30.920 | or the vaccine into a political issue,
02:27:33.160 | if you say that it is politically virtuous
02:27:35.700 | to get vaccinated, it will mean that the people
02:27:37.660 | that don't like you will not want to get vaccinated.
02:27:41.040 | And as soon as you have this partisan discourse,
02:27:43.600 | it's going to be very hard to make the right decisions
02:27:47.120 | because the incentives get to be the wrong ones.
02:27:48.880 | AI ethics needs to be super boring.
02:27:51.160 | It needs to be done by people who do statistics all the time
02:27:54.240 | and have extremely boring, long-winded discussions
02:27:58.240 | that most people cannot follow
02:27:59.640 | because they are too complicated, but that are dead serious.
02:28:02.540 | These people need to be able to be better at statistics
02:28:05.840 | than the leading machine learning researchers.
02:28:07.920 | And at the moment, the AI ethics debate is the one
02:28:12.040 | where you don't have any barrier to entry.
02:28:14.440 | Everybody who has a strong opinion
02:28:16.840 | and is able to signal that opinion in the right way-
02:28:18.760 | - Strong words from Joscha Bach.
02:28:20.480 | (laughing)
02:28:22.160 | - And to me, that's a very frustrating thing
02:28:24.360 | because the field is so crucially important to our future.
02:28:26.720 | - It's so crucially important,
02:28:28.280 | but the only qualification you currently need
02:28:31.880 | is to be outraged by the injustice in the world.
02:28:34.760 | - It's more complicated, right?
02:28:36.280 | Everybody seems to be outraged.
02:28:37.880 | But let's just say that the incentives
02:28:40.760 | are not always the right ones.
02:28:42.080 | So basically, I suspect that a lot of people
02:28:45.560 | that enter this debate don't have a vision
02:28:48.200 | for what society should be looking like
02:28:50.080 | in a way that is non-violent,
02:28:51.440 | where we preserve liberal democracy,
02:28:53.600 | where we make sure that we all get along
02:28:56.340 | and we are around in a few hundred years from now,
02:29:00.480 | preferably with a comfortable technological civilization
02:29:03.520 | around us.
02:29:04.880 | - I generally have a very foggy view of that world,
02:29:10.120 | but I tend to try to follow,
02:29:12.120 | and I think society should in some degree
02:29:13.940 | follow the gradient of love,
02:29:16.400 | increasing the amount of love in the world.
02:29:19.040 | And whenever I see different policies or algorithms
02:29:22.100 | or ideas that are not doing so,
02:29:24.560 | obviously those are the ones that I kind of resist.
02:29:27.980 | - So the thing that terrifies me about this notion
02:29:30.800 | is I think that German fascism was driven by love.
02:29:34.840 | It was just a very selective love.
02:29:37.920 | It was a love that basically-
02:29:38.760 | - But now you're just manipulating.
02:29:40.540 | I mean, that's, you have to be very careful.
02:29:45.540 | You're talking to the wrong person in this way about love.
02:29:50.560 | - So let's talk about what love is.
02:29:52.600 | And I think that love is the discovery of shared purpose.
02:29:56.020 | It's the recognition of the sacred in the other.
02:29:59.720 | And this enables non-transactional interactions.
02:30:02.840 | - But the size of the other that you include
02:30:07.800 | needs to be maximized.
02:30:09.800 | So it's basically appreciation,
02:30:14.760 | like deep appreciation of the world around you fully,
02:30:19.760 | like including the people that are very different than you,
02:30:26.000 | the people that disagree with you completely,
02:30:27.760 | including people, including living creatures
02:30:30.240 | outside of just people, including ideas.
02:30:33.520 | And it's like appreciation of the full mess of it.
02:30:36.840 | And also it has to do with like empathy,
02:30:39.200 | which is coupled with a lack of confidence,
02:30:44.040 | uncertainty about your own rightness.
02:30:47.160 | It's like an open, a radical open-mindedness
02:30:50.200 | to the way forward.
02:30:51.160 | - I agree with every part of what you said.
02:30:53.480 | And now if you scale it up, what you recognize
02:30:56.040 | is that love is in some sense the service
02:30:59.200 | to a next level agency, to the highest level agency
02:31:02.760 | that you can recognize.
02:31:04.240 | It could be, for instance, life on earth
02:31:06.400 | or beyond that, where you could say intelligent complexity
02:31:10.920 | in the universe that you try to maximize in a certain way.
02:31:14.080 | But when you think it through,
02:31:15.800 | it basically means a certain aesthetic.
02:31:17.760 | And there is not one possible aesthetic.
02:31:20.800 | There are many possible aesthetics.
02:31:22.640 | And once you project an aesthetic into the future,
02:31:25.400 | you can see that there are some which defect from it,
02:31:29.240 | which are in conflict with it,
02:31:30.840 | that are corrupt, that are evil.
02:31:33.840 | You and me would probably agree that Hitler was evil
02:31:37.080 | because the aesthetic of the world that he wanted
02:31:40.000 | is in conflict with the aesthetic of the world
02:31:41.960 | that you and me have in mind.
02:31:43.400 | And so the things that he destroyed,
02:31:48.520 | we want to keep them in the world.
02:31:50.240 | - There's kind of ways to deal.
02:31:55.240 | I mean, Hitler is an easier case,
02:31:56.680 | but perhaps it wasn't so easy in the '30s
02:31:59.200 | to understand who is Hitler and who is not.
02:32:02.400 | - No, it was just that there was no consensus
02:32:04.560 | that the aesthetics that he had in mind were unacceptable.
02:32:07.480 | - Yeah.
02:32:08.320 | I mean, it's difficult.
02:32:09.560 | Love is complicated because you can't just be so open-minded
02:32:16.040 | that you let evil walk into the door,
02:32:20.640 | but you can't be so self-assured
02:32:24.400 | that you can always identify evil perfectly
02:32:29.560 | because that's what leads to Nazi Germany,
02:32:31.840 | having a certainty of what is and wasn't evil,
02:32:34.840 | like always drawing lines of good versus evil.
02:32:37.520 | There seems to be,
02:32:39.840 | there has to be a dance between
02:32:43.720 | like hard stances, standing up against what is wrong,
02:32:51.320 | and at the same time, empathy and open-mindedness
02:32:55.400 | towards not knowing what is right and wrong,
02:32:59.560 | and like a dance between those.
02:33:01.400 | - I found that when I watched the Miyazaki movies
02:33:03.600 | that there is nobody who captures my spirituality
02:33:06.040 | as well as he does.
02:33:07.920 | It's very interesting and just vicious.
02:33:10.440 | There is something going on in his movies
02:33:13.080 | that is very interesting.
02:33:14.120 | So for instance, Mononoke is not only an answer
02:33:17.120 | to Disney's simplistic notion of Mowgli,
02:33:22.120 | the jungle boy who was raised by wolves
02:33:24.960 | and, as soon as he sees people, realizes that he's one of them,
02:33:27.760 | but also to the way in which the moral life and nature
02:33:32.760 | are simplified and romanticized and turned into kitsch.
02:33:36.640 | It's disgusting in the Disney movie,
02:33:38.160 | and he answers to this.
02:33:39.120 | You see, Mowgli is replaced by Mononoke,
02:33:41.920 | this wolf girl who was raised by wolves
02:33:43.720 | and who was fierce and dangerous
02:33:45.320 | and who cannot be socialized because she cannot be tamed,
02:33:49.320 | cannot be part of human society.
02:33:50.920 | And you see, human society,
02:33:52.400 | it's something that is very, very complicated.
02:33:54.200 | You see people extracting resources and destroying nature,
02:33:58.280 | but the purpose is not to be evil,
02:34:01.240 | but to be able to have a life that is free from,
02:34:04.760 | for instance, oppression and violence
02:34:07.160 | and to curb death and disease.
02:34:10.840 | And you basically see this conflict,
02:34:13.240 | which cannot be resolved in a certain way.
02:34:15.160 | You see this moment when nature is turned into a garden
02:34:18.360 | and it loses most of what it actually is
02:34:20.960 | and humans no longer submitting to life
02:34:23.000 | and death and nature.
02:34:24.320 | And to these questions, there is no easy answer.
02:34:26.800 | So he just turns it into something that is being observed
02:34:29.960 | as a journey that happens.
02:34:31.160 | And that happens with a certain degree of inevitability.
02:34:34.920 | And the nice thing about all his movies
02:34:37.080 | is there's a certain main character,
02:34:38.720 | and it's the same in all movies.
02:34:41.280 | It's this little girl that is basically Heidi.
02:34:45.760 | And I suspect that happened because when he did field work
02:34:50.520 | for working on the Heidi movies,
02:34:52.640 | back then the Heidi animations,
02:34:54.480 | before he did his own movies,
02:34:55.680 | he traveled to Switzerland and Southeastern Europe
02:35:00.200 | and the Adriatic and so on,
02:35:02.120 | and got an idea about a certain aesthetic
02:35:04.280 | and a certain way of life that informed his future thinking.
02:35:08.120 | And Heidi has a very interesting relationship
02:35:11.000 | to herself and to the world.
02:35:13.280 | There's nothing that she takes for herself.
02:35:15.920 | She is in a way fearless because she is committed
02:35:18.760 | to a service, to a greater whole.
02:35:20.800 | Basically, she is completely committed to serving God.
02:35:24.080 | And it's not an institutionalized God.
02:35:26.320 | It has nothing to do with the Roman Catholic Church
02:35:28.480 | or something like this.
02:35:30.440 | But in some sense, Heidi is an embodiment
02:35:32.640 | of the spirit of European Protestantism.
02:35:34.920 | It's this idea of a being
02:35:37.600 | that is completely perfect and pure.
02:35:40.200 | And it's not a feminist vision
02:35:42.040 | because she is not a girl boss or something like this.
02:35:48.640 | She is the justification for the men in the audience
02:35:52.440 | to protect her, to build a civilization around her
02:35:54.760 | that makes her possible.
02:35:56.560 | So she is not just the sacrifice of Jesus,
02:35:59.200 | who is innocent and therefore nailed to the cross.
02:36:02.720 | She is not being sacrificed.
02:36:04.040 | She is being protected by everybody around her
02:36:06.960 | who recognizes that she is sacred.
02:36:08.560 | And there are enough around her to see that.
02:36:11.160 | So that's a very interesting perspective.
02:36:13.960 | There's a certain notion of innocence.
02:36:16.320 | And this notion of innocence is not universal.
02:36:18.480 | It's not in all cultures.
02:36:20.120 | Hitler wasn't innocent.
02:36:21.440 | His idea of Germany was not that there is an innocence
02:36:25.560 | that is being protected.
02:36:26.840 | There was a predator that was going to triumph.
02:36:29.640 | And it's also something that is not
02:36:30.880 | at the core of every religion.
02:36:32.240 | There are many religions which don't care about innocence.
02:36:34.800 | They might care about increasing the status of something.
02:36:39.800 | And that's a very interesting notion that is quite unique
02:36:44.960 | and I'm not claiming it's the optimal one.
02:36:47.560 | It's just a particular kind of aesthetic,
02:36:49.880 | which I think makes Miyazaki
02:36:51.760 | into the most relevant Protestant philosopher today.
02:36:55.440 | - And you're saying in terms of all the ways
02:36:59.720 | that a society can operate,
02:37:00.880 | perhaps the preservation of innocence
02:37:02.840 | might be one of the best.
02:37:07.120 | - No, it's just my aesthetic.
02:37:09.760 | - Your aesthetic, gotcha.
02:37:11.240 | - It's a particular way in which I feel
02:37:13.600 | that I relate to the world
02:37:14.800 | that is natural to my own socialization.
02:37:16.680 | And maybe it's not an accident
02:37:18.280 | that I have cultural roots in Europe,
02:37:21.240 | in a particular world.
02:37:23.400 | And so maybe it's a natural convergence point
02:37:26.600 | and it's not something that you will find
02:37:28.520 | in all other times in history.
02:37:31.000 | - So I'd like to ask you about Solzhenitsyn
02:37:34.000 | and our individual role as ants in this very large society.
02:37:39.000 | So he says that some version of the line
02:37:42.080 | between good and evil runs through the heart of every man.
02:37:44.680 | Do you think all of us are capable of good and evil?
02:37:47.480 | What's our role in this play,
02:37:50.880 | in this game we're all playing?
02:37:55.560 | Are all of us capable of playing any role?
02:37:57.920 | Is there an ultimate responsibility to,
02:38:01.480 | you mentioned maintaining innocence
02:38:04.280 | or whatever the highest ideal for a society you want,
02:38:09.160 | are all of us capable of living up to that?
02:38:11.520 | And that's our responsibility.
02:38:13.320 | Or are there significant limitations
02:38:15.840 | to what we're able to do in terms of good and evil?
02:38:18.760 | - So there is a certain way, if you are not terrible,
02:38:24.040 | if you are committed to some kind of civilizational agency,
02:38:29.040 | the next level agent that you are serving,
02:38:31.080 | some kind of transcendent principle,
02:38:33.120 | then in the eyes of that transcendental principle,
02:38:36.240 | you are able to discern good from evil.
02:38:38.040 | Otherwise you cannot,
02:38:39.000 | otherwise you have just individual aesthetics.
02:38:41.640 | The cat that is torturing a mouse
02:38:43.200 | is not evil, because the cat does not envision,
02:38:46.320 | or no part of the world model
02:38:47.760 | of the cat is envisioning, a world
02:38:50.640 | where there is no violence and nobody is suffering.
02:38:53.720 | If you have an aesthetic
02:38:55.040 | where you want to protect innocence,
02:38:56.920 | then torturing somebody needlessly is evil,
02:39:00.000 | but only then.
02:39:02.720 | - No, but within, I guess the question is within the aesthetic,
02:39:05.920 | within your sense of what is good and evil,
02:39:10.280 | are we still,
02:39:12.120 | it seems like we're still able to commit evil.
02:39:17.120 | - Yes, so basically if you are committing
02:39:19.400 | to this next level agent,
02:39:20.880 | you are not necessarily this next level agent, right?
02:39:23.640 | You are a part of it.
02:39:24.480 | You have a relationship to it,
02:39:26.080 | like a cell does to its organism, its hyperorganism.
02:39:29.760 | And it only exists to the degree
02:39:31.400 | that it's being implemented by you and others.
02:39:34.640 | And that means that you're not completely fully serving it.
02:39:38.600 | You have freedom in what you decide,
02:39:40.400 | whether you are acting on your impulses and local incentives
02:39:43.440 | and your feral impulses, so to speak,
02:39:45.600 | or whether you're committing to it.
02:39:47.200 | And what you perceive then is a tension
02:39:50.040 | between what you would be doing
02:39:52.440 | with respect to the thing that you recognize
02:39:55.360 | as the sacred if you do,
02:39:57.360 | and what you're actually doing.
02:39:58.880 | And this is the line between good and evil, right?
02:40:01.720 | Where you see, oh, I'm here acting
02:40:03.160 | on my local incentives or impulses.
02:40:05.760 | And here I'm acting on what I consider to be sacred.
02:40:08.160 | And there's a tension between those.
02:40:09.840 | And this is the line between good and evil
02:40:12.000 | that might run through your heart.
02:40:14.480 | And if you don't have that,
02:40:15.760 | if you don't have this relationship
02:40:17.240 | to a transcendental agent,
02:40:18.760 | you could call this relationship
02:40:20.040 | to the next level agent soul, right?
02:40:21.760 | It's not a thing.
02:40:22.600 | It's not an immortal thing that is intrinsically valuable.
02:40:25.840 | It's a certain kind of relationship
02:40:27.560 | that you project to understand what's happening.
02:40:29.640 | Somebody is serving this transcendental sacredness
02:40:31.960 | or they're not.
02:40:33.300 | If you don't have this soul, you cannot be evil.
02:40:35.960 | You're just a complex, natural phenomenon.
02:40:39.720 | - So if you look at life, like starting today
02:40:42.280 | or starting tomorrow, when we leave here today,
02:40:45.260 | there's a bunch of trajectories
02:40:48.280 | that you can take through life, maybe countless.
02:40:53.280 | Do you think some of these trajectories
02:40:57.400 | in your own conception of yourself,
02:40:59.840 | some of those trajectories are the ideal life?
02:41:04.360 | A life that if you were to be the hero of your life story,
02:41:09.720 | you would want to be?
02:41:10.960 | Like, is there some Joscha Bach that you're striving to be?
02:41:14.600 | Like, this is the question I ask myself
02:41:16.040 | as an individual trying to make a better world
02:41:20.320 | in the best way that I could conceive of.
02:41:22.640 | What is my responsibility there?
02:41:24.740 | And how much am I responsible for the failure to do so?
02:41:28.360 | 'Cause I'm lazy and incompetent too often
02:41:33.360 | in my own perception.
02:41:35.760 | - In my own world view, I'm not very important.
02:41:38.320 | So I don't have a place for me as a hero in my own world.
02:41:42.460 | I'm trying to do the best that I can,
02:41:46.000 | which is often not very good.
02:41:48.080 | And so it's not important for me to have status
02:41:52.840 | or to be seen in a particular way.
02:41:55.560 | It's helpful if others can see me,
02:41:57.400 | if a few people can see me, that can be my friends.
02:41:59.800 | - No, sorry, I want to clarify.
02:42:01.480 | The hero, I didn't mean status or perception
02:42:05.280 | or like some kind of marketing thing,
02:42:09.720 | but more in private, in the quiet of your own mind.
02:42:13.240 | Is there the kind of man you want to be
02:42:16.080 | and would consider it a failure if you don't become that?
02:42:20.540 | That's what I meant by hero.
02:42:22.000 | - Yeah, not really.
02:42:23.400 | I don't perceive myself as having such an identity.
02:42:26.220 | And it's also sometimes frustrating,
02:42:32.360 | but it's basically a lack of having this notion
02:42:37.360 | of a father that I need to be emulating.
02:42:40.620 | - It's interesting.
02:42:45.000 | I mean, it's the leaf floating down the river.
02:42:47.300 | I worry that...
02:42:50.240 | - Sometimes it's more like being the river.
02:42:59.120 | - I'm just a fat frog sitting on a leaf
02:43:02.800 | on a dirty, muddy lake.
02:43:06.760 | I wish I was-- - Full of love.
02:43:09.240 | - Waiting for a princess to kiss me.
02:43:13.640 | Or the other way, I forgot which way it goes.
02:43:15.880 | Somebody kisses somebody.
02:43:17.260 | Can I ask you, I don't know if you know who Michael Malice is
02:43:21.720 | but in terms of constructing systems of incentives,
02:43:27.440 | it's interesting to ask.
02:43:29.560 | I don't think I've talked to you about this before.
02:43:32.160 | Malice espouses anarchism.
02:43:35.720 | So he sees all government as fundamentally
02:43:38.480 | getting in the way or even being destructive
02:43:42.960 | to collaborations between human beings thriving.
02:43:47.960 | What do you think?
02:43:50.560 | What's the role of government in a society that thrives?
02:43:56.920 | Is anarchism at all compelling to you as a system?
02:44:00.600 | So like not just small government, but no government at all.
02:44:04.360 | - Yeah, I don't see how this would work.
02:44:07.960 | The government is an agent that imposes an offset
02:44:12.720 | on your reward function, on your payout metrics.
02:44:15.600 | So your behavior becomes compatible with the common good.
02:44:19.540 | - So the argument there is that you can have collectives
02:44:25.680 | like governing organizations, but not government.
02:44:28.680 | Like where you're born on a particular piece of land
02:44:32.600 | and therefore you must follow these rules or else.
02:44:37.600 | You're forced by what they call violence
02:44:41.880 | because there's an implied violence here.
02:44:44.940 | So with government, the key aspect of government
02:44:48.200 | is it protects you from the rest of the world with an army
02:44:54.680 | and with police, right?
02:44:56.720 | So it has a monopoly on violence.
02:45:00.080 | It's the only one that's able to do violence.
02:45:02.080 | - So there are many forms of government,
02:45:03.560 | not all governments do that, right?
02:45:05.040 | But we find that in successful countries,
02:45:09.720 | the government has a monopoly on violence.
02:45:11.880 | And that means that you cannot get ahead
02:45:15.720 | by starting your own army because the government
02:45:17.760 | will come down on you and destroy you if you try to do that.
02:45:20.960 | And in countries where you can build your own army
02:45:23.320 | and get away with it, some people will do it, right?
02:45:25.720 | In these countries is what we call failed countries
02:45:28.600 | in a way.
02:45:30.120 | And if you don't want to have violence,
02:45:33.520 | the point is not to appeal to the moral intentions of people
02:45:36.920 | because some people will use strategies,
02:45:39.220 | if they get ahead with them, that fill a particular kind
02:45:41.840 | of ecological niche.
02:45:42.760 | So you need to destroy that ecological niche.
02:45:45.280 | And if an effective government has a monopoly on violence,
02:45:50.080 | it can create a world where nobody is able to use violence
02:45:53.480 | and get ahead, right?
02:45:54.820 | So you want to use that monopoly on violence,
02:45:57.080 | not to exert violence, but to make violence impossible,
02:46:00.120 | to raise the cost of violence.
02:46:02.160 | So people need to get ahead with nonviolent means.
02:46:06.100 | - So the idea is that you might be able to achieve that
02:46:08.920 | in an anarchist state with companies.
02:46:12.200 | So with the forces of capitalism, you create security companies
02:46:18.240 | where the one that's most ethically sound
02:46:20.720 | rises to the top. Basically,
02:46:22.520 | it would be a much better representative of the people
02:46:25.240 | because there is less sort of stickiness
02:46:29.200 | of a big military force sticking around,
02:46:33.200 | even though it has long outlived its purpose.
02:46:36.360 | - So you have groups of militants
02:46:39.000 | that are hopefully efficiently organized
02:46:40.920 | because otherwise they're going to lose
02:46:42.520 | against the other groups of militants.
02:46:44.600 | And they are coordinating themselves
02:46:46.520 | with the rest of society
02:46:48.360 | until they have a monopoly on violence.
02:46:51.200 | How is that different from a government?
02:46:53.920 | So it's basically converging to the same thing.
02:46:56.240 | - So I think it always,
02:46:57.440 | I was trying to argue with Malice,
02:47:00.000 | I feel like it always converges towards government at scale.
02:47:03.060 | But I think the idea is you can have a lot of collectives
02:47:06.100 | that are, you basically never let anything scale too big.
02:47:11.100 | So one of the problems with governments is it gets too big
02:47:15.480 | in terms of the size of the group
02:47:19.800 | over which it has control.
02:47:21.540 | My sense is that would happen anyway.
02:47:26.000 | So a successful company like Amazon or Facebook,
02:47:30.680 | I mean, it starts forming a monopoly
02:47:33.040 | over entire populations,
02:47:36.080 | not over just the hundreds of millions,
02:47:37.900 | but billions of people.
02:47:39.360 | So I don't know.
02:47:41.080 | But there is something about the abuses of power,
02:47:45.100 | the government can have
02:47:46.040 | when it has a monopoly on violence.
02:47:47.880 | And so that's a tension there.
02:47:51.920 | - So the question is how can you set the incentives
02:47:55.160 | for government correctly?
02:47:56.400 | And this mostly applies at the highest levels of government.
02:47:59.960 | And because we haven't found a way to set them correctly,
02:48:02.960 | we made the highest levels of government relatively weak.
02:48:06.300 | And this is, I think, part of the reason
02:48:08.600 | why we had difficulty coordinating the pandemic response.
02:48:12.280 | And China didn't have that much difficulty.
02:48:14.940 | And there is, of course, a much higher risk
02:48:17.480 | of the abuse of power that exists in China
02:48:19.980 | because the power is largely unchecked.
02:48:22.720 | And the question is basically what happens in the next generation.
02:48:26.080 | For instance, imagine that we would agree
02:48:28.360 | that the current government of China
02:48:29.640 | is largely correct and benevolent.
02:48:31.480 | And maybe we don't agree on this, but if we did,
02:48:35.080 | how can we make sure that this stays like this?
02:48:37.560 | And if you don't have checks and balances
02:48:40.240 | and division of power, it's hard to achieve.
02:48:43.000 | We don't have a solution for that problem.
02:48:45.340 | But the abolishment of government basically
02:48:47.860 | would remove the control structure.
02:48:49.540 | From a cybernetic perspective,
02:48:51.560 | there is an optimal point in the system
02:48:54.760 | at which the regulation should be happening, right?
02:48:56.480 | Where you can measure the current incentives
02:48:59.800 | and the regulator would be properly incentivized
02:49:01.960 | to make the right decisions
02:49:03.760 | and change the payout metrics of everything below it
02:49:06.340 | in such a way that the local prisoners' dilemmas
02:49:08.600 | get resolved, right?
02:49:09.920 | You cannot resolve the prisoners' dilemma
02:49:12.060 | without some kind of eternal control
02:49:14.100 | that emulates an infinite game in a way.
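A small worked illustration of the payout-offset idea, the regulator changing payout metrics so that a local prisoners' dilemma gets resolved: in a one-shot prisoner's dilemma mutual defection is the only equilibrium, but an externally imposed penalty on defection, an offset on the payoff matrix, makes mutual cooperation the equilibrium instead. The payoff numbers and penalty value below are illustrative assumptions, not anything stated in the conversation.

```python
# Illustrative sketch: an external regulator resolving a one-shot
# prisoner's dilemma by adding an offset (a penalty for defecting)
# to each player's payoff. Payoff numbers are made up for illustration.
from itertools import product

# payoffs[(my_move, other_move)] = my payoff; C = cooperate, D = defect
payoffs = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(other_move, penalty_for_defect=0.0):
    """Pick the move that maximizes my payoff, given the other's move
    and a regulator-imposed penalty subtracted whenever I defect."""
    def my_payoff(move):
        base = payoffs[(move, other_move)]
        return base - (penalty_for_defect if move == "D" else 0.0)
    return max(("C", "D"), key=my_payoff)

def equilibria(penalty_for_defect=0.0):
    """Return the move pairs where both players are best-responding."""
    return [
        (a, b)
        for a, b in product("CD", repeat=2)
        if best_response(b, penalty_for_defect) == a
        and best_response(a, penalty_for_defect) == b
    ]

print("no regulator:", equilibria(0.0))   # -> [('D', 'D')]
print("penalty of 3:", equilibria(3.0))   # -> [('C', 'C')]
```

This only covers the one-shot case; the remark about emulating an infinite game points at the repeated-game version of the same effect, where the prospect of future rounds plays the role of the penalty.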
02:49:17.280 | - Yeah, I mean, there's a sense in which
02:49:22.380 | it seems like the reason government,
02:49:24.940 | the parts of government that don't work well currently
02:49:27.780 | is because there aren't good mechanisms
02:49:32.020 | through which the citizenry
02:49:36.220 | can interact with government.
02:49:37.300 | It's basically, it hasn't caught up in terms of technology.
02:49:41.500 | And I think once you integrate
02:49:43.860 | some of the digital revolution
02:49:46.100 | of being able to have a lot of access to data,
02:49:48.420 | be able to vote on different ideas at a local level,
02:49:52.060 | at all levels, at the optimal level, like you're saying,
02:49:55.740 | that can resolve the prisoner dilemmas,
02:49:58.580 | and to integrate AI to help you out,
02:50:00.660 | automate things that are like,
02:50:02.420 | that don't require human ingenuity,
02:50:06.240 | I feel like that's where government could operate that well
02:50:10.340 | and can also break apart the inefficient bureaucracies
02:50:14.620 | if needed.
02:50:15.440 | There'll be a strong incentive
02:50:16.540 | to be efficient and successful.
02:50:20.620 | - So in our human history, we see an evolution
02:50:23.020 | and evolutionary competition of modes of government
02:50:25.660 | and of individual governments within these modes.
02:50:28.180 | And every nation state in some sense
02:50:29.900 | is some kind of organism that has found different solutions
02:50:33.180 | for the problem of government.
02:50:34.980 | And you could look at all these different models
02:50:37.500 | and the different scales at which it exists
02:50:39.420 | as empirical attempts to validate the idea
02:50:42.980 | of how to build a better government.
02:50:45.760 | And I suspect that the idea of anarchism,
02:50:49.180 | similar to the idea of communism,
02:50:51.900 | is the result of being disenchanted
02:50:54.860 | with the ugliness of the real existing solutions
02:50:57.340 | and the attempt to get to an utopia.
02:51:00.980 | And I suspect that communism originally was not a utopia.
02:51:04.540 | I think that in the same way as original Christianity,
02:51:07.580 | it had a particular kind of vision.
02:51:10.020 | And this vision is a society,
02:51:12.540 | a mode of organization within the society
02:51:15.300 | in which humans can coexist at scale without coercion.
02:51:20.300 | The same way as we do in a healthy family, right?
02:51:23.740 | In a good family,
02:51:24.580 | you don't terrorize each other into compliance,
02:51:28.140 | but you understand what everybody needs
02:51:30.380 | and what everybody is able to contribute
02:51:32.280 | and what the intended future of the whole thing is.
02:51:35.340 | And everybody coordinates their behavior in the right way
02:51:38.460 | and informs each other about how to do this.
02:51:40.880 | And all the interactions that happen
02:51:42.600 | are instrumental to making that happen, right?
02:51:45.860 | Could this happen at scale?
02:51:47.300 | And I think this is the idea of communism.
02:51:49.220 | Communism is opposed to the idea
02:51:51.460 | that we need economic terror
02:51:53.400 | or other forms of terror to make that happen.
02:51:55.760 | But in practice, what happened
02:51:56.940 | is that the proto-communist countries,
02:51:59.380 | the real existing socialism,
02:52:01.220 | replaced a part of the economic terror with moral terror.
02:52:04.900 | So we were told to do the right thing for moral reasons.
02:52:07.620 | And of course it didn't really work
02:52:09.260 | and the economy eventually collapsed.
02:52:11.680 | And the moral terror had actual real cost, right?
02:52:14.620 | People were in prison
02:52:15.900 | because they were morally non-compliant.
02:52:17.900 | And the other thing is that the idea of communism
02:52:23.700 | became a utopia.
02:52:24.860 | So it basically was projected into the afterlife.
02:52:26.980 | We were told in my childhood
02:52:29.460 | that communism was a hypothetical society
02:52:32.100 | to which we were in a permanent revolution
02:52:34.300 | that justified everything
02:52:35.540 | that was presently wrong with society morally.
02:52:38.420 | But it was something that our grandchildren
02:52:40.420 | probably would not ever see
02:52:42.020 | because it was too ideal and too far in the future
02:52:44.700 | to make it happen right now.
02:52:45.820 | And people were just not there yet morally.
02:52:48.540 | And the same thing happened with Christianity, right?
02:52:51.300 | This notion of heaven was mythologized
02:52:53.780 | and projected into an afterlife.
02:52:55.380 | And I think this was just the idea of God's kingdom,
02:52:57.900 | of this world in which we instantiate
02:53:00.220 | the next level transcendental agent in the perfect form.
02:53:03.000 | So everything goes smoothly and without violence
02:53:05.680 | and without conflict and without this human messiness
02:53:08.980 | and this economic messiness and the terror and coercion
02:53:12.380 | that existed in the present societies.
02:53:14.860 | And the idea of whether humans can exist at scale
02:53:17.900 | in a harmonious way and non-coercively is untested.
02:53:21.000 | A lot of people tested it,
02:53:22.980 | but didn't get it to work so far.
02:53:25.220 | And the utopia is a world in where you get
02:53:27.580 | all the good things without any of the bad things.
02:53:30.740 | And you are, I think, very susceptible to believing
02:53:33.700 | in utopias when you are very young
02:53:35.240 | and don't understand that everything has to happen
02:53:38.440 | in causal patterns, that there's always feedback loops
02:53:40.800 | that ultimately are closed.
02:53:42.480 | There's nothing that just happens because it's good or bad.
02:53:45.320 | Good or bad don't exist in isolation.
02:53:47.200 | They only exist with respect to larger systems.
02:53:50.640 | - So can you intuit why utopias fail as systems?
02:53:55.640 | So like having a utopia that's out there
02:54:00.080 | beyond the horizon, is it because then,
02:54:02.900 | so it's not only because it's impossible to achieve utopias,
02:54:11.920 | but it's because certain humans,
02:54:15.160 | a certain small number of humans, start to
02:54:22.440 | sort of greedily attain power and money
02:54:28.960 | and control and influence
02:54:33.720 | as they see the power in using this idea
02:54:34.560 | of a utopia for propaganda?
02:54:34.560 | - That's a bit like saying, why is my garden not perfect?
02:54:37.260 | It's because some evil weeds are overgrowing it
02:54:39.760 | and they always do.
02:54:40.880 | - Yeah.
02:54:41.720 | - But this is not how it works.
02:54:43.320 | A good garden is a system that is in balance
02:54:45.520 | and requires minimal interactions by the gardener.
02:54:48.740 | And so you need to create a system
02:54:51.980 | that is designed to self-stabilize.
02:54:54.360 | And the design of social systems requires
02:54:56.400 | not just the implementation of the desired functionality,
02:54:58.880 | but the next level design, also in biological systems.
02:55:01.920 | You need to create a system that wants to converge
02:55:04.240 | to the intended function.
02:55:06.200 | And so instead of just creating an institution
02:55:08.760 | like the FDA that is performing a particular kind of role
02:55:11.680 | in society, you need to make sure that the FDA
02:55:15.200 | is actually driven by a system that wants
02:55:16.920 | to do this optimally, that is incentivized
02:55:19.240 | to do it optimally and then makes the performance
02:55:22.400 | that is actually enacted in every generation
02:55:24.240 | instrumental to that thing, that actual goal, right?
02:55:27.740 | And that is much harder to design and to achieve.
02:55:30.160 | - So you have to design a system where,
02:55:32.560 | I mean, listen, communism also was quote unquote
02:55:35.400 | incentivized to be a feedback loop system
02:55:40.400 | that achieves that utopia.
02:55:43.560 | It's just, it wasn't working given human nature.
02:55:45.920 | The incentives were not correct given human nature.
02:55:48.080 | - So how do you incentivize people
02:55:50.520 | when they are getting coal off the ground
02:55:52.400 | to work as hard as possible?
02:55:53.980 | Because it's a terrible job
02:55:55.600 | and it's very bad for your health.
02:55:57.120 | And right, how do you do this?
02:55:59.600 | And you can give them prizes and medals
02:56:02.860 | and status to some degree, right?
02:56:04.640 | There's only so much status to give for that.
02:56:06.920 | And most people will not fall for this, right?
02:56:09.360 | Or you can pay them and you probably have to pay them
02:56:12.960 | in an asymmetric way because if you pay everybody the same
02:56:15.720 | and you nationalize the coal mines,
02:56:19.120 | eventually people will figure out
02:56:20.640 | that they can game the system.
02:56:21.960 | - Yes, so you're describing capitalism.
02:56:25.860 | So capitalism is the present solution to the system.
02:56:28.640 | And what we also noticed that I think that Marx was correct
02:56:32.160 | in saying that capitalism is prone to crisis,
02:56:35.160 | that capitalism is a system that in its dynamics
02:56:38.520 | is not convergent, but divergent.
02:56:40.840 | It's not a stable system.
02:56:42.920 | And that eventually it produces an enormous potential
02:56:47.440 | for productivity, but it also is systematically
02:56:50.880 | misallocating resources.
02:56:52.200 | So a lot of people cannot participate in the production
02:56:55.600 | and consumption anymore.
02:56:57.240 | And this is what we observe.
02:56:58.480 | We observe that the middle class in the US is tiny.
02:57:01.480 | A lot of people think that they're middle class,
02:57:05.560 | but if you are still flying economy,
02:57:07.480 | you're not middle class.
02:57:08.680 | Every class is a magnitude smaller than the previous class.
02:57:14.720 | (laughing)
02:57:16.960 | - I think about classes, it's really like airline classes.
02:57:23.640 | (laughing)
02:57:24.480 | - A lot of people are economy class,
02:57:25.880 | a lot of people are economy class.
02:57:26.960 | - Have we really-
02:57:27.800 | - Business class and very few are first class
02:57:30.040 | and some are budget.
02:57:30.960 | - I mean, I understand.
02:57:32.880 | I think there is a, yeah, maybe some people,
02:57:37.000 | probably I would push back against that definition
02:57:39.160 | of the middle class.
02:57:40.000 | It does feel like the middle class is pretty large,
02:57:41.520 | but yes, there's a discrepancy in terms of wealth.
02:57:44.520 | There's a big wealth gap.
02:57:46.640 | - So if you think about in terms of the productivity
02:57:48.640 | that our society could have,
02:57:50.960 | there is no reason for anybody to fly economy.
02:57:54.040 | We would be able to let everybody travel in style.
02:57:57.960 | - Well, but also some people like to be frugal
02:58:00.280 | even when they're billionaires.
02:58:01.440 | Okay, so let's take that into account.
02:58:03.680 | - Yes, but I mean, we probably don't need
02:58:05.960 | to be traveling lavish,
02:58:07.320 | but you also don't need to be tortured, right?
02:58:09.800 | There is a difference between frugal
02:58:11.880 | and subjecting yourself to torture.
02:58:14.200 | - Listen, I love economy.
02:58:15.280 | I don't understand why you're comparing
02:58:16.840 | flying economy to torture.
02:58:19.440 | I don't, although on the flight here,
02:58:22.560 | there's two crying babies next to me.
02:58:24.400 | So that, but that has nothing to do with economy,
02:58:26.480 | it has to do with crying babies.
02:58:28.360 | They're very cute though.
02:58:29.400 | So they kind of-
02:58:30.240 | - Yeah, I have two kids and sometimes I have to go back
02:58:32.920 | to visit the grandparents.
02:58:35.000 | And that means going from the West Coast to Germany
02:58:40.000 | and it's a long flight.
02:58:42.680 | - Is it true that sort of when you're a father,
02:58:45.320 | you grow immune to the crying and all that kind of stuff?
02:58:48.560 | Like, because, me just not having kids,
02:58:52.280 | other people's kids can be quite annoying
02:58:54.600 | when they're crying and screaming
02:58:55.840 | and all that kind of stuff.
02:58:57.240 | - When you have children and you're wired up
02:58:59.600 | in the default natural way, you're lucky in this regard,
02:59:02.640 | you fall in love with them.
02:59:04.400 | And this falling in love with them means
02:59:07.040 | that you basically start to see the world
02:59:09.600 | through their eyes and you understand
02:59:11.280 | that in a given situation, they cannot do anything
02:59:13.600 | but express their despair.
02:59:17.800 | And so it becomes more differentiated.
02:59:19.760 | I noticed that for instance,
02:59:21.080 | my son is typically acting on pure experience
02:59:25.960 | of what things are like right now.
02:59:28.600 | And he has to do this right now.
02:59:30.440 | And you have this small child that,
02:59:32.600 | when he was a baby and so on,
02:59:35.080 | was just immediately expressing what he felt.
02:59:37.600 | And if you cannot regulate this from the outside,
02:59:40.000 | there's no point to be upset about it, right?
02:59:42.280 | It's like dealing with weather or something like this.
02:59:45.120 | You all have to get through it.
02:59:46.680 | And it's not easy for him either.
02:59:48.680 | But if you also have a daughter,
02:59:51.880 | maybe she is planning for that.
02:59:53.360 | Maybe she understands that she's sitting
02:59:57.040 | in the car behind you and she's screaming
02:59:58.960 | at the top of her lungs and you almost have an accident
03:00:01.920 | and you really don't know what to do.
03:00:03.840 | What should I have done to make you stop screaming?
03:00:06.520 | You could have given me candy.
03:00:08.080 | (laughing)
03:00:10.120 | - I think that's like a cat versus dog discussion.
03:00:12.240 | I love it.
03:00:13.080 | 'Cause you said like a fundamental aspect
03:00:17.160 | of that is love that makes it all worth it.
03:00:21.320 | What in this monkey riding an elephant in a dream world,
03:00:27.240 | what role does love play in the human condition?
03:00:30.840 | - I think that love is the facilitator
03:00:34.400 | of non-transactional interaction.
03:00:36.320 | When you are observing your own purposes,
03:00:41.000 | some of these purposes go beyond your ego.
03:00:43.280 | They go beyond the particular organism that you are
03:00:46.480 | and your local interests.
03:00:47.560 | - That's what you mean by non-transactional.
03:00:49.320 | - Yes, so basically when you are acting
03:00:50.920 | in a transactional way,
03:00:51.760 | it means that you are expecting something in return
03:00:54.760 | from the one that you're interacting with.
03:00:58.720 | You are interacting with a random stranger.
03:01:00.320 | You buy something from them on eBay.
03:01:01.840 | You expect a fair value for the money that you sent them
03:01:04.400 | and vice versa because you don't know that person.
03:01:07.080 | You don't have any kind of relationship to them.
03:01:09.480 | But when you know this person a little bit better
03:01:11.160 | and you know the situation that they're in
03:01:13.240 | and you understand what they're trying to achieve
03:01:14.840 | in their life and you approve because you realize
03:01:17.880 | that they're in some sense serving
03:01:19.680 | the same human sacredness as you are.
03:01:22.880 | And they need a thing that you have.
03:01:24.280 | Maybe you give it to them as a present.
03:01:26.240 | - But I mean, the feeling itself of joy
03:01:30.520 | is a kind of benefit, is a kind of transaction.
03:01:34.160 | - Yes, but the joy is not the point.
03:01:36.960 | The joy is the signal that you get.
03:01:38.920 | It's the reinforcement signal that your brain sends to you
03:01:41.360 | because you are acting on the incentives
03:01:44.160 | of the agent that you're a part of.
03:01:46.160 | We are meant to be part of something larger.
03:01:48.960 | This is the way in which we out-competed other hominins.
03:01:51.760 | - Take that Neanderthals.
03:01:56.840 | - Yeah, right.
03:01:57.840 | And also other humans.
03:01:59.200 | There was a population bottleneck for human society
03:02:03.520 | that leads to an extreme lack
03:02:06.200 | of genetic diversity among humans.
03:02:07.960 | If you look at Bushmen in the Kalahari,
03:02:11.720 | basically tribes that are not that far distant
03:02:14.360 | from each other have more genetic diversity
03:02:16.280 | than exists between Europeans and Chinese.
03:02:19.280 | And it's because basically the out-of-Africa population
03:02:23.800 | at some point had a bottleneck
03:02:25.440 | of just a few thousand individuals.
03:02:28.120 | And what probably happened is not that at any time
03:02:31.360 | the number of people shrunk below a few hundred thousand.
03:02:35.120 | What probably happened is that there was a small group
03:02:37.920 | that had a decisive mutation that produced an advantage.
03:02:40.840 | And this group multiplied and killed everybody else.
03:02:44.120 | And we are descendants of that group.
03:02:46.160 | - Yeah, I wonder what the peculiar characteristics
03:02:50.800 | of that group.
03:02:52.160 | - Yeah.
03:02:53.000 | - I mean, we can never know.
03:02:53.840 | - I wonder too, and a lot of people do.
03:02:55.480 | - We can only just listen to the echoes in our,
03:02:58.240 | like the ripples that are still within us.
03:03:01.680 | - So I suspect what eventually made a big difference
03:03:04.400 | was the ability to organize at scale,
03:03:07.160 | be able to program each other.
03:03:10.140 | - With ideas.
03:03:11.400 | - That we became programmable,
03:03:12.600 | that we are willing to work in lockstep,
03:03:14.520 | that we went above the tribal level,
03:03:17.440 | that we no longer were groups of a few hundred individuals
03:03:20.720 | and acted on direct reputation systems transactionally,
03:03:24.460 | but that we basically evolved an adaptation
03:03:27.440 | to become state building.
03:03:29.000 | - Yeah.
03:03:29.840 | To form collectives outside of the direct collectives.
03:03:35.720 | - Yes, and that's basically a part of us
03:03:37.760 | became committed to serving something outside
03:03:40.420 | of what we know. - Bigger than ourselves.
03:03:41.960 | Yeah, then that's kind of what love is.
03:03:44.120 | - And it's terrifying because it meant
03:03:45.820 | that we eradicated the others.
03:03:47.420 | It's a force, it's an adaptive force
03:03:50.980 | that gets us ahead in evolution,
03:03:52.940 | which means we displace something else
03:03:54.520 | that doesn't have that.
03:03:55.680 | - Oh, so we had to murder a lot of people
03:03:58.740 | that weren't about love.
03:04:00.380 | So love led to destruction.
03:04:01.220 | - They didn't have the same strong love as we did.
03:04:04.020 | Right, that's why I mentioned this thing with fascism
03:04:07.420 | and you see this, these speeches,
03:04:10.620 | do you want total war?
03:04:12.220 | And everybody says, yes, right?
03:04:14.380 | There's this big, oh my God, be a part of something
03:04:17.620 | that is more important than me
03:04:18.660 | that gives meaning to my existence.
03:04:20.420 | (laughing)
03:04:22.980 | - Fair enough.
03:04:23.800 | Do you have advice for young people today
03:04:30.980 | in high school, in college,
03:04:33.140 | that are thinking about what to do
03:04:36.300 | with their career, with their life,
03:04:38.580 | so that at the end of the whole thing,
03:04:40.380 | they can be proud of what they did?
03:04:42.140 | - Don't cheat.
03:04:44.620 | Have integrity, aim for integrity.
03:04:48.500 | - So what does integrity look like
03:04:49.860 | when you're at the river or at the leaf
03:04:51.940 | or the fat frog in a lake?
03:04:53.440 | - It basically means that you try to figure out
03:04:57.660 | what the thing is that is the most right.
03:05:02.020 | And this doesn't mean that you have to look
03:05:04.580 | for what other people tell you what's right,
03:05:07.100 | but you have to aim for moral autonomy.
03:05:09.700 | So things need to be right
03:05:11.340 | independently of what other people say.
03:05:14.060 | I always felt that when people told me
03:05:17.580 | to listen to what others say,
03:05:20.820 | like read the room, build your ideas
03:05:24.300 | of what's true based on the high status people
03:05:26.340 | of your in-group, that does not protect me from fascism.
03:05:29.740 | The only way to protect yourself from fascism
03:05:31.880 | is to decide: is the world that is being built here
03:05:35.540 | the world that I want to be in?
03:05:37.100 | In some sense, try to make your behavior sustainable,
03:05:41.740 | act in such a way that you would feel comfortable
03:05:44.540 | on all sides of the transaction.
03:05:46.420 | Realize that everybody is you in a different timeline,
03:05:48.900 | but is seeing things differently and has reasons to do so.
03:05:52.640 | - Yeah, I've come to realize this recently,
03:05:58.060 | that there is an inner voice
03:05:59.340 | that tells you what's right and wrong.
03:06:02.780 | And speaking of reading the room,
03:06:06.140 | there are times when a lot of people
03:06:08.060 | are doing something wrong,
03:06:10.420 | and what integrity looks like there
03:06:12.140 | is not going on Twitter
03:06:14.940 | and tweeting about it, but quietly not participating.
03:06:19.660 | So it's not about signaling
03:06:21.780 | or any of that kind of stuff,
03:06:24.060 | but actually living what you think is right.
03:06:27.980 | Like living it, not signaling.
03:06:29.460 | - There's also sometimes this expectation
03:06:30.940 | that others are like us.
03:06:32.220 | So imagine the possibility that some of the people
03:06:35.660 | around you are space aliens that only look human.
03:06:38.120 | So they don't have the same priors as you do.
03:06:41.620 | They don't have the same impulses,
03:06:44.060 | about what's right and wrong.
03:06:45.180 | There's a large diversity in these basic impulses
03:06:48.860 | that people can have in a given situation.
03:06:51.820 | And now realize that you are a space alien.
03:06:54.660 | You are not actually human.
03:06:55.860 | You think that you're human,
03:06:57.180 | but you don't know what it means,
03:06:58.780 | like what it's like to be human.
03:07:00.740 | You just make it up as you go along like everybody else.
03:07:03.980 | And you have to figure that out,
03:07:05.700 | what it means that you are a full human being,
03:07:09.580 | what it means to be human in the world
03:07:11.140 | and how to connect with others on that.
03:07:13.500 | And there's also something: don't be afraid
03:07:17.300 | that if you do this, you're not good enough.
03:07:20.940 | Because if you are acting on these incentives of integrity,
03:07:23.580 | you become trustworthy.
03:07:25.100 | That's the way in which you can recognize each other.
03:07:28.380 | There is a particular place where you can meet
03:07:30.700 | and you can figure out what that place is,
03:07:33.060 | where you will give support to people
03:07:35.420 | because you realize that they act with integrity
03:07:38.420 | and they will also do that.
03:07:40.300 | So in some sense, you are safe if you do that.
03:07:42.580 | You're not always protected.
03:07:44.940 | There are people who will abuse you
03:07:47.100 | and who are bad actors in a way that is hard to imagine
03:07:51.300 | before you meet them.
03:07:52.820 | But there are also people who will try to protect you.
03:07:57.820 | - Yeah, thank you for saying that.
03:08:00.900 | That's such a hopeful message
03:08:03.900 | that no matter what happens to you,
03:08:05.460 | there'll be a place,
03:08:06.780 | there's people you'll meet
03:08:10.220 | that also have what you have
03:08:15.700 | and you will find happiness there and safety there.
03:08:20.260 | - Yeah, but it doesn't need to end well.
03:08:21.780 | It can also all go wrong.
03:08:23.580 | So there's no guarantees in this life.
03:08:26.460 | So you can do everything right and you still can fail
03:08:29.460 | and you can still have horrible things happen to you
03:08:32.580 | that traumatize you and mutilate you
03:08:35.140 | and you have to be grateful if it doesn't happen.
03:08:37.580 | - And ultimately be grateful no matter what happens
03:08:43.020 | 'cause even just being alive is pretty damn nice.
03:08:45.700 | - Yeah, even that, you know.
03:08:49.660 | The gratefulness in some sense is also just generated
03:08:52.300 | by your brain to keep you going.
03:08:54.540 | It's all a trick.
03:08:57.940 | - Speaking of which, Camus said,
03:09:01.940 | "I see many people die because they judge
03:09:05.540 | that life is not worth living.
03:09:08.020 | I see others paradoxically getting killed
03:09:10.820 | for the ideas or illusions
03:09:12.220 | that give them a reason for living.
03:09:15.020 | What is called the reason for living
03:09:16.420 | is also an excellent reason for dying.
03:09:19.420 | I therefore conclude that the meaning of life
03:09:22.020 | is the most urgent of questions."
03:09:24.660 | So I have to ask what,
03:09:27.900 | Joscha Bach, is the meaning of life?
03:09:30.380 | It is an urgent question according to Camus.
03:09:33.860 | - I don't think that there's a single answer to this.
03:09:37.940 | Nothing makes sense unless the mind makes it so.
03:09:41.340 | So you basically have to project a purpose.
03:09:44.820 | And if you zoom out far enough,
03:09:47.380 | there's the heat death of the universe
03:09:49.060 | and everything is meaningless.
03:09:50.500 | Everything is just a blip in between.
03:09:52.100 | And the question is,
03:09:53.220 | do you find meaning in this blip in between?
03:09:55.860 | Do you find meaning in observing squirrels?
03:09:59.820 | Do you find meaning in raising children
03:10:01.740 | and projecting a multi-generational organism
03:10:04.420 | into the future?
03:10:05.660 | Do you find meaning in projecting an aesthetic
03:10:08.260 | of the world that you like into the future
03:10:10.620 | and trying to serve that aesthetic?
03:10:12.380 | And if you do, then life has that meaning.
03:10:15.340 | And if you don't, then it doesn't.
03:10:18.340 | - I kind of enjoy the idea that you just create
03:10:21.740 | the most vibrant, the most weird,
03:10:25.660 | the most unique kind of blip you can,
03:10:28.700 | given your environment, given your set of skills.
03:10:32.000 | Just be the most weird,
03:10:36.940 | like, local pocket of complexity you can be.
03:10:41.700 | So that like when people study the universe,
03:10:44.460 | they'll pause and be like, "Uh, that's weird."
03:10:48.260 | - That looks like a useful strategy,
03:10:50.540 | but of course it's still motivated reasoning.
03:10:52.860 | (laughing)
03:10:55.620 | You're obviously acting on your incentives here.
03:10:57.740 | - It's still a story we tell ourselves within a dream
03:11:00.660 | that's hardly in touch with reality.
03:11:03.820 | - It's definitely a good strategy if you are a podcaster.
03:11:06.300 | (laughing)
03:11:08.540 | - And a human, which I'm still trying to figure out if I am.
03:11:13.020 | - Yeah, there's a mutual relationship somehow.
03:11:15.060 | - Somehow.
03:11:16.100 | - Joscha, you're one of the most incredible people I know.
03:11:20.900 | I really love talking to you.
03:11:22.420 | I love talking to you again.
03:11:23.540 | And it's really an honor that you spend
03:11:26.100 | your valuable time with me.
03:11:27.140 | I hope we get to talk many times
03:11:28.620 | throughout our short and meaningless lives.
03:11:33.620 | - Or meaningful.
03:11:34.660 | - Or meaningful.
03:11:35.940 | - Thank you, Lex.
03:11:36.760 | I enjoyed this conversation very much.
03:11:39.060 | - Thanks for listening to this conversation with Joscha Bach.
03:11:41.740 | A thank you to Coinbase, Codecademy, Linode,
03:11:45.940 | NetSuite, and ExpressVPN.
03:11:48.540 | Check them out in the description to support this podcast.
03:11:52.060 | Now, let me leave you with some words from Carl Jung.
03:11:55.700 | "People will do anything, no matter how absurd,
03:11:58.980 | "in order to avoid facing their own souls.
03:12:01.760 | "One does not become enlightened
03:12:03.540 | "by imagining figures of light,
03:12:05.760 | "but by making the darkness conscious."
03:12:09.300 | Thank you for listening, and hope to see you next time.
03:12:12.540 | (upbeat music)
03:12:15.120 | (upbeat music)