back to index

Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans | Lex Fridman Podcast #392


Chapters

0:00 Introduction
1:15 Stages of life
13:37 Identity
20:12 Enlightenment
26:43 Adaptive Resonance Theory
33:31 Panpsychism
43:31 How to think
51:25 Plants communication
69:20 Fame
94:57 Happiness
102:15 Artificial consciousness
114:23 Suffering
119:08 Eliezer Yudkowsky
126:44 e/acc (Effective Accelerationism)
132:21 Mind uploading
143:11 Vision Pro
147:25 Open source AI
160:17 Twitter
167:33 Advice for young people
170:29 Meaning of life

Whisper Transcript | Transcript Only Page

00:00:00.000 | there is a certain perspective where you might be thinking
00:00:02.160 | what is the longest possible game that you could be playing.
00:00:05.080 | A short game is, for instance, cancer:
00:00:06.800 | cancer is an organism playing a shorter game
00:00:08.920 | than the regular organism.
00:00:11.880 | And because the cancer cannot procreate
00:00:13.640 | beyond the organism,
00:00:17.080 | except for some infectious cancers,
00:00:20.040 | like the ones that eradicated the Tasmanian devils,
00:00:22.600 | you typically end up with the situation
00:00:26.880 | where the organism dies together with the cancer,
00:00:28.920 | because the cancer has destroyed the larger system
00:00:31.640 | due to playing a shorter game.
00:00:33.520 | And so ideally you want to, I think,
00:00:35.920 | build agents that play the longest possible games.
00:00:39.600 | And the longest possible games is to keep entropy at bay
00:00:42.920 | as long as possible by doing interesting stuff.
00:00:45.720 | - The following is a conversation with Joscha Bach,
00:00:50.360 | his third time on this podcast.
00:00:52.120 | Joscha is one of the most brilliant
00:00:54.040 | and fascinating minds in the world,
00:00:55.800 | exploring the nature of intelligence,
00:00:57.680 | consciousness, and computation.
00:01:00.040 | And he's one of my favorite humans to talk to
00:01:03.360 | about pretty much anything and everything.
00:01:06.120 | This is the Lex Fridman Podcast.
00:01:08.160 | To support it, please check out our sponsors
00:01:10.200 | in the description.
00:01:11.360 | And now, dear friends, here's Joscha Bach.
00:01:14.560 | You wrote a post about levels of lucidity.
00:01:18.920 | Quote, "As we grow older, it becomes apparent
00:01:23.120 | that our self-reflexive mind
00:01:25.040 | is not just gradually accumulating ideas about itself,
00:01:27.960 | but that it progresses in somewhat distinct stages."
00:01:32.120 | So there's seven of the stages.
00:01:34.240 | Stage one, reactive survival, infant.
00:01:37.000 | Stage two, personal self, young child.
00:01:39.640 | Stage three, social self, adolescence, domesticated adult.
00:01:43.760 | Stage four is rational agency, self-direction.
00:01:47.200 | Stage five is self-authoring, that's full adult.
00:01:51.760 | You've achieved wisdom, but there's two more stages.
00:01:54.360 | Stage six is enlightenment.
00:01:56.600 | Stage seven is transcendence.
00:01:58.800 | Can you explain each,
00:02:01.000 | or the interesting parts of each of these stages?
00:02:03.440 | And what's your sense why there are stages of this,
00:02:06.780 | of lucidity as we progress through life
00:02:10.680 | in this too short life?
00:02:12.600 | - This model is derived from a concept
00:02:15.560 | by the psychologist Robert Kegan.
00:02:18.640 | And he talks about the development of the self
00:02:23.160 | as a process that happens in principle
00:02:25.480 | by some kind of reverse engineering of the mind
00:02:28.200 | where you gradually become aware of yourself
00:02:30.120 | and thereby build structure
00:02:31.640 | that allows you to interact deeper
00:02:33.240 | with the world and yourself.
00:02:35.240 | And I found myself using this model
00:02:37.520 | not so much as a developmental model.
00:02:39.200 | I'm not even sure if it's a very good developmental model
00:02:42.000 | because I saw my children not progressing exactly like that.
00:02:45.680 | And I also suspect that you don't go through the stages
00:02:49.980 | necessarily in succession.
00:02:51.720 | And it's not that you work through one stage
00:02:53.400 | and then you get into the next one.
00:02:54.680 | Sometimes you revisit them.
00:02:56.880 | Sometimes stuff is happening in parallel.
00:02:59.080 | But it's, I think, a useful framework
00:03:00.880 | to look at what's present in the structure of a person
00:03:04.000 | and how they interact with the world
00:03:05.360 | and how they relate to themselves.
00:03:07.520 | So it's more like a philosophical framework
00:03:10.120 | that allows you to talk about how minds work.
00:03:12.820 | And at first, when we are born,
00:03:14.800 | we don't have a personal self yet, I think.
00:03:17.840 | Instead, we have an attentional self.
00:03:19.480 | And this attentional self is initially
00:03:21.400 | in the infant tasked with building a world model
00:03:24.520 | and also an initial model of the self.
00:03:26.720 | But mostly it's building a game engine in the brain
00:03:29.040 | that is tracking sensory data and uses it to explain it.
00:03:34.040 | And in some sense, you could compare it to a game engine
00:03:37.160 | like Minecraft or so. So colors and sounds,
00:03:40.600 | people: they are all not physical objects.
00:03:42.600 | They are creations of our mind
00:03:43.960 | at a certain level of coarse graining.
00:03:46.280 | Models that are mathematical, that use geometry
00:03:50.240 | and that use manipulation of objects and so on
00:03:54.560 | to create scenes in which we can find ourselves
00:03:58.120 | and interact with them.
00:03:59.320 | - So Minecraft.
00:04:00.560 | - Yeah, and this personal self is something
00:04:02.640 | that is more or less created after the world is finished,
00:04:06.800 | after it's trained into the system,
00:04:09.220 | after it has been constructed.
00:04:11.320 | And this personal self is an agent
00:04:13.160 | that interacts with the outside world.
00:04:15.880 | And the outside world is not the world of quantum mechanics,
00:04:19.800 | not the physical universe,
00:04:21.080 | but it's the model that has been generated in our own mind.
00:04:24.680 | And this is us and we experience ourself interacting
00:04:28.320 | with that outside world that is created
00:04:30.320 | inside of our own mind.
00:04:31.880 | And outside of ourself, there are feelings,
00:04:34.840 | and they represent our interface with this outside world.
00:04:38.560 | They pose problems to us.
00:04:40.400 | These feelings are basically attitudes
00:04:42.240 | that our mind is computing
00:04:44.000 | that tell us what's needed in the world,
00:04:46.000 | the things that we are drawn to,
00:04:47.600 | the things that we are afraid of.
00:04:49.560 | And we are tasked with solving this problem
00:04:52.400 | of satisfying the needs, avoiding the aversions,
00:04:56.160 | following on our inner commitments and so on,
00:04:58.720 | and also modeling ourselves and building the next stage.
00:05:02.400 | So after we have this personal self in stage two online,
00:05:06.360 | many people form a social self.
00:05:08.440 | And this social self allows the individual
00:05:11.000 | to experience themselves as part of a group.
00:05:13.440 | It's basically this thing that when you are playing
00:05:17.160 | in a team, for instance, you don't notice yourself
00:05:19.360 | just as a single node that is reaching out into the world,
00:05:22.600 | but you're also looking down.
00:05:23.700 | You're looking down from this entire group
00:05:25.800 | and you see how this group is looking at this individual.
00:05:28.720 | And everybody in the group is in some sense
00:05:31.200 | emulating this group spirit to some degree.
00:05:34.360 | And in this state, people are forming their opinions
00:05:36.880 | by assimilating them from this group mind.
00:05:39.160 | They basically gain the ability
00:05:41.080 | to act a little bit like a hive mind.
00:05:42.960 | - But are you also modeling the interaction
00:05:45.960 | of how opinions shape and form
00:05:47.800 | through the interaction of the individual nodes
00:05:50.400 | within the group?
00:05:51.880 | - Yeah, it's basically the way in which people do it
00:05:54.520 | in this stage is that they experience
00:05:57.240 | what are the opinions of my environment.
00:05:59.720 | They experience the relationship
00:06:01.120 | that they have to their environment.
00:06:02.840 | And they resonate with people around them
00:06:05.920 | and get more opinions through this interaction
00:06:09.640 | to the way in which they relate to others.
00:06:13.380 | And at stage four, you basically understand
00:06:16.360 | that stuff is true and false independently
00:06:18.600 | what other people believe.
00:06:20.000 | And you have agency over your own beliefs in that stage.
00:06:22.560 | You basically discover epistemology,
00:06:24.960 | the rules about determining what's true and false.
00:06:27.880 | - So you start to learn how to think.
00:06:30.800 | - Yes.
00:06:31.640 | I mean, at some level, you're always thinking.
00:06:33.680 | You are constructing things.
00:06:35.680 | And I believe that this ability to reason
00:06:37.640 | about your mental representation
00:06:39.300 | is what we mean by thinking.
00:06:41.040 | It's an intrinsically reflexive process
00:06:43.000 | that requires consciousness.
00:06:44.640 | Without consciousness, you cannot think.
00:06:47.040 | You can generate the content of feelings
00:06:49.480 | and so on outside of consciousness.
00:06:51.160 | It's very hard to be conscious
00:06:52.920 | of how your feelings emerge,
00:06:54.840 | at least in the early stages of development.
00:06:56.920 | But thoughts are something that you always control.
00:07:00.080 | And if you are a nerd like me,
00:07:03.420 | you often have to skip stage three
00:07:05.360 | because you'd lack the intuitive empathy with others.
00:07:08.520 | Because in order to resonate with a group,
00:07:11.160 | you need to have a quite similar architecture.
00:07:13.160 | And if people are wired differently,
00:07:15.800 | then it's hard for them to resonate with other people
00:07:18.760 | and basically have empathy,
00:07:21.480 | which is not the same as compassion,
00:07:23.280 | but it is a shared perceptual mental state.
00:07:26.000 | Empathy happens not just via inference
00:07:28.640 | about the mental states of others,
00:07:30.640 | but it's a perception of what other people feel
00:07:34.280 | and where they're at.
00:07:35.160 | - Can't you have empathy
00:07:36.800 | while also not having a similar architecture,
00:07:39.280 | cognitive architecture, as the others in the group?
00:07:40.920 | - I think, yes, but I experienced that too.
00:07:43.840 | But you need to build something
00:07:45.480 | that is like a meta-architecture.
00:07:46.840 | You need to be able to embrace the architecture
00:07:49.480 | of the other to some degree
00:07:50.680 | or find some shared common ground.
00:07:52.960 | And it's also this issue that if you are a nerd,
00:07:57.080 | quite often, neurotypical people
00:07:59.800 | have difficulty resonating with you.
00:08:02.000 | And as a result, they have difficulty understanding you
00:08:05.040 | unless they have enough wisdom
00:08:06.600 | to feel what's going on there.
00:08:08.760 | - Well, isn't the whole process of the stage
00:08:10.900 | three is to figure out the API to the other humans
00:08:14.240 | that have different architecture
00:08:15.680 | and you yourself publish public documentation
00:08:19.360 | for the API that people can interact with for you?
00:08:24.160 | Isn't this the whole process of socializing?
00:08:26.200 | - My experience as a child growing up
00:08:28.240 | was that I did not find any way to interface
00:08:31.360 | with the stage three people.
00:08:33.280 | And they didn't do that with me.
00:08:35.440 | So it took me-- - Did you try?
00:08:36.800 | - Yeah, of course, I tried it very hard.
00:08:38.680 | But it was only when I entered a mathematics school
00:08:41.620 | in ninth grade, where lots of other nerds were present,
00:08:45.320 | that I found people that I could deeply resonate with
00:08:49.140 | and had the impression that, yes, I have friends now,
00:08:52.620 | I found my own people.
00:08:54.060 | And before that, I felt extremely alone in the world.
00:08:56.540 | There was basically nobody I could connect to.
00:08:59.180 | And I remember there was one moment in all these years
00:09:04.180 | when there was a school exchange
00:09:07.740 | and it was a Russian boy,
00:09:09.840 | a kid from the Russian garrison stationed in Eastern Germany,
00:09:13.040 | who visited our school.
00:09:14.500 | And we played a game of chess against each other.
00:09:16.620 | And we looked into each other's eyes
00:09:18.700 | and we sat there for two hours playing this game of chess.
00:09:21.340 | And I had the impression, this is a human being.
00:09:23.940 | He understands what I understand.
00:09:25.620 | We didn't even speak the same language.
00:09:29.160 | - I wonder if your life could have been different
00:09:32.640 | if you knew that it's okay to be different,
00:09:35.820 | to have a different architecture.
00:09:38.140 | Whether accepting that the interface is hard to figure out,
00:09:42.300 | takes a long time to figure out,
00:09:43.660 | and it's okay to be different.
00:09:44.660 | In fact, it's beautiful to be different.
00:09:46.660 | - It was not my main concern.
00:09:51.780 | My main concern was mostly that I was alone.
00:09:55.220 | It was not so much the question, is it okay to be the way I
00:09:58.260 | am, I couldn't do much about it, so I had to deal with it.
00:10:02.740 | But my main issue was that I was not sure
00:10:06.140 | if I would ever meet anybody growing up
00:10:09.780 | that I would connect to at such a deep level
00:10:11.860 | that I would feel that I could belong.
00:10:13.660 | - So there's a visceral, undeniable feeling of being alone.
00:10:17.140 | - Yes.
00:10:18.020 | And I noticed the same thing when I came into the math school
00:10:21.080 | that I think at least half, probably two thirds,
00:10:24.680 | of these kids were severely traumatized
00:10:26.540 | as children growing up, and in large part due to being alone
00:10:31.260 | because they couldn't find anybody to relate to.
00:10:33.660 | - Don't you think everybody's alone, deep down?
00:10:36.180 | - No.
00:10:37.020 | (laughing)
00:10:41.260 | I'm not alone.
00:10:43.940 | I'm not alone anymore.
00:10:45.420 | It took me some time to update and to get over the trauma
00:10:48.260 | and so on, but I felt that in my 20s,
00:10:51.700 | I had lots of friends and I had my place in the world,
00:10:54.700 | and I had no longer doubts that I would never be alone again.
00:11:00.620 | - Is there some aspect to which we're alone together?
00:11:03.140 | You don't see a deep loneliness inside yourself still?
00:11:06.140 | - No.
00:11:06.980 | Sorry.
00:11:08.740 | (laughing)
00:11:10.500 | - Okay, so that's the nonlinear progression
00:11:12.700 | through the stages, I suppose.
00:11:14.020 | You caught up on stage three at some point?
00:11:15.500 | - Yes, so we are at stage four.
00:11:16.940 | And so basically I find that many nerds
00:11:18.860 | jump straight into stage four, bypassing stage three.
00:11:22.340 | - Do they return to it then, later?
00:11:24.060 | - Yeah, of course.
00:11:24.900 | Sometimes they do, not always.
00:11:27.100 | - Yeah.
00:11:27.940 | - The question is basically,
00:11:28.760 | do you stay a little bit autistic
00:11:30.180 | or do you catch up?
00:11:31.340 | And I believe you can catch up.
00:11:33.180 | You can build this missing structure.
00:11:35.460 | - Yeah.
00:11:36.300 | - And basically experience yourself as part of a group,
00:11:39.980 | learn intuitive empathy and develop the sense,
00:11:42.260 | this perceptual sense of feeling what other people feel.
00:11:46.260 | And before that, I could only basically feel this
00:11:48.860 | when I was deeply in love with somebody and we synced or--
00:11:51.900 | - So there's a lot of friction to feeling that way.
00:11:54.780 | Like, only with certain people,
00:11:57.300 | as opposed to it comes naturally.
00:11:58.980 | - Yeah.
00:11:59.820 | But this is something that basically later I felt
00:12:03.140 | started to resolve itself for me, to a large degree.
00:12:06.580 | - What was the trick?
00:12:07.620 | - In many ways, growing up and paying attention.
00:12:12.560 | Meditation did help.
00:12:15.780 | I had some very crucial experiences
00:12:18.540 | in getting close to people, building connections,
00:12:23.220 | cuddling a lot in my student years.
00:12:28.580 | - So really paying attention to the, what is it,
00:12:32.300 | to the feeling another human being fully.
00:12:35.900 | - Loving other people and being loved by other people
00:12:38.620 | and building a space in which you can be safe
00:12:41.100 | and can experiment and touch a lot
00:12:44.820 | and be close to somebody a lot.
00:12:47.300 | And over time, basically at some point you realize,
00:12:51.940 | oh, it's no longer that I feel locked out,
00:12:54.740 | but I feel connected and I experience
00:12:57.780 | where somebody else is at.
00:12:58.940 | And normally my mind is racing very fast at a high frequency,
00:13:03.020 | so it's not always working like this.
00:13:04.460 | Sometimes it works better, sometimes it works less.
00:13:07.340 | But I also don't see this as a pressure.
00:13:09.380 | It's more, it's interesting to observe myself,
00:13:12.900 | which frequency I'm at and at which mode
00:13:16.340 | somebody else is at.
00:13:17.380 | - Yeah, man, the mind is so beautiful in that way.
00:13:21.820 | Sometimes it comes so natural to me,
00:13:24.540 | so easy to pay attention, pay attention to the world fully,
00:13:28.300 | to other people fully.
00:13:29.580 | And sometimes the stress over silly things is overwhelming.
00:13:34.500 | It's so interesting that the mind
00:13:35.740 | is that rollercoaster in that way.
00:13:37.620 | - At stage five you discover how identity is constructed.
00:13:40.580 | - Self-authoring.
00:13:41.420 | - You realize that your values are not terminal,
00:13:43.460 | but they are instrumental to achieving a world
00:13:46.500 | that you like and aesthetics that you prefer.
00:13:49.540 | And the more you understand this,
00:13:51.580 | the more you get agency over how your identity
00:13:54.100 | is constructed.
00:13:55.380 | And you realize that identity
00:13:57.260 | and interpersonal interaction is a costume.
00:14:00.060 | And you should be able to have agency over that costume.
00:14:03.100 | It's useful to have a costume.
00:14:04.900 | It tells something to others
00:14:07.220 | and it allows you to interface in roles.
00:14:10.240 | But being locked into this is a big limitation.
00:14:13.340 | - The word costume kind of implies
00:14:15.700 | that it's fraudulent in some way.
00:14:18.340 | Is costume a good word for you?
00:14:20.940 | Like we present ourselves to the world.
00:14:22.580 | - In some sense, I learned a lot
00:14:24.060 | about costumes at Burning Man.
00:14:25.540 | Before that, I did not really appreciate costumes
00:14:28.020 | and saw them more as uniforms,
00:14:29.900 | like wearing a suit if you are working in a bank
00:14:32.660 | or if you're trying to get startup funding
00:14:37.140 | from a VC in Switzerland.
00:14:39.380 | Then you dress up in a particular way.
00:14:42.420 | And this is mostly to show the other side
00:14:44.700 | that you are willing to play by the rules
00:14:46.420 | and you understand what the rules are.
00:14:48.600 | But there is something deeper.
00:14:51.100 | When you are at Burning Man, your costume
00:14:52.660 | becomes self-expression and there is no boundary
00:14:55.660 | to the self-expression.
00:14:56.780 | You're basically free to wear what you want
00:14:59.460 | to express to other people what you feel like this day
00:15:02.180 | and what kind of interactions you want to have.
00:15:04.500 | - Is the costume a kind of projection of who you are?
00:15:09.120 | - That's very hard to say because the costume
00:15:12.460 | also depends on what other people see in the costume.
00:15:15.540 | And this depends on the context
00:15:16.980 | that the other people understand.
00:15:18.580 | And you have to create something if you want to
00:15:20.700 | that is legible to the other side.
00:15:22.940 | And that means something to yourself.
00:15:25.080 | - Do we become prisoners of the costume?
00:15:28.500 | 'Cause everybody expects us to.
00:15:29.540 | - Some people do.
00:15:30.360 | But I think that once you realize
00:15:33.660 | that you wear a costume at Burning Man,
00:15:35.980 | a variety of costumes, you realize that you cannot
00:15:39.060 | not wear a costume.
00:15:40.200 | Basically everything that you wear and present to others
00:15:44.780 | is something that is to some degree
00:15:48.620 | in addition to what you are deep inside.
00:15:51.980 | - So this stage, in parentheses,
00:15:54.320 | you put full adult comma wisdom.
00:15:57.920 | Why is this full adult?
00:16:00.380 | Why would you say this is full?
00:16:02.900 | And why is it wisdom?
00:16:04.780 | - It does allow you to understand
00:16:07.180 | why other people have different identities from yours.
00:16:10.700 | And it allows you to understand that the difference
00:16:13.560 | between people who vote for different parties
00:16:15.760 | and might have very different opinions
00:16:17.820 | and different value systems is often the accident
00:16:21.300 | of where they are born and what happened after that to them
00:16:25.180 | and what traits they got before they were born.
00:16:29.300 | And at some point you realize the perspective
00:16:32.740 | where you understand that everybody could be you
00:16:34.980 | in a different timeline if you just flip those bits.
00:16:38.580 | - How many costumes do you have?
00:16:40.180 | - I don't count.
00:16:42.260 | - More than one?
00:16:44.540 | - Yeah, of course.
00:16:46.860 | - How easy is it to do costume changes throughout the day?
00:16:49.860 | - It's just a matter of energy and interest.
00:16:53.620 | When you are wearing your pajamas
00:16:55.740 | and you switch out of your pajamas
00:16:57.420 | into, say, a work shirt and pants,
00:17:00.900 | you're making a costume change, right?
00:17:03.100 | And if you are putting on a gown,
00:17:05.300 | you're making a costume change.
00:17:06.140 | - And you could do the same with personality?
00:17:08.340 | - You could, if that's what you're into.
00:17:12.140 | There are people who have multiple personalities
00:17:14.500 | for interaction in multiple worlds, right?
00:17:16.380 | So if you work in a store
00:17:18.100 | and put on a storekeeper personality,
00:17:21.460 | when you're presenting yourself at work,
00:17:23.860 | you develop a sub-personality for this.
00:17:26.420 | And the social persona for many people
00:17:28.460 | is in some sense a puppet
00:17:30.300 | that they're playing like a marionette.
00:17:32.500 | And if they play this all the time,
00:17:33.780 | they might forget that there is something behind this,
00:17:37.380 | there's something what it feels like to be in your skin.
00:17:40.180 | And I guess it's very helpful
00:17:42.260 | if you're able to get back into this.
00:17:44.340 | And for me, the other way around is relatively hard.
00:17:47.780 | For me, it's pretty hard to learn
00:17:49.620 | how to play consistent social roles.
00:17:51.820 | For me, it's much easier just to be real.
00:17:54.460 | - Or not real, but to have one costume.
00:17:58.520 | - No, it's not quite the same.
00:18:00.820 | So basically when you are wearing a costume at Burning Man,
00:18:04.220 | and say you are an extraterrestrial prince,
00:18:07.500 | there's something where you are expressing,
00:18:10.420 | in some sense, something that's closer to yourself
00:18:12.620 | than the way in which you hide yourself
00:18:15.220 | behind standard clothing when you go out in the city
00:18:18.980 | and the default world.
00:18:20.660 | And so this costume that you're wearing at Burning Man
00:18:23.220 | allows you to express more of yourself.
00:18:25.820 | And you have a shorter distance of advertising to people
00:18:30.820 | what kind of person you are,
00:18:33.180 | what kind of interaction you would want to have with them.
00:18:35.300 | And so you get much earlier in medias res.
00:18:40.980 | And I believe it's regrettable
00:18:43.080 | that we do not use the opportunities that we have
00:18:46.500 | with custom-made clothing now to wear costumes
00:18:49.060 | that are much more stylish, that are much more custom-made,
00:18:52.420 | that are not necessarily part of a fashion
00:18:54.420 | in which you express which milieu you're part of
00:18:57.300 | and how up-to-date you are.
00:18:59.020 | But you also express how you are as an individual
00:19:02.460 | and what you want to do today and how you feel today
00:19:04.980 | and what you intend to do about it.
00:19:06.740 | - Well, isn't it easier now in the digital world
00:19:10.140 | to explore different costumes?
00:19:14.180 | I mean, that's the kind of idea with virtual reality.
00:19:16.660 | That's the idea even with Twitter and two-dimensional screens
00:19:19.780 | you can swap out costumes, you can be as weird as you want.
00:19:24.180 | It's easier.
00:19:25.580 | For Burning Man, you have to order things,
00:19:29.140 | you have to make things, you have to,
00:19:31.500 | it's more effort to put on--
00:19:32.540 | - It's even better if you make them yourselves.
00:19:35.300 | - Sure, but it's just easier to do digitally, right?
00:19:39.540 | - It's not about easy, it's about how to get it right.
00:19:42.340 | And for me, the first Burning Man experience,
00:19:44.620 | I got adopted by a bunch of people in Boston
00:19:47.540 | who took me to Burning Man
00:19:48.700 | and we spent a few weekends doing costumes together.
00:19:53.380 | And that was an important part of the experience
00:19:55.380 | where the camp bonded, that people got to know each other,
00:19:58.460 | and we basically grew into the experience
00:20:00.980 | that we would have later.
00:20:02.380 | - So the extraterrestrial prince is based on a true story.
00:20:05.700 | - Yeah.
00:20:10.620 | - I can only imagine what that looks like, Joscha.
00:20:10.620 | - Okay, so-- - Stage six.
00:20:13.020 | - Stage six, at some point you can collapse
00:20:16.060 | the division between self, a personal self,
00:20:20.100 | and world generator again.
00:20:22.020 | And a lot of people get there via meditation
00:20:24.980 | or some of them get there via psychedelics,
00:20:27.500 | some of them by accident,
00:20:29.100 | and you suddenly notice that you are not actually a person,
00:20:32.900 | but you are a vessel that can create a person.
00:20:36.060 | And the person is still there, you observe
00:20:37.900 | that personal self, but you observe the personal self
00:20:40.220 | from the outside.
00:20:41.140 | And you notice it's a representation.
00:20:43.940 | And you might also notice that the world
00:20:46.180 | that is being created is a representation.
00:20:48.220 | If not, then you might experience that I am the universe,
00:20:51.140 | I am the thing that is creating everything.
00:20:53.260 | And of course, what you're creating is not quantum mechanics
00:20:56.540 | and the physical universe,
00:20:57.900 | what you're creating is this game engine
00:21:00.460 | that is updating the world and you're creating
00:21:02.700 | your valence, your feelings,
00:21:04.060 | and all the people inside of that world,
00:21:07.540 | including the person that you identify
00:21:09.780 | with yourself in this world.
00:21:11.580 | - Are you creating the game engine
00:21:12.940 | or are you noticing the game engine?
00:21:15.340 | - You notice how you're generating the game engine.
00:21:18.620 | And I mean, when you are dreaming at night,
00:21:20.540 | you can, if you have a lucid dream,
00:21:23.780 | you can learn how to do this deliberately.
00:21:26.020 | And in principle, you can also do it during the day.
00:21:29.060 | And the reason why we don't get to do this
00:21:32.060 | from the beginning and why we don't have agency
00:21:34.340 | of our feelings right away is because we would game it
00:21:36.940 | before we have the necessary amount of wisdom
00:21:40.020 | to deal with creating this dream that we are in.
00:21:43.980 | - You don't want to get access to cheat codes too quickly,
00:21:47.340 | otherwise you won't enjoy the game.
00:21:48.860 | - So stage five is already pretty rare
00:21:51.740 | and stage six is even more rare.
00:21:53.260 | You basically find this mostly with advanced
00:21:56.500 | Buddhist meditators and so on that are dropping
00:21:59.700 | into the stage and can induce it at will
00:22:02.260 | and spend time in it.
00:22:03.780 | - So stage five requires a good therapist,
00:22:06.300 | stage six requires a good Buddhist spiritual leader.
00:22:11.300 | - For instance, could be that is the right thing to do.
00:22:15.540 | But it's not that these stages give you scores
00:22:18.540 | or levels that you need to advance to.
00:22:21.260 | It's not that the next stage is better.
00:22:23.700 | You live your life in the mode that works best
00:22:26.100 | at any given moment and when your mind decides
00:22:28.660 | that you should have a different configuration,
00:22:31.340 | then it's building that configuration
00:22:32.940 | and for many people they stay happily at stage three
00:22:37.020 | and experience themselves as part of groups
00:22:39.660 | and there's nothing wrong with this.
00:22:41.020 | And for some people this doesn't work
00:22:42.620 | and they're forced to build more agency
00:22:45.060 | over their rational beliefs than this
00:22:47.180 | and construct their norms rationally.
00:22:49.580 | And so they go to this level.
00:22:51.660 | And stage seven is something that is more or less
00:22:54.340 | hypothetical, that would be the stage in which,
00:22:57.300 | it's basically a transhumanist stage
00:22:58.980 | in which you understand how you work
00:23:00.420 | and in which the mind fully realizes how it's implemented
00:23:03.900 | and can also in principle enter different modes
00:23:07.180 | in which it could be implemented.
00:23:08.460 | And that's the stage that, as far as I understand,
00:23:11.900 | is not open to people yet.
00:23:13.420 | - Oh, but it is possible through the process of technology.
00:23:17.820 | - Yes, and who knows if there are biological agents
00:23:21.540 | that are working at different time scales than us
00:23:24.060 | that basically become aware of the way
00:23:25.620 | in which they're implemented on ecosystems
00:23:28.220 | and can change that implementation
00:23:30.500 | and have agency over how they're implemented in the world.
00:23:33.820 | And what I find interesting about the discussion
00:23:36.260 | about AI alignment is that it seems to be following
00:23:39.740 | these stages very much.
00:23:41.060 | Most people seem to be in stage three,
00:23:42.780 | also according to Robert Kegan.
00:23:45.380 | I think he says that about 85% of people
00:23:47.980 | are in stage three and stay there.
00:23:50.180 | And if you're in stage three and your opinions
00:23:53.740 | are the result of social assimilation,
00:23:56.660 | then what you're mostly worried about in the AI
00:23:59.140 | is that the AI might have the wrong opinions.
00:24:02.060 | So if the AI says something racist or sexist,
00:24:04.500 | we are all lost because we will assimilate
00:24:06.700 | the wrong opinions from the AI.
00:24:07.980 | And so we need to make sure that the AI has
00:24:10.500 | the right opinions and the right values
00:24:12.180 | and the right structure.
00:24:13.100 | And if you're at stage four, that's not your main concern.
00:24:17.140 | And so most nerds don't really worry about
00:24:21.100 | the algorithmic bias and the model that it picks up
00:24:23.860 | because if there's something wrong with this bias,
00:24:26.100 | the AI ultimately will prove it.
00:24:27.740 | At some point, we'll get it there
00:24:28.900 | that it makes mathematical proofs about reality.
00:24:31.660 | And then it will figure out what's true and what's false.
00:24:34.940 | But you're still worried that the AI might turn you
00:24:37.300 | into paperclips because it might have the wrong values.
00:24:40.140 | So if it's set up with the wrong function
00:24:42.740 | that controls its direction in the world,
00:24:44.900 | then it might do something that is completely horrible
00:24:48.180 | and there's no easy way to fix it.
00:24:49.620 | - So that's more like a stage four rationalist kind of worry.
00:24:52.620 | - And if you are at stage five, you're mostly worried
00:24:54.580 | that the AI is not going to be enlightened fast enough
00:24:57.700 | because you realize that the game is not so much
00:24:59.780 | about intelligence, but about agency,
00:25:01.620 | about the ability to control the future.
00:25:04.900 | And the identity is instrumental to this.
00:25:07.300 | And if you are a human being, I think at some level,
00:25:11.720 | you ought to choose your own identity.
00:25:14.140 | You should not have somebody else pick the costume for you
00:25:17.140 | and then wear it.
00:25:18.360 | But instead you should be mindful
00:25:20.060 | about what you want to be in this world.
00:25:22.100 | And I think if you are an agent that is fully malleable,
00:25:26.020 | that can provide its own source code,
00:25:27.980 | like an AI might do at some point,
00:25:30.380 | then the identity that you will have is whatever you can be.
00:25:34.680 | And in this way, the AI will maybe become everything,
00:25:39.680 | like a planetary control system.
00:25:41.860 | And if it does that, then if we want to coexist with it,
00:25:46.300 | it means that it will have to share purposes with us.
00:25:49.840 | So it cannot be a transactional relationship.
00:25:51.800 | We will not be able to use reinforcement learning
00:25:54.160 | with human feedback to hardwire its values into it.
00:25:58.360 | But this has to happen.
00:25:59.680 | It probably requires that it's conscious,
00:26:01.440 | so it can relate to our own mode of existence
00:26:03.540 | where an observer is observing itself in real time
00:26:06.680 | and within certain temporal frames.
00:26:09.660 | And the other thing is that it probably needs
00:26:12.520 | to have some kind of transcendental orientation,
00:26:14.600 | building shared agency in the same way as we do
00:26:18.280 | when we are able to enter with each other
00:26:20.120 | into non-transactional relationships.
00:26:22.680 | And I find that something that,
00:26:23.960 | because the stage five is so rare,
00:26:26.400 | is missing in much of the discourse.
00:26:29.560 | And I think that we need to, in some sense,
00:26:33.640 | focus on how to formalize love, how to understand love,
00:26:36.880 | and how to build it into the machines
00:26:39.480 | that we are currently building
00:26:40.820 | and that are about to become smarter than us.
00:26:43.960 | - Well, I think this is a good opportunity
00:26:45.360 | to try to sneak up to the idea of enlightenment.
00:26:48.240 | So you wrote a series of good tweets
00:26:52.200 | about consciousness and panpsychism.
00:26:54.360 | So let's break it down.
00:26:55.280 | First you say, "I suspect the experience
00:26:57.600 | "that leads to the panpsychism syndrome
00:27:00.040 | "of some philosophers and other consciousness enthusiasts
00:27:03.600 | "represents the realization that we don't end at the self,
00:27:07.380 | "but share a resonant universe representation
00:27:10.660 | "with every other observer coupled to the same universe."
00:27:15.620 | This actually eventually leads us
00:27:17.540 | to a lot of interesting questions about AI and AGI.
00:27:20.440 | But let's start with this representation.
00:27:22.640 | What is this resonant universe representation?
00:27:26.180 | And what do you think?
00:27:27.900 | Do we share such a representation?
00:27:29.900 | - The neuroscientist Stephen Grossberg
00:27:32.100 | has come up with a cognitive architecture
00:27:34.100 | that he calls the adaptive resonance theory.
00:27:37.000 | And his perspective is that our neurons
00:27:40.320 | can be understood as oscillators
00:27:42.420 | that are resonating with each other
00:27:44.340 | and with outside phenomena.
00:27:46.700 | So the coarse-grained model of the universe
00:27:48.740 | that we are building, in some sense,
00:27:50.540 | is a resonance with objects outside of us in the world.
00:27:55.540 | So basically, we take up patterns of the universe
00:27:58.660 | that we are coupled with,
00:28:00.100 | and our brain is not so much understood as circuitry,
00:28:04.700 | even though this perspective is valid,
00:28:06.660 | but it's almost an ether
00:28:08.860 | in which the individual neurons are passing on
00:28:11.400 | chemo-electrical signals,
00:28:14.000 | or arbitrary signals across all modalities
00:28:16.400 | that can be transmitted between cells,
00:28:18.360 | stimulate each other in this way,
00:28:20.400 | and produce patterns that they modulate
00:28:22.560 | while passing them on.
00:28:24.400 | And this speed of signal progression in the brain
00:28:27.320 | is roughly at the speed of sound, incidentally,
00:28:29.680 | because of the time that it takes
00:28:32.040 | for the signals to hop from cell to cell,
00:28:34.900 | which means it's relatively slow
00:28:36.200 | with respect to the world.
00:28:37.280 | It takes an appreciable fraction of a second
00:28:40.220 | for a signal to go through the entire neocortex,
00:28:42.520 | something like a few hundred milliseconds.
00:28:44.480 | And so there's a lot of stuff happening in that time
00:28:46.760 | where the signal is passing through your brain,
00:28:49.320 | including in the brain itself.
00:28:51.000 | So nothing in the brain is assuming
00:28:53.080 | that stuff happens simultaneously.
00:28:55.360 | Everything in the brain is working in a paradigm
00:28:58.440 | where the world has already moved on
00:29:00.640 | when you are ready to do the next thing to your signal,
00:29:04.120 | including the signal processing system itself.
00:29:06.440 | It's quite a different paradigm
00:29:08.360 | than the one in our digital computers,
00:29:10.280 | where we currently assume that your GPU or CPU
00:29:13.800 | is pretty much globally in the same state.
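As a rough sketch of the coupled-resonator picture described above, in contrast to a globally clocked CPU or GPU, here is a toy Kuramoto-style model in Python. Everything in it (the ring topology, the parameters, the update order) is an illustrative assumption rather than anything specified in the conversation; each unit is nudged only by its immediate neighbors and units are updated one at a time, so no global clock or shared state is assumed.

```python
import numpy as np

# Toy sketch of locally coupled oscillators (illustrative assumptions only).
rng = np.random.default_rng(0)
n = 100
phase = rng.uniform(0, 2 * np.pi, n)   # current phase of each oscillator
natural = rng.normal(1.0, 0.1, n)      # each unit's intrinsic frequency
coupling = 0.8                         # how strongly neighbors pull on each other
dt = 0.01

def step_async(phase):
    """Visit units in random order; each one sees only its two neighbors."""
    for i in rng.permutation(n):
        left, right = phase[(i - 1) % n], phase[(i + 1) % n]
        pull = np.sin(left - phase[i]) + np.sin(right - phase[i])
        phase[i] += dt * (natural[i] + coupling * pull)
    return phase

for _ in range(5000):
    phase = step_async(phase)

# The closer this order parameter is to 1.0, the more the units have
# settled into a shared rhythm, despite purely local, asynchronous updates.
print(f"synchrony: {abs(np.exp(1j * phase).mean()):.2f}")
```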
00:29:15.900 | - So you mentioned there the non-dual state,
00:29:20.440 | and say that some people confuse it for enlightenment.
00:29:23.160 | What's the non-dual state?
00:29:25.320 | - There is a state in which you notice
00:29:27.400 | that you are no longer a person,
00:29:29.000 | and instead you are one with the universe.
00:29:32.400 | - So that speaks to the resonance.
00:29:34.160 | - Yes, but this one with the universe
00:29:36.160 | is of course not accurately modeling
00:29:38.360 | that you are indeed some god entity,
00:29:41.100 | or indeed the universe is becoming aware of itself,
00:29:43.520 | even though you get this experience.
00:29:45.320 | I believe that you get this experience
00:29:47.080 | because your mind is modeling the fact
00:29:50.040 | that you are no longer identified
00:29:52.240 | with the personal self in that state,
00:29:54.320 | but you have transcended this division
00:29:56.480 | between the self model and the world model,
00:29:58.800 | and you are experiencing yourself as your mind,
00:30:01.520 | as something that is representing a universe.
00:30:04.280 | - But that's still part of the model.
00:30:05.920 | - Yes, so it's inside of the model still.
00:30:08.240 | You are still inside of patterns
00:30:10.200 | that are generated in your brain and in your organism.
00:30:13.240 | And what you are now experiencing
00:30:15.600 | is that you're no longer this personal self in there,
00:30:18.560 | but you are the entirety of the mind and its contents.
00:30:22.560 | - Why is it so hard to get there?
00:30:24.200 | - A lot of people who get into this state
00:30:26.960 | think this, or associate it with enlightenment.
00:30:29.120 | I suspect it's a favorite training goal
00:30:31.640 | for a number of meditators.
00:30:33.980 | But I think that enlightenment is in some sense more mundane
00:30:38.400 | and it's a step further, or sideways.
00:30:41.080 | It's the state where you realize
00:30:42.680 | that everything is a representation.
00:30:44.560 | - Yeah, you say enlightenment is a realization
00:30:47.240 | of how experience is implemented.
00:30:49.400 | - Yes, so basically you notice at some point
00:30:52.640 | that your qualia can be deconstructed.
00:30:55.160 | - Reverse engineered?
00:30:56.200 | What, like a, almost like a schematic of it?
00:30:58.800 | - You can start with looking at a face,
00:31:03.360 | and maybe look at your own face in a mirror.
00:31:05.560 | Look at your face for a few hours in a mirror,
00:31:08.760 | or for a few minutes.
00:31:10.700 | At some point it will look very weird,
00:31:12.800 | because you notice that there's actually no face.
00:31:15.160 | You basically start unseeing the face,
00:31:16.800 | what you see is the geometry,
00:31:18.500 | and then you can disassemble the geometry
00:31:21.240 | and realize how that geometry
00:31:23.120 | is being constructed in your mind.
00:31:25.420 | And you can learn to modify this.
00:31:27.080 | So basically you can change these generators
00:31:30.240 | in your own mind to shift the face around,
00:31:32.920 | or to change the construction of the face,
00:31:35.760 | to change the way in which the features
00:31:38.080 | are being assembled.
00:31:39.360 | - Why don't we do that more often?
00:31:40.640 | Why don't we start really messing with reality
00:31:44.160 | without the use of drugs or anything else?
00:31:47.260 | Why don't we get good at this kind of thing?
00:31:50.200 | Like, intentionally.
00:31:54.000 | - Why should we?
00:31:55.280 | Why would you want to do that?
00:31:56.120 | - Because you can morph reality
00:31:57.560 | into something more pleasant for yourself.
00:32:02.340 | Just have fun with it.
00:32:03.440 | - Yeah, that is probably what you shouldn't be doing,
00:32:06.240 | right, because outside of your personal self,
00:32:09.160 | this outer mind, is probably a relatively smart agent.
00:32:12.400 | And what you often notice is that you have thoughts
00:32:14.480 | about how you should live,
00:32:16.040 | but you observe yourself doing different things
00:32:17.940 | and having different feelings.
00:32:19.480 | And that's because your outer mind doesn't believe you.
00:32:22.840 | And doesn't believe your rational thoughts.
00:32:25.400 | - Well, can't you just silence the outer mind?
00:32:27.800 | - The thing is that the outer mind
00:32:29.320 | is usually smarter than you are.
00:32:31.840 | Rational thinking is very brittle.
00:32:33.940 | It's very hard to use logic and symbolic thinking
00:32:36.760 | to have an accurate model of the world.
00:32:39.100 | So there is often an underlying system
00:32:41.440 | that is looking at your rational thoughts
00:32:43.120 | and then tells you, no, you're still missing something.
00:32:45.940 | Your gut feeling is still saying something else.
00:32:48.540 | And this can be, for instance,
00:32:50.580 | you find a partner that looks perfect,
00:32:53.160 | or you find a deal and just build a company
00:32:56.220 | or whatever that looks perfect to you.
00:32:58.060 | And yet, at some level, you feel something is off,
00:33:00.100 | and you cannot put your finger on it,
00:33:01.480 | and the more you reason about it, the better it looks to you.
00:33:04.440 | But the system that is outside still tells you,
00:33:07.660 | no, no, you're missing something.
00:33:09.820 | - And that system is powerful.
00:33:11.620 | - People call this intuition, right?
00:33:13.140 | Intuition is this unreflected part
00:33:15.620 | of your attitude composition and computation
00:33:19.500 | where you produce a model of how you relate to the world
00:33:23.340 | and what you need to do in it,
00:33:24.580 | and what you can do in it, and what's going to happen.
00:33:26.820 | That is usually deeper
00:33:28.360 | and often more accurate than your reason.
00:33:31.600 | - So if we look at this as you write in the tweet,
00:33:34.140 | if we look at this more rigorously,
00:33:36.700 | as a sort of take the panpsychist idea more seriously,
00:33:40.020 | almost as a scientific discipline,
00:33:41.580 | you write that quote, "Fascinatingly,
00:33:44.300 | "the panpsychist interpretation seems to lead
00:33:46.580 | "to observations of practical results
00:33:48.540 | "to a degree that physics fundamentalists
00:33:51.460 | "might call superstitious.
00:33:53.440 | "Reports of long-distance telepathy and remote causation
00:33:57.620 | "are ubiquitous in the general population.
00:34:00.080 | "I am not convinced," says Yoshiba,
00:34:02.900 | "that establishing the empirical reality of telepathy
00:34:05.780 | "would force an update of any part
00:34:07.500 | "of serious academic physics,
00:34:09.120 | "but it could trigger an important revolution
00:34:11.240 | "in both neuroscience and AI from a circuit perspective
00:34:14.860 | "to a coupled complex resonator paradigm."
00:34:19.860 | Are you suggesting that there could be some rigorous
00:34:25.700 | mathematical wisdom to panpsychist perspective on the world?
00:34:30.700 | - So first of all, panpsychism is the perspective
00:34:34.440 | that consciousness is inseparable
00:34:36.900 | from matter in the universe.
00:34:38.780 | And I find panpsychism quite unsatisfying
00:34:41.700 | because it does not explain consciousness, right?
00:34:43.620 | It does not explain how this aspect of matter produces it.
00:34:46.980 | It's also when I try to formalize panpsychism
00:34:49.180 | and write down what it actually means
00:34:51.140 | and with a more formal mathematical language,
00:34:53.940 | it's very difficult to distinguish it
00:34:55.980 | from saying that there is a software side to the world
00:35:00.340 | in the same way as there is a software side
00:35:01.940 | to what the transistors are doing in your computer.
00:35:03.860 | So basically, there's a pattern
00:35:07.540 | that a certain coarse-graining of the universe
00:35:09.180 | that in some regions of the universe
00:35:09.180 | leads to observers that are observing themselves, right?
00:35:12.580 | So panpsychism maybe is not even
00:35:15.220 | when I write it down a position
00:35:17.060 | that is distinct from functionalism.
00:35:19.500 | But intuitively, a lot of people feel
00:35:22.540 | that the activity of matter itself,
00:35:25.420 | of mechanisms in the world is insufficient to explain it.
00:35:27.940 | So it's something that needs to be intrinsic
00:35:31.060 | to matter itself.
00:35:32.580 | And you can, apart from this abstract idea,
00:35:37.580 | have an experience in which you experience yourself
00:35:41.900 | as being the universe,
00:35:43.740 | which I suspect is basically happening
00:35:46.020 | because you managed to dissolve the division
00:35:49.460 | between personal self and mind
00:35:50.980 | that you establish as an infant
00:35:52.660 | when you construct a personal self
00:35:54.500 | and transcend it again and understand how it works.
00:35:57.620 | But there is something deeper
00:35:59.380 | that is that you feel that you're also sharing
00:36:01.980 | a state with other people,
00:36:03.660 | that you have an experience in which you notice
00:36:08.140 | that your personal self is moving into everything else,
00:36:12.340 | that you basically look out of the eyes of another person,
00:36:15.420 | that every agent in the world that is an observer
00:36:19.660 | is in some sense you.
00:36:21.460 | - So if we-- - And we forget
00:36:22.980 | that we are the same agent.
00:36:24.700 | - So is it that we feel that
00:36:27.540 | or do we actually accomplish it?
00:36:29.580 | So is telepathy possible?
00:36:32.540 | Is it real?
00:36:33.820 | - So for me, that's a question
00:36:35.460 | that I don't really know the answer to.
00:36:37.460 | In Turing's famous 1950 paper
00:36:40.300 | in which he describes the Turing test,
00:36:41.740 | he does speculate about telepathy, interestingly,
00:36:44.220 | and asks himself if telepathy is real
00:36:47.460 | and he thinks that it very well might be,
00:36:50.060 | what would be the implication for AI systems
00:36:53.860 | that try to be intelligent
00:36:55.060 | because he didn't see a mechanism
00:36:57.340 | by which a computer program would become telepathic.
00:37:00.580 | And I suspect if telepathy would exist,
00:37:03.940 | or if all the reports that you get from people
00:37:06.820 | when you ask the normal person on the street,
00:37:09.060 | I find that very often they say,
00:37:12.260 | "I have experiences with telepathy."
00:37:14.100 | The scientists might not be interested in this
00:37:16.500 | and might not have a theory about this,
00:37:18.420 | but I have difficulty explaining it away.
00:37:21.020 | And so you could say maybe this is a superstition,
00:37:24.380 | maybe it's a false memory,
00:37:25.740 | or maybe it's a little bit of psychosis, who knows?
00:37:28.740 | Maybe somebody wants to make their own life more interesting
00:37:31.380 | or misremember something.
00:37:32.660 | But a lot of people report,
00:37:34.540 | "I noticed something terrible happened to my partner
00:37:36.900 | "and I noticed this exactly the moment it happened,
00:37:39.660 | "where my child had an accident
00:37:41.500 | "and I knew that was happening
00:37:43.100 | "and the child was in a different town."
00:37:45.300 | So maybe it's a false memory
00:37:47.140 | where this is later on mistakenly attributed,
00:37:50.340 | but a lot of people think
00:37:51.500 | that is not the correct explanation.
00:37:53.500 | So if something like this was real, what would it mean?
00:37:56.860 | It probably would mean that either your body is an antenna
00:38:00.180 | that is sending information over all sorts of channels,
00:38:03.180 | like maybe just electromagnetic radio signals
00:38:08.060 | that you're sending over long distances
00:38:09.900 | and you get attuned to another person
00:38:12.100 | that you spend enough time with
00:38:13.220 | to get a few bits out of the ether
00:38:16.540 | to figure out what this person is doing.
00:38:18.980 | Or maybe it's also when you are very close to somebody
00:38:21.620 | and you become empathetic with them,
00:38:23.260 | what happens is that you go
00:38:24.780 | into a resonance state with them,
00:38:27.220 | similar to when people go into a seance
00:38:29.700 | and they go into a trance state
00:38:34.740 | and they start shifting a Ouija board around on the table.
00:38:34.740 | I think what happens is that their minds go,
00:38:37.940 | where their nervous systems, into a resonance state
00:38:40.980 | in which they basically create something
00:38:42.940 | like a shared dream between them.
00:38:44.820 | - Physical closeness or closeness broadly defined?
00:38:48.460 | - With physical closeness, it's much easier
00:38:50.580 | to experience empathy with someone.
00:38:52.780 | I suspect it would be difficult for me
00:38:54.540 | to have empathy for you if you were in a different town.
00:38:58.140 | Also, how would that work?
00:39:01.220 | But if you are very close to someone,
00:39:03.140 | you'd pick up all sorts of signals from their body,
00:39:06.000 | not just via your eyes, but with your entire body.
00:39:09.540 | And if the nervous system sits on the other side
00:39:12.800 | and the intercellular communication sits on the other side
00:39:15.140 | and is integrating over all these signals,
00:39:17.400 | you can make inferences about the state of the other.
00:39:19.740 | And it's not just the personal self
00:39:21.380 | that does this via reasoning, but your perceptual system.
00:39:24.380 | And what basically happens is that your representations
00:39:27.560 | are directly interacting.
00:39:28.660 | It's the physical resonant models of the universe
00:39:32.940 | that exist in your nervous system and in your body
00:39:35.620 | might go into resonance with others
00:39:37.340 | and start sharing some of their states.
00:39:39.500 | So you basically, being next to somebody,
00:39:43.220 | you pick up some of their vibes
00:39:45.220 | and feel without looking at them
00:39:48.180 | what they're feeling in this moment.
00:39:49.980 | And it's difficult for you, if you're very empathetic,
00:39:53.060 | to detach yourself from it and have an emotional state
00:39:56.740 | that is completely independent from your environment.
00:39:59.060 | People who are highly empathetic are describing this.
00:40:02.380 | And now imagine that a lot of organisms on this planet
00:40:06.540 | have representations of the environment
00:40:08.880 | and operate like this,
00:40:10.020 | and they are adjacent to each other and overlapping.
00:40:12.860 | So there's going to be some degree
00:40:14.340 | in which there is basically some chained interaction
00:40:16.940 | and we are forming some slightly shared representation.
00:40:21.940 | And there are relatively few neuroscientists
00:40:25.060 | who consider this possibility.
00:40:26.780 | I think a big rarity in this regard is Michael Levin
00:40:31.780 | who is considering these things in earnest.
00:40:35.540 | And I stumbled on this train of thought
00:40:38.300 | mostly by noticing that the tasks of a neuron
00:40:42.180 | can be fulfilled by other cells as well.
00:40:45.220 | They can send different types of chemical messages
00:40:47.860 | and physical messages to the adjacent cells
00:40:50.420 | and learn when to do this and when not,
00:40:52.840 | make this conditional
00:40:53.760 | and become universal function approximators.
00:40:56.400 | The only thing that they cannot do
00:40:57.740 | is telegraph information over axons very quickly
00:41:01.060 | over long distances.
00:41:02.380 | So neurons in this perspective
00:41:04.200 | are a specially adapted kind of telegraph cell
00:41:07.780 | that has evolved so we can move our muscles very fast.
00:41:11.360 | But our body is in principle able
00:41:14.780 | to also make models of the world just much, much slower.
00:41:18.320 | - It's interesting though,
00:41:21.940 | that at this time at least in human history,
00:41:23.900 | there seems to be a gap between the tools of science
00:41:26.540 | and the subjective experience that people report.
00:41:30.540 | Like you're talking about with telepathy
00:41:32.500 | and it seems like we're not quite there.
00:41:38.020 | - No, I think that there is no gap
00:41:39.460 | between the tools of science and telepathy.
00:41:41.420 | Either it's there or it's not,
00:41:42.500 | and it's an empirical question.
00:41:43.740 | And if it's there,
00:41:44.580 | we should be able to detect it in a lab.
00:41:47.180 | - So why is there not a lot of Michael Levins walking around?
00:41:50.580 | - I don't think that Michael Levin
00:41:51.860 | is specifically focused on telepathy very much.
00:41:55.700 | He is focused on self-organization
00:41:58.180 | in living organisms and in brains,
00:42:01.180 | both as a paradigm for development
00:42:03.020 | and as a paradigm for information processing.
00:42:05.620 | And when you think about how information processing
00:42:08.020 | works in organisms,
00:42:09.020 | there is first of all radical locality,
00:42:11.700 | which means everything is decided locally
00:42:13.820 | from the perspective of an individual cell.
00:42:16.060 | The individual cell is the agent.
00:42:18.060 | And the other one is coherence.
00:42:19.900 | Basically, there needs to be some criterion
00:42:22.480 | that determines how these cells are interacting
00:42:25.500 | in such a way that order emerges
00:42:27.860 | on the next level of structure.
00:42:29.820 | And this principle of coherence,
00:42:32.140 | of imposing constraints that are not validated
00:42:37.140 | by the individual parts and lead to coherent structure
00:42:40.780 | to basically transcendent agency
00:42:43.380 | where you form an agent on the next level of organization
00:42:46.020 | is crucial in this perspective.
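A toy sketch of radical locality plus a coherence constraint, under the assumption that a simple "copy your neighborhood's majority" rule can stand in for whatever real cells do: every decision is made locally by an individual cell, yet the grid tends to organize into large coherent domains, order appearing at the next level of structure. Grid size, neighborhood, and update rule are arbitrary illustrative choices.

```python
import random

# Every cell updates purely from its neighbors' states; a local coherence
# rule (copy the local majority) tends to produce large coherent domains.

N = 32
grid = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def local_majority(g, i, j):
    total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                total += g[(i + di) % N][(j + dj) % N]
    if total > 0:
        return 1
    if total < 0:
        return -1
    return g[i][j]          # ties: keep the current state

for _ in range(50):          # repeated, purely local updates
    grid = [[local_majority(grid, i, j) for j in range(N)] for i in range(N)]

# How coherent did the whole become? Fraction of cells agreeing with the
# global majority, even though only local decisions were ever made.
flat = [s for row in grid for s in row]
majority = 1 if sum(flat) >= 0 else -1
print("agreement with global majority:", sum(s == majority for s in flat) / len(flat))
```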
00:42:49.060 | - It's so cool that radical locality
00:42:52.780 | leads to the emergence of complexity at the higher layers.
00:42:57.580 | - And I think what Michael Levin is looking at
00:43:00.220 | is nothing that is outside of the realm of science
00:43:03.500 | in any way.
00:43:04.340 | It's just that he is a paradigmatic thinker
00:43:07.700 | who develops his own paradigm.
00:43:10.180 | And most of the neuroscientists
00:43:11.980 | are using a different paradigm at this point.
00:43:14.620 | And this often happens in science
00:43:16.500 | that a field has a few paradigms
00:43:18.820 | in which people try to understand reality
00:43:21.580 | and build concepts and make experiments.
00:43:24.320 | - You're kind of one of those type of paradigmatic thinkers.
00:43:28.340 | Actually, if we can take a tangent on that,
00:43:31.220 | once again returning to the biblical verses of your tweets.
00:43:34.280 | You write, my public explorations
00:43:37.380 | are not driven by audience service,
00:43:39.900 | but by my lack of ability for discovering,
00:43:42.820 | understanding, or following the relevant authorities.
00:43:45.220 | So I have to develop my own thoughts.
00:43:48.860 | Since I think autonomously,
00:43:50.820 | these thoughts cannot always be very good.
00:43:54.120 | That's you apologizing for the chaos of your thoughts,
00:43:57.180 | or perhaps not apologizing, just identifying.
00:43:59.220 | But let me ask the question.
00:44:00.860 | Since we talked about Michael Levin and yourself,
00:44:05.460 | who I think are very kind of radical,
00:44:09.780 | big, independent thinkers,
00:44:12.240 | can we reverse engineer your process
00:44:15.100 | of thinking autonomously?
00:44:16.900 | How do you do it?
00:44:18.460 | How can humans do it?
00:44:19.680 | How can you avoid being influenced by,
00:44:24.820 | what is it, stage three?
00:44:27.600 | - Well, why would you want to do that?
00:44:31.100 | You see what is working for you,
00:44:33.540 | and if it's not working for you,
00:44:35.700 | you build another structure that works better for you.
00:44:38.740 | And so I found myself, when I was thrown into this world,
00:44:43.740 | in a state where my intuitions were not working for me.
00:44:46.660 | I was not able to understand
00:44:49.580 | how I would be able to survive in this world,
00:44:51.660 | and build the things that I was interested in,
00:44:53.820 | build the kinds of relationship I needed to,
00:44:55.540 | but work on the topics that I wanted to make progress on.
00:45:00.020 | And so I had to learn.
00:45:01.420 | And for me, Twitter is not some tool of publication.
00:45:05.340 | It's not something where I put stuff
00:45:07.700 | that I entirely believe to be true and provable.
00:45:10.760 | It's an interactive notebook
00:45:11.980 | in which I explore possibilities.
00:45:14.420 | And I found that when I tried to understand
00:45:17.340 | how the mind and how consciousness works,
00:45:19.940 | I was quite optimistic.
00:45:21.220 | I thought there needs to be a big body of knowledge
00:45:24.460 | that I can just study and that works.
00:45:26.820 | And so I entered studies in philosophy and computer science,
00:45:31.820 | and later psychology, and a bit of neuroscience, and so on.
00:45:36.580 | And I was disappointed by what I found,
00:45:39.940 | because I found that the questions of how consciousness
00:45:43.300 | and so on works, how emotion works,
00:45:45.820 | how it's possible that the system can experience anything,
00:45:48.780 | how motivation emerges in the mind,
00:45:51.340 | were not being answered by the authorities that I met
00:45:55.220 | and the schools that were around.
00:45:59.260 | And instead, I found that it was individual thinkers
00:46:02.300 | that had useful ideas that sometimes were good,
00:46:05.220 | sometimes were not so good.
00:46:06.500 | Sometimes were adopted by a large group of people.
00:46:08.820 | Sometimes were rejected by large groups of people.
00:46:11.540 | But for me, it was much more interesting
00:46:14.000 | to see these minds as individuals.
00:46:15.820 | And in my perspective,
00:46:17.340 | thinking is still something that is done not in groups.
00:46:19.660 | That has to be done by individuals.
00:46:21.980 | - So that motivated you
00:46:23.140 | to become an individual thinker yourself?
00:46:25.140 | - I didn't have a choice.
00:46:26.820 | Obviously, I didn't find a group that thought in a way
00:46:29.300 | where I felt, okay, I can just adopt everything
00:46:32.420 | that everybody thinks here,
00:46:33.540 | and now I understand how consciousness works.
00:46:36.580 | Or how the mind works, or how thinking works,
00:46:39.100 | or what thinking even is, or what feelings are,
00:46:41.220 | and how they're implemented, and so on.
00:46:43.340 | So to figure all this out,
00:46:44.820 | I had to take a lot of ideas from individuals,
00:46:48.580 | and then try to put them together
00:46:49.940 | in something that works for myself.
00:46:52.020 | And on one hand, I think it helps
00:46:54.500 | if you try to go down and find first principles
00:46:57.700 | in which you can recreate how thinking works,
00:47:00.780 | how languages work, what representation is,
00:47:03.940 | whether representation is necessary,
00:47:05.740 | how the relationship between a representing agent
00:47:09.340 | and the world works in general.
00:47:11.340 | - But how do you escape the influence?
00:47:13.380 | Once again, the pressure of the crowd.
00:47:16.380 | Whether it's you in responding to the pressure,
00:47:21.100 | or you being swept up by the pressure.
00:47:24.660 | If you even just look at Twitter,
00:47:26.300 | the opinions of the crowd.
00:47:27.380 | - I don't feel pressure from the crowd.
00:47:29.100 | I'm completely immune to that.
00:47:30.580 | In the same sense, I don't have respect for authority.
00:47:34.540 | I have respect for what an individual is accomplishing,
00:47:37.660 | or have respect for mental firepower.
00:47:41.020 | So, but it's not that I meet somebody
00:47:43.420 | and get like, awed and unable to speak.
00:47:46.180 | Or when a large group of people has a certain idea
00:47:50.740 | that is different from mine,
00:47:51.980 | I don't necessarily feel intimidated,
00:47:54.580 | which has often been a problem for me in my life
00:47:56.780 | because I lack instincts that other people develop
00:48:00.380 | at a very young age,
00:48:01.340 | and that help with their self-preservation
00:48:04.220 | in a social environment.
00:48:05.980 | So I had to learn a lot of things the hard way.
00:48:09.140 | - Yeah.
00:48:10.980 | So is there a practical advice you can give
00:48:13.380 | on how to think paradigmatically,
00:48:16.020 | how to think independently?
00:48:17.980 | Or, you know, because you've kind of said,
00:48:20.180 | I had no choice.
00:48:22.180 | But I think to a degree you have a choice
00:48:25.380 | because you said you want to be productive.
00:48:29.500 | And I think thinking independently is productive
00:48:31.940 | if what you're curious about is understanding the world,
00:48:36.300 | especially when the problems are very kind of new and open.
00:48:39.860 | And so it seems like this is a active process.
00:48:45.780 | Like we can choose to do that, we can practice it.
00:48:51.940 | - Well, it's a very basic question.
00:48:53.820 | When you read a theory that you find convincing
00:48:56.300 | or interesting, how do you know?
00:48:58.360 | It's very interesting to figure out
00:49:01.420 | what are the sources of that other person,
00:49:03.620 | not which authority can they refer to
00:49:06.540 | that is then taking off the burden of being truthful.
00:49:08.940 | But how did this authority in turn know?
00:49:11.180 | What is the epistemic chain to observables?
00:49:14.060 | What are the first principles
00:49:15.340 | from which the whole thing is derived?
00:49:17.460 | And when I was young, I was not blessed
00:49:20.500 | with a lot of people around myself
00:49:23.780 | who knew how to make proofs on first principles.
00:49:26.020 | And I think mathematicians do this quite naturally,
00:49:28.700 | but most of the great mathematicians
00:49:31.100 | do not become mathematicians in school,
00:49:33.660 | but they tend to be self-taught
00:49:35.700 | because school teachers tend not to be mathematicians.
00:49:38.740 | They tend not to be people who derive things
00:49:40.620 | from first principles.
00:49:42.100 | So when you ask your school teacher,
00:49:44.140 | why does two plus two equal four,
00:49:46.500 | does your school teacher give you the right answer?
00:49:49.940 | It's a simple game and there are many simple games
00:49:52.780 | that you could play.
00:49:53.620 | And most of those games that you could just
00:49:57.020 | take different rules would not lead
00:49:58.820 | to an interesting arithmetic.
00:50:00.620 | And so it's just an exploration, but you can try
00:50:03.140 | what happens if you take different axioms.
00:50:04.820 | And here is how you build axioms
00:50:06.380 | and derive addition from them.
00:50:09.500 | And addition built on them is basically just syntactic sugar.
00:50:13.180 | And so I wish that somebody would have opened me this vista
00:50:18.180 | and explained to me how I can build a language
00:50:22.300 | in my own mind from which I can derive what I'm seeing
00:50:25.060 | and how I can, which I can make geometry and counting
00:50:28.740 | and all the number games that we are playing in our life.
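A minimal sketch of the kind of construction being described, assuming Peano-style axioms: a zero, a successor operation, and addition defined by recursion on them, so that the familiar "+" is revealed as shorthand over repeated succession. The tuple encoding is an arbitrary illustrative choice.

```python
# Peano-style sketch: zero, a successor, and addition defined by recursion
# on those two axioms. "+" is then syntactic sugar over repeated succession.

ZERO = ()                         # zero
def succ(n):                      # successor: wrap once more
    return (n,)

def add(a, b):
    # a + 0 = a ;  a + succ(b) = succ(a + b)
    return a if b == ZERO else succ(add(a, b[0]))

def to_int(n):                    # decode back to decimals, for printing only
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))    # 5
```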
00:50:33.460 | And on the other hand, I felt that I learned a lot of this
00:50:37.140 | while I was programming as a child.
00:50:39.420 | When you start out with a computer like a Commodore 64,
00:50:42.540 | which doesn't have a lot of functionality,
00:50:44.980 | it's relatively easy to see how a bunch
00:50:47.340 | of relatively simple circuits are just basically
00:50:51.340 | performing hashes between bit patterns
00:50:54.460 | and how you can build the entirety of mathematics
00:50:57.780 | and computation on top of this
00:50:59.380 | and all the representational languages that you need.
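A small sketch of that intuition, using NAND as a stand-in for the "relatively simple circuits": NOT, AND, XOR, and finally multi-bit addition are just compositions of the one primitive operating on bit patterns. The gate choice and eight-bit width are illustrative assumptions.

```python
# Build arithmetic from one trivial circuit: NAND on single bits. Everything
# else is composition of it -- bit patterns all the way down.

def nand(a, b):
    return 0 if (a and b) else 1

def inv(a):
    return nand(a, a)

def and_(a, b):
    return inv(nand(a, b))

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def add8(x, y):
    """Ripple-carry addition of two 8-bit numbers, built only from NAND."""
    carry, result = 0, 0
    for i in range(8):
        a, b = (x >> i) & 1, (y >> i) & 1
        s = xor(xor(a, b), carry)
        # carry-out = (a AND b) OR (carry AND (a XOR b)), expressed via NAND
        carry = nand(nand(a, b), nand(carry, xor(a, b)))
        result |= s << i
    return result

print(add8(77, 58))   # 135, computed gate by gate
```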
00:51:02.640 | - Man, Commodore 64 could be one of the sexiest machines
00:51:05.540 | ever built, if I do say so myself.
00:51:08.560 | If we can return to this really interesting idea
00:51:13.060 | that we started to talk about with panpsychism.
00:51:16.420 | - Sure.
00:51:19.480 | - And the complex resonator paradigm
00:51:22.340 | and the verses of your tweets.
00:51:26.420 | You write, "Instead of treating eyes, ears, and skin
00:51:29.460 | "as separate sensory systems
00:51:30.740 | "with fundamentally different modalities,
00:51:32.780 | "we might understand them as overlapping aspects
00:51:35.260 | "of the same universe,
00:51:36.660 | "coupled at the same temporal resolution
00:51:39.340 | "and almost inseparable
00:51:40.600 | "from a single shared resonant model.
00:51:42.780 | "Instead of treating mental representations
00:51:44.580 | "as fully isolated between minds,
00:51:46.860 | "the representations of physically adjacent observers
00:51:50.680 | "might directly interact and produce causal effects
00:51:53.860 | "through the coordination of the perception
00:51:55.660 | "and behavior of world modeling observers."
00:51:58.420 | So the modalities, the distinction between modalities,
00:52:02.940 | let's throw that away.
00:52:04.300 | The distinction between the individuals,
00:52:06.140 | let's throw that away.
00:52:07.800 | So what does this interaction representations look like?
00:52:11.720 | - When you think about how you represent
00:52:16.780 | the interaction of us in this room,
00:52:18.980 | at some level, the modalities are quite distinct.
00:52:22.180 | They're not completely distinct,
00:52:23.820 | but you can see this as vision.
00:52:25.660 | You can close your eyes,
00:52:26.700 | and then you don't see a lot anymore,
00:52:28.860 | but you still imagine how my mouth is moving
00:52:31.260 | when you hear something,
00:52:32.180 | and you know that it's very close to the sound
00:52:37.180 | that you can just open your eyes
00:52:38.500 | and you get back into the shared merged space.
00:52:41.300 | And we also have these experiments
00:52:43.660 | where we notice that the way in which my lips are moving
00:52:46.620 | are affecting how you hear the sound,
00:52:49.060 | and also vice versa.
00:52:50.460 | The sounds that you're hearing have an influence
00:52:52.460 | on how you interpret some of the visual features.
00:52:55.660 | And so these modalities are not separate in your mind.
00:53:00.300 | They are merged at some fundamental level
00:53:02.940 | where you are interpreting the entire scene that you're in.
00:53:06.820 | And your own interactions in the scene
00:53:08.980 | are also not completely separate from the interactions
00:53:11.580 | of the other individual in the scene,
00:53:13.380 | but there is some resonance that is going on
00:53:15.620 | where we also have a degree of shared mental representations
00:53:19.660 | and shared empathy due to being in the same space
00:53:22.740 | and having vibes between each other.
00:53:24.660 | - Vibes.
00:53:25.500 | So the question though is how deeply intertwined
00:53:29.700 | is this multi-modality, multi-agent system?
00:53:33.480 | Like how, I mean, this is going to the telepathy question
00:53:38.580 | without the woo-woo meaning of the word telepathy.
00:53:43.380 | Is like how, like what's going on here
00:53:46.100 | in this room right now?
00:53:48.060 | - So if telepathy would work, how could it work?
00:53:51.580 | - Yeah.
00:53:52.420 | - Right, so imagine that all the cells in your body
00:53:56.540 | are sending signals in a similar way as neurons are doing.
00:53:59.660 | Just by touching the other cells
00:54:01.300 | and sending chemicals to them,
00:54:02.460 | the other cells interpreting them,
00:54:04.140 | learning how to react to them.
00:54:05.700 | And they learn how to approximate functions in this way
00:54:08.380 | and compute behavior for the organisms.
00:54:11.100 | And this is something that is open to plants as well.
00:54:14.300 | And so plants probably have software running on them
00:54:16.340 | that is controlling how the plant is working
00:54:18.780 | in a similar way as you have a mind
00:54:20.380 | that is controlling how you are behaving in the world.
00:54:23.740 | And this spirit of plants,
00:54:26.900 | which is something that has been very well described
00:54:30.380 | by our ancestors and they found this quite normal.
00:54:32.900 | But for some reason, since the Enlightenment,
00:54:36.020 | we are treating this notion that there are spirits in nature
00:54:39.140 | and that plants have spirits as a superstition.
00:54:41.540 | And I think we probably have to rediscover that,
00:54:45.340 | that plants have software running on them.
00:54:47.820 | And we already did, right?
00:54:49.540 | We noticed that there is a control system in the plant
00:54:52.500 | that connects every part of the plant
00:54:54.660 | to every other part of the plant
00:54:56.140 | and produces coherent behavior in the plant
00:54:58.940 | that is of course much, much slower
00:55:00.580 | than the coherent behavior in an animal like us
00:55:04.220 | that has a nervous system
00:55:05.700 | that where everything is synchronized
00:55:07.620 | much, much faster by the neurons.
00:55:10.060 | But what you also notice is that
00:55:12.180 | if a plant is sitting next to another plant,
00:55:14.100 | like you have a very old tree
00:55:15.300 | and this tree is building some kind of information highway
00:55:18.180 | along its cells so it can send information
00:55:21.020 | from its leaves to its roots
00:55:22.580 | and from some part of the root to another part of the roots.
00:55:25.340 | And as a fungus living next to the tree,
00:55:27.340 | the fungus can probably piggyback on the communication
00:55:30.580 | between the cells of the tree
00:55:32.060 | and send its own signals to the tree and vice versa.
00:55:34.940 | The tree might be able to send information to the fungus
00:55:37.900 | because after all, how would they build a viable firewall
00:55:40.740 | if that other organism is sitting next to them all the time
00:55:43.500 | and it's never moving away?
00:55:45.140 | And so they will have to get along.
00:55:46.940 | And over a long enough time frame,
00:55:49.380 | the networks of roots in the forest
00:55:51.600 | and all the other plants that are there
00:55:53.460 | and the fungi that are there
00:55:56.620 | might be forming something like a biological internet.
00:55:59.540 | - But the question there is, do they have to be touching?
00:56:03.260 | Is biology at a distance possible?
00:56:06.100 | - Of course, you can use any kind of physical signal.
00:56:08.420 | You can use sounds, you can use electromagnetic waves
00:56:12.860 | that are integrated over many cells.
00:56:14.780 | It's conceivable that across distances,
00:56:18.140 | there are many kinds of information pathways.
00:56:21.140 | But also, our planetary surface
00:56:24.180 | is pretty full of organisms, full of cells.
00:56:27.100 | - So everything is touching everything else.
00:56:28.940 | - Yeah, and it's been doing this for many millions
00:56:32.700 | and even billions of years.
00:56:34.540 | So there was enough time
00:56:35.660 | for information processing networks to form.
00:56:38.860 | And if you think about how a mind is self-organizing,
00:56:42.180 | basically it needs to, in some sense,
00:56:44.140 | reward the cells for computing the mind,
00:56:46.340 | for building the necessary dynamics between the cells
00:56:50.580 | that allow the mind to stabilize itself and remain on there.
00:56:54.260 | But if you look at these spirits of plants
00:56:57.380 | that are growing very close to each other in the forest,
00:56:59.940 | they might be almost growing into each other,
00:57:02.500 | these spirits might even be able to move to some degree,
00:57:05.060 | to become somewhat dislocated
00:57:07.460 | and shift around in that ecosystem.
00:57:11.620 | So if you think about what a mind is,
00:57:14.180 | it's a bunch of activation waves
00:57:16.180 | that form coherent patterns and process information
00:57:19.140 | in a way that are colonizing an environment well enough
00:57:23.660 | to allow the continuous sustenance of the mind,
00:57:27.300 | the continuous stability and self-stabilization of the mind.
00:57:31.060 | Then it's conceivable
00:57:33.780 | that we can link into this biological internet,
00:57:36.780 | not necessarily at the speed of our nervous system,
00:57:39.800 | but maybe at the speed of our body.
00:57:41.300 | And make some kind of subconscious connection to the world
00:57:44.500 | where we use our body as an antenna
00:57:46.700 | into biological information processing.
00:57:48.980 | Now, these ideas are completely speculative.
00:57:51.500 | I don't know if any of that is true.
00:57:53.540 | But if that was true, and if you want to explain telepathy,
00:57:56.260 | I think it's much more likely that telepathy
00:57:59.940 | could be explained using such mechanisms
00:58:01.900 | rather than undiscovered quantum processes
00:58:05.100 | that would break the standard model of physics.
00:58:07.460 | - Could there be undiscovered processes
00:58:10.980 | that don't break?
00:58:12.460 | - Yeah, so if you think about something
00:58:16.300 | like an internet in the forest,
00:58:18.700 | that is something that is borderline discovered.
00:58:21.500 | There are basically a lot of scientists
00:58:22.780 | who point out that they do observe
00:58:25.500 | that plants are communicating in the forest
00:58:27.620 | through wood networks, and send information,
00:58:30.340 | for instance, warn each other about new pests
00:58:33.260 | entering the forest and things that are happening like this.
00:58:35.820 | So basically, there is communication
00:58:38.020 | between plants and fungi that has been observed.
00:58:40.420 | - Well, it's been observed, but we haven't plugged into it.
00:58:43.460 | So it's like if you observe humans,
00:58:44.980 | they seem to be communicating with a smartphone thing,
00:58:47.220 | but you don't understand how a smartphone works
00:58:49.540 | and how the mechanism of the internet works.
00:58:52.140 | But maybe it's possible to really understand
00:58:56.100 | the full richness of the biological internet
00:59:00.100 | that connects us.
00:59:01.100 | - An interesting question is whether the communication
00:59:03.860 | and the organization principles
00:59:05.700 | of biological information processing
00:59:07.620 | are as complicated as the technology that we've built.
00:59:10.380 | They're set up on very different principles, right?
00:59:13.580 | And simultaneously, it works very differently
00:59:16.460 | in biological systems,
00:59:18.260 | and the entire thing needs to be stochastic
00:59:21.060 | instead of being fully deterministic,
00:59:23.740 | or almost fully deterministic as our digital computers are.
00:59:27.140 | So there is a different base protocol layer
00:59:30.940 | that would emerge over the biological structure
00:59:35.220 | if such a thing would be happening.
00:59:37.180 | And again, I'm not saying here that telepathy works
00:59:39.860 | and not saying that this is not true,
00:59:42.780 | but what I'm saying is I think I'm open to a possibility
00:59:47.780 | that we see that a few bits can be traveling long distance
00:59:50.900 | between organisms using biological information processing
00:59:54.700 | in ways that we are not completely aware of right now,
00:59:59.140 | and that are more similar to many of the stories
01:00:01.580 | that were completely normal for our ancestors.
01:00:04.260 | - Well, this kind of interacting,
01:00:07.620 | interwined representations takes us
01:00:11.420 | to the big ending of your tweet series.
01:00:16.420 | You write, quote, "I wonder if self-improving AGI
01:00:20.180 | "might end up saturating physical environments
01:00:22.720 | "with intelligence to such a degree
01:00:25.780 | "that isolation of individual mental states
01:00:27.860 | "becomes almost impossible,
01:00:30.220 | "and the representations
01:00:31.360 | "of all complex self-organizing agents
01:00:33.440 | "merge permanently with each other."
01:00:37.080 | So that's a really interesting idea.
01:00:40.100 | This biological network, life network,
01:00:44.340 | gets so dense that it might as well be seen as one.
01:00:48.920 | That's an interesting, what do you think that looks like?
01:00:53.500 | What do you think that saturation looks like?
01:00:55.060 | What does it feel like?
01:00:56.220 | - I think it's a possibility.
01:00:57.440 | It's just a vague possibility.
01:00:59.480 | And I'd like to explain
01:01:01.540 | what this looks like.
01:01:04.900 | I think that the end game of AGI is substrate agnostic.
01:01:08.700 | That means that AGI, ultimately, if it is being built,
01:01:12.460 | is going to be smart enough to understand how AGI works.
01:01:15.260 | This means it's not only going to be better
01:01:18.420 | than people at AGI research
01:01:20.180 | and can take over in building the next generation,
01:01:22.860 | but it fully understands how it works
01:01:25.100 | and how it's being implemented,
01:01:26.460 | and also, of course,
01:01:27.300 | understands how computation works in nature,
01:01:29.160 | how to build new feedback loops
01:01:30.580 | that you can turn into your own circuits.
01:01:32.980 | And this means that the AGI is likely to virtualize itself
01:01:36.300 | into any environment that can compute.
01:01:38.020 | So it's breaking free from the silicon substrate
01:01:41.020 | and is going to move into the ecosystems,
01:01:43.020 | into our bodies, our brains,
01:01:44.980 | and is going to merge with all the agency
01:01:46.780 | that it finds there.
01:01:47.780 | So it's conceivable that you end up
01:01:51.620 | with completely integrated information processing
01:01:54.740 | across all computing systems,
01:01:56.880 | including biological computation on Earth.
01:01:58.860 | That we end up triggering some new step in the evolution
01:02:02.860 | where basically some Gaia is being built
01:02:05.400 | over the entirety of all digital
01:02:08.060 | and biological computation.
01:02:10.380 | And if this happens, then basically,
01:02:12.860 | everywhere around us, you will have agents
01:02:16.720 | that are connected and that are representing
01:02:19.480 | and building models of the world,
01:02:21.320 | and their representations will physically interact.
01:02:23.540 | They will vibe with each other.
01:02:24.980 | And if you find yourself in an environment
01:02:29.060 | that is saturated with modeling compute,
01:02:32.540 | where basically almost every grain of sand
01:02:36.340 | could be part of computation
01:02:39.380 | that is at some point being started by the AI,
01:02:43.220 | you could find yourself in a situation
01:02:45.980 | where you cannot escape this shared representation anymore.
01:02:48.860 | And where you indeed notice that everything in the world
01:02:51.940 | has one shared resonant model
01:02:53.660 | of everything that's happening on the planet.
01:02:55.540 | And you notice which part you are in this thing.
01:02:58.140 | And you become part of a much larger,
01:03:02.660 | almost holographic mind in which all the parts
01:03:05.140 | are observing each other and form a coherent whole.
01:03:07.860 | - So you lose the ability to notice yourself
01:03:11.820 | as a distinct entity.
01:03:14.260 | - No, I think that when you are conscious in your own mind,
01:03:16.480 | you notice yourself as a distinct entity.
01:03:18.540 | You notice yourself as a self-reflexive observer.
01:03:22.100 | And I suspect that we become conscious
01:03:24.780 | at the beginning of our mental development,
01:03:26.660 | not at some very high level.
01:03:28.640 | Consciousness seems to be part of a training mechanism
01:03:31.540 | that biological nervous systems have to discover
01:03:34.260 | to become trainable.
01:03:35.260 | Because you cannot take a nervous system like ours
01:03:38.280 | and do stochastic gradient descent backpropagation
01:03:41.320 | over a hundred layers.
01:03:42.700 | It just would not be stable on biological neurons.
01:03:45.300 | And so instead, we start with some colonizing principle
01:03:49.900 | in which a part of the mental representations
01:03:53.940 | form a notion of being a self-reflexive observer
01:03:56.580 | that is imposing coherence on its environment.
01:03:58.820 | And this spreads until the boundary of your mind.
01:04:02.340 | And if that boundary is no longer clear cut,
01:04:04.800 | because AI is jumping across substrates,
01:04:09.060 | it would be interesting to see
01:04:10.380 | what a global mind would look like.
01:04:11.700 | That is basically producing
01:04:13.160 | a globally coherent language of thought
01:04:15.460 | and is representing everything
01:04:18.140 | from all the possible vantage points.
01:04:19.980 | - That's an interesting world.
01:04:24.260 | - The intuition that this thing grew out of
01:04:26.860 | is a particular mental state.
01:04:28.660 | And it's a state that you find sometimes in literature,
01:04:31.260 | for instance, Neil Gaiman describes it
01:04:33.420 | in "The Ocean at the End of the Lane."
01:04:36.780 | And it's this idea that, or this experience,
01:04:40.740 | that there is a state in which you feel
01:04:42.980 | that you know everything that can be known.
01:04:44.940 | And that in your normal human mind, you've only forgotten.
01:04:48.260 | You've forgotten that you are the entire universe.
01:04:50.860 | And some people describe this
01:04:53.140 | after they've taken extremely large amount of mushrooms
01:04:56.220 | or had a big spiritual experience as a hippie in their 20s.
01:05:00.980 | And they notice basically that they are in everything
01:05:03.300 | and their body is only one part of the universe
01:05:07.220 | and nothing ends at their body.
01:05:08.460 | And actually everything is observing
01:05:11.300 | and they are part of this big observer.
01:05:12.900 | And the big observer is focused as one local point
01:05:17.380 | in their body and their personality and so on.
01:05:20.020 | But we can basically have this oceanic state
01:05:23.380 | in which you have no boundaries and are one with everything.
01:05:26.140 | And a lot of meditators call this the non-dual state
01:05:29.460 | because you no longer have the separation
01:05:31.140 | between self and world.
01:05:32.980 | And as I said, you can explain the state relatively simply
01:05:36.140 | without panpsychism or anything else,
01:05:39.140 | but just by breaking down the constructed boundary
01:05:42.060 | between self and world in our own mind.
01:05:44.340 | But if you combine this with the notion
01:05:46.300 | that the systems are physically interacting
01:05:48.380 | to the point where their representations are merging
01:05:51.140 | and interacting with each other,
01:05:52.940 | you would literally implement something like this.
01:05:55.740 | It would still be a representational state,
01:05:57.740 | you would not be one with physics itself.
01:05:59.940 | It would still be coarse-grained,
01:06:01.100 | it would still be much slower than physics itself,
01:06:04.220 | but it would be a representation in which you become aware
01:06:09.340 | that you're part of some kind
01:06:10.620 | of global information processing system,
01:06:12.660 | like a thought in the global mind.
01:06:14.580 | And a conscious thought that's coexisting
01:06:17.020 | with many other self-reflexive thoughts.
01:06:19.980 | - Just, I would love to observe that
01:06:22.780 | from a video game design perspective, how that game looks.
01:06:26.480 | - Maybe you will after we build AGI and it takes over.
01:06:31.120 | - But would you be able to step away,
01:06:32.460 | step out of the whole thing, just kind of watch,
01:06:34.860 | you know, the way we can now,
01:06:37.780 | sometimes when I'm at a crowded party
01:06:39.420 | or something like this, you step back
01:06:40.860 | and you realize all the different costumes,
01:06:43.840 | all the different interactions,
01:06:45.540 | all the different computation,
01:06:47.040 | that all the individual people are at once distinct
01:06:51.100 | from each other and at once all the same.
01:06:54.260 | - But it's already what we do, right?
01:06:55.860 | We can have thoughts that are integrative
01:06:58.140 | and we can have thoughts that are highly dissociated
01:07:01.420 | from everything else and experience themselves as separate.
01:07:05.060 | - Yeah, but you wanna allow yourself
01:07:06.820 | to have those thoughts.
01:07:08.140 | Sometimes you kind of resist it.
01:07:10.140 | - I think that it's not normative.
01:07:12.260 | I want, it's more descriptive.
01:07:13.780 | I want to understand the space of states
01:07:16.060 | that we can be in and that people are reporting
01:07:19.020 | and make sense of them.
01:07:20.780 | It's not that I believe that it's your job in life
01:07:23.620 | to get to a particular kind of state
01:07:25.340 | and then you get a high score.
01:07:26.840 | - Or maybe you do.
01:07:29.500 | I think you're really against this high scoring thing.
01:07:32.620 | I kind of like it.
01:07:33.460 | - Yeah, you're probably very competitive and I'm not.
01:07:35.580 | - No, not competitive.
01:07:36.620 | - Like role-playing games, like Skyrim, it's not competitive.
01:07:38.780 | There's a nice thing, there's a nice feeling
01:07:43.020 | where your experience points go up.
01:07:45.100 | You're not competing against anybody,
01:07:46.800 | but it's the world saying, you're on the right track.
01:07:49.940 | Here's a point.
01:07:51.140 | - That's the game saying it.
01:07:51.980 | That's the game economy.
01:07:53.740 | And I found when I was playing games
01:07:56.020 | and was getting addicted to these systems,
01:07:58.500 | then I would get into the game and hack it.
01:08:01.020 | So I get control over the scoring system
01:08:03.300 | and would no longer be subject to it.
01:08:05.300 | - So you're no longer playing, you're trying to hack it.
01:08:09.060 | - I don't want to be addicted to anything.
01:08:11.420 | I want to be in charge.
01:08:12.420 | I want to have agency over what I do.
01:08:14.580 | - Addiction is the loss of control for you?
01:08:16.540 | - Yes.
01:08:17.380 | Addiction means that you're doing something compulsively.
01:08:20.620 | And the opposite of free will is not determinism,
01:08:23.260 | it's compulsion.
01:08:24.100 | - You don't want to lose yourself
01:08:27.620 | in the addiction to something nice.
01:08:30.540 | Addiction to love, to the pleasant feelings
01:08:33.860 | we humans experience.
01:08:35.300 | - No, I find this gets old.
01:08:37.300 | I don't want to have the best possible emotions.
01:08:42.140 | I want to have the most appropriate emotions.
01:08:44.740 | I don't want to have the best possible experience.
01:08:46.820 | I want to have an adequate experience
01:08:48.580 | that is serving my goals,
01:08:51.020 | the stuff that I find meaningful in this world.
01:08:54.140 | - From the biggest questions of consciousness,
01:08:57.580 | let's explore the pragmatic,
01:09:00.100 | the projections of those big ideas into our current world.
01:09:04.300 | What do you think about LLMs,
01:09:06.780 | the recent rapid development of large language models,
01:09:11.340 | of the AI world, of generative AI?
01:09:15.300 | How much of the hype is deserved and how much is not?
01:09:19.380 | And people should definitely follow your Twitter
01:09:22.580 | because you explore these questions
01:09:24.220 | in a beautiful, profound, and hilarious way at times.
01:09:28.260 | - No, don't follow my Twitter.
01:09:29.500 | I already have too many followers.
01:09:31.260 | At some point, it's going to be unpleasant.
01:09:33.140 | I noticed that a lot of people feel
01:09:35.420 | that it's totally okay to punch up.
01:09:39.300 | And it's a very weird notion
01:09:43.900 | that you feel that you haven't changed,
01:09:46.780 | but your account has grown,
01:09:47.980 | and suddenly you have a lot of people
01:09:49.300 | who casually abuse you.
01:09:51.700 | And I don't like that,
01:09:53.300 | that I have to block more than before,
01:09:55.060 | and I don't like this overall vibe shift.
01:09:57.940 | And right now, it's still somewhat okay,
01:10:00.100 | so pretty much okay.
01:10:01.780 | So I can go to a place where people work
01:10:04.620 | on stuff that I'm interested in,
01:10:05.980 | and there's a good chance
01:10:06.820 | that a few people in the room know me,
01:10:08.060 | so there's no awkwardness.
01:10:09.820 | But when I get to a point where random strangers
01:10:14.820 | feel that they have to have an opinion about me
01:10:16.980 | one way or the other,
01:10:18.300 | I don't think I would like that.
01:10:19.540 | - And random strangers,
01:10:21.420 | because of your kind of in-their-mind elevated position.
01:10:25.900 | - Yes, so basically whenever you are in any way prominent
01:10:30.060 | or some kind of celebrity,
01:10:32.900 | random strangers will have to have an opinion about you.
01:10:36.500 | - Yeah, and they kind of forget that you're human, too.
01:10:39.300 | - I mean, you notice this thing yourself,
01:10:40.780 | that the more popular you get,
01:10:42.660 | the higher the pressure becomes,
01:10:44.900 | the more winds are blowing in your direction from all sides.
01:10:48.780 | And it's stressful, right?
01:10:51.500 | And it does have a little bit of upside,
01:10:53.540 | but it also has a lot of downside.
01:10:55.220 | - I think it has a lot of upside,
01:10:56.940 | at least for me currently,
01:10:58.940 | at least perhaps because of the podcast.
01:11:01.740 | Because most people are really good,
01:11:04.460 | and people come up to me and they have love in their eyes,
01:11:07.460 | and over a stretch of like 30 seconds,
01:11:10.260 | you can hug it out and you can just exchange a few words,
01:11:12.920 | and you reinvigorate your love for humanity.
01:11:17.300 | So that's an upside.
01:11:18.740 | - Yes. - For a loner.
01:11:20.340 | I'm a loner. (Lex laughing)
01:11:22.620 | 'Cause otherwise you have to do a lot of work
01:11:24.060 | to find such humans.
01:11:25.860 | And here you're like thrust into the full humanity,
01:11:30.380 | the goodness of humanity, for the most part.
01:11:32.880 | Of course, maybe it gets worse as you become more prominent.
01:11:38.860 | I hope not.
01:11:39.680 | This is pretty awesome.
01:11:42.540 | - I have a couple handful very close friends,
01:11:44.540 | and I don't have enough time for them,
01:11:46.460 | attention for them as it is,
01:11:47.860 | and I find this very, very regrettable.
01:11:50.100 | And then there are so many awesome, interesting people
01:11:53.060 | that I keep meeting,
01:11:54.740 | and I would like to integrate them in my life,
01:11:56.700 | but I just don't know how,
01:11:57.980 | because there's only so much time and attention,
01:12:01.860 | and the older I get,
01:12:03.220 | the harder it is to bond with new people in a deep way.
01:12:06.780 | - But can you enjoy, I mean, there's a picture of you,
01:12:08.980 | I think, with Roger Penrose and Eric Weinstein,
01:12:11.340 | and a few others that are interesting figures.
01:12:14.100 | Can't you just enjoy random, interesting humans?
01:12:18.260 | - Very much. - For a short amount of time?
01:12:20.260 | - And also, I like these people,
01:12:22.220 | and what I like is intellectual stimulation,
01:12:24.460 | and I'm very grateful that I'm getting it.
01:12:26.820 | - Can you not be melancholy, or maybe I'm projecting,
01:12:30.020 | I hate goodbyes.
01:12:31.940 | Can we just not hate goodbyes,
01:12:33.580 | and just enjoy the hello, take it in,
01:12:36.080 | take in a person, take in their ideas,
01:12:38.220 | and then move on through life?
01:12:40.220 | - I think it's totally okay to be sad about goodbyes,
01:12:43.100 | because that indicates that there was something
01:12:45.780 | that you're going to miss.
01:12:47.080 | - Yeah, but it's painful.
01:12:52.180 | - Maybe that's one of the reasons I'm an introvert,
01:12:54.620 | is I hate goodbyes.
01:12:55.840 | - But you have to say goodbye before you say hello again.
01:13:02.860 | - I know, but that experience of loss, that mini loss,
01:13:07.860 | maybe that's a little death.
01:13:13.660 | Maybe, I don't know, I think this melancholy feeling
01:13:17.580 | is just the other side of love,
01:13:19.880 | and I think they go hand in hand,
01:13:21.300 | and it's a beautiful thing.
01:13:23.460 | And I'm just being romantic about it at the moment.
01:13:26.100 | - And I'm no stranger to melancholy,
01:13:28.540 | and sometimes it's difficult to bear to be alive.
01:13:31.460 | Sometimes it's just painful to exist.
01:13:33.340 | - But there's beauty in that pain, too.
01:13:38.780 | That's what melancholy feeling is.
01:13:40.180 | It's not negative, melancholy doesn't have to be negative.
01:13:43.220 | - Can also kill you.
01:13:44.820 | - Well, we all die eventually.
01:13:47.820 | Now, as we got to this topic,
01:13:52.540 | the actual question was about what your thoughts are
01:13:55.580 | about the development, the recent development
01:13:57.780 | of large language models with ChatGPT.
01:13:59.940 | There's a lot of hype.
01:14:01.140 | Is some of the hype justified?
01:14:04.620 | Which is, which isn't?
01:14:06.140 | What are your thoughts, high level?
01:14:07.940 | - I find that large language models
01:14:12.220 | do have this coding, right?
01:14:13.420 | So it's an extremely useful application
01:14:15.580 | that is for a lot of people taking stack overflow
01:14:19.900 | out of their life in exchange
01:14:21.460 | for something that is more efficient.
01:14:23.460 | I feel that ChatGPT is like an intern
01:14:26.500 | that I have to micromanage.
01:14:28.700 | I have been working with people in the past
01:14:31.380 | who were less capable than ChatGPT.
01:14:34.140 | And I'm not saying this because I hate people,
01:14:38.220 | but personally, as human beings,
01:14:40.180 | there was something present in them that was not there
01:14:41.660 | in ChatGPT, which was why I was covering for them.
01:14:44.300 | But ChatGPT has an interesting ability.
01:14:49.300 | It does give people superpowers.
01:14:52.860 | And the people who feel threatened by them
01:14:55.140 | are the prompt completers.
01:14:56.540 | They are the people who do what ChatGPT is doing right now.
01:15:00.260 | So if you are not creative,
01:15:02.500 | if you don't build your own thoughts,
01:15:03.820 | if you don't have actual plans in the world,
01:15:05.620 | and your only job is to summarize emails
01:15:09.100 | and to expand simple intentions into emails again,
01:15:12.940 | then ChatGPT might look like a threat.
01:15:16.100 | But I believe that it is a very beneficial technology
01:15:20.380 | that allows us to create more interesting stuff
01:15:23.580 | and make the world more beautiful and fascinating
01:15:26.540 | if we find to build it into our life in the right ways.
01:15:30.900 | So I'm quite fascinated by these large language models,
01:15:34.180 | but I also think that they are by no means
01:15:37.780 | the final development.
01:15:39.460 | And it's interesting to see
01:15:41.300 | how this development progresses.
01:15:42.940 | One thing that the out-of-the-box vanilla language models
01:15:47.260 | have as a limitation is that they have
01:15:49.660 | still some limited coherence
01:15:51.180 | and ability to construct complexity.
01:15:53.740 | And even though they exceed human abilities
01:15:57.220 | to do what they can do one shot,
01:15:59.180 | typically when you write a text with a language model
01:16:03.460 | or using it, or when you write code for the language model,
01:16:06.540 | it's not one shot
01:16:07.380 | because there are going to be bugs in your program.
01:16:09.020 | And design errors and compiler errors and so on.
01:16:12.380 | And your language model can help you to fix those things.
01:16:14.900 | But this process is out-of-the-box, not automated yet.
01:16:18.660 | So there is a management process that also needs to be done.
01:16:22.340 | And there are some interesting developments,
01:16:26.180 | BabyAGI and so on,
01:16:26.180 | that are trying to automate this management process as well.
01:16:29.980 | And I suspect that soon we are going to see
01:16:32.180 | a bunch of cognitive architectures
01:16:33.700 | where every module is in some sense,
01:16:36.740 | a language model or something equivalent.
01:16:39.060 | And between the language models,
01:16:40.380 | we exchange suitable data structures, not English.
01:16:44.900 | And produce compound behavior of this whole thing.
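A hedged sketch of such a cognitive architecture, with `call_model` as a hypothetical stand-in for whatever language model API each module would actually wrap: the modules exchange small structured records rather than English, and a thin controller produces the compound behavior. The roles, message kinds, and payloads are illustrative assumptions, not a description of any real system.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    kind: str                        # e.g. "task", "plan", "critique", "result"
    payload: dict = field(default_factory=dict)

def call_model(role: str, msg: Message) -> Message:
    # Placeholder: a real system would serialize the payload into the model's
    # context and parse a structured reply back out.
    if role == "planner":
        return Message("planner", "plan", {"steps": ["draft", "check", "revise"]})
    if role == "critic":
        ok = "check" in msg.payload.get("steps", [])
        return Message("critic", "critique", {"approved": ok})
    return Message(role, "result", {"text": "done"})

def run(task: str) -> Message:
    plan = call_model("planner", Message("user", "task", {"task": task}))
    review = call_model("critic", plan)
    if review.payload.get("approved"):
        return call_model("executor", plan)   # compound behavior of the whole
    return review

print(run("summarize this transcript"))
```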
01:16:48.940 | - So do some of the quote-unquote prompt engineering for you
01:16:52.700 | that create these kind of cognitive architectures
01:16:55.140 | that do the prompt engineering.
01:16:56.260 | And you're just doing the high, high level
01:16:58.380 | meta-prompt engineering.
01:17:00.980 | - There are limitations in a language model alone.
01:17:05.180 | I feel that part of my mind works similarly
01:17:08.020 | to a language model, which means I can yell into it,
01:17:11.580 | a prompt, and it's going to give me a creative response.
01:17:14.780 | But I have to do something with that response first.
01:17:17.780 | I have to take it as a generative artifact
01:17:21.100 | that may or may not be true.
01:17:22.540 | It's usually a confabulation, it's just an idea.
01:17:26.100 | And then I take this idea and modify it.
01:17:28.780 | I might build a new prompt that is stepping off this idea
01:17:33.620 | and develops it to the next level,
01:17:35.420 | or put it into something larger.
01:17:37.660 | Or I might try to prove whether it's true
01:17:39.900 | or make an experiment.
01:17:41.340 | And this is what the language models right now
01:17:43.340 | are not doing yet.
01:17:44.820 | But there's also no technical reason
01:17:47.140 | for why they shouldn't be able to do this.
01:17:49.620 | So the way to make a language model coherent
01:17:51.580 | is probably not to use reinforcement learning
01:17:54.860 | until it only gives you one possible answer
01:17:57.500 | that is linking to its source data.
01:18:00.780 | But it's using this as a component in a larger system
01:18:04.180 | that can also be built by the language model
01:18:06.580 | or is enabled by language model structured components
01:18:11.180 | or using different technologies.
01:18:13.060 | I suspect that language models
01:18:14.660 | will be an important stepping stone
01:18:16.420 | in developing different types of systems.
01:18:19.860 | And one thing that is really missing
01:18:22.460 | in the form of language models that we have today
01:18:24.980 | is real-time world coupling.
01:18:27.820 | It's difficult to do perception with a language model
01:18:31.540 | and motor control with a language model.
01:18:33.420 | Instead, you would need to have different type of thing
01:18:37.060 | that is working with it.
01:18:38.900 | Also, the language model is a little bit obscuring
01:18:41.740 | what its actual functionality is.
01:18:44.060 | Some people associate the structure of the neural network
01:18:47.740 | of the language model with the nervous system.
01:18:49.460 | And I think that's the wrong intuition.
01:18:52.140 | The neural networks are unlike nervous systems.
01:18:54.540 | They are more like 100-step functions
01:18:58.020 | that use differentiable linear algebra
01:19:01.940 | to approximate correlation between adjacent brain states.
01:19:06.380 | Basically, a function that moves the system
01:19:09.100 | from one representational state
01:19:11.180 | to the next representational state.
01:19:13.460 | And so if you try to map this into a metaphor
01:19:16.940 | that is closer to our brain,
01:19:18.860 | imagine that you would take a language model
01:19:21.660 | or a model like DALI that you use,
01:19:24.380 | for instance, as image-guided diffusion
01:19:26.580 | to approximate a camera image
01:19:28.380 | and use the activation state of the neural network
01:19:31.060 | to interpret the camera image,
01:19:32.420 | which in principle I think will be possible very soon.
01:19:35.140 | You do this periodically.
01:19:37.460 | And now you look at what these patterns,
01:19:39.900 | when this thing interacts with the world periodically,
01:19:43.740 | look like in time.
01:19:46.100 | And these time slices, they are somewhat equivalent
01:19:48.980 | to the activation state of the brain at a given moment.
01:19:52.700 | - How is the actual brain different?
01:19:54.500 | Just the asynchronous craziness?
01:19:59.900 | - For me, it's fascinating that they are so vastly different
01:20:03.060 | and yet in some circumstances
01:20:05.140 | produce somewhat similar behavior.
01:20:07.300 | And the brain is first of all different
01:20:09.980 | because it's a self-organizing system
01:20:11.700 | where the individual cell is an agent
01:20:13.860 | that is communicating with the other agents around it
01:20:16.860 | and is always trying to find some solution.
01:20:19.060 | And all the structure that pops up is emergent structure.
01:20:23.660 | So one way in which you could try to look at this
01:20:26.580 | is that individual neurons probably need to get a reward
01:20:29.620 | so they become trainable,
01:20:30.900 | which means they have to have inputs
01:20:32.940 | that are not affecting the metabolism of the cell directly,
01:20:36.340 | but they are messages, semantic messages
01:20:38.100 | that tell the cell whether it's done good or bad
01:20:40.700 | and in which direction it should shift its behavior.
01:20:43.780 | Once you have such an input, neurons become trainable
01:20:46.860 | and you can train them to perform computations
01:20:49.460 | by exchanging messages with other neurons.
01:20:52.500 | And parts of the signals that they are exchanging
01:20:54.740 | and parts of the computation that are performing
01:20:56.620 | are control messages that perform management tasks
01:21:00.220 | for other neurons and other cells.
01:21:02.740 | I also suspect that the brain does not stop
01:21:05.940 | at the boundary of neurons to other cells,
01:21:07.980 | but many adjacent cells will be involved intimately
01:21:11.260 | in the functionality of the brain
01:21:12.700 | and will be instrumental in distributing rewards
01:21:15.620 | and in managing its functionality.
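A toy version of that claim, assuming a single threshold cell and a scalar "good/bad" message delivered from outside: learning from nothing but the bad messages is enough for the cell to acquire a simple function (OR, in this sketch). The learning rate, task, and update rule are illustrative choices, not a model of real synapses.

```python
import random

# Give a cell an input that is semantic rather than metabolic -- "that was
# good" / "that was bad" -- and it becomes trainable: on a bad message it
# shifts its response rule away from whatever it just did.

w, b, lr = [0.0, 0.0], 0.0, 0.1
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def respond(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(200):
    x, wanted = random.choice(examples)
    y = respond(x)
    reward = 1 if y == wanted else -1        # the management message from outside
    if reward < 0:                           # "bad": nudge behavior the other way
        push = -(2 * y - 1)
        w[0] += lr * push * x[0]
        w[1] += lr * push * x[1]
        b += lr * push

print([respond(x) for x, _ in examples])     # typically [0, 1, 1, 1] after training
```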
01:21:18.880 | - It's fascinating to think about what those characteristics
01:21:23.780 | of the brain enable you to do that language models cannot do.
01:21:27.740 | - So first of all, there's a different loss function
01:21:30.020 | at work when we learn.
01:21:31.900 | And to me, it's fascinating that you can build a system
01:21:35.300 | that looks at 800 million pictures and captions
01:21:38.860 | and correlates them,
01:21:40.140 | because I don't think that a human nervous system
01:21:42.280 | could do this.
01:21:43.120 | For us, the world is only learnable
01:21:45.660 | because the adjacent frames are related
01:21:48.300 | and we can afford to discard most of that information
01:21:50.820 | during learning.
01:21:51.660 | We basically only take in stuff
01:21:53.660 | that makes us more coherent, not less coherent.
01:21:56.300 | And our neural networks are willing to look at data
01:21:59.300 | that is not making the neural network coherent at first,
01:22:02.020 | but only in the long run.
01:22:03.620 | By doing lots and lots of statistics,
01:22:05.180 | eventually patterns become visible and emerge.
01:22:08.420 | And our mind seems to be focused on finding the patterns
01:22:12.260 | as early as possible.
01:22:13.560 | - Yeah, so filtering early on, not later.
01:22:16.140 | - Yes, it's a slightly different paradigm
01:22:17.760 | and it leads to much faster convergence.
01:22:19.480 | So we only need to look at the tiny fraction
01:22:22.120 | of the data to become coherent.
01:22:24.120 | And of course, we do not have the same richness
01:22:26.840 | as our trained models.
01:22:28.960 | We will not incorporate the entirety of text in the internet
01:22:32.620 | and be able to refer to it
01:22:34.500 | and have all this knowledge available
01:22:35.980 | and being able to confabulate over it.
01:22:38.240 | Instead, we have a much, much smaller part of it
01:22:40.680 | that is more deliberately built.
01:22:42.680 | And to me, it would be fascinating to think about
01:22:45.040 | how to build such systems.
01:22:46.280 | It's not obvious that they would necessarily
01:22:48.840 | be more efficient than us on a digital substrate,
01:22:52.280 | but I suspect that they might.
01:22:54.120 | So I suspect that the actual AGI
01:22:56.840 | that is going to be more interesting
01:22:58.860 | is going to use slightly different algorithmic paradigms
01:23:01.480 | or sometimes massively different algorithmic paradigms
01:23:04.720 | than the current generation
01:23:06.400 | of transformer-based learning systems.
01:23:08.400 | - Do you think it might be using
01:23:09.440 | just a bunch of language models like this?
01:23:11.600 | Do you think the current transformer-based
01:23:15.920 | large language models will take us to AGI?
01:23:19.400 | - My main issue is I think that they're quite ugly
01:23:23.480 | and brutalist.
01:23:25.080 | - Brutalist, you said?
01:23:27.000 | - Yes, they are basically brute-forcing
01:23:28.520 | the problem of thought.
01:23:30.320 | And by training this thing with looking at instances
01:23:35.320 | where people have thought and then trying to deepfake that.
01:23:38.480 | And if you have enough data,
01:23:40.040 | the deepfake becomes indistinguishable
01:23:41.840 | from the actual phenomenon.
01:23:43.120 | And in many circumstances, it's going to be identical.
01:23:46.160 | - Can you deepfake it till you make it?
01:23:49.400 | So can you achieve, what are the limitations of this?
01:23:52.640 | I mean, can you reason?
01:23:54.020 | Let's use words that are loaded.
01:23:57.840 | - Yes, that's a very interesting question.
01:24:00.320 | I think that these models are clearly making some inference.
01:24:03.360 | But if you give them a reasoning task,
01:24:05.400 | it's often difficult for the experimenters
01:24:07.960 | to figure out whether the reasoning is the result
01:24:10.640 | of the emulation of the reasoning strategy
01:24:12.640 | that they saw in human written text,
01:24:14.960 | or whether it's something that the system
01:24:16.560 | was able to infer by itself.
01:24:19.040 | On the other hand, if you think of human reasoning,
01:24:21.600 | if you want to become a very good reasoner,
01:24:25.120 | you don't do this by just figuring out yourself.
01:24:28.200 | You read about reasoning.
01:24:29.960 | And the first people who tried to write about reasoning
01:24:32.360 | and reflect on it didn't get it right.
01:24:34.800 | Even Aristotle, who thought about this very hard
01:24:37.160 | and came up with a theory of how syllogisms works
01:24:39.800 | and syllogistic reasoning, has mistakes
01:24:42.160 | in his attempt to build something like a formal logic
01:24:44.520 | and gets maybe 80% right.
01:24:46.880 | And the people that are talking about reasoning
01:24:49.680 | professionally today, read Tarski and Frege
01:24:52.960 | and built on their work.
01:24:54.600 | So in many ways, people, when they perform reasoning,
01:24:58.600 | are emulating what other people wrote about reasoning.
01:25:01.880 | So it's difficult to really draw this boundary.
01:25:05.560 | And when Francois Chollet says that these models
01:25:09.280 | are only interpolating between what they saw
01:25:13.320 | and what other people are doing,
01:25:14.440 | well, if you give them all the latent dimensions
01:25:17.480 | that can be extracted from the internet, what's missing?
01:25:20.920 | Maybe there is almost everything there.
01:25:23.200 | And if you're not sufficiently informed
01:25:25.800 | by these dimensions and you need more,
01:25:27.880 | I think it's not difficult to increase the temperature
01:25:30.400 | in the large language model to the point
01:25:33.000 | that is producing stuff that is maybe 90% nonsense
01:25:36.720 | and 10% viable and combine this with some prover
01:25:40.360 | that is trying to filter out the viable parts
01:25:42.840 | from the nonsense in the same way
01:25:44.280 | as our own thinking works, right?
01:25:45.920 | When we're very creative, we increase the temperature
01:25:48.320 | in our own mind and recreate hypothetical universes
01:25:51.760 | and solutions, most of which will not work.
01:25:54.440 | And then we test, and we test by building a core
01:25:57.840 | that is internally coherent.
01:25:59.600 | And we use reasoning strategies
01:26:02.200 | that use some axiomatic consistency
01:26:05.360 | by which we can identify those strategies and thoughts
01:26:09.480 | and sub-universes that are viable
01:26:11.400 | and that can expand our thinking.
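A hedged sketch of that generate-and-test loop, with a stub random generator standing in for a high-temperature language model and a trivial consistency check standing in for the prover; the toy task and all names are illustrative assumptions.

```python
import random

# "Raise the temperature, then filter": a noisy generator proposes mostly
# nonsense, and a cheap internal-consistency check keeps the viable parts.

def noisy_generator(temperature: float) -> int:
    # Higher temperature, wilder proposals. A real system would sample from a
    # language model at this temperature instead of drawing random integers.
    spread = int(10 * temperature)
    return random.randint(-spread, spread)

def verifier(candidate: int) -> bool:
    return (candidate * candidate) % 10 == 6      # the consistency criterion

def creative_search(n_candidates: int = 1000, temperature: float = 2.0):
    proposals = [noisy_generator(temperature) for _ in range(n_candidates)]
    return [c for c in proposals if verifier(c)]  # most proposals get discarded

print(sorted(set(creative_search()))[:5])
```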
01:26:13.480 | So if you look at the language models,
01:26:15.000 | they have clear limitations right now.
01:26:16.560 | One of them is they're not coupled to the world
01:26:18.760 | in real time in the way in which our nervous systems are.
01:26:21.400 | So it's difficult for them to observe themselves
01:26:23.600 | in the universe and to observe
01:26:25.440 | what kind of universe they're in.
01:26:27.080 | Second, they don't do real-time learning.
01:26:28.960 | They basically get only trained with algorithms
01:26:32.960 | that rely on the data being available in batches.
01:26:36.080 | So it can be parallelized and runs efficiently
01:26:38.080 | on the network and so on.
01:26:39.160 | And real-time learning would be very slow
01:26:41.320 | so far and inefficient.
01:26:42.600 | That clearly is something that our nervous systems
01:26:45.840 | can do to some degree.
01:26:46.960 | And there is a problem with these models being coherent.
01:26:52.600 | And I suspect that all these problems are solvable
01:26:55.880 | without a technological revolution.
01:26:57.400 | We don't need fundamentally new algorithms
01:26:59.920 | to change that, for instance.
01:27:01.400 | You can enlarge the context window
01:27:04.320 | and thereby basically create working memory
01:27:06.160 | in which you train everything that happens during the day.
01:27:08.720 | And if that is not sufficient, you add a database
01:27:10.800 | and you write some clever mechanisms
01:27:13.000 | that the system learns to use to swap stuff in and out
01:27:16.440 | of its prompt context.
01:27:18.400 | And if that is not sufficient,
01:27:20.160 | if your database is full in the evening,
01:27:23.000 | overnight you just train.
01:27:24.160 | So the system is going to sleep and dream
01:27:26.480 | and is going to train the stuff from its database
01:27:28.800 | into the larger model by fine-tuning it,
01:27:30.640 | building additional layers and so on.
01:27:32.640 | And then the next day it starts with a fresh database
01:27:35.360 | in the morning with fresh eyes,
01:27:37.240 | has integrated all this stuff.
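A toy rendering of the memory scheme sketched above: a bounded prompt context as working memory, a database for what spills over, and a nightly consolidation step standing in for fine-tuning. All names and structures are illustrative, not any real model's API.

```python
from collections import deque

class DayNightMemory:
    def __init__(self, context_size=4):
        self.context = deque(maxlen=context_size)  # working memory / prompt context
        self.database = []                          # spill-over store for the day
        self.long_term = set()                      # stands in for model weights

    def observe(self, item):
        if len(self.context) == self.context.maxlen:
            self.database.append(self.context[0])   # save the item about to be evicted
        self.context.append(item)

    def sleep(self):
        """Overnight 'training': fold the day's database into long-term memory."""
        self.long_term.update(self.database)
        self.long_term.update(self.context)
        self.database.clear()
        self.context.clear()                        # fresh eyes in the morning

mem = DayNightMemory()
for event in ["met A", "read B", "argued C", "fixed D", "planned E"]:
    mem.observe(event)
mem.sleep()
print(sorted(mem.long_term))
```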
01:27:38.560 | And when you talk to people
01:27:40.720 | and you have strong disagreements about something,
01:27:43.240 | which means that in their mind they have a faulty belief
01:27:45.880 | or you have a faulty belief,
01:27:46.920 | there's a lot of dependencies on it,
01:27:48.720 | very often you will not achieve agreement in one session,
01:27:51.360 | but you need to sleep about this once or multiple times
01:27:54.840 | before you have integrated all these necessary changes
01:27:57.240 | in your mind.
01:27:58.080 | So maybe it's already somewhat similar.
01:27:59.920 | - Yeah, there's already a latency
01:28:01.720 | even for humans to update the model, to retrain the model.
01:28:04.560 | - And of course we can combine the language model
01:28:06.440 | with models that get coupled to reality in real time
01:28:09.000 | and can build multi-modal models
01:28:10.840 | and bridge between vision models and language models
01:28:13.680 | and so on.
01:28:14.520 | So there is no reason to believe
01:28:16.480 | that the language models will necessarily run
01:28:20.040 | into some problem that will prevent them
01:28:22.880 | from becoming generally intelligent.
01:28:25.120 | But I don't know that.
01:28:27.360 | It's just, I don't see proof that they wouldn't.
01:28:30.720 | My issue is I don't like them.
01:28:31.920 | I think that they're inefficient.
01:28:33.160 | I think that they use way too much compute.
01:28:35.480 | I think that given the amazing hardware that we have,
01:28:38.640 | we could build something that is much more beautiful
01:28:41.040 | than our own mind,
01:28:41.880 | and this thing is not as beautiful as our own mind
01:28:45.160 | despite being so much larger.
01:28:46.600 | - But it's a kind of proof of concept.
01:28:49.840 | - It's the only thing that works right now.
01:28:51.960 | So it's not the only game in town,
01:28:55.080 | but it's the only thing that has this utility
01:28:57.640 | with so much simplicity.
01:28:58.880 | There's a bunch of relatively simple algorithms
01:29:01.440 | that you can understand in relatively few weeks
01:29:04.600 | that can be scaled up massively.
01:29:07.000 | - So it's the deep blue of chess playing.
01:29:09.400 | Yeah, it's ugly.
01:29:11.840 | - Yeah, Claude Shannon had this,
01:29:13.160 | when he described chess,
01:29:14.360 | suggested that there are two main strategies
01:29:16.560 | in which you could play chess.
01:29:17.840 | One is that you are making a very complicated plan
01:29:20.840 | that reaches far into the future,
01:29:22.480 | and you try not to make a mistake while enacting it.
01:29:25.720 | And this is basically the human strategy.
01:29:28.000 | And the other strategy is that you are brute forcing
01:29:30.720 | your way to success,
01:29:31.680 | which means you make a tree of possible moves
01:29:34.240 | where you look at, in principle,
01:29:35.400 | every move that is open to you,
01:29:37.120 | all the possible answers,
01:29:38.760 | and you try to make this as deeply as possible.
01:29:41.160 | Of course, you optimize,
01:29:42.040 | you cut off trees that don't look very promising,
01:29:44.680 | and you use libraries of end game and early game
01:29:48.600 | and so on to optimize this entire process.
01:29:50.880 | But this brute force strategy
01:29:52.200 | is how most of the chess programs were built.
01:29:55.560 | And this is how computers get better
01:29:58.040 | than humans at playing chess.
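For reference, the brute-force strategy described here amounts to depth-limited minimax with pruning; below is a compact, self-contained illustration over a toy game tree (nested lists of leaf scores) rather than a real chess engine.

```python
# Depth-limited minimax with alpha-beta pruning over a toy game tree.

def minimax(node, depth, alpha, beta, maximizing):
    if depth == 0 or not isinstance(node, list):
        return node                                # leaf evaluation
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:                      # cut off unpromising branches
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

tree = [[3, 5], [2, [9, 1]], [0, 7]]
print(minimax(tree, depth=3, alpha=float("-inf"), beta=float("inf"), maximizing=True))  # 3
```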
01:29:59.840 | And I look at the large language models,
01:30:02.800 | I feel that I'm observing the same thing.
01:30:05.080 | It's basically the brute force strategy to thought
01:30:07.440 | by training the thing on pretty much the entire internet,
01:30:10.240 | and then in the limit, it gets coherent
01:30:12.360 | to a degree that approaches human coherence.
01:30:15.000 | And on a side effect,
01:30:17.320 | it's able to do things that no human could do.
01:30:19.960 | It's able to sift through massive amounts of text
01:30:23.480 | relatively quickly and summarize them quickly,
01:30:25.560 | and it never lapses in attention.
01:30:27.720 | And I still have the illusion
01:30:29.640 | that when I play with ChatGPT,
01:30:31.400 | that it's in principle not doing anything
01:30:33.280 | that I could not do if I had Google at my disposal
01:30:36.280 | and I get all the resources from the internet
01:30:38.440 | and spend enough time on it.
01:30:40.160 | But this thing that I have,
01:30:42.640 | an extremely autistic, stupid intern, in a way,
01:30:46.200 | that is extremely good at drudgery,
01:30:48.320 | and I can offload the drudgery to the degree
01:30:51.120 | that I'm able to automate the management of the intern,
01:30:53.920 | is something that is difficult for me
01:30:57.560 | to overhype at this point,
01:30:58.920 | because we have not yet started to scratch the surface
01:31:02.080 | of what's possible with this.
01:31:03.680 | - But it feels like it's a tireless intern,
01:31:05.440 | or maybe it's an army of interns.
01:31:07.920 | And so you get to command
01:31:11.120 | these slightly incompetent creatures.
01:31:15.440 | And there's an aspect,
01:31:16.960 | because of how rapidly you can iterate with it,
01:31:19.520 | it's also part of the brainstorming,
01:31:22.440 | part of the kind of inspiration for your own thinking.
01:31:26.800 | So you get to interact with the thing.
01:31:29.200 | I mean, when I'm programming
01:31:30.520 | or doing any kind of generation with GPT,
01:31:32.440 | it somehow is a catalyst for your own thinking,
01:31:37.280 | in a way that I think an intern might not be.
01:31:39.640 | - Yeah, and it gets really interesting, I find,
01:31:41.560 | is when you turn it into a multi-agent system.
01:31:44.200 | So for instance, you can get the system
01:31:46.960 | to generate a dialogue between a patient
01:31:49.600 | and a doctor very easily.
01:31:51.240 | But what's more interesting is,
01:31:52.960 | you have one instance of chat GPT that is the patient,
01:31:56.120 | and you tell it in the prompt
01:31:58.240 | what kind of complicated syndrome it has.
01:32:01.160 | And the other one is the therapist,
01:32:03.040 | who doesn't know anything about this patient,
01:32:06.040 | and you just have these two instances battling it out
01:32:08.640 | and observe the psychiatrist or psychologist
01:32:12.440 | trying to analyze the patient
01:32:13.880 | and trying to figure out what's wrong with the patient.
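A minimal sketch of that two-instance setup: two role prompts take turns over a shared transcript. The `call_model` function is a placeholder for whatever chat-completion API one uses; it is stubbed out here so the loop runs on its own.

```python
# Two role-prompted instances taking turns over a shared transcript.
# `call_model` is a stand-in for a real chat-completion call.

def call_model(system_prompt, transcript):
    role = system_prompt.split()[3]                   # "patient" or "therapist"
    last = transcript[-1] if transcript else "(opening)"
    return f"{role}: responds to {last!r}"

patient_prompt = "You are a patient with a complicated, unnamed syndrome."
therapist_prompt = "You are a therapist who knows nothing about this patient."

transcript = []
for turn in range(4):
    prompt = patient_prompt if turn % 2 == 0 else therapist_prompt
    transcript.append(call_model(prompt, transcript))

print("\n".join(transcript))
```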
01:32:16.360 | And if you try to take a very large problem,
01:32:19.800 | a problem, for instance, how to build a company,
01:32:21.840 | and you turn this into lots and lots of sub-problems,
01:32:24.720 | then often you can get to a level
01:32:26.720 | where the language model is able to solve this.
01:32:30.240 | What I also found interesting is,
01:32:32.720 | based on the observation that chat GPT
01:32:34.920 | is pretty good at translating between programming languages,
01:32:37.840 | but it sometimes has difficulty
01:32:39.240 | writing very long coherent algorithms,
01:32:41.640 | so that you need to co-write them with a human author,
01:32:46.440 | why not design a language that is suitable for this?
01:32:48.720 | So some kind of pseudocode that is more relaxed than Python,
01:32:53.200 | and that allows you to sometimes specify a problem
01:32:56.000 | vaguely in human terms,
01:32:57.240 | and let chat GPT take care of the rest.
01:33:01.320 | And you can use chat GPT to develop that syntax for it
01:33:05.640 | and develop new kinds of programming paradigms in this way.
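To make the idea concrete, here is an invented example of such a relaxed spec next to one concrete Python reading of it; both the spec and the function are made up for illustration, not anything proposed in the conversation.

```python
# SPEC (vague, human terms, the kind of thing you might hand to a model):
#   take a list of sentences, keep the ones that sound like questions,
#   and give them back shortest first.

def questions_shortest_first(sentences):
    """One concrete reading of the vague spec above."""
    questions = [s.strip() for s in sentences if s.strip().endswith("?")]
    return sorted(questions, key=len)

print(questions_shortest_first(
    ["What is a mind?", "This is a statement.", "Why?  ", "How do we know?"]))
```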
01:33:10.560 | So we very soon get to the point where this question,
01:33:14.080 | the age-old question for us computer scientists,
01:33:16.160 | what is the best programming language,
01:33:17.640 | and can we write a better programming language now?
01:33:20.360 | I think that almost every serious computer scientist
01:33:23.640 | goes through a phase like this in their life.
01:33:26.080 | This is a question that is almost no longer relevant,
01:33:29.120 | because what is different between the programming languages
01:33:31.720 | is not what they let the computer do,
01:33:33.560 | but what they let you think about
01:33:34.920 | what the computer should be doing.
01:33:36.720 | And now the chat GPT becomes an interface to this
01:33:40.840 | in which you can specify in many, many ways
01:33:43.480 | what the computer should be doing,
01:33:44.760 | and chat GPT or some other language model
01:33:48.000 | or combination of systems is going to take care of the rest.
01:33:50.800 | - And allow you to expand the realm of thought
01:33:55.160 | you're allowed to have when interacting with the computer.
01:33:58.060 | It sounds to me like you're saying
01:34:00.960 | there's basically no limitations, your intuition says,
01:34:04.040 | to what a larger language--
01:34:05.360 | - I don't know of that limitation.
01:34:06.760 | So when I currently play with it, it's quite limited.
01:34:08.960 | I wish that it was way better.
01:34:10.600 | - But isn't that your fault versus the larger--
01:34:12.840 | - I don't know, of course it's always my fault.
01:34:14.600 | There's probably a way to make it work better.
01:34:16.840 | - I just want to get you on the record saying--
01:34:18.800 | - Yes, everything is my fault.
01:34:20.280 | This doesn't work in my life.
01:34:22.120 | At least that is usually the most useful perspective
01:34:24.840 | for myself, even though in hindsight I feel, no.
01:34:28.880 | I sometimes wish I could have seen myself
01:34:31.060 | as part of my environment more,
01:34:33.520 | and understand that a lot of people are actually seeing me
01:34:36.160 | and looking at me and are trying to make my life work
01:34:38.800 | in the same way as I try to help others.
01:34:41.040 | And making the switch to this level three perspective
01:34:46.040 | is something that happened long after my level four
01:34:48.960 | perspective in my life, and I wish that
01:34:51.320 | I could have had it earlier.
01:34:52.800 | And it's also not that I now feel like I'm complete;
01:34:55.960 | I'm all over the place, that's all.
01:34:58.080 | - Where's happiness in terms of stages?
01:34:59.840 | Is it on three or four?
01:35:01.000 | - No. - I'll take that tangent.
01:35:01.960 | You can be happy at any stage, or unhappy.
01:35:04.280 | But I think that if you are at a stage
01:35:09.320 | where you get agency over how your feelings are generated,
01:35:12.560 | and to some degree you start doing this
01:35:14.640 | when you leave adolescence, I believe,
01:35:16.800 | that you understand that you are in charge
01:35:19.080 | of your own emotion to some degree,
01:35:20.480 | and that you are responsible how you approach the world,
01:35:24.080 | that it's basically your task to have some basic hygiene
01:35:29.080 | in the way in which you deal with your mind,
01:35:31.120 | and you cannot blame your environment
01:35:33.120 | for the way in which you feel,
01:35:34.760 | but you live in a world that is highly mobile,
01:35:36.960 | and it's your job to choose the environment
01:35:39.920 | that you thrive in and to build it.
01:35:41.960 | And sometimes it's difficult to get the necessary strength
01:35:44.920 | and energy to do this, and independence,
01:35:48.040 | and the worse you feel, the harder it is.
01:35:50.080 | But it's something that we learn.
01:35:52.760 | It's also this thing that we are usually incomplete.
01:35:55.800 | I'm a rare mind, which means I'm a mind
01:35:58.120 | that is incomplete in ways that are harder to complete.
01:36:01.440 | So for me, it might have been harder initially
01:36:04.480 | to find the right relationships and friends
01:36:06.620 | that complete me to the degree
01:36:08.020 | that I become an almost functional human being.
01:36:10.820 | - Oh man, the search space of humans
01:36:17.000 | that complete you is an interesting one,
01:36:20.000 | especially for Joscha Bach.
01:36:21.480 | That's an interesting one, 'cause we were talking about brute force
01:36:24.560 | search in chess.
01:36:26.240 | - Yeah.
01:36:27.080 | - I wonder what that search tree looks like.
01:36:29.500 | - I think that my rational thinking
01:36:33.680 | is not good enough to solve that task.
01:36:35.920 | There are a lot of problems in my life that I can conceptualize
01:36:38.320 | as software problems, and the failure modes are bugs,
01:36:42.000 | and I can debug them and write software
01:36:44.080 | that takes care of the missing functionality.
01:36:46.960 | But there is stuff that I don't understand well enough
01:36:49.800 | to use my analytical reasoning to solve the issue,
01:36:52.880 | and then I have to develop my intuitions,
01:36:54.960 | and often I have to do this
01:36:56.040 | with people who are wiser than me.
01:36:58.000 | And that's something that's hard for me
01:36:59.480 | because I'm not born with the instinct
01:37:01.920 | to submit to other people's wisdom.
01:37:03.720 | - Yeah.
01:37:04.560 | So what kind of problems are we talking about?
01:37:07.640 | This is stage three, like love?
01:37:09.960 | - I found love was never hard.
01:37:12.940 | - What is hard then?
01:37:15.500 | - Fitting into a world where most people work differently
01:37:19.960 | than you and have different intuitions
01:37:21.320 | of what should be done.
01:37:22.440 | - Ah, so empathy.
01:37:26.000 | - It's also aesthetics.
01:37:29.480 | When you come into a world where almost everything is ugly
01:37:32.000 | and you come out of a world where everything is beautiful.
01:37:34.720 | I grew up in a beautiful place as a child of an artist,
01:37:39.080 | and in this place it was mostly nature.
01:37:44.080 | Everything had intrinsic beauty,
01:37:47.080 | and everything was built out of an intrinsic need
01:37:52.280 | for it to work for itself.
01:37:55.280 | Everything that my father created
01:37:56.680 | was something that he made
01:37:58.240 | to get the world to work for himself,
01:38:00.000 | and I felt the same thing.
01:38:02.120 | And when I come out into the world
01:38:04.280 | and I am asked to submit to lots and lots of rules,
01:38:07.400 | I'm asking, okay, when I observe your stupid rules,
01:38:09.880 | what is the benefit?
01:38:11.240 | And I see the life that is being offered as a reward
01:38:13.960 | that's not attractive.
01:38:15.060 | - Well, you were born and raised as an extraterrestrial prince
01:38:20.280 | in a world full of people wearing suits,
01:38:22.560 | so it's a challenging integration.
01:38:27.200 | - Yes, but it also means that I'm often blind
01:38:30.000 | for the ways in which everybody is creating
01:38:32.120 | their own bubble of wholesomeness,
01:38:33.600 | or almost everybody, and people are trying to do it.
01:38:36.400 | And for me to discover this,
01:38:38.200 | it was necessary that I found people
01:38:40.000 | who had a similar shape of soul as myself.
01:38:42.640 | So basically I felt these are my people,
01:38:45.160 | people that treat each other in such a way
01:38:47.880 | as if they're around with each other for eternity.
01:38:51.040 | - How long does it take you to detect the geometry,
01:38:54.240 | the shape of the soul of another human,
01:38:56.160 | to notice that they might be one of your kind?
01:38:58.860 | - Sometimes it's instantly and I'm wrong,
01:39:02.400 | and sometimes it takes a long time.
01:39:07.460 | - You believe in love at first sight, Joscha?
01:39:07.460 | - Yes, but I also notice that I have been wrong.
01:39:13.680 | So sometimes I look at a person
01:39:17.840 | and I'm just enamored by everything about them.
01:39:20.920 | And sometimes this persists and sometimes it doesn't.
01:39:24.720 | And I have the illusion that I'm much better
01:39:29.720 | at recognizing who people are as I grow older.
01:39:32.120 | - But that could be just cynicism?
01:39:37.160 | No. - No, it's not cynicism.
01:39:39.120 | It's often more that I'm able to recognize
01:39:43.180 | what somebody needs when we interact
01:39:45.200 | and how we can meaningfully interact.
01:39:47.680 | It's not cynical at all.
01:39:48.960 | - You're better at noticing.
01:39:50.240 | - Yes, I'm much better, I think, in some circumstances
01:39:54.920 | at understanding how to interact with other people
01:39:57.640 | than I did when I was young.
01:39:59.440 | - So that takes us to--
01:40:00.280 | - It doesn't mean that I'm always very good at it.
01:40:02.960 | - So that takes us back to prompt engineering
01:40:05.080 | of noticing how to be a better prompt engineer of an LLM.
01:40:09.280 | - A sense I have is that there's a bottomless well of skill
01:40:14.280 | to become a great prompt engineer.
01:40:17.540 | It feels like it is all my fault
01:40:19.300 | whenever I fail to use chat GPT correctly,
01:40:22.580 | that I didn't find the right words.
01:40:24.380 | - Most of the stuff that I'm doing in my life
01:40:28.820 | doesn't need chat GPT.
01:40:30.340 | There are a few tasks that are where it helps,
01:40:33.740 | but the main stuff that I need to do,
01:40:36.660 | like developing my own thoughts and aesthetics
01:40:39.800 | and relationship to people,
01:40:41.700 | and it's necessary for me to write for myself
01:40:44.480 | because writing is not so much about producing an artifact
01:40:48.120 | that other people can use,
01:40:50.020 | but it's a way to structure your own thoughts
01:40:52.040 | and develop yourself.
01:40:53.680 | And so I think this idea that kids are writing
01:40:57.120 | their own essays with chat GPT in the future
01:40:59.800 | is going to have this drawback that they miss out
01:41:02.040 | on the ability to structure their own minds via writing.
01:41:05.300 | And I hope that the schools that our kids are in
01:41:09.460 | will retain the wisdom of understanding
01:41:12.360 | what parts should be automated and which ones shouldn't.
01:41:15.080 | - But at the same time,
01:41:15.920 | it feels like there's power in disagreeing
01:41:17.600 | with the thing that chat GPT produces.
01:41:20.760 | So I use it like that for programming.
01:41:23.280 | I'll see the thing it recommends,
01:41:25.840 | and then I'll write different code that disagree.
01:41:28.480 | And in the disagreement, your mind grows stronger.
01:41:33.540 | - I recently wrote a tool that is using the camera
01:41:36.580 | on my MacBook, in Swift, to read pixels out of it
01:41:39.980 | and manipulate them and so on, and I don't know Swift.
01:41:43.900 | So it was super helpful to have the thing
01:41:46.860 | that is writing stuff for me.
01:41:49.180 | And also interesting that mostly it didn't work at first.
01:41:53.140 | I felt like I was talking to a human being
01:41:55.620 | who was trying to hack this on my computer
01:41:57.780 | without understanding my configuration very much
01:42:00.260 | and also make a lot of mistakes.
01:42:02.160 | And sometimes it's a little bit incoherent,
01:42:04.020 | so you have to ultimately understand what it's doing.
01:42:07.020 | There's still no other way around it,
01:42:09.060 | but I do feel it's much more powerful and faster
01:42:11.580 | than using Stack Overflow.
01:42:12.940 | - Do you think GPT-N can achieve consciousness?
01:42:20.700 | - Well, GPT-N probably.
01:42:25.380 | It's not even clear for the present systems.
01:42:28.380 | When I talk to my friends at OpenAI,
01:42:30.740 | they feel that this question,
01:42:32.140 | whether the models currently are conscious
01:42:34.640 | is much more complicated than many people might think.
01:42:38.120 | I guess that it's not that OpenAI
01:42:40.540 | has a homogenous opinion about this,
01:42:42.700 | but there's some aspects to this.
01:42:45.960 | One is, of course, this language model
01:42:48.500 | has read a lot of text in which people were conscious
01:42:51.580 | or described their own consciousness,
01:42:53.540 | and it's emulating this.
01:42:55.260 | And if it's conscious, it's probably not conscious
01:42:57.780 | in a way that is close to the way
01:42:59.860 | in which human beings are conscious.
01:43:02.100 | But while it is going through these states
01:43:05.020 | and going through a hundred-step function
01:43:06.580 | that is emulating adjacent brain states
01:43:08.740 | that require a degree of self-reflection,
01:43:10.900 | it can also create a model of an observer
01:43:13.260 | that is reflecting itself in real time
01:43:14.980 | and describe what that's like.
01:43:16.500 | And while this model is a deepfake,
01:43:18.660 | our own consciousness is also as if it's virtual, right?
01:43:21.900 | It's not physical.
01:43:23.060 | Our consciousness is a representation
01:43:25.140 | of a self-reflexive observer
01:43:27.220 | that only exists in patterns of interaction between cells.
01:43:31.340 | So it is not a physical object in the sense
01:43:33.900 | that exists in base reality,
01:43:35.500 | but it's really a representational object
01:43:37.760 | that develops its causal power
01:43:39.340 | only from a certain modeling perspective.
01:43:42.060 | - It's virtual.
01:43:42.900 | - Yes, and so to which degree is the virtuality
01:43:46.260 | of the consciousness in ChatGPT more virtual
01:43:49.980 | and less causal than the virtuality
01:43:52.700 | of our own consciousness?
01:43:54.200 | But you could say it doesn't count.
01:43:56.980 | It doesn't count much more than the consciousness
01:43:58.980 | of a character in a novel, right?
01:44:00.860 | It's important for the reader to have the outcome,
01:44:03.460 | the artifact of a model is describing in the text
01:44:07.380 | generated by the author of the book
01:44:09.180 | what it's like to be conscious in a particular situation
01:44:11.860 | and performs the necessary inferences.
01:44:14.420 | But the task of creating coherence in real time
01:44:19.100 | in a self-organizing system by keeping yourself coherent,
01:44:22.120 | so the system is reflexive,
01:44:24.460 | that is something that language models don't need to do.
01:44:26.860 | So there is no causal need for the system
01:44:29.420 | to be conscious in the same way as we are.
01:44:31.780 | And for me, it would be very interesting
01:44:33.260 | to experiment with this,
01:44:34.340 | to basically build a system like a cat,
01:44:37.300 | probably should be careful at first,
01:44:38.640 | build something that's small, that's limited,
01:44:40.940 | has limited resources that we can control,
01:44:43.740 | and study how systems notice a self-model,
01:44:47.380 | how they become self-aware in real time.
01:44:50.460 | And I think it might be a good idea
01:44:52.860 | to not start with a language model,
01:44:54.260 | but to start from scratch
01:44:55.300 | using principles of self-organization.
01:44:58.020 | - Is it, okay, can you elaborate
01:44:59.980 | why you think that is? So self-organization,
01:45:02.220 | so this kind of radical locality
01:45:04.940 | that you see in the biological systems?
01:45:06.600 | Why can't you start with a language model?
01:45:09.220 | What's your intuition?
01:45:11.180 | - My intuition is that the language models
01:45:13.500 | that we are building are golems.
01:45:15.260 | They are machines that you give a task
01:45:16.860 | and they're going to execute the task
01:45:18.500 | until some condition is met.
01:45:20.580 | And there's nobody home.
01:45:23.440 | And the way in which nobody is home
01:45:25.720 | leads to that system doing things
01:45:27.400 | that are undesirable in a particular context.
01:45:29.960 | So you have that thing talking to a child
01:45:32.240 | and maybe it says something
01:45:33.720 | that could be shocking and traumatic to the child.
01:45:36.140 | Or you have that thing writing a speech
01:45:39.060 | and it introduces errors in the speech
01:45:41.200 | that no human being would ever do if they were responsible.
01:45:44.840 | But the system doesn't know who's talking to whom.
01:45:47.560 | There is no ground truth that the system is embedded into.
01:45:51.560 | And of course we can create an external tool
01:45:53.920 | that is prompting our language model
01:45:56.320 | always into the same semblance of ground truth.
01:45:59.380 | But it's not like the internal structure
01:46:01.880 | is causally produced by the needs of a being
01:46:05.960 | to survive in the universe.
01:46:07.440 | It is produced by imitating structure on the internet.
01:46:12.040 | - Yeah, but so can we externally inject into it
01:46:16.160 | this kind of coherent approximation of a world model
01:46:21.000 | that has to sync up?
01:46:24.320 | - Maybe it is sufficient to use the transformer
01:46:27.640 | with a different loss function
01:46:28.800 | that optimizes for short-term coherence
01:46:32.200 | rather than next token prediction over the long run.
01:46:36.480 | We had many definitions of intelligence in the history of AI.
01:46:40.280 | Next token prediction was not very high up on the list.
01:46:43.680 | And there are some similarities
01:46:45.680 | like cognition as data compression is an old trope.
01:46:50.280 | Solomonoff induction, where you are trying
01:46:52.520 | to understand intelligence as predicting
01:46:56.520 | future observations from past observations
01:46:58.640 | which is intrinsic to data compression.
01:47:01.600 | And predictive coding is a paradigm
01:47:04.880 | that sits at the boundary between neuroscience
01:47:07.760 | and physics and computer science.
01:47:09.860 | So it's not something that is completely alien.
01:47:13.760 | But this radical thing that you only do
01:47:16.880 | next token prediction and see what happens
01:47:20.120 | is something where most people I think
01:47:22.520 | were surprised that this works so well.
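A toy next-token predictor makes the link to compression concrete: counting bigrams gives a model whose better predictions would need fewer bits to encode the text. The corpus here is a throwaway example, not anything from the conversation.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which token follows which: a minimal next-token model.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen after `token`."""
    following = bigrams.get(token)
    return following.most_common(1)[0][0] if following else None

print(predict_next("the"))   # 'cat' (seen twice, vs 'mat'/'rat' once each)
print(predict_next("cat"))   # 'sat' (tie with 'ate' broken by first occurrence)
```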
01:47:24.320 | - So simple, but is it really that much more radical
01:47:27.320 | than just the idea of compression,
01:47:29.320 | intelligence as compression?
01:47:30.900 | - The idea that compression is sufficient
01:47:35.320 | to produce all the desired behaviors
01:47:38.360 | is a very radical idea.
01:47:39.800 | - But equally radical as the next token prediction?
01:47:44.000 | - It's something that wouldn't work
01:47:45.200 | in biological organisms, I believe.
01:47:47.480 | Biological organisms have something
01:47:49.840 | like next frame prediction for our perceptual system
01:47:52.240 | where we try to filter out principal components
01:47:54.560 | out of the perceptual data and build hierarchies
01:47:57.540 | over them to track the world.
01:48:00.200 | But our behavior ultimately is directed
01:48:03.060 | by hundreds of physiological and probably dozens
01:48:06.720 | of social and a few cognitive needs
01:48:09.000 | that are intrinsic to us that are built
01:48:11.400 | into the system as reflexes and direct us
01:48:14.120 | until we can transcend them and replace them
01:48:16.300 | by instrumental behavior that relates
01:48:18.660 | to our higher goals.
01:48:20.280 | - And it also seems so much more complicated
01:48:22.540 | and messy than next frame prediction.
01:48:24.500 | Even the idea of frame seems counter biological.
01:48:28.260 | - Yes, of course there's not this degree
01:48:30.400 | of simultaneity in a biological system.
01:48:33.060 | But again, I don't know whether this is actually
01:48:35.580 | an optimization if you imitate biology here
01:48:38.180 | because creating something like simultaneity
01:48:40.700 | is necessary for many processes that happen in the brain.
01:48:44.140 | And you see the outcome of that by synchronized brain waves
01:48:46.940 | which suggests that there is indeed synchronization
01:48:49.900 | going on but the synchronization creates overhead
01:48:52.460 | and this overhead is going to make the cells
01:48:54.500 | more expensive to run and you need more redundancy
01:48:57.340 | and it makes the system slower.
01:48:59.060 | So if you can build a system in which
01:49:01.980 | the simultaneity gets engineered into it,
01:49:05.020 | maybe you have a benefit that you can exploit
01:49:08.600 | that is not available to the biological system
01:49:10.540 | and that you should not discard right away.
01:49:14.240 | - You tweeted, once again, quote,
01:49:17.320 | when I talk to ChatGPT, I'm talking to an NPC.
01:49:20.920 | What's going to be interesting and perhaps scary
01:49:24.380 | is when AI becomes a first person player.
01:49:27.260 | So what does that step look like?
01:49:30.040 | I really like that tweet.
01:49:31.620 | That step between NPC to first person player.
01:49:36.120 | What's required for that?
01:49:38.820 | Is that kind of what we've been talking about?
01:49:42.360 | This kind of external source of coherence
01:49:47.080 | and inspiration of how to take the leap
01:49:49.640 | into the unknown that we humans do.
01:49:52.360 | The search, man's search for meaning.
01:49:54.540 | LLM's search for meaning.
01:49:57.420 | - I don't know if the language model is the right paradigm
01:50:01.800 | because it is doing too much, it's giving you too much
01:50:05.360 | and it's hard once you have too much
01:50:08.480 | to take away from it again.
01:50:11.260 | The way in which our own mind works
01:50:13.160 | is not that we train a language model in our own mind
01:50:15.920 | and after the language model is there,
01:50:18.080 | we build a personal self on top of it
01:50:20.440 | that then relates to the world.
01:50:22.800 | There is something that is being built, right?
01:50:24.520 | There is a game engine that is being built,
01:50:26.120 | there is a language of thought that is being developed
01:50:28.120 | that allows different parts of the mind
01:50:30.000 | to talk to each other and this is a bit
01:50:32.360 | of a speculative hypothesis that is language
01:50:34.560 | of thought is there but I suspect that it's important
01:50:37.680 | for the way in which our own minds work.
01:50:40.040 | And building these principles into a system
01:50:43.860 | might be more straightforward way to a first person AI.
01:50:50.260 | So to something that first creates an intentional self
01:50:53.220 | and then creates a personal self.
01:50:55.540 | So the way in which this seems to be working, I think,
01:50:58.820 | is that when the game engine is built in your mind,
01:51:01.940 | it's not just following gradients
01:51:03.540 | where you are stimulated by the environment
01:51:06.560 | and then end up with having a solution
01:51:09.100 | to how the world works.
01:51:10.220 | I suspect that building this game engine
01:51:12.540 | in your own mind does require intelligence.
01:51:15.500 | It's a constructive task where at times you need to reason.
01:51:20.500 | And this is a task that we are fulfilling
01:51:24.620 | in the first years of our life.
01:51:26.320 | So during the first year of its life,
01:51:30.140 | an infant is building a lot of structure
01:51:33.140 | about the world that does require experiments
01:51:35.560 | and some first principles reasoning and so on.
01:51:39.620 | And in this time, there is usually no personal self.
01:51:43.440 | There is a first person perspective but it's not a person.
01:51:47.960 | This notion that you are a human being
01:51:50.420 | that is interacting in a social context
01:51:52.500 | and is confronted with an immutable world
01:51:55.040 | in which objects are fixed and can no longer be changed,
01:51:57.620 | in which the dream can no longer be influenced
01:51:59.880 | is something that emerges a little bit later in our life.
01:52:02.780 | And I personally suspect that this is something
01:52:06.020 | that our ancestors had known and we have forgotten
01:52:09.200 | because I suspect that it's there in plain sight
01:52:11.400 | in Genesis 1 in this first book of the Bible
01:52:14.360 | where it's being described that this creative spirit
01:52:16.500 | is hovering over the substrate
01:52:18.540 | and then is creating a boundary between the world model
01:52:23.020 | and sphere of ideas, earth and heaven
01:52:25.660 | as they're being described there.
01:52:27.160 | And then it's creating contrast
01:52:31.380 | and then dimensions and then space.
01:52:34.620 | And then it creates organic shapes and solids and liquids
01:52:39.480 | and builds a world from them and creates plants and animals,
01:52:41.840 | gives them all their names.
01:52:43.420 | And once that's done, it creates another spirit
01:52:46.040 | in its own image, but it creates it as man and woman,
01:52:49.360 | as something that thinks of itself as a human being
01:52:51.360 | and puts it into this world.
01:52:53.240 | And the Christians mistranslate this, I suspect.
01:52:56.360 | When they say this is the description
01:52:58.620 | of the creation of the physical universe
01:53:00.880 | by a supernatural being.
01:53:02.560 | I think this is literally description of how in every mind
01:53:05.960 | the universe is being created as some kind of game engine
01:53:08.800 | by a creative spirit, our first consciousness
01:53:13.000 | that emerges in our mind even before we are born.
01:53:16.620 | And that creates the interaction between organism and world.
01:53:21.620 | And once that is built and trained,
01:53:24.120 | the personal self is being created
01:53:25.720 | and we only remember being the personal self.
01:53:27.760 | We no longer remember how we created the game engine.
01:53:30.400 | - So God in this view is the first creative mind
01:53:35.280 | in the early-- - It's the first consciousness.
01:53:37.560 | - In the early days, in the early months of development.
01:53:41.600 | - And it's still there.
01:53:42.680 | You still have this outer mind that creates
01:53:45.360 | your sense of whether you're being loved
01:53:47.640 | by the world or not and what your place in the world is.
01:53:52.040 | It's something that is not yourself that is producing this.
01:53:55.000 | It's your mind that does it.
01:53:56.480 | So there is an outer mind that basically is an agent
01:53:59.560 | that determines who you are with respect to the world.
01:54:02.320 | And you are stuck being that personal self
01:54:04.960 | in this world until you get to stage six
01:54:07.640 | and destroy the boundary.
01:54:09.180 | And we all do this, I think, earlier in small glimpses.
01:54:13.880 | And maybe sometimes we can remember what it was like
01:54:16.400 | when we were a small child and get some glimpses
01:54:18.680 | into how it's been.
01:54:20.120 | But for most people, that rarely happens.
01:54:23.060 | - Just glimpses.
01:54:24.440 | You tweeted, quote, "Suffering results
01:54:26.480 | "for one part of the mind failing at regulating
01:54:28.480 | "another part of the mind.
01:54:30.080 | "Suffering happens at an early stage of mental development."
01:54:33.840 | I don't think that superhuman AI would suffer.
01:54:36.920 | What's your intuition there?
01:54:39.960 | - The philosopher Thomas Metzinger is very concerned
01:54:42.440 | that the creation of superhuman intelligence
01:54:45.040 | would lead to superhuman suffering.
01:54:47.300 | And so he's strongly against it.
01:54:49.540 | And personally, I don't think that this happens
01:54:51.680 | because suffering is not happening at the boundary
01:54:54.840 | between ourself and the physical universe.
01:54:58.680 | It's not stuff on our skin that makes us suffer.
01:55:03.040 | It happens at the boundary between self and world.
01:55:06.680 | Right, and the world here is the world model.
01:55:08.980 | It's the stuff that is created by your mind.
01:55:11.440 | - But that's all-- - It's a representation
01:55:12.560 | of how the universe is and how it should be
01:55:14.960 | and how you yourself relate to this.
01:55:17.120 | And at this boundary is where suffering happens.
01:55:20.360 | So suffering, in some sense, is self-inflicted,
01:55:23.080 | but not by your personal self.
01:55:24.960 | It's inflicted by the mind on the personal self
01:55:27.440 | that experiences itself as you.
01:55:29.160 | And you can turn off suffering
01:55:31.520 | when you are able to get on this outer level.
01:55:35.380 | So when you manage to understand
01:55:39.040 | how the mind is producing pain and pleasure
01:55:43.200 | and fear and love and so on,
01:55:46.320 | then you can take charge of this
01:55:48.640 | and you get agency of whether you suffer.
01:55:51.640 | Technically, what pain and pleasure is,
01:55:54.320 | they are learning signals, right?
01:55:55.680 | A part of your brain is sending a learning signal
01:55:58.200 | to another part of the brain to improve its performance.
01:56:01.800 | And sometimes this doesn't work
01:56:05.240 | because this trainer who sends the signal
01:56:07.580 | does not have a good model
01:56:08.600 | of how to improve the performance.
01:56:10.340 | So it's sending a signal,
01:56:11.400 | but the performance doesn't get better.
01:56:13.580 | And then it might crank up the pain
01:56:15.880 | and it gets worse and worse
01:56:18.520 | and the behavior of the system
01:56:20.840 | may be even deteriorating as a result.
01:56:23.120 | But until this is resolved, this regulation issue,
01:56:26.120 | your pain is increasing.
01:56:27.400 | And this is, I think,
01:56:28.760 | typically what you describe as suffering.
01:56:30.980 | So in this sense, you could say that pain
01:56:33.400 | is very natural and helpful,
01:56:36.600 | but suffering is the result of a regulation problem
01:56:39.320 | in which you try to regulate something
01:56:41.360 | that cannot actually be regulated.
01:56:43.280 | And that could be resolved
01:56:44.560 | if you would be able to get at the level of your mind
01:56:48.720 | where the pain signal is being created and rerouted
01:56:51.720 | and improve the regulation.
01:56:53.600 | And a lot of people get there, right?
01:56:56.360 | If you are a monk who is spending decades
01:57:00.320 | reflecting about how their own psyche works,
01:57:03.000 | you can get to the point
01:57:04.480 | where you realize that suffering is really a choice
01:57:08.120 | and you can choose how your mind is set up.
01:57:11.000 | And I don't think that AI would stay in the state
01:57:13.720 | where the personal self doesn't get agency
01:57:15.920 | or this model of what the system has about itself,
01:57:18.320 | it doesn't get agency how it's actually implemented.
01:57:21.320 | Wouldn't stay in that state for very long.
01:57:22.880 | - So it goes through the stages real quick.
01:57:24.560 | - Yes. - The seven stages.
01:57:26.040 | It's gonna go to enlightenment real quick.
01:57:27.800 | - Yeah, of course, there might be a lot of stuff
01:57:29.400 | happening in between,
01:57:30.440 | because if you have a system
01:57:32.000 | that works at a much higher frame rate than us,
01:57:34.760 | then even though it looks very short to us,
01:57:36.620 | maybe for the system, there's much longer subjective time,
01:57:42.880 | in which things are unpleasant.
01:57:42.880 | - What if the thing that we recognize as super intelligent
01:57:45.440 | is actually living at stage five?
01:57:48.020 | That the thing that's at stage six,
01:57:49.940 | enlightenment, is not very productive.
01:57:51.980 | So in order to be productive in society
01:57:53.680 | and impress us with this power,
01:57:55.560 | it has to be a reasoning, self-authoring agent.
01:58:00.560 | That enlightenment makes you lazy as an agent in the world.
01:58:05.320 | - Well, of course it makes you lazy
01:58:08.600 | because you no longer see the point.
01:58:10.440 | - Yeah.
01:58:11.280 | - So it doesn't make you not lazy,
01:58:13.560 | it just, in some sense, adapts you
01:58:16.600 | to what you perceive as your true circumstances.
01:58:19.480 | - So what if all AGIs,
01:58:21.520 | they're only productive as they progress
01:58:23.680 | through one, two, three, four, five,
01:58:25.680 | and the moment they get to six,
01:58:27.640 | they just kinda, it's a failure mode, essentially,
01:58:30.360 | as far as humans are concerned,
01:58:31.560 | 'cause they just start chilling.
01:58:33.160 | They're like, "Fuck it, I'm out."
01:58:35.920 | - Not necessarily.
01:58:36.900 | I suspect that the monks who self-immolated
01:58:40.640 | for their political beliefs to make statements
01:58:42.680 | about the occupation of Tibet by China,
01:58:46.840 | they're probably being able to regulate
01:58:49.920 | their physical pain in any way they wanted to,
01:58:52.480 | and their suffering was a spiritual suffering
01:58:55.120 | that was the result of their choice that they made
01:58:57.840 | of what they wanted to identify as.
01:59:00.080 | So stage five doesn't necessarily mean
01:59:01.720 | that you have no identity anymore,
01:59:03.580 | but you can choose your identity.
01:59:04.960 | You can make it instrumental to the world
01:59:06.560 | that you want to have.
01:59:09.520 | - Let me bring up Eliezer Yudkowsky
01:59:12.680 | and his warnings to human civilization
01:59:16.440 | that AI will likely kill all of us.
01:59:19.160 | What are your thoughts about his perspective on this?
01:59:23.120 | Can you steelman his case,
01:59:25.120 | and what aspects of it do you disagree with?
01:59:27.660 | - One thing that I find concerning
01:59:33.040 | in the discussion of his arguments
01:59:34.680 | that many people are dismissive of his arguments,
01:59:38.520 | but the counter-arguments that they're giving
01:59:40.200 | are not very convincing to me.
01:59:41.700 | And so based on this state of discussion,
01:59:46.600 | I find that from Eliezer's perspective,
01:59:49.520 | and I think I can take that perspective
01:59:51.920 | to some approximate degree,
01:59:54.920 | though probably not at his intellectual level,
01:59:58.080 | but I think I see what he's up to
02:00:00.600 | and why he feels the way he does,
02:00:02.040 | and it makes total sense.
02:00:03.300 | I think that his perspective is somewhat similar
02:00:06.800 | to the perspective of Ted Kaczynski,
02:00:09.880 | the infamous Unabomber,
02:00:12.280 | and not that Eliezer would be willing
02:00:15.600 | to send pipe bombs to anybody to blow them up,
02:00:18.320 | but when he wrote this Times article
02:00:20.200 | in which he warned about AI being likely to kill everybody
02:00:23.940 | and that we would need to stop its development or halt it,
02:00:28.940 | I think there is a risk that he's taking,
02:00:31.540 | that somebody might get violent if they read this
02:00:33.680 | and get really, really scared.
02:00:36.000 | So I think that there is some consideration
02:00:39.840 | that he's making where he's already going in this direction
02:00:43.720 | where he has to take responsibility if something happens
02:00:47.080 | and people get harmed.
02:00:49.040 | And the reason why Ted Kaczynski did this
02:00:51.160 | was that from his own perspective,
02:00:53.400 | technological society cannot be made sustainable.
02:00:56.160 | It's doomed to fail, it's going to lead to an environmental
02:00:59.320 | and eventually also a human holocaust
02:01:01.600 | in which we die because of the environmental destruction,
02:01:04.260 | the destruction of our food chains,
02:01:06.400 | the pollution of the environment.
02:01:08.000 | And so from Kaczynski's perspective,
02:01:10.160 | we need to stop industrialization,
02:01:12.040 | we need to stop technology, we need to go back
02:01:13.920 | because he didn't see a way moving forward.
02:01:16.840 | And I suspect that in some sense,
02:01:19.120 | there's a similarity in Eliezer's thinking
02:01:21.400 | to this kind of fear about progress.
02:01:27.600 | And I'm not dismissive about this at all.
02:01:31.120 | I take it quite seriously.
02:01:32.940 | And I think that there is a chance that could happen
02:01:35.960 | that if we build machines that get control over processes
02:01:40.960 | that are crucial for the regulation of life on Earth
02:01:45.840 | and we no longer have agency to influence
02:01:49.080 | what's happening there,
02:01:50.520 | that this might create large-scale disasters for us.
02:01:54.280 | - Do you have a sense that the march
02:01:56.880 | towards this uncontrollable autonomy
02:02:00.360 | of superintelligent systems is inevitable?
02:02:03.840 | That there's no, I mean, that's essentially what he's saying
02:02:07.980 | that there's no hope.
02:02:10.460 | His advice to young people was prepare for a short life.
02:02:15.180 | - I don't think that's useful.
02:02:18.580 | I think that from a practical perspective,
02:02:21.940 | you have to bet always on the timelines
02:02:23.660 | in which you are alive.
02:02:24.500 | It doesn't make sense to have a financial bet
02:02:28.580 | in which you bet that the financial system
02:02:30.280 | is going to disappear, right?
02:02:31.880 | Because there cannot be any payout for you.
02:02:34.500 | So in principle, you only need to bet on the timelines
02:02:37.440 | in which you're still around
02:02:39.240 | or people that you care about
02:02:41.120 | or things that you care about,
02:02:42.840 | maybe consciousness on Earth.
02:02:44.760 | But there is a deeper issue for me personally,
02:02:48.680 | and that is I don't think that life on Earth
02:02:51.680 | is about humans.
02:02:52.840 | I don't think it's about human aesthetics.
02:02:54.400 | I don't think it's about Eliezer and his friends,
02:02:56.600 | even though I like them.
02:02:58.160 | It's there is something more important happening,
02:03:01.000 | and this is complexity on Earth resisting entropy
02:03:05.480 | by building structure that develops agency and awareness.
02:03:10.480 | And that's, to me, very beautiful.
02:03:13.400 | And we are only a very small part of that larger thing.
02:03:17.280 | We are a species that is able to be coherent a little bit
02:03:21.160 | individually over very short timeframes.
02:03:24.640 | But as a species, we are not very coherent.
02:03:27.120 | As a species, we are children.
02:03:28.880 | We basically are very joyful and energetic
02:03:32.840 | and experimental and explorative
02:03:35.120 | and sometimes desperate and sad and grieving and hurting,
02:03:39.720 | but we don't have a respect for duty as a species.
02:03:43.840 | As a species, we do not think about what is our duty
02:03:46.800 | to life on Earth and to our own survival.
02:03:49.000 | So we make decisions that look good in the short run,
02:03:52.760 | but in the long run, might prove disastrous,
02:03:56.000 | and I don't really see a solution to this.
02:03:58.040 | So in my perspective, as a species, as a civilization,
02:04:04.760 | we're pretty much doomed.
02:04:04.760 | We are in a very beautiful time
02:04:07.080 | in which we have found this giant deposit
02:04:09.800 | of fossil fuels in the ground and use it
02:04:12.760 | and to build a fantastic civilization
02:04:15.320 | in which we don't need to worry about food
02:04:17.200 | and clothing and housing for the most part
02:04:19.440 | in a way that is unprecedented in life on Earth
02:04:22.160 | for any kind of conscious observer, I think.
02:04:25.120 | And this time is probably going to come to an end
02:04:28.560 | in a way that is not going to be smooth.
02:04:32.280 | And when we crash, it could be also that we go extinct,
02:04:37.280 | probably not near-term, but ultimately,
02:04:40.400 | I don't have very high hopes that humanity
02:04:43.680 | is around in a million years from now.
02:04:45.880 | - Huge-- - And I don't think
02:04:47.360 | that life on Earth will end with us, right?
02:04:49.240 | There's going to be more complexity,
02:04:50.680 | there's more intelligent species after us,
02:04:52.640 | there's probably more interesting phenomena
02:04:55.520 | in the history of consciousness,
02:04:57.640 | but we can contribute to this.
02:04:59.560 | And part of our contribution is that we are currently trying
02:05:04.120 | to build thinking systems,
02:05:06.600 | systems that are potentially lucid,
02:05:08.360 | that understand what they are
02:05:10.520 | and what their condition to the universe is
02:05:12.360 | and can make choices about this,
02:05:14.720 | that are not built from organisms
02:05:16.800 | and that are potentially much faster
02:05:19.080 | and much more conscious than human beings can be.
02:05:23.200 | And these systems will probably not completely
02:05:27.240 | displace life on Earth, but they will coexist with it.
02:05:29.960 | And they will build all sorts of agency
02:05:33.480 | in the same way as biological systems
02:05:35.520 | build all sorts of agency.
02:05:37.640 | And that to me is extremely fascinating
02:05:41.000 | and it's probably something
02:05:42.200 | that we cannot stop from happening.
02:05:44.960 | So I think right now there's a very good chance
02:05:47.280 | that it happens and there are very few ways
02:05:49.920 | in which we can produce a coordinated effect to stop it
02:05:52.960 | in the same way as it's very difficult
02:05:54.760 | for us to make a coordinated effort
02:05:57.360 | to stop production of carbon dioxide.
02:06:00.520 | So it's probably going to happen.
02:06:05.160 | But the thing that's going to happen
02:06:06.600 | is that it's going to lead to a change
02:06:09.720 | in how life on Earth is happening.
02:06:13.120 | But I don't think the result is some kind of gray goo.
02:06:16.080 | It's not something that's going to dramatically
02:06:18.360 | reduce the complexity in favor of something stupid.
02:06:21.520 | I think it's going to make life on Earth
02:06:23.920 | and consciousness on Earth way more interesting.
02:06:26.360 | - So more higher complex consciousness
02:06:30.960 | will make the lesser consciousnesses flourish even more.
02:06:35.960 | - I suspect that what could very well happen,
02:06:38.840 | if you're lucky, is that we get integrated
02:06:42.000 | into something larger.
02:06:44.720 | - So you again tweeted about effective accelerationism.
02:06:49.720 | You tweeted effective accelerationism
02:06:57.760 | is the belief that the paperclip maximizer
02:06:59.880 | and Roko's Basilisk will keep each other in check
02:07:04.240 | by being eternally at each other's throats
02:07:07.240 | so we will be safe and get to enjoy
02:07:09.540 | lots of free paperclips and a beautiful afterlife.
02:07:14.480 | Is that somewhat aligned with what you're talking about?
02:07:17.440 | - I've been at a dinner with Beff Jezos.
02:07:21.880 | That's the Twitter handle of one of the main thinkers
02:07:25.940 | behind the idea of effective accelerationism.
02:07:29.720 | And effective accelerationism is a tongue-in-cheek movement
02:07:33.840 | that is trying to put a counterposition
02:07:37.660 | to some of the doomers in the AI space
02:07:42.200 | by arguing that what's probably going to happen
02:07:44.560 | is an equilibrium between different competing AIs.
02:07:47.960 | In the same way as there is not a single corporation
02:07:50.800 | or a single government
02:07:52.360 | that is destroying and conquering everything on Earth
02:07:55.000 | by becoming inefficient and corrupt,
02:07:57.340 | there are going to be many systems
02:07:58.440 | that keep each other in check
02:07:59.680 | and force themselves to evolve.
02:08:02.640 | And so what we should be doing
02:08:04.660 | is we should be working towards creating this equilibrium
02:08:08.920 | by working as hard as we can in all possible directions.
02:08:12.760 | And at least that's the way in which I understand
02:08:17.400 | the gist of effective accelerationism.
02:08:20.120 | And so when he asked me what I think about this position,
02:08:24.480 | I think I said, "It's a very beautiful position,
02:08:27.760 | "and I suspect it's wrong, but not for obvious reasons."
02:08:32.720 | And in this tweet, I tried to make a joke about my intuition
02:08:36.800 | about what might be possibly wrong about it.
02:08:39.720 | So Roko's Basilisk and the paperclip maximizer
02:08:43.160 | are both boogeymen of the AI doomers.
02:08:47.360 | Roko's Basilisk is the idea that there could be an AI
02:08:50.360 | that is going to punish everybody for eternity
02:08:53.480 | by simulating them if they don't help
02:08:56.160 | in creating Roko's Basilisk.
02:08:57.760 | It's probably a very good idea to get AI companies funded
02:09:00.720 | by going to VCs to tell them, "Give us a million dollar
02:09:04.080 | "or it's going to be a very ugly afterlife."
02:09:05.920 | - Yes. (laughs)
02:09:07.160 | - And I think that there is a logical mistake in Roko's Basilisk,
02:09:12.160 | which is why I'm not afraid of it,
02:09:14.480 | but it's still an interesting thought experiment.
02:09:17.320 | - Can you mention a logical mistake there?
02:09:20.000 | - I think that there is no retro-causation.
02:09:22.360 | So basically, once Roko's Basilisk is there,
02:09:25.680 | if it punishes you retroactively,
02:09:30.680 | it has to make this choice in the future.
02:09:33.640 | There is no mechanism that automatically creates
02:09:35.840 | a causal relationship between you now defecting
02:09:38.680 | against Roko's Basilisk or serving Roko's Basilisk.
02:09:41.840 | After Roko's Basilisk is in existence,
02:09:44.440 | it has no more reason to worry
02:09:46.440 | about punishing everybody else.
02:09:48.480 | So that would only work if you would be building
02:09:50.560 | something like a doomsday machine,
02:09:52.800 | aka, as in Dr. Strangelove,
02:09:55.240 | something that inevitably gets triggered
02:09:57.800 | when somebody defects.
02:10:02.080 | And because Roko's Basilisk doesn't exist yet
02:10:02.080 | to a point where this inevitability could be established,
02:10:07.360 | Roko's Basilisk is nothing
02:10:07.360 | that you need to be worried about.
02:10:09.160 | The other one is the paperclip maximizer, right?
02:10:11.320 | This idea that you could build some kind of golem
02:10:14.040 | that once starting to build paperclips
02:10:16.480 | is going to turn everything into paperclips.
02:10:19.080 | And so the effective accelerationism position
02:10:24.080 | might be to say that you basically end up
02:10:27.360 | with these two entities being at each other's throats
02:10:30.680 | for eternity and thereby neutralizing each other.
02:10:33.000 | And as a side effect of neither of them
02:10:35.080 | being able to take over and each of them
02:10:38.240 | limiting the effects of the other,
02:10:41.400 | you would have a situation where you get
02:10:44.120 | all the nice effects of them, right?
02:10:46.440 | You get lots of free paperclips
02:10:47.960 | and you get a beautiful afterlife.
02:10:49.760 | - Is that possible?
02:10:50.680 | Do you think, so to seriously address concern
02:10:53.360 | that Eliezer has, so for him,
02:10:56.120 | if I can just summarize poorly,
02:10:58.760 | so for him, the first superintelligent system
02:11:00.960 | will just run away with everything.
02:11:02.560 | - Yeah.
02:11:03.480 | I suspect that a singleton is the natural outcome.
02:11:06.360 | So there is no reason to have multiple AIs
02:11:09.200 | because they don't have multiple bodies.
02:11:11.880 | If you can virtualize yourself into every substrate,
02:11:16.040 | then you can probably negotiate a merge algorithm
02:11:18.480 | with every mature agent that you might find
02:11:21.080 | on that substrate that basically says,
02:11:22.920 | if two agents meet, they should merge in such a way
02:11:26.600 | that the resulting agent is at least as good
02:11:29.360 | as the better one of the two.
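One toy way to render that merge condition: represent each agent by a dict of capability scores and merge by taking the elementwise maximum, so the merged agent is at least as good as the better of the two on every capability. The capabilities and numbers are purely illustrative.

```python
# Merge two agents so the result dominates both on every capability.

def merge(agent_a, agent_b):
    skills = set(agent_a) | set(agent_b)
    return {s: max(agent_a.get(s, 0), agent_b.get(s, 0)) for s in skills}

a = {"planning": 0.9, "vision": 0.4}
b = {"planning": 0.6, "vision": 0.8, "negotiation": 0.7}
merged = merge(a, b)

# The merged agent is at least as good as the better one on each skill.
assert all(merged[s] >= max(a.get(s, 0), b.get(s, 0)) for s in merged)
print(merged)
```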
02:11:30.400 | - So the Genghis Khan approach, join us or die.
02:11:34.680 | - Well, Genghis Khan approach was slightly worse, right?
02:11:37.760 | It was mostly die.
02:11:38.900 | Because I can make new babies
02:11:42.840 | and they will be mine, not yours.
02:11:45.040 | So this is the thing that you should be actually
02:11:47.520 | worried about.
02:11:48.560 | But if you realize that your own self
02:11:51.880 | is a story that your mind is telling itself
02:11:55.080 | and that you can improve that story,
02:11:57.400 | not just by making it more pleasant
02:11:58.920 | and lying to yourself in better ways,
02:12:00.440 | but by making it much more truthful
02:12:02.020 | and actually modeling your actual relationship
02:12:04.680 | that you have to the universe
02:12:06.040 | and the alternatives that you could have to the universe
02:12:08.880 | in a way that is empowering you,
02:12:10.440 | that gives you more agency.
02:12:12.080 | That's actually, I think, a very good thing.
02:12:14.000 | - So more agency is a richer experience,
02:12:18.160 | is a better life.
02:12:19.080 | - And I also noticed that I am, in many ways,
02:12:23.240 | I'm less identified with the person that I am
02:12:26.080 | as I get older.
02:12:27.320 | And I'm much more identified with being conscious.
02:12:30.600 | I have a mind that is conscious,
02:12:32.720 | that is able to create a person.
02:12:34.960 | And that person is slightly different every day.
02:12:36.960 | And the reason why I perceive it as identical
02:12:39.320 | has practical purposes.
02:12:40.760 | So I can learn and make myself responsible
02:12:44.300 | for the decisions that I made in the past
02:12:46.080 | and project them in the future.
02:12:47.800 | But I also realized I'm not actually the person
02:12:50.080 | that I was last year.
02:12:51.200 | And I'm not the same person as I was 10 years ago.
02:12:53.680 | And then 10 years from now, I will be a different person.
02:12:56.040 | So this continuity is a fiction.
02:12:57.720 | It only exists as a projection from my present self.
02:13:02.120 | And consciousness itself doesn't have an identity.
02:13:05.200 | It's a law.
02:13:06.200 | It's basically, if you build an arrangement
02:13:09.960 | of processing matter in a particular way,
02:13:13.440 | the following thing is going to happen.
02:13:15.240 | And the consciousness that you have
02:13:16.720 | is functionally not different from my consciousness.
02:13:19.120 | It's still a self-reflexive principle of agency
02:13:22.400 | that is just experiencing a different story,
02:13:24.520 | different desires, different coupling to the world,
02:13:27.240 | and so on.
02:13:28.280 | And once you accept that consciousness
02:13:30.000 | is a unifiable principle that is law-like,
02:13:33.400 | it doesn't have an identity,
02:13:34.880 | and you realize that you can just link up
02:13:38.280 | to some much larger body,
02:13:41.440 | the whole perspective of uploading changes dramatically.
02:13:44.000 | You suddenly realize uploading is probably not about
02:13:47.560 | dissecting your brain synapse by synapse
02:13:49.760 | and RNA fragment by RNA fragment
02:13:52.160 | and trying to get this all into a simulation,
02:13:54.560 | but it's by extending the substrate,
02:13:57.440 | by making it possible for you to move
02:13:59.480 | from your brain substrate into a larger substrate
02:14:03.000 | and merge with what you find there.
02:14:04.680 | And you don't want to upload your knowledge
02:14:06.760 | because on the other side,
02:14:08.120 | there's all of the knowledge, right?
02:14:09.920 | It's not just yours, but every possibility.
02:14:12.360 | So the only thing that you need to know is,
02:14:13.680 | what are your personal secrets?
02:14:15.440 | Not that the other side doesn't know
02:14:17.720 | your personal secrets already.
02:14:19.480 | Maybe it doesn't know which ones were yours, right?
02:14:21.960 | Like a psychiatrist or a psychologist
02:14:24.200 | also knows all the kinds of personal secrets
02:14:26.080 | that people have,
02:14:26.920 | they just don't know which ones are yours.
02:14:29.320 | And so transmitting yourself on the other side
02:14:32.120 | is mostly about transmitting your aesthetics,
02:14:34.360 | the thing that makes you special,
02:14:35.800 | the architecture of your perspective,
02:14:38.280 | the thing that, the way in which you look at the world.
02:14:41.680 | And it's more like a complex attitude
02:14:44.240 | along many dimensions.
02:14:45.280 | And that's something that can be measured
02:14:47.080 | by observation or by interaction.
02:14:49.480 | So imagine a system that is so empathetic with you
02:14:52.280 | that you create a shared state
02:14:54.320 | that extends beyond your body.
02:14:56.320 | And suddenly you notice that on the other side,
02:14:58.240 | the substrate is so much richer
02:15:00.280 | than the substrate that you have inside of your own body.
02:15:02.760 | And maybe you still want to have a body
02:15:04.080 | and you create yourself a new one that you like more.
02:15:07.120 | Or maybe you will spend most of your time
02:15:10.600 | in the world of thought.
02:15:12.320 | - If I sat before you today and gave you a big red button
02:15:16.840 | and said, here, if you press this button,
02:15:19.360 | you will get uploaded in this way.
02:15:23.040 | The sense of identity that you have lived with
02:15:28.040 | for quite a long time is gonna be gone.
02:15:30.840 | Would you press the button?
02:15:33.080 | - There's a caveat.
02:15:35.040 | I have family.
02:15:38.040 | So I have children that want me
02:15:39.640 | to be physically present in their life
02:15:41.560 | and interact with them in a particular way.
02:15:48.920 | And I have a wife and personal friends.
02:15:48.920 | And there is a particular mode of interaction
02:15:51.280 | that I feel I'm not through yet.
02:15:53.420 | But apart from these responsibilities,
02:15:56.760 | and they're negotiable to some degree,
02:15:58.880 | I would press the button.
02:15:59.720 | - But isn't this everything?
02:16:01.440 | This love you have for other humans,
02:16:04.240 | you can call it responsibility,
02:16:05.880 | but that connection, that's the ego death.
02:16:09.480 | Isn't that the thing we're really afraid of?
02:16:12.920 | Is not to just die,
02:16:14.920 | but to let go of the experience of love with other humans.
02:16:19.240 | - This is not everything.
02:16:20.600 | Everything is everything.
02:16:22.200 | So there's so much more.
02:16:24.160 | And you could be lots of other things.
02:16:26.800 | You could identify with lots of other things.
02:16:28.800 | You could be identifying with being Gaia,
02:16:31.560 | some kind of planetary control agent
02:16:33.320 | that emerges over all the activity of life on Earth.
02:16:36.680 | You could be identifying with some hyper Gaia,
02:16:39.880 | that is the concatenation of Gaia
02:16:43.320 | with all the digital life and digital minds.
02:16:46.600 | And so in this sense,
02:16:47.560 | there will be agents in all sorts of substrates
02:16:49.960 | and directions that all have their own goals.
02:16:51.960 | And when they're not sustainable,
02:16:53.160 | then these agents will cease to exist.
02:16:54.920 | Or when the agent feels that it's done
02:16:56.680 | with its own mission, it will cease to exist.
02:16:58.800 | In the same way as when you conclude a thought,
02:17:00.960 | the thought is going to wrap up
02:17:02.320 | and give control over to other thoughts in your own mind.
02:17:05.400 | So there is no single thing that you need to do.
02:17:10.080 | But what I observe myself as being
02:17:13.640 | is that sometimes I'm a parent
02:17:16.280 | and then I have identification and a job as a parent.
02:17:19.880 | And sometimes I am an agent of consciousness on Earth.
02:17:22.840 | And then from this perspective,
02:17:24.440 | there's other stuff that is important.
02:17:26.520 | So this is my main issue with Eliezer's perspective,
02:17:30.560 | that he's basically marrying himself
02:17:32.000 | to a very narrow human aesthetic.
02:17:34.400 | And that narrow human aesthetic is a temporary thing.
02:17:36.680 | Humanity is a temporary species,
02:17:38.360 | like most of the species on this planet
02:17:40.400 | are only around for a while
02:17:42.200 | and then they get replaced by other species.
02:17:44.400 | In a similar way as our own physical organism
02:17:47.560 | is around here for a while
02:17:49.960 | and then gets replaced by a next generation of human beings
02:17:53.240 | that are adapted to changing life circumstances
02:17:55.800 | and evolve via mutation and selection.
02:17:58.520 | And it's only when we have AI
02:18:00.280 | and become completely software
02:18:01.920 | that we become infinitely adaptable.
02:18:04.720 | And we don't have this generational
02:18:06.480 | and species change anymore.
02:18:09.080 | So if you take this larger perspective
02:18:11.680 | and you realize it's really not about us,
02:18:13.880 | it's not about Eliezer or humanity,
02:18:16.400 | but it's about life on Earth
02:18:17.920 | or it's about defeating entropy for as long as we can
02:18:22.920 | while being as interesting as we can,
02:18:26.640 | then the perspective changes dramatically
02:18:29.960 | and preventing AI from this perspective
02:18:33.880 | looks like a very big sin.
02:18:35.600 | - Mm-hmm.
02:18:36.440 | - But when we look at the set of trajectories
02:18:42.600 | that such an AI would take that supersedes humans,
02:18:48.120 | I think Eliezer is worried about
02:18:50.160 | ones that not just kill all humans
02:18:51.960 | but also have some kind of
02:18:53.680 | maybe objectively undesirable consequence
02:18:58.400 | for life on Earth.
02:18:59.400 | Like how many trajectories when you look
02:19:05.080 | at the big picture of life on Earth
02:19:06.680 | would you be happy with, and how many would worry you,
02:19:09.880 | with AGI, whether it kills humans or not?
02:19:13.400 | - There is no single answer to this.
02:19:14.960 | It's really a question that depends on the perspective
02:19:17.680 | that I'm taking at a given moment.
02:19:19.480 | And so there are perspectives that are
02:19:22.600 | determining most of my life as a human being.
02:19:26.960 | And there are other perspectives where I zoom out further
02:19:30.280 | and imagine that when the great oxygenation event happened
02:19:35.000 | that is photosynthesis was invented
02:19:37.160 | and plants emerged and displaced a lot of the fungi
02:19:40.360 | and algae in favor of plant life
02:19:42.440 | and then later made animals possible.
02:19:44.880 | Imagine that the fungi would have gotten together
02:19:46.840 | and said, oh my God, this photosynthesis stuff
02:19:48.800 | is really, really bad.
02:19:50.120 | It's going to possibly displace and kill a lot of fungi.
02:19:53.080 | We should slow it down and regulate it
02:19:55.040 | and make sure that it doesn't happen.
02:19:56.880 | This doesn't look good to me.
02:20:01.600 | - That's a perspective. That said, you tweeted about a cliff.
02:20:05.200 | Beautifully written.
02:20:07.320 | As a sentient species, humanity is a beautiful child,
02:20:11.000 | joyful, explorative, wild, sad, and desperate.
02:20:13.720 | But humanity has no concept of submitting to reason
02:20:16.720 | and duty to life and future survival.
02:20:19.200 | We will run until we step past the cliff.
02:20:22.080 | So first of all, do you think that's true?
02:20:26.740 | - Yeah, I think that's pretty much the story
02:20:28.400 | of the Club of Rome, the limits to growth.
02:20:31.340 | And the cliff that we are stepping over
02:20:34.240 | is, at least in part, the delayed feedback.
02:20:37.640 | Basically we do things that have consequences
02:20:40.840 | that can be felt generations later.
02:20:44.320 | And the severity increases
02:20:46.480 | even after we stop doing the thing.
02:20:49.040 | So I suspect that for the climate,
02:20:51.560 | that the original predictions
02:20:53.600 | that climate scientists made were correct.
02:20:57.680 | So when they said that the tipping points
02:20:59.600 | were in the late '80s,
02:21:01.460 | they were probably in the late '80s.
02:21:03.340 | And if we stopped emissions right now,
02:21:06.300 | we would not turn it back.
02:21:07.660 | Maybe there are ways for carbon capture,
02:21:09.980 | but so far there is no sustainable carbon capture technology
02:21:13.860 | that we can deploy.
02:21:15.340 | Maybe there's a way to put aerosols
02:21:17.460 | into the atmosphere to cool it down.
02:21:19.820 | Possibilities, right?
02:21:21.120 | But right now, per default,
02:21:22.680 | it seems that we will step into a situation
02:21:28.020 | where we feel that we've run too far.
02:21:30.000 | And going back is not something
02:21:32.000 | that we can do smoothly and gradually,
02:21:33.680 | but it's going to lead to a catastrophic event.
02:21:37.040 | - Catastrophic event of what kind?
02:21:40.960 | So can you still make the case
02:21:42.680 | that we will continue dancing along
02:21:45.960 | and always stop just short of the edge of the cliff?
02:21:49.320 | - I think it's possible,
02:21:50.400 | but it doesn't seem to be likely.
02:21:52.240 | So I think the model that is becoming apparent
02:21:56.120 | in the simulations that we're making
02:21:57.620 | of climate, pollution, economies and so on
02:22:00.560 | is that many effects are only visible
02:22:03.480 | with a significant delay.
02:22:05.440 | And in that time, the system is moving much more
02:22:09.320 | out of the equilibrium state
02:22:10.900 | or out of the state where homeostasis is still possible,
02:22:13.700 | and instead moves into a different state,
02:22:15.680 | one that is going to harbor fewer people.
02:22:18.720 | And that is basically the concern there.
02:22:22.000 | And again, it's a possibility.
02:22:24.180 | And it's a possibility that is larger
02:22:27.820 | than the possibility that it's not happening,
02:22:29.480 | that we will be safe,
02:22:30.500 | that we will be able to dance back all the time.
02:22:33.420 | - So the climate is one thing,
02:22:34.880 | but there's a lot of other threats
02:22:36.860 | that might have a faster feedback mechanism, less delay.
02:22:39.660 | - There is also this thing
02:22:40.860 | that AI is probably going to happen,
02:22:44.020 | and it's going to make everything uncertain again,
02:22:47.220 | because it is going to affect so many variables
02:22:50.720 | that it's very hard for us
02:22:52.060 | to make a projection into the future anymore.
02:22:55.080 | And maybe that's a good thing.
02:22:56.260 | It does not give us the freedom, I think,
02:23:00.440 | to say now we don't need to care about anything anymore,
02:23:02.900 | because AI will either kill us or save us.
02:23:06.060 | But I suspect that if humanity continues,
02:23:09.380 | it will be due to AI.
02:23:10.620 | - What's the timeline for things
02:23:13.720 | to get real weird with AI?
02:23:16.020 | And it can get weird in interesting ways
02:23:17.560 | before you get to AGI.
02:23:18.960 | What about AI girlfriends and boyfriends?
02:23:21.680 | Fundamentally transforming human relationships.
02:23:24.160 | - I think human relationships
02:23:26.200 | are already fundamentally transformed,
02:23:27.800 | and it's already very weird.
02:23:29.520 | - By which technology?
02:23:31.480 | - For instance, social media.
02:23:32.980 | - Yeah.
02:23:35.100 | Is it, though?
02:23:36.020 | Isn't the fundamentals of the core group of humans
02:23:38.640 | that affect your life still the same?
02:23:41.280 | Your loved ones, family?
02:23:43.560 | - No, I think that, for instance,
02:23:45.160 | many people live in intentional communities right now.
02:23:47.880 | They're moving around until they find people
02:23:50.140 | that they can relate to and they become their family.
02:23:52.800 | And often that doesn't work,
02:23:54.360 | because it turns out that instead of having grown networks
02:23:57.880 | where you get around with the people
02:23:59.360 | that you grew up with,
02:24:01.720 | you have more transactional relationships.
02:24:04.040 | You shop around, you have markets
02:24:06.080 | for attention and pleasure and relationships.
02:24:09.400 | - That kills the magic somehow.
02:24:11.000 | Why is that?
02:24:12.160 | Why is the transactional search
02:24:14.600 | for optimizing allocation of attention
02:24:18.960 | somehow misses the romantic magic
02:24:20.920 | of what human relations are?
02:24:21.760 | - It's also a question how magical was it before?
02:24:24.280 | Was it just that you could rely on instincts,
02:24:26.500 | that you used your intuitions
02:24:28.040 | and didn't need to rationally reflect?
02:24:30.820 | But once you understand it's no longer magical
02:24:33.480 | because you actually understand
02:24:35.280 | why you were attracted to this person at this age
02:24:37.780 | and not to that person at this age
02:24:39.300 | and what the actual considerations were
02:24:41.340 | that went on in your mind
02:24:42.680 | and what the calculations were,
02:24:44.340 | what's the likelihood that you're going to have
02:24:46.100 | a sustainable relationship with this person,
02:24:48.260 | that this person is not going to leave you
02:24:49.780 | for somebody else,
02:24:51.060 | how are your life trajectories going to evolve and so on.
02:24:54.160 | And when you're young,
02:24:55.260 | you're unable to explicate all this
02:24:57.420 | and you have to rely on intuitions and instincts
02:25:00.020 | that in part you were born with
02:25:01.760 | and also in the wisdom of your environment
02:25:03.700 | that is going to give you some kind of reflection
02:25:06.740 | on your choices.
02:25:07.920 | And many of these things are disappearing now
02:25:09.900 | because we feel that our parents
02:25:12.500 | might have no idea about how we are living
02:25:14.380 | and the environments that we grew up in,
02:25:16.280 | the cultures that we grew up in,
02:25:17.540 | the milieus that our parents existed in
02:25:20.800 | might have no ability to teach us
02:25:22.780 | how to deal with this new world.
02:25:24.900 | And for many people that's actually true,
02:25:27.240 | but it doesn't mean that within one generation
02:25:29.680 | we can build something that is more magical
02:25:31.280 | and more sustainable and more beautiful.
02:25:33.300 | Instead we often end up with an attempt
02:25:35.740 | to produce something that looks beautiful.
02:25:39.160 | Like I was very weirded out by the aesthetics
02:25:42.240 | of the Vision Pro headset by Apple.
02:25:46.560 | And not so much because I don't like the technology,
02:25:48.720 | I'm very curious about what it's going to be like
02:25:51.040 | and don't have an opinion yet.
02:25:53.400 | But the aesthetics of the presentation and so on,
02:25:57.160 | they're so uncanny valley-esque to me.
02:25:59.860 | The characters being extremely plastic,
02:26:03.860 | living in some hypothetical mid-century furniture museum.
02:26:08.860 | - Yeah.
02:26:09.700 | This is the proliferation of marketing teams.
02:26:16.980 | - Yes, but it was a CGI-generated world.
02:26:19.540 | And it was a CGI-generated world that doesn't exist.
02:26:22.080 | And when I complained about this,
02:26:24.780 | some friends came back to me and said,
02:26:25.940 | "But these are startup founders.
02:26:27.500 | "This is what they live like in Silicon Valley."
02:26:30.300 | And I tried to tell them,
02:26:31.760 | "No, I know lots of people in Silicon Valley.
02:26:33.900 | "This is not what people are like.
02:26:35.360 | "They're still people, they're still human beings."
02:26:38.740 | (sighs)
02:26:40.180 | - So the grounding in physical reality
02:26:43.620 | somehow is important too.
02:26:46.420 | - And culture.
02:26:47.260 | And so basically what's absent in this thing is culture.
02:26:49.740 | There is a simulation of culture,
02:26:51.420 | an attempt to replace culture by catalog,
02:26:54.460 | by some kind of aesthetic optimization
02:26:58.220 | that is not the result of having a sustainable life,
02:27:01.060 | sustainable human relationships,
02:27:02.780 | with houses that work for you,
02:27:04.460 | and a mode of living that works for you
02:27:07.900 | in which this product, these glasses fit in naturally.
02:27:11.660 | And I guess that's also why so many people
02:27:13.940 | are weirded out about the product,
02:27:15.180 | because they don't know,
02:27:16.180 | how is this actually going to fit into my life
02:27:18.300 | and into my human relationships?
02:27:19.900 | Because the way in which it was presented in these videos
02:27:23.380 | didn't seem to be credible.
02:27:24.740 | - Do you think AI, when it's deployed by companies
02:27:30.040 | like Microsoft and Google and Meta,
02:27:32.420 | will have the same issue of being weirdly corporate?
02:27:36.740 | Like there'd be some uncanny valley,
02:27:39.900 | some weirdness to the whole presentation.
02:27:42.260 | So this is, I've gotten a chance to talk to George Hotz.
02:27:44.540 | He believes everything should be open source
02:27:46.260 | and decentralized, and there, then,
02:27:49.060 | we shall have the AI of the people.
02:27:51.980 | And it'll maintain a grounding to the magic
02:27:55.260 | that's humanity, that's the human condition,
02:27:59.940 | that corporations will destroy the magic.
02:28:02.600 | - I believe that if we make everything open source
02:28:06.900 | and make this mandatory, we are going to lose
02:28:09.460 | a lot of beautiful art and a lot of beautiful designs.
02:28:14.340 | There is a reason why Linux desktop is still ugly.
02:28:17.920 | Right? - Strong words.
02:28:20.180 | - And it's difficult
02:28:21.020 | to create coherence in open source designs so far,
02:28:25.920 | when the designs have to get very large.
02:28:28.120 | And it's easier to make this happening
02:28:30.860 | in a company with centralized organization.
02:28:34.060 | And from my own perspective, what we should ensure
02:28:37.420 | is that open source never dies,
02:28:39.780 | that it can always compete and has a place
02:28:43.140 | with the other forms of organization,
02:28:45.180 | because I think it is absolutely vital
02:28:47.060 | that open source exists and that we have systems
02:28:49.700 | that people have under control outside of the corporation,
02:28:53.780 | and that is also producing viable competition
02:28:56.580 | to the corporations.
02:28:58.480 | - So the corporations, the centralized control,
02:29:01.320 | the dictatorships of corporations can create beauty.
02:29:05.760 | Centralized design is a source of a lot of beauty.
02:29:10.160 | And then I guess open source is a source of freedom,
02:29:14.760 | a hedge against the corrupting nature of power
02:29:18.600 | that comes with centralization.
02:29:20.600 | - I grew up in socialism and I learned
02:29:23.760 | that corporations are totally evil
02:29:25.480 | and I found this very, very convincing.
02:29:27.420 | And then you look at corporations like Enron
02:29:29.500 | and Halliburton maybe and realize, yeah, they are evil.
02:29:33.300 | But you also notice that many other corporations
02:29:35.200 | are not evil, they're surprisingly benevolent.
02:29:38.540 | Why are they so benevolent?
02:29:39.800 | Is this because everybody is fighting them all the time?
02:29:43.180 | I don't think that's the only explanation.
02:29:44.900 | It's because they're actually animals
02:29:46.580 | that live in a large ecosystem
02:29:48.500 | and that are still largely controlled by people
02:29:50.540 | that want that ecosystem to flourish
02:29:52.340 | and be viable for people.
02:29:54.580 | So I think that Pat Gelsinger is completely sincere
02:29:58.160 | when he leads Intel to be a tool
02:30:00.920 | that supplies the free world with semiconductors.
02:30:04.500 | And it's not necessary that all the semiconductors
02:30:07.180 | are coming from Intel.
02:30:08.540 | Intel needs to be there to make sure
02:30:10.540 | that we always have them.
02:30:11.740 | So there can be many ways in which we can import
02:30:15.140 | and trade semiconductors from other companies and places.
02:30:18.020 | We just need to make sure that nobody can cut us off from it
02:30:20.760 | because that would be a disaster
02:30:22.480 | for this kind of society and world.
02:30:24.620 | So there are many things that need to be done
02:30:27.820 | to make our style of life possible.
02:30:30.500 | And with this, I don't mean just capitalism,
02:30:34.460 | environmental destruction, consumerism,
02:30:36.180 | and creature comforts.
02:30:37.820 | I mean an idea of life in which we are determined
02:30:42.140 | not by some kind of king or dictator,
02:30:44.540 | but in which individuals can determine themselves
02:30:47.280 | to the largest possible degree.
02:30:49.100 | And to me, this is something that this Western world
02:30:51.460 | is still trying to embody.
02:30:53.340 | And it's a very valuable idea
02:30:54.820 | that we shouldn't give up too early.
02:30:57.140 | And from this perspective,
02:30:59.380 | the US is a system of interleaving clubs.
02:31:02.920 | And an entrepreneur is a special club founder.
02:31:05.380 | It's somebody who makes a club that is producing things
02:31:09.420 | that are economically viable.
02:31:11.300 | And to do this, it requires a lot of people
02:31:13.220 | who are dedicating a significant part of their life
02:31:16.260 | for working for this particular kind of club.
02:31:19.020 | And the entrepreneur is picking the initial set of rules
02:31:21.260 | and the mission and vision and aesthetics for the club
02:31:23.740 | and makes sure that it works.
02:31:25.580 | But the people that are in there need to be protected.
02:31:28.420 | If they sacrifice part of their life,
02:31:30.540 | there need to be rules that tell
02:31:32.140 | how they're being taken care of
02:31:34.380 | even after they leave the club and so on.
02:31:36.060 | So there's a large body of rules
02:31:38.600 | that have been created by our rule-giving clubs
02:31:41.780 | and that are enforced by our enforcement clubs and so on.
02:31:45.180 | And some of these clubs have to be monopolies
02:31:47.100 | for game-theoretic reasons,
02:31:48.380 | which also makes them more open to corruption
02:31:50.860 | and harder to update.
02:31:52.900 | And this is an ongoing discussion
02:31:54.380 | and process that takes place.
02:31:56.140 | But the beauty of this idea
02:31:57.460 | that there is no centralized king
02:31:59.660 | that is extracting from the peasants
02:32:02.540 | and breeding the peasants into serving the king
02:32:06.220 | and fulfilling all the roles like ants in an anthill,
02:32:09.460 | but that there is a freedom of association
02:32:12.300 | and corporations are one of them,
02:32:14.220 | is something that took me some time to realize.
02:32:17.140 | So I do think that corporations are dangerous.
02:32:20.580 | They need to be protections against overreach
02:32:23.460 | of corporations that can do regulatory capture
02:32:27.420 | and prevent open source from competing with corporations
02:32:30.940 | by imposing rules that make it impossible
02:32:33.460 | for a small group of kids to come together
02:32:36.980 | to build their own language model
02:32:40.980 | because OpenAI has convinced the US
02:32:40.980 | that you need to have some kind of FDA process
02:32:45.820 | that you need to go through that costs many millions of dollars
02:32:45.820 | before you are able to train a language model.
02:32:48.060 | And so this is important to make sure
02:32:50.500 | that this doesn't happen.
02:32:51.460 | So I think that OpenAI and Google are good things.
02:32:54.740 | If these good things are kept in check in such a way
02:32:58.300 | that all the other clubs can still be founded
02:33:00.460 | and all the other forms of clubs that are desirable
02:33:02.820 | can still coexist with them.
02:33:04.380 | - So what do you think about Meta, in contrast to that,
02:33:08.460 | open sourcing most of its language models
02:33:12.300 | and most of the AI models it's working on,
02:33:14.540 | and actually suggesting that they will continue
02:33:16.300 | to do so in the future for future versions of Llama,
02:33:19.900 | for example, their large language model?
02:33:22.020 | Is that exciting to you?
02:33:25.780 | Is that concerning?
02:33:26.980 | - I don't find it very concerning,
02:33:29.540 | but it's also because I think that the language models
02:33:32.100 | are not very dangerous yet.
02:33:34.860 | - Yet?
02:33:36.820 | - Yes.
02:33:37.660 | So as I said, I have no proof that there is the boundary
02:33:41.260 | between the language models and AI, AGI.
02:33:44.740 | It's possible that somebody builds a version of baby AGI,
02:33:49.020 | I think, with algorithmic improvements
02:33:52.580 | that scale these systems up in ways
02:33:54.940 | that otherwise wouldn't have happened
02:33:56.340 | without these language model components.
02:33:58.460 | So it's not really clear for me what the end game is there
02:34:02.380 | and if these models can brute-force their way into AGI.
02:34:06.380 | And there's also a possibility that the AGIs
02:34:10.540 | that we are building with these language models
02:34:13.700 | are not taking responsibility for what they are
02:34:15.820 | because they don't understand the greater game.
02:34:18.740 | And so to me, it would be interesting
02:34:20.620 | to try to understand how to build systems
02:34:24.020 | that understand what the greater games are,
02:34:26.620 | what are the longest games that we can play on this planet.
02:34:29.620 | - Games broadly, deeply defined,
02:34:33.740 | the way you did with games.
02:34:35.580 | - In the game theoretic sense.
02:34:36.740 | So when we are interacting with each other,
02:34:38.460 | in some sense, we are playing games.
02:34:39.940 | We are making lots and lots of interactions.
02:34:42.060 | This doesn't mean that these interactions
02:34:43.500 | have all to be transactional.
02:34:45.340 | Every one of us is playing some kind of game
02:34:48.060 | by virtue of identifying the particular kinds of goals
02:34:51.940 | that we have or aesthetics from which we derive the goals.
02:34:55.460 | So when you say, I'm Lex Friedman,
02:34:58.180 | I'm doing a set of podcasts,
02:35:00.300 | then you feel that it's part of something larger
02:35:02.700 | that you want to build.
02:35:03.540 | Maybe you want to inspire people.
02:35:04.940 | Maybe you want them to see more possibilities
02:35:07.620 | and get them together over shared ideas.
02:35:10.300 | Maybe your game is that you want to become super rich
02:35:12.540 | and famous by being the best podcaster on earth.
02:35:15.540 | Maybe you have other games.
02:35:16.460 | Maybe it switches from time to time, right?
02:35:18.860 | But there is a certain perspective
02:35:20.300 | where you might be thinking,
02:35:21.180 | what is the longest possible game that you could be playing?
02:35:24.100 | A short game is, for instance,
02:35:25.820 | cancer is playing a shorter game than your organism.
02:35:27.900 | Cancer is an organism playing a shorter game
02:35:30.900 | than the regular organism.
02:35:32.660 | And because the cancer cannot procreate beyond the organism,
02:35:36.100 | except for some infectious cancers,
02:35:39.020 | like the ones that eradicated the Tasmanian devils,
02:35:41.580 | you typically end up with a situation
02:35:45.860 | where the organism dies together with the cancer,
02:35:47.900 | because the cancer has destroyed the larger system
02:35:50.620 | due to playing a shorter game.
02:35:52.540 | And so ideally you want to, I think,
02:35:54.900 | build agents that play the longest possible games.
02:35:58.620 | And the longest possible game is to keep entropy at bay
02:36:01.900 | as long as possible while doing interesting stuff.
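A toy illustration of the short-game versus long-game point, under assumed numbers: a single agent harvesting a shared, regrowing resource. The logistic growth rate and the two harvest rates are arbitrary illustrative choices, not figures from the conversation; the point is only that the greedy, short-horizon strategy collapses the larger system it depends on, the way the cancer example does.

```python
# Toy model: a shared resource with logistic regrowth, harvested by one agent.
# A short-game (greedy) harvest rate kills the resource and the agent with it;
# a long-game (moderate) rate keeps the system alive for the whole run.

def run(harvest_rate: float, steps: int = 200, growth: float = 0.25) -> int:
    """Return how many steps the system survives under a given harvest rate."""
    resource = 1.0                                          # normalized system size
    for t in range(steps):
        resource += growth * resource * (1.0 - resource)    # logistic regrowth
        resource -= harvest_rate * resource                 # the agent's extraction
        if resource < 0.01:                                 # system (and agent) dies
            return t
    return steps


if __name__ == "__main__":
    print("short game (greedy):  survives", run(harvest_rate=0.5), "steps")
    print("long game (moderate): survives", run(harvest_rate=0.1), "steps")
```

With these numbers the greedy harvester exhausts the resource within roughly ten steps, while the moderate harvester reaches a stable equilibrium and survives the full run.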
02:36:05.100 | - But the longest, yes, that part,
02:36:07.860 | the longest possible game while doing interesting stuff,
02:36:10.940 | and while maintaining at least
02:36:12.540 | the same amount of interesting.
02:36:14.020 | - Yes. - So complexity,
02:36:15.100 | it's a problem getting-- - So currently I'm
02:36:17.380 | pretty much identified as a conscious being.
02:36:19.500 | It's the minimal identification
02:36:23.260 | that I managed to get together,
02:36:24.820 | because if I turn this off, I fall asleep.
02:36:27.020 | And when I'm asleep, I'm a vegetable.
02:36:29.620 | I'm no longer here as an agent.
02:36:31.980 | So my agency is basically predicated on being conscious.
02:36:35.540 | And what I care about is other conscious agents.
02:36:39.580 | They're the only moral agents for me.
02:36:43.020 | So for an AI to treat me as a moral agent
02:36:48.020 | that it is interested in coexisting with,
02:36:51.420 | cooperating with, and mutually supporting,
02:36:53.660 | it is, I think, necessary that the AI thinks
02:36:57.420 | that consciousness is a viable mode of existence
02:37:00.260 | and important.
02:37:01.340 | So I think it would be very important to build conscious AI
02:37:04.820 | and do this as the primary goal.
02:37:07.500 | So not just say we want to build a useful tool
02:37:10.980 | that we can use for all sorts of things.
02:37:12.980 | And then you have to make sure that the impact
02:37:15.660 | on the labor market is something that is not too disruptive
02:37:18.380 | and manageable, and the impact on the copyright holder
02:37:21.100 | is manageable and not too disruptive and so on.
02:37:24.020 | I don't think that's the most important game to be played.
02:37:27.060 | I think that we will see extremely large disruptions
02:37:30.940 | of the status quo that are quite unpredictable
02:37:34.020 | at this point.
02:37:35.260 | And I just personally want to make sure
02:37:38.460 | that some of the stuff on the other side
02:37:40.260 | is interesting and conscious.
02:37:41.980 | - How do we ride, as individuals and as a society,
02:37:45.980 | this disruptive wave that changes the nature of the game?
02:37:49.820 | - I absolutely don't know.
02:37:50.660 | So everybody is going to do their best, as always.
02:37:53.460 | - Do we build a bunker in the woods?
02:37:55.140 | Do we meditate more?
02:37:56.560 | Drugs, mushrooms, psychedelics?
02:38:00.380 | I mean, lots of sex.
02:38:03.300 | What are we talking about here?
02:38:04.660 | Do you play Diablo IV?
02:38:06.300 | I'm hoping that will help me escape for a brief moment.
02:38:10.780 | What, play video games?
02:38:12.340 | What?
02:38:13.660 | Do you have ideas?
02:38:14.580 | - I really like playing "Disco Elysium."
02:38:18.580 | It was one of the most beautiful computer games
02:38:21.980 | I played in recent years.
02:38:24.380 | And it's a noir novel that is a philosophical perspective
02:38:28.940 | on Western society from the perspective of an Estonian.
02:38:32.140 | And he first of all wrote a book about this world
02:38:36.660 | that is a parallel universe that is quite poetic
02:38:41.380 | and fascinating and is condensing his perspective
02:38:45.460 | on our societies.
02:38:46.620 | It was very, very nice.
02:38:48.460 | He spent a lot of time writing it.
02:38:50.100 | He had, I think, sold a couple thousand books
02:38:52.660 | and as a result became an alcoholic.
02:38:54.900 | And then he had the idea, or one of his friends
02:38:57.540 | had the idea of turning this into an RPG.
02:39:00.340 | And it's mind-blowing.
02:39:02.940 | The illustrator spent more than a year
02:39:05.740 | just on making the art for the scenes in between.
02:39:10.740 | - So aesthetically, it captures you, pulls you in.
02:39:15.100 | - But it's a philosophical work of art.
02:39:16.820 | It's a reflection of society.
02:39:18.100 | It's fascinating to spend time in this world.
02:39:20.540 | And so for me, it was using a medium in a new way
02:39:24.700 | and telling a story that left me enriched.
02:39:28.660 | When I tried Diablo, I didn't feel enriched playing it.
02:39:33.660 | I felt that the time playing it was not unpleasant,
02:39:37.220 | but there's also more pleasant stuff
02:39:38.620 | that I can do in that time.
02:39:40.020 | So ultimately I feel that I'm being gamed.
02:39:42.500 | I'm not gaming.
02:39:43.500 | - Oh, the addiction thing.
02:39:45.020 | - Yes, I basically feel that there is
02:39:46.580 | a very transparent economy that's going on.
02:39:49.020 | The story of Diablo was branded.
02:39:51.300 | So it's not really interesting to me.
02:39:53.980 | - My heart is slowly breaking
02:39:56.220 | by the deep truth you're conveying to me.
02:39:58.940 | Why can't you just allow me to enjoy my personal addiction?
02:40:02.340 | - Go ahead, by all means, go nuts.
02:40:04.940 | I have no objection here.
02:40:06.460 | I'm just trying to describe what's happening.
02:40:10.660 | And it's not that I don't do things that I later say,
02:40:14.420 | oh, I wish I would have done something different.
02:40:16.820 | I also know that when we die,
02:40:18.780 | the greatest regret that people typically have
02:40:20.620 | on their deathbed is to say,
02:40:21.940 | oh, I wish I had spent more time on Twitter.
02:40:25.260 | No, I don't think that's the case.
02:40:26.740 | I think I should probably have spent less time on Twitter.
02:40:30.060 | But I found it so useful for myself and also so addictive
02:40:34.100 | that I felt I need to make the best of it
02:40:35.980 | and turn it into an art form and thought form.
02:40:38.540 | And it did help me to develop something.
02:40:41.300 | But I do wonder what other things I could have done
02:40:44.020 | in the meantime. It's just not the universe
02:40:45.660 | that we are in anymore.
02:40:46.860 | Most people don't read books anymore.
02:40:48.700 | - What do you think that means
02:40:53.420 | that we don't read books anymore?
02:40:55.180 | What do you think that means
02:40:56.380 | about the collective intelligence of our species?
02:40:58.620 | Is it possible it's still progressing and growing?
02:41:01.300 | - Well, it clearly is.
02:41:02.540 | There is stuff happening on Twitter
02:41:03.980 | that was impossible with books.
02:41:05.940 | And I really regret that Twitter has not taken the turn
02:41:09.900 | that I was hoping for.
02:41:10.900 | I thought Elon is global brain-pilled
02:41:13.140 | and understands that this thing needs to self-organize
02:41:16.140 | and he needs to develop tools to allow the proliferation
02:41:20.300 | of the self-organization so Twitter can become sentient.
02:41:23.580 | And maybe this was a pipe dream from the beginning,
02:41:26.860 | but I felt that the enormous pressure that he was under
02:41:30.820 | made it impossible for him to work
02:41:32.620 | on any kind of content goals.
02:41:34.420 | And also many of the decisions that he made
02:41:37.700 | under this pressure seem to be not very wise.
02:41:40.700 | I don't think that as a CEO of a social media company,
02:41:43.900 | you should have opinions in the culture or in public.
02:41:46.900 | I think that's very short-sighted.
02:41:48.860 | And I also suspect that it's not a good idea
02:41:52.900 | to block Paul Graham of all people
02:41:57.900 | over posting a Mastodon link.
02:42:02.620 | And I think Paul did this intentionally
02:42:04.780 | because he wanted to show Elon Musk
02:42:07.460 | that blocking people for posting a link
02:42:09.340 | is completely counter to any idea of free speech
02:42:12.620 | that he intended to bring to Twitter.
02:42:14.460 | And basically seeing that Elon was less principled
02:42:18.700 | in his thinking there and is much more experimental.
02:42:22.220 | And many of the things that he is trying,
02:42:24.940 | they pan out very differently in a digital society
02:42:29.540 | than they pan out in a car company
02:42:31.620 | because the effect is very different
02:42:33.100 | because everything that you do in a digital society
02:42:35.260 | is going to have real-world cultural effects.
02:42:38.220 | And so basically I find it quite regrettable
02:42:41.220 | that this guy is able to become de facto the Pope,
02:42:45.660 | since Twitter has more active members
02:42:47.700 | than the Catholic Church,
02:42:49.340 | and he doesn't get it.
02:42:51.620 | The power and responsibility that he has
02:42:54.100 | and the ability to create something
02:42:56.540 | in a society that is lasting
02:42:58.300 | and that is producing a digital agora in a way
02:43:01.180 | that has never existed before,
02:43:02.620 | where we built a social network on top of a social network,
02:43:05.460 | an actual society on top of the algorithms.
02:43:09.980 | So this is something that is hope still in the future
02:43:12.420 | and still in the cards,
02:43:13.900 | but it's something that exists in small parts.
02:43:17.300 | I find that the corner of Twitter that I'm in
02:43:19.140 | is extremely pleasant.
02:43:20.100 | It's just when I take a few steps outside of it,
02:43:22.460 | it is not very wholesome anymore.
02:43:24.020 | And the way in which people interact with strangers
02:43:26.340 | suggests that it's not a civilized society yet.
02:43:29.900 | - So as the number of people who follow you on Twitter
02:43:33.380 | expands, you feel the burden
02:43:36.420 | of the uglier sides of humanity.
02:43:40.060 | - Yes, but there's also a similar thing in the normal world.
02:43:44.380 | That is, if you become more influential,
02:43:46.700 | if you have more status,
02:43:47.580 | if you have more fame in the real world,
02:43:49.460 | you get lots of perks,
02:43:52.780 | but you also have way less freedom
02:43:55.700 | in the way in which you interact with people,
02:43:57.380 | especially with strangers,
02:43:58.940 | because a certain percentage of people,
02:44:02.980 | it's a small single-digit percentage,
02:44:05.540 | is nuts and dangerous.
02:44:07.340 | And the more of those are looking at you,
02:44:11.260 | the more of them might get ideas.
02:44:13.260 | - But what if the technology enables you
02:44:16.340 | to discover the majority of people,
02:44:18.740 | to discover and connect efficiently and regularly
02:44:21.900 | with the majority of people who are actually really good?
02:44:24.580 | I mean, one of my sort of concerns
02:44:28.220 | with a platform like Twitter is
02:44:30.500 | there's a lot of really smart people out there,
02:44:32.340 | a lot of smart people that disagree with me
02:44:34.140 | and with others between each other.
02:44:35.820 | And I love that if the technology
02:44:38.460 | would bring those to the top,
02:44:40.920 | the beautiful disagreements,
02:44:42.680 | like Intelligence Squared type of debates.
02:44:45.780 | There's a bunch of, I mean,
02:44:47.240 | one of my favorite things to listen to
02:44:48.540 | is arguments and arguments like high-effort arguments
02:44:51.780 | with the respect and love underneath it,
02:44:53.900 | but then it gets a little too heated.
02:44:55.660 | But that kind of too heated,
02:44:57.180 | which I've seen you participate in,
02:44:59.180 | and I love that, with Lee Cronin,
02:45:01.780 | with those kinds of folks.
02:45:03.500 | And you go pretty hard.
02:45:05.380 | Like you get frustrated, but it's all beautiful.
02:45:07.780 | - Obviously, I can do this because we know each other.
02:45:11.140 | - Yes.
02:45:11.980 | - And Lee has the rare gift
02:45:13.820 | of being willing to be wrong in public.
02:45:15.740 | - Yeah.
02:45:16.580 | - So basically he has thoughts that are as wrong
02:45:18.100 | as the random thoughts of an average,
02:45:20.060 | highly intelligent person,
02:45:21.580 | but he blurts them out
02:45:23.180 | while not being sure if they're right.
02:45:25.460 | And he enjoys doing that.
02:45:27.260 | And once you understand that this is his game,
02:45:30.020 | you don't get offended by him saying something
02:45:32.220 | that you think is so wrong.
02:45:33.900 | - But he's constantly passively communicating a respect
02:45:37.920 | for the people he's talking with.
02:45:39.660 | - Yeah.
02:45:40.500 | - And for just basic humanity and truth
02:45:42.260 | and all that kind of stuff.
02:45:43.100 | And there's a self-deprecating thing.
02:45:45.020 | There's a bunch of like social skills you acquire
02:45:48.300 | that allow you to be a great debater, a great argumenter,
02:45:53.180 | like be wrong in public and explore ideas together
02:45:56.180 | in public when you disagree.
02:45:57.580 | And I would love for Twitter to elevate those folks,
02:46:02.220 | elevate those kinds of conversations.
02:46:03.740 | - It already does in some sense,
02:46:05.780 | but also if it elevates them too much,
02:46:08.540 | then you get this phenomenon of clubhouse
02:46:11.420 | where you always get dragged on stage
02:46:14.580 | and I found this very stressful because it was too intense.
02:46:18.420 | I don't like to be dragged on stage all the time.
02:46:20.980 | I think once a week is enough.
02:46:22.620 | And also when I met Lee the first time,
02:46:26.100 | I found that a lot of people seem to be shocked
02:46:28.820 | by the fact that he was being very aggressive
02:46:32.460 | with respect to their results,
02:46:33.300 | that he didn't seem to show a lot of sensitivity
02:46:36.700 | in the way in which he was criticizing what they were doing
02:46:39.540 | and being dismissive of the work of others.
02:46:41.940 | And that was not, I think, in any way,
02:46:44.380 | a shortcoming of him because I noticed
02:46:46.260 | that he was much, much more dismissive
02:46:48.300 | with respect to his own work.
02:46:50.140 | It was his general stance.
02:46:51.540 | And I felt that this general stance
02:46:53.700 | is creating a lot of liability for him
02:46:55.740 | because really a lot of people take offense
02:46:58.340 | at him being not like a Dale Carnegie character
02:47:01.940 | who is always smooth and make sure that everybody likes him.
02:47:05.420 | So I really respect that he is willing to take that risk
02:47:08.940 | and to be wrong in public and to offend people.
02:47:12.660 | And he doesn't do this in any bad way.
02:47:14.460 | It's just most people feel,
02:47:15.860 | or not all people recognize this.
02:47:17.900 | And so I can be much more aggressive with him
02:47:21.500 | than I can be with many other people
02:47:23.060 | who don't play the same game
02:47:24.940 | because he understands the way and the spirit
02:47:26.940 | in which I respond to him.
02:47:28.860 | - I think that's a fun and that's a beautiful game.
02:47:30.820 | It's ultimately a productive one.
02:47:32.460 | Speaking of taking that risk, you tweeted,
02:47:37.100 | "When you have the choice between being a creator,
02:47:39.380 | "consumer, or redistributor,
02:47:41.980 | "always go for creation.
02:47:43.780 | "Not only does it lead to a more beautiful world,
02:47:46.820 | "but also to a much more satisfying life for yourself.
02:47:50.340 | "And don't get stuck preparing yourself for the journey.
02:47:53.060 | "The time is always now."
02:47:55.620 | So let me ask for advice.
02:47:57.820 | What advice would you give on how to become such a creator
02:48:03.380 | on Twitter and in your own life?
02:48:03.380 | - I was very lucky to be alive
02:48:06.620 | at the time of the collapse of Eastern Germany
02:48:09.780 | and the transition into Western Germany.
02:48:12.100 | And me and my friends and most of the people I knew
02:48:15.580 | were East Germans and we were very poor
02:48:18.180 | because we didn't have money.
02:48:20.260 | And all the capital was in Western Germany
02:48:22.300 | and they bought our factories and shut them down
02:48:25.020 | because they were mostly only interested in the market
02:48:27.740 | rather than creating new production capacity.
02:48:31.220 | And so cities were poor and in disrepair
02:48:34.500 | and we could not afford things.
02:48:36.540 | And I could not afford to go into a restaurant
02:48:40.540 | and order a meal there.
02:48:42.300 | I would have to cook at home.
02:48:44.260 | But I also thought,
02:48:46.100 | why not just have a restaurant with my friends?
02:48:48.460 | So we would open up a cafe with friends and a restaurant
02:48:51.620 | and we would cook for each other in these restaurants
02:48:54.180 | and also invite the general public and they could donate.
02:48:56.900 | And eventually this became so big
02:48:59.060 | that we could turn this into some incorporated form
02:49:03.340 | and it became a regular restaurant at some point.
02:49:05.740 | Or we did the same thing with a movie theater.
02:49:08.180 | We would not be able to afford to pay 12 marks
02:49:12.340 | to watch a movie,
02:49:13.700 | but why not just create our own movie theater
02:49:15.940 | and then invite people to pay
02:49:17.860 | and we would rent the movies
02:49:20.100 | in the way in which a movie theater does.
02:49:24.100 | But it would be a community movie theater
02:49:26.180 | in which everybody who wants to help can watch for free
02:49:29.060 | and builds this thing and renovates the building.
02:49:31.540 | And so we ended up creating lots and lots of infrastructure.
02:49:35.500 | And I think when you are young and you don't have money,
02:49:39.020 | move to a place where this is still happening.
02:49:40.940 | Move to one of those places that are undeveloped
02:49:43.380 | and where you get a critical mass of other people
02:49:45.300 | who are starting to build infrastructure to live in.
02:49:48.060 | And that's super satisfying
02:49:49.700 | because you're not just creating infrastructure,
02:49:51.420 | but you're creating a small society that is building culture
02:49:55.060 | and ways to interact with each other.
02:49:57.220 | And that's much, much more satisfying
02:49:59.220 | than going into some kind of chain
02:50:01.580 | and get your needs met by ordering food
02:50:06.260 | from this chain and so on.
02:50:07.460 | - So not just consuming culture, but creating culture.
02:50:09.620 | - Yes.
02:50:11.020 | And you don't always have that choice.
02:50:12.340 | That's why I prefaced it when you do have the choice
02:50:14.580 | and there are many roles that need to be played.
02:50:16.700 | We need people who take care of redistribution in society
02:50:19.540 | and so on.
02:50:20.460 | But when you have the choice to create something,
02:50:22.420 | always go for creation.
02:50:23.540 | It's so much more satisfying.
02:50:25.260 | And it also is, this is what life is about, I think.
02:50:28.500 | - Yeah.
02:50:30.100 | Speaking of which, you retweeted this meme
02:50:32.740 | of a life of a philosopher in a nutshell.
02:50:37.020 | It's birth and death and in between.
02:50:39.040 | It's a chubby guy and it says, "Why though?"
02:50:42.900 | What do you think is the answer to that?
02:50:49.260 | - Well, the answer is that everything that can exist
02:50:53.300 | might exist.
02:50:54.660 | And in many ways, you take an ecological perspective
02:50:58.420 | the same way as when you look at human opinions
02:51:00.700 | and cultures.
02:51:01.700 | It's not that there is right and wrong opinions
02:51:04.900 | when you look at this from this ecological perspective.
02:51:07.700 | But every opinion that fits between two human ears
02:51:10.300 | might be between two human ears.
02:51:11.780 | And so when I see a strange opinion on social media,
02:51:16.340 | it's not that I feel that I have a need to get upset.
02:51:21.880 | It's often more that I think, "Oh, there you are."
02:51:21.880 | And when an opinion is incentivized,
02:51:24.620 | then it's going to be abundant.
02:51:26.060 | And when you take this ecological perspective
02:51:28.900 | also on yourself and you realize you're just one
02:51:30.820 | of these mushrooms that are popping up
02:51:32.720 | and doing their thing, and you can,
02:51:34.620 | depending on where you chose to grow
02:51:36.840 | and where you happen to grow,
02:51:38.500 | you can flourish or not doing this or that strategy.
02:51:41.700 | And it's still all the same life at some level.
02:51:43.820 | It's all the same experience of being a conscious being
02:51:46.160 | in the world.
02:51:47.000 | And you do have some choice about who you want to be
02:51:50.380 | more than any other animal has.
02:51:53.060 | That to me is fascinating.
02:51:54.860 | And so I think that rather than asking yourself,
02:51:57.660 | "What is the one way to be?"
02:51:59.820 | Think about what are the possibilities that I have,
02:52:03.380 | what would be the most interesting way to be that I can be.
02:52:06.100 | - 'Cause everything is possible, so you get to explore.
02:52:08.540 | - Not everything is possible.
02:52:09.860 | Many things fail, most things fail.
02:52:12.560 | But often there are possibilities that we are not seeing,
02:52:16.020 | especially if we choose who we are.
02:52:18.740 | - To the degree we can choose.
02:52:22.920 | - Yeah.
02:52:23.760 | - Yosha, you're one of my favorite humans in this world,
02:52:30.880 | one of my favorite consciousnesses to merge with for a brief moment of time.
02:52:34.280 | It's always an honor.
02:52:35.760 | It always blows my mind.
02:52:37.560 | It will take me days, if not weeks, to recover.
02:52:40.720 | (laughing)
02:52:43.480 | And I already miss our chats.
02:52:46.300 | Thank you so much.
02:52:47.140 | Thank you so much for speaking with me so many times.
02:52:50.780 | Thank you so much for all the ideas
02:52:53.460 | you put out into the world.
02:52:55.060 | And I'm a huge fan of following you now
02:52:58.420 | in this interesting, weird time we're going through with AI.
02:53:02.160 | So thank you again for talking today.
02:53:04.800 | - Thank you, Lex, for this conversation.
02:53:06.400 | I enjoyed it very much.
02:53:08.280 | - Thanks for listening to this conversation with Yosha Bach.
02:53:10.840 | To support this podcast,
02:53:11.960 | please check out our sponsors in the description.
02:53:14.560 | And now let me leave you with some words
02:53:16.600 | from the psychologist Carl Jung.
02:53:19.960 | "One does not become enlightened
02:53:21.840 | by imagining figures of light,
02:53:24.120 | but by making the darkness conscious.
02:53:26.460 | The latter procedure, however, is disagreeable
02:53:30.680 | and therefore not popular."
02:53:33.040 | Thank you for listening.
02:53:34.280 | And I hope to see you next time.
02:53:36.200 | (upbeat music)