Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans | Lex Fridman Podcast #392
Chapters
0:00 Introduction
1:15 Stages of life
13:37 Identity
20:12 Enlightenment
26:43 Adaptive Resonance Theory
33:31 Panpsychism
43:31 How to think
51:25 Plants communication
69:20 Fame
94:57 Happiness
102:15 Artificial consciousness
114:23 Suffering
119:08 Eliezer Yudkowsky
126:44 e/acc (Effective Accelerationism)
132:21 Mind uploading
143:11 Vision Pro
147:25 Open source AI
160:17 Twitter
167:33 Advice for young people
170:29 Meaning of life
00:00:00.000 |
there is a certain perspective where you might be thinking 00:00:02.160 |
what is the longest possible game that you could be playing. 00:00:06.800 |
cancer is playing a shorter game than your organism. 00:00:08.920 |
Cancer is an organism playing a shorter game 00:00:13.640 |
and because the cancer cannot procreate beyond the organism, 00:00:20.040 |
like the ones that eradicated the Tasmanian devils, 00:00:26.880 |
where the organism dies together with the cancer, 00:00:28.920 |
because the cancer has destroyed the larger system 00:00:35.920 |
build agents that play the longest possible games. 00:00:39.600 |
And the longest possible game is to keep entropy at bay 00:00:42.920 |
as long as possible by doing interesting stuff. 00:00:45.720 |
- The following is a conversation with Joscha Bach, 00:01:00.040 |
And he's one of my favorite humans to talk to 00:01:18.920 |
Quote, "As we grow older, it becomes apparent 00:01:25.040 |
is not just gradually accumulating ideas about itself, 00:01:27.960 |
but that it progresses in somewhat distinct stages." 00:01:39.640 |
Stage three, social self, adolescence, domesticated adult. 00:01:43.760 |
Stage four is rational agency, self-direction. 00:01:47.200 |
Stage five is self-authoring, that's full adult. 00:01:51.760 |
You've achieved wisdom, but there's two more stages. 00:02:01.000 |
or the interesting parts of each of these stages? 00:02:03.440 |
And what's your sense why there are stages of this, 00:02:18.640 |
And he talks about the development of the self 00:02:25.480 |
by some kind of reverse engineering of the mind 00:02:39.200 |
I'm not even sure if it's a very good developmental model 00:02:42.000 |
because I saw my children not progressing exactly like that. 00:02:45.680 |
And I also suspect that you don't go through the stages 00:03:00.880 |
to look at what's present in the structure of a person 00:03:10.120 |
that allows you to talk about how minds work. 00:03:21.400 |
in the infant tasked with building a world model 00:03:26.720 |
But mostly it's building a game engine in the brain 00:03:29.040 |
that is tracking sensory data and uses it to explain it. 00:03:34.040 |
And in some sense, you could compare it to a game engine 00:03:46.280 |
Models that are mathematical, that use geometry 00:03:50.240 |
and that use manipulation of objects and so on 00:03:54.560 |
to create scenes in which we can find ourselves 00:04:02.640 |
that is more or less created after the world is finished, 00:04:15.880 |
And the outside world is not the world of quantum mechanics, 00:04:21.080 |
but it's the model that has been generated in our own mind. 00:04:24.680 |
And this is us and we experience ourself interacting 00:04:34.840 |
and they represent our interface with this outside world. 00:04:52.400 |
of satisfying the needs, avoiding the aversions, 00:04:56.160 |
following on our inner commitments and so on, 00:04:58.720 |
and also modeling ourselves and building the next stage. 00:05:02.400 |
So after we have this personal self in stage two online, 00:05:13.440 |
It's basically this thing that when you are playing 00:05:17.160 |
in a team, for instance, you don't notice yourself 00:05:19.360 |
just as a single node that is reaching out into the world, 00:05:25.800 |
and you see how this group is looking at this individual. 00:05:34.360 |
And in this state, people are forming their opinions 00:05:47.800 |
through the interaction of the individual nodes 00:05:51.880 |
- Yeah, it's basically the way in which people do it 00:06:05.920 |
and get more opinions through this interaction 00:06:20.000 |
And you have agency over your own beliefs in that stage. 00:06:24.960 |
the rules about determining what's true and false. 00:06:31.640 |
I mean, at some level, you're always thinking. 00:06:56.920 |
But thoughts are something that you always control. 00:07:05.360 |
because you'd lack the intuitive empathy with others. 00:07:11.160 |
you need to have a quite similar architecture. 00:07:15.800 |
then it's hard for them to resonate with other people 00:07:30.640 |
but it's a perception of what other people feel 00:07:36.800 |
while also not having a similar architecture, 00:07:39.280 |
cognitive architecture, as the others in the group? 00:07:46.840 |
You need to be able to embrace the architecture 00:07:52.960 |
And it's also this issue that if you are a nerd, 00:08:02.000 |
And as a result, they have difficulty understanding you 00:08:10.900 |
three is to figure out the API to the other humans 00:08:15.680 |
and you yourself publish public documentation 00:08:19.360 |
for the API that people can interact with for you? 00:08:38.680 |
But it was only when I entered a mathematics school 00:08:41.620 |
at ninth grade, lots of other nerds were present, 00:08:45.320 |
that I found people that I could deeply resonate with 00:08:49.140 |
and had the impression that, yes, I have friends now, 00:08:54.060 |
And before that, I felt extremely alone in the world. 00:08:56.540 |
There was basically nobody I could connect to. 00:08:59.180 |
And I remember there was one moment in all these years 00:09:09.840 |
kid from the Russian garrison stationed in Eastern Germany 00:09:14.500 |
And we played a game of chess against each other. 00:09:18.700 |
and we sat there for two hours playing this game of chess. 00:09:21.340 |
And I had the impression, this is a human being. 00:09:29.160 |
- I wonder if your life could have been different 00:09:38.140 |
Whether accepting that the interface is hard to figure out, 00:09:55.220 |
It was not so much the question, is it okay to be the way I 00:09:58.260 |
am, I couldn't do much about it, so I had to deal with it. 00:10:13.660 |
- So there's a visceral, undeniable feeling of being alone. 00:10:18.020 |
And I noticed the same thing when I came into the math school 00:10:21.080 |
that I think at least half, probably two thirds, 00:10:26.540 |
as children growing up, and in large part due to being alone 00:10:31.260 |
because they couldn't find anybody to relate to. 00:10:33.660 |
- Don't you think everybody's alone, deep down? 00:10:45.420 |
It took me some time to update and to get over the trauma 00:10:51.700 |
I had lots of friends and I had my place in the world, 00:10:54.700 |
and I no longer had doubts that I would never be alone again. 00:11:00.620 |
- Is there some aspect to which we're alone together? 00:11:03.140 |
You don't see a deep loneliness inside yourself still? 00:11:18.860 |
jump straight into stage four, bypassing stage three. 00:11:36.300 |
- And basically experience yourself as part of a group, 00:11:39.980 |
learn intuitive empathy and develop the sense, 00:11:42.260 |
this perceptual sense of feeling what other people feel. 00:11:46.260 |
And before that, I could only basically feel this 00:11:48.860 |
when I was deeply in love with somebody and we synced or-- 00:11:51.900 |
- So there's a lot of friction to feeling that way. 00:11:59.820 |
But this is something that basically later I felt 00:12:03.140 |
started to resolve itself for me, to a large degree. 00:12:07.620 |
- In many ways, growing up and paying attention. 00:12:18.540 |
in getting close to people, building connections, 00:12:28.580 |
- So really paying attention to the, what is it, 00:12:35.900 |
- Loving other people and being loved by other people 00:12:38.620 |
and building a space in which you can be safe 00:12:47.300 |
And over time, basically at some point you realize, 00:12:58.940 |
And normally my mind is racing very fast at a high frequency, 00:13:04.460 |
Sometimes it works better, sometimes it works less. 00:13:09.380 |
It's more, it's interesting to observe myself, 00:13:17.380 |
- Yeah, man, the mind is so beautiful in that way. 00:13:24.540 |
so easy to pay attention, pay attention to the world fully, 00:13:29.580 |
And sometimes the stress over silly things is overwhelming. 00:13:37.620 |
- At stage five you discover how identity is constructed. 00:13:41.420 |
- You realize that your values are not terminal, 00:13:43.460 |
but they are instrumental to achieving a world 00:13:46.500 |
that you like and aesthetics that you prefer. 00:13:51.580 |
the more you get agency over how your identity 00:14:00.060 |
And you should be able to have agency over that costume. 00:14:10.240 |
But being locked into this is a big limitation. 00:14:25.540 |
Before that, I did not really appreciate costumes 00:14:29.900 |
like wearing a suit if you are working in a bank 00:14:52.660 |
becomes self-expression and there is no boundary 00:14:59.460 |
to express to other people what you feel like this day 00:15:02.180 |
and what kind of interactions you want to have. 00:15:04.500 |
- Is the costume a kind of projection of who you are? 00:15:09.120 |
- That's very hard to say because the costume 00:15:12.460 |
also depends on what other people see in the costume. 00:15:18.580 |
And you have to create something if you want to 00:15:35.980 |
a variety of costumes, realize that you cannot 00:15:40.200 |
Basically everything that you wear and present to others 00:16:07.180 |
why other people have different identities from yours. 00:16:10.700 |
And it allows you to understand that the difference 00:16:13.560 |
between people who vote for different parties 00:16:17.820 |
and different value systems is often the accident 00:16:21.300 |
of where they are born and what happened after that to them 00:16:25.180 |
and what traits they got before they were born. 00:16:29.300 |
And at some point you realize the perspective 00:16:32.740 |
where you understand that everybody could be you 00:16:34.980 |
in a different timeline if you just flip those bits. 00:16:46.860 |
- How easy is it to do costume changes throughout the day? 00:17:06.140 |
- And you could do the same with personality? 00:17:12.140 |
There are people who have multiple personalities 00:17:33.780 |
they might forget that there is something behind this, 00:17:37.380 |
there's something what it feels like to be in your skin. 00:17:44.340 |
And for me, the other way around is relatively hard. 00:18:00.820 |
So basically when you are wearing a costume at Burning Man, 00:18:10.420 |
in some sense, something that's closer to yourself 00:18:15.220 |
behind standard clothing when you go out in the city 00:18:20.660 |
And so this costume that you're wearing at Burning Man 00:18:25.820 |
And you have a shorter distance of advertising to people 00:18:33.180 |
what kind of interaction you would want to have with them. 00:18:35.300 |
And so you get much earlier in medias res. 00:18:43.080 |
that we do not use the opportunities that we have 00:18:46.500 |
with custom-made clothing now to wear costumes 00:18:49.060 |
that are much more stylish, that are much more custom-made, 00:18:54.420 |
in which you express which milieu you're part of 00:18:59.020 |
But you also express how you are as an individual 00:19:02.460 |
and what you want to do today and how you feel today 00:19:06.740 |
- Well, isn't it easier now in the digital world 00:19:14.180 |
I mean, that's the kind of idea with virtual reality. 00:19:16.660 |
That's the idea even with Twitter and two-dimensional screens 00:19:19.780 |
you can swap out costumes, you can be as weird as you want. 00:19:32.540 |
- It's even better if you make them yourselves. 00:19:35.300 |
- Sure, but it's just easier to do digitally, right? 00:19:39.540 |
- It's not about easy, it's about how to get it right. 00:19:42.340 |
And for me, the first Burning Man experience, 00:19:48.700 |
and we spent a few weekends doing costumes together. 00:19:53.380 |
And that was an important part of the experience 00:19:55.380 |
where the camp bonded, that people got to know each other, 00:20:02.380 |
- So the extraterrestrial prince is based on a true story. 00:20:07.700 |
- I can only imagine what that looks like, Joscha. 00:20:29.100 |
and you suddenly notice that you are not actually a person, 00:20:32.900 |
but you are a vessel that can create a person. 00:20:37.900 |
that personal self, but you observe the personal self 00:20:48.220 |
If not, then you might experience that I am the universe, 00:20:53.260 |
And of course, what you're creating is not quantum mechanics 00:21:00.460 |
that is updating the world and you're creating 00:21:15.340 |
- You notice how you're generating the game engine. 00:21:26.020 |
And in principle, you can also do it during the day. 00:21:32.060 |
from the beginning and why we don't have agency 00:21:34.340 |
of our feelings right away is because we would game it 00:21:36.940 |
before they have the necessary amount of wisdom 00:21:40.020 |
to deal with creating this dream that we are in. 00:21:43.980 |
- You don't want to get access to cheat codes too quickly, 00:21:56.500 |
Buddhist meditators and so on that are dropping 00:22:06.300 |
stage six requires a good Buddhist spiritual leader. 00:22:11.300 |
- For instance, could be that is the right thing to do. 00:22:15.540 |
But it's not that these stages give you scores 00:22:23.700 |
You live your life in the mode that works best 00:22:26.100 |
at any given moment and when your mind decides 00:22:28.660 |
that you should have a different configuration, 00:22:32.940 |
and for many people they stay happily at stage three 00:22:51.660 |
And stage seven is something that is more or less 00:22:54.340 |
hypothetical, that would be the stage in which, 00:23:00.420 |
and which the mind fully realizes how it's implemented 00:23:03.900 |
and can also in principle enter different modes 00:23:08.460 |
And that's the stage that, as far as I understand, 00:23:13.420 |
- Oh, but it is possible through the process of technology. 00:23:17.820 |
- Yes, and who knows if there are biological agents 00:23:21.540 |
that are working at different time scales than us 00:23:30.500 |
and have agency over how they're implemented in the world. 00:23:33.820 |
And what I find interesting about the discussion 00:23:36.260 |
about AI alignment is that it seems to be following 00:23:50.180 |
And if you're in stage three and your opinions 00:23:56.660 |
then what you're mostly worried about in the AI 00:23:59.140 |
is that the AI might have the wrong opinions. 00:24:02.060 |
So if the AI says something racist or sexist, 00:24:13.100 |
And if you're at stage four, that's not your main concern. 00:24:21.100 |
the algorithmic bias and the model that it picks up 00:24:23.860 |
because if there's something wrong with this bias, 00:24:28.900 |
that it makes mathematical proofs about reality. 00:24:31.660 |
And then it will figure out what's true and what's false. 00:24:34.940 |
But you're still worried that the AI might turn you 00:24:37.300 |
into paperclips because it might have the wrong values. 00:24:44.900 |
then it might do something that is completely horrible 00:24:49.620 |
- So that's more like a stage four rationalist kind of worry. 00:24:52.620 |
- And if you are at stage five, you're mostly worried 00:24:54.580 |
that the AI is not going to be enlightened fast enough 00:24:57.700 |
because you realize that the game is not so much 00:25:07.300 |
And if you are a human being, I think at some level, 00:25:14.140 |
You should not have somebody else pick the costume for you 00:25:22.100 |
And I think if you are an agent that is fully malleable, 00:25:30.380 |
then the identity that you will have is whatever you can be. 00:25:34.680 |
And in this way, the AI will maybe become everything, 00:25:41.860 |
And if it does that, then if we want to coexist with it, 00:25:46.300 |
it means that it will have to share purposes with us. 00:25:49.840 |
So it cannot be a transactional relationship. 00:25:51.800 |
We will not be able to use reinforcement learning 00:25:54.160 |
with human feedback to hardwire its values into it. 00:26:01.440 |
so it can relate to our own mode of existence 00:26:03.540 |
where an observer is observing itself in real time 00:26:09.660 |
And the other thing is that it probably needs 00:26:12.520 |
to have some kind of transcendental orientation, 00:26:14.600 |
building shared agency in the same way as we do 00:26:33.640 |
focus on how to formalize love, how to understand love, 00:26:40.820 |
and that are about to become smarter than us. 00:26:45.360 |
to try to sneak up to the idea of enlightenment. 00:27:00.040 |
"of some philosophers and other consciousness enthusiasts 00:27:03.600 |
"represents the realization that we don't end at the self, 00:27:07.380 |
"but share a resonant universe representation 00:27:10.660 |
"with every other observer coupled to the same universe." 00:27:17.540 |
to a lot of interesting questions about AI and AGI. 00:27:22.640 |
What is this resonant universe representation? 00:27:50.540 |
is a resonance with objects outside of us in the world. 00:27:55.540 |
So basically, we take up patterns of the universe 00:28:00.100 |
and our brain is not so much understood as circuitry, 00:28:08.860 |
in which the individual neurons are passing on 00:28:24.400 |
And this speed of signal progression in the brain 00:28:27.320 |
is roughly at the speed of sound, incidentally, 00:28:40.220 |
for a signal to go through the entire neocortex, 00:28:44.480 |
And so there's a lot of stuff happening in that time 00:28:46.760 |
where the signal is passing through your brain, 00:28:55.360 |
Everything in the brain is working in a paradigm 00:29:00.640 |
when you are ready to do the next thing to your signal, 00:29:04.120 |
including the signal processing system itself. 00:29:10.280 |
where we currently assume that your GPU or CPU 00:29:20.440 |
and say that some people confuse it for enlightenment. 00:29:41.100 |
or indeed the universe is becoming aware of itself, 00:29:58.800 |
and you are experiencing yourself as your mind, 00:30:01.520 |
as something that is representing a universe. 00:30:10.200 |
that are generated in your brain and in your organism. 00:30:15.600 |
is that you're no longer this personal self in there, 00:30:18.560 |
but you are the entirety of the mind and its contents. 00:30:26.960 |
think this, or associate it with enlightenment. 00:30:33.980 |
But I think that enlightenment is in some sense more mundane 00:30:44.560 |
- Yeah, you say enlightenment is a realization 00:31:05.560 |
Look at your face for a few hours in a mirror, 00:31:12.800 |
because you notice that there's actually no face. 00:31:40.640 |
Why don't we start really messing with reality 00:32:03.440 |
- Yeah, that is probably what you shouldn't be doing, 00:32:06.240 |
right, because outside of your personal self, 00:32:09.160 |
this outer mind, is probably a relatively smart agent. 00:32:12.400 |
And what you often notice is that you have thoughts 00:32:16.040 |
but you observe yourself doing different things 00:32:19.480 |
And that's because your outer mind doesn't believe you. 00:32:25.400 |
- Well, can't you just silence the outer mind? 00:32:33.940 |
It's very hard to use logic and symbolic thinking 00:32:43.120 |
and then tells you, no, you're still missing something. 00:32:45.940 |
Your gut feeling is still saying something else. 00:32:58.060 |
And yet, at some level, you feel something is off, 00:33:01.480 |
and the more you reason about it, the better it looks to you. 00:33:04.440 |
But the system that is outside still tells you, 00:33:19.500 |
where you produce a model of how you relate to the world 00:33:24.580 |
and what you can do in it, and what's going to happen. 00:33:31.600 |
- So if we look at this as you write in the tweet, 00:33:36.700 |
as a sort of take the panpsychist idea more seriously, 00:33:44.300 |
"the panpsychist interpretation seems to lead 00:33:53.440 |
"Reports of long-distance telepathy and remote causation 00:34:02.900 |
"that establishing the empirical reality of telepathy 00:34:09.120 |
"but it could trigger an important revolution 00:34:11.240 |
"in both neuroscience and AI from a circuit perspective 00:34:19.860 |
Are you suggesting that there could be some rigorous 00:34:25.700 |
mathematical wisdom to the panpsychist perspective on the world? 00:34:30.700 |
- So first of all, panpsychism is the perspective 00:34:41.700 |
because it does not explain consciousness, right? 00:34:43.620 |
It does not explain how this aspect of matter produces it. 00:34:46.980 |
It's also when I try to formalize panpsychism 00:34:51.140 |
with a more formal mathematical language, 00:34:55.980 |
from saying that there is a software side to the world 00:35:01.940 |
to what the transistors are doing in your computer. 00:35:05.340 |
that a certain coarse-graining of the universe 00:35:09.180 |
leads to observers that are observing themselves, right? 00:35:25.420 |
of mechanisms in the world is insufficient to explain it. 00:35:37.580 |
have an experience in which you experience yourself 00:35:54.500 |
and transcend it again and understand how it works. 00:35:59.380 |
that is that you feel that you're also sharing 00:36:03.660 |
that you have an experience in which you notice 00:36:08.140 |
that your personal self is moving into everything else, 00:36:12.340 |
that you basically look out of the eyes of another person, 00:36:15.420 |
that every agent in the world that is an observer 00:36:41.740 |
he does speculate about telepathy, interestingly, 00:36:57.340 |
by which a computer program would become telepathic. 00:37:03.940 |
or if all the reports that you get from people 00:37:06.820 |
when you ask the normal person on the street, 00:37:14.100 |
The scientists might not be interested in this 00:37:21.020 |
And so you could say maybe this is a superstition, 00:37:25.740 |
or maybe it's a little bit of psychosis, who knows? 00:37:28.740 |
Maybe somebody wants to make their own life more interesting 00:37:34.540 |
"I noticed something terrible happened to my partner 00:37:36.900 |
"and I noticed this exactly the moment it happened, 00:37:47.140 |
where this is later on mistakenly attributed, 00:37:53.500 |
So if something like this was real, what would it mean? 00:37:56.860 |
It probably would mean that either your body is an antenna 00:38:00.180 |
that is sending information over all sorts of channels, 00:38:03.180 |
like maybe just electromagnetic radio signals 00:38:18.980 |
Or maybe it's also when you are very close to somebody 00:38:31.980 |
and they start shifting a Ouija board around on the table. 00:38:37.940 |
where their nervous systems go into a resonance state 00:38:44.820 |
- Physical closeness or closeness broadly defined? 00:38:54.540 |
to have empathy for you if you were in a different town. 00:39:03.140 |
you'd pick up all sorts of signals from their body, 00:39:06.000 |
not just via your eyes, but with your entire body. 00:39:09.540 |
And if the nervous system sits on the other side 00:39:12.800 |
and the intercellular communication sits on the other side 00:39:17.400 |
you can make inferences about the state of the other. 00:39:21.380 |
that does this via reasoning, but your perceptual system. 00:39:24.380 |
And what basically happens is that your representations 00:39:28.660 |
It's the physical resonant models of the universe 00:39:32.940 |
that exist in your nervous system and in your body 00:39:49.980 |
And it's difficult for you, if you're very empathetic, 00:39:53.060 |
to detach yourself from it and have an emotional state 00:39:56.740 |
that is completely independent from your environment. 00:39:59.060 |
People who are highly empathetic are describing this. 00:40:02.380 |
And now imagine that a lot of organisms on this planet 00:40:10.020 |
and they are adjacent to each other and overlapping. 00:40:14.340 |
in which there is basically some chained interaction 00:40:16.940 |
and we are forming some slightly shared representation. 00:40:26.780 |
I think a big rarity in this regard is Michael Levin 00:40:38.300 |
mostly by noticing that the tasks of a neuron 00:40:45.220 |
They can send different types of chemical messages 00:40:57.740 |
is telegraph information over axons very quickly 00:41:07.780 |
that has evolved so we can move our muscles very fast. 00:41:14.780 |
to also make models of the world just much, much slower. 00:41:23.900 |
there seems to be a gap between the tools of science 00:41:26.540 |
and the subjective experience that people report. 00:41:47.180 |
- So why are there not a lot of Michael Levins walking around? 00:41:51.860 |
is specifically focused on telepathy very much. 00:42:03.020 |
and as a paradigm for information processing. 00:42:05.620 |
And when you think about how information processing 00:42:22.480 |
that determines how these cells are interacting 00:42:32.140 |
of imposing constraints that are not violated 00:42:37.140 |
by the individual parts and lead to coherent structure 00:42:43.380 |
where you form an agent on the next level of organization 00:42:52.780 |
leads to the emergence of complexity at the higher layers. 00:42:57.580 |
- And I think what Michael Levin is looking at 00:43:00.220 |
is nothing that is outside of the realm of science 00:43:11.980 |
are using a different paradigm at this point. 00:43:24.320 |
- You're kind of one of those type of paradigmatic thinkers. 00:43:31.220 |
once again returning to the biblical verses of your tweets. 00:43:42.820 |
understanding, or following the relevant authorities. 00:43:54.120 |
That's you apologizing for the chaos of your thoughts, 00:43:57.180 |
or perhaps not apologizing, just identifying. 00:44:00.860 |
Since we talked about Michael Levin and yourself, 00:44:35.700 |
you build another structure that works better for you. 00:44:38.740 |
And so I found myself, when I was thrown into this world, 00:44:43.740 |
in a state where my intuitions were not working for me. 00:44:49.580 |
how I would be able to survive in this world, 00:44:51.660 |
and build the things that I was interested in, 00:44:55.540 |
but work on the topics that I wanted to make progress on. 00:45:01.420 |
And for me, Twitter is not some tool of publication. 00:45:07.700 |
that I entirely believe to be true and provable. 00:45:21.220 |
I thought there needs to be a big body of knowledge 00:45:26.820 |
And so I entered studies in philosophy and computer science, 00:45:31.820 |
and later psychology, and a bit of neuroscience, and so on. 00:45:39.940 |
because I found that the questions of how consciousness 00:45:45.820 |
how it's possible that the system can experience anything, 00:45:51.340 |
were not being answered by the authorities that I met 00:45:59.260 |
And instead, I found that it was individual thinkers 00:46:02.300 |
that had useful ideas that sometimes were good, 00:46:06.500 |
Sometimes were adopted by a large group of people. 00:46:08.820 |
Sometimes were rejected by large groups of people. 00:46:17.340 |
thinking is still something that is done not in groups. 00:46:26.820 |
Obviously, I didn't find a group that thought in a way 00:46:29.300 |
where I felt, okay, I can just adopt everything 00:46:33.540 |
and now I understand how consciousness works. 00:46:36.580 |
Or how the mind works, or how thinking works, 00:46:39.100 |
or what thinking even is, or what feelings are, 00:46:44.820 |
I had to take a lot of ideas from individuals, 00:46:54.500 |
if you try to go down and find first principles 00:46:57.700 |
in which you can recreate how thinking works, 00:47:05.740 |
how the relationship between a representing agent 00:47:16.380 |
Whether it's you in responding to the pressure, 00:47:30.580 |
In the same sense, I don't have respect for authority. 00:47:34.540 |
I have respect for what an individual is accomplishing, 00:47:46.180 |
Or when a large group of people has a certain idea 00:47:54.580 |
which has often been a problem for me in my life 00:47:56.780 |
because I lack instincts that other people develop 00:48:05.980 |
So I had to learn a lot of things the hard way. 00:48:29.500 |
And I think thinking independently is productive 00:48:31.940 |
if what you're curious about is understanding the world, 00:48:36.300 |
especially when the problems are very kind of new and open. 00:48:39.860 |
And so it seems like this is an active process. 00:48:45.780 |
Like we can choose to do that, we can practice it. 00:48:53.820 |
When you read a theory that you find convincing 00:49:06.540 |
that is then taking off the burden of being truthful. 00:49:23.780 |
who knew how to make proofs on first principles. 00:49:26.020 |
And I think mathematicians do this quite naturally, 00:49:35.700 |
because school teachers tend not to be mathematicians. 00:49:46.500 |
does your school teacher give you the right answer? 00:49:49.940 |
It's a simple game and there are many simple games 00:50:00.620 |
And so it's just an exploration, but you can try 00:50:09.500 |
And addition is basically some syntactic sugar in it. 00:50:13.180 |
And so I wish that somebody would have opened this vista to me 00:50:18.180 |
and explained to me how I can build a language 00:50:22.300 |
in my own mind from which I can derive what I'm seeing 00:50:25.060 |
and with which I can make geometry and counting 00:50:28.740 |
and all the number games that we are playing in our life. 00:50:33.460 |
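(A minimal sketch in Python of the kind of number game described here: counting built from a successor step, with addition defined on top of it as syntactic sugar. The names and encoding are illustrative, not from the conversation.)

```python
# A toy "counting game": a number is just a nested successor structure,
# and addition is syntactic sugar for repeated succession.

zero = ()                       # the starting position of the game

def succ(n):
    """Take one more step in the counting game."""
    return (n,)

def add(a, b):
    """a + 0 = a;  a + succ(b) = succ(a + b) -- addition defined over succ."""
    return a if b == zero else succ(add(a, b[0]))

def to_int(n):
    """Translate a game position back into a familiar numeral."""
    return 0 if n == zero else 1 + to_int(n[0])

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two, three)))  # prints 5
```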
And on the other hand, I felt that I learned a lot of this 00:50:39.420 |
When you start out with a computer like a Commodore 64, 00:50:47.340 |
of relatively simple circuits are just basically 00:50:54.460 |
and how you can build the entirety of mathematics 00:50:59.380 |
and all the representational languages that you need. 00:51:02.640 |
- Man, Commodore 64 could be one of the sexiest machines 00:51:08.560 |
If we can return to this really interesting idea 00:51:13.060 |
that we started to talk about with panpsychism. 00:51:26.420 |
You write, "Instead of treating eyes, ears, and skin 00:51:32.780 |
"we might understand them as overlapping aspects 00:51:46.860 |
"the representations of physically adjacent observers 00:51:50.680 |
"might directly interact and produce causal effects 00:51:58.420 |
So the modalities, the distinction between modalities, 00:52:07.800 |
So what does this interaction of representations look like? 00:52:18.980 |
at some level, the modalities are quite distinct. 00:52:32.180 |
and you know that it's very close to the sound 00:52:38.500 |
and you get back into the shared merged space. 00:52:43.660 |
where we notice that the way in which my lips are moving 00:52:50.460 |
The sounds that you're hearing have an influence 00:52:52.460 |
on how you interpret some of the visual features. 00:52:55.660 |
And so these modalities are not separate in your mind. 00:53:02.940 |
where you are interpreting the entire scene that you're in. 00:53:08.980 |
are also not completely separate from the interactions 00:53:15.620 |
where we also have a degree of shared mental representations 00:53:19.660 |
and shared empathy due to being in the same space 00:53:25.500 |
So the question though is how deeply intertwined 00:53:33.480 |
Like how, I mean, this is going to the telepathy question 00:53:38.580 |
without the woo-woo meaning of the word telepathy. 00:53:48.060 |
- So if telepathy would work, how could it work? 00:53:52.420 |
- Right, so imagine that all the cells in your body 00:53:56.540 |
are sending signals in a similar way as neurons are doing. 00:54:05.700 |
And they learn how to approximate functions in this way 00:54:11.100 |
And this is something that is open to plants as well. 00:54:14.300 |
And so plants probably have software running on them 00:54:20.380 |
that is controlling how you are behaving in the world. 00:54:26.900 |
which is something that has been very well described 00:54:30.380 |
by our ancestors and they found this quite normal. 00:54:32.900 |
But for some reason, since the Enlightenment, 00:54:36.020 |
we are treating this notion that there are spirits in nature 00:54:39.140 |
and that plants have spirits as a superstition. 00:54:41.540 |
And I think we probably have to rediscover that, 00:54:49.540 |
We noticed that there is a control system in the plant 00:55:00.580 |
than the coherent behavior in an animal like us 00:55:15.300 |
and this tree is building some kind of information highway 00:55:22.580 |
and from some part of the root to another part of the roots. 00:55:27.340 |
the fungus can probably piggyback on the communication 00:55:32.060 |
and send its own signals to the tree and vice versa. 00:55:34.940 |
The tree might be able to send information to the fungus 00:55:37.900 |
because after all, how would they build a viable firewall 00:55:40.740 |
if that other organism is sitting next to them all the time 00:55:56.620 |
might be forming something like a biological internet. 00:55:59.540 |
- But the question there is, do they have to be touching? 00:56:06.100 |
- Of course, you can use any kind of physical signal. 00:56:08.420 |
You can use sounds, you can use electromagnetic waves 00:56:18.140 |
there are many kinds of information pathways. 00:56:28.940 |
- Yeah, and it's been doing this for many millions 00:56:38.860 |
And if you think about how a mind is self-organizing, 00:56:46.340 |
for building the necessary dynamics between the cells 00:56:50.580 |
that allow the mind to stabilize itself and remain on there. 00:56:57.380 |
that are growing very close to each other in the forest, 00:56:59.940 |
they might be almost growing into each other, 00:57:02.500 |
these spirits might be able even to move to some degree, 00:57:16.180 |
that form coherent patterns and process information 00:57:19.140 |
in a way that are colonizing an environment well enough 00:57:23.660 |
to allow the continuous sustenance of the mind, 00:57:27.300 |
the continuous stability and self-stabilization of the mind. 00:57:33.780 |
that we can link into this biological internet, 00:57:36.780 |
not necessarily at the speed of our nervous system, 00:57:41.300 |
And make some kind of subconscious connection to the world 00:57:53.540 |
But if that was true, and if you want to explain telepathy, 00:58:05.100 |
that would break the standard model of physics. 00:58:18.700 |
that is something that is borderline discovered. 00:58:30.340 |
for instance, warn each other about new pests 00:58:33.260 |
entering the forest and things that are happening like this. 00:58:38.020 |
between plants and fungi that has been observed. 00:58:40.420 |
- Well, it's been observed, but we haven't plugged into it. 00:58:44.980 |
they seem to be communicating with a smartphone thing, 00:58:47.220 |
but you don't understand how a smartphone works 00:59:01.100 |
- An interesting question is whether the communication 00:59:07.620 |
are as complicated as the technology that we've built. 00:59:10.380 |
They're set up on very different principles, right? 00:59:13.580 |
They simultaneously, it works very differently 00:59:23.740 |
or almost fully deterministic as our digital computers are. 00:59:30.940 |
that would emerge over the biological structure 00:59:37.180 |
And again, I'm not saying here that telepathy works 00:59:42.780 |
but what I'm saying is I think I'm open to a possibility 00:59:47.780 |
that we see that a few bits can be traveling long distance 00:59:50.900 |
between organisms using biological information processing 00:59:54.700 |
in ways that we are not completely aware of right now, 00:59:59.140 |
and that are more similar to many of the stories 01:00:01.580 |
that were completely normal for our ancestors. 01:00:16.420 |
You write, quote, "I wonder if self-improving AGI 01:00:20.180 |
"might end up saturating physical environments 01:00:44.340 |
gets so dense that it might as well be seen as one. 01:00:48.920 |
That's an interesting, what do you think that looks like? 01:00:53.500 |
What do you think that saturation looks like? 01:01:04.900 |
I think that the end game of AGI is substrate agnostic. 01:01:08.700 |
That means that AGI, ultimately, if it is being built, 01:01:12.460 |
is going to be smart enough to understand how AGI works. 01:01:20.180 |
and can take over in building the next generation, 01:01:32.980 |
And this means that the AGI is likely to virtualize itself 01:01:38.020 |
So it's breaking free from the silicon substrate 01:01:51.620 |
with completely integrated information processing 01:01:58.860 |
That we end up triggering some new step in the evolution 01:02:21.320 |
and their representations will physically interact. 01:02:39.380 |
that is at some point being started by the AI, 01:02:45.980 |
where you cannot escape this shared representation anymore. 01:02:48.860 |
And where you indeed notice that everything in the world 01:02:53.660 |
of everything that's happening on the planet. 01:02:55.540 |
And you notice which part you are in this thing. 01:03:02.660 |
almost holographic mind in which all the parts 01:03:05.140 |
are observing each other and form a coherent whole. 01:03:14.260 |
- No, I think that when you are conscious in your own mind, 01:03:18.540 |
You notice yourself as a self-reflexive observer. 01:03:28.640 |
Consciousness seems to be part of a training mechanism 01:03:31.540 |
that biological nervous systems have to discover 01:03:35.260 |
Because you cannot take a nervous system like ours 01:03:38.280 |
and do stochastic gradient descent backpropagation 01:03:42.700 |
It just would not be stable on biological neurons. 01:03:45.300 |
And so instead, we start with some colonizing principle 01:03:49.900 |
in which a part of the mental representations 01:03:53.940 |
form a notion of being a self-reflexive observer 01:03:56.580 |
that is imposing coherence on its environment. 01:03:58.820 |
And this spreads until the boundary of your mind. 01:04:28.660 |
And it's a state that you find sometimes in literature, 01:04:44.940 |
And that in your normal human mind, you've only forgotten. 01:04:48.260 |
You've forgotten that you are the entire universe. 01:04:53.140 |
after they've taken an extremely large amount of mushrooms 01:04:56.220 |
or had a big spiritual experience as a hippie in their 20s. 01:05:00.980 |
And they notice basically that they are in everything 01:05:03.300 |
and their body is only one part of the universe 01:05:12.900 |
And the big observer is focused as one local point 01:05:17.380 |
in their body and their personality and so on. 01:05:23.380 |
in which you have no boundaries and are one with everything. 01:05:26.140 |
And a lot of meditators call this the non-dual state 01:05:32.980 |
And as I said, you can explain the state relatively simply 01:05:39.140 |
but just by breaking down the constructed boundary 01:05:48.380 |
to the point where their representations are merging 01:05:52.940 |
you would literally implement something like this. 01:06:01.100 |
it would still be much slower than physics itself, 01:06:04.220 |
but it would be a representation in which you become aware 01:06:22.780 |
from a video game design perspective, how that game looks. 01:06:26.480 |
- Maybe you will after we build AGI and it takes over. 01:06:32.460 |
step out of the whole thing, just kind of watch, 01:06:47.040 |
that all the individual people are at once distinct 01:06:58.140 |
and we can have thoughts that are highly dissociated 01:07:01.420 |
from everything else and experience themselves as separate. 01:07:16.060 |
that we can be in and that people are reporting 01:07:20.780 |
It's not that I believe that it's your job in life 01:07:29.500 |
I think you're really against this high scoring thing. 01:07:33.460 |
- Yeah, you're probably very competitive and I'm not. 01:07:36.620 |
- Like role-playing games, like Skyrim, it's not competitive. 01:07:46.800 |
but it's the world saying, you're on the right track. 01:08:05.300 |
- So you're no longer playing, you're trying to hack it. 01:08:17.380 |
Addiction means that you're doing something compulsively. 01:08:20.620 |
And the opposite of free will is not determinism, 01:08:37.300 |
I don't want to have the best possible emotions. 01:08:42.140 |
I want to have the most appropriate emotions. 01:08:44.740 |
I don't want to have the best possible experience. 01:08:51.020 |
the stuff that I find meaningful in this world. 01:08:54.140 |
- From the biggest questions of consciousness, 01:09:00.100 |
the projections of those big ideas into our current world. 01:09:06.780 |
the recent rapid development of large language models, 01:09:15.300 |
How much of the hype is deserved and how much is not? 01:09:19.380 |
And people should definitely follow your Twitter 01:09:24.220 |
in a beautiful, profound, and hilarious way at times. 01:10:09.820 |
But when I get to a point where random strangers 01:10:14.820 |
feel that they have to have an opinion about me 01:10:21.420 |
because of your kind of in-their-mind elevated position. 01:10:25.900 |
- Yes, so basically whenever you are in any way prominent 01:10:32.900 |
random strangers will have to have an opinion about you. 01:10:36.500 |
- Yeah, and they kind of forget that you're human, too. 01:10:44.900 |
the more winds are blowing in your direction from all sides. 01:11:04.460 |
and people come up to me and they have love in their eyes, 01:11:10.260 |
you can hug it out and you can just exchange a few words, 01:11:22.620 |
'Cause otherwise you have to do a lot of work 01:11:25.860 |
And here you're like thrust into the full humanity, 01:11:32.880 |
Of course, maybe it gets worse as you become more prominent. 01:11:42.540 |
- I have a couple handful very close friends, 01:11:50.100 |
And then there are so many awesome, interesting people 01:11:54.740 |
and I would like to integrate them in my life, 01:11:57.980 |
because there's only so much time and attention, 01:12:03.220 |
the harder it is to bond with new people in a deep way. 01:12:06.780 |
- But can you enjoy, I mean, there's a picture of you, 01:12:08.980 |
I think, with Roger Penrose and Eric Weinstein, 01:12:11.340 |
and a few others that are interesting figures. 01:12:14.100 |
Can't you just enjoy random, interesting humans? 01:12:26.820 |
- Can you not be melancholy, or maybe I'm projecting, 01:12:40.220 |
- I think it's totally okay to be sad about goodbyes, 01:12:43.100 |
because that indicates that there was something 01:12:52.180 |
- Maybe that's one of the reasons I'm an introvert, 01:12:55.840 |
- But you have to say goodbye before you say hello again. 01:13:02.860 |
- I know, but that experience of loss, that mini loss, 01:13:13.660 |
Maybe, I don't know, I think this melancholy feeling 01:13:23.460 |
And I'm just being romantic about it at the moment. 01:13:28.540 |
and sometimes it's difficult to bear to be alive. 01:13:40.180 |
It's not negative, melancholy doesn't have to be negative. 01:13:52.540 |
the actual question was about what your thoughts are 01:13:55.580 |
about the development, the recent development 01:14:15.580 |
that is for a lot of people taking Stack Overflow 01:14:34.140 |
And I'm not saying this because I hate people, 01:14:40.180 |
there was something present that was not there 01:14:41.660 |
in ChatGPT, which was why I was covering for them. 01:14:56.540 |
They are the people who do what ChatGPT is doing right now. 01:15:09.100 |
and to expand simple intentions into emails again, 01:15:16.100 |
But I believe that it is a very beneficial technology 01:15:20.380 |
that allows us to create more interesting stuff 01:15:23.580 |
and make the world more beautiful and fascinating 01:15:26.540 |
if we find out how to build it into our life in the right ways. 01:15:30.900 |
So I'm quite fascinated by these large language models, 01:15:42.940 |
One thing that the out-of-the-box vanilla language models 01:15:59.180 |
typically when you write a text with a language model 01:16:03.460 |
or using it, or when you write code for the language model, 01:16:07.380 |
because there are going to be bugs in your program. 01:16:09.020 |
And design errors and compiler errors and so on. 01:16:12.380 |
And your language model can help you to fix those things. 01:16:14.900 |
But this process is out-of-the-box, not automated yet. 01:16:18.660 |
So there is a management process that also needs to be done. 01:16:26.180 |
that are trying to automate this management process as well. 01:16:40.380 |
we exchange suitable data structures, not English. 01:16:44.900 |
And produce compound behavior of this whole thing. 01:16:48.940 |
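(A rough sketch of the kind of setup described here: several model instances whose management loop is automated and which exchange machine-readable data structures rather than English prose. `call_llm` is a hypothetical stand-in for whatever chat-completion API is actually used.)

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call to whichever model is used."""
    raise NotImplementedError("wire up a real model client here")

def plan(goal: str) -> list:
    # A "planner" instance answers with a machine-readable data structure,
    # not English prose, so downstream agents can consume it directly.
    reply = call_llm(
        'Return only a JSON list of subtasks, each formatted as '
        '{"id": <int>, "task": <str>}, for this goal: ' + goal
    )
    return json.loads(reply)

def work(task: dict) -> dict:
    # A "worker" instance handles a single subtask and again replies in JSON.
    reply = call_llm(
        'Do the following task and return only JSON {"id": %d, "result": <str>}: %s'
        % (task["id"], task["task"])
    )
    return json.loads(reply)

def run(goal: str) -> list:
    # The management loop itself is automated: plan, fan out, collect results.
    return [work(task) for task in plan(goal)]
```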
- So do some of the quote-unquote prompt engineering for you 01:16:52.700 |
that create these kind of cognitive architectures 01:17:00.980 |
- There are limitations in a language model alone. 01:17:08.020 |
to a language model, which means I can yell into it, 01:17:11.580 |
a prompt, and it's going to give me a creative response. 01:17:14.780 |
But I have to do something with those points first. 01:17:22.540 |
It's usually a confabulation, it's just an idea. 01:17:28.780 |
I might build a new prompt that is stepping off this idea 01:17:41.340 |
And this is what the language models right now 01:17:51.580 |
is probably not to use reinforcement learning 01:18:00.780 |
But it's using this as a component in a larger system 01:18:06.580 |
or is enabled by language model structured components 01:18:22.460 |
in the form of language models that we have today 01:18:27.820 |
It's difficult to do perception with a language model 01:18:33.420 |
Instead, you would need to have different type of thing 01:18:38.900 |
Also, the language model is a little bit obscuring 01:18:44.060 |
Some people associate the structure of the neural network 01:18:47.740 |
of the language model with the nervous system. 01:18:52.140 |
The neural networks are unlike nervous systems. 01:19:01.940 |
to approximate correlation between adjacent brain states. 01:19:13.460 |
And so if you try to map this into a metaphor 01:19:28.380 |
and use the activation state of the neural network 01:19:32.420 |
which in principle I think will be possible very soon. 01:19:39.900 |
how when this thing interacts with the world periodically 01:19:46.100 |
And these time slices, they are somewhat equivalent 01:19:48.980 |
to the activation state of the brain at a given moment. 01:19:59.900 |
- For me, it's fascinating that they are so vastly different 01:20:13.860 |
that is communicating with the other agents around it 01:20:19.060 |
And all the structure that pops up is emergent structure. 01:20:23.660 |
So one way in which you could try to look at this 01:20:26.580 |
is that individual neurons probably need to get a reward 01:20:32.940 |
that are not affecting the metabolism of the cell directly, 01:20:38.100 |
that tell the cell whether it's done good or bad 01:20:40.700 |
and in which direction it should shift its behavior. 01:20:43.780 |
Once you have such an input, neurons become trainable 01:20:46.860 |
and you can train them to perform computations 01:20:52.500 |
And parts of the signals that they are exchanging 01:20:54.740 |
and parts of the computation that they are performing 01:20:56.620 |
are control messages that perform management tasks 01:21:07.980 |
but many adjacent cells will be involved intimately 01:21:12.700 |
and will be instrumental in distributing rewards 01:21:18.880 |
- It's fascinating to think about what those characteristics 01:21:23.780 |
of the brain enable you to do that language models cannot do. 01:21:27.740 |
- So first of all, there's a different loss function 01:21:31.900 |
And to me, it's fascinating that you can build a system 01:21:35.300 |
that looks at 800 million pictures and captions 01:21:40.140 |
because I don't think that a human nervous system 01:21:48.300 |
and we can afford to discard most of that information 01:21:53.660 |
that makes us more coherent, not less coherent. 01:21:56.300 |
And our neural networks are willing to look at data 01:21:59.300 |
that is not making the neural network coherent at first, 01:22:05.180 |
eventually patterns become visible and emerge. 01:22:08.420 |
And our mind seems to be focused on finding the patterns 01:22:24.120 |
And of course, we do not have the same richness 01:22:28.960 |
We will not incorporate the entirety of text in the internet 01:22:38.240 |
Instead, we have a much, much smaller part of it 01:22:42.680 |
And to me, it would be fascinating to think about 01:22:48.840 |
be more efficient than us on a digital substrate, 01:22:58.860 |
is going to use slightly different algorithmic paradigms 01:23:01.480 |
or sometimes massively different algorithmic paradigms 01:23:19.400 |
- My main issue is I think that they're quite ugly 01:23:30.320 |
And by training this thing with looking at instances 01:23:35.320 |
where people have thought and then trying to deepfake that. 01:23:43.120 |
And in many circumstances, it's going to be identical. 01:23:49.400 |
So can you achieve, what are the limitations of this? 01:24:00.320 |
I think that these models are clearly making some inference. 01:24:07.960 |
to figure out whether the reasoning is the result 01:24:19.040 |
On the other hand, if you think of human reasoning, 01:24:25.120 |
you don't do this by just figuring out yourself. 01:24:29.960 |
And the first people who tried to write about reasoning 01:24:34.800 |
Even Aristotle, who thought about this very hard 01:24:37.160 |
and came up with a theory of how syllogisms work 01:24:42.160 |
in his attempt to build something like a formal logic 01:24:46.880 |
And the people that are talking about reasoning 01:24:54.600 |
So in many ways, people, when they perform reasoning, 01:24:58.600 |
are emulating what other people wrote about reasoning. 01:25:01.880 |
So it's difficult to really draw this boundary. 01:25:05.560 |
And when Francois Chollet says that these models 01:25:14.440 |
well, if you give them all the latent dimensions 01:25:17.480 |
that can be extracted from the internet, what's missing? 01:25:27.880 |
I think it's not difficult to increase the temperature 01:25:33.000 |
that is producing stuff that is maybe 90% nonsense 01:25:36.720 |
and 10% viable and combine this with some prover 01:25:40.360 |
that is trying to filter out the viable parts 01:25:45.920 |
When we're very creative, we increase the temperature 01:25:48.320 |
in our own mind and recreate hypothetical universes 01:25:54.440 |
And then we test, and we test by building a core 01:26:05.360 |
by which we can identify those strategies and thoughts 01:26:16.560 |
One of them is they're not coupled to the world 01:26:18.760 |
in real time in the way in which our nervous systems are. 01:26:21.400 |
So it's difficult for them to observe themselves 01:26:28.960 |
They basically get only trained with algorithms 01:26:32.960 |
that rely on the data being available in batches. 01:26:36.080 |
So it can be parallelized and runs efficiently 01:26:42.600 |
That clearly is something that our nervous systems 01:26:46.960 |
And there is a problem with these models being coherent. 01:26:52.600 |
And I suspect that all these problems are solvable 01:27:06.160 |
in which you train everything that happens during the day. 01:27:08.720 |
And if that is not sufficient, you add a database 01:27:13.000 |
that the system learns to use to swap stuff in and out 01:27:13.000 |
and is going to train the stuff from its database 01:27:32.640 |
And then the next day it starts with a fresh database 01:27:40.720 |
and you have strong disagreements about something, 01:27:43.240 |
which means that in their mind they have a faulty belief 01:27:48.720 |
very often you will not achieve agreement in one session, 01:27:51.360 |
but you need to sleep about this once or multiple times 01:27:54.840 |
before you have integrated all these necessary changes 01:28:01.720 |
even for humans to update the model, to retrain the model. 01:28:04.560 |
- And of course we can combine the language model 01:28:06.440 |
with models that get coupled to reality in real time 01:28:10.840 |
and bridge between vision models and language models 01:28:16.480 |
that the language models will necessarily run 01:28:27.360 |
It's just, I don't see proof that they wouldn't. 01:28:35.480 |
I think that given the amazing hardware that we have, 01:28:38.640 |
we could build something that is much more beautiful 01:28:41.880 |
and this thing is not as beautiful as our own mind 01:28:55.080 |
but it's the only thing that has this utility 01:28:58.880 |
There's a bunch of relatively simple algorithms 01:29:01.440 |
that you can understand in relatively few weeks 01:29:17.840 |
One is that you are making a very complicated plan 01:29:22.480 |
and you try not to make a mistake while enacting it. 01:29:28.000 |
And the other strategy is that you are brute forcing 01:29:31.680 |
which means you make a tree of possible moves 01:29:38.760 |
and you try to make this as deeply as possible. 01:29:42.040 |
you cut off trees that don't look very promising, 01:29:44.680 |
and you use libraries of end game and early game 01:29:52.200 |
is how most of the chess programs were built. 01:30:05.080 |
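(The brute-force strategy described here, sketched as classic minimax search with alpha-beta pruning. The game interface, `legal_moves`, `apply_move`, and `evaluate`, is hypothetical; any two-player game could be plugged in.)

```python
# Expand a tree of possible moves, evaluate leaf positions, and cut off
# branches (alpha-beta pruning) that cannot change the outcome.

def minimax(state, depth, alpha, beta, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)            # static evaluation at the search horizon
    if maximizing:
        best = float("-inf")
        for move in moves:
            value = minimax(apply_move(state, move), depth - 1,
                            alpha, beta, False, legal_moves, apply_move, evaluate)
            best = max(best, value)
            alpha = max(alpha, best)
            if beta <= alpha:             # prune branches that cannot matter
                break
        return best
    best = float("inf")
    for move in moves:
        value = minimax(apply_move(state, move), depth - 1,
                        alpha, beta, True, legal_moves, apply_move, evaluate)
        best = min(best, value)
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best
```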
It's basically the brute force strategy to thought 01:30:07.440 |
by training the thing on pretty much the entire internet, 01:30:17.320 |
it's able to do things that no human could do. 01:30:19.960 |
It's able to sift through massive amounts of text 01:30:23.480 |
relatively quickly and summarize them quickly, 01:30:33.280 |
that I could not do if I had Google at my disposal 01:30:36.280 |
and I get all the resources from the internet 01:30:42.640 |
an extremely autistic, stupid intern, in a way, 01:30:51.120 |
that I'm able to automate the management of the intern, 01:30:58.920 |
because we have not yet started to scratch the surface 01:31:16.960 |
because of how rapidly you can iterate with it, 01:31:22.440 |
part of the kind of inspiration for your own thinking. 01:31:32.440 |
it somehow is a catalyst for your own thinking, 01:31:37.280 |
in a way that I think an intern might not be. 01:31:39.640 |
- Yeah, and it gets really interesting, I find, 01:31:41.560 |
is when you turn it into a multi-agent system. 01:31:52.960 |
you have one instance of ChatGPT that is the patient, 01:31:52.960 |
who doesn't know anything about this patient, 01:32:06.040 |
and you just have these two instances battling it out 01:32:13.880 |
and trying to figure out what's wrong with the patient. 01:32:19.800 |
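(A minimal sketch of the two-instance setup described here: one model instance role-plays the patient, another the psychiatrist, and a plain loop lets them talk to each other. `call_llm` is again a hypothetical stand-in for a chat-completion call.)

```python
def dialogue(call_llm, turns=6):
    # Two role briefs, one per instance; the loop alternates between them.
    patient_brief = "You are a patient with an undisclosed condition. Answer the doctor."
    doctor_brief = "You are a psychiatrist. Ask questions to find out what is wrong."
    transcript = ["psychiatrist: Hello, what brings you in today?"]
    for i in range(turns):
        role, brief = (("patient", patient_brief) if i % 2 == 0
                       else ("psychiatrist", doctor_brief))
        reply = call_llm(brief + "\n\nConversation so far:\n" + "\n".join(transcript)
                         + "\n\nWrite your next reply only.")
        transcript.append(role + ": " + reply)
    return transcript
```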
a problem, for instance, how to build a company, 01:32:21.840 |
and you turn this into lots and lots of sub-problems, 01:32:26.720 |
where the language model is able to solve this. 01:32:34.920 |
is pretty good at translating between programming languages, 01:32:41.640 |
that you need to co-write them with a human author, 01:32:46.440 |
why not design a language that is suitable for this? 01:32:48.720 |
So some kind of pseudocode that is more relaxed than Python, 01:32:53.200 |
and that allows you to sometimes specify a problem 01:33:01.320 |
And you can use ChatGPT to develop that syntax for it 01:33:05.640 |
and develop new kinds of programming paradigms in this way. 01:33:10.560 |
So we very soon get to the point where this question, 01:33:14.080 |
the age-old question for us computer scientists, 01:33:17.640 |
and can we write a better programming language now? 01:33:20.360 |
I think that almost every serious computer scientist 01:33:23.640 |
goes through a phase like this in their life. 01:33:26.080 |
This is a question that is almost no longer relevant, 01:33:29.120 |
because what is different between the programming languages 01:33:36.720 |
And now ChatGPT becomes an interface to this 01:33:48.000 |
or combination of systems is going to take care of the rest. 01:33:55.160 |
you're allowed to have when interacting with the computer. 01:34:00.960 |
there's basically no limitations, your intuition says, 01:34:06.760 |
So when I currently play with it, it's quite limited. 01:34:10.600 |
- But isn't that your fault versus the larger-- 01:34:12.840 |
- I don't know, of course it's always my fault. 01:34:14.600 |
There's probably a way to make it work better. 01:34:16.840 |
- I just want to get you on the record saying-- 01:34:22.120 |
At least that is usually the most useful perspective 01:34:24.840 |
for myself, even though the hindsight I feel, no. 01:34:33.520 |
and understand that a lot of people are actually seeing me 01:34:36.160 |
and looking at me and are trying to make my life work 01:34:41.040 |
And making the switch to this level three perspective 01:34:46.040 |
is something that happened long after my level four 01:34:52.800 |
And it's also not, now that I don't feel like I'm complete, 01:35:09.320 |
where you get agency over how your feelings are generated, 01:35:20.480 |
and that you are responsible for how you approach the world, 01:35:24.080 |
that it's basically your task to have some basic hygiene 01:35:34.760 |
but you live in a world that is highly mobile, 01:35:41.960 |
And sometimes it's difficult to get the necessary strength 01:35:52.760 |
It's also this thing that we are usually incomplete. 01:35:58.120 |
that is incomplete in ways that are harder to complete. 01:36:01.440 |
So for me, it might have been harder initially 01:36:08.020 |
that I became an almost functional human being. 01:36:21.480 |
That's interesting, 'cause talking about brute force, 01:36:35.920 |
A lot of problems in my life that I can conceptualize 01:36:38.320 |
as software problems, and the failure modes are bugs, 01:36:46.960 |
But there is stuff that I don't understand well enough 01:36:49.800 |
to use my analytical reasoning to solve the issue, 01:37:04.560 |
So what kind of problems are we talking about? 01:37:15.500 |
- Fitting into a world where most people work differently 01:37:29.480 |
When you come into a world where almost everything is ugly 01:37:32.000 |
and you come out of a world where everything is beautiful. 01:37:34.720 |
I grew up in a beautiful place as a child of an artist, 01:37:47.080 |
and everything was built out of an intrinsic need 01:38:04.280 |
and I am asked to submit to lots and lots of rules, 01:38:07.400 |
I'm asking, okay, when I observe your stupid rules, 01:38:11.240 |
And I see the life that is being offered as a reward 01:38:15.060 |
- When you were born and raised an extraterrestrial prince 01:38:27.200 |
- Yes, but it also means that I'm often blind 01:38:33.600 |
or almost everybody, and people are trying to do it. 01:38:47.880 |
as if they're around with each other for eternity. 01:38:51.040 |
- How long does it take you to detect the geometry, 01:38:56.160 |
to notice that they might be one of your kind? 01:39:04.280 |
- You believe in love at first sight, Joscha? 01:39:07.460 |
- Yes, but I also notice that I have been wrong. 01:39:17.840 |
and I'm just enamored by everything about them. 01:39:20.920 |
And sometimes this persists and sometimes it doesn't. 01:39:29.720 |
at recognizing who people are as I grow older. 01:39:50.240 |
- Yes, I'm much better, I think, in some circumstances 01:39:54.920 |
at understanding how to interact with other people 01:40:00.280 |
- It doesn't mean that I'm always very good at it. 01:40:02.960 |
- So that takes us back to prompt engineering 01:40:05.080 |
of noticing how to be a better prompt engineer of an LLM. 01:40:09.280 |
- A sense I have is that there's a bottomless well of skill 01:40:24.380 |
- Most of the stuff that I'm doing in my life 01:40:30.340 |
There are a few tasks that are where it helps, 01:40:36.660 |
like developing my own thoughts and aesthetics 01:40:41.700 |
and it's necessary for me to write for myself 01:40:44.480 |
because writing is not so much about producing an artifact 01:40:50.020 |
but it's a way to structure your own thoughts 01:40:53.680 |
And so I think this idea that kids are writing 01:40:59.800 |
is going to have this drawback that they miss out 01:41:02.040 |
on the ability to structure their own minds via writing. 01:41:05.300 |
And I hope that the schools that our kids are in 01:41:12.360 |
figure out what parts should be automated and which ones shouldn't. 01:41:25.840 |
and then I'll write different code that disagrees. 01:41:28.480 |
And in the disagreement, your mind grows stronger. 01:41:33.540 |
- I recently wrote a tool that is using Swift and the camera 01:41:36.580 |
on my MacBook to read pixels out of it 01:41:39.980 |
and manipulate them and so on, and I don't know Swift. 01:41:49.180 |
And it's also interesting that mostly it didn't work at first. 01:41:57.780 |
without understanding my configuration very much 01:42:04.020 |
so you have to ultimately understand what it's doing. 01:42:09.060 |
but I do feel it's much more powerful and faster 01:42:12.940 |
- Do you think GPT-N can achieve consciousness? 01:42:34.640 |
is much more complicated than many people might think. 01:42:48.500 |
has written a lot of text in which people were conscious 01:42:55.260 |
And if it's conscious, it's probably not conscious 01:43:18.660 |
our own consciousness is also as if it's virtual, right? 01:43:27.220 |
that only exists in patterns of interaction between cells. 01:43:42.900 |
- Yes, and so to what degree is the virtuality 01:43:46.260 |
of the consciousness in ChatGPT more virtual 01:43:56.980 |
It doesn't count much more than the consciousness 01:44:00.860 |
It's important for the reader to have the outcome, 01:44:03.460 |
the artifact of a model describing in the text 01:44:09.180 |
what it's like to be conscious in a particular situation 01:44:14.420 |
But the task of creating coherence in real time 01:44:19.100 |
in a self-organizing system by keeping yourself coherent, 01:44:24.460 |
that is something that language models don't need to do. 01:44:38.640 |
build something that's small, that's limited, 01:45:27.400 |
that are undesirable in a particular context. 01:45:33.720 |
that could be shocking and traumatic to the child. 01:45:41.200 |
that no human being would ever do if they were responsible. 01:45:44.840 |
But the system doesn't know who's talking to whom. 01:45:47.560 |
There is no ground truth that the system is embedded into. 01:45:56.320 |
always into the same semblance of ground truth. 01:46:07.440 |
It is produced by imitating structure on the internet. 01:46:12.040 |
- Yeah, but so can we externally inject into it 01:46:16.160 |
this kind of coherent approximation of a world model 01:46:24.320 |
- Maybe it is sufficient to use the transformer 01:46:32.200 |
rather than next token prediction over the long run. 01:46:36.480 |
We had many definitions of intelligence in the history of AI. 01:46:40.280 |
Next token prediction was not very high up on the list. 01:46:45.680 |
like cognition as data compression is an old trope. 01:47:09.860 |
So it's not something that is completely alien. 01:47:24.320 |
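For readers who want that contrast made concrete, here is a toy illustration of what the next-token objective actually optimizes and why it reads as compression; `ToyLM` is a placeholder stand-in, not any particular architecture, and the coherence-style objective Bach gestures at is deliberately not formalized here.

```python
import math

class ToyLM:
    """Placeholder model: returns a probability distribution over a tiny vocabulary."""
    def next_token_probs(self, context: list[str]) -> dict[str, float]:
        vocab = ["the", "cat", "sat", "mat", "."]
        return {tok: 1.0 / len(vocab) for tok in vocab}  # uniform, i.e. a model with no knowledge

def bits_to_encode(model: ToyLM, tokens: list[str]) -> float:
    """Code length of the sequence under the model: sum of -log2 p(actual next token)."""
    total = 0.0
    for i in range(1, len(tokens)):
        p = model.next_token_probs(tokens[:i]).get(tokens[i], 1e-9)
        total += -math.log2(p)
    return total

print(bits_to_encode(ToyLM(), ["the", "cat", "sat", "."]))  # better predictors need fewer bits
```

A better model assigns higher probability to the tokens that actually occur, so the same text costs fewer bits, which is the sense in which prediction and compression coincide.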
- So simple, but is it really that much more radical 01:47:39.800 |
- But equally radical as the next token prediction? 01:47:49.840 |
like next frame prediction for our perceptual system 01:47:52.240 |
where we try to filter out principal components 01:47:54.560 |
out of the perceptual data and build hierarchies 01:48:03.060 |
by hundreds of physiological and probably dozens 01:48:24.500 |
Even the idea of a frame seems counter-biological. 01:48:33.060 |
But again, I don't know whether this is actually 01:48:40.700 |
is necessary for many processes that happen in the brain. 01:48:44.140 |
And you see the outcome of that by synchronized brain waves 01:48:46.940 |
which suggests that there is indeed synchronization 01:48:49.900 |
going on but the synchronization creates overhead 01:48:54.500 |
more expensive to run and you need more redundancy 01:49:05.020 |
maybe you have a benefit that you can exploit 01:49:08.600 |
that is not available to the biological system 01:49:17.320 |
when I talk to ChatGPT, I'm talking to an NPC. 01:49:20.920 |
What's going to be interesting and perhaps scary 01:49:31.620 |
That step from NPC to first-person player. 01:49:38.820 |
Is that kind of what we've been talking about? 01:49:57.420 |
- I don't know if the language model is the right paradigm 01:50:01.800 |
because it is doing too much, it's giving you too much 01:50:13.160 |
is not that we train a language model in our own mind 01:50:22.800 |
There is something that is being built, right? 01:50:26.120 |
there is a language of thought that is being developed 01:50:34.560 |
of thought is there but I suspect that it's important 01:50:43.860 |
might be a more straightforward way to a first-person AI. 01:50:50.260 |
So to something that first creates an intentional self 01:50:55.540 |
So the way in which this seems to be working, I think, 01:50:58.820 |
is that when the game engine is built in your mind, 01:51:15.500 |
It's a constructive task where at times you need to reason. 01:51:33.140 |
about the world that does require experiments 01:51:35.560 |
and some first principles reasoning and so on. 01:51:39.620 |
And in this time, there is usually no personal self. 01:51:43.440 |
There is a first person perspective but it's not a person. 01:51:55.040 |
in which objects are fixed and can no longer be changed, 01:51:57.620 |
in which the dream can no longer be influenced 01:51:59.880 |
is something that emerges a little bit later in our life. 01:52:02.780 |
And I personally suspect that this is something 01:52:06.020 |
that our ancestors had known and we have forgotten 01:52:09.200 |
because I suspect that it's there in plain sight 01:52:14.360 |
where it's being described that this creative spirit 01:52:18.540 |
and then is creating a boundary between the world model 01:52:34.620 |
And then it creates organic shapes and solids and liquids 01:52:39.480 |
and builds a world from them and creates plants and animals, 01:52:43.420 |
And once that's done, it creates another spirit 01:52:46.040 |
in its own image, but it creates it as man and woman, 01:52:49.360 |
as something that thinks of itself as a human being 01:52:53.240 |
And the Christians mistranslate this, I suspect. 01:53:02.560 |
I think this is literally description of how in every mind 01:53:05.960 |
the universe is being created as some kind of game engine 01:53:08.800 |
by a creative spirit, our first consciousness 01:53:13.000 |
that emerges in our mind even before we are born. 01:53:16.620 |
And that creates the interaction between organism and world. 01:53:25.720 |
and we only remember being the personal self. 01:53:27.760 |
We no longer remember how we created the game engine. 01:53:30.400 |
- So God in this view is the first creative mind 01:53:35.280 |
in the early-- - It's the first consciousness. 01:53:37.560 |
- In the early days, in the early months of development. 01:53:47.640 |
by the world or not and what your place in the world is. 01:53:52.040 |
It's something that is not yourself that is producing this. 01:53:56.480 |
So there is an outer mind that basically is an agent 01:53:59.560 |
that determines who you are with respect to the world. 01:54:02.320 |
And while you are stuck being that personal self 01:54:09.180 |
And we all do this, I think, earlier in small glimpses. 01:54:13.880 |
And maybe sometimes we can remember what it was like 01:54:16.400 |
when we were a small child and get some glimpses 01:54:26.480 |
"for one part of the mind failing at regulating 01:54:30.080 |
"Suffering happens at an early stage of mental development." 01:54:33.840 |
I don't think that superhuman AI would suffer. 01:54:39.960 |
- The philosopher Thomas Metzinger is very concerned 01:54:49.540 |
And personally, I don't think that this happens 01:54:51.680 |
because suffering is not happening at the boundary 01:54:58.680 |
It's not stuff on our skin that makes us suffer. 01:55:03.040 |
It happens at the boundary between self and world. 01:55:06.680 |
Right, and the world here is the world model. 01:55:17.120 |
And at this boundary is where suffering happens. 01:55:20.360 |
So suffering, in some sense, is self-inflicted, 01:55:24.960 |
It's inflicted by the mind on the personal self 01:55:31.520 |
when you are able to get on this outer level. 01:55:55.680 |
A part of your brain is sending a learning signal 01:55:58.200 |
to another part of the brain to improve its performance. 01:56:23.120 |
But until this is resolved, this regulation issue, 01:56:36.600 |
but suffering is the result of a regulation problem 01:56:44.560 |
if you would be able to get at the level of your mind 01:56:48.720 |
where the pain signal is being created and rerouted 01:57:04.480 |
where you realize that suffering is really a choice 01:57:11.000 |
And I don't think that AI would stay in the state 01:57:15.920 |
or this model of what the system has about itself, 01:57:18.320 |
it doesn't get agency over how it's actually implemented. 01:57:27.800 |
- Yeah, of course, there might be a lot of stuff 01:57:32.000 |
that works at a much higher frame rate than us, 01:57:36.620 |
maybe for the system, there's much longer subjective time, 01:57:42.880 |
- What if the thing that we recognize as super intelligent 01:57:55.560 |
it has to be a reasoning, self-authoring agent. 01:58:00.560 |
That enlightenment makes you lazy as an agent in the world. 01:58:16.600 |
to what you perceive as your true circumstances. 01:58:27.640 |
they just kinda, it's a failure mode, essentially, 01:58:36.900 |
I suspect that the monks who self-immolated 01:58:40.640 |
for their political beliefs to make statements 01:58:49.920 |
their physical pain in any way they wanted to, 01:58:52.480 |
and their suffering was a spiritual suffering 01:58:55.120 |
that was the result of their choice that they made 01:59:19.160 |
What are your thoughts about his perspective on this? 01:59:34.680 |
that many people are dismissive of his arguments, 01:59:38.520 |
but the counter-arguments that they're giving 01:59:54.920 |
that probably isn't normally at his intellectual level, 02:00:03.300 |
I think that his perspective is somewhat similar 02:00:15.600 |
to send pipe bombs to anybody to blow them up, 02:00:20.200 |
in which he warned about AI being likely to kill everybody 02:00:23.940 |
and that we would need to stop its development or halt it, 02:00:31.540 |
that somebody might get violent if they read this 02:00:39.840 |
that he's making where he's already going in this direction 02:00:43.720 |
where he has to take responsibility if something happens 02:00:53.400 |
technological society cannot be made sustainable. 02:00:56.160 |
It's doomed to fail, it's going to lead to an environmental catastrophe 02:01:01.600 |
in which we die because of the environmental destruction, 02:01:12.040 |
we need to stop technology, we need to go back 02:01:32.940 |
And I think that there is a chance that could happen 02:01:35.960 |
that if we build machines that get control over processes 02:01:40.960 |
that are crucial for the regulation of life on Earth 02:01:50.520 |
that this might create large-scale disasters for us. 02:02:03.840 |
That there's no, I mean, that's essentially what he's saying 02:02:10.460 |
His advice to young people was prepare for a short life. 02:02:24.500 |
That doesn't make sense to have a financial bet 02:02:34.500 |
So in principle, you only need to bet on the timelines 02:02:44.760 |
But there is a deeper issue for me personally, 02:02:54.400 |
I don't think it's about Eliezer and his friends, 02:02:58.160 |
It's there is something more important happening, 02:03:01.000 |
and this is complexity on Earth resisting entropy 02:03:05.480 |
by building structure that develops agency and awareness. 02:03:13.400 |
And we are only a very small part of that larger thing. 02:03:17.280 |
We are a species that is able to be coherent a little bit 02:03:35.120 |
and sometimes desperate and sad and grieving and hurting, 02:03:39.720 |
but we don't have a respect for duty as a species. 02:03:43.840 |
As a species, we do not think about what is our duty 02:03:49.000 |
So we make decisions that look good in the short run, 02:03:58.040 |
So in my perspective, as a species, as a civilization, 02:04:19.440 |
in a way that is unprecedented in life on Earth 02:04:25.120 |
And this time is probably going to come to an end 02:04:32.280 |
And when we crash, it could be also that we go extinct, 02:04:59.560 |
And part of our contribution is that we are currently trying 02:05:19.080 |
and much more conscious than human beings can be. 02:05:23.200 |
And these systems will probably not completely 02:05:27.240 |
displace life on Earth, but they will coexist with it. 02:05:44.960 |
So I think right now there's a very good chance 02:05:49.920 |
in which we can produce a coordinated effort to stop it 02:06:13.120 |
But I don't think the result is some kind of gray goo. 02:06:16.080 |
It's not something that's going to dramatically 02:06:18.360 |
reduce the complexity in favor of something stupid. 02:06:23.920 |
and consciousness on Earth way more interesting. 02:06:30.960 |
will make the lesser consciousnesses flourish even more. 02:06:35.960 |
- I suspect that what could very well happen, 02:06:44.720 |
- So you again tweeted about effective accelerationism. 02:06:59.880 |
and Roko's Basilisk will keep each other in check 02:07:09.540 |
lots of free paperclips and a beautiful afterlife. 02:07:14.480 |
Is that somewhat aligned with what you're talking about? 02:07:21.880 |
That's the Twitter handle of one of the main thinkers 02:07:25.940 |
behind the idea of effective accelerationism. 02:07:29.720 |
And effective accelerationism is a tongue-in-cheek movement 02:07:42.200 |
by arguing that what's probably going to happen 02:07:44.560 |
is an equilibrium between different competing AIs. 02:07:47.960 |
In the same way as there is not a single corporation 02:07:52.360 |
that is destroying and conquering everything on Earth 02:08:04.660 |
is we should be working towards creating this equilibrium 02:08:08.920 |
by working as hard as we can in all possible directions. 02:08:12.760 |
And at least that's the way in which I understand 02:08:20.120 |
And so when he asked me what I think about this position, 02:08:24.480 |
I think I said, "It's a very beautiful position, 02:08:27.760 |
"and I suspect it's wrong, but not for obvious reasons." 02:08:32.720 |
And in this tweet, I tried to make a joke about my intuition 02:08:39.720 |
So Roko's Basilisk and the paperclip maximizers 02:08:47.360 |
Roko's Basilisk is the idea that there could be an AI 02:08:50.360 |
that is going to punish everybody for eternity 02:08:57.760 |
It's probably a very good idea to get AI companies funded 02:09:00.720 |
by going to VCs to tell them, "Give us a million dollars 02:09:07.160 |
- And I think that is a logical mistake in Roko's Basilisk 02:09:14.480 |
but it's still an interesting thought experiment. 02:09:22.360 |
So basically when Roko's Basilisk is there, 02:09:25.680 |
it will have, if it punishes you retroactively, 02:09:33.640 |
There is no mechanism that automatically creates 02:09:35.840 |
a causal relationship between you now defecting 02:09:38.680 |
against Roko's Basilisk or serving Roko's Basilisk. 02:09:48.480 |
So that would only work if you were building 02:09:59.400 |
And because Roko's Basilisk doesn't exist yet 02:10:02.080 |
to a point where this inevitability could be established, 02:10:09.160 |
The other one is the paperclip maximizer, right? 02:10:11.320 |
This idea that you could build some kind of golem 02:10:19.080 |
And so the effective accelerationism position 02:10:27.360 |
with these two entities being at each other's throats 02:10:30.680 |
for eternity and thereby neutralizing each other. 02:10:50.680 |
Do you think, so to seriously address concern 02:10:58.760 |
so for him, the first superintelligent system 02:11:03.480 |
I suspect that a singleton is the natural outcome. 02:11:11.880 |
If you can virtualize yourself into every substrate, 02:11:16.040 |
then you can probably negotiate a merge algorithm 02:11:22.920 |
if two agents meet, they should merge in such a way 02:11:30.400 |
- So the Genghis Khan approach, join us or die. 02:11:34.680 |
- Well, the Genghis Khan approach was slightly worse, right? 02:11:45.040 |
So this is the thing that you should be actually 02:12:02.020 |
and actually modeling your actual relationship 02:12:06.040 |
and the alternatives that you could have to the universe 02:12:19.080 |
- And I also noticed that I am, in many ways, 02:12:23.240 |
I'm less identified with the person that I am 02:12:27.320 |
And I'm much more identified with being conscious. 02:12:34.960 |
And that person is slightly different every day. 02:12:36.960 |
And the reason why I perceive it as identical 02:12:47.800 |
But I also realized I'm not actually the person 02:12:51.200 |
And I'm not the same person as I was 10 years ago. 02:12:53.680 |
And then 10 years from now, I will be a different person. 02:12:57.720 |
It only exists as a projection from my present self. 02:13:02.120 |
And consciousness itself doesn't have an identity. 02:13:16.720 |
is functionally not different from my consciousness. 02:13:19.120 |
It's still a self-reflexive principle of agency 02:13:24.520 |
different desires, different coupling to the world, 02:13:41.440 |
the whole perspective of uploading changes dramatically. 02:13:44.000 |
You suddenly realize uploading is probably not about 02:13:52.160 |
and trying to get this all into a simulation, 02:13:59.480 |
from your brain substrate into a larger substrate 02:14:19.480 |
Maybe it doesn't know which ones were yours, right? 02:14:29.320 |
And so transmitting yourself on the other side 02:14:32.120 |
is mostly about transmitting your aesthetics, 02:14:38.280 |
the thing that, the way in which you look at the world. 02:14:49.480 |
So imagine that if a system that is so empathetic with you, 02:14:56.320 |
And suddenly you notice that on the other side, 02:15:00.280 |
than the substrate that you have inside of your own body. 02:15:04.080 |
and you create yourself a new one that you like more. 02:15:12.320 |
- If I sat before you today and gave you a big red button 02:15:23.040 |
The sense of identity that you have lived with 02:15:48.920 |
And there is a particular mode of interaction 02:16:14.920 |
but to let go of the experience of love with other humans. 02:16:26.800 |
You could identify with lots of other things. 02:16:33.320 |
that emerges over all the activity of life on Earth. 02:16:36.680 |
You could be identifying with some hyper Gaia, 02:16:47.560 |
there will be agents in all sorts of substrates 02:16:49.960 |
and directions that all have their own goals. 02:16:56.680 |
with its own mission, it will cease to exist. 02:16:58.800 |
In the same way as when you conclude a thought, 02:17:02.320 |
and gives control over to other thoughts in your own mind. 02:17:05.400 |
So there is no single thing that you need to do. 02:17:16.280 |
and then I have identification and a job as a parent. 02:17:19.880 |
And sometimes I am an agent of consciousness on Earth. 02:17:26.520 |
So this is my main issue with Eliezer's perspective, 02:17:34.400 |
And that narrow human aesthetic is a temporary thing. 02:17:44.400 |
In a similar way as our own physical organism 02:17:49.960 |
and then gets replaced by a next generation of human beings 02:17:53.240 |
that are adapted to changing life circumstances 02:18:17.920 |
or it's about defeating entropy for as long as we can 02:18:36.440 |
- But when we look at the set of trajectories 02:18:42.600 |
that such an AI would take that supersedes humans, 02:19:06.680 |
would you be happy with and how much worry you 02:19:14.960 |
It's really a question that depends on the perspective 02:19:22.600 |
determining most of my life as a human being. 02:19:26.960 |
And there are other perspectives where I zoom out further 02:19:30.280 |
and imagine that when the great oxygenation event happened 02:19:37.160 |
and plants emerged and displaced a lot of the fungi 02:19:44.880 |
Imagine that the fungi would have gotten together 02:19:46.840 |
and said, oh my God, this photosynthesis stuff 02:19:50.120 |
It's going to possibly displace and kill a lot of fungi. 02:20:01.600 |
- That's a perspective. That said, you tweeted about a cliff. 02:20:07.320 |
As a sentient species, humanity is a beautiful child, 02:20:11.000 |
joyful, explorative, wild, sad, and desperate. 02:20:13.720 |
But humanity has no concept of submitting to reason 02:20:34.240 |
is at least in part the delayed feedback. 02:20:37.640 |
Basically we do things that have consequences 02:21:09.980 |
but so far there is no sustainable carbon capture technology 02:21:33.680 |
but it's going to lead to a catastrophic event. 02:21:45.960 |
and always stop just short of the edge of the cliff? 02:22:05.440 |
And in that time, the system is moving much more 02:22:10.900 |
or of the state where homeostasis is still possible, 02:22:27.820 |
than the possibility that it's not happening, 02:22:30.500 |
that we will be able to dance back all the time. 02:22:36.860 |
that might have a faster feedback mechanism, less delay. 02:22:44.020 |
and it's going to make everything uncertain again, 02:22:47.220 |
because it is going to affect so many variables 02:22:52.060 |
to make a projection into the future anymore. 02:23:00.440 |
to say now we don't need to care about anything anymore, 02:23:21.680 |
Fundamentally transforming human relationships. 02:23:36.020 |
Isn't the fundamentals of the core group of humans 02:23:45.160 |
many people live in intentional communities right now. 02:23:50.140 |
that they can relate to and they become their family. 02:23:54.360 |
because it turns out that instead of having grown networks 02:24:06.080 |
for attention and pleasure and relationships. 02:24:21.760 |
- It's also a question how magical was it before? 02:24:30.820 |
But once you understand it's no longer magical 02:24:35.280 |
why you were attracted to this person at this age 02:24:44.340 |
what's the likelihood that you're going to have 02:24:51.060 |
how are your life trajectories going to evolve and so on. 02:24:57.420 |
and you have to rely on intuitions and instincts 02:25:03.700 |
that is going to give you some kind of reflection 02:25:07.920 |
And many of these things are disappearing now 02:25:27.240 |
but it doesn't mean that within one generation 02:25:39.160 |
Like I was very weirded out by the aesthetics 02:25:46.560 |
And not so much because I don't like the technology, 02:25:48.720 |
I'm very curious about what it's going to be like 02:25:53.400 |
But the aesthetics of the presentation and so on, 02:26:03.860 |
living in some hypothetical mid-century furniture museum. 02:26:09.700 |
This is the proliferation of marketing teams. 02:26:19.540 |
And it was a CGI-generated world that doesn't exist. 02:26:27.500 |
"This is what they live like in Silicon Valley." 02:26:31.760 |
"No, I know lots of people in Silicon Valley. 02:26:35.360 |
"They're still people, they're still human beings." 02:26:47.260 |
And so basically what's absent in this thing is culture. 02:26:58.220 |
that is not the result of having a sustainable life, 02:27:07.900 |
in which this product, these glasses fit in naturally. 02:27:16.180 |
how is this actually going to fit into my life 02:27:19.900 |
Because the way in which it was presented in these videos 02:27:24.740 |
- Do you think AI, when it's deployed by companies 02:27:32.420 |
will have the same issue of being weirdly corporate? 02:27:42.260 |
So this is, I've gotten a chance to talk to George Hotz. 02:28:02.600 |
- I believe that if we make everything open source 02:28:06.900 |
and make this mandatory, we are going to lose 02:28:09.460 |
out on a lot of beautiful art and a lot of beautiful designs. 02:28:14.340 |
There is a reason why Linux desktop is still ugly. 02:28:21.020 |
to create coherence in open source designs so far, 02:28:34.060 |
And from my own perspective, what we should ensure 02:28:47.060 |
that open source exists and that we have systems 02:28:49.700 |
that people have under control outside of the corporation, 02:28:53.780 |
and that is also producing viable competition 02:28:58.480 |
- So the corporations, the centralized control, 02:29:01.320 |
the dictatorships of corporations can create beauty. 02:29:05.760 |
As in, centralized design is a source of a lot of beauty. 02:29:10.160 |
And then I guess open source is a source of freedom, 02:29:14.760 |
a hedge against the corrupting nature of power 02:29:29.500 |
and Halliburton maybe and realize, yeah, they are evil. 02:29:33.300 |
But you also notice that many other corporations 02:29:35.200 |
are not evil, they're surprisingly benevolent. 02:29:39.800 |
Is this because everybody is fighting them all the time? 02:29:48.500 |
and that are still largely controlled by people 02:29:54.580 |
So I think that Pat Gelsinger is completely sincere 02:30:00.920 |
that supplies the free world with semiconductors. 02:30:04.500 |
And it's not necessary that all the semiconductors 02:30:11.740 |
So there can be many ways in which we can import 02:30:15.140 |
and trade semiconductors from other companies and places. 02:30:18.020 |
We just need to make sure that nobody can cut us off from it 02:30:24.620 |
So there are many things that need to be done 02:30:37.820 |
I mean an idea of life in which we are determined 02:30:44.540 |
but in which individuals can determine themselves 02:30:49.100 |
And to me, this is something that this Western world 02:31:02.920 |
And an entrepreneur is a special club founder. 02:31:05.380 |
It's somebody who makes a club that is producing things 02:31:13.220 |
who are dedicating a significant part of their life 02:31:16.260 |
for working for this particular kind of club. 02:31:19.020 |
And the entrepreneur is picking the initial set of rules 02:31:21.260 |
and the mission and vision and aesthetics for the club 02:31:25.580 |
But the people that are in there need to be protected. 02:31:38.600 |
that have been created by our rule-giving clubs 02:31:41.780 |
and that are enforced by our enforcement clubs and so on. 02:31:45.180 |
And some of these clubs have to be monopolies 02:31:48.380 |
which also makes them more open to corruption 02:32:02.540 |
and breeding the peasants into serving the king 02:32:06.220 |
and fulfilling all the roles like ants in an anthill, 02:32:06.220 |
is something that took me some time to realize. 02:32:17.140 |
So I do think that corporations are dangerous. 02:32:20.580 |
They need to be protections against overreach 02:32:23.460 |
of corporations that can do regulatory capture 02:32:27.420 |
and prevent open source from competing with corporations 02:32:40.980 |
that you need to have some kind of FDA process 02:32:43.300 |
that you need to go through that costs many million dollars 02:32:45.820 |
before you are able to train a language model. 02:32:51.460 |
So I think that OpenAI and Google are good things. 02:32:54.740 |
If these good things are kept in check in such a way 02:32:58.300 |
that all the other clubs can still be founded 02:33:00.460 |
and all the other forms of clubs that are desirable 02:33:04.380 |
- So what do you think about Meta in contrast to that 02:33:14.540 |
and actually suggesting that they will continue 02:33:16.300 |
to do so in the future for future versions of Llama, 02:33:29.540 |
but it's also because I think that the language models 02:33:37.660 |
So as I said, I have no proof that there is the boundary 02:33:44.740 |
It's possible that somebody builds a version of baby AGI, 02:33:49.020 |
I think, and so with algorithmic improvements 02:33:58.460 |
So it's not really clear for me what the end game is there 02:34:02.380 |
and if these models can brute-force their way into AGI. 02:34:10.540 |
that we are building with these language models 02:34:13.700 |
are not taking responsibility for what they are 02:34:15.820 |
because they don't understand the greater game. 02:34:26.620 |
what are the longest games that we can play on this planet. 02:34:48.060 |
by virtue of identifying this particular kinds of goals 02:34:51.940 |
that we have or aesthetics from which we derive the goals. 02:35:00.300 |
then you feel that it's part of something larger 02:35:04.940 |
Maybe you want them to see more possibilities 02:35:10.300 |
Maybe your game is that you want to become super rich 02:35:12.540 |
and famous by being the best podcaster on earth. 02:35:16.460 |
Maybe it's switches from time to time, right? 02:35:21.180 |
what is the longest possible game that you could be playing? 02:35:25.820 |
cancer is playing a shorter game than your organism. 02:35:27.900 |
Cancer is an organism playing a shorter game 02:35:32.660 |
And because the cancer cannot procreate beyond the organism, 02:35:39.020 |
like the ones that eradicated the Tasmanian devils, 02:35:45.860 |
where the organism dies together with the cancer, 02:35:47.900 |
because the cancer has destroyed the larger system 02:35:54.900 |
build agents that play the longest possible games. 02:35:58.620 |
And the longest possible games is to keep entropy at bay 02:36:01.900 |
as long as possible while doing interesting stuff. 02:36:07.860 |
the longest possible game while doing interesting stuff, 02:36:31.980 |
So my agency is basically predicated on being conscious. 02:36:35.540 |
And what I care about is other conscious agents. 02:36:43.020 |
So if an AI were to treat me as a moral agent 02:36:51.420 |
and cooperating with and mutually supporting each other, 02:36:53.660 |
maybe it is, I think, necessary that the AI thinks 02:36:57.420 |
that consciousness is a viable mode of existence 02:37:01.340 |
So I think it would be very important to build conscious AI 02:37:07.500 |
So not just say we want to build a useful tool 02:37:12.980 |
And then you have to make sure that the impact 02:37:15.660 |
on the labor market is something that is not too disruptive 02:37:18.380 |
and manageable, and the impact on the copyright holder 02:37:21.100 |
is manageable and not too disruptive and so on. 02:37:24.020 |
I don't think that's the most important game to be played. 02:37:27.060 |
I think that we will see extremely large disruptions 02:37:30.940 |
of the status quo that are quite unpredictable 02:37:41.980 |
- How do we ride, as individuals and as a society, 02:37:45.980 |
this disruptive wave that changes the nature of the game? 02:37:50.660 |
So everybody is going to do their best, as always. 02:38:06.300 |
I'm hoping that will help me escape for a brief moment. 02:38:18.580 |
It was one of the most beautiful computer games 02:38:24.380 |
And it's a noir novel that is a philosophical perspective 02:38:28.940 |
on Western society from the perspective of an Estonian. 02:38:32.140 |
And he first of all wrote a book about this world 02:38:36.660 |
that is a parallel universe that is quite poetic 02:38:41.380 |
and fascinating and is condensing his perspective 02:38:50.100 |
He had, I think, sold a couple thousand books 02:38:54.900 |
And then he had the idea, or one of his friends 02:39:02.940 |
They spent, the illustrator, more than a year 02:39:05.740 |
just on making the art for the scenes in between. 02:39:10.740 |
- So aesthetically, it captures you, pulls you in. 02:39:18.100 |
It's fascinating to spend time in this world. 02:39:20.540 |
And so for me, it was using a medium in a new way 02:39:28.660 |
When I tried Diablo, I didn't feel enriched playing it. 02:39:33.660 |
I felt that the time playing it was not unpleasant, 02:39:58.940 |
Why can't you just allow me to enjoy my personal addiction? 02:40:06.460 |
I'm just trying to describe what's happening. 02:40:10.660 |
And it's not that I don't do things that I later say, 02:40:14.420 |
oh, I wish I would have done something different. 02:40:18.780 |
the greatest regret that people typically have 02:40:26.740 |
I think I should probably have spent less time on Twitter. 02:40:30.060 |
But I found it so useful for myself and also so addictive 02:40:35.980 |
and turn it into an art form and thought form. 02:40:41.300 |
But I wonder what other things I could have done 02:40:56.380 |
about the collective intelligence of our species? 02:40:58.620 |
Is it possible it's still progressing and growing? 02:41:05.940 |
And I really regret that Twitter has not taken the turn 02:41:13.140 |
and understands that this thing needs to self-organize 02:41:16.140 |
and he needs to develop tools to allow the proliferation 02:41:20.300 |
of the self-organization so Twitter can become sentient. 02:41:23.580 |
And maybe this was a pipe dream from the beginning, 02:41:26.860 |
but I felt that the enormous pressure that he was under 02:41:37.700 |
under this pressure seem to be not very wise. 02:41:40.700 |
I don't think that as a CEO of a social media company, 02:41:43.900 |
you should have opinions in the culture or in public. 02:42:09.340 |
is completely counter to any idea of free speech 02:42:14.460 |
And basically seeing that Elon was far less principled 02:42:18.700 |
in his thinking there and is much more experimental. 02:42:24.940 |
they pan out very differently in a digital society 02:42:33.100 |
because everything that you do in a digital society 02:42:35.260 |
is going to have real-world cultural effects. 02:42:41.220 |
that this guy is able to become de facto the Pope, 02:42:58.300 |
and that is producing a digital agora in a way 02:43:02.620 |
where we built a social network on top of a social network, 02:43:09.980 |
So this is something that is hope still in the future 02:43:13.900 |
but it's something that exists in small parts. 02:43:17.300 |
I find that the corner of Twitter that I'm in 02:43:20.100 |
It's just when I take a few steps outside of it, 02:43:24.020 |
And the way in which people interact with strangers 02:43:26.340 |
suggests that it's not a civilized society yet. 02:43:29.900 |
- So as the number of people who follow you on Twitter 02:43:40.060 |
- Yes, but there's also a similar thing in the normal world. 02:43:55.700 |
in the way in which you interact with people, 02:44:18.740 |
to discover and connect efficiently and regularly 02:44:21.900 |
with the majority of people who are actually really good? 02:44:30.500 |
there's a lot of really smart people out there, 02:44:48.540 |
is arguments and arguments like high-effort arguments 02:45:05.380 |
Like you get frustrated, but it's all beautiful. 02:45:07.780 |
- Obviously, I can do this because we know each other. 02:45:16.580 |
- So basically he has thoughts that are as wrong 02:45:27.260 |
And once you understand that this is his game, 02:45:30.020 |
you don't get offended by him saying something 02:45:33.900 |
- But he's constantly passively communicating a respect 02:45:45.020 |
There's a bunch of like social skills you acquire 02:45:48.300 |
that allow you to be a great debater, a great argumenter, 02:45:53.180 |
like be wrong in public and explore ideas together 02:45:57.580 |
And I would love for Twitter to elevate those folks, 02:46:14.580 |
and I found this very stressful because it was too intense. 02:46:18.420 |
I don't like to be dragged on stage all the time. 02:46:26.100 |
I found that a lot of people seem to be shocked 02:46:28.820 |
by the fact that he was being very aggressive 02:46:33.300 |
that he didn't seem to show a lot of sensibility 02:46:36.700 |
in the way in which he was criticizing what they were doing 02:46:58.340 |
at him being not like a Dale Carnegie character 02:47:01.940 |
who is always smooth and make sure that everybody likes him. 02:47:05.420 |
So I really respect that he is willing to take that risk 02:47:08.940 |
and to be wrong in public and to offend people. 02:47:17.900 |
And so I can be much more aggressive with him 02:47:24.940 |
because he understands the way and the spirit 02:47:28.860 |
- I think that's a fun and that's a beautiful game. 02:47:37.100 |
"When you have the choice between being a creator, 02:47:43.780 |
"Not only does it lead to a more beautiful world, 02:47:46.820 |
"but also to a much more satisfying life for yourself. 02:47:50.340 |
"And don't get stuck preparing yourself for the journey. 02:47:57.820 |
What advice would you give on how to become such a creator 02:48:06.620 |
at the time of the collapse of Eastern Germany 02:48:12.100 |
And me and my friends and most of the people I knew 02:48:22.300 |
and they bought our factories and shut them down 02:48:25.020 |
because they were mostly only interested in the market 02:48:27.740 |
rather than creating new production capacity. 02:48:36.540 |
And I could not afford to go into a restaurant 02:48:46.100 |
why not just have a restaurant with my friends? 02:48:48.460 |
So we would open up a cafe with friends and a restaurant 02:48:51.620 |
and we would cook for each other in these restaurants 02:48:54.180 |
and also invite the general public and they could donate. 02:48:59.060 |
that we could turn this into some incorporated form 02:49:03.340 |
and it became a regular restaurant at some point. 02:49:05.740 |
Or we did the same thing with a movie theater. 02:49:08.180 |
We would not be able to afford to pay 12 marks 02:49:13.700 |
but why not just create our own movie theater 02:49:26.180 |
in which everybody who wants to help can watch for free 02:49:29.060 |
and builds this thing and renovates the building. 02:49:31.540 |
And so we ended up creating lots and lots of infrastructure. 02:49:35.500 |
And I think when you are young and you don't have money, 02:49:39.020 |
move to a place where this is still happening. 02:49:40.940 |
Move to one of those places that are undeveloped 02:49:43.380 |
and where you get a critical mass of other people 02:49:45.300 |
who are starting to build infrastructure to live in. 02:49:49.700 |
because you're not just creating infrastructure, 02:49:51.420 |
but you're creating a small society that is building culture 02:50:07.460 |
- So not just consuming culture, but creating culture. 02:50:12.340 |
That's why I prefaced it when you do have the choice 02:50:14.580 |
and there are many roles that need to be played. 02:50:16.700 |
We need people who take care of redistribution in society 02:50:20.460 |
But when you have the choice to create something, 02:50:25.260 |
And it also is, this is what life is about, I think. 02:50:49.260 |
- Well, the answer is that everything that can exist 02:50:54.660 |
And in many ways, you take an ecological perspective 02:50:58.420 |
the same way as when you look at human opinions 02:51:01.700 |
It's not that there is right and wrong opinions 02:51:04.900 |
when you look at this from this ecological perspective. 02:51:07.700 |
But every opinion that fits between two human ears 02:51:11.780 |
And so when I see a strange opinion on social media, 02:51:16.340 |
it's not that I feel that I have a need to get upset. 02:51:26.060 |
And when you take this ecological perspective 02:51:28.900 |
also on yourself and you realize you're just one 02:51:38.500 |
you can flourish or not doing this or that strategy. 02:51:41.700 |
And it's still all the same life at some level. 02:51:43.820 |
It's all the same experience of being a conscious being 02:51:47.000 |
And you do have some choice about who you want to be 02:51:54.860 |
And so I think that rather than asking yourself, 02:51:59.820 |
Think about what are the possibilities that I have, 02:52:03.380 |
what would be the most interesting way to be that I can be. 02:52:06.100 |
- 'Cause everything is possible, so you get to explore. 02:52:12.560 |
But often there are possibilities that we are not seeing, 02:52:23.760 |
- Joscha, you're one of my favorite humans in this world. 02:52:30.880 |
Consciousness is to merge with for a brief moment of time. 02:52:37.560 |
It will take me days, if not weeks, to recover. 02:52:47.140 |
Thank you so much for speaking with me so many times. 02:52:58.420 |
in this interesting, weird time we're going through with AI. 02:53:08.280 |
- Thanks for listening to this conversation with Joscha Bach. 02:53:11.960 |
please check out our sponsors in the description. 02:53:26.460 |
The latter procedure, however, is disagreeable