David Chalmers: The Hard Problem of Consciousness | Lex Fridman Podcast #69
Chapters
0:00 Introduction
2:23 Nature of reality: Are we living in a simulation?
19:19 Consciousness in virtual reality
27:46 Music-color synesthesia
31:40 What is consciousness?
51:25 Consciousness and the meaning of life
57:33 Philosophical zombies
61:38 Creating the illusion of consciousness
67:03 Conversation with a clone
71:35 Free will
76:35 Meta-problem of consciousness
78:40 Is reality an illusion?
80:53 Descartes' evil demon
83:20 Does AGI need consciousness?
93:47 Exciting future
95:32 Immortality
The following is a conversation with David Chalmers. 00:00:16.520 |
"Why does the feeling which accompanies awareness 00:00:25.480 |
Many people who worry about AI safety and ethics 00:00:31.820 |
and should be engineered into AI systems of the future. 00:00:38.280 |
and discoveries yet to be made about consciousness, 00:00:45.240 |
may nevertheless be very important for engineers 00:01:04.480 |
As usual, I'll do one or two minutes of ads now, 00:01:29.840 |
Brokerage services are provided by Cash App Investing, 00:01:35.980 |
Since Cash App does fractional share trading, 00:01:38.280 |
let me mention that the order execution algorithm 00:01:42.080 |
to create the abstraction of fractional orders 00:01:51.600 |
provides an easy interface that takes a step up 00:01:54.200 |
to the next layer of abstraction over the stock market, 00:01:57.080 |
making trading more accessible for new investors 00:02:02.760 |
If you get Cash App from the App Store or Google Play 00:02:13.460 |
that is helping to advance robotics and STEM education 00:02:18.640 |
And now, here's my conversation with David Chalmers. 00:02:27.420 |
There's probably gonna be a lot of simulations 00:02:34.700 |
it'll be indistinguishable from a non-simulated reality. 00:02:39.700 |
And although we could keep searching for evidence 00:02:46.000 |
any of that evidence in principle could be simulated. 00:02:58.720 |
- Yeah, I definitely think it's interesting and useful. 00:03:01.000 |
In fact, I'm actually writing a book about this right now, 00:03:14.680 |
Descartes said maybe you're being fooled by an evil demon 00:03:21.760 |
all this stuff is real when in fact it's all made up. 00:03:25.880 |
Well, the modern version of that is how do you know 00:03:30.880 |
Then the thought is if you're in a simulation, 00:03:34.560 |
So that's teaching us something about knowledge. 00:03:39.480 |
I think there's also really interesting questions 00:03:56.880 |
and even if we are in a simulation, all of this is real. 00:03:59.400 |
That's why I call this reality 2.0, new version of reality, 00:04:05.440 |
- So what's the difference between quote unquote real world 00:04:12.560 |
So we interact with the world by perceiving it. 00:04:27.360 |
that's quote unquote real, that exists perhaps 00:04:30.400 |
without us being there, and the world as you perceive it? 00:04:35.400 |
- Well, the world as we perceive it is a very simplified 00:04:39.160 |
and distorted version of what's going on underneath. 00:04:42.760 |
We already know that from just thinking about science. 00:04:48.760 |
in what we perceive, but we still know quantum mechanics 00:04:55.280 |
is this very kind of simplified picture of colors 00:05:04.600 |
We know there's a, that's what the philosopher 00:05:18.160 |
or super strings or whatever the latest thing is. 00:05:32.560 |
or metaphysical reality is going on underneath the world 00:05:37.640 |
The world of the manifest image is this very simple thing 00:05:48.840 |
Maybe philosophy could help tell us about that too. 00:06:02.720 |
I've got a glass here and it's got all these, 00:06:05.320 |
it appears to me a certain way, a certain shape, 00:06:10.120 |
And he said, "What is the nature of the thing in itself?" 00:06:15.480 |
it's a hypothesis about the nature of the thing in itself. 00:06:22.640 |
it's okay, it's actually a bunch of data structures 00:06:25.040 |
running on a computer in the next universe up. 00:06:34.560 |
and somehow trivially, crudely just scaled up in some sense. 00:06:53.040 |
and particles and quarks and maybe even strings, 00:06:57.220 |
all of that requires something just infinitely 00:07:00.760 |
many orders of magnitude more of scale and complexity. 00:07:05.760 |
Do you think we're even able to conceptualize 00:07:23.720 |
Does it get into this fuzzy area that's not useful at all? 00:07:31.360 |
incredibly complicated and for us within our universe 00:07:36.280 |
to build a simulation of a universe as complicated as ours 00:07:51.200 |
maybe an infinite universe could somehow simulate 00:07:53.660 |
a copy of itself, but that's going to be hard. 00:08:10.020 |
so you could think of it, and turtles all the way down, 00:08:17.780 |
could also create another simulating universe. 00:08:24.220 |
and you've also mentioned this hilarious idea, 00:08:29.140 |
that there may be simulations within simulations, 00:08:31.860 |
arbitrarily stacked levels, and that there may be, 00:08:39.320 |
referencing The Hitchhiker's Guide to the Galaxy. 00:08:41.860 |
If we're indeed in a simulation within a simulation, 00:08:45.940 |
at level 42, what do you think level zero looks like? 00:08:50.940 |
- I would expect that level zero is truly enormous. 00:08:57.780 |
it's some extraordinarily large finite capacity, 00:09:03.200 |
Maybe it's got some very high set-theoretic cardinality 00:09:06.820 |
that enables it to support just any number of simulations. 00:09:14.360 |
slightly smaller degree of infinity at level one, 00:09:18.880 |
so by the time you get down to us at level 42, 00:09:21.480 |
maybe there's plenty of room for lots of simulations 00:09:27.780 |
If the top universe is only a small finite capacity, 00:09:35.360 |
very, very serious limits on how many simulations 00:09:49.120 |
as we get down levels, more and more simplified 00:09:54.360 |
- Yeah, we still have plenty of capacity here. 00:09:58.320 |
He said there's plenty of room at the bottom. 00:10:08.400 |
quantum computing capacity at the bottom level. 00:10:11.040 |
So we've got plenty of room to play with and make, 00:10:14.280 |
we probably have plenty of room for simulations 00:10:29.120 |
but maybe universes somewhat simpler than ours, 00:10:31.760 |
unless of course we're prepared to take certain shortcuts 00:10:36.040 |
which might then increase the capacity significantly. 00:10:51.240 |
that could be created in this universe of ours, 00:10:56.800 |
how incredible human beings are on that scale? 00:11:08.000 |
are at a certain point in the scale of intelligence, 00:11:22.840 |
and something happens once you get to human beings. 00:11:31.720 |
and we get to develop certain kinds of collective thinking 00:11:35.040 |
that has enabled all this amazing stuff to happen, 00:11:38.520 |
science and literature and engineering and culture and so on. 00:12:10.920 |
have got the capacity to be far more sophisticated. 00:12:15.360 |
who knows what the ones at level zero are like. 00:12:17.720 |
- It's also possible that this is the epitome 00:12:24.480 |
So we as human beings see ourselves maybe as flawed, 00:12:27.280 |
see all the constraints, all the limitations, 00:12:29.680 |
but maybe that's the magical, the beautiful thing. 00:12:32.360 |
Maybe those limitations are the essential elements 00:12:36.000 |
for an interesting sort of that edge of chaos, 00:13:12.080 |
I mean, we could still have levels of intelligence 00:13:26.800 |
of maximal romanticism in the history of evolution. 00:13:39.640 |
the point of inflection and life was interesting. 00:13:44.040 |
that if there is super intelligence somewhere 00:13:48.040 |
in the future, they'll figure out how to make life 00:13:56.400 |
how boring life is when you're super intelligent. 00:14:02.560 |
and sort of live through the things they've created 00:14:05.680 |
by watching them stumble about in their flawed ways. 00:14:13.000 |
of a simulation every time you get really bored 00:14:19.080 |
'cause we're sure the peak of their existence 00:14:20.800 |
would be like watching simulations for entertainment. 00:14:23.440 |
It's like saying the peak of our existence now is Netflix. 00:14:27.640 |
- A flip side of that could be the peak of our existence 00:14:31.160 |
for many people having children and watching them grow. 00:14:47.760 |
So I've heard several people, including Nick Bostrom, 00:14:53.880 |
maybe you don't need to simulate the universe, 00:15:09.760 |
- Well, I think in some sense the answer is obvious. 00:15:22.600 |
So unless we're talking about partial simulations. 00:15:25.280 |
- And I guess the question is, which comes first? 00:15:32.520 |
So the mind could just be an emergent phenomena 00:15:44.040 |
It's not like creating a simulation, perhaps, 00:15:51.760 |
It's just defining a set of initial conditions 00:15:57.780 |
Simulating the mind requires you to have a little bit more, 00:16:20.000 |
through some kind of emergence from basic physics laws, 00:16:42.640 |
as well as all the other physical systems within it. 00:16:46.120 |
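One way to picture the contrast being drawn here: in a cellular automaton, you specify nothing but an initial condition and a fixed local law, and structured behavior emerges on its own. A minimal sketch in Python, using Conway's Game of Life as a toy stand-in for "physics laws" (an illustration of emergence from simple rules, not anything either speaker proposes as a model of the mind):

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a wrap-around (toroidal) grid."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    alive = (neighbors == 3) | ((grid == 1) & (neighbors == 2))
    return alive.astype(np.uint8)

# A "glider": its diagonal motion is written nowhere in the rules or the
# seed pattern; it emerges from iterating the same local law.
grid = np.zeros((8, 8), dtype=np.uint8)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r, c] = 1
for _ in range(4):  # after 4 steps the glider has shifted one cell diagonally
    grid = life_step(grid)
print(grid)
```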
And it's not obvious that the problems are any worse 00:16:50.920 |
for the brain than for, it's a particularly complex 00:16:56.880 |
arbitrary physical systems, we can simulate brains. 00:17:02.120 |
when you simulate a brain, will that bring along 00:17:17.080 |
My own view is if you simulate the brain well enough, 00:17:22.640 |
but yeah, there's plenty of people who would say no. 00:17:27.160 |
a simulation of a brain without any true consciousness. 00:17:43.100 |
it's in the patterns of information processing and so on, 00:17:46.960 |
rather than say the biology that it's made of. 00:17:51.800 |
who think consciousness has to be say biological. 00:17:56.720 |
of information processing in a non-biological substrate, 00:17:59.680 |
you'll miss what's crucial for consciousness. 00:18:02.440 |
I mean, I just don't think there's any particular reason 00:18:09.600 |
for non-biological systems, say silicon circuits, 00:18:17.640 |
And I think just thinking about what is the true, 00:18:22.280 |
the isomorphisms between consciousness and the brain, 00:18:25.520 |
the deepest connections to me seem to connect consciousness 00:18:32.340 |
So I at least adopted as my working hypothesis 00:18:35.160 |
that basically it's the computation and the information 00:18:39.520 |
At the same time, we don't understand consciousness, 00:18:43.640 |
- So the computation, the flow, the processing, 00:18:54.460 |
the software is where the consciousness comes from, 00:19:01.360 |
at least in the hardware, which we could view as software. 00:19:05.680 |
It may not be something you can just like program 00:19:07.360 |
and load and erase and so on in the way we can 00:19:12.920 |
but it's something at the level of information processing 00:19:17.960 |
- So on that, what do you think of the experience of self, 00:19:22.480 |
just the experience of the world in a virtual world, 00:19:38.840 |
So yeah, can we be conscious in the same kind of deep way 00:19:51.160 |
- Yeah, well, the kind of virtual worlds we have now 00:19:58.040 |
In particular, they rely on us having a brain and so on, 00:20:06.720 |
or just hang out in a virtual world on a screen, 00:20:10.840 |
but my brain and then my physical environment 00:20:15.600 |
might be simulated if I'm in a virtual world. 00:20:17.680 |
But right now, there's no attempt to simulate my brain. 00:20:34.640 |
put a bit of AI in their non-player characters 00:20:42.280 |
the actual thinking is interestingly distinct 00:20:49.720 |
that Descartes thought our physical world was. 00:20:52.240 |
There's physics and there's the mind and they're separate. 00:21:07.600 |
When anyone exercises agency in a video game, 00:21:11.200 |
that's actually somebody outside the virtual world 00:22:02.680 |
intelligence, free will, all the things that we have? 00:22:10.440 |
I find virtual reality really incredibly powerful, 00:22:14.400 |
just even the crude virtual reality we have now. 00:22:25.280 |
but I find myself wanting to stay in virtual worlds 00:22:33.880 |
'cause I am totally addicted to using the internet 00:22:43.040 |
I don't typically use it for more than 10 or 20 minutes. 00:22:46.160 |
There's something just slightly aversive about it, I find. 00:22:50.160 |
even though I have Oculus Rift and Oculus Quest 00:22:55.560 |
- You just don't wanna stay in that world for long. 00:23:10.640 |
It feels like I want to almost prepare my brain for, 00:23:19.680 |
when it's first being built in the early days. 00:23:33.160 |
allows my mind to really enter into that world. 00:23:36.040 |
But you say that the brain's external to that virtual world. 00:23:46.640 |
- If you're in VR and you do brain surgery on an avatar, 00:23:55.920 |
You don't think it's possible to kind of separate them. 00:24:13.380 |
create a new consciousness for prolonged periods of time? 00:24:26.320 |
- So this is, okay, this is gonna be the case 00:24:31.880 |
I mean, we already find this, right, with video games. 00:24:37.880 |
and you get taken up by living in those worlds, 00:24:50.960 |
It's almost possible to really forget the external world, 00:25:02.200 |
Maybe you can stop paying attention to the external world, 00:25:07.540 |
I go to work, and maybe I'm not paying attention 00:25:10.000 |
to my home life, I go to a movie, and I'm immersed in that. 00:25:17.120 |
but we still have the capacity to remember it, 00:25:23.920 |
I don't know, some pretty serious drugs or something 00:25:38.520 |
Or can you create sort of different offspring copies 00:25:45.520 |
of consciousnesses based on the worlds that you enter? 00:25:51.560 |
at least with a standard VR, there's just one brain, 00:25:54.920 |
interacts with the physical world, plays a video game, 00:25:58.040 |
puts on a video headset, interacts with this virtual world. 00:26:04.800 |
that nonetheless undergoes different environments, 00:26:07.540 |
takes on different characters in different environments. 00:26:14.160 |
You know, I might interact one way in my home life, 00:26:25.880 |
People sometimes adopt the character of avatars 00:26:32.360 |
maybe even a different gender, different race, 00:26:43.320 |
If you want literal splitting of consciousness 00:26:47.360 |
I think it's gonna take something more radical than that. 00:27:20.200 |
it's fundamentally, you see it as a singular consciousness, 00:27:23.760 |
even though it's experiencing different environments, 00:27:28.840 |
to the same set of memories, same set of experiences, 00:27:31.640 |
and therefore one sort of joint conscious system. 00:27:42.120 |
that we get from inhabiting different environments 00:27:46.720 |
- So you said as a child, you were a music color-- 00:27:52.320 |
- Synesthete, so where songs had colors for you. 00:28:00.920 |
I didn't pay much attention to this at the time, 00:28:05.320 |
and I'd get some kind of imagery of a kind of color. 00:28:11.360 |
The weird thing is mostly they were kind of murky, 00:28:22.440 |
I mean, my theory is that maybe it's like different chords 00:28:29.280 |
into these somewhat uninteresting browns and greens, 00:28:34.880 |
there'd be something that had a really pure color. 00:28:50.880 |
There was this song by the Alan Parsons Project 00:28:53.960 |
called "Ammonia Avenue" that was kind of a pure blue. 00:29:07.480 |
- So is it purely just the perception of a particular color 00:29:10.960 |
or was there a positive or negative experience with it? 00:29:20.960 |
associated with some characteristic of the song? 00:29:23.440 |
- For me, I don't remember a lot of association 00:29:28.360 |
It was just this kind of weird and interesting fact. 00:29:31.760 |
I thought this was something that happened to everyone. 00:29:33.920 |
Songs had colors, maybe I mentioned it once or twice, 00:29:40.960 |
- But it was like, I thought it was kind of cool 00:29:46.320 |
once I became a grad student thinking about the mind, 00:29:49.000 |
that I read about this phenomenon called synesthesia. 00:29:53.960 |
And now I occasionally talk about it in my classes, 00:29:56.600 |
in intro class, and it still happens sometimes. 00:29:58.640 |
A student comes up and says, "Hey, I have that. 00:30:04.560 |
- You said that it went away at age 20 or so. 00:30:08.120 |
And that you have a journal entry from around then 00:30:18.840 |
In retrospect, it was like, "Hey, that's cool. 00:30:21.960 |
- Yeah, do you, can you think about that for a little bit? 00:30:27.040 |
'Cause it's a fundamentally different sets of experiences 00:30:38.400 |
You don't see them as that fundamentally different 00:30:45.000 |
- I guess for me, when I had these experiences, 00:30:49.000 |
They were like a little bonus kind of experience. 00:30:51.680 |
I know there are people who have much more serious forms 00:30:57.040 |
for whom it's absolutely central to their lives. 00:30:59.440 |
I know people who, when they experience new people, 00:31:01.840 |
they have colors, maybe they have tastes, and so on. 00:31:09.680 |
it's got a certain really rich color pattern. 00:31:14.480 |
And for some synesthetes, it's absolutely central. 00:31:17.480 |
I think if they lost it, they'd be devastated. 00:31:20.240 |
Again, for me, it was a very, very mild form of synesthesia. 00:31:29.520 |
- that you might get under different altered states 00:31:36.200 |
the single most important experiences in your life. 00:31:40.160 |
So let's try to go to the very simplest question 00:31:45.120 |
but perhaps the simplest things can help us reveal, 00:32:00.720 |
- Consciousness, I mean, the word is used many ways, 00:32:03.400 |
but the kind of consciousness that I'm interested in 00:32:10.040 |
What it feels like from the inside to be a human being 00:32:16.160 |
I mean, there's something it's like to be me. 00:32:19.040 |
Right now, I have visual images that I'm experiencing. 00:32:29.160 |
I've got a stream of thoughts running through my head. 00:32:36.280 |
I've sometimes called this the inner movie in the mind. 00:32:51.400 |
or sometimes philosophers use the word qualia, 00:32:57.080 |
for things like the qualities of things like colors, 00:33:04.680 |
the experience of one taste or one smell versus another, 00:33:29.240 |
I'm thinking about what I'm gonna do later on. 00:33:31.720 |
Maybe there's still something running through my head, 00:33:36.360 |
Maybe it goes beyond those qualities or qualia. 00:33:40.000 |
Philosophers sometimes use the word phenomenal consciousness 00:33:44.760 |
I mean, people also talk about access consciousness, 00:33:47.520 |
being able to access information in your mind, 00:34:08.920 |
why is it that there is phenomenal consciousness at all? 00:34:11.560 |
And how is it that physical processes in a brain 00:34:21.720 |
you'd have all this big complicated physical system 00:34:25.000 |
without any subjective experience at all. 00:34:37.560 |
where a red light goes on that says it's not conscious. 00:34:45.720 |
Or how do humans do it and how do we ourselves do it? 00:35:02.920 |
Maybe eventually we'll be able to create machines, 00:35:10.320 |
But that won't necessarily make the hard problem 00:35:56.920 |
But it's a useful idea that we do create consciousness. 00:36:12.280 |
We don't know which point it happens or where it is, 00:36:19.240 |
- Yeah, I mean, there's a question, of course, 00:36:21.120 |
is whether babies are conscious when they're born. 00:36:30.560 |
to newborn babies when they circumcised them. 00:36:33.200 |
And so now people think, oh, that's incredibly cruel. 00:36:38.800 |
And now the dominant view is that the babies can feel pain. 00:36:42.160 |
Actually, my partner, Claudia, works on this whole issue 00:36:45.880 |
of whether there's consciousness in babies and of what kind. 00:36:52.200 |
come into the world with some degree of consciousness. 00:36:55.440 |
Of course, then you can just extend the question 00:36:58.080 |
and suddenly you're into politically controversial-- 00:37:02.120 |
But the question also arises in the animal kingdom. 00:37:16.380 |
Over time, people are becoming more and more liberal 00:37:24.540 |
Now most people seem to think, sure, fish are conscious. 00:37:39.340 |
that every physical system has some degree of consciousness. 00:37:48.340 |
- I mean, that's a fascinating way to view reality. 00:37:52.860 |
if you can linger on panpsychism for a little bit, 00:38:10.900 |
that some things in the world are fundamental, right? 00:38:14.620 |
In physics, we take things like space or time, 00:38:33.780 |
Theories like relativity or quantum mechanics 00:38:36.580 |
or some future theory that will unify them both. 00:38:39.940 |
But everyone says you've got to take some things 00:38:48.220 |
Maybe something like this happened with Maxwell. 00:38:54.180 |
of electromagnetism and took charge as fundamental 00:38:57.500 |
'cause it turned out that was the best way to explain it. 00:39:02.820 |
something like that could happen with consciousness. 00:39:10.140 |
And instead of trying to explain consciousness 00:39:13.140 |
wholly in terms of the evolution of space, time, 00:39:29.100 |
for solving the easy problems of consciousness, 00:39:33.300 |
They give us a complicated structure and dynamics. 00:39:39.660 |
what kind of observable behavior they'll produce, 00:39:43.180 |
which is great for the problems of explaining how we walk 00:39:48.600 |
Those are the easy problems of consciousness. 00:39:52.580 |
about subjective experience just doesn't look 00:39:55.340 |
like that kind of problem about structure, dynamics, 00:40:01.340 |
is gonna give you a full explanation of that. 00:40:07.260 |
of consciousness, yes, there has to be a connecting point 00:40:25.620 |
At that point, the word consciousness is already 00:40:30.620 |
beyond the reach of our current understanding, 00:40:40.980 |
experiences that we have, that I have as a human being. 00:40:47.500 |
that means that basically another way to put that, 00:40:52.500 |
if that's true, then we understand almost nothing 00:41:00.140 |
- How do you feel about saying an ant is conscious? 00:41:05.780 |
- I can understand an ant, I can understand an atom. 00:41:12.500 |
So I'm comfortable with living things on Earth 00:41:16.660 |
being conscious because there's some kind of agency 00:41:43.220 |
I mean, I'm not like, I don't believe actually 00:41:47.620 |
that plants are conscious or that plants suffer, 00:41:49.620 |
but I can understand that kind of belief, that kind of idea. 00:42:06.100 |
- I could understand that a Roomba has consciousness. 00:42:23.860 |
So there's a difference from a neural network, 00:42:32.700 |
allows me to really see that physical object as an entity. 00:42:43.420 |
where it feels that it's acting based on its own perception. 00:42:55.460 |
then you start to assign it some agency, some consciousness. 00:43:03.800 |
that consciousness is a fundamental property of reality 00:43:23.880 |
like how would it, how do you think about reality 00:43:27.600 |
if consciousness is a fundamental part of its fabric? 00:43:36.560 |
And then if you can't, as at least right now it looks like, 00:43:42.360 |
It doesn't follow that you have to add consciousness. 00:43:51.680 |
And then it turns out space, time, mass, plus X 00:43:56.160 |
will somehow collectively give you the possibility 00:44:10.080 |
there's actually genuine consciousness at the bottom level, 00:44:16.200 |
Maybe we can't imagine that somehow gives you consciousness. 00:44:28.280 |
but at least in, say, if it was classical physics, 00:44:32.040 |
then you'd end up saying, well, every little atom, 00:44:37.640 |
each of these particles has some kind of consciousness 00:44:41.540 |
whose structure mirrors maybe their physical properties, 00:44:44.560 |
like its mass, its charge, its velocity, and so on. 00:44:52.280 |
And the physical interactions between particles, 00:44:55.440 |
I mean, there's this old worry about physics, 00:45:04.560 |
Physics tells us about how a particle relates 00:45:09.320 |
It doesn't tell us about what the particle is in itself. 00:45:15.720 |
The nature in itself of a particle is something mental. 00:45:20.840 |
A particle is actually a little conscious subject 00:45:29.160 |
The laws of physics are actually ultimately relating 00:45:38.200 |
of little conscious subjects at the bottom level, 00:45:41.240 |
way, way simpler than we are without free will 00:45:48.800 |
Now, of course, that's a vastly speculative view. 00:46:00.080 |
It's not a vast collection of conscious subjects. 00:46:02.600 |
Maybe there's ultimately one big wave function 00:46:06.760 |
Corresponding to that might be something more like 00:46:09.160 |
a single conscious mind whose structure corresponds 00:46:29.280 |
think yeah, giant cosmic mind with enough richness 00:46:36.520 |
- I think therefore I am at the level of particles 00:46:43.800 |
It's kind of an exciting, beautiful possibility, 00:46:48.800 |
of course, way out of reach of physics currently. 00:46:51.940 |
- It is interesting that some neuroscientists 00:46:58.680 |
You find consciousness even in very simple systems. 00:47:02.880 |
So for example, the integrated information theory 00:47:08.240 |
Actually, I just got this new book by Christoph Koch 00:47:13.680 |
Why consciousness is widespread, but can't be computed. 00:47:24.520 |
or integrated information processing in a system, 00:47:29.520 |
like a couple of particles, will have some degree of this. 00:47:32.720 |
So he ends up with some degree of consciousness 00:47:40.480 |
about the connection between the brain and consciousness. 00:47:48.040 |
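For a concrete feel of what "some degree of integrated information" could mean in a tiny system, here is a deliberately crude sketch. It uses plain mutual information across a two-unit partition as a stand-in for integration; Tononi's actual phi is a far more involved quantity (a minimum over partitions of a system's cause-effect structure), so this is only a toy illustration of the flavor of the idea:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a 2-D joint distribution."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal over X
    py = joint.sum(axis=0, keepdims=True)   # marginal over Y
    nz = joint > 0                           # avoid log(0) terms
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Two independent fair coins: no integration across the partition.
independent = np.full((2, 2), 0.25)

# Two perfectly coupled units (they always agree): 1 bit of integration.
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])

print(mutual_information(independent))  # 0.0
print(mutual_information(coupled))      # 1.0
```

The coupled pair scores one bit across the partition while the independent pair scores zero, which gives a rough sense of how even a two-unit system can have a nonzero "degree" of integration on views like this.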
- It's still in there. - But it's interesting 00:47:52.680 |
but there are ways of thinking quasi-scientifically 00:48:17.920 |
"becomes conscious of its glory, of its magnificence." 00:48:24.800 |
Do you think that we are essentially the tools, 00:48:29.800 |
the senses the universe created to be conscious of itself? 00:48:37.600 |
Of course, if you went for the giant cosmic mind view, 00:48:44.060 |
We're just little components of the universal consciousness. 00:48:50.820 |
then there was some little degree of consciousness 00:48:54.720 |
and we were just a more complex form of consciousness. 00:48:58.300 |
So I think maybe the quote you mentioned works better. 00:49:02.060 |
If you're not a panpsychist, you're not a cosmopsychist, 00:49:14.660 |
So is your own view of panpsychism a rarer view? 00:49:19.660 |
- I think it's generally regarded, certainly, 00:49:24.620 |
held by a fairly small minority of at least theorists. 00:49:31.220 |
who think about consciousness are not panpsychists. 00:49:34.620 |
There's been a bit of a movement in that direction 00:49:41.600 |
but it's still very definitely a minority view. 00:49:43.940 |
Many people think it's totally bat shit crazy 00:49:51.220 |
- Yeah, so the orthodox view, I think, is still 00:49:55.140 |
and some good number of non-human animals have, 00:49:59.020 |
and maybe AIs might have one day, but it's restricted. 00:50:02.720 |
On that view, then, there was no consciousness 00:50:07.220 |
but it is this thing which happened at some point 00:50:23.260 |
Without consciousness, there'd be no meaning, 00:50:25.760 |
no true value, no good versus bad, and so on. 00:50:44.340 |
because the universe needed to have consciousness 00:50:49.260 |
and maybe you could combine that with a theistic view 00:50:54.660 |
The universe was inexorably evolving towards consciousness. 00:50:58.440 |
Actually, my colleague here at NYU, Tom Nagel, 00:51:01.420 |
wrote a book called "Mind and Cosmos" a few years ago 00:51:16.640 |
I don't myself agree with this teleological view, 00:51:20.080 |
but it is at least a beautiful speculative view 00:51:36.180 |
- I'm not an expert on thinking about God and religion. 00:51:43.880 |
- When people sort of pray, communicate with God, 00:51:46.740 |
which whatever form, I'm not speaking to sort of 00:51:56.220 |
that people really have a deep connection with God 00:52:19.600 |
when they experience religious awe or prayer and so on. 00:52:32.680 |
of what are people looking for when they're doing this? 00:52:35.320 |
And like I said, I've got no real expertise on this, 00:52:39.040 |
but it does seem that one thing people are after 00:53:01.120 |
and somehow connection to God might give your life meaning. 00:53:18.520 |
If universal consciousness can give the world meaning, 00:53:21.800 |
why can't local consciousness give the world meaning too? 00:53:25.320 |
So I think my consciousness gives my world meaning. 00:53:28.560 |
- What is the origin of meaning for your world? 00:53:37.520 |
So my consciousness invests this world with meaning. 00:53:42.200 |
maybe it would be a bleak, meaningless universe. 00:53:45.360 |
But I don't see why I need someone else's consciousness 00:53:47.720 |
or even God's consciousness to give this universe meaning. 00:53:55.180 |
I think we can give the universe meaning ourselves. 00:53:58.960 |
I mean, maybe to some people that feels inadequate. 00:54:01.680 |
Yeah, our own local consciousness is somehow too puny 00:54:09.320 |
and maybe God gives you a sense of cosmic significance, 00:54:15.720 |
- So, you know, it's a really interesting idea 00:54:19.280 |
that consciousness is the thing that makes life meaningful. 00:54:24.800 |
If you could maybe just briefly explore that for a second. 00:54:37.360 |
just the day-to-day experiences of life have, 00:54:51.760 |
I guess I wanna ask something I've always wanted to ask 00:55:03.500 |
So I suspect you don't mean consciousness gives 00:55:16.280 |
- I think life has meaning for us because we are conscious. 00:55:27.320 |
So consciousness is the source of the meaning of life, 00:55:42.680 |
So if you find meaning and fulfillment and value 00:55:53.200 |
If you find it in social connections or in raising a family, 00:55:58.960 |
The meaning kind of comes from what you value 00:56:04.040 |
So I think on this view, there's no universal solution, 00:56:14.600 |
but it's consciousness that somehow makes value possible, 00:56:22.840 |
Something that comes from within consciousness. 00:56:24.600 |
- So you think consciousness is a crucial component, 00:56:33.560 |
- I mean, it's kind of a fairly strong intuition 00:56:39.960 |
If we just had a purely, a universe of unconscious creatures, 00:56:44.640 |
would anything be better or worse than anything else? 00:56:47.700 |
- Certainly when it comes to ethical dilemmas, 00:56:53.200 |
Do you kill one person or do you switch to the other track 00:57:03.440 |
where there's one conscious being on one track 00:57:06.720 |
and five humanoid zombies, let's make them robots, 00:57:21.000 |
Most people have a fairly clear intuition here. 00:57:25.520 |
'cause they basically, they don't have a meaningful life. 00:57:28.680 |
They're not really persons, conscious beings at all. 00:57:42.040 |
So in philosophical terms, that's what you refer to as a zombie. 00:57:42.040 |
So that's kind of what we may be able to create with robots. 00:58:00.240 |
And I don't necessarily know what that even means. 00:58:13.480 |
is a being which is physically, functionally, 00:58:16.400 |
behaviorally identical to me, but not conscious. 00:58:29.360 |
to raise questions like, why aren't we zombies? 00:58:36.740 |
like robots, which are maybe not physically identical to us, 00:58:45.340 |
but they can do a lot of sophisticated things, 00:58:47.700 |
maybe carry on a conversation, but they're not conscious. 00:59:11.120 |
that can use language and be fairly high functioning 00:59:17.780 |
I mean, we've caused, there's this tricky question 00:59:21.580 |
of how you would know whether they're conscious. 00:59:24.960 |
and we know that these high functioning robots 00:59:27.920 |
Then the question is, do they have moral status? 00:59:35.440 |
- Does basically that question, can they suffer? 00:59:51.420 |
It's going to be annoying for me and my partner. 00:59:55.920 |
No one would say the cup itself has moral status. 01:00:13.540 |
But if a being is conscious, on the other hand, 01:00:17.240 |
So Siri, or I dare not say the name of Alexa. 01:00:22.240 |
Anyway, so we don't think we're morally harming Alexa 01:00:28.640 |
by turning her off or disconnecting her or even destroying her. 01:00:36.140 |
because we don't really think she's conscious. 01:00:39.060 |
On the other hand, you move to like the disembodied being 01:00:45.500 |
I guess she was kind of presented as conscious. 01:00:49.760 |
you'd certainly be committing a serious harm. 01:00:51.740 |
So I think our strong sense is if a being is conscious 01:01:05.380 |
then they're basically just meat or a machine 01:01:10.340 |
So I think at least maybe how we think about this stuff 01:01:23.340 |
is ultimately kind of the line between systems 01:01:36.820 |
talk about the demonstration of consciousness 01:01:40.780 |
from a system like that, from a system like Alexa 01:02:13.020 |
like Kafka, right, in the body of a bug or something. 01:02:15.660 |
But in a computer, you all of a sudden realize 01:02:19.740 |
and yet you would, feeling what you're feeling, 01:02:22.540 |
you would probably say those kinds of things. 01:02:25.980 |
So do you think a system essentially becomes conscious 01:02:46.100 |
- I don't think this is what makes you conscious, 01:02:48.120 |
but I do think being puzzled about consciousness 01:02:50.280 |
is a very good sign that a system is conscious. 01:03:01.340 |
and saying, yeah, I have all these weird experiences, 01:03:08.780 |
but I don't see how that would give you my consciousness, 01:03:13.900 |
that there's some consciousness going on there. 01:03:21.820 |
Many people aren't puzzled by their consciousness. 01:03:28.060 |
So I don't think that's a requirement on consciousness. 01:03:30.700 |
But I do think if we're looking for signs for consciousness, 01:03:41.300 |
is if it shows signs of introspectively recognizing 01:03:54.220 |
- That's such an interesting thought, though, 01:03:57.940 |
at the shallow level, criticize the Turing test, 01:04:01.140 |
or language, that it's essentially what I heard 01:04:05.060 |
like Dan Dennett criticize it in this kind of way, 01:04:09.820 |
which is it really puts a lot of emphasis on lying. 01:04:23.220 |
It's got to read this book called "Talk Like a Human." 01:04:26.660 |
It's like, man, why do I have to waste my time 01:04:36.380 |
that I recognize the hard problem of consciousness 01:04:38.940 |
in order for people to recognize me as conscious? 01:04:42.140 |
- Yeah, it just feels like, I guess the question is, 01:04:47.020 |
we can never really create a test for consciousness 01:04:49.460 |
because it feels like we're very human-centric. 01:05:00.900 |
the thing demonstrates the illusion of consciousness. 01:05:05.540 |
We can never really know whether it's conscious or not. 01:05:10.340 |
And in fact, that almost feels like it doesn't matter then. 01:05:16.700 |
that something is conscious or it demonstrates consciousness? 01:05:28.860 |
like how we treat it, can it suffer, and so on. 01:05:38.700 |
how we can know for sure whether a system is conscious. 01:05:50.100 |
things like social interaction, conversation, and so on, 01:06:00.020 |
or finding things meaningful, or being in pain. 01:06:29.900 |
"Please don't kick me, that hurts," just say it. 01:06:44.940 |
where these conscious beings can just be created 01:06:55.940 |
but ultimately their moral status ought to be the same, 01:06:58.140 |
and yeah, the civil rights issue is going to be a huge mess. 01:07:13.420 |
between two copies of David Chalmers quite interesting. 01:07:26.580 |
- So what, do you think he would be conscious? 01:07:34.540 |
I do think in some sense, I'm not sure it would be me, 01:07:37.060 |
there would be two different beings at this point. 01:07:41.300 |
and they both have many of the same mental properties. 01:07:45.860 |
I think they both, in a way, have the same moral status. 01:07:56.020 |
their legal status would have to be different. 01:07:58.540 |
If I'm the original and that one's just a clone, 01:08:17.860 |
my partner and so on, I'm gonna somehow be connected to them 01:08:33.700 |
They have all the memories of that connection. 01:08:35.620 |
Then in a way, you might say it's kind of unfair 01:08:41.500 |
or a partner who only one person can be with. 01:08:45.820 |
- I think, it's an interesting philosophical question, 01:08:49.100 |
but you might say, because I actually have this history, 01:08:51.980 |
if I am the same person as the one that came before 01:08:56.900 |
and the clone is not, then I have this history 01:09:05.820 |
This is the question about personal identity. 01:09:07.500 |
If I continue and I create a clone over there, 01:09:10.660 |
I wanna say this one is me and this one is someone else. 01:09:14.100 |
But you could take the view that a clone is equally me. 01:09:23.420 |
They treat the clones as if they're the original person. 01:09:25.900 |
Of course, they destroy the original body in "Star Trek." 01:09:30.940 |
and only very occasionally do things go wrong 01:09:34.660 |
But somehow our legal system, at the very least, 01:09:37.740 |
is gonna have to sort out some of these issues 01:09:42.180 |
and what's legally acceptable are gonna come apart. 01:09:45.860 |
- What question would you ask a clone of yourself? 01:09:50.660 |
Is there something useful you can find out from him 01:09:56.100 |
about the fundamentals of consciousness even? 01:10:19.420 |
except that it's just undergone this great shock 01:10:24.460 |
So just say you woke me up tomorrow and said, 01:10:31.860 |
And you provided me really convincing evidence, 01:10:36.940 |
and then all wrapped up here being here and waking up. 01:10:52.580 |
like how I'd react upon discovering that I'm a clone. 01:10:55.420 |
I could certainly ask the clone if it's conscious 01:10:57.860 |
and what its consciousness is like and so on. 01:10:59.860 |
But I guess I kind of know if it's a perfect clone, 01:11:04.520 |
Of course, at the beginning, there'll be a question 01:11:14.620 |
and the way it reacts to things in general is like me. 01:11:28.540 |
you say that it's gonna behave exactly like you. 01:11:37.420 |
Are we able to make decisions that are not predetermined 01:11:44.140 |
- Philosophers do this annoying thing of saying 01:11:54.500 |
If you mean something which was not determined in advance, 01:12:06.140 |
but I'm not sure we have free will in that sense. 01:12:09.540 |
But I'm also not sure that's the kind of free will 01:12:13.380 |
What matters to us is being able to do what we want 01:12:19.800 |
We've got this distinction between having our lives 01:12:21.500 |
be under our control and under someone else's control. 01:12:26.500 |
We've got the sense of actions that we are responsible for 01:12:45.540 |
is something we can have in a deterministic universe. 01:12:50.460 |
why an AI system couldn't have free will of that kind. 01:12:57.700 |
and doing things that in principle could not be predicted, 01:13:01.740 |
I don't know, maybe no one has that kind of free will. 01:13:04.680 |
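A standard toy demonstration of the distinction Chalmers is drawing, that a system can be fully determined and still unpredictable in practice, is deterministic chaos. A sketch with the logistic map at r = 4: every step is fixed by the rule, yet two trajectories that differ by one part in a trillion decorrelate within a few dozen iterations:

```python
# Deterministic but practically unpredictable: the logistic map at r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # identical rule, nearly identical starting points
for t in range(60):
    x, y = logistic(x), logistic(y)
    if t % 10 == 9:
        # step number, the two states, and how far apart they've drifted
        print(t + 1, round(x, 6), round(y, 6), round(abs(x - y), 6))
```

Nothing here is undetermined, so it is not "free will" in the libertarian sense; it just illustrates that determinism and predictability-in-practice come apart.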
- What's the connection between the reality of free will 01:13:15.240 |
So how does consciousness connect to the experience 01:13:19.620 |
of, to the reality and the experience of free will? 01:13:22.260 |
- It's certainly true that when we make decisions 01:13:29.740 |
I could go into philosophy or I could go into math. 01:13:38.060 |
So we experience these things as if the future is open. 01:14:05.740 |
And that's, I think, what really matters 01:14:07.260 |
in making decisions, the experience of making a decision 01:14:14.300 |
I mean, in general, our introspective models of the mind, 01:14:18.100 |
I think are generally very distorted representations 01:14:21.640 |
So it may well be that our experience of ourself 01:14:27.620 |
doesn't terribly well mirror what's going on. 01:14:31.020 |
I mean, maybe there are antecedents in the brain 01:14:33.180 |
way before anything came into consciousness and so on. 01:14:38.180 |
Those aren't represented in our introspective model. 01:14:50.580 |
It's not a terribly good model of what's actually going on 01:14:57.060 |
It's just one little snapshot of one bit of that. 01:14:59.820 |
So in general, introspective models are very oversimplified 01:15:13.940 |
that consciousness itself is an introspective illusion. 01:15:19.460 |
but the brain just has these introspective models of itself 01:15:24.300 |
or oversimplifies everything and represents itself 01:15:27.180 |
as having these special properties of consciousness. 01:15:31.060 |
It's a really simple way to kind of keep track of itself 01:15:46.640 |
about how the brain would create introspective models 01:15:50.100 |
of its own consciousness, of its own free will 01:16:02.700 |
but of course that's a really useful way of keeping track. 01:16:06.420 |
- Did you say that you find it not very plausible? 01:16:08.980 |
'Cause I find it both plausible and attractive 01:16:19.660 |
that has the minimum amount of mystery around it. 01:16:24.040 |
You can kind of understand that kind of view. 01:16:35.460 |
I recently wrote an article about this kind of issue 01:16:53.060 |
We might be able to explain that bit of behavior 01:16:54.940 |
as one of the easy problems of consciousness. 01:16:57.620 |
So maybe there'll be some computational model 01:17:00.580 |
that explains why we're puzzled by consciousness. 01:17:03.500 |
The meta-problem has come up with that model, 01:17:05.860 |
and I've been thinking about that a lot lately. 01:17:07.900 |
There are some interesting stories you can tell 01:17:09.580 |
about why the right kind of computational system 01:17:13.620 |
might develop these introspective models of itself 01:17:17.660 |
that attribute to itself these special properties. 01:17:20.700 |
So that meta-problem is a research program for everyone. 01:17:25.300 |
And then if you've got attraction to sort of simple views, 01:17:37.780 |
What is real is just these introspective models 01:17:45.060 |
So the view is very simple, very attractive, very powerful. 01:18:02.380 |
And this is why most people find this view crazy, 01:18:06.100 |
just as they find panpsychism crazy in one way. 01:18:08.820 |
People find illusionism crazy in another way. 01:18:20.660 |
Now, that makes the view sort of frankly unbelievable 01:18:28.220 |
might be able to explain why we find it unbelievable, 01:18:31.300 |
'cause these models are so deeply hardwired into our head. 01:18:35.340 |
So it's not, you can't escape that, the illusion. 01:18:41.940 |
that the entirety of the universe, our planet, 01:18:44.780 |
all the people in New York, all the organisms on our planet, 01:18:48.580 |
including me here today, are not real in that sense? 01:19:04.940 |
'Cause a simulation kind of is outside of you. 01:19:23.040 |
- Well, there's illusionism about the external world 01:19:34.100 |
and yeah, could all this be produced by an evil demon? 01:19:37.380 |
Descartes himself also had the dream argument. 01:19:39.540 |
He said, "How do you know you're not dreaming right now? 01:19:41.980 |
"How do you know this is not an amazing dream?" 01:19:46.060 |
that yeah, this could be some super duper complex dream 01:19:59.220 |
if the evil demon was doing it, it's not real. 01:20:05.580 |
As I was saying before, I think even if it's a simulation, 01:20:12.980 |
it could turn out that all this is like my dream 01:20:19.100 |
My own view is that wouldn't stop this physical world 01:20:22.780 |
It would turn out this cup at the most fundamental level 01:20:31.940 |
Maybe that would give you a kind of weird kind 01:20:36.460 |
but it wouldn't show that the cup isn't real. 01:20:39.380 |
It would just tell us it's ultimately made of processes 01:20:43.180 |
So I'd resist the idea that if the physical world is a dream 01:20:55.500 |
Why is Descartes' demon or genius considered evil? 01:21:05.940 |
- Yeah, I mean, Descartes called it the malin génie, 01:21:25.380 |
God actually supplies you all of these perceptions 01:21:30.500 |
and ideas and that's how physical reality is sustained. 01:21:33.980 |
And interestingly, Berkeley's God is doing something 01:21:41.300 |
It's just that Descartes thought it was deception 01:21:46.300 |
And I'm actually more sympathetic to Berkeley here. 01:21:49.420 |
Yeah, this evil demon may be trying to deceive you, 01:21:54.900 |
but I think, okay, well, the evil demon may just be 01:21:57.700 |
working under a false philosophical theory. 01:22:07.140 |
I think, no, if we're in a matrix, it's all still real. 01:22:11.660 |
Yeah, the philosopher O.K. Bouwsma had a nice story 01:22:15.140 |
about this about 50 years ago, about Descartes' evil demon, 01:22:30.220 |
So yeah, I think that maybe it's a very natural 01:22:33.100 |
to take this view that if we're in a simulation 01:22:38.640 |
then none of this is real, but I think it may be 01:22:43.860 |
especially if you take on board sort of the view of reality 01:22:46.740 |
or what matters to reality is really its structure, 01:22:50.100 |
something like its mathematical structure and so on, 01:22:52.860 |
which seems to be the view that a lot of people take 01:22:54.680 |
from contemporary physics, and it looks like you can find 01:22:58.060 |
all that mathematical structure in a simulation, 01:23:05.500 |
I would say that's enough for the physical world to be real. 01:23:10.100 |
to be somewhat more intangible than we had thought 01:23:18.160 |
- See, you've kind of alluded that you don't have 01:23:21.840 |
to have consciousness for high levels of intelligence, 01:23:25.500 |
but to create truly general intelligence systems, 01:23:35.020 |
you've talked about that you feel like that kind of thing 01:23:41.700 |
when we reach that point, do you think consciousness 01:23:49.460 |
or at least highly beneficial for creating an AGI system? 01:23:53.440 |
- Yeah, no one knows what consciousness is for, 01:23:57.120 |
functionally, so right now, there's no specific thing 01:24:00.220 |
we can point to and say, you need consciousness for that. 01:24:05.220 |
Still, my inclination is to believe that in principle, 01:24:09.340 |
At the very least, I don't see why someone couldn't 01:24:11.900 |
simulate a brain, ultimately have a computational system 01:24:16.160 |
that produces all of our behavior, and if that's possible, 01:24:19.480 |
I'm sure vastly many other computational systems 01:24:22.820 |
of equal or greater sophistication are possible 01:24:27.180 |
with all of our cognitive functions and more. 01:24:29.440 |
My inclination is to think that once you've got 01:24:35.440 |
perception, attention, reasoning, introspection, 01:24:40.440 |
language, emotion, and so on, it's very likely 01:24:49.180 |
At least it's very hard for me to see how you'd have 01:24:51.080 |
a system that had all those things while bypassing 01:24:55.680 |
- So just naturally, it's integrated quite naturally. 01:25:00.220 |
There's a lot of overlap in the kind of function 01:25:03.000 |
that's required to achieve each of those things. 01:25:05.300 |
So you can't disentangle them even when you're-- 01:25:08.280 |
- It seems to, at least in us, but we don't know 01:25:11.040 |
what the causal role of consciousness in the physical world, 01:25:15.080 |
I mean, just say it turns out consciousness does something 01:25:18.500 |
like collapsing wave functions, as on one common 01:25:24.300 |
Then ultimately we might find some place where it actually 01:25:26.340 |
makes a difference, and we could say, ah, here is where 01:25:29.340 |
in collapsing wave functions, it's driving the behavior 01:25:32.240 |
of a system, and maybe it could even turn out that 01:25:39.200 |
I mean, if you wanted to connect this to free will, 01:25:41.200 |
some people think consciousness collapsing wave functions. 01:25:43.540 |
That would be how the conscious mind exerts its effect 01:25:47.660 |
on the physical world and exerts its free will. 01:25:50.460 |
And maybe it could turn out that any AGI that didn't 01:25:53.940 |
utilize that mechanism would be limited in the kinds 01:26:02.260 |
I think probably that functionality could be simulated. 01:26:05.020 |
But you could imagine once we had a very specific idea 01:26:07.780 |
about the role of consciousness in the physical world, 01:26:10.460 |
this would have some impact on the capacity of AGIs, 01:26:14.100 |
and if it was a role that could not be duplicated elsewhere, 01:26:17.900 |
then we'd have to find some way to either get consciousness 01:26:22.900 |
in the system to play that role or to simulate it. 01:26:25.520 |
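Mechanically, the "collapse" in this speculation is just the textbook projection postulate of quantum mechanics; whether consciousness plays any role in triggering it is exactly the open interpretive question, on which this sketch takes no stand. A minimal numerical illustration (Born-rule probabilities, then projection onto the observed basis state):

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit in superposition: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
psi = np.array([3 / 5, 4 / 5], dtype=complex)

# Born rule: the probability of each outcome is the squared amplitude.
probs = np.abs(psi) ** 2                 # -> [0.36, 0.64]

# Measurement in the computational basis: sample an outcome, then
# replace the state by the corresponding basis vector (the "collapse").
outcome = rng.choice(len(psi), p=probs)
collapsed = np.zeros_like(psi)
collapsed[outcome] = 1.0

print(probs, outcome, collapsed)
```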
- If we can isolate a particular role to consciousness, 01:26:29.100 |
of course, that's incredibly, seems like an incredibly 01:26:33.920 |
Do you have worries about existential threats 01:26:39.620 |
of conscious, intelligent beings that are not us? 01:26:44.620 |
So certainly, I'm sure you're worried about us 01:26:58.140 |
One is an existential threat to consciousness, generally. 01:27:01.420 |
I mean, yes, I care about humans and the survival 01:27:05.020 |
of humans and so on, but just say it turns out 01:27:07.460 |
that eventually we're replaced by some artificial beings 01:27:11.820 |
that aren't humans but are somehow our successors. 01:27:15.500 |
They still have good lives, they still do interesting 01:27:26.500 |
Something different, maybe better, came next. 01:27:29.740 |
If, on the other hand, all of consciousness was wiped out, 01:27:36.780 |
One way that could happen is by all intelligent life 01:27:42.100 |
And many people think that, yeah, once you get to humans 01:27:44.420 |
and AIs of amazing sophistication, where everyone 01:27:53.420 |
just by pressing a button, then maybe it's inevitable 01:28:03.660 |
and we've got to think very hard about how to avoid that. 01:28:05.980 |
But yeah, another interesting kind of disaster 01:28:08.020 |
is that maybe intelligent life is not wiped out, 01:28:14.860 |
So just say you thought, unlike what I was saying 01:28:17.340 |
a moment ago, that there are two different kinds 01:28:19.940 |
of intelligent systems, some which are conscious 01:28:27.940 |
with a high degree of intelligence, meaning high degree 01:28:39.660 |
but then there'd be no consciousness in this world. 01:28:44.380 |
Some people have called this the zombie apocalypse. 01:28:47.060 |
Because it's an apocalypse for consciousness. 01:28:54.540 |
And I would say that's a moral disaster in the same way, 01:28:59.820 |
with no intelligent life is a moral disaster. 01:29:02.220 |
All value and meaning may be gone from that world. 01:29:09.000 |
Now my own view is, if you get super intelligence, 01:29:11.720 |
you're almost certainly gonna bring consciousness with it. 01:29:15.840 |
but of course, I don't understand consciousness. 01:29:20.240 |
This is one reason for, this is one reason at least, 01:29:22.880 |
among many, for thinking very seriously about consciousness 01:29:25.480 |
and thinking about the kind of future we want to create 01:29:35.740 |
if consciousness so naturally does come with AGI systems, 01:29:42.580 |
That we will be just something, a blip on the record, 01:29:51.720 |
- I mean, I think I'd probably be okay with that. 01:29:56.320 |
Especially if somehow humans are continuous with AGI. 01:29:59.380 |
I mean, I think something like this is inevitable. 01:30:02.420 |
At the very least, humans are gonna be transformed, 01:30:16.740 |
And eventually that line between what's a human 01:30:18.900 |
and what's an AI may be kind of hard to draw. 01:30:25.620 |
that some future being a thousand years from now 01:30:29.480 |
that somehow descended from us actually still has biology? 01:30:32.880 |
I think it would be nice if you could kind of point 01:30:36.880 |
that had some roots in us and trace a continuous line there 01:30:41.560 |
that would be selfishly nice for me to think that, 01:30:48.600 |
But if it turns out, okay, there's a jump there, 01:30:51.120 |
they found a better way to design cognitive systems, 01:30:56.080 |
and the only line is some causal chain of designing 01:31:06.680 |
of a causal chain of design and yes, they're not humans, 01:31:12.280 |
So I mean, ultimately I think it's probably inevitable 01:31:20.000 |
It'd be nice if they still cared enough about us 01:31:28.340 |
I'm really hoping that the AGIs are gonna solve 01:31:31.820 |
They'll come back and read all this crappy work 01:31:36.560 |
hard problem of consciousness and here is why 01:31:40.460 |
If that happened, then I'd really feel like I was part 01:31:42.360 |
of at least an intellectual process over centuries 01:31:50.860 |
and for the fun of it, sort of bring back other philosophers. 01:31:56.100 |
- Descartes and just put them in a room and just watch. 01:32:02.120 |
where you bring philosophers from different human, 01:32:04.660 |
100% human philosophers from previous generations, 01:32:16.940 |
I would like to be recreated and hang out with Descartes. 01:32:19.740 |
- Who would be, would Descartes be the first? 01:32:22.660 |
If you could hang out as part of such a TV show 01:32:26.100 |
with a philosopher that's no longer with us from long ago, 01:32:38.980 |
An actor who's actually a philosopher came out on stage 01:32:48.980 |
- All my ideas were crap and all derived from him 01:32:51.580 |
and so on, we had a long argument, this was great. 01:33:19.100 |
if I got to actually talk to him about some of this. 01:33:22.740 |
Hey, there was Princess Elizabeth who talked with Descartes 01:33:25.700 |
and who really got at the problems of how Descartes' ideas 01:33:39.180 |
she's been proved right, so maybe put me in a room 01:34:04.100 |
and we're now at the early stages of being able 01:34:11.540 |
and maybe one day we'll have degrees of consciousness, 01:34:19.980 |
Is there a particular aspect of this future world 01:34:24.020 |
- I think there are lots of different aspects. 01:34:26.340 |
I mean, frankly, I want it to hurry up and happen. 01:34:29.500 |
It's like, yeah, we've had some progress lately in AI and VR, 01:34:33.100 |
but in the grand scheme of things, it's still kind of slow. 01:34:38.180 |
and I'm in my 50s, I've only got so long left. 01:34:42.060 |
I'd like to see really serious AI in my lifetime 01:34:49.660 |
I would like to be able to hang out in a virtual reality, 01:34:56.500 |
to really get to inhabit fundamentally different kinds 01:35:02.140 |
Well, I would very much like to be able to upload my mind 01:35:05.700 |
onto a computer, so maybe I don't have to die. 01:35:11.420 |
If this is maybe gradually replace my neurons 01:35:19.300 |
I suspect I'm not gonna quite get there in my lifetime, 01:35:27.340 |
of transforming your consciousness in remarkable ways, 01:35:39.580 |
and you were given the opportunity to become immortal 01:35:42.820 |
in this kind of way, would you choose to be immortal? 01:35:59.820 |
I don't see, I really don't see why this might be. 01:36:04.820 |
I mean, even if it's just ordinary life that continues on, 01:36:12.020 |
if the universe is gonna go on forever or indefinitely, 01:36:19.300 |
I don't think, your view was that we're just hit 01:36:30.020 |
no, it's gonna continue to be infinitely interesting. 01:36:32.660 |
Something like as you go up the set theoretic hierarchy, 01:36:36.180 |
you go from the finite cardinals to Aleph zero, 01:36:41.180 |
and then on through Aleph one and Aleph two, 01:36:46.060 |
and maybe the continuum, and you keep taking power sets. 01:36:51.980 |
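The mathematics behind "keep taking power sets" is Cantor's theorem, which guarantees the climb never ends. A compressed statement (noting that whether Aleph one equals the continuum is the continuum hypothesis, which is independent of the standard ZFC axioms):

```latex
% Cantor's theorem: every set is strictly smaller than its power set,
% so iterating the power-set operation yields an unending hierarchy.
\[
  |\mathcal{P}(S)| = 2^{|S|} > |S| \qquad \text{for every set } S,
\]
\[
  0 < 1 < 2 < \cdots < \aleph_0 < 2^{\aleph_0} = \mathfrak{c}
      < 2^{\mathfrak{c}} < 2^{2^{\mathfrak{c}}} < \cdots
\]
% Whether $\aleph_1 = \mathfrak{c}$ (the continuum hypothesis) is
% independent of ZFC; the unending climb itself is a theorem.
```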
that actually all this is fundamentally unpredictable. 01:36:54.780 |
It doesn't follow any simple computational patterns. 01:36:58.940 |
as the set theoretic universe expands and expands. 01:37:09.780 |
but still being fundamentally unpredictable at many points. 01:37:12.900 |
I mean, yes, this creates all kinds of worries, 01:37:18.980 |
So we're gonna need a solution to that problem. 01:37:21.180 |
But if we get to stipulate that I'm immortal, 01:37:36.460 |
Well, I think I speak for a lot of people in saying, 01:37:41.460 |
and there'll be that Netflix show, "The Future," 01:37:57.180 |
And thank you to our presenting sponsor, Cash App. 01:38:05.500 |
an organization that inspires and educates young minds 01:38:08.700 |
to become science and technology innovators of tomorrow. 01:38:12.180 |
If you enjoy this podcast, subscribe on YouTube, 01:38:19.220 |
or simply connect with me on Twitter @lexfridman. 01:38:26.980 |
Materialism is a beautiful and compelling view of the world, 01:38:32.260 |
we have to go beyond the resources it provides.