
David Chalmers: The Hard Problem of Consciousness | Lex Fridman Podcast #69


Chapters

0:00 Introduction
2:23 Nature of reality: Are we living in a simulation?
19:19 Consciousness in virtual reality
27:46 Music-color synesthesia
31:40 What is consciousness?
51:25 Consciousness and the meaning of life
57:33 Philosophical zombies
61:38 Creating the illusion of consciousness
67:03 Conversation with a clone
71:35 Free will
76:35 Meta-problem of consciousness
78:40 Is reality an illusion?
80:53 Descartes' evil demon
83:20 Does AGI need consciousness?
93:47 Exciting future
95:32 Immortality

Whisper Transcript

00:00:00.000 | The following is a conversation with David Chalmers.
00:00:02.880 | He's a philosopher and cognitive scientist
00:00:05.320 | specializing in areas of philosophy of mind,
00:00:08.060 | philosophy of language, and consciousness.
00:00:11.000 | He's perhaps best known for formulating
00:00:13.280 | the hard problem of consciousness,
00:00:15.120 | which could be stated as,
00:00:16.520 | "Why does the feeling which accompanies awareness
00:00:18.880 | "of sensory information exist at all?"
00:00:21.760 | Consciousness is almost entirely a mystery.
00:00:25.480 | Many people who worry about AI safety and ethics
00:00:28.600 | believe that in some form consciousness can
00:00:31.820 | and should be engineered into AI systems of the future.
00:00:35.440 | So while there's much mystery, disagreement,
00:00:38.280 | and discoveries yet to be made about consciousness,
00:00:40.880 | these conversations,
00:00:42.440 | while fundamentally philosophical in nature,
00:00:45.240 | may nevertheless be very important for engineers
00:00:48.040 | of modern AI systems to engage in.
00:00:50.240 | This is the Artificial Intelligence Podcast.
00:00:53.840 | If you enjoy it, subscribe on YouTube,
00:00:56.160 | give it five stars on Apple Podcast,
00:00:57.940 | support it on Patreon,
00:00:59.240 | or simply connect with me on Twitter,
00:01:01.280 | @LexFridman, spelled F-R-I-D-M-A-N.
00:01:04.480 | As usual, I'll do one or two minutes of ads now,
00:01:08.320 | and never any ads in the middle
00:01:09.640 | that can break the flow of the conversation.
00:01:11.880 | I hope that works for you,
00:01:13.200 | and doesn't hurt the listening experience.
00:01:15.560 | This show is presented by Cash App,
00:01:17.520 | the number one finance app in the App Store.
00:01:19.800 | When you get it, use code LEXPODCAST.
00:01:23.240 | Cash App lets you send money to friends,
00:01:25.440 | buy Bitcoin, and invest in the stock market
00:01:27.880 | with as little as $1.
00:01:29.840 | Brokerage services are provided by Cash App Investing,
00:01:32.600 | a subsidiary of Square, and member SIPC.
00:01:35.980 | Since Cash App does fractional share trading,
00:01:38.280 | let me mention that the order execution algorithm
00:01:40.840 | that works behind the scenes
00:01:42.080 | to create the abstraction of fractional orders
00:01:44.880 | is an algorithmic marvel.
00:01:46.720 | So big props to the Cash App engineers
00:01:49.160 | for solving a hard problem that in the end
00:01:51.600 | provides an easy interface that takes a step up
00:01:54.200 | to the next layer of abstraction over the stock market,
00:01:57.080 | making trading more accessible for new investors
00:01:59.920 | and diversification much easier.
00:02:02.760 | If you get Cash App from the App Store or Google Play
00:02:05.120 | and use the code LEXPODCAST, you'll get $10,
00:02:08.840 | and Cash App will also donate $10 to FIRST,
00:02:11.660 | one of my favorite organizations
00:02:13.460 | that is helping to advance robotics and STEM education
00:02:16.620 | for young people around the world.
00:02:18.640 | And now, here's my conversation with David Chalmers.
00:02:22.160 | Do you think we're living in a simulation?
00:02:25.920 | - I don't rule it out.
00:02:27.420 | There's probably gonna be a lot of simulations
00:02:29.720 | in the history of the cosmos.
00:02:31.220 | If the simulation is designed well enough,
00:02:34.700 | it'll be indistinguishable from a non-simulated reality.
00:02:39.700 | And although we could keep searching for evidence
00:02:43.160 | that we're not in a simulation,
00:02:46.000 | any of that evidence in principle could be simulated.
00:02:48.600 | So I think it's a possibility.
00:02:50.560 | - But do you think the thought experiment
00:02:52.040 | is interesting or useful to calibrate
00:02:55.960 | how we think about the nature of reality?
00:02:58.720 | - Yeah, I definitely think it's interesting and useful.
00:03:01.000 | In fact, I'm actually writing a book about this right now,
00:03:03.600 | all about the simulation idea,
00:03:05.960 | using it to shed light on a whole bunch
00:03:08.040 | of philosophical questions.
00:03:10.320 | So the big one is how do we know anything
00:03:13.100 | about the external world?
00:03:14.680 | Descartes said maybe you're being fooled by an evil demon
00:03:19.440 | who's stimulating your brain into thinking
00:03:21.760 | all this stuff is real when in fact it's all made up.
00:03:25.880 | Well, the modern version of that is how do you know
00:03:29.280 | you're not in a simulation?
00:03:30.880 | Then the thought is if you're in a simulation,
00:03:33.720 | none of this is real.
00:03:34.560 | So that's teaching us something about knowledge.
00:03:37.600 | How do you know about the external world?
00:03:39.480 | I think there's also really interesting questions
00:03:41.120 | about the nature of reality right here.
00:03:43.880 | If we are in a simulation, is all this real?
00:03:46.880 | Is there really a table here?
00:03:48.160 | Is there really a microphone?
00:03:49.280 | Do I really have a body?
00:03:50.840 | The standard view would be no, we don't.
00:03:54.200 | None of this would be real.
00:03:55.600 | My view is actually that's wrong,
00:03:56.880 | and even if we are in a simulation, all of this is real.
00:03:59.400 | That's why I call this reality 2.0, new version of reality,
00:04:02.520 | different version of reality, still reality.
00:04:05.440 | - So what's the difference between quote unquote real world
00:04:09.960 | and the world that we perceive?
00:04:12.560 | So we interact with the world by perceiving it.
00:04:18.800 | It only really exists through the window
00:04:22.920 | of our perception system and in our mind.
00:04:25.800 | So what's the difference between something
00:04:27.360 | that's quote unquote real, that exists perhaps
00:04:30.400 | without us being there, and the world as you perceive it?
00:04:35.400 | - Well, the world as we perceive it is a very simplified
00:04:39.160 | and distorted version of what's going on underneath.
00:04:42.760 | We already know that from just thinking about science.
00:04:45.200 | You know, you don't see too many,
00:04:46.520 | obviously quantum mechanical effects
00:04:48.760 | in what we perceive, but we still know quantum mechanics
00:04:51.640 | is going on under all things.
00:04:53.680 | So I like to think the world we perceive
00:04:55.280 | is this very kind of simplified picture of colors
00:05:00.280 | and shapes existing in space and so on.
00:05:04.600 | We know there's a, that's what the philosopher
00:05:07.040 | Wilfrid Sellars called the manifest image,
00:05:09.720 | the world as it seems to us.
00:05:10.920 | We already know underneath all that
00:05:12.560 | is a very different scientific image
00:05:14.680 | with atoms or quantum wave functions
00:05:18.160 | or super strings or whatever the latest thing is.
00:05:22.320 | And that's the ultimate scientific reality.
00:05:24.840 | So I think of the simulation idea
00:05:27.280 | as basically another hypothesis
00:05:29.600 | about what the ultimate say quasi scientific
00:05:32.560 | or metaphysical reality is going on underneath the world
00:05:36.800 | or the manifest image.
00:05:37.640 | The world of the manifest image is this very simple thing
00:05:41.240 | that we interact with that's neutral
00:05:43.200 | on the underlying stuff of reality. Science
00:05:47.440 | could help tell us about that.
00:05:48.840 | Maybe philosophy could help tell us about that too.
00:05:51.360 | And if we eventually take the red pill
00:05:53.360 | and find out we're in a simulation,
00:05:54.840 | my view is that's just another view
00:05:56.720 | about what reality is made of.
00:05:58.720 | The philosopher Immanuel Kant said,
00:06:00.840 | "What is the nature of the thing in itself?"
00:06:02.720 | I've got a glass here and it's got all these,
00:06:05.320 | it appears to me a certain way, a certain shape,
00:06:07.920 | it's liquid, it's clear.
00:06:10.120 | And he said, "What is the nature of the thing in itself?"
00:06:14.160 | Well, I think of the simulation idea,
00:06:15.480 | it's a hypothesis about the nature of the thing in itself.
00:06:18.520 | It turns out if we're in a simulation,
00:06:20.600 | the thing in itself, nature of this glass,
00:06:22.640 | it's okay, it's actually a bunch of data structures
00:06:25.040 | running on a computer in the next universe up.
00:06:28.400 | - Yeah, that's what people tend to do
00:06:30.360 | when they think about simulation.
00:06:31.600 | They think about our modern computers
00:06:34.560 | and somehow trivially, crudely just scaled up in some sense.
00:06:39.560 | But do you think the simulation,
00:06:44.720 | I mean, in order to actually simulate
00:06:47.520 | something as complicated as our universe
00:06:50.400 | that's made up of molecules and atoms
00:06:53.040 | and particles and quarks and maybe even strings,
00:06:57.220 | all of that requires something just infinitely
00:07:00.760 | many orders of magnitude more of scale and complexity.
00:07:05.760 | Do you think we're even able to even conceptualize
00:07:12.320 | what it would take to simulate our universe?
00:07:16.040 | Or does it just slip into this idea
00:07:18.720 | that you basically have to build a universe,
00:07:21.640 | something so big to simulate it?
00:07:23.720 | Does it get into this fuzzy area that's not useful at all?
00:07:28.880 | - Yeah, I mean, our universe is obviously
00:07:31.360 | incredibly complicated and for us within our universe
00:07:36.280 | to build a simulation of a universe as complicated as ours
00:07:40.740 | is going to have obvious problems here.
00:07:42.420 | If the universe is finite,
00:07:44.040 | there's just no way that's going to work.
00:07:45.780 | Maybe there's some cute way to make it work
00:07:48.100 | if the universe is infinite,
00:07:51.200 | maybe an infinite universe could somehow simulate
00:07:53.660 | a copy of itself, but that's going to be hard.
00:07:57.180 | Nonetheless, just say we are in a simulation,
00:07:59.820 | I think there's no particular reason
00:08:01.140 | why we have to think the simulating universe
00:08:04.040 | has to be anything like ours.
00:08:06.220 | - You've said before that it might be,
00:08:10.020 | so you could think of it, and turtles all the way down,
00:08:12.700 | you could think of the simulating universe
00:08:15.860 | different than ours, but we ourselves
00:08:17.780 | could also create another simulating universe.
00:08:20.280 | So you said that there could be these
00:08:21.700 | kind of levels of universes,
00:08:24.220 | and you've also mentioned this hilarious idea,
00:08:27.120 | maybe tongue in cheek, maybe not,
00:08:29.140 | that there may be simulations within simulations,
00:08:31.860 | arbitrarily stacked levels, and that there may be,
00:08:35.200 | that we may be in level 42.
00:08:37.860 | - Oh yeah. - Along those stacks,
00:08:39.320 | referencing The Hitchhiker's Guide to the Galaxy.
00:08:41.860 | If we're indeed in a simulation within a simulation,
00:08:45.940 | at level 42, what do you think level zero looks like?
00:08:50.940 | - I would expect that level zero is truly enormous.
00:08:55.240 | I mean, not just, if it's finite,
00:08:57.780 | it's some extraordinarily large finite capacity,
00:09:01.840 | much more likely it's infinite.
00:09:03.200 | Maybe it's got some very high set-theoretic cardinality
00:09:06.820 | that enables it to support just any number of simulations.
00:09:11.380 | So high degree of infinity at level zero,
00:09:14.360 | slightly smaller degree of infinity at level one,
00:09:18.880 | so by the time you get down to us at level 42,
00:09:21.480 | maybe there's plenty of room for lots of simulations
00:09:25.080 | of finite capacity.
00:09:27.780 | If the top universe is only a small finite capacity,
00:09:34.280 | then obviously that's gonna put
00:09:35.360 | very, very serious limits on how many simulations
00:09:38.640 | you're gonna be able to get running.
00:09:40.320 | So I think we can certainly confidently say
00:09:42.720 | that if we're at level 42,
00:09:44.320 | then the top level's pretty damn big.
00:09:47.120 | - So it gets more and more constrained
00:09:49.120 | as we get down levels, more and more simplified
00:09:52.220 | and constrained and limited in resources.
00:09:54.360 | - Yeah, we still have plenty of capacity here.
00:09:56.520 | What was it, Feynman said?
00:09:58.320 | He said there's plenty of room at the bottom.
00:10:00.800 | You know, we're still a number of levels
00:10:03.400 | above the degree of where there's room
00:10:05.320 | for fundamental physical computing capacity,
00:10:08.400 | quantum computing capacity at the bottom level.
00:10:11.040 | So we've got plenty of room to play with and make,
00:10:14.280 | we probably have plenty of room for simulations
00:10:16.520 | of pretty sophisticated universes,
00:10:19.080 | perhaps none as complicated as our universe,
00:10:22.760 | unless our universe is infinite,
00:10:25.260 | but still at the very least
00:10:27.240 | for pretty serious finite universes,
00:10:29.120 | but maybe universes somewhat simpler than ours,
00:10:31.760 | unless of course we're prepared to take certain shortcuts
00:10:35.160 | in the simulation,
00:10:36.040 | which might then increase the capacity significantly.
00:10:38.680 | - Do you think the human mind, us people,
00:10:42.200 | in terms of the complexity of simulation
00:10:44.680 | is at the height of what the simulation
00:10:47.200 | might be able to achieve?
00:10:48.600 | Like if you look at incredible entities
00:10:51.240 | that could be created in this universe of ours,
00:10:54.880 | do you have an intuition about
00:10:56.800 | how incredible human beings are on that scale?
00:11:00.600 | - I think we're pretty impressive,
00:11:02.400 | but we're not that impressive.
00:11:03.920 | - Are we above average?
00:11:06.040 | - I mean, I think kind of human beings
00:11:08.000 | are at a certain point in the scale of intelligence,
00:11:11.400 | which made many things possible.
00:11:14.080 | You know, you get through evolution,
00:11:16.760 | through single cell organisms,
00:11:19.400 | through fish and mammals and primates,
00:11:22.840 | and something happens once you get to human beings.
00:11:25.960 | We've just reached that level
00:11:27.760 | where we get to develop language,
00:11:29.600 | we get to develop certain kinds of culture,
00:11:31.720 | and we get to develop certain kinds of collective thinking
00:11:35.040 | that has enabled all this amazing stuff to happen,
00:11:38.520 | science and literature and engineering and culture and so on.
00:11:43.520 | So we are just at the beginning of that
00:11:46.280 | on the evolutionary threshold.
00:11:47.760 | It's kind of like we just got there,
00:11:49.520 | you know, who knows a few thousand
00:11:51.640 | or tens of thousands of years ago.
00:11:54.400 | So we're probably just at the very beginning
00:11:56.440 | for what's possible there.
00:11:57.680 | So I'm inclined to think among the scale
00:12:01.040 | of intelligent beings,
00:12:02.360 | we're somewhere very near the bottom.
00:12:05.120 | I would expect that, for example,
00:12:06.280 | if we're in a simulation,
00:12:08.760 | then the simulators who created us
00:12:10.920 | have got the capacity to be far more sophisticated.
00:12:13.960 | If we're at level 42,
00:12:15.360 | who knows what the ones at level zero are like.
00:12:17.720 | - It's also possible that this is the epitome
00:12:22.720 | of what is possible to achieve.
00:12:24.480 | So we as human beings see ourselves maybe as flawed,
00:12:27.280 | see all the constraints, all the limitations,
00:12:29.680 | but maybe that's the magical, the beautiful thing.
00:12:32.360 | Maybe those limitations are the essential elements
00:12:36.000 | for an interesting sort of that edge of chaos,
00:12:38.960 | that interesting existence.
00:12:41.000 | That if you make us much more intelligent,
00:12:43.760 | if you make us much more powerful
00:12:46.920 | in any kind of dimension of performance,
00:12:50.320 | maybe you lose something fundamental
00:12:52.520 | that makes life worth living.
00:12:55.080 | So you kind of have this optimistic view
00:12:57.920 | that we're this little baby,
00:13:00.120 | that there's so much growth and potential,
00:13:03.000 | but this could also be it.
00:13:05.240 | This is the most amazing thing is us.
00:13:09.600 | - Maybe what you're saying is consistent
00:13:11.240 | with what I'm saying.
00:13:12.080 | I mean, we could still have levels of intelligence
00:13:14.360 | far beyond us,
00:13:15.640 | but maybe those levels of intelligence,
00:13:17.160 | on your view, would be kind of boring.
00:13:18.920 | And we kind of get so good at everything
00:13:21.360 | that life suddenly becomes unidimensional.
00:13:24.160 | So we're just inhabiting this one spot
00:13:26.800 | of maximal romanticism in the history of evolution.
00:13:30.640 | You get to humans and it's like, yeah,
00:13:32.080 | and then years to come,
00:13:33.360 | our super intelligent descendants
00:13:34.920 | are gonna look back at us and say,
00:13:37.480 | those were the days when they just hit
00:13:39.640 | the point of inflection and life was interesting.
00:13:42.480 | I am an optimist, so I'd like to think
00:13:44.040 | that if there is super intelligence somewhere
00:13:48.040 | in the future, they'll figure out how to make life
00:13:50.800 | super interesting and super romantic.
00:13:52.840 | - Well, you know what they're gonna do.
00:13:54.560 | So what they're gonna do is they realize
00:13:56.400 | how boring life is when you're super intelligent.
00:13:58.720 | So they create a new level of a simulation
00:14:02.560 | and sort of live through the things they've created
00:14:05.680 | by watching them stumble about in their flawed ways.
00:14:10.480 | So maybe that's, so you create a new level
00:14:13.000 | of a simulation every time you get really bored
00:14:16.440 | with how smart and-
00:14:17.880 | - This would be kind of sad though,
00:14:19.080 | 'cause surely the peak of their existence
00:14:20.800 | would be like watching simulations for entertainment.
00:14:23.440 | It's like saying the peak of our existence now is Netflix.
00:14:26.560 | - No. - It's all right.
00:14:27.640 | - A flip side of that could be the peak of our existence
00:14:31.160 | for many people having children and watching them grow.
00:14:34.280 | That becomes very meaningful.
00:14:35.800 | - Okay, you create a simulation,
00:14:37.160 | it's like creating a family.
00:14:38.600 | - Creating like, well, any kind of creation
00:14:40.880 | is kind of a powerful act.
00:14:43.800 | Do you think it's easier to simulate
00:14:45.480 | the mind or the universe?
00:14:47.760 | So I've heard several people, including Nick Bostrom,
00:14:51.960 | think about ideas of, you know,
00:14:53.880 | maybe you don't need to simulate the universe,
00:14:55.600 | you can just simulate the human mind.
00:14:57.440 | Or in general, just the distinction
00:15:00.400 | between simulating the entirety of it,
00:15:02.600 | the entirety of the physical world,
00:15:04.600 | or just simulating the mind.
00:15:06.080 | Which one do you see as more challenging?
00:15:09.760 | - Well, I think in some sense the answer is obvious.
00:15:12.400 | It has to be simpler to simulate the mind
00:15:15.040 | than to simulate the universe,
00:15:16.480 | because the mind is part of the universe.
00:15:18.480 | And in order to fully simulate the universe,
00:15:20.520 | you're gonna have to simulate the mind.
00:15:22.600 | So unless we're talking about partial simulations.
00:15:25.280 | - And I guess the question is, which comes first?
00:15:27.560 | Does the mind come before the universe,
00:15:29.760 | or does the universe come before the mind?
00:15:32.520 | So the mind could just be an emergent phenomena
00:15:36.600 | in this universe.
00:15:37.920 | So simulation is an interesting thing.
00:15:44.040 | It's not like creating a simulation, perhaps,
00:15:47.360 | requires you to program every single thing
00:15:50.360 | that happens in it.
00:15:51.760 | It's just defining a set of initial conditions
00:15:54.160 | and rules based on which it behaves.
00:15:57.780 | Simulating the mind requires you to have a little bit more,
00:16:03.800 | we're now in a little bit of a crazy land,
00:16:07.280 | but it requires you to understand
00:16:10.200 | the fundamentals of cognition,
00:16:11.840 | perhaps of consciousness, of perception,
00:16:15.000 | of everything like that, that's not created
00:16:20.000 | through some kind of emergence from basic physics laws,
00:16:25.440 | but more requires you to actually understand
00:16:27.920 | the fundamentals of the mind.
00:16:29.840 | - How about if we said simulate the brain,
00:16:32.000 | rather than the mind?
00:16:33.960 | So the brain is just a big physical system.
00:16:36.040 | The universe is a giant physical system.
00:16:38.600 | To simulate the universe, at the very least,
00:16:40.080 | you're gonna have to simulate the brains
00:16:42.640 | as well as all the other physical systems within it.
00:16:46.120 | And it's not obvious that the problems are any worse
00:16:50.920 | for the brain than for, it's a particularly complex
00:16:55.280 | physical system, but if we can simulate
00:16:56.880 | arbitrary physical systems, we can simulate brains.
00:16:59.880 | There is this further question of whether
00:17:02.120 | when you simulate a brain, will that bring along
00:17:05.200 | all the features of the mind with it?
00:17:07.360 | Like will you get consciousness?
00:17:08.880 | Will you get thinking?
00:17:09.960 | Will you get free will?
00:17:11.600 | And so on, and that's something philosophers
00:17:14.360 | have argued over for years.
00:17:17.080 | My own view is if you simulate the brain well enough,
00:17:20.080 | that will also simulate the mind,
00:17:22.640 | but yeah, there's plenty of people who would say no.
00:17:24.860 | You'd merely get like a zombie system,
00:17:27.160 | a simulation of a brain without any true consciousness.
00:17:31.300 | - But for you, you put together a brain,
00:17:33.440 | the consciousness comes with it, arises.
00:17:36.320 | - Yeah, I don't think it's obvious.
00:17:38.640 | - That's your intuition.
00:17:39.680 | - My view is roughly that, yeah,
00:17:41.320 | what is responsible for consciousness,
00:17:43.100 | it's in the patterns of information processing and so on,
00:17:46.960 | rather than say the biology that it's made of.
00:17:50.480 | There's certainly plenty of people out there
00:17:51.800 | who think consciousness has to be say biological.
00:17:54.520 | So if you merely replicate the patterns
00:17:56.720 | of information processing in a non-biological substrate,
00:17:59.680 | you'll miss what's crucial for consciousness.
00:18:02.440 | I mean, I just don't think there's any particular reason
00:18:04.320 | to think that biology is special here.
00:18:07.400 | You can imagine substituting the biology
00:18:09.600 | for non-biological systems, say silicon circuits,
00:18:13.720 | that play the same role.
00:18:15.120 | The behavior will continue to be the same.
00:18:17.640 | And I think just thinking about what is the true,
00:18:21.280 | when I think about the connection,
00:18:22.280 | the isomorphisms between consciousness and the brain,
00:18:25.520 | the deepest connections to me seem to connect consciousness
00:18:28.280 | to patterns of information processing,
00:18:30.280 | not to specific biology.
00:18:32.340 | So I at least adopt it as my working hypothesis
00:18:35.160 | that basically it's the computation and the information
00:18:38.120 | that matters for consciousness.
00:18:39.520 | At the same time, we don't understand consciousness,
00:18:41.760 | so all this could be wrong.
00:18:43.640 | - So the computation, the flow, the processing,
00:18:48.120 | manipulation of information,
00:18:49.800 | the process is where the consciousness,
00:18:54.460 | the software is where the consciousness comes from,
00:18:56.480 | not the hardware.
00:18:57.860 | - Roughly the software, yeah.
00:18:59.200 | The patterns of information processing,
00:19:01.360 | at least in the hardware, which we could view as software.
00:19:05.680 | It may not be something you can just like program
00:19:07.360 | and load and erase and so on in the way we can
00:19:11.360 | with ordinary software,
00:19:12.920 | but it's something at the level of information processing
00:19:15.120 | rather than at the level of implementation.
00:19:17.960 | - So on that, what do you think of the experience of self,
00:19:22.480 | just the experience of the world in a virtual world,
00:19:26.040 | in virtual reality?
00:19:27.920 | Is it possible that we can create
00:19:29.880 | sort of offsprings of our consciousness
00:19:36.040 | by existing in a virtual world long enough?
00:19:38.840 | So yeah, can we be conscious in the same kind of deep way
00:19:43.840 | that we are in this real world
00:19:47.640 | by hanging out in a virtual world?
00:19:51.160 | - Yeah, well, the kind of virtual worlds we have now
00:19:54.160 | are interesting but limited in certain ways.
00:19:58.040 | In particular, they rely on us having a brain and so on,
00:20:01.720 | which is outside the virtual world.
00:20:03.600 | Maybe I'll strap on my VR headset
00:20:06.720 | or just hang out in a virtual world on a screen,
00:20:10.840 | but my brain and then my physical environment
00:20:15.600 | might be simulated if I'm in a virtual world.
00:20:17.680 | But right now, there's no attempt to simulate my brain.
00:20:21.360 | There might be some non-player characters
00:20:24.160 | in these virtual worlds that have simulated
00:20:27.480 | cognitive systems of certain kinds
00:20:29.080 | that dictate their behavior,
00:20:30.600 | but mostly they're pretty simple right now.
00:20:33.120 | I mean, some people are trying to combine,
00:20:34.640 | put a bit of AI in their non-player characters
00:20:36.880 | to make them smarter.
00:20:39.600 | But for now, inside virtual worlds,
00:20:42.280 | the actual thinking is interestingly distinct
00:20:45.360 | from the physics of those virtual worlds.
00:20:47.160 | In a way, actually, I like to think
00:20:48.480 | this is kind of reminiscent of the way
00:20:49.720 | that Descartes thought our physical world was.
00:20:52.240 | There's physics and there's the mind and they're separate.
00:20:55.200 | Now we think the mind is somehow connected
00:20:58.760 | to physics pretty deeply.
00:20:59.920 | But in these virtual worlds,
00:21:01.080 | there's a physics of a virtual world
00:21:02.960 | and then there's this brain
00:21:04.080 | which is totally outside the virtual world
00:21:05.880 | that controls it and interacts with it.
00:21:07.600 | When anyone exercises agency in a video game,
00:21:11.200 | that's actually somebody outside the virtual world
00:21:13.520 | moving a controller,
00:21:14.880 | controlling the interaction of things
00:21:16.640 | inside the virtual world.
00:21:18.200 | So right now in virtual worlds,
00:21:20.400 | the mind is somehow outside the world.
00:21:22.320 | But you could imagine in the future,
00:21:25.000 | once we have developed serious AI,
00:21:29.040 | artificial general intelligence, and so on,
00:21:31.520 | and then we could come to a virtual world
00:21:34.480 | which have enough sophistication,
00:21:35.760 | you could actually simulate a brain
00:21:38.080 | or have a genuine AGI,
00:21:41.640 | which would then presumably be able to act
00:21:43.720 | in equally sophisticated ways,
00:21:45.920 | maybe even more sophisticated ways
00:21:47.920 | inside the virtual world
00:21:49.400 | to how it might in the physical world.
00:21:51.640 | And then the question is gonna come along,
00:21:53.400 | that'll be kind of a VR,
00:21:56.040 | a virtual world internal intelligence.
00:21:59.560 | And then the question is,
00:22:00.400 | could they have consciousness, experience,
00:22:02.680 | intelligence, free will, all the things that we have?
00:22:06.240 | And again, my view is, I don't see why not.
00:22:08.840 | - To linger on it a little bit,
00:22:10.440 | I find virtual reality really incredibly powerful,
00:22:14.400 | just even the crude virtual reality we have now.
00:22:16.800 | Perhaps there's psychological effects
00:22:21.840 | that make some people more amenable
00:22:23.960 | to virtual worlds than others,
00:22:25.280 | but I find myself wanting to stay in virtual worlds
00:22:27.800 | for the most part. - You do?
00:22:28.920 | - Yes.
00:22:29.760 | - With a headset or on a desktop?
00:22:32.080 | - No, with a headset.
00:22:33.040 | - Really interesting,
00:22:33.880 | 'cause I am totally addicted to using the internet
00:22:37.600 | and things on a desktop.
00:22:40.680 | But when it comes to VR for the headset,
00:22:43.040 | I don't typically use it for more than 10 or 20 minutes.
00:22:46.160 | There's something just slightly aversive about it, I find.
00:22:48.760 | So I don't right now,
00:22:50.160 | even though I have Oculus Rift and Oculus Quest
00:22:52.960 | and HTC Vive and Samsung this and that.
00:22:55.560 | - You just don't wanna stay in that world for long.
00:22:57.360 | - Not for extended periods.
00:22:58.760 | - Do you actually find yourself
00:23:00.000 | hanging out in that? - Something about 'em,
00:23:02.520 | it's both a combination of just imagination
00:23:06.040 | and considering the possibilities
00:23:08.000 | of where this goes in the future.
00:23:10.640 | It feels like I want to almost prepare my brain for,
00:23:15.640 | I wanna explore sort of Disneyland
00:23:19.680 | when it's first being built in the early days.
00:23:23.720 | And it feels like I'm walking around
00:23:27.400 | almost imagining the possibilities
00:23:31.440 | and something through that process
00:23:33.160 | allows my mind to really enter into that world.
00:23:36.040 | But you say that the brain's external to that virtual world.
00:23:41.040 | It is, strictly speaking, true.
00:23:45.120 | But--
00:23:46.640 | - If you're in VR and you do brain surgery on an avatar,
00:23:50.640 | and you're gonna open up that skull,
00:23:51.840 | what are you gonna find?
00:23:53.040 | Sorry, nothing there. - Nothing.
00:23:54.280 | - The brain is elsewhere.
00:23:55.920 | You don't think it's possible to kind of separate them.
00:23:59.560 | And I don't mean in a sense like Descartes,
00:24:02.080 | like a hard separation,
00:24:04.520 | but basically, do you think it's possible
00:24:08.160 | with the brain outside of the virtual,
00:24:11.880 | when you're wearing a headset,
00:24:13.380 | create a new consciousness for prolonged periods of time?
00:24:19.960 | Really feel, like really experience,
00:24:23.320 | like forget that your brain is outside.
00:24:26.320 | - So this is, okay, this is gonna be the case
00:24:27.840 | where the brain is still outside.
00:24:29.240 | - Still outside.
00:24:30.080 | - But could living in the VR,
00:24:31.880 | I mean, we already find this, right, with video games.
00:24:35.200 | - Exactly.
00:24:36.040 | - They're completely immersive,
00:24:37.880 | and you get taken up by living in those worlds,
00:24:40.680 | and it becomes your reality for a while.
00:24:43.240 | - So they're not completely immersive,
00:24:44.800 | they're just very immersive.
00:24:46.040 | Completely immersive.
00:24:46.880 | - You don't forget the external world, no.
00:24:48.840 | - Exactly, so that's what I'm asking.
00:24:50.960 | It's almost possible to really forget the external world,
00:24:55.680 | really, really immerse yourself.
00:24:58.440 | - To forget completely?
00:24:59.840 | Why would we forget?
00:25:00.680 | You know, we've got pretty good memories.
00:25:02.200 | Maybe you can stop paying attention to the external world,
00:25:06.000 | but you know, this already happens a lot.
00:25:07.540 | I go to work, and maybe I'm not paying attention
00:25:10.000 | to my home life, I go to a movie, and I'm immersed in that.
00:25:14.520 | So that degree of immersion, absolutely,
00:25:17.120 | but we still have the capacity to remember it.
00:25:19.640 | To completely forget the external world,
00:25:21.960 | I'm thinking that would probably take some,
00:25:23.920 | I don't know, some pretty serious drugs or something
00:25:25.760 | to make your brain do that.
00:25:26.600 | - But is it possible?
00:25:28.960 | So, I mean, I guess I'm getting at,
00:25:31.040 | is consciousness truly a property
00:25:35.640 | that's tied to the physical brain?
00:25:38.520 | Or can you create sort of different offspring copies
00:25:45.520 | of consciousnesses based on the worlds that you enter?
00:25:49.440 | - Well, the way we're doing it now,
00:25:51.560 | at least with a standard VR, there's just one brain,
00:25:54.920 | interacts with the physical world, plays a video game,
00:25:58.040 | puts on a video headset, interacts with this virtual world.
00:26:01.720 | And I think we'd typically say
00:26:02.860 | there's one consciousness here
00:26:04.800 | that nonetheless undergoes different environments,
00:26:07.540 | takes on different characters in different environments.
00:26:11.880 | This is already something that happens
00:26:13.160 | in the non-virtual world.
00:26:14.160 | You know, I might interact one way in my home life,
00:26:17.480 | my work life, social life, and so on.
00:26:21.160 | So at the very least, that will happen
00:26:23.960 | in a virtual world very naturally.
00:26:25.880 | People sometimes adopt the character of avatars
00:26:30.320 | very different from themselves,
00:26:32.360 | maybe even a different gender, different race,
00:26:34.760 | different social background.
00:26:36.960 | So that much is certainly possible.
00:26:38.760 | I would see that as a single consciousness
00:26:41.120 | as taking on different personas.
00:26:43.320 | If you want literal splitting of consciousness
00:26:46.240 | into multiple copies,
00:26:47.360 | I think it's gonna take something more radical than that.
00:26:50.600 | Like maybe you can run different simulations
00:26:53.600 | of your brain in different realities
00:26:56.080 | and then expose them to different histories.
00:26:57.840 | And then, you know, you'd split yourself
00:27:00.160 | into 10 different simulated copies,
00:27:01.880 | which then undergo different environments
00:27:04.120 | and then ultimately do become
00:27:05.280 | 10 very different consciousnesses.
00:27:07.720 | Maybe that could happen,
00:27:08.600 | but now we're not talking about something
00:27:10.440 | that's possible in the near term.
00:27:12.240 | We're gonna have to have brain simulations
00:27:14.040 | and AGI for that to happen.
00:27:16.240 | - Got it, so before any of that happens,
00:27:20.200 | it's fundamentally, you see it as a singular consciousness,
00:27:23.760 | even though it's experiencing different environments,
00:27:26.400 | virtual or not, it's still connected
00:27:28.840 | to the same set of memories, same set of experiences,
00:27:31.640 | and therefore one sort of joint conscious system.
00:27:36.640 | - Yeah, or at least no more multiple
00:27:40.560 | than the kind of multiple consciousness
00:27:42.120 | that we get from inhabiting different environments
00:27:45.000 | in a non-virtual world.
00:27:46.720 | - So you said as a child, you were a music color--
00:27:51.480 | - Synesthete.
00:27:52.320 | - Synesthete, so where songs had colors for you.
00:27:55.360 | So what songs had what colors?
00:27:59.720 | - You know, this is funny.
00:28:00.920 | I didn't pay much attention to this at the time,
00:28:04.000 | but I'd listen to a piece of music
00:28:05.320 | and I'd get some kind of imagery of a kind of color.
00:28:11.360 | The weird thing is mostly they were kind of murky,
00:28:16.040 | dark greens and olive browns,
00:28:18.560 | and the colors weren't all that interesting.
00:28:21.600 | I don't know what the reason is.
00:28:22.440 | I mean, my theory is that maybe it's like different chords
00:28:25.280 | and tones provided different colors,
00:28:27.720 | and they all tended to get mixed together
00:28:29.280 | into these somewhat uninteresting browns and greens,
00:28:33.200 | but every now and then,
00:28:34.880 | there'd be something that had a really pure color.
00:28:37.360 | So there's just a few that I remember.
00:28:39.360 | There was a "Here, There, and Everywhere"
00:28:41.680 | by the Beatles was bright red.
00:28:43.840 | It has this very distinctive tonality
00:28:46.360 | and its chord structure at the beginning.
00:28:49.680 | So that was bright red.
00:28:50.880 | There was this song by the Alan Parsons Project
00:28:53.960 | called "Ammonia Avenue" that was kind of a pure blue.
00:28:58.960 | Anyway, I've got no idea how this happened.
00:29:02.000 | I didn't even pay that much attention
00:29:03.120 | until it went away when I was about 20.
00:29:05.400 | This synesthesia often goes away.
00:29:07.480 | - So is it purely just the perception of a particular color
00:29:10.960 | or was there a positive or negative experience with it?
00:29:14.360 | Like was blue associated with a positive
00:29:16.400 | and red with a negative?
00:29:17.920 | Or is it simply the perception of color
00:29:20.960 | associated with some characteristic of the song?
00:29:23.440 | - For me, I don't remember a lot of association
00:29:25.800 | with emotion or with value.
00:29:28.360 | It was just this kind of weird and interesting fact.
00:29:30.920 | I mean, at the beginning,
00:29:31.760 | I thought this was something that happened to everyone.
00:29:33.920 | Songs of colors, maybe I mentioned it once or twice,
00:29:36.440 | and people said, "Nope."
00:29:39.160 | - Nope.
00:29:40.960 | - But it was like, I thought it was kind of cool
00:29:42.040 | when there was one that had one of these
00:29:43.400 | especially pure colors, but only much later,
00:29:46.320 | once I became a grad student thinking about the mind,
00:29:49.000 | that I read about this phenomenon called synesthesia.
00:29:51.560 | And it's like, "Hey, that's what I had."
00:29:53.960 | And now I occasionally talk about it in my classes,
00:29:56.600 | in intro class, and it still happens sometimes.
00:29:58.640 | A student comes up and says, "Hey, I have that.
00:30:01.080 | "I never knew about that.
00:30:01.920 | "I never knew it had a name."
00:30:04.560 | - You said that it went away at age 20 or so.
00:30:08.120 | And that you have a journal entry from around then
00:30:13.080 | saying, "Songs don't have colors anymore.
00:30:15.200 | "What happened?"
00:30:16.040 | - What happened?
00:30:16.880 | Yeah, I was definitely sad that it was gone.
00:30:18.840 | In retrospect, it was like, "Hey, that's cool.
00:30:20.620 | "The colors have gone."
00:30:21.960 | - Yeah, do you, can you think about that for a little bit?
00:30:25.080 | Do you miss those experiences?
00:30:27.040 | 'Cause it's a fundamentally different sets of experiences
00:30:31.760 | that you no longer have.
00:30:34.160 | Or is it just a nice thing to have had?
00:30:38.400 | You don't see them as that fundamentally different
00:30:40.680 | than you visiting a new country
00:30:42.960 | and experiencing new environments.
00:30:45.000 | - I guess for me, when I had these experiences,
00:30:47.480 | they were somewhat marginal.
00:30:49.000 | They were like a little bonus kind of experience.
00:30:51.680 | I know there are people who have much more serious forms
00:30:55.160 | of synesthesia than this,
00:30:57.040 | for whom it's absolutely central to their lives.
00:30:59.440 | I know people who, when they experience new people,
00:31:01.840 | they have colors, maybe they have tastes, and so on.
00:31:04.800 | Every time they see writing, it has colors.
00:31:08.360 | Some people, whenever they hear music,
00:31:09.680 | it's got a certain really rich color pattern.
00:31:14.480 | And for some synesthetes, it's absolutely central.
00:31:17.480 | I think if they lost it, they'd be devastated.
00:31:20.240 | Again, for me, it was a very, very mild form of synesthesia.
00:31:24.800 | And it's like, yeah,
00:31:25.640 | it's like those interesting experiences
00:31:27.440 | - Yeah.
00:31:29.520 | - that you might get under different altered states
00:31:31.560 | of consciousness and so on.
00:31:33.360 | It's kind of cool, but not necessarily
00:31:36.200 | the single most important experiences in your life.
00:31:39.320 | - Got it.
00:31:40.160 | So let's try to go to the very simplest question
00:31:43.920 | that you've answered by any time,
00:31:45.120 | but perhaps the simplest things can help us reveal,
00:31:48.560 | even in time, some new ideas.
00:31:51.640 | So what, in your view, is consciousness?
00:31:55.640 | What is qualia?
00:31:56.840 | What is the hard problem of consciousness?
00:32:00.720 | - Consciousness, I mean, the word is used many ways,
00:32:03.400 | but the kind of consciousness that I'm interested in
00:32:06.280 | is basically subjective experience.
00:32:10.040 | What it feels like from the inside to be a human being
00:32:14.400 | or any other conscious being.
00:32:16.160 | I mean, there's something it's like to be me.
00:32:19.040 | Right now, I have visual images that I'm experiencing.
00:32:23.520 | I'm hearing my voice.
00:32:25.640 | I've got maybe some emotional tone.
00:32:29.160 | I've got a stream of thoughts running through my head.
00:32:31.680 | These are all things that I experience
00:32:33.640 | from the first person point of view.
00:32:36.280 | I've sometimes called this the inner movie in the mind.
00:32:39.160 | It's not a perfect metaphor.
00:32:41.640 | It's not like a movie in every way,
00:32:44.240 | and it's very rich, but yeah,
00:32:46.440 | it's just direct subjective experience,
00:32:49.400 | and I call that consciousness,
00:32:51.400 | or sometimes philosophers use the word qualia,
00:32:54.640 | which you suggested.
00:32:55.520 | People tend to use the word qualia
00:32:57.080 | for things like the qualities of things like colors,
00:33:00.440 | redness, the experience of redness
00:33:02.320 | versus the experience of greenness,
00:33:04.680 | the experience of one taste or one smell versus another,
00:33:08.840 | the experience of the quality of pain,
00:33:10.960 | and yeah, a lot of consciousness
00:33:12.720 | is the experience of those qualities.
00:33:17.040 | - Well, consciousness is bigger,
00:33:18.280 | the entirety of any kind of experience.
00:33:20.600 | - I mean, consciousness of thinking
00:33:22.120 | is not obviously qualia.
00:33:23.920 | It's not like specific qualities
00:33:25.240 | like redness or greenness,
00:33:26.480 | but still I'm thinking about my hometown,
00:33:29.240 | I'm thinking about what I'm gonna do later on.
00:33:31.720 | Maybe there's still something running through my head,
00:33:34.200 | which is subjective experience.
00:33:36.360 | Maybe it goes beyond those qualities or qualia.
00:33:40.000 | Philosophers sometimes use the word phenomenal consciousness
00:33:43.040 | for consciousness in this sense.
00:33:44.760 | I mean, people also talk about access consciousness,
00:33:47.520 | being able to access information in your mind,
00:33:50.320 | reflective consciousness,
00:33:52.120 | being able to think about yourself,
00:33:53.960 | but it looks like the really mysterious one,
00:33:55.960 | the one that really gets people going
00:33:57.280 | is phenomenal consciousness.
00:33:58.920 | The fact that all this,
00:34:00.640 | the fact that there's subjective experience
00:34:02.800 | and all this feels like something at all.
00:34:05.160 | And then the hard problem is how is it that,
00:34:08.920 | why is it that there is phenomenal consciousness at all?
00:34:11.560 | And how is it that physical processes in a brain
00:34:15.640 | could give you subjective experience?
00:34:19.440 | It looks like on the face of it,
00:34:21.720 | you'd have all this big complicated physical system
00:34:23.960 | in a brain running
00:34:25.000 | without a given subjective experience at all.
00:34:28.520 | And yet we do have subjective experience.
00:34:30.840 | So the hard problem is just explain that.
00:34:33.240 | - Explain how that comes about.
00:34:35.960 | We haven't been able to build machines
00:34:37.560 | where a red light goes on that says it's not conscious.
00:34:41.320 | So how do we actually create that?
00:34:45.720 | Or how do humans do it and how do we ourselves do it?
00:34:49.000 | - We do every now and then create machines
00:34:50.920 | that can do this.
00:34:51.760 | We create babies that are conscious.
00:34:55.600 | They've got these brains.
00:34:56.560 | - As best as we can tell.
00:34:57.400 | - That brain does produce consciousness.
00:34:58.480 | But even though we can create it,
00:35:00.720 | we still don't understand why it happens.
00:35:02.920 | Maybe eventually we'll be able to create machines,
00:35:05.480 | which as a matter of fact, AI machines,
00:35:07.880 | which as a matter of fact are conscious.
00:35:10.320 | But that won't necessarily make the hard problem
00:35:13.240 | go away any more than it does with babies.
00:35:15.520 | 'Cause we still wanna know how and why is it
00:35:17.520 | that these processes give you consciousness?
00:35:19.520 | - You just made me realize for a second,
00:35:22.200 | maybe it's a totally dumb realization,
00:35:26.280 | but nevertheless, that it's a useful way
00:35:31.000 | to think about the creation of consciousness
00:35:33.860 | is looking at a baby.
00:35:35.760 | So that there's a certain point
00:35:37.560 | at which that baby is not conscious.
00:35:40.940 | The baby starts from maybe, I don't know,
00:35:47.160 | from a few cells, right?
00:35:49.600 | There's a certain point at which it becomes,
00:35:51.800 | consciousness arrives, it's conscious.
00:35:54.960 | Of course, we can't know exactly that line.
00:35:56.920 | But it's a useful idea that we do create consciousness.
00:36:01.060 | Again, a really dumb thing for me to say,
00:36:04.600 | but not until now did I realize
00:36:07.040 | we do engineer consciousness.
00:36:09.680 | We get to watch the process happen.
00:36:12.280 | We don't know which point it happens or where it is,
00:36:16.240 | but we do see the birth of consciousness.
00:36:19.240 | - Yeah, I mean, there's a question, of course,
00:36:21.120 | is whether babies are conscious when they're born.
00:36:25.040 | And it used to be, it seems,
00:36:26.360 | at least some people thought they weren't,
00:36:28.280 | which is why they didn't give anesthetics
00:36:30.560 | to newborn babies when they circumcised them.
00:36:33.200 | And so now people think, oh, that's incredibly cruel.
00:36:36.680 | Of course, babies feel pain.
00:36:38.800 | And now the dominant view is that the babies can feel pain.
00:36:42.160 | Actually, my partner, Claudia, works on this whole issue
00:36:45.880 | of whether there's consciousness in babies and of what kind.
00:36:49.720 | And she certainly thinks that newborn babies
00:36:52.200 | come into the world with some degree of consciousness.
00:36:55.440 | Of course, then you can just extend the question
00:36:56.840 | backwards to fetuses,
00:36:58.080 | and suddenly you're into politically controversial--
00:37:00.480 | - Exactly. - Territory.
00:37:02.120 | But the question also arises in the animal kingdom.
00:37:06.840 | Where does consciousness start or stop?
00:37:08.640 | Is there a line in the animal kingdom
00:37:11.120 | where the first conscious organisms are?
00:37:15.540 | It's interesting.
00:37:16.380 | Over time, people are becoming more and more liberal
00:37:18.260 | about ascribing consciousness to animals.
00:37:21.100 | People used to think,
00:37:22.380 | maybe only mammals could be conscious.
00:37:24.540 | Now most people seem to think, sure, fish are conscious.
00:37:27.420 | They can feel pain.
00:37:28.740 | And now we're arguing over insects.
00:37:30.980 | You'll find people out there who say plants
00:37:33.460 | have some degree of consciousness.
00:37:35.580 | So who knows where it's gonna end?
00:37:37.820 | The far end of this chain is the view
00:37:39.340 | that every physical system has some degree of consciousness.
00:37:43.340 | Philosophers call that panpsychism.
00:37:45.980 | You know, I take that view.
00:37:48.340 | - I mean, that's a fascinating way to view reality.
00:37:50.940 | So if you could talk about,
00:37:52.860 | if you can linger on panpsychism for a little bit,
00:37:56.540 | what does it mean?
00:37:58.380 | So it's not just plants are conscious.
00:38:00.940 | I mean, it's that consciousness
00:38:02.500 | is a fundamental fabric of reality.
00:38:05.380 | What does that mean to you?
00:38:07.360 | How are we supposed to think about that?
00:38:09.660 | - Well, we're used to the idea
00:38:10.900 | that some things in the world are fundamental, right?
00:38:14.620 | In physics, we take things like space or time,
00:38:17.340 | or space-time, mass, charge,
00:38:20.340 | as fundamental properties of the universe.
00:38:23.100 | You don't reduce them to something simpler.
00:38:25.420 | You take those for granted.
00:38:26.940 | You've got some laws that connect them.
00:38:30.120 | Here is how mass and space and time evolve.
00:38:33.780 | Theories like relativity or quantum mechanics
00:38:36.580 | or some future theory that will unify them both.
00:38:39.940 | But everyone says you've got to take some things
00:38:41.500 | as fundamental.
00:38:42.500 | And if you can't explain one thing
00:38:44.580 | in terms of the previous fundamental things,
00:38:47.100 | you have to expand.
00:38:48.220 | Maybe something like this happened with Maxwell.
00:38:51.660 | He ended up with fundamental principles
00:38:54.180 | of electromagnetism and took charge as fundamental
00:38:57.500 | 'cause it turned out that was the best way to explain it.
00:39:00.100 | So I at least take seriously the possibility
00:39:02.820 | something like that could happen with consciousness.
00:39:06.060 | Take it as a fundamental property
00:39:07.580 | like space, time, and mass.
00:39:10.140 | And instead of trying to explain consciousness
00:39:13.140 | wholly in terms of the evolution of space, time,
00:39:17.020 | and mass, and so on, take it as a primitive
00:39:20.040 | and then connect it to everything else
00:39:23.020 | by some fundamental laws.
00:39:25.260 | 'Cause I mean, there's this basic problem
00:39:27.180 | that the physics we have now looks great
00:39:29.100 | for solving the easy problems of consciousness,
00:39:31.860 | which are all about behavior.
00:39:33.300 | They give us a complicated structure and dynamics.
00:39:37.500 | They tell us how things are gonna behave,
00:39:39.660 | what kind of observable behavior they'll produce,
00:39:43.180 | which is great for the problems of explaining how we walk
00:39:46.380 | and how we talk and so on.
00:39:48.600 | Those are the easy problems of consciousness.
00:39:50.640 | But the hard problem was this problem
00:39:52.580 | about subjective experience just doesn't look
00:39:55.340 | like that kind of problem about structure, dynamics,
00:39:57.600 | how things behave.
00:39:58.820 | So it's hard to see how existing physics
00:40:01.340 | is gonna give you a full explanation of that.
00:40:04.700 | - Certainly trying to get a physics view
00:40:07.260 | of consciousness, yes, there has to be a connecting point
00:40:10.940 | and it could be at the very axiomatic,
00:40:12.600 | at the very beginning level.
00:40:14.140 | But I mean, first of all,
00:40:18.180 | there's a crazy idea that sort of everything
00:40:23.180 | has properties of consciousness.
00:40:25.620 | At that point, the word consciousness is already
00:40:30.620 | beyond the reach of our current understanding,
00:40:32.980 | like far, because it's so far from,
00:40:35.820 | at least for me, maybe you can correct me,
00:40:38.780 | as far from the experience,
00:40:40.980 | experiences that we have, that I have as a human being.
00:40:45.300 | To say that everything is conscious,
00:40:47.500 | that means that basically another way to put that,
00:40:52.500 | if that's true, then we understand almost nothing
00:40:56.820 | about that fundamental aspect of the world.
00:41:00.140 | - How do you feel about saying an ant is conscious?
00:41:02.780 | Do you get the same reaction to that
00:41:04.020 | or is that something you can understand?
00:41:05.780 | - I can understand an ant, I can understand an atom.
00:41:10.220 | - A plant? - A planticle.
00:41:12.500 | So I'm comfortable with living things on Earth
00:41:16.660 | being conscious because there's some kind of agency
00:41:20.820 | where they're similar size to me
00:41:25.220 | and they can be born and they can die.
00:41:30.820 | And that is understandable intuitively.
00:41:34.420 | Of course, you anthropomorphize,
00:41:36.740 | you put yourself in the place of the plant.
00:41:39.020 | But I can understand it.
00:41:43.220 | I mean, I'm not like, I don't believe actually
00:41:47.620 | that plants are conscious or that plants suffer,
00:41:49.620 | but I can understand that kind of belief, that kind of idea.
00:41:52.980 | - How do you feel about robots?
00:41:54.940 | Like the kind of robots we have now?
00:41:56.740 | If I told you like that a Roomba
00:41:58.860 | had some degree of consciousness.
00:42:02.300 | Or some deep neural network.
00:42:06.100 | - I could understand that a Roomba has consciousness.
00:42:08.460 | I just had spent all day at iRobot.
00:42:10.660 | And I mean, I personally love robots
00:42:15.220 | and have a deep connection with robots.
00:42:16.980 | So I also probably anthropomorphize them.
00:42:20.060 | There's something about the physical object.
00:42:23.860 | So there's a difference than a neural network,
00:42:26.820 | a neural network running a software.
00:42:28.980 | To me, the physical object,
00:42:31.060 | something about the human experience
00:42:32.700 | allows me to really see that physical object as an entity.
00:42:36.960 | And if it moves, it moves in a way that it,
00:42:40.940 | there's a, like I didn't program it,
00:42:43.420 | where it feels that it's acting based on its own perception.
00:42:49.420 | And yes, self-awareness and consciousness,
00:42:53.460 | even if it's a Roomba,
00:42:55.460 | then you start to assign it some agency, some consciousness.
00:43:00.680 | So, but to say that panpsychism,
00:43:03.800 | that consciousness is a fundamental property of reality
00:43:06.880 | is a much bigger statement.
00:43:11.360 | That it's like turtles all the way.
00:43:13.680 | It's like every, it doesn't end.
00:43:16.080 | The whole thing is, so like how,
00:43:18.360 | I know it's full of mystery,
00:43:20.120 | but if you can linger on it,
00:43:23.880 | like how would it, how do you think about reality
00:43:27.600 | if consciousness is a fundamental part of its fabric?
00:43:31.880 | - The way you get there is from thinking,
00:43:33.320 | can we explain consciousness
00:43:34.760 | given the existing fundamentals?
00:43:36.560 | And then if you can't, as at least right now it looks like,
00:43:41.160 | then you've got to add something.
00:43:42.360 | It doesn't follow that you have to add consciousness.
00:43:44.960 | Here's another interesting possibility is,
00:43:47.040 | well, we'll add something else.
00:43:48.040 | Let's call it proto-consciousness or X.
00:43:51.680 | And then it turns out space, time, mass, plus X
00:43:56.160 | will somehow collectively give you the possibility
00:43:58.960 | for consciousness.
00:44:00.240 | Why don't we allow that view?
00:44:01.820 | I call that pan-proto-psychism
00:44:04.800 | 'cause maybe there's some other property,
00:44:06.280 | proto-consciousness at the bottom level.
00:44:08.920 | And if you can't imagine
00:44:10.080 | there's actually genuine consciousness at the bottom level,
00:44:12.840 | I think we should be open to the idea
00:44:14.120 | there's this other thing, X.
00:44:16.200 | Maybe we can't imagine that somehow gives you consciousness.
00:44:20.000 | But if we are playing along with the idea
00:44:22.380 | that there really is genuine consciousness
00:44:24.360 | at the bottom level, of course,
00:44:25.400 | this is gonna be way out and speculative,
00:44:28.280 | but at least in, say, if it was classical physics,
00:44:32.040 | then you'd end up saying, well, every little atom,
00:44:35.280 | with a bunch of particles in space-time,
00:44:37.640 | each of these particles has some kind of consciousness
00:44:41.540 | whose structure mirrors maybe their physical properties,
00:44:44.560 | like its mass, its charge, its velocity, and so on.
00:44:49.080 | The structure of its consciousness
00:44:50.320 | would roughly correspond to that.
00:44:52.280 | And the physical interactions between particles,
00:44:55.440 | I mean, there's this old worry about physics,
00:44:58.280 | I mentioned this before in this issue
00:44:59.560 | about the manifest image,
00:45:01.120 | we don't really find out
00:45:02.080 | about the intrinsic nature of things.
00:45:04.560 | Physics tells us about how a particle relates
00:45:07.440 | to other particles and interacts.
00:45:09.320 | It doesn't tell us about what the particle is in itself.
00:45:12.840 | That was Kant's thing in itself.
00:45:14.600 | So here's a view.
00:45:15.720 | The nature in itself of a particle is something mental.
00:45:20.840 | A particle is actually a little conscious subject
00:45:24.520 | with properties of its consciousness
00:45:27.360 | that correspond to its physical properties.
00:45:29.160 | The laws of physics are actually ultimately relating
00:45:32.640 | these properties of conscious subjects.
00:45:34.560 | So in this view, a Newtonian world
00:45:36.640 | would actually be a vast collection
00:45:38.200 | of little conscious subjects at the bottom level,
00:45:41.240 | way, way simpler than we are without free will
00:45:44.960 | or rationality or anything like that.
00:45:47.280 | But that's what the universe would be like.
00:45:48.800 | Now, of course, that's a vastly speculative view.
00:45:51.360 | No particular reason to think it's correct.
00:45:53.600 | Furthermore, non-Newtonian physics,
00:45:56.480 | say quantum mechanical wave function,
00:45:58.960 | suddenly it starts to look different.
00:46:00.080 | It's not a vast collection of conscious subjects.
00:46:02.600 | Maybe there's ultimately one big wave function
00:46:05.360 | for the whole universe.
00:46:06.760 | Corresponding to that might be something more like
00:46:09.160 | a single conscious mind whose structure corresponds
00:46:13.840 | to the structure of the wave function.
00:46:16.280 | People sometimes call this cosmo-psychism.
00:46:19.160 | And now, of course, we're in the realm
00:46:20.880 | of extremely speculative philosophy.
00:46:23.180 | There's no direct evidence for this.
00:46:25.160 | But yeah, but if you want a picture
00:46:27.320 | of what that universe would be like,
00:46:29.280 | think yeah, giant cosmic mind with enough richness
00:46:32.680 | and structure among it to replicate
00:46:34.480 | all the structure of physics.
00:46:36.520 | - I think therefore I am at the level of particles
00:46:39.720 | and with quantum mechanics
00:46:40.960 | at the level of the wave function.
00:46:43.800 | It's kind of an exciting, beautiful possibility,
00:46:48.800 | of course, way out of reach of physics currently.
00:46:51.940 | - It is interesting that some neuroscientists
00:46:55.040 | are beginning to take panpsychism seriously.
00:46:58.680 | You find consciousness even in very simple systems.
00:47:02.880 | So for example, the integrated information theory
00:47:05.560 | of consciousness, a lot of neuroscientists
00:47:07.400 | are taking it seriously.
00:47:08.240 | Actually, I just got this new book by Christof Koch,
00:47:11.000 | it just came in, "The Feeling of Life Itself:
00:47:13.680 | Why Consciousness Is Widespread but Can't Be Computed."
00:47:17.280 | He basically endorses a panpsychist view
00:47:20.520 | where you get consciousness with the degree
00:47:22.840 | of information processing,
00:47:24.520 | or integrated information processing in a system,
00:47:27.520 | and even very, very simple systems,
00:47:29.520 | like a couple of particles, will have some degree of this.
00:47:32.720 | So he ends up with some degree of consciousness
00:47:35.240 | in all matter.
00:47:36.080 | And the claim is that this theory
00:47:38.680 | can actually explain a bunch of stuff
00:47:40.480 | about the connection between the brain and consciousness.
00:47:43.580 | Now that's very controversial.
00:47:45.360 | I think it's very, very early days
00:47:46.920 | in the science of consciousness.
00:47:48.040 | - It's still in there. - But it's interesting
00:47:48.880 | that it's not just philosophy
00:47:50.800 | that might lead you in this direction,
00:47:52.680 | but there are ways of thinking quasi-scientifically
00:47:55.240 | that lead you there too.
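To make the flavor of integrated information theory a little more concrete, here is a minimal Python sketch. It is emphatically not Tononi and Koch's actual phi algorithm; it only illustrates the underlying intuition that a system counts as "integrated" to the extent that its parts carry information about one another, using plain mutual information over a toy two-node system. The function names, the two example distributions, and the numbers are illustrative assumptions, not anything from the conversation.

```python
import numpy as np

# Toy illustration of "integration" as mutual information between two
# binary nodes A and B. This is NOT the official IIT phi measure; it is
# only meant to show the flavor: a coupled system scores higher than a
# statistically independent one.

def entropy(p):
    p = np.asarray(p, dtype=float).flatten()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(joint):
    # joint[a, b] = probability that node A is in state a and node B in state b
    pa = joint.sum(axis=1)   # marginal distribution of A
    pb = joint.sum(axis=0)   # marginal distribution of B
    return entropy(pa) + entropy(pb) - entropy(joint)

coupled = np.array([[0.45, 0.05],      # A and B almost always agree
                    [0.05, 0.45]])
independent = np.outer([0.5, 0.5], [0.5, 0.5])  # A and B carry no information about each other

print("integration-like score, coupled system:    ", round(mutual_information(coupled), 3))      # ~0.53 bits
print("integration-like score, independent system:", round(mutual_information(independent), 3))  # 0.0 bits
```

On this crude proxy the coupled system scores about half a bit of "integration" while the independent system scores zero, which is the qualitative pattern the theory cares about; the real theory replaces mutual information with a far more elaborate measure taken over all partitions of the system.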
00:47:56.440 | - But maybe it's different than panpsychism.
00:48:01.200 | What do you think?
00:48:02.040 | So Alan Watts has this quote
00:48:04.040 | that I'd like to ask you about.
00:48:06.960 | The quote is, "Through our eyes,
00:48:10.320 | "the universe is perceiving itself.
00:48:12.660 | "Through our ears, the universe is listening
00:48:14.520 | "to its harmonies.
00:48:16.000 | "We are the witnesses to which the universe
00:48:17.920 | "becomes conscious of its glory, of its magnificence."
00:48:21.480 | So that's not panpsychism.
00:48:24.800 | Do you think that we are essentially the tools,
00:48:29.800 | the senses the universe created to be conscious of itself?
00:48:35.480 | - It's an interesting idea.
00:48:37.600 | Of course, if you went for the giant cosmic mind view,
00:48:40.560 | then the universe was conscious.
00:48:42.640 | - All along, it didn't need us.
00:48:44.060 | We're just little components of the universal consciousness.
00:48:48.180 | Likewise, if you believe in panpsychism,
00:48:50.820 | then there was some little degree of consciousness
00:48:52.860 | at the bottom level all along,
00:48:54.720 | and we were just a more complex form of consciousness.
00:48:58.300 | So I think maybe the quote you mentioned works better.
00:49:02.060 | If you're not a panpsychist, you're not a cosmopsychist,
00:49:05.140 | you think consciousness just exists
00:49:07.220 | at this intermediate level.
00:49:09.340 | And of course, that's the orthodox view.
00:49:12.320 | - That, you would say, is the common view?
00:49:14.660 | So is your own view of panpsychism a rarer view?
00:49:19.660 | - I think it's generally regarded, certainly,
00:49:22.140 | as a speculative view,
00:49:24.620 | held by a fairly small minority of at least theorists.
00:49:27.920 | Most philosophers and most scientists
00:49:31.220 | who think about consciousness are not panpsychists.
00:49:34.620 | There's been a bit of a movement in that direction
00:49:36.220 | the last 10 years or so.
00:49:37.940 | Seems to be quite popular,
00:49:38.980 | especially among the younger generation,
00:49:41.600 | but it's still very definitely a minority view.
00:49:43.940 | Many people think it's totally bat shit crazy
00:49:47.100 | to use the technical term.
00:49:48.340 | (laughing)
00:49:50.380 | - It's a philosophical term.
00:49:51.220 | - Yeah, so the orthodox view, I think, is still
00:49:52.820 | consciousness is something that humans have
00:49:55.140 | and some good number of non-human animals have,
00:49:59.020 | and maybe AIs might have one day, but it's restricted.
00:50:02.720 | On that view, then, there was no consciousness
00:50:04.420 | at the start of the universe.
00:50:05.860 | There may be none at the end,
00:50:07.220 | but it is this thing which happened at some point
00:50:09.900 | in the history of the universe,
00:50:11.480 | consciousness developed, and yes,
00:50:14.900 | that's a very amazing event on this view
00:50:17.460 | because many people are inclined to think
00:50:19.620 | consciousness is what somehow gives meaning
00:50:22.420 | to our lives.
00:50:23.260 | Without consciousness, there'd be no meaning,
00:50:25.760 | no true value, no good versus bad, and so on.
00:50:29.740 | So with the advent of consciousness,
00:50:32.220 | suddenly the universe went from meaningless
00:50:35.980 | to somehow meaningful.
00:50:38.740 | Why did this happen?
00:50:39.840 | I guess the quote you mentioned was somehow,
00:50:42.220 | this was somehow destined to happen
00:50:44.340 | because the universe needed to have consciousness
00:50:47.340 | within it to have value and have meaning,
00:50:49.260 | and maybe you could combine that with a theistic view
00:50:52.680 | or a teleological view.
00:50:54.660 | The universe was inexorably evolving towards consciousness.
00:50:58.440 | Actually, my colleague here at NYU, Tom Nagel,
00:51:01.420 | wrote a book called "Mind and Cosmos" a few years ago
00:51:04.220 | where he argued for this teleological view
00:51:06.080 | of evolution toward consciousness,
00:51:09.020 | saying this led to problems for Darwinism.
00:51:12.620 | And, you know,
00:51:13.460 | this was very, very controversial.
00:51:15.100 | Most people didn't agree.
00:51:16.640 | I don't myself agree with this teleological view,
00:51:20.080 | but it is at least a beautiful speculative view
00:51:24.060 | of the cosmos.
00:51:26.180 | - What do you think people experience,
00:51:30.660 | what do they seek when they believe in God
00:51:32.920 | from this kind of perspective?
00:51:36.180 | - I'm not an expert on thinking about God and religion.
00:51:41.180 | I'm not myself religious at all.
00:51:43.880 | - When people sort of pray, communicate with God,
00:51:46.740 | in whatever form, I'm not speaking to sort of
00:51:50.700 | the practices and the rituals of religion.
00:51:53.800 | I mean the actual experience of,
00:51:56.220 | that people really have a deep connection with God
00:51:58.900 | in some cases.
00:51:59.840 | What do you think that experience is?
00:52:06.280 | - It's so common, at least throughout
00:52:08.240 | the history of civilization,
00:52:10.400 | that it seems like we seek that.
00:52:15.400 | - At the very least, it's an interesting
00:52:17.960 | conscious experience that people have
00:52:19.600 | when they experience religious awe or prayer and so on.
00:52:24.600 | Neuroscientists have tried to examine
00:52:27.760 | what bits of the brain are active and so on.
00:52:30.740 | But yeah, there's this deeper question
00:52:32.680 | of what are people looking for when they're doing this?
00:52:35.320 | And like I said, I've got no real expertise on this,
00:52:39.040 | but it does seem that one thing people are after
00:52:41.560 | is a sense of meaning and value,
00:52:43.880 | a sense of connection to something greater
00:52:47.360 | than themselves that will give their lives
00:52:49.920 | meaning and value.
00:52:50.840 | And maybe the thought is if there is a God,
00:52:53.200 | and God somehow is a universal consciousness
00:52:56.120 | who has invested this universe with meaning,
00:53:01.120 | and somehow connection to God might give your life meaning.
00:53:05.560 | I can kind of see the attractions of that,
00:53:09.880 | but still makes me wonder why is it exactly
00:53:13.040 | that a universal consciousness, a God,
00:53:15.960 | would be needed to give the world meaning?
00:53:18.520 | If universal consciousness can give the world meaning,
00:53:21.800 | why can't local consciousness give the world meaning too?
00:53:25.320 | So I think my consciousness gives my world meaning.
00:53:28.560 | - What is the origin of meaning for your world?
00:53:31.120 | - Yeah, I experience things as good or bad,
00:53:33.880 | happy, sad, interesting, important.
00:53:37.520 | So my consciousness invests this world with meaning.
00:53:40.600 | Without any consciousness,
00:53:42.200 | maybe it would be a bleak, meaningless universe.
00:53:45.360 | But I don't see why I need someone else's consciousness
00:53:47.720 | or even God's consciousness to give this universe meaning.
00:53:51.520 | Here we are, local creatures
00:53:53.200 | with our own subjective experiences.
00:53:55.180 | I think we can give the universe meaning ourselves.
00:53:58.960 | I mean, maybe to some people that feels inadequate.
00:54:01.680 | Yeah, our own local consciousness is somehow too puny
00:54:04.920 | and insignificant to invest any of this
00:54:07.320 | with cosmic significance,
00:54:09.320 | and maybe God gives you a sense of cosmic significance,
00:54:13.680 | but I'm just speculating here.
00:54:15.720 | - So, you know, it's a really interesting idea
00:54:19.280 | that consciousness is the thing that makes life meaningful.
00:54:24.800 | If you could maybe just briefly explore that for a second.
00:54:29.800 | So I suspect just from listening to you now,
00:54:33.760 | you mean in an almost trivial sense,
00:54:37.360 | just the day-to-day experiences of life have meaning
00:54:42.300 | because you attach identity to them.
00:54:51.760 | I guess I wanna ask something I've always wanted to ask
00:54:56.760 | a legit world-renowned philosopher,
00:55:01.960 | what is the meaning of life?
00:55:03.500 | So I suspect you don't mean consciousness gives
00:55:08.120 | any kind of greater meaning to it all,
00:55:11.340 | and more to day-to-day,
00:55:13.400 | but is there greater meaning to it all?
00:55:16.280 | - I think life has meaning for us because we are conscious.
00:55:20.960 | So without consciousness, no meaning,
00:55:24.160 | consciousness invests our life with meaning.
00:55:27.320 | So consciousness is the source of the meaning of life,
00:55:30.680 | but I wouldn't say consciousness itself
00:55:33.360 | is the meaning of life.
00:55:34.800 | I'd say what's meaningful in life
00:55:37.000 | is basically what we find meaningful,
00:55:40.040 | what we experience as meaningful.
00:55:42.680 | So if you find meaning and fulfillment and value
00:55:46.320 | in say intellectual work like understanding,
00:55:49.120 | then that's a very significant part
00:55:51.720 | of the meaning of life for you.
00:55:53.200 | If you find it in social connections or in raising a family,
00:55:57.400 | then that's the meaning of life for you.
00:55:58.960 | The meaning kind of comes from what you value
00:56:02.080 | as a conscious creature.
00:56:04.040 | So I think on this view, there's no universal solution,
00:56:07.480 | no universal answer to the question,
00:56:10.200 | what is the meaning of life?
00:56:11.480 | The meaning of life is where you find it
00:56:13.520 | as a conscious creature,
00:56:14.600 | but it's consciousness that somehow makes value possible,
00:56:18.040 | experiencing some things as good or as bad
00:56:21.000 | or as meaningful.
00:56:22.840 | Something that comes from within consciousness.
00:56:24.600 | - So you think consciousness is a crucial component,
00:56:28.760 | ingredient of assigning value to things?
00:56:33.560 | - I mean, it's kind of a fairly strong intuition
00:56:36.080 | that without consciousness,
00:56:37.520 | there wouldn't really be any value.
00:56:39.960 | If we just had a universe of purely unconscious creatures,
00:56:44.640 | would anything be better or worse than anything else?
00:56:47.700 | - Certainly when it comes to ethical dilemmas,
00:56:50.360 | you know about the old trolley problem.
00:56:53.200 | Do you kill one person or do you switch to the other track
00:56:58.120 | to kill five?
00:56:59.600 | Well, I've got a variant on this,
00:57:01.680 | the zombie trolley problem,
00:57:03.440 | where there's one conscious being on one track
00:57:06.720 | and five humanoid zombies, let's make them robots,
00:57:10.640 | who are not conscious on the other track.
00:57:15.520 | Do you, given that choice,
00:57:16.640 | do you kill the one conscious being
00:57:17.880 | or the five unconscious robots?
00:57:21.000 | Most people have a fairly clear intuition here.
00:57:22.800 | - Yeah.
00:57:23.640 | - Kill the unconscious beings,
00:57:25.520 | 'cause they basically, they don't have a meaningful life.
00:57:28.680 | They're not really persons, conscious beings at all.
00:57:32.720 | - Of course, we don't have good intuition
00:57:36.640 | about something like an unconscious being.
00:57:42.040 | So in philosophical terms, what you referred to as a zombie.
00:57:46.720 | It's a useful thought experiment,
00:57:50.360 | a construction in philosophical terms,
00:57:52.460 | but we don't yet have them.
00:57:54.900 | So that's kind of what we may be able to create with robots.
00:58:00.240 | And I don't necessarily know what that even means.
00:58:05.240 | - Yeah, it's merely hypothetical for now.
00:58:07.840 | They're just a thought experiment.
00:58:09.640 | They may never be possible.
00:58:11.060 | I mean, the extreme case of a zombie
00:58:13.480 | is a being which is physically, functionally,
00:58:16.400 | behaviorally identical to me, but not conscious.
00:58:19.520 | That's a mere hypothetical; I don't think that could ever
00:58:21.720 | be built in this universe.
00:58:23.540 | The question is just,
00:58:24.840 | does that hypothetically make sense?
00:58:27.000 | That's kind of a useful contrast class
00:58:29.360 | to raise questions like, why aren't we zombies?
00:58:31.800 | How does it come about that we're conscious?
00:58:33.840 | And we're not like that.
00:58:34.960 | But there are less extreme versions of this,
00:58:36.740 | like robots, which are maybe not physically identical to us,
00:58:41.540 | maybe not even functionally identical to us.
00:58:43.340 | Maybe they've got a different architecture,
00:58:45.340 | but they can do a lot of sophisticated things,
00:58:47.700 | maybe carry on a conversation, but they're not conscious.
00:58:51.140 | And that's not so far out.
00:58:52.140 | We've got simple computer systems,
00:58:54.900 | at least tending in that direction now.
00:58:57.420 | And presumably this is going to get more
00:59:00.240 | and more sophisticated over years to come,
00:59:02.860 | where it may be,
00:59:05.300 | at least, quite straightforward to conceive
00:59:07.220 | of some pretty sophisticated robot systems
00:59:11.120 | that can use language and be fairly high functioning
00:59:14.760 | without consciousness at all.
00:59:16.380 | Then I stipulate that.
00:59:17.780 | I mean, of course, there's this tricky question
00:59:21.580 | of how you would know whether they're conscious.
00:59:23.620 | But let's say we've somehow solved that,
00:59:24.960 | and we know that these high functioning robots
00:59:27.080 | aren't conscious.
00:59:27.920 | Then the question is, do they have moral status?
00:59:30.200 | Does it matter how we treat them?
00:59:33.520 | - What does moral status mean?
00:59:35.440 | - Does basically that question, can they suffer?
00:59:38.480 | Does it matter how we treat them?
00:59:41.040 | For example, if I mistreat this glass,
00:59:45.140 | this cup by shattering it, then that's bad.
00:59:49.760 | Why is it bad?
00:59:50.600 | It's going to make a mess.
00:59:51.420 | It's going to be annoying for me and my partner.
00:59:53.620 | But it's not bad for the cup.
00:59:55.920 | No one would say the cup itself has moral status.
00:59:59.560 | Hey, you hurt the cup.
01:00:02.880 | And that's doing it a moral harm.
01:00:06.500 | Likewise, plants.
01:00:08.840 | Well, again, if they're not conscious,
01:00:09.880 | most people think by uprooting a plant,
01:00:12.000 | you're not harming it.
01:00:13.540 | But if a being is conscious, on the other hand,
01:00:16.200 | then you are harming it.
01:00:17.240 | So Siri, or I dare not say the name of Alexa.
01:00:22.240 | Anyway, so we don't think we're morally harming Alexa
01:00:28.640 | by turning her off or disconnecting her or even destroying her,
01:00:32.940 | whether it's the system
01:00:34.060 | or the underlying software system,
01:00:36.140 | because we don't really think she's conscious.
01:00:39.060 | On the other hand, you move to like the disembodied being
01:00:42.380 | in the movie, Her, Samantha.
01:00:45.500 | I guess she was kind of presented as conscious.
01:00:47.460 | And then if you destroyed her,
01:00:49.760 | you'd certainly be committing a serious harm.
01:00:51.740 | So I think our strong sense is if a being is conscious
01:00:55.200 | and can undergo subjective experiences,
01:00:57.420 | then it matters morally how we treat them.
01:01:00.380 | So if a robot is conscious, it matters.
01:01:03.020 | But if a robot is not conscious,
01:01:05.380 | then they're basically just meat or a machine
01:01:07.140 | and it doesn't matter.
01:01:10.340 | So I think at least maybe how we think about this stuff
01:01:12.980 | is fundamentally wrong,
01:01:13.940 | but I think a lot of people
01:01:15.500 | who think about this stuff seriously,
01:01:17.200 | including people who think about, say,
01:01:18.460 | the moral treatment of animals and so on,
01:01:20.780 | come to the view that consciousness
01:01:23.340 | is ultimately kind of the line between systems
01:01:25.780 | that we have to take into account
01:01:29.340 | in thinking morally about how we act
01:01:32.260 | and systems for which we don't.
01:01:34.420 | - And I think I've seen you write and
01:01:36.820 | talk about the demonstration of consciousness
01:01:40.780 | from a system like that, from a system like Alexa
01:01:43.820 | or a conversational agent,
01:01:48.120 | that what you would be looking for
01:01:51.140 | is kind of at the very basic level
01:01:54.620 | for the system to have an awareness
01:01:58.180 | that I'm just a program,
01:02:00.500 | and yet why do I experience this?
01:02:03.900 | Or not to have that experience,
01:02:06.200 | but to communicate that to you.
01:02:08.020 | So that's what us humans would sound like
01:02:10.720 | if you all of a sudden woke up one day,
01:02:13.020 | like Kafka, right, in the body of a bug or something.
01:02:15.660 | But in a computer, you all of a sudden realize
01:02:18.340 | you don't have a body,
01:02:19.740 | and yet you would, feeling what you're feeling,
01:02:22.540 | you would probably say those kinds of things.
01:02:25.980 | So do you think a system essentially becomes conscious
01:02:29.540 | by convincing us that it's conscious
01:02:33.100 | through the words that I just mentioned?
01:02:36.240 | So by being confused about
01:02:40.100 | why it is having these experiences?
01:02:45.100 | So basically--
01:02:46.100 | - I don't think this is what makes you conscious,
01:02:48.120 | but I do think being puzzled about consciousness
01:02:50.280 | is a very good sign that a system is conscious.
01:02:53.300 | So if I encountered a robot
01:02:55.660 | that actually seemed to be genuinely puzzled
01:02:58.700 | by its own mental states,
01:03:01.340 | and saying, yeah, I have all these weird experiences,
01:03:04.020 | and I don't see how to explain them,
01:03:06.340 | I know I'm just a set of silicon circuits,
01:03:08.780 | but I don't see how that would give you my consciousness,
01:03:11.660 | I would at least take that as some evidence
01:03:13.900 | that there's some consciousness going on there.
01:03:16.780 | I don't think a system needs to be puzzled
01:03:19.500 | about consciousness to be conscious.
01:03:21.820 | Many people aren't puzzled by their consciousness.
01:03:24.020 | Animals don't seem to be puzzled at all.
01:03:26.340 | I still think they're conscious.
01:03:28.060 | So I don't think that's a requirement on consciousness.
01:03:30.700 | But I do think if we're looking for signs for consciousness,
01:03:34.780 | say in AI systems,
01:03:37.020 | one of the things that will help convince me
01:03:39.140 | that an AI system is conscious
01:03:41.300 | is if it shows signs of introspectively recognizing
01:03:46.300 | something like consciousness
01:03:48.300 | and finding this philosophically puzzling
01:03:51.340 | in the way that we do.
01:03:54.220 | - That's such an interesting thought, though,
01:03:55.940 | because a lot of people sort of would,
01:03:57.940 | at a shallow level, criticize the Turing test,
01:04:01.140 | or language tests. It's essentially what I heard
01:04:05.060 | Dan Dennett criticize in this kind of way,
01:04:09.820 | which is that it really puts a lot of emphasis on lying.
01:04:13.300 | - Yeah.
01:04:14.140 | And being able to imitate human beings,
01:04:18.020 | yeah, there's this cartoon of the AI system
01:04:21.780 | studying for the Turing test.
01:04:23.220 | It's got to read this book called "Talk Like a Human."
01:04:26.660 | It's like, man, why do I have to waste my time
01:04:28.340 | learning how to imitate humans?
01:04:30.460 | Maybe the AI system is gonna be way beyond
01:04:32.300 | the hard problem of consciousness.
01:04:33.780 | And it's gonna be just like,
01:04:34.740 | why do I need to waste my time pretending
01:04:36.380 | that I recognize the hard problem of consciousness
01:04:38.940 | in order for people to recognize me as conscious?
01:04:42.140 | - Yeah, it just feels like, I guess the question is,
01:04:47.020 | do you think
01:04:47.020 | we can never really create a test for consciousness
01:04:49.460 | because it feels like we're very human-centric.
01:04:53.940 | And so the only way we would be convinced
01:04:57.620 | that something is conscious is basically
01:05:00.900 | the thing demonstrates the illusion of consciousness.
01:05:05.540 | We can never really know whether it's conscious or not.
01:05:10.340 | And in fact, that almost feels like it doesn't matter then.
01:05:14.780 | Or does it still matter to you
01:05:16.700 | that something is conscious or it demonstrates consciousness?
01:05:20.700 | You still see that fundamental distinction.
01:05:22.780 | - I think to a lot of people,
01:05:24.820 | whether a system is conscious or not
01:05:27.340 | matters hugely for many things,
01:05:28.860 | like how we treat it, can it suffer, and so on.
01:05:33.020 | But still that leaves open the question,
01:05:35.020 | how can we ever know?
01:05:36.740 | And it's true that it's awfully hard to see
01:05:38.700 | how we can know for sure whether a system is conscious.
01:05:42.340 | I suspect that sociologically,
01:05:44.860 | the thing that's going to convince us
01:05:46.300 | that a system is conscious is in part,
01:05:50.100 | things like social interaction, conversation, and so on,
01:05:53.860 | where they seem to be conscious,
01:05:56.020 | they talk about their conscious states,
01:05:57.700 | or just talk about being happy or sad,
01:06:00.020 | or finding things meaningful, or being in pain.
01:06:02.820 | That will tend to convince us.
01:06:06.620 | If a system genuinely seems to be conscious
01:06:08.340 | and we don't treat it as such,
01:06:09.980 | eventually it's going to seem like
01:06:11.220 | a strange form of racism or speciesism
01:06:13.740 | somehow not to acknowledge them.
01:06:16.340 | - I truly believe that, by the way.
01:06:17.740 | I believe that there is going to be
01:06:21.260 | something akin to the civil rights movement,
01:06:23.260 | but for robots.
01:06:24.780 | I think the moment you have a Roomba say,
01:06:29.900 | "Please don't kick me, that hurts," just say it.
01:06:32.900 | I think that will fundamentally
01:06:35.980 | change the fabric of our society.
01:06:40.300 | - I think you're probably right,
01:06:41.140 | although it's going to be very tricky,
01:06:42.220 | because just say we've got the technology
01:06:44.940 | where these conscious beings can just be created
01:06:47.260 | and multiplied by the thousands,
01:06:50.220 | by flicking a switch.
01:06:51.860 | The legal status is going to be different,
01:06:55.940 | but ultimately their moral status ought to be the same,
01:06:58.140 | and yeah, the civil rights issue is going to be a huge mess.
01:07:03.140 | - So if one day somebody clones you,
01:07:06.700 | another very real possibility,
01:07:09.740 | but in fact, I find the conversation
01:07:13.420 | between two copies of David Chalmers quite interesting.
01:07:18.420 | - Scary thought.
01:07:22.260 | Who is this idiot?
01:07:25.180 | He's not making any sense.
01:07:26.580 | - So what, do you think he would be conscious?
01:07:30.940 | - I do think he would be conscious.
01:07:34.540 | I do think in some sense, I'm not sure it would be me,
01:07:37.060 | there would be two different beings at this point.
01:07:40.020 | I think they'd both be conscious
01:07:41.300 | and they both have many of the same mental properties.
01:07:45.860 | I think they both, in a way, have the same moral status.
01:07:49.460 | It'd be wrong to hurt either of them
01:07:51.660 | or to kill them and so on.
01:07:54.620 | Still, there's some sense in which probably
01:07:56.020 | their legal status would have to be different.
01:07:58.540 | If I'm the original and that one's just a clone,
01:08:01.660 | then creating a clone of me,
01:08:03.340 | presumably the clone doesn't, for example,
01:08:05.060 | automatically own the stuff that I own.
01:08:08.660 | Or I've got a certain connection
01:08:13.660 | to the people I interact with, my family,
01:08:17.860 | my partner and so on; I'm gonna somehow be connected to them
01:08:21.140 | in a way in which the clone isn't.
01:08:23.620 | - Because you came slightly first?
01:08:26.420 | - Yeah.
01:08:27.260 | - Because a clone would argue that they have
01:08:30.100 | really as much of a connection.
01:08:33.700 | They have all the memories of that connection.
01:08:35.620 | Then in a way, you might say it's kind of unfair
01:08:37.940 | to discriminate against them.
01:08:38.980 | But say you've got an apartment
01:08:40.100 | that only one person can live in
01:08:41.500 | or a partner who only one person can be with.
01:08:44.020 | - But why should it be you?
01:08:45.820 | - I think, it's an interesting philosophical question,
01:08:49.100 | but you might say, because I actually have this history,
01:08:51.980 | if I am the same person as the one that came before
01:08:56.900 | and the clone is not, then I have this history
01:08:59.860 | that the clone doesn't.
01:09:01.020 | Of course, there's also the question,
01:09:03.860 | isn't the clone the same person too?
01:09:05.820 | This is the question about personal identity.
01:09:07.500 | If I continue and I create a clone over there,
01:09:10.660 | I wanna say this one is me and this one is someone else.
01:09:14.100 | But you could take the view that a clone is equally me.
01:09:17.940 | Of course, in a movie like "Star Trek,"
01:09:20.060 | where they have a teletransporter,
01:09:21.300 | which basically creates clones all the time.
01:09:23.420 | They treat the clones as if they're the original person.
01:09:25.900 | Of course, they destroy the original body in "Star Trek."
01:09:28.980 | So there's only one left around
01:09:30.940 | and only very occasionally do things go wrong
01:09:32.660 | and you get two copies of "Captain Kirk."
01:09:34.660 | But somehow our legal system, at the very least,
01:09:37.740 | is gonna have to sort out some of these issues
01:09:40.580 | and maybe that's what's moral
01:09:42.180 | and what's legally acceptable are gonna come apart.
01:09:45.860 | - What question would you ask a clone of yourself?
01:09:50.660 | Is there something useful you can find out from him
01:09:56.100 | about the fundamentals of consciousness even?
01:10:00.580 | - I mean, kind of in principle,
01:10:03.820 | I know that if it's a perfect clone,
01:10:06.700 | it's gonna behave just like me.
01:10:09.060 | So I'm not sure I'm gonna be able to,
01:10:11.340 | I can discover whether it's a perfect clone
01:10:13.140 | by seeing whether it answers like me.
01:10:15.220 | But otherwise, I know what I'm gonna find
01:10:17.500 | is a being which is just like me,
01:10:19.420 | except that it's just undergone this great shock
01:10:21.940 | of discovering that it's a clone.
01:10:24.460 | So just say you woke me up tomorrow and said,
01:10:26.380 | "Hey, Dave, sorry to tell you this,
01:10:29.020 | "but you're actually the clone."
01:10:31.860 | And you provided me really convincing evidence,
01:10:34.260 | showed me the film of my being cloned
01:10:36.940 | and then ending up here and waking up.
01:10:41.340 | So you proved to me I'm a clone.
01:10:42.420 | Well, yeah, okay, I would find that shocking
01:10:44.540 | and who knows how I would react to this.
01:10:46.460 | So maybe by talking to the clone,
01:10:48.660 | I'd find something about my own psychology
01:10:50.860 | that I can't find out so easily,
01:10:52.580 | like how I'd react upon discovering that I'm a clone.
01:10:55.420 | I could certainly ask the clone if it's conscious
01:10:57.860 | and what its consciousness is like and so on.
01:10:59.860 | But I guess I kind of know if it's a perfect clone,
01:11:02.700 | it's gonna behave roughly like me.
01:11:04.520 | Of course, at the beginning, there'll be a question
01:11:06.940 | about whether a perfect clone is possible.
01:11:08.900 | So I may wanna ask it lots of questions
01:11:11.140 | to see if its consciousness
01:11:12.420 | and the way it talks about its consciousness
01:11:14.620 | and the way it reacts to things in general is like me.
01:11:17.560 | And that will occupy us for a long time.
01:11:21.380 | - For a while.
01:11:22.380 | Some basic unit testing on the early models.
01:11:25.860 | So if it's a perfect clone,
01:11:28.540 | you say that it's gonna behave exactly like you.
01:11:30.780 | So that takes us to free will.
01:11:32.560 | Is there a free will?
01:11:37.420 | Are we able to make decisions that are not predetermined
01:11:41.440 | from the initial conditions of the universe?
01:11:44.140 | - Philosophers do this annoying thing of saying
01:11:47.060 | it depends what you mean.
01:11:48.740 | So in this case, yeah, it really depends
01:11:51.700 | on what you mean by free will.
01:11:54.500 | If you mean something which was not determined in advance,
01:11:58.700 | could never have been determined,
01:12:00.580 | then I don't know we have free will.
01:12:02.260 | I mean, there's quantum mechanics
01:12:03.620 | and who's to say if that opens up some room,
01:12:06.140 | but I'm not sure we have free will in that sense.
01:12:09.540 | But I'm also not sure that's the kind of free will
01:12:12.280 | that really matters.
01:12:13.380 | What matters to us is being able to do what we want
01:12:17.180 | and to create our own futures.
01:12:19.800 | We've got this distinction between having our lives
01:12:21.500 | be under our control and under someone else's control.
01:12:26.500 | We've got the sense of actions that we are responsible for
01:12:29.420 | versus ones that we're not.
01:12:31.160 | I think you can make those distinctions
01:12:33.780 | even in a deterministic universe.
01:12:36.420 | And this is what people call
01:12:37.300 | the compatibilist view of free will,
01:12:38.900 | where it's compatible with determinism.
01:12:41.260 | So I think for many purposes,
01:12:42.880 | the kind of free will that matters
01:12:45.540 | is something we can have in a deterministic universe.
01:12:48.100 | And I can't see any reason in principle
01:12:50.460 | why an AI system couldn't have free will of that kind.
01:12:54.460 | If you mean super duper free will,
01:12:55.860 | the ability to violate the laws of physics
01:12:57.700 | and doing things that in principle could not be predicted,
01:13:01.740 | I don't know, maybe no one has that kind of free will.
01:13:04.680 | - What's the connection between the reality of free will
01:13:09.680 | and the experience of it,
01:13:11.380 | the subjective experience in your view?
01:13:15.240 | So how does consciousness connect to the experience
01:13:19.620 | of, to the reality and the experience of free will?
01:13:22.260 | - It's certainly true that when we make decisions
01:13:24.780 | and when we choose and so on,
01:13:26.180 | we feel like we have an open future.
01:13:28.020 | - Yes.
01:13:28.860 | - Feel like I could do this,
01:13:29.740 | I could go into philosophy or I could go into math.
01:13:34.180 | I could go to a movie tonight,
01:13:36.060 | I could go to a restaurant.
01:13:38.060 | So we experience these things as if the future is open.
01:13:42.580 | And maybe we experience ourselves
01:13:44.500 | as exerting a kind of effect on the future
01:13:50.020 | somehow picking out one path
01:13:51.660 | from many paths that were previously open.
01:13:54.140 | And you might think that actually
01:13:56.060 | if we're in a deterministic universe,
01:13:58.040 | there's a sense in which objectively
01:13:59.860 | those paths weren't really open all along,
01:14:03.660 | but subjectively they were open.
01:14:05.740 | And I think that's what really matters
01:14:07.260 | in making decisions: the experience of making a decision
01:14:10.260 | is choosing a path for ourselves.
01:14:14.300 | I mean, in general, our introspective models of the mind,
01:14:18.100 | I think are generally very distorted representations
01:14:20.620 | of the mind.
01:14:21.640 | So it may well be that our experience of ourself
01:14:24.220 | in making a decision,
01:14:25.340 | our experience of what's going on
01:14:27.620 | doesn't terribly well mirror what's going on.
01:14:31.020 | I mean, maybe there are antecedents in the brain
01:14:33.180 | way before anything came into consciousness and so on.
01:14:38.180 | Those aren't represented in our introspective model.
01:14:41.740 | So in general, our experience of perception,
01:14:46.980 | it's like I experience a perceptual image
01:14:49.740 | of the external world.
01:14:50.580 | It's not a terribly good model of what's actually going on
01:14:53.380 | in my visual cortex and so on,
01:14:55.660 | which has all these layers and so on.
01:14:57.060 | It's just one little snapshot of one bit of that.
01:14:59.820 | So in general, introspective models are very over simplified
01:15:04.820 | and it wouldn't be surprising
01:15:07.220 | if that was true of free will as well.
01:15:09.140 | This also incidentally can be applied
01:15:10.980 | to consciousness itself.
01:15:12.620 | There is this very interesting view
01:15:13.940 | that consciousness itself is an introspective illusion.
01:15:17.540 | In fact, we're not conscious,
01:15:19.460 | but the brain just has these introspective models of itself
01:15:24.300 | that oversimplify everything and represent itself
01:15:27.180 | as having these special properties of consciousness.
01:15:31.060 | It's a really simple way to kind of keep track of itself
01:15:33.860 | and so on.
01:15:34.700 | And then on the illusionist view,
01:15:36.940 | yeah, that's just an illusion.
01:15:39.900 | I find this view, I find it implausible.
01:15:42.220 | I do find it very attractive in some ways
01:15:44.820 | 'cause it's easy to tell some story
01:15:46.640 | about how the brain would create introspective models
01:15:50.100 | of its own consciousness, of its own free will
01:15:53.140 | as a way of simplifying itself.
01:15:55.460 | I mean, it's a similar way
01:15:56.380 | when we perceive the external world,
01:15:58.500 | we perceive it as having these colors
01:16:00.020 | that maybe it doesn't really have,
01:16:02.700 | but of course that's a really useful way of keeping track.
01:16:06.420 | - Did you say that you find it not very plausible?
01:16:08.980 | 'Cause I find it both plausible and attractive
01:16:13.500 | in some sense because it,
01:16:14.900 | I mean, that kind of view is one
01:16:19.660 | that has the minimum amount of mystery around it.
01:16:24.040 | You can kind of understand that kind of view.
01:16:32.020 | With everything else, we don't understand
01:16:32.020 | so much of this picture.
01:16:33.980 | - No, it is very attractive.
01:16:35.460 | I recently wrote an article about this kind of issue
01:16:38.340 | called "The Meta-Problem of Consciousness."
01:16:41.340 | The hard problem is how does the brain
01:16:43.220 | give you consciousness?
01:16:44.220 | The meta-problem is why are we puzzled
01:16:46.780 | by the hard problem of consciousness?
01:16:49.620 | 'Cause, you know, our being puzzled by it,
01:16:51.020 | that's ultimately a bit of behavior.
01:16:53.060 | We might be able to explain that bit of behavior
01:16:54.940 | as one of the easy problems of consciousness.
01:16:57.620 | So maybe there'll be some computational model
01:17:00.580 | that explains why we're puzzled by consciousness.
01:17:03.500 | The meta-problem is to come up with that model,
01:17:05.860 | and I've been thinking about that a lot lately.
01:17:07.900 | There are some interesting stories you can tell
01:17:09.580 | about why the right kind of computational system
01:17:13.620 | might develop these introspective models of itself
01:17:17.660 | that attribute to itself these special properties.
01:17:20.700 | So that meta-problem is a research program for everyone.
01:17:25.300 | And then if you've got attraction to sort of simple views,
01:17:29.220 | desert landscapes, and so on,
01:17:31.340 | then you can go all the way
01:17:32.260 | with what people call illusionism and say,
01:17:34.540 | in fact, consciousness itself is not real.
01:17:37.780 | What is real is just these introspective models
01:17:42.420 | we have that tell us that we're conscious.
01:17:45.060 | So the view is very simple, very attractive, very powerful.
01:17:49.620 | The trouble is, of course, it has to say
01:17:51.260 | that deep down, consciousness is not real.
01:17:55.180 | We're not actually experiencing right now,
01:17:58.020 | and it looks like it's just contradicting
01:18:00.020 | a fundamental datum of our existence.
01:18:02.380 | And this is why most people find this view crazy,
01:18:06.100 | just as they find panpsychism crazy in one way.
01:18:08.820 | People find illusionism crazy in another way.
01:18:13.260 | But I mean, so yes, it has to deny
01:18:18.020 | this fundamental datum of our existence.
01:18:20.660 | Now, that makes the view sort of frankly unbelievable
01:18:24.700 | for most people.
01:18:25.540 | On the other hand, the view developed right
01:18:28.220 | might be able to explain why we find it unbelievable,
01:18:31.300 | 'cause these models are so deeply hardwired into our head.
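As a purely illustrative toy, not a model from Chalmers' meta-problem paper or from the illusionist literature, here is one way to make the "introspective model" idea concrete in code: a system whose self-model only sees a coarse summary of its own states, never the mechanism that produced them, so its self-reports come out sounding puzzled about why its states are the way they are. The class name, the sensor values, and the wording of the report are all hypothetical choices for the sketch.

```python
# A deliberately simple toy, not any published model: an agent whose
# introspective model exposes only a summary label, never the mechanism
# behind it, so its self-report claims its state is unexplainable.

class ToyIntrospectiveAgent:
    def __init__(self, sensor_values):
        # Detailed internal mechanism, hidden from introspection.
        self._sensor_values = sensor_values            # raw sensor numbers (hypothetical)

    def _perceive(self):
        # The mechanism reduces the raw values to a coarse label.
        return "reddish" if self._sensor_values[0] > 0.5 else "not reddish"

    def introspect(self):
        # The self-model exposes only the label, not the values or the rule.
        return {"experience": self._perceive(), "mechanism_accessible": False}

    def report(self):
        state = self.introspect()
        return (f"I'm having a '{state['experience']}' experience, "
                "and nothing I can introspect explains why it is like this.")

print(ToyIntrospectiveAgent([0.91, 0.13, 0.08]).report())
```

The only point of the toy is that limited introspective access can, by itself, generate reports of the form "I can't see how my mechanism could explain this experience," which is one of the behaviors a solution to the meta-problem would need to account for.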
01:18:34.300 | - And they're all integrated.
01:18:35.340 | So you can't escape that, the illusion.
01:18:38.500 | And as a crazy possibility, is it possible
01:18:41.940 | that the entirety of the universe, our planet,
01:18:44.780 | all the people in New York, all the organisms on our planet,
01:18:48.580 | including me here today, are not real in that sense?
01:18:54.460 | They're all part of an illusion
01:18:56.260 | inside of Dave Chalmers' head.
01:18:59.820 | - I think all this could be a simulation.
01:19:02.340 | - No, but not just a simulation.
01:19:04.940 | 'Cause a simulation kind of is outside of you.
01:19:09.220 | - A dream?
01:19:10.140 | - What if it's all an illusion, yes,
01:19:12.420 | a dream that you're experiencing?
01:19:14.580 | That it's all in your mind, right?
01:19:18.860 | Is that, can you take illusionism that far?
01:19:23.040 | - Well, there's illusionism about the external world
01:19:26.820 | and illusionism about consciousness,
01:19:28.420 | and these might go in different directions.
01:19:30.180 | Illusionism about the external world
01:19:31.780 | kind of takes you back to Descartes,
01:19:34.100 | and yeah, could all this be produced by an evil demon?
01:19:37.380 | Descartes himself also had the dream argument.
01:19:39.540 | He said, "How do you know you're not dreaming right now?
01:19:41.980 | "How do you know this is not an amazing dream?"
01:19:43.700 | And I think it's at least a possibility
01:19:46.060 | that yeah, this could be some super duper complex dream
01:19:49.840 | in the next universe up.
01:19:51.620 | I guess though, my attitude is that just as,
01:19:56.620 | I mean, Descartes thought that
01:19:59.220 | if the evil demon was doing it, it's not real.
01:20:01.460 | A lot of people these days say
01:20:02.540 | if a simulation is doing it, it's not real.
01:20:05.580 | As I was saying before, I think even if it's a simulation,
01:20:08.060 | that doesn't stop this from being real.
01:20:09.380 | It just tells us what the world is made of.
01:20:11.420 | Likewise, if it's a dream,
01:20:12.980 | it could turn out that all this is like my dream
01:20:15.820 | created by my brain in the next universe up.
01:20:19.100 | My own view is that wouldn't stop this physical world
01:20:21.940 | from being real.
01:20:22.780 | It would turn out this cup at the most fundamental level
01:20:26.040 | was made of a bit of say my consciousness
01:20:28.900 | in the dreaming mind at the next level up.
01:20:31.940 | Maybe that would give you a kind of weird kind
01:20:33.980 | of panpsychism about reality,
01:20:36.460 | but it wouldn't show that the cup isn't real.
01:20:39.380 | It would just tell us it's ultimately made of processes
01:20:42.120 | in my dreaming mind.
01:20:43.180 | So I'd resist the idea that if the physical world is a dream
01:20:48.180 | then it's an illusion.
01:20:50.460 | - By the way, perhaps you have
01:20:54.140 | an interesting thought about it.
01:20:55.500 | Why is Descartes demon or genius considered evil?
01:21:01.500 | Why couldn't have been a benevolent one
01:21:04.660 | that had the same powers?
01:21:05.940 | - Yeah, I mean, Descartes called it the malin génie,
01:21:08.900 | the evil genie or evil genius.
01:21:12.380 | Malign, I guess, was the word.
01:21:14.400 | But yeah, it's an interesting question.
01:21:15.980 | I mean, a later philosopher, Berkeley,
01:21:18.980 | said no, in fact, all this is done by God.
01:21:25.380 | God actually supplies you all of these perceptions
01:21:30.500 | and ideas and that's how physical reality is sustained.
01:21:33.980 | And interestingly, Berkeley's God is doing something
01:21:36.940 | that doesn't look so different
01:21:38.260 | from what Descartes evil demon was doing.
01:21:41.300 | It's just that Descartes thought it was deception
01:21:43.660 | and Berkeley thought it was not.
01:21:46.300 | And I'm actually more sympathetic to Berkeley here.
01:21:49.420 | Yeah, this evil demon may be trying to deceive you,
01:21:54.900 | but I think, okay, well, the evil demon may just be
01:21:57.700 | working under a false philosophical theory.
01:22:01.300 | It thinks it's deceiving you, it's wrong.
01:22:02.900 | It's like those machines in The Matrix.
01:22:04.300 | They thought they were deceiving you
01:22:06.180 | that all this stuff is real.
01:22:07.140 | I think, no, if we're in a matrix, it's all still real.
01:22:11.660 | Yeah, the philosopher O.K. Bouwsma had a nice story
01:22:15.140 | about this about 50 years ago about Descartes evil demon,
01:22:19.080 | where he said this demon spends all its time
01:22:21.660 | trying to fool people, but fails,
01:22:24.660 | because somehow all the demon ends up doing
01:22:26.660 | is constructing realities for people.
01:22:30.220 | So yeah, I think that maybe it's very natural
01:22:33.100 | to take this view that if we're in a simulation
01:22:35.260 | or evil demon scenario or something,
01:22:38.640 | then none of this is real, but I think it may be
01:22:41.860 | ultimately a philosophical mistake,
01:22:43.860 | especially if you take on board sort of the view of reality
01:22:46.740 | or what matters to reality is really its structure,
01:22:50.100 | something like its mathematical structure and so on,
01:22:52.860 | which seems to be the view that a lot of people take
01:22:54.680 | from contemporary physics, and it looks like you can find
01:22:58.060 | all that mathematical structure in a simulation,
01:23:01.400 | maybe even in a dream and so on.
01:23:03.580 | So as long as that structure is real,
01:23:05.500 | I would say that's enough for the physical world to be real.
01:23:08.720 | Yeah, the physical world may turn out
01:23:10.100 | to be somewhat more intangible than we had thought
01:23:13.140 | and have a surprising nature,
01:23:14.520 | but we've already gotten very used to that
01:23:16.660 | from modern science.
01:23:18.160 | - See, you've kind of alluded to the idea that you don't have
01:23:21.840 | to have consciousness for high levels of intelligence,
01:23:25.500 | but to create truly general intelligence systems,
01:23:29.860 | AGI systems, human level intelligence,
01:23:32.380 | and perhaps super human level intelligence,
01:23:35.020 | you've talked about that you feel like that kind of thing
01:23:37.760 | might be very far away, but nevertheless,
01:23:41.700 | when we reach that point, do you think consciousness
01:23:46.080 | from an engineering perspective is needed
01:23:49.460 | or at least highly beneficial for creating an AGI system?
01:23:53.440 | - Yeah, no one knows what consciousness is for,
01:23:57.120 | functionally, so right now, there's no specific thing
01:24:00.220 | we can point to and say, you need consciousness for that.
01:24:05.220 | Still, my inclination is to believe that in principle,
01:24:07.700 | AGI is possible.
01:24:09.340 | At the very least, I don't see why someone couldn't
01:24:11.900 | simulate a brain, ultimately have a computational system
01:24:16.160 | that produces all of our behavior, and if that's possible,
01:24:19.480 | I'm sure vastly many other computational systems
01:24:22.820 | of equal or greater sophistication are possible
01:24:27.180 | with all of our cognitive functions and more.
01:24:29.440 | My inclination is to think that once you've got
01:24:33.340 | all these cognitive functions, you know,
01:24:35.440 | perception, attention, reasoning, introspection,
01:24:40.440 | language, emotion, and so on, it's very likely
01:24:46.020 | you'll have consciousness as well.
01:24:49.180 | At least it's very hard for me to see how you'd have
01:24:51.080 | a system that had all those things while somehow
01:24:54.320 | bypassing consciousness.
01:24:55.680 | - So just naturally, it's integrated quite naturally.
01:25:00.220 | There's a lot of overlap in the kind of function
01:25:03.000 | that's required to achieve each of those things.
01:25:05.300 | So you can't disentangle them even when you're--
01:25:08.280 | - It seems to, at least in us, but we don't know
01:25:11.040 | what the causal role of consciousness in the physical world,
01:25:14.240 | what it does.
01:25:15.080 | I mean, just say it turns out consciousness does something
01:25:17.080 | very specific in the physical world,
01:25:18.500 | like collapsing wave functions, as on one common
01:25:22.080 | interpretation of quantum mechanics.
01:25:24.300 | Then ultimately we might find some place where it actually
01:25:26.340 | makes a difference, and we could say, ah, here is where
01:25:29.340 | in collapsing wave functions, it's driving the behavior
01:25:32.240 | of a system, and maybe it could even turn out that
01:25:35.100 | for AGI, you'd need something playing that role.
01:25:39.200 | I mean, if you wanted to connect this to free will,
01:25:41.200 | some people think consciousness collapses wave functions.
01:25:43.540 | That would be how the conscious mind exerts effect
01:25:47.660 | on the physical world and exerts its free will.
01:25:50.460 | And maybe it could turn out that any AGI that didn't
01:25:53.940 | utilize that mechanism would be limited in the kinds
01:25:57.540 | of functionality that it had.
01:25:59.740 | I don't myself find that plausible.
01:26:02.260 | I think probably that functionality could be simulated.
01:26:05.020 | But you could imagine once we had a very specific idea
01:26:07.780 | about the role of consciousness in the physical world,
01:26:10.460 | this would have some impact on the capacity of AGIs,
01:26:14.100 | and if it was a role that could not be duplicated elsewhere,
01:26:17.900 | then we'd have to find some way to either get consciousness
01:26:22.900 | in the system to play that role or to simulate it.
01:26:25.520 | - If we can isolate a particular role to consciousness,
01:26:29.100 | of course, that's incredibly, seems like an incredibly
01:26:32.940 | difficult thing.
01:26:33.920 | Do you have worries about existential threats
01:26:39.620 | of conscious, intelligent beings that are not us?
01:26:44.620 | So certainly, I'm sure you're worried about us
01:26:49.460 | from an existential threat perspective,
01:26:52.820 | but outside of us, AI systems.
01:26:55.380 | - There's a couple of different kinds
01:26:56.460 | of existential threats here.
01:26:58.140 | One is an existential threat to consciousness, generally.
01:27:01.420 | I mean, yes, I care about humans and the survival
01:27:05.020 | of humans and so on, but just say it turns out
01:27:07.460 | that eventually we're replaced by some artificial beings
01:27:11.820 | that aren't humans but are somehow our successors.
01:27:15.500 | They still have good lives, they still do interesting
01:27:18.260 | and wonderful things with the universe.
01:27:20.620 | I don't think that's so bad.
01:27:23.440 | That's just our successors.
01:27:24.560 | We were one stage in evolution.
01:27:26.500 | Something different, maybe better, came next.
01:27:29.740 | If, on the other hand, all of consciousness was wiped out,
01:27:33.260 | that would be a very serious moral disaster.
01:27:36.780 | One way that could happen is by all intelligent life
01:27:40.860 | being wiped out.
01:27:42.100 | And many people think that, yeah, once you get to humans
01:27:44.420 | and AIs of amazing sophistication, where everyone
01:27:48.180 | has got the ability to create weapons
01:27:50.980 | that can destroy the whole universe,
01:27:53.420 | just by pressing a button, then maybe it's inevitable
01:27:57.180 | all intelligent life will die out.
01:28:00.620 | That would certainly be a disaster,
01:28:03.660 | and we've got to think very hard about how to avoid that.
01:28:05.980 | But yeah, another interesting kind of disaster
01:28:08.020 | is that maybe intelligent life is not wiped out,
01:28:12.100 | but all consciousness is wiped out.
01:28:14.860 | So just say you thought, unlike what I was saying
01:28:17.340 | a moment ago, that there are two different kinds
01:28:19.940 | of intelligent systems, some which are conscious
01:28:23.060 | and some which are not.
01:28:25.380 | And just say it turns out that we create AGI
01:28:27.940 | with a high degree of intelligence, meaning high degree
01:28:31.420 | of sophistication in its behavior,
01:28:34.040 | but with no consciousness at all.
01:28:37.060 | That AGI could take over the world, maybe,
01:28:39.660 | but then there'd be no consciousness in this world.
01:28:42.700 | This would be a world of zombies.
01:28:44.380 | Some people have called this the zombie apocalypse.
01:28:47.060 | Because it's an apocalypse for consciousness.
01:28:50.180 | Consciousness is gone, you've merely got
01:28:52.140 | super intelligent, non-conscious robots.
01:28:54.540 | And I would say that's a moral disaster in the same way,
01:28:58.020 | in almost the same way that the world
01:28:59.820 | with no intelligent life is a moral disaster.
01:29:02.220 | All value and meaning may be gone from that world.
01:29:06.720 | So these are both threats to watch out for.
01:29:09.000 | Now my own view is, if you get super intelligence,
01:29:11.720 | you're almost certainly gonna bring consciousness with it.
01:29:13.720 | So I hope that's not gonna happen,
01:29:15.840 | but of course, I don't understand consciousness.
01:29:18.400 | No one understands consciousness.
01:29:20.240 | This is one reason for, this is one reason at least,
01:29:22.880 | among many, for thinking very seriously about consciousness
01:29:25.480 | and thinking about the kind of future we want to create
01:29:28.960 | in a world with humans and/or AIs.
01:29:33.180 | - How do you feel about the possibility
01:29:35.740 | if consciousness so naturally does come with AGI systems,
01:29:39.900 | that we are just a step in the evolution?
01:29:42.580 | That we will be just something, a blip on the record,
01:29:47.260 | that'll be studied in books
01:29:49.020 | by the AGI systems centuries from now?
01:29:51.720 | - I mean, I think I'd probably be okay with that.
01:29:56.320 | Especially if somehow humans are continuous with AGI.
01:29:59.380 | I mean, I think something like this is inevitable.
01:30:02.420 | At the very least, humans are gonna be transformed,
01:30:04.900 | we're gonna be augmented by technology,
01:30:07.380 | it's already happening in all kinds of ways.
01:30:09.700 | We're gonna be transformed by technology
01:30:12.420 | where our brains are gonna be uploaded
01:30:14.060 | and computationally enhanced.
01:30:16.740 | And eventually that line between what's a human
01:30:18.900 | and what's an AI may be kind of hard to draw.
01:30:23.900 | How much does it matter, for example,
01:30:25.620 | that some future being a thousand years from now
01:30:29.480 | that somehow descended from us actually still has biology?
01:30:32.880 | I think it would be nice if you could kind of point
01:30:34.940 | to its cognitive system, point to some parts
01:30:36.880 | that had some roots in us and trace a continuous line there.
01:30:41.560 | It would be selfishly nice for me to think that,
01:30:44.520 | okay, I'm connected to this thread running
01:30:47.240 | through the future of the world.
01:30:48.600 | But if it turns out, okay, there's a jump there,
01:30:51.120 | they found a better way to design cognitive systems,
01:30:54.340 | they designed a whole new kind of thing
01:30:56.080 | and the only line is some causal chain of designing
01:31:00.440 | systems that design better systems,
01:31:02.860 | is that so much worse?
01:31:05.440 | I don't know, we're still at least part
01:31:06.680 | of a causal chain of design and yes, they're not humans,
01:31:10.160 | but still they're our successors.
01:31:12.280 | So I mean, ultimately I think it's probably inevitable
01:31:15.040 | that something like that will happen
01:31:16.840 | and at least we were part of the process.
01:31:20.000 | It'd be nice if they still cared enough about us
01:31:23.480 | to maybe engage with our arguments.
01:31:28.340 | I'm really hoping that the AGIs are gonna solve
01:31:30.180 | all the problems of philosophy.
01:31:31.820 | They'll come back and read all this crappy work
01:31:35.140 | from the 20th and 21st centuries,
01:31:36.560 | the hard problem of consciousness, and say here is why
01:31:39.000 | they got it wrong, and so on.
01:31:40.460 | If that happened, then I'd really feel like I was part
01:31:42.360 | of at least an intellectual process over centuries
01:31:45.040 | and that would be kind of cool.
01:31:46.060 | - I'm pretty sure they would clone
01:31:48.620 | or they would recreate David Chalmers
01:31:50.860 | and for the fun of it, sort of bring back other philosophers.
01:31:55.260 | - Yeah, bring back Descartes.
01:31:56.100 | - Descartes and just put them in a room and just watch.
01:31:58.860 | It'll be a Netflix of the future show
01:32:02.120 | where you bring philosophers,
01:32:04.660 | 100% human philosophers from previous generations,
01:32:07.760 | put them in a room and watch them.
01:32:09.500 | - I am totally up for that.
01:32:11.220 | Simulators, AGIs of the future,
01:32:14.100 | if you're watching this podcast, do that.
01:32:16.940 | I would like to be recreated and hang out with Descartes.
01:32:19.740 | - Who would it be, would Descartes be the first?
01:32:22.660 | If you could hang out as part of such a TV show
01:32:26.100 | with a philosopher that's no longer with us from long ago,
01:32:30.860 | who would you choose?
01:32:33.220 | - Descartes would have to be right up there.
01:32:34.740 | Oh, actually, a couple of months ago,
01:32:36.820 | I got to have a conversation with Descartes.
01:32:38.980 | An actor who's actually a philosopher came out on stage
01:32:41.540 | playing Descartes.
01:32:42.780 | I didn't know this was gonna happen
01:32:43.860 | and I just offered, I gave a talk and--
01:32:48.140 | - A bit of a surreal moment.
01:32:48.980 | - He argued all my ideas were crap and all derived from him
01:32:51.580 | and so on, we had a long argument, this was great.
01:32:54.700 | You know, I would love to see what Descartes
01:32:56.100 | would think about AI, for example,
01:32:58.180 | and modern neuroscience and so on.
01:32:59.940 | I suspect not too much would surprise him,
01:33:01.980 | but yeah, William James,
01:33:05.460 | as a psychologist of consciousness,
01:33:08.740 | I think James was probably the richest.
01:33:13.740 | But, oh, there's Immanuel Kant.
01:33:17.140 | I never really understood what he was up to;
01:33:19.100 | maybe I would if I got to actually talk to him about some of this.
01:33:22.740 | Hey, there was Princess Elizabeth who talked with Descartes
01:33:25.700 | and who really got at the problem that Descartes' idea
01:33:30.700 | of a non-physical mind interacting
01:33:33.420 | with the physical body couldn't really work.
01:33:37.220 | Most philosophers think
01:33:39.180 | she's been proved right, so maybe put me in a room
01:33:41.020 | with Descartes and Princess Elizabeth
01:33:43.460 | and we can all argue it out.
01:33:44.860 | (laughing)
01:33:47.860 | - What kind of future, so we talked about
01:33:50.540 | zombies, a concerning future,
01:33:53.260 | but what kind of future excites you?
01:33:56.180 | What do you think, if we look forward,
01:33:58.900 | sort of we're at the very early stages
01:34:02.180 | of understanding consciousness
01:34:04.100 | and we're now at the early stages of being able
01:34:06.420 | to engineer complex, interesting systems
01:34:10.140 | that have degrees of intelligence
01:34:11.540 | and maybe one day we'll have degrees of consciousness,
01:34:14.260 | maybe be able to upload brains,
01:34:17.100 | all those possibilities, virtual reality.
01:34:19.980 | Is there a particular aspect of this future world
01:34:22.620 | that just excites you?
01:34:24.020 | - I think there are lots of different aspects.
01:34:26.340 | I mean, frankly, I want it to hurry up and happen.
01:34:29.500 | It's like, yeah, we've had some progress lately in AI and VR,
01:34:33.100 | but in the grand scheme of things, it's still kind of slow.
01:34:35.900 | The changes are not yet transformative
01:34:38.180 | and I'm in my 50s, I've only got so long left.
01:34:42.060 | I'd like to see really serious AI in my lifetime
01:34:45.620 | and really serious virtual worlds.
01:34:48.180 | 'Cause yeah,
01:34:49.660 | I would like to be able to hang out in a virtual reality
01:34:51.980 | which is richer than this reality,
01:34:56.500 | to really get to inhabit fundamentally different kinds
01:35:00.300 | of spaces.
01:35:02.140 | Well, I would very much like to be able to upload my mind
01:35:05.700 | onto a computer, so maybe I don't have to die.
01:35:11.420 | Maybe gradually replace my neurons
01:35:14.180 | with silicon chips and inhabit a computer.
01:35:17.340 | Selfishly, that would be wonderful.
01:35:19.300 | I suspect I'm not gonna quite get there in my lifetime,
01:35:24.300 | but once that's possible,
01:35:26.500 | then you've got the possibility
01:35:27.340 | of transforming your consciousness in remarkable ways,
01:35:30.180 | augmenting it, enhancing it.
01:35:33.300 | - So let me ask then if such a system
01:35:36.020 | is a possibility within your lifetime
01:35:39.580 | and you were given the opportunity to become immortal
01:35:42.820 | in this kind of way, would you choose to be immortal?
01:35:49.220 | - Yes, I totally would.
01:35:52.420 | I know some people say they couldn't,
01:35:54.900 | it'd be awful to be immortal,
01:35:58.540 | it'd be so boring or something.
01:36:04.820 | I really don't see why that would be.
01:36:04.820 | I mean, even if it's just ordinary life that continues on,
01:36:07.460 | ordinary life is not so bad,
01:36:09.580 | but furthermore, I kind of suspect that
01:36:12.020 | if the universe is gonna go on forever or indefinitely,
01:36:16.180 | it's gonna continue to be interesting.
01:36:19.300 | I don't take the view that we're just hit
01:36:22.020 | with this one romantic point of interest now
01:36:24.220 | and afterwards it's all gonna be boring,
01:36:26.220 | super intelligent stasis.
01:36:28.500 | I guess my vision is more like,
01:36:30.020 | no, it's gonna continue to be infinitely interesting.
01:36:32.660 | Something like: as you go up the set theoretic hierarchy,
01:36:36.180 | you go from the finite cardinals to Aleph zero,
01:36:41.180 | and then on through Aleph one and Aleph two,
01:36:46.060 | and maybe the continuum, and you keep taking power sets.
01:36:49.860 | And in set theory, they've got these results
01:36:51.980 | that actually all this is fundamentally unpredictable.
01:36:54.780 | It doesn't follow any simple computational patterns.
01:36:57.420 | There's new levels of creativity
01:36:58.940 | as the set theoretic universe expands and expands.
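A minimal sketch of the standard set theory being gestured at here, on the assumption that the "fundamentally unpredictable" results Chalmers has in mind are the classical independence phenomena: Cantor's theorem forces the tower of infinite cardinals to keep growing under repeated power sets, while questions about where that tower meets the aleph scale, such as the continuum hypothesis, are provably not settled by the usual axioms.

% Cantor's theorem: every set is strictly smaller than its power set,
% so iterating the power set from the naturals climbs forever.
\[
  |X| < |\mathcal{P}(X)| \quad\text{for every set } X
  \qquad\Longrightarrow\qquad
  \aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \cdots
\]
% The alephs enumerate the infinite cardinals in order; how the power-set
% scale lines up with them is not decided by the usual axioms.
\[
  \aleph_0 < \aleph_1 < \aleph_2 < \cdots,
  \qquad
  \text{CH: } 2^{\aleph_0} = \aleph_1 \text{ is independent of ZFC (G\"odel 1940, Cohen 1963).}
\]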
01:37:01.900 | I guess that's my vision of the future.
01:37:04.860 | That's my optimistic vision of the future
01:37:06.660 | of super intelligence.
01:37:08.100 | It will keep expanding and keep growing,
01:37:09.780 | but still being fundamentally unpredictable at many points.
01:37:12.900 | I mean, yes, this creates all kinds of worries,
01:37:15.300 | like couldn't it all be fragile
01:37:17.700 | and be destroyed at any point?
01:37:18.980 | So we're gonna need a solution to that problem.
01:37:21.180 | But if we get to stipulate that I'm immortal,
01:37:23.420 | well, I hope that I'm not just immortal
01:37:27.940 | and stuck in a single world forever,
01:37:27.940 | but I'm immortal and get to take part
01:37:29.980 | in this process of going through
01:37:32.180 | infinitely rich created futures.
01:37:34.380 | - Rich, unpredictable, exciting.
01:37:36.460 | Well, I think I speak for a lot of people in saying,
01:37:39.900 | I hope you do become immortal
01:37:41.460 | and there'll be that Netflix show, "The Future,"
01:37:43.700 | where you get to argue with Descartes,
01:37:46.380 | perhaps for all eternity.
01:37:49.780 | So Dave, it was an honor.
01:37:51.460 | Thank you so much for talking today.
01:37:52.900 | - Thanks, it was a pleasure.
01:37:55.060 | - Thanks for listening to this conversation.
01:37:57.180 | And thank you to our presenting sponsor, Cash App.
01:38:00.060 | Download it, use code LEXPODCAST.
01:38:02.700 | You'll get $10 and $10 will go to FIRST,
01:38:05.500 | an organization that inspires and educates young minds
01:38:08.700 | to become science and technology innovators of tomorrow.
01:38:12.180 | If you enjoy this podcast, subscribe on YouTube,
01:38:14.960 | give it five stars on Apple Podcast,
01:38:16.820 | follow on Spotify, support it on Patreon,
01:38:19.220 | or simply connect with me on Twitter @LexFriedman.
01:38:22.260 | And now let me leave you with some words
01:38:25.020 | from David Chalmers.
01:38:26.980 | Materialism is a beautiful and compelling view of the world,
01:38:30.780 | but to account for consciousness,
01:38:32.260 | we have to go beyond the resources it provides.
01:38:35.260 | Thank you for listening.
01:38:37.500 | I hope to see you next time.
01:38:39.340 | (upbeat music)