
Nick Bostrom: Simulation and Superintelligence | Lex Fridman Podcast #83


Chapters

0:00 Introduction
2:48 Simulation hypothesis and simulation argument
12:17 Technologically mature civilizations
15:30 Case 1: if something kills all possible civilizations
19:08 Case 2: if we lose interest in creating simulations
22:03 Consciousness
26:27 Immersive worlds
28:50 Experience machine
41:10 Intelligence and consciousness
48:58 Weighing probabilities of the simulation argument
61:43 Elaborating on Joe Rogan conversation
65:53 Doomsday argument and anthropic reasoning
83:02 Elon Musk
85:26 What's outside the simulation?
89:52 Superintelligence
107:27 AGI utopia
112:41 Meaning of life

Transcript

00:00:00.000 | The following is a conversation with Nick Bostrom,
00:00:02.880 | a philosopher at University of Oxford
00:00:05.520 | and the director of the Future of Humanity Institute.
00:00:08.700 | He has worked on fascinating and important ideas
00:00:11.900 | in existential risk, simulation hypothesis,
00:00:14.960 | human enhancement ethics,
00:00:16.860 | and the risks of superintelligent AI systems,
00:00:19.940 | including in his book, "Superintelligence."
00:00:23.200 | I can see talking to Nick multiple times in this podcast,
00:00:26.200 | many hours each time,
00:00:27.640 | because he has done some incredible work
00:00:30.440 | in artificial intelligence, in technology space,
00:00:33.520 | science, and really philosophy in general.
00:00:36.440 | But we have to start somewhere.
00:00:38.800 | This conversation was recorded
00:00:40.480 | before the outbreak of the coronavirus pandemic,
00:00:43.500 | that both Nick and I, I'm sure,
00:00:45.800 | will have a lot to say about next time we speak.
00:00:48.680 | And perhaps that is for the best,
00:00:50.680 | because the deepest lessons can be learned
00:00:52.920 | only in retrospect, when the storm has passed.
00:00:56.680 | I do recommend you read many of his papers
00:00:58.800 | on the topic of existential risk,
00:01:00.720 | including the technical report
00:01:02.400 | titled "Global Catastrophic Risks Survey"
00:01:05.720 | that he co-authored with Anders Sandberg.
00:01:09.360 | For everyone feeling the medical, psychological,
00:01:11.600 | and financial burden of this crisis,
00:01:13.640 | I'm sending love your way.
00:01:15.480 | Stay strong.
00:01:16.680 | We're in this together.
00:01:18.100 | We'll beat this thing.
00:01:19.200 | This is the Artificial Intelligence Podcast.
00:01:23.320 | If you enjoy it, subscribe on YouTube,
00:01:25.560 | review it with five stars on Apple Podcast,
00:01:27.760 | support it on Patreon,
00:01:29.080 | or simply connect with me on Twitter at Lex Fridman,
00:01:32.200 | spelled F-R-I-D-M-A-N.
00:01:34.840 | As usual, I'll do one or two minutes of ads now,
00:01:37.480 | and never any ads in the middle
00:01:39.020 | that can break the flow of the conversation.
00:01:41.120 | I hope that works for you
00:01:42.440 | and doesn't hurt the listening experience.
00:01:44.800 | This show is presented by Cash App,
00:01:48.000 | the number one finance app in the App Store.
00:01:50.200 | When you get it, use code LEXPODCAST.
00:01:53.600 | Cash App lets you send money to friends,
00:01:55.480 | buy Bitcoin, and invest in the stock market
00:01:57.800 | with as little as $1.
00:01:58.800 | Since Cash App does fractional share trading,
00:02:02.240 | let me mention that the order execution algorithm
00:02:05.240 | that works behind the scenes
00:02:06.480 | to create the abstraction of fractional orders
00:02:09.200 | is an algorithmic marvel.
00:02:11.000 | So big props to the Cash App engineers
00:02:13.320 | for solving a hard problem that in the end
00:02:15.840 | provides an easy interface that takes a step up
00:02:18.480 | to the next layer of abstraction over the stock market,
00:02:21.120 | making trading more accessible for new investors
00:02:23.640 | and diversification much easier.
00:02:26.840 | So again, if you get Cash App
00:02:28.600 | from the App Store or Google Play,
00:02:30.120 | and use the code LEXPODCAST, you get $10,
00:02:34.320 | and Cash App will also donate $10 to FIRST,
00:02:37.040 | an organization that is helping to advance robotics
00:02:39.880 | and STEM education for young people around the world.
00:02:42.720 | And now, here's my conversation with Nick Bostrom.
00:02:48.080 | At the risk of asking the Beatles to play "Yesterday"
00:02:51.600 | or the Rolling Stones to play "Satisfaction,"
00:02:54.080 | let me ask you the basics.
00:02:56.240 | What is the simulation hypothesis?
00:02:59.320 | - That we are living in a computer simulation.
00:03:01.840 | - What is a computer simulation?
00:03:04.280 | How are we supposed to even think about that?
00:03:07.240 | - Well, so the hypothesis is meant to be understood
00:03:10.680 | in a literal sense,
00:03:12.640 | not that we can kind of metaphorically view the universe
00:03:17.160 | as an information processing physical system,
00:03:20.200 | but that there is some advanced civilization
00:03:24.040 | who built a lot of computers,
00:03:26.240 | and that what we experience is an effect
00:03:30.560 | of what's going on inside one of those computers,
00:03:33.720 | so that the world around us, our own brains,
00:03:37.520 | everything we see and perceive and think and feel
00:03:41.000 | would exist because this computer
00:03:45.720 | is running certain programs.
00:03:49.080 | - So do you think of this computer
00:03:51.000 | as something similar to the computers of today,
00:03:54.200 | these deterministic sort of Turing machine type things?
00:03:58.080 | Is that what we're supposed to imagine,
00:03:59.520 | or we're supposed to think of something more
00:04:01.800 | like a quantum mechanical system,
00:04:06.760 | something much bigger, something much more complicated,
00:04:09.160 | something much more mysterious from our current perspective?
00:04:12.800 | The ones we have today would do fine,
00:04:14.320 | I mean, bigger, certainly.
00:04:15.240 | You'd need more-- - It's all about size.
00:04:17.080 | - More memory and more processing power.
00:04:18.760 | I don't think anything else would be required.
00:04:21.680 | Now, it might well be that they do have,
00:04:24.360 | maybe they have quantum computers and other things
00:04:27.080 | that would give them even more oomph.
00:04:29.440 | It seems kind of plausible,
00:04:30.600 | but I don't think it's a necessary assumption
00:04:33.560 | in order to get to the conclusion
00:04:37.400 | that a technologically mature civilization
00:04:40.760 | would be able to create these kinds of computer simulations
00:04:44.000 | with conscious beings inside them.
00:04:46.520 | - So do you think the simulation hypothesis
00:04:49.440 | is an idea that's most useful in philosophy,
00:04:52.520 | computer science, physics?
00:04:54.880 | Sort of where do you see it having valuable
00:04:59.520 | kind of starting point
00:05:02.880 | in terms of a thought experiment of it?
00:05:05.200 | - Is it useful?
00:05:06.040 | I guess it's more informative and interesting
00:05:11.040 | and maybe important,
00:05:12.200 | but it's not designed to be useful for something else.
00:05:16.320 | - Well, okay, interesting, sure.
00:05:18.320 | But is it philosophically interesting
00:05:20.840 | or is there some kind of implications
00:05:23.080 | of computer science and physics?
00:05:24.920 | - I think not so much for computer science
00:05:27.880 | or physics per se.
00:05:29.160 | Certainly it would be of interest in philosophy,
00:05:32.240 | I think also to say cosmology or physics
00:05:37.240 | in as much as you're interested
00:05:38.920 | in the fundamental building blocks of the world
00:05:43.080 | and the rules that govern it.
00:05:45.920 | If we are in a simulation,
00:05:47.000 | there is then the possibility that say physics
00:05:49.760 | at the level of the computer running the simulation
00:05:52.960 | could be different from the physics governing phenomena
00:05:58.120 | in the simulation.
00:05:59.600 | So I think it might be interesting
00:06:02.200 | from point of view of religion
00:06:04.760 | or just for kind of trying to figure out
00:06:06.960 | what the heck is going on.
00:06:08.680 | So we mentioned the simulation hypothesis so far.
00:06:14.600 | There is also the simulation argument,
00:06:16.760 | which I tend to make a distinction.
00:06:19.720 | So simulation hypothesis,
00:06:20.960 | we are living in a computer simulation.
00:06:22.680 | Simulation argument,
00:06:23.560 | this argument that tries to show
00:06:25.840 | that one of three propositions is true.
00:06:27.960 | One of which is the simulation hypothesis,
00:06:30.800 | but there are two alternatives
00:06:32.320 | in the original simulation argument,
00:06:35.800 | which we can get to.
00:06:36.720 | - Yeah, let's go there.
00:06:37.720 | By the way, confusing terms
00:06:39.080 | because people will, I think,
00:06:41.920 | probably naturally think simulation argument
00:06:43.760 | equals simulation hypothesis, just terminology wise.
00:06:47.040 | But let's go there.
00:06:48.120 | So simulation hypothesis means
00:06:49.760 | that we are living in a simulation.
00:06:51.600 | So the hypothesis that we're living in a simulation,
00:06:53.600 | simulation argument has these three complete possibilities
00:06:58.600 | that cover all possibilities.
00:07:00.400 | So what are they?
00:07:01.240 | - So it's like a disjunction.
00:07:02.280 | It says at least one of these three is true.
00:07:05.640 | Although it doesn't on its own tell us which one.
00:07:10.640 | So the first one is that almost all civilizations
00:07:15.240 | at our current stage of technological development
00:07:17.800 | go extinct before they reach technological maturity.
00:07:23.680 | So there is some great filter
00:07:25.960 | that makes it so that basically
00:07:31.640 | none of the civilizations throughout,
00:07:35.200 | you know, maybe vast cosmos
00:07:37.640 | will ever get to realize the full potential
00:07:41.080 | of technological development.
00:07:42.200 | - And this could be, theoretically speaking,
00:07:44.720 | this could be because most civilizations
00:07:47.480 | kill themselves too eagerly or destroy themselves too eagerly
00:07:50.600 | or it might be super difficult to build a simulation.
00:07:55.080 | So the span of time.
00:07:56.840 | - Theoretically, it could be both.
00:07:58.320 | Now, I think it looks like we would technologically
00:08:02.080 | be able to get there in a time span
00:08:04.360 | that is short compared to, say, the lifetime of planets
00:08:09.360 | and other sort of astronomical processes.
00:08:13.680 | - So your intuition is to build a simulation is not...
00:08:16.920 | - Well, so there's this interesting concept
00:08:18.280 | of technological maturity.
00:08:20.240 | It's kind of an interesting concept
00:08:22.440 | to have for other purposes as well.
00:08:25.040 | We can see, even based on our current limited understanding,
00:08:29.240 | what some lower bound would be on the capabilities
00:08:32.680 | that you could realize by just developing technologies
00:08:36.960 | that we already see are possible.
00:08:38.480 | So for example, one of my research fellows here,
00:08:41.960 | Eric Drexler, back in the '80s,
00:08:44.680 | studied molecular manufacturing.
00:08:48.320 | That is, you could analyze using theoretical tools
00:08:53.280 | and computer modeling the performance
00:08:55.600 | of various molecularly precise structures
00:08:58.080 | that we didn't then and still don't today
00:09:01.040 | have the ability to actually fabricate.
00:09:03.960 | But you could say that, well,
00:09:04.960 | if we could put these atoms together in this way,
00:09:07.040 | then the system would be stable
00:09:08.840 | and it would rotate at this speed
00:09:12.200 | and have these computational characteristics.
00:09:16.120 | And he also outlined some pathways
00:09:18.240 | that would enable us to get to this kind
00:09:20.840 | of molecularly manufacturing in the fullness of time.
00:09:25.440 | You could do other studies we've done.
00:09:28.200 | You could look at the speed at which, say,
00:09:30.240 | it would be possible to colonize the galaxy
00:09:33.440 | if you had mature technology.
00:09:35.880 | We have an upper limit, which is the speed of light.
00:09:38.280 | We have sort of a lower current limit,
00:09:40.320 | which is how fast current rockets go.
00:09:42.120 | We know we can go faster than that by just, you know,
00:09:45.840 | making them bigger and have more fuel and stuff.
00:09:47.920 | And you can then start to describe
00:09:52.080 | the technological affordances that would exist
00:09:54.880 | once a civilization has had enough time to develop,
00:09:58.400 | at least those technologies we already know are possible.
00:10:00.880 | Then maybe they would discover
00:10:01.960 | other new physical phenomena as well
00:10:04.880 | that we haven't realized
00:10:05.840 | that would enable them to do even more.
00:10:07.800 | But at least there is this kind of basic set of capabilities.
00:10:12.520 | - Can you just linger on that?
00:10:14.320 | How do we jump from molecular manufacturing
00:10:17.560 | to deep space exploration to mature technology?
00:10:22.160 | Like, what's the connection there?
00:10:23.680 | - Well, so these would be two examples
00:10:26.720 | of technological capability sets
00:10:29.400 | that we can have a high degree of confidence
00:10:32.280 | are physically possible in our universe
00:10:35.600 | and that a civilization that was allowed to continue
00:10:39.280 | to develop its science and technology
00:10:41.800 | would eventually attain.
00:10:42.640 | - You can intuit, like, we can kind of see
00:10:45.240 | the set of breakthroughs that are likely to happen.
00:10:49.120 | So you can see, like,
00:10:50.400 | what did you call it, the technological set?
00:10:53.120 | - With computers, maybe it's easier.
00:10:55.640 | The one is we could just imagine bigger computers
00:10:58.160 | using exactly the same parts that we have.
00:11:00.000 | So you can kind of scale things that way, right?
00:11:02.680 | But you could also make processors a bit faster.
00:11:06.120 | If you had this molecular nanotechnology
00:11:08.240 | that Eric Drexler described,
00:11:09.600 | he characterized a kind of crude computer
00:11:13.840 | built with these parts that would perform
00:11:16.760 | at a million times the human brain
00:11:18.920 | while being significantly smaller,
00:11:21.000 | the size of a sugar cube.
00:11:23.640 | And he made no claim that
00:11:24.640 | that's the optimum computing structure,
00:11:26.640 | like, for all we know,
00:11:28.360 | we could build a faster computer
00:11:30.480 | that would be more efficient,
00:11:31.320 | but at least you could do that
00:11:32.440 | if you had the ability to do things
00:11:34.080 | that were atomically precise.
00:11:35.880 | - Yes.
00:11:36.720 | - I mean, so you can then combine these two.
00:11:37.960 | You could have this kind of nanomolecular ability
00:11:40.960 | to build things atom by atom,
00:11:42.600 | and then, say, at the spatial scale
00:11:46.080 | that would be attainable
00:11:47.960 | through space colonizing technology.
00:11:51.200 | You could then start to build
00:11:52.440 | and you could then start, for example,
00:11:54.320 | to characterize a lower bound
00:11:56.000 | on the amount of computing power
00:11:58.200 | that a technologically mature civilization would have.
00:12:01.760 | If it could grab resources,
00:12:04.360 | you know, planets and so forth,
00:12:06.160 | and then use this molecular nanotechnology
00:12:08.520 | to optimize them for computing,
00:12:10.280 | you'd get a very, very high lower bound
00:12:15.200 | on the amount of compute.
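
As a rough illustration of the lower-bound reasoning Bostrom sketches here, the following back-of-envelope Python snippet uses approximate orders of magnitude from his 2003 simulation-argument paper; the constants are those rough published figures, not precise values, and the snippet is an editorial sketch rather than anything stated in the conversation.

```python
# Back-of-envelope sketch of the "compute lower bound" reasoning above,
# using approximate orders of magnitude from Bostrom's 2003 paper
# (both constants below are rough published bounds, not precise figures).

ops_per_sec_planetary_computer = 1e42  # rough estimate for a planet-mass nanotech computer
ops_to_simulate_human_history = 1e36   # upper end of ~1e33-1e36 ops for all human mental history

seconds_needed = ops_to_simulate_human_history / ops_per_sec_planetary_computer
print(f"One ancestor simulation would take ~{seconds_needed:.0e} seconds on such a computer")
# ~1e-06 seconds: on these figures, a single planetary-scale computer could
# simulate the entire mental history of humankind in about a microsecond,
# which is why a tiny fraction of a mature civilization's resources
# would suffice for enormous numbers of simulations.
```
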
00:12:16.880 | - So, sorry, just to define some terms.
00:12:19.080 | So technologically mature civilization
00:12:21.000 | is one that took that piece of technology
00:12:23.160 | to its lower bound.
00:12:26.080 | What is a technologically mature civilization?
00:12:27.480 | - Well, okay, so that means it's a stronger concept
00:12:29.080 | than we really need for the simulation hypothesis.
00:12:31.160 | I just think it's interesting in its own right.
00:12:33.920 | So it would be the idea that there is
00:12:35.680 | some stage of technological development
00:12:38.560 | where you've basically maxed out,
00:12:40.880 | that you developed all those general purpose,
00:12:43.920 | widely useful technologies that could be developed,
00:12:47.080 | or at least kind of come very close to the,
00:12:50.040 | you're 99.9% there or something.
00:12:52.360 | So that's an independent question.
00:12:55.120 | You can think either that there is such a ceiling,
00:12:57.640 | or you might think it just goes,
00:12:59.440 | the technology tree just goes on forever.
00:13:02.400 | - Where does your sense fall?
00:13:04.640 | - I would guess that there is a maximum
00:13:07.240 | that you would start to asymptote towards.
00:13:09.880 | - So new things won't keep springing up, new ceilings.
00:13:13.800 | - In terms of basic technological capabilities,
00:13:16.320 | I think that, yeah, there is like a finite set of those
00:13:19.840 | that can exist in this universe.
00:13:22.080 | Moreover, I mean, I wouldn't be that surprised
00:13:27.080 | if we actually reached close to that level
00:13:30.080 | fairly shortly after we have, say, machine superintelligence.
00:13:33.880 | So I don't think it would take millions of years
00:13:36.360 | for a human originating civilization to begin to do this.
00:13:42.280 | It, I think, is more likely to happen
00:13:44.800 | on historical timescales.
00:13:46.520 | But that's an independent speculation
00:13:49.480 | from the simulation argument.
00:13:51.320 | I mean, for the purpose of the simulation argument,
00:13:54.000 | it doesn't really matter whether it goes
00:13:56.080 | indefinitely far up or whether there's a ceiling,
00:13:58.280 | as long as we know we can at least get to a certain level.
00:14:01.600 | And it also doesn't matter whether that's gonna happen
00:14:04.400 | in 100 years or 5,000 years or 50 million years.
00:14:08.360 | Like the timescales really don't make any difference
00:14:11.240 | for the simulation.
00:14:12.080 | - Can you linger on that a little bit?
00:14:13.200 | Like there's a big difference between 100 years
00:14:16.480 | and 10 million years.
00:14:19.000 | So does it really not matter?
00:14:21.640 | Because you just said,
00:14:22.960 | does it matter if we jump scales
00:14:25.680 | to beyond historical scales?
00:14:28.440 | So will you describe that?
00:14:30.160 | So for the simulation argument,
00:14:34.880 | sort of, doesn't it matter that we,
00:14:38.400 | if it takes 10 million years,
00:14:40.800 | it gives us a lot more opportunity
00:14:42.680 | to destroy civilization in the meantime?
00:14:44.760 | - Yeah, well, so it would shift around the probabilities
00:14:47.320 | between these three alternatives.
00:14:49.680 | That is, if we are very, very far away
00:14:52.400 | from being able to create these simulations,
00:14:54.680 | if it's like say the billions of years into the future,
00:14:56.880 | then it's more likely that we will fail ever to get there.
00:14:59.760 | There are more time for us to kind of,
00:15:02.320 | you know, go extinct along the way.
00:15:03.960 | And so similarly for other civilizations.
00:15:06.280 | - So it is important to think about how hard it is
00:15:08.440 | to build a simulation.
00:15:09.880 | - In terms of, yeah, figuring out which of the disjuncts.
00:15:14.400 | But for the simulation argument itself,
00:15:16.320 | which is agnostic as to which of these three alternatives
00:15:19.440 | is true.
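
For reference, the trilemma being discussed is stated more formally in Bostrom's 2003 paper "Are You Living in a Computer Simulation?"; the notation below paraphrases that paper rather than quoting it exactly.

```latex
% f_P : fraction of human-level civilizations that survive to technological maturity
% N   : average number of ancestor-simulations run by a technologically mature civilization
% H   : average number of individuals who have lived before a civilization reaches maturity
% Fraction of all observers with human-type experiences who are simulated:
\[
  f_{\mathrm{sim}} \;=\; \frac{f_P \,\bar{N}\,\bar{H}}{f_P \,\bar{N}\,\bar{H} + \bar{H}}
  \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
\]
% The simulation argument: at least one of the following holds.
% (1) f_P \approx 0              -- almost no civilization reaches technological maturity;
% (2) \bar{N} \approx 0          -- mature civilizations run (almost) no ancestor-simulations;
% (3) f_{\mathrm{sim}} \approx 1 -- almost all observers like us live in simulations.
```
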
00:15:20.280 | - Yeah, yeah, okay.
00:15:21.120 | - Is it like, you don't have to,
00:15:22.720 | like the simulation argument would be true
00:15:24.800 | whether or not we thought this could be done in 500 years
00:15:28.200 | or it would take 500 million years.
00:15:29.880 | - No, for sure.
00:15:30.720 | The simulation argument stands.
00:15:31.800 | I mean, I'm sure there might be some people who oppose it,
00:15:34.560 | but it doesn't matter.
00:15:36.120 | I mean, it's very nice those three cases cover it.
00:15:39.360 | But the fun part is at least not saying
00:15:42.240 | what the probabilities are,
00:15:43.560 | but kind of thinking about,
00:15:44.960 | kind of intuiting reasoning about like,
00:15:46.960 | what's more likely,
00:15:48.800 | what are the kinds of things that would make
00:15:51.280 | some of the arguments less and more so like,
00:15:54.200 | but let's actually, I don't think we went through them.
00:15:56.400 | So number one is we destroy ourselves
00:15:58.760 | before we ever create simulations.
00:16:00.640 | - Right, so that's kind of sad,
00:16:02.440 | but we have to think not just what might destroy us.
00:16:07.440 | I mean, so that could be some,
00:16:09.560 | whatever disasters or meteorites slamming the earth
00:16:13.400 | a few years from now that could destroy us, right?
00:16:16.040 | But you'd have to postulate
00:16:18.720 | in order for this first disjunct to be true,
00:16:23.320 | that almost all civilizations throughout the cosmos
00:16:27.760 | also failed to reach technological maturity.
00:16:32.040 | - And the underlying assumption there is that there is
00:16:35.040 | likely a very large number
00:16:37.080 | of other intelligent civilizations.
00:16:40.120 | - Well, if there are, yeah,
00:16:41.880 | then they would virtually all have to succumb
00:16:44.720 | in the same way.
00:16:45.640 | I mean, then that leads off another,
00:16:48.640 | I guess there are a lot of little digressions
00:16:50.040 | that are interesting.
00:16:50.880 | - Definitely, let's go there, let's go there.
00:16:52.000 | - Yeah, I mean- - I'm dragging us back.
00:16:53.920 | - Well, there are these,
00:16:55.040 | there is a set of basic questions that always come up
00:16:58.080 | in conversations with interesting people,
00:17:01.400 | like the Fermi paradox, like there's like,
00:17:03.600 | you could almost define whether a person is interesting,
00:17:07.360 | whether at some point the question
00:17:09.120 | of the Fermi paradox comes up, like.
00:17:11.120 | (both laughing)
00:17:12.840 | Well, so for what it's worth,
00:17:14.480 | it looks to me that the universe is very big.
00:17:17.920 | I mean, in fact, according to the most popular
00:17:21.480 | current cosmological theories, infinitely big.
00:17:24.040 | And so then it would follow pretty trivially
00:17:28.360 | that it would contain a lot of other civilizations,
00:17:31.360 | in fact, infinitely many.
00:17:32.600 | If you have some local stochasticity and infinitely many,
00:17:37.800 | it's like, you know, infinitely many lumps of matter,
00:17:40.480 | one next to another,
00:17:41.320 | there's kind of random stuff in each one,
00:17:42.880 | then you're gonna get all possible outcomes
00:17:45.800 | with probability one, infinitely repeated.
00:17:49.120 | So then certainly that would be a lot
00:17:52.560 | of extraterrestrials out there.
00:17:54.200 | Even short of that, if the universe is very big,
00:17:57.920 | that might be a finite, but large number.
00:18:00.560 | If we were literally the only one, yeah,
00:18:04.120 | then of course, if we went extinct,
00:18:08.000 | then all of civilizations at our current stage
00:18:12.160 | would have gone extinct
00:18:13.000 | before becoming technologically mature.
00:18:14.760 | So then it kind of becomes trivially true
00:18:17.440 | that a very high fraction of those went extinct.
00:18:21.240 | But if we think there are many, I mean, it's interesting
00:18:24.520 | because there are certain things that plausibly
00:18:26.920 | could kill us, like if you look at existential risks.
00:18:32.320 | And it might be a different, like the best answer
00:18:37.440 | to what would be most likely to kill us
00:18:39.000 | might be a different answer
00:18:40.440 | than the best answer to the question,
00:18:43.640 | if there is something that kills almost everyone,
00:18:46.640 | what would that be?
00:18:47.720 | 'Cause that would have to be some risk factor
00:18:49.880 | that was kind of uniform over all possible civilizations.
00:18:53.520 | - Yeah, so in this, for the sake of this argument,
00:18:56.320 | you have to think about not just us,
00:18:58.920 | but like every civilization dies out
00:19:01.720 | before they create the simulation.
00:19:02.920 | - Yeah, or something very close to everybody.
00:19:07.680 | - Okay, so what's number two in the-
00:19:10.080 | - Well, so number two is the convergence hypothesis
00:19:13.040 | that is that maybe like a lot of,
00:19:14.960 | some of these civilizations do make it
00:19:16.880 | through to technological maturity,
00:19:18.600 | but out of those who do get there,
00:19:21.320 | they all lose interest in creating these simulations.
00:19:26.320 | So they just, they have the capability of doing it,
00:19:29.920 | but they choose not to.
00:19:32.880 | Not just a few of them decide not to,
00:19:34.600 | but out of a million,
00:19:39.360 | maybe not even a single one of them would do it.
00:19:41.520 | - And I think when you say lose interest,
00:19:44.840 | that sounds like unlikely
00:19:46.920 | because it's like they get bored or whatever,
00:19:49.400 | but it could be so many possibilities within that.
00:19:52.680 | I mean, losing interest could be,
00:19:56.900 | it could be anything from it being
00:20:01.240 | exceptionally difficult to do,
00:20:03.880 | to fundamentally changing the sort of,
00:20:07.680 | the fabric of reality if you do it,
00:20:10.080 | to ethical concerns, all those kinds of things
00:20:12.760 | could be exceptionally strong pressures.
00:20:15.200 | - Well, certainly, I mean, yeah, ethical concerns.
00:20:18.440 | I mean, not really too difficult to do.
00:20:21.160 | I mean, in a sense, that's the first assumption
00:20:23.640 | that you get to technological maturity
00:20:25.880 | where you would have the ability,
00:20:27.920 | using only a tiny fraction of your resources,
00:20:31.200 | to create many, many simulations.
00:20:34.400 | So it wouldn't be the case that they would need
00:20:36.960 | to spend half of their GDP forever
00:20:39.640 | in order to create one simulation.
00:20:41.280 | And they had this difficult debate
00:20:42.880 | about whether they should invest half of their GDP for this.
00:20:46.320 | It would more be like,
00:20:47.160 | well, if any little fraction of the civilization
00:20:49.600 | feels like doing this at any point
00:20:51.680 | during maybe their millions of years of existence,
00:20:55.960 | then there will be millions of simulations.
00:21:00.200 | But certainly, there could be many conceivable reasons
00:21:05.200 | for why there would be this convergence,
00:21:07.280 | many possible reasons for not running ancestor simulations
00:21:10.600 | or other computer simulations,
00:21:12.720 | even if you could do so cheaply.
00:21:15.000 | - By the way, what's an ancestor simulation?
00:21:17.080 | - Well, that would be the type of computer simulation
00:21:20.720 | that would contain people like those
00:21:24.600 | we think have lived on our planet in the past
00:21:27.560 | and like ourselves in terms of the types of experiences
00:21:30.120 | that they have and where those simulated people are conscious.
00:21:33.520 | So like not just simulated in the same sense
00:21:36.440 | that a non-player character would be simulated
00:21:41.440 | in the current computer game,
00:21:42.360 | where it kind of has like an avatar body
00:21:45.040 | and then a very simple mechanism
00:21:46.680 | that moves it forward or backwards,
00:21:49.600 | but something where the simulated being has a brain,
00:21:54.600 | let's say, that's simulated
00:21:56.200 | at a sufficient level of granularity
00:21:57.960 | that it would have the same subjective experiences
00:22:02.640 | as we have.
00:22:03.480 | - So where does consciousness fit into this?
00:22:06.320 | Do you think simulation,
00:22:08.400 | like is there different ways to think about
00:22:09.920 | how this can be simulated,
00:22:11.620 | just like you're talking about now?
00:22:13.360 | Do we have to simulate each brain
00:22:18.220 | within the larger simulation?
00:22:20.960 | Is it enough to simulate just the brain, just the minds,
00:22:24.660 | and not the simulation, not the big universe itself?
00:22:27.560 | Like is there different ways to think about this?
00:22:29.920 | - Yeah, I guess there is a kind of premise
00:22:33.000 | in the simulation argument rolled in
00:22:36.920 | from philosophy of mind.
00:22:38.840 | That is that it would be possible
00:22:41.080 | to create a conscious mind in a computer
00:22:44.960 | and that what determines whether some system
00:22:48.280 | is conscious or not is not like whether it's built
00:22:52.040 | from organic biological neurons,
00:22:54.640 | but maybe something like what the structure
00:22:56.760 | of the computation is that it implements.
00:22:59.560 | So we can discuss that if we want,
00:23:02.400 | but I think it would be,
00:23:04.480 | or as far as my view, that it would be sufficient, say,
00:23:08.520 | if you had a computation that was identical
00:23:13.520 | to the computation in the human brain
00:23:15.760 | down to the level of neurons.
00:23:17.400 | So if you had a simulation with 100 billion neurons
00:23:19.800 | connected in the same way as the human brain,
00:23:21.420 | and you then roll that forward
00:23:24.400 | with the same kind of synaptic weights and so forth,
00:23:27.560 | so you actually had the same behavior coming out of this
00:23:30.400 | as a human with that brain would have done,
00:23:33.720 | then I think that would be conscious.
00:23:35.840 | Now, it's possible you could also generate consciousness
00:23:38.640 | without having that detailed simulation.
00:23:44.440 | There I'm getting more uncertain
00:23:47.000 | exactly how much you could simplify or abstract away.
00:23:50.840 | - Can you linger on that?
00:23:51.920 | What do you mean?
00:23:53.680 | I missed where you're placing consciousness in the second.
00:23:56.680 | - Well, so if you are a computationalist,
00:23:59.160 | do you think that what creates consciousness
00:24:01.840 | is the implementation of a computation?
00:24:04.720 | - Some property, emerging property
00:24:06.520 | of the computation itself.
00:24:08.200 | - Yeah. - That's the idea.
00:24:09.160 | - Yeah, you could say that.
00:24:10.520 | But then the question is,
00:24:12.240 | what's the class of computations
00:24:14.400 | such that when they are run, consciousness emerges?
00:24:18.100 | So if you just have something that adds
00:24:20.760 | one plus one plus one plus one,
00:24:22.880 | like a simple computation,
00:24:24.160 | you think maybe that's not gonna have any consciousness.
00:24:26.960 | If on the other hand, the computation is one
00:24:31.040 | like our human brains are performing,
00:24:34.040 | where as part of the computation,
00:24:37.640 | there is a global workspace,
00:24:40.920 | a sophisticated attention mechanism,
00:24:43.360 | there is self-representations of other cognitive processes
00:24:47.880 | and a whole lot of other things,
00:24:50.320 | that possibly would be conscious.
00:24:52.800 | And in fact, if it's exactly like ours,
00:24:54.640 | I think definitely it would.
00:24:56.520 | But exactly how much less than the full computation
00:25:01.200 | that the human brain is performing would be required
00:25:04.560 | is a little bit, I think, of an open question.
00:25:07.340 | You asked another interesting question as well,
00:25:12.560 | which is, would it be sufficient to just have,
00:25:17.720 | say, the brain or would you need the environment
00:25:20.760 | in order to generate the same kind of experiences
00:25:24.960 | that we have?
00:25:25.840 | And there is a bunch of stuff we don't know.
00:25:29.240 | I mean, if you look at, say,
00:25:30.600 | current virtual reality environments,
00:25:34.080 | one thing that's clear is that we don't have to simulate
00:25:37.320 | all details of them all the time
00:25:39.560 | in order for, say, the human player
00:25:42.440 | to have the perception that there is a full reality in there.
00:25:46.160 | You can have, say, procedural generation,
00:25:48.200 | where you might only render a scene
00:25:49.920 | when it's actually within the view of the player character.
00:25:53.560 | And so similarly, if this environment
00:26:00.200 | that we perceive is simulated,
00:26:03.800 | it might be that all of the parts that come into our view
00:26:08.480 | are rendered at any given time.
00:26:10.240 | And a lot of aspects that never come into view,
00:26:13.600 | say, the details of this microphone I'm talking into,
00:26:17.320 | exactly what each atom is doing at any given point in time,
00:26:21.720 | might not be part of the simulation,
00:26:24.120 | only a more coarse-grained representation.
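
As a toy illustration of the view-dependent, on-demand rendering Bostrom describes, here is a minimal sketch; the chunk size, names, and structure are invented for the example and are not drawn from any particular engine or from the conversation.

```python
# Toy sketch of on-demand world generation: full detail is computed only for
# regions currently in the observer's view, then cached; everything else
# stays coarse-grained until (and unless) it is actually observed.
from functools import lru_cache

CHUNK = 16  # the world is generated in 16x16 chunks (arbitrary choice)

@lru_cache(maxsize=None)
def generate_chunk(cx: int, cy: int) -> str:
    """Expensive detail generation; runs only the first time a chunk is needed."""
    return f"chunk({cx},{cy}) rendered in full detail"

def visible_chunks(player_x: float, player_y: float, view_radius: int = 2):
    """Yield only the chunks within the player's view radius."""
    pcx, pcy = int(player_x) // CHUNK, int(player_y) // CHUNK
    for dx in range(-view_radius, view_radius + 1):
        for dy in range(-view_radius, view_radius + 1):
            yield generate_chunk(pcx + dx, pcy + dy)

# Only the ~25 chunks around the player are ever generated for this frame:
frame = list(visible_chunks(player_x=100.0, player_y=240.0))
print(len(frame), "chunks rendered;", generate_chunk.cache_info())
```
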
00:26:26.200 | - So that to me is actually
00:26:28.320 | from an engineering perspective
00:26:29.960 | why the simulation hypothesis
00:26:31.680 | is really interesting to think about,
00:26:33.600 | is how difficult is it to fake
00:26:38.600 | sort of in a virtual reality context,
00:26:41.640 | I don't know if fake is the right word,
00:26:42.880 | but to construct a reality that is sufficiently real to us
00:26:46.960 | to be immersive in the way that the physical world is.
00:26:52.120 | I think that's actually probably an answerable question
00:26:56.960 | of psychology, of computer science,
00:26:59.440 | of where's the line where it becomes so immersive
00:27:04.440 | that you don't wanna leave that world?
00:27:08.240 | - Yeah, or that you don't realize while you're in it
00:27:10.960 | that it is a virtual world.
00:27:13.240 | - Yeah, those are two actually questions.
00:27:15.000 | Yours is the more sort of the good question
00:27:17.360 | about the realism.
00:27:18.800 | But mine, from my perspective,
00:27:20.600 | what's interesting is it doesn't have to be real,
00:27:23.400 | but how can we construct a world
00:27:28.120 | that we wouldn't wanna leave?
00:27:29.800 | - Yeah, I mean, I think that might be too low a bar.
00:27:32.520 | I mean, if you think, say, when people first had Pong
00:27:35.640 | or something like that,
00:27:37.320 | I'm sure there were people
00:27:38.160 | who wanted to keep playing it for a long time
00:27:40.840 | 'cause it was fun,
00:27:41.680 | and they wanted to be in this little world.
00:27:43.800 | I'm not sure we would say it's immersive.
00:27:46.440 | I mean, I guess in some sense it is,
00:27:48.160 | but an absorbing activity doesn't even have to be.
00:27:51.240 | - But they left that world though.
00:27:52.960 | So I think that bar is deceivingly high.
00:27:58.880 | So they eventually left.
00:28:00.880 | So you can play Pong or StarCraft
00:28:04.400 | or whatever more sophisticated games for hours,
00:28:07.880 | for months, you know,
00:28:09.680 | while the World of Warcraft could be in a big addiction,
00:28:12.280 | but eventually they escaped that.
00:28:13.880 | - Ah, so you mean when it's absorbing enough
00:28:16.920 | that you would spend your entire,
00:28:18.400 | you would choose to spend your entire life in there.
00:28:21.040 | - And then thereby changing the concept of what reality is
00:28:24.440 | because your reality becomes the game,
00:28:29.440 | not because you're fooled,
00:28:31.240 | but because you've made that choice.
00:28:34.160 | - Yeah, and it may be different.
00:28:35.400 | People might have different preferences regarding that.
00:28:38.480 | Some might, even if you had a perfect virtual reality,
00:28:43.320 | might still prefer not to spend
00:28:47.560 | the rest of their lives there.
00:28:49.120 | I mean, in philosophy, there's this experience machine,
00:28:53.080 | thought experiment.
00:28:54.240 | Have you come across this?
00:28:55.760 | So Robert Nozick had this thought experiment
00:28:58.840 | where you imagine some crazy, super duper neuroscientists
00:29:03.840 | of the future have created a machine
00:29:06.160 | that could give you any experience you want
00:29:08.040 | if you step in there.
00:29:09.120 | And for the rest of your life,
00:29:11.120 | you can kind of pre-program it in different ways.
00:29:14.000 | So your fondest dreams could come true.
00:29:20.640 | You could, whatever you dream,
00:29:22.640 | you wanna be a great artist, a great lover,
00:29:26.080 | like have a wonderful life, all of these things,
00:29:29.080 | if you step into the experience machine,
00:29:31.080 | will be your experiences, constantly happy.
00:29:36.240 | But you would kind of disconnect from the rest of reality
00:29:38.360 | and you would float there in a tank.
00:29:40.160 | And so Nozick thought that most people
00:29:44.600 | would choose not to enter the experience machine.
00:29:49.000 | I mean, many might wanna go there for a holiday,
00:29:50.920 | but they wouldn't want to sort of check out
00:29:52.480 | of existence permanently.
00:29:54.360 | And so he thought that was an argument
00:29:56.120 | against certain views of value,
00:29:59.640 | according to what we value
00:30:01.880 | is a function of what we experience.
00:30:04.480 | And because in the experience machine,
00:30:05.800 | you could have any experience you want.
00:30:07.080 | And yet many people would think
00:30:09.200 | that would not be much value.
00:30:11.880 | So therefore what we value depends on other things
00:30:15.360 | than what we experience.
00:30:18.440 | So, okay, can you take that argument further?
00:30:21.280 | I mean, what about the fact that maybe what we value
00:30:23.800 | is the up and down of life?
00:30:25.040 | So you could have up and downs in the experience machine.
00:30:27.840 | But what can't you have in the experience machine?
00:30:31.120 | Well, I mean, that then becomes
00:30:33.760 | an interesting question to explore,
00:30:35.320 | but for example, real connection with other people,
00:30:38.920 | if the experience machine is a solo machine
00:30:40.920 | where it's only you,
00:30:42.000 | like that's something you wouldn't have there.
00:30:44.840 | You would have this subjective experience
00:30:46.560 | that would be like fake people.
00:30:48.160 | - Yeah.
00:30:49.240 | - But if you gave somebody flowers,
00:30:51.840 | that wouldn't be anybody there who actually got happy.
00:30:54.360 | It would just be a little simulation of somebody smiling,
00:30:58.360 | but the simulation would not be the kind of simulation
00:31:00.360 | I'm talking about in the simulation argument
00:31:01.920 | where the simulated creature is conscious.
00:31:04.000 | It would just be a kind of a smiley face
00:31:06.680 | that would look perfectly real to you.
00:31:08.400 | - So we're now drawing a distinction
00:31:10.480 | between appear to be perfectly real
00:31:13.640 | and actually being real.
00:31:14.920 | - Yeah.
00:31:15.760 | So that could be one thing.
00:31:18.160 | I mean, like a big impact on history,
00:31:20.440 | maybe it's also something you won't have
00:31:22.720 | if you check into this experience machine.
00:31:25.560 | So some people might actually feel
00:31:27.760 | the life I wanna have for me is one
00:31:29.520 | where I have a big positive impact
00:31:33.000 | on how history unfolds.
00:31:35.280 | So you could kind of explore
00:31:38.560 | these different possible explanations for why it is
00:31:43.120 | you wouldn't wanna go into the experience machine
00:31:45.000 | if that's what you feel.
00:31:47.840 | And one interesting observation
00:31:50.560 | regarding this Nozick thought experiment
00:31:52.680 | and the conclusions he wanted to draw from it
00:31:54.400 | is how much is a kind of a status quo effect.
00:31:58.720 | So a lot of people might not wanna get this
00:32:01.760 | on current reality to plug into this dream machine.
00:32:05.840 | But if they instead were told,
00:32:09.920 | well, what you've experienced up to this point
00:32:13.040 | was a dream, now, do you wanna disconnect from this
00:32:18.040 | and enter the real world
00:32:20.440 | when you have no idea maybe what the real world is?
00:32:22.920 | Or maybe you could say,
00:32:23.760 | well, you're actually a farmer in Peru
00:32:26.480 | growing peanuts and you could live
00:32:30.280 | for the rest of your life in this.
00:32:33.520 | Or would you wanna continue your dream life
00:32:37.840 | as Lex Fridman going around the world,
00:32:40.560 | making podcasts and doing research?
00:32:42.640 | If the status quo was that they were actually
00:32:48.200 | in the experience machine,
00:32:50.840 | I think a lot of people might then prefer
00:32:52.560 | to live the life that they are familiar with
00:32:54.960 | rather than sort of bail out into.
00:32:57.880 | - That's interesting, the change itself, the leap.
00:33:00.240 | - Yeah, so it might not be so much
00:33:01.840 | the reality itself that we are after,
00:33:04.320 | but it's more that we are maybe involved
00:33:06.640 | in certain projects and relationships.
00:33:08.760 | And we have a self-identity and these things
00:33:12.480 | that our values are kind of connected
00:33:14.120 | with carrying that forward.
00:33:15.760 | And then whether it's inside a tank
00:33:19.200 | or outside a tank in Peru,
00:33:21.440 | or whether inside a computer, outside a computer,
00:33:23.720 | that's kind of less important
00:33:26.280 | to what we ultimately care about.
00:33:29.600 | - Yeah, but still, so just to linger on it,
00:33:32.520 | it is interesting.
00:33:33.800 | I find maybe people are different,
00:33:36.600 | but I find myself quite willing to take the leap
00:33:39.360 | to the farmer in Peru,
00:33:42.080 | especially as the virtual reality system
00:33:44.000 | become more realistic.
00:33:45.400 | I find that possibility.
00:33:48.440 | And I think more people would take that leap.
00:33:50.680 | - But so in this thought experiment,
00:33:52.120 | just to make sure we are understanding,
00:33:53.200 | so in this case, the farmer in Peru
00:33:55.200 | would not be a virtual reality.
00:33:57.600 | That would be the real--
00:33:58.880 | - The real.
00:33:59.720 | - The real, your life,
00:34:01.080 | like before this whole experience machine started.
00:34:04.440 | - Well, I kind of assumed from that description,
00:34:06.920 | you're being very specific,
00:34:08.160 | but that kind of idea just like washes away
00:34:12.720 | the concept of what's real.
00:34:14.720 | I mean, I'm still a little hesitant
00:34:16.920 | about your kind of distinction between real and illusion.
00:34:21.920 | Because when you can have an illusion that feels,
00:34:26.880 | I mean, that looks real,
00:34:28.720 | I mean, what, I don't know how you can definitively say
00:34:32.640 | something is real or not.
00:34:33.760 | Like what's a good way to prove that something is real
00:34:36.480 | in that context?
00:34:38.000 | - Well, so I guess in this case,
00:34:39.920 | it's more a stipulation.
00:34:41.040 | In one case, you're floating in a tank
00:34:43.880 | with these wires by the super duper neuroscientists
00:34:47.400 | plugging into your head,
00:34:49.400 | giving you, like, Fridman experiences.
00:34:52.080 | And in the other,
00:34:52.920 | you're actually tilling the soil in Peru,
00:34:55.600 | growing peanuts,
00:34:56.440 | and then those peanuts are being eaten
00:34:58.200 | by other people all around the world who buy the exports.
00:35:01.040 | - That's real.
00:35:01.880 | - So these are two different possible situations
00:35:03.960 | in the one and the same real world
00:35:06.760 | that you could choose to occupy.
00:35:09.960 | - But just to be clear,
00:35:11.040 | when you're in a vat with wires and the neuroscientists,
00:35:15.160 | you can still go farming in Peru, right?
00:35:18.560 | - No, well, you could, if you wanted to,
00:35:22.400 | you could have the experience of farming in Peru,
00:35:24.440 | but that wouldn't actually be any peanuts grown.
00:35:27.240 | - Well, but what makes a peanut?
00:35:30.000 | So a peanut could be grown
00:35:33.800 | and you could feed things with that peanut.
00:35:37.280 | And why can't all of that be done in a simulation?
00:35:41.640 | - I hope, first of all,
00:35:42.480 | that they actually have peanut farms in Peru.
00:35:45.200 | I guess we'll get a lot of comments otherwise from angry--
00:35:48.040 | - I was with you up to the point
00:35:51.680 | when you started talking about Peru.
00:35:52.800 | - You should know you can't grow peanuts in that climate.
00:35:57.800 | - No, I mean, I think, I mean, in the simulation,
00:36:02.600 | I think there is a sense, the important sense,
00:36:05.240 | in which it would all be real.
00:36:06.760 | Nevertheless, there is a distinction
00:36:09.840 | between inside a simulation and outside a simulation,
00:36:12.720 | or in the case of Nozick's thought experiment,
00:36:16.160 | whether you're in the vat or outside the vat.
00:36:19.400 | And some of those differences may or may not be important.
00:36:22.480 | I mean, that comes down to your values and preferences.
00:36:25.480 | So if the experience machine
00:36:29.800 | only gives you the experience of growing peanuts,
00:36:32.760 | but you're the only one in the experience machines.
00:36:35.760 | but you're the only one in the experience machine.
00:36:37.920 | within the experience machine, others can plug in.
00:36:41.280 | - Well, there are versions of the experience machine.
00:36:43.760 | So in fact, you might wanna have
00:36:45.600 | distinguished different thought experiments,
00:36:47.040 | different versions of it.
00:36:48.320 | So in like in the original thought experiment,
00:36:50.640 | maybe it's only you, right?
00:36:51.640 | - Maybe just you.
00:36:52.480 | So, and you think, I wouldn't wanna go in there.
00:36:54.400 | Well, that tells you something interesting
00:36:56.160 | about what you value and what you care about.
00:36:58.040 | Then you could say, well, what if you add the fact
00:37:01.080 | that there would be other people in there
00:37:02.320 | and you would interact with them?
00:37:03.280 | Well, it starts to make it more attractive, right?
00:37:05.840 | Then you could add in, well,
00:37:07.720 | what if you could also have important long-term effects
00:37:10.560 | on human history and the world,
00:37:11.800 | and you could actually do something useful,
00:37:14.320 | even though you were in there,
00:37:15.200 | that makes it maybe even more attractive.
00:37:17.640 | Like you could actually have a life that had a purpose
00:37:21.320 | and consequences.
00:37:22.360 | And so as you sort of add more into it,
00:37:25.360 | it becomes more similar to the baseline reality
00:37:30.360 | that you were comparing it to.
00:37:32.840 | - Yeah, but I just think inside the experience machine
00:37:35.120 | and without taking those steps that you just mentioned,
00:37:39.240 | you still have an impact on long-term history
00:37:43.240 | of the creatures that live inside that,
00:37:47.680 | of the quote unquote fake creatures that live inside
00:37:50.560 | that experience machine.
00:37:53.200 | And that, like at a certain point,
00:37:55.360 | if there's a person waiting for you
00:37:59.560 | inside that experience machine,
00:38:01.080 | maybe your newly found wife, and she dies,
00:38:05.760 | she has fear, she has hopes,
00:38:08.840 | and she exists in that machine.
00:38:10.480 | When you unplug yourself and plug back in,
00:38:13.680 | she's still there going on about her life.
00:38:16.160 | - Oh, well, in that case, yeah,
00:38:17.400 | she starts to have more of an independent existence.
00:38:20.160 | - Independent existence.
00:38:21.320 | But it depends, I think, on how she's implemented
00:38:24.240 | in the experience machine.
00:38:25.640 | Take one limit case where all she is
00:38:29.360 | is a static picture on the wall, a photograph.
00:38:33.160 | So you think, well, I can look at her, right?
00:38:35.880 | But that's it, there's no,
00:38:37.960 | but then you think, well, it doesn't really matter much
00:38:40.360 | what happens to that,
00:38:41.280 | and any more than a normal photograph,
00:38:42.920 | if you tear it up, right?
00:38:45.000 | It means you can't see it anymore,
00:38:46.360 | but you haven't harmed the person
00:38:48.760 | whose picture you tore up.
00:38:50.040 | - That's the good one.
00:38:51.560 | - But if she's actually implemented,
00:38:53.640 | say, at a neural level of detail,
00:38:57.040 | so that she's a fully realized digital mind
00:39:00.640 | with the same behavioral repertoire as you have,
00:39:04.760 | then very plausibly,
00:39:06.160 | she would be a conscious person like you are.
00:39:08.840 | And then you would, what you do in this experience machine
00:39:12.240 | would have real consequences
00:39:13.640 | for how this other mind felt.
00:39:16.040 | So you have to specify
00:39:18.880 | which of these experience machines you're talking about.
00:39:21.080 | I think it's not entirely obvious
00:39:23.520 | that it would be possible to have an experience machine
00:39:28.000 | that gave you a normal set of human experiences,
00:39:31.320 | which include experiences of interacting with other people,
00:39:35.280 | without that also generating consciousnesses
00:39:39.600 | corresponding to those other people.
00:39:41.320 | That is, if you create another entity
00:39:44.920 | that you perceive and interact with
00:39:46.880 | that to you looks entirely realistic,
00:39:49.120 | not just when you say hello, they say hello back,
00:39:51.160 | but you have a rich interaction,
00:39:52.840 | many days, deep conversations.
00:39:54.600 | Like it might be that the only
00:39:57.000 | plausible way of implementing that
00:39:59.920 | would be one that also has a side effect,
00:40:02.120 | instantiated this other person in enough detail
00:40:05.240 | that you would have a second consciousness there.
00:40:07.720 | I think that's to some extent an open question.
00:40:11.640 | - So you don't think it's possible
00:40:13.160 | to fake consciousness and fake intelligence?
00:40:14.880 | - Well, it might be.
00:40:15.920 | I mean, I think you can certainly fake,
00:40:18.240 | if you have a very limited interaction with somebody,
00:40:21.200 | you could certainly fake that.
00:40:22.960 | That is, if all you have to go on
00:40:25.240 | is somebody said hello to you,
00:40:26.880 | that's not enough for you to tell
00:40:28.600 | whether that was a real person there
00:40:30.440 | or a prerecorded message,
00:40:32.000 | or like a very superficial simulation
00:40:35.400 | that has no consciousness.
00:40:36.920 | 'Cause that's something easy to fake.
00:40:39.160 | We could already fake it.
00:40:40.080 | Now you can record a voice recording.
00:40:41.880 | But if you have a richer set of interactions
00:40:45.600 | where you're allowed to answer,
00:40:47.480 | ask open-ended questions and probe from different angles,
00:40:51.320 | that couldn't sort of be,
00:40:52.760 | you couldn't give canned answers
00:40:54.040 | to all of the possible ways that you could probe it,
00:40:57.480 | then it starts to become more plausible
00:40:59.440 | that the only way to realize this thing
00:41:02.240 | in such a way that you would get the right answer
00:41:04.600 | from any which angle you probed it
00:41:07.000 | would be a way of instantiating it
00:41:08.440 | where you also instantiated a conscious mind.
00:41:10.640 | - Yeah, I'm with you on the intelligence part,
00:41:12.360 | but there's something about me
00:41:13.560 | that says consciousness is easier to fake.
00:41:15.800 | Like I've recently gotten my hands on a lot of Roombas.
00:41:19.720 | Don't ask me why or how.
00:41:21.200 | And I've made them,
00:41:24.360 | there's just a nice robotic mobile platform for experiments.
00:41:28.360 | And I made them scream or moan in pain and so on
00:41:33.040 | just to see when they're responding to me.
00:41:35.520 | And it's just a sort of psychological experiment on myself.
00:41:38.920 | And I think they appear conscious to me pretty quickly.
00:41:42.960 | To me, at least my brain can be tricked quite easily.
00:41:46.000 | - Right.
00:41:46.840 | - So if I introspect,
00:41:48.760 | it's harder for me to be tricked
00:41:51.480 | that something is intelligent.
00:41:53.640 | So I just have this feeling
00:41:55.040 | that inside this experience machine,
00:41:57.800 | just saying that you're conscious
00:41:59.560 | and having certain qualities of the interaction,
00:42:03.360 | like being able to suffer, like being able to hurt,
00:42:06.320 | like being able to wonder about
00:42:09.560 | the essence of your own existence,
00:42:11.760 | not actually, I mean,
00:42:13.720 | creating the illusion that you're wondering about it
00:42:17.480 | is enough to create the feeling of consciousness
00:42:19.640 | and the illusion of consciousness.
00:42:23.040 | And because of that, create a really immersive experience
00:42:26.080 | to where you feel like that is the real world.
00:42:28.240 | - So you think there's a big gap
00:42:29.280 | between appearing conscious and being conscious?
00:42:33.120 | Or is it that you think it's very easy to be conscious?
00:42:36.120 | - I'm not actually sure what it means to be conscious.
00:42:37.960 | All I'm saying is the illusion of consciousness
00:42:41.640 | is enough to create a social interaction
00:42:47.600 | that's as good as if the thing was conscious.
00:42:50.640 | Meaning I'm making it about myself.
00:42:52.600 | - Right, yeah.
00:42:53.440 | I mean, I guess there are a few different things.
00:42:55.000 | One is how good the interaction is,
00:42:56.440 | which might, I mean, if you don't really care about,
00:42:59.600 | like probing hard for whether the thing is conscious,
00:43:02.280 | maybe it would be a satisfactory interaction.
00:43:07.800 | Whether or not you really thought it was conscious.
00:43:10.360 | Now, if you really care about it being conscious
00:43:15.600 | in like inside this experience machine,
00:43:18.240 | how easy would it be to fake it?
00:43:22.240 | And you say it sounds fairly easy.
00:43:24.640 | But then the question is,
00:43:26.360 | would that also mean it's very easy
00:43:28.440 | to instantiate consciousness?
00:43:30.360 | Like it's much more widely spread in the world
00:43:33.160 | and we have thought it doesn't require a big human brain
00:43:36.080 | with a hundred billion neurons.
00:43:37.120 | All you need is some system that exhibits
00:43:39.600 | basic intentionality and can respond
00:43:41.600 | and you already have consciousness.
00:43:43.160 | Like in that case, I guess you still have a close coupling.
00:43:46.160 | I guess the case would be where they can come apart,
00:43:51.160 | where you could create the appearance
00:43:54.640 | of there being a conscious mind
00:43:56.080 | without there actually being another conscious mind.
00:43:59.240 | I'm, yeah, I'm somewhat agnostic exactly
00:44:02.400 | where these lines go.
00:44:03.360 | I think one observation that makes it plausible,
00:44:06.680 | that you could have very realistic appearances
00:44:11.080 | relatively simply,
00:44:13.440 | which also is relevant for the simulation argument.
00:44:17.200 | And in terms of thinking about how realistic
00:44:20.840 | would a virtual reality model have to be
00:44:23.680 | in order for the simulated creature
00:44:25.560 | not to notice that anything was awry.
00:44:27.840 | Well, just think of our own humble brains
00:44:32.080 | during the wee hours of the night when we are dreaming.
00:44:35.000 | Many times, well, dreams are very immersive
00:44:38.680 | but often you also don't realize that you're in a dream.
00:44:41.520 | And that's produced by simple, primitive,
00:44:46.640 | three pound lumps of neural matter effortlessly.
00:44:51.080 | So if a simple brain like this can create
00:44:53.560 | the virtual reality that seems pretty real to us,
00:44:58.480 | then how much easier would it be
00:45:00.640 | for a super intelligent civilization
00:45:02.520 | with planetary sized computers optimized over the eons
00:45:05.920 | to create a realistic environment for you to interact with?
00:45:10.920 | - Yeah, by the way, behind that intuition
00:45:14.360 | is that our brain is not that impressive
00:45:17.280 | relative to the possibilities
00:45:19.040 | of what technology could bring.
00:45:21.120 | It's also possible that the brain is the epitome,
00:45:24.480 | is the ceiling.
00:45:25.680 | Like just because-- - The ceiling.
00:45:28.120 | How is that possible?
00:45:30.640 | - Meaning like this is the smartest possible thing
00:45:34.000 | that the universe could create.
00:45:36.040 | - So that seems-- - Unlikely.
00:45:39.000 | - Unlikely to me, yeah.
00:45:39.960 | I mean, for some of these reasons we alluded to earlier
00:45:43.920 | in terms of designs we already have for computers
00:45:48.920 | that would be faster by many orders of magnitude
00:45:53.920 | than the human brain.
00:45:55.120 | - Yeah, but it could be that the constraints,
00:45:58.000 | the cognitive constraints in themselves
00:46:00.160 | is what enables the intelligence.
00:46:02.280 | So the more powerful you make the computer,
00:46:05.360 | the less likely it is to become super intelligent.
00:46:07.920 | This is where I say dumb things to push back on.
00:46:12.080 | - Well, yeah, I'm not sure I follow you.
00:46:14.160 | No, I mean, so there are different dimensions
00:46:15.800 | of intelligence.
00:46:16.840 | A simple one is just speed.
00:46:20.000 | Like if you could solve the same challenge faster
00:46:22.840 | in some sense, you're smarter.
00:46:25.240 | So there, I think we have very strong evidence
00:46:28.840 | for thinking that you could have a computer
00:46:31.680 | in this universe that would be much faster
00:46:34.680 | than the human brain and therefore have speed
00:46:37.880 | super intelligence, like be completely superior,
00:46:39.840 | maybe a million times faster.
00:46:41.400 | Then maybe there are other ways
00:46:43.960 | in which you could be smarter as well,
00:46:45.720 | maybe more qualitative ways, right?
00:46:48.520 | And there, the concepts are a little bit less clear cut.
00:46:51.720 | So it's harder to make a very crisp, neat,
00:46:56.160 | firmly logical argument for why that could be
00:47:00.200 | qualitative super intelligence as opposed to just things
00:47:02.640 | that were faster.
00:47:03.480 | Although I still think it's very plausible
00:47:06.000 | and for various reasons that are less
00:47:07.880 | than watertight arguments.
00:47:09.080 | But I mean, you can sort of, for example,
00:47:10.360 | if you look at animals and-
00:47:12.120 | - Brain cells.
00:47:12.960 | - Yeah, and even within humans,
00:47:14.520 | like there seems to be like Einstein versus random person.
00:47:18.520 | Like it's not just that Einstein was a little bit faster,
00:47:21.920 | but like how long would it take a normal person
00:47:23.960 | to invent general relativity?
00:47:26.480 | It's like, it's not 20% longer than it took Einstein
00:47:29.200 | or something like that.
00:47:30.040 | It's like, I don't know whether they would do it at all
00:47:31.840 | or it would take millions of years or something totally bizarre.
00:47:36.840 | - But your intuition is that the compute size
00:47:39.080 | will get you, increasing the size of the computer
00:47:43.360 | and the speed of the computer
00:47:45.520 | might create some much more powerful levels of intelligence
00:47:49.600 | that would enable some of the things
00:47:51.160 | we've been talking about with like the simulation,
00:47:53.320 | being able to simulate an ultra realistic environment,
00:47:56.760 | ultra realistic perception of reality.
00:48:01.280 | - Yeah, strictly speaking, it would not be necessary
00:48:04.160 | to have super intelligence in order to have say,
00:48:06.640 | the technology to make these simulations,
00:48:09.160 | ancestor simulations or other kinds of simulations.
00:48:11.720 | As a matter of fact, I think if we are in a simulation,
00:48:19.160 | it would most likely be one built by a civilization
00:48:22.520 | that had super intelligence.
00:48:23.960 | It certainly would help a lot.
00:48:27.640 | I mean, it could build more efficient,
00:48:28.960 | larger scale structures if you had super intelligence.
00:48:31.360 | I also think that if you had the technology
00:48:33.120 | to build these simulations,
00:48:34.160 | that's like a very advanced technology.
00:48:35.920 | It seems kind of easier to get technology
00:48:38.200 | to super intelligence.
00:48:39.360 | So I'd expect by the time they could make these
00:48:42.840 | fully realistic simulations of human history
00:48:45.360 | with human brains in there, like before that,
00:48:47.680 | they got to that stage, they would have figured out
00:48:49.400 | how to create machine super intelligence
00:48:52.520 | or maybe biological enhancements of their own brains
00:48:56.120 | if they were biological creatures to start with.
00:48:59.040 | - So we talked about the three parts
00:49:02.880 | of the simulation argument.
00:49:04.360 | One, we destroy ourselves before we ever create
00:49:06.480 | the simulation.
00:49:07.520 | Two, we somehow, everybody somehow loses interest
00:49:11.440 | in creating simulation.
00:49:12.440 | Three, we're living in a simulation.
00:49:16.080 | So you've kind of, I don't know if your thinking
00:49:20.120 | has evolved on this point, but you kind of said
00:49:22.040 | that we know so little that these three cases
00:49:25.760 | might as well be equally probable.
00:49:28.160 | So probabilistically speaking,
00:49:30.040 | where do you stand on this?
00:49:31.720 | - Yeah, I mean, I don't think equal necessarily
00:49:34.880 | would be the most supported probability assignment.
00:49:39.880 | - So how would you, without assigning actual numbers,
00:49:44.160 | what's more or less likely in your view?
00:49:48.000 | - Well, I mean, I've historically tended to punt
00:49:50.120 | on the question of like as between these three.
00:49:55.120 | - So maybe you ask another way is which kind of things
00:49:59.600 | would make each of these more or less likely?
00:50:02.960 | What kind of, yeah, intuition.
00:50:04.960 | - Certainly in general terms, if you take anything
00:50:07.480 | that say increases or reduces the probability
00:50:10.960 | of one of these, we tend to slosh probability around
00:50:15.960 | on the other.
00:50:17.280 | So if one becomes less probable, like the other
00:50:19.160 | would have to, 'cause it's gotta add up to one.
00:50:21.520 | - Yes.
00:50:22.360 | - So if we consider the first hypothesis,
00:50:25.080 | the first alternative that there's this filter
00:50:29.040 | that makes it so that virtually no civilization
00:50:32.840 | reaches technical maturity, in particular,
00:50:37.840 | our own civilization.
00:50:39.960 | And if that's true, then it's like very unlikely
00:50:41.880 | that we would reach technical maturity,
00:50:44.280 | just because if almost no civilization at our stage does it,
00:50:46.760 | then it's unlikely that we do it.
00:50:49.040 | So hence--
00:50:49.880 | - Sorry, can you linger on that for a second?
00:50:51.240 | - Well, if it's the case that almost all civilizations
00:50:53.920 | at our current stage of technological development
00:50:58.920 | fail to reach maturity, that would give us
00:51:03.640 | very strong reason for thinking we will fail
00:51:06.000 | to reach technical maturity.
00:51:07.760 | - And also sort of the flip side of that is the fact
00:51:10.280 | that we've reached it means that many other civilizations
00:51:13.200 | have reached this point.
00:51:14.040 | - Yeah, so that means if we get closer and closer
00:51:15.720 | to actually reaching technical maturity,
00:51:18.280 | there's less and less distance left where we could
00:51:22.600 | go extinct before we are there.
00:51:25.040 | And therefore the probability that we will reach
00:51:28.720 | increases as we get closer.
00:51:30.880 | And that would make it less likely to be true
00:51:32.840 | that almost all civilizations at our current stage
00:51:35.880 | failed to get there.
00:51:37.080 | Like we would have this one case, ourselves,
00:51:41.120 | being very close to getting there.
00:51:42.680 | That would be strong evidence that it's not so hard
00:51:44.480 | to get to technical maturity.
00:51:46.320 | So to the extent that we feel we are moving nearer
00:51:50.520 | to technical maturity, that would tend to reduce
00:51:53.560 | the probability of the first alternative
00:51:56.280 | and increase the probability of the other two.
00:51:59.080 | It doesn't need to be a monotonic change.
00:52:01.800 | Like if every once in a while some new threat
00:52:05.040 | comes into view, some bad new thing you could do
00:52:07.640 | with some novel technology, for example,
00:52:11.160 | that could change our probabilities in the other direction.
00:52:14.880 | - But that technology, again, you have to think about
00:52:17.800 | as that technology has to be able to equally
00:52:21.640 | in an even way affect every civilization out there.
00:52:26.200 | - Yeah, pretty much.
00:52:27.920 | I mean, strictly speaking, it's not true.
00:52:30.720 | I mean, there could be two different existential risks
00:52:33.920 | and every civilization, you know,
00:52:36.160 | - As in, one of them? - Dies from one or the other.
00:52:38.440 | But none of them kills more than 50%.
00:52:41.840 | - Yeah, gotcha.
00:52:42.920 | - Incidentally, so some of my other work,
00:52:47.280 | I mean, on machine superintelligence,
00:52:48.920 | like pointed to some existential risks
00:52:51.240 | related to sort of superintelligent AI
00:52:53.800 | and how we must make sure to handle that wisely
00:52:58.200 | and carefully.
00:52:59.480 | It's not the right kind of existential catastrophe
00:53:04.000 | to make the first alternative true though.
00:53:09.000 | Like it might be bad for us
00:53:12.040 | if the future lost a lot of value
00:53:13.760 | as a result of it being shaped by some process
00:53:17.560 | that optimized for some completely non-human value.
00:53:20.880 | But even if we got killed by machine superintelligence
00:53:25.360 | that machine superintelligence
00:53:27.480 | might still attain technological maturity.
00:53:30.000 | So-- - Oh, I see.
00:53:30.840 | So you're not human exclusive.
00:53:33.400 | This could be any intelligent species that achieves,
00:53:36.840 | like it's all about the technological maturity.
00:53:38.800 | It's not that the humans have to attain it.
00:53:42.800 | - Right. - So like superintelligence
00:53:44.800 | 'cause it replaced us and that's just as well
00:53:46.440 | for the simulation argument. - And that could still,
00:53:47.640 | yeah, yeah, I mean,
00:53:48.480 | it could interact with the second alternative.
00:53:51.640 | Like if the thing that replaced us
00:53:53.240 | was either more likely or less likely,
00:53:55.760 | than we would be to have an interest
00:53:57.880 | in creating ancestor simulations,
00:54:00.000 | you know, that could affect probabilities.
00:54:02.680 | But yeah, to a first order,
00:54:04.240 | like if we all just die, then yeah,
00:54:07.840 | we won't produce any simulations 'cause we are dead.
00:54:11.720 | But if we all die and get replaced
00:54:14.720 | by some other intelligent thing
00:54:15.960 | that then gets to technological maturity,
00:54:18.520 | the question remains, of course,
00:54:19.800 | whether that thing might not then use some of its resources
00:54:22.600 | to do this stuff.
00:54:25.120 | - So can you reason about this stuff?
00:54:27.560 | This is given how little we know about the universe.
00:54:30.600 | Is it reasonable to reason about these probabilities?
00:54:35.600 | So like how little, well, maybe you can disagree,
00:54:41.000 | but to me, it's not trivial to figure out
00:54:45.160 | how difficult it is to build a simulation.
00:54:47.400 | We kind of talked about it a little bit.
00:54:49.520 | We also don't know, like as we try to start building it,
00:54:54.240 | like start creating virtual worlds and so on,
00:54:56.920 | how that changes the fabric of society.
00:54:59.560 | Like there's all these things along the way
00:55:01.560 | that can fundamentally change just so many aspects
00:55:05.040 | of our society about our existence
00:55:07.320 | that we don't know anything about.
00:55:09.200 | Like the kind of things we might discover
00:55:11.720 | when we understand to a greater degree
00:55:15.800 | the fundamental physics,
00:55:19.280 | like if we have a breakthrough,
00:55:21.760 | a theory of everything, how that changes things,
00:55:23.840 | how that changes deep space exploration and so on.
00:55:27.480 | So like, is it still possible to reason about probabilities
00:55:31.360 | given how little we know?
00:55:32.600 | - Yes, I think though there will be a large residual
00:55:37.800 | of uncertainty that we'll just have to acknowledge.
00:55:41.800 | And I think that's true for most of these big picture
00:55:46.240 | questions that we might wonder about.
00:55:49.680 | It's just, we are small, short-lived, small-brained,
00:55:54.680 | cognitively very limited humans with little evidence.
00:55:59.000 | And it's amazing we can figure out as much as we can
00:56:03.000 | really about the cosmos.
00:56:04.600 | - But, okay, so there's this cognitive trick
00:56:08.960 | that seems to happen when I look at the simulation argument,
00:56:11.840 | which for me, it seems like case one and two feel unlikely.
00:56:16.440 | I wanna say feel unlikely as opposed to
00:56:19.440 | sort of, it's not like I have too much scientific evidence
00:56:23.000 | to say that either one or two are not true.
00:56:26.920 | It just seems unlikely that every single civilization
00:56:30.440 | destroys itself.
00:56:32.240 | And it seems, like feels unlikely
00:56:34.920 | that the civilizations lose interest.
00:56:37.000 | So naturally, without necessarily explicitly doing it,
00:56:42.000 | but the simulation argument basically says,
00:56:45.600 | it's very likely we're living in a simulation.
00:56:48.780 | Like to me, my mind naturally goes there.
00:56:51.800 | I think the mind goes there for a lot of people.
00:56:54.720 | Is that the incorrect place for it to go?
00:56:57.720 | - Well, not necessarily.
00:56:59.160 | I think the second alternative,
00:57:00.860 | which has to do with the motivations and interest
00:57:07.560 | of technologically mature civilizations,
00:57:11.000 | I think there is much we don't understand about that.
00:57:15.600 | - Yeah, can you talk about that a little bit?
00:57:18.320 | What do you think?
00:57:19.160 | I mean, this is a question that pops up
00:57:20.280 | when you build an AGI system
00:57:22.480 | or build a general intelligence.
00:57:24.200 | How does that change our motivations?
00:57:27.840 | Do you think it'll fundamentally transform our motivations?
00:57:31.520 | - Well, it doesn't seem that implausible
00:57:33.160 | that once you take this leap to technological maturity,
00:57:38.160 | I mean, I think like it involves creating
00:57:41.840 | machine superintelligence possibly,
00:57:45.040 | that would be sort of on the path for basically
00:57:48.080 | all civilizations maybe before they are able
00:57:50.880 | to create large numbers of ancestor simulations.
00:57:53.640 | That possibly could be one of these things
00:57:56.520 | that quite radically changes the orientation
00:58:00.680 | of what a civilization is in fact optimizing for.
00:58:04.720 | There are other things as well.
00:58:08.560 | So at the moment we have not perfect control
00:58:16.040 | over our own being, our own mental states,
00:58:20.080 | our own experiences are not under our direct control.
00:58:23.660 | So for example, if you want to experience a pleasure
00:58:30.140 | and happiness, you might have to do a whole host of things
00:58:35.800 | in the external world to try to get into the state,
00:58:39.260 | into the mental state where you experience pleasure.
00:58:42.320 | You're like, like some people get some pleasure
00:58:43.880 | from eating great food.
00:58:44.840 | Well, they can't just turn that on.
00:58:47.040 | They have to kind of actually go to a nice restaurant
00:58:50.000 | and then they have to make money.
00:58:51.320 | So there's like all this kind of activity
00:58:53.100 | that maybe arises from the fact that we are trying
00:58:58.100 | to ultimately produce mental states.
00:59:02.000 | But the only way to do that is by a whole host
00:59:04.280 | of complicated activities in the external world.
00:59:06.920 | Now, at some level of technological development,
00:59:09.320 | I think we'll become autopotent in the sense
00:59:11.640 | of gaining direct ability to choose
00:59:15.440 | our own internal configuration and enough knowledge
00:59:19.000 | and insight to be able to actually do that
00:59:21.200 | in a meaningful way.
00:59:22.680 | So then it could turn out that there are a lot
00:59:24.880 | of instrumental goals that would drop out of the picture
00:59:29.360 | and be replaced by other instrumental goals
00:59:31.480 | because we could now serve some of these final goals
00:59:35.720 | in more direct ways.
00:59:36.960 | And who knows how all of that shakes out
00:59:41.240 | after civilizations reflect on that and converge
00:59:45.640 | on different attractors and so on and so forth.
00:59:48.040 | And there could be new instrumental considerations
00:59:54.520 | that come into view as well that we are just oblivious to
00:59:57.800 | that would maybe have a strong shaping effect on actions,
01:00:02.800 | like very strong reasons to do something
01:00:05.440 | or not to do something.
01:00:06.440 | And we just don't realize they are there
01:00:08.280 | because we are so dumb, fumbling through the universe.
01:00:11.000 | But if almost inevitably on route to attaining the ability
01:00:15.920 | to create many ancestor simulations,
01:00:17.480 | you do have this cognitive enhancement or advice
01:00:20.920 | from super intelligences or you yourself,
01:00:22.960 | then maybe there's like this additional set
01:00:24.960 | of considerations coming into view.
01:00:26.320 | And you have to realize it's obvious that the thing
01:00:28.720 | that makes sense is to do X.
01:00:30.680 | Whereas right now it seems, hey, you could X, Y or Z
01:00:32.960 | and different people will do different things.
01:00:34.640 | And we are kind of random in that sense.
01:00:38.920 | - Yeah, because at this time with our limited technology,
01:00:42.880 | the impact of our decisions is minor.
01:00:45.200 | I mean, that's starting to change in some ways, but-
01:00:49.360 | - Well, I'm not sure it follows that the impact
01:00:52.360 | of our decisions is minor.
01:00:54.440 | - Well, it's starting to change.
01:00:55.640 | I mean, I suppose a hundred years ago it was minor.
01:00:58.600 | It's starting to-
01:01:00.560 | - So it depends on how you view it.
01:01:02.560 | So what people did a hundred years ago
01:01:06.000 | still have effects on the world today.
01:01:09.080 | - Oh, I see.
01:01:10.440 | As a civilization in the togetherness.
01:01:14.360 | - Yeah, so it might be that the greatest impact
01:01:18.080 | of individuals is not at technological maturity
01:01:21.920 | or very far down.
01:01:22.840 | It might be earlier on when there are different tracks,
01:01:25.920 | civilization could go down.
01:01:28.000 | I mean, maybe the population is smaller.
01:01:30.640 | Things still haven't settled out.
01:01:32.360 | If you count the indirect effects
01:01:35.120 | that those could be bigger than the direct effects
01:01:40.120 | that people have later on.
01:01:43.240 | - So part three of the argument says that,
01:01:46.200 | so that leads us to a place where eventually
01:01:50.120 | somebody creates a simulation.
01:01:51.820 | That I think you had a conversation with Joe Rogan.
01:01:55.520 | I think there's some aspect here
01:01:57.320 | where you got stuck a little bit.
01:02:01.040 | How does that lead to where likely living in a simulation?
01:02:06.040 | So this kind of probability argument,
01:02:10.360 | if somebody eventually creates a simulation,
01:02:12.600 | why does that mean that we're now in a simulation?
01:02:15.600 | - What you get to if you accept alternative three first
01:02:18.920 | is there would be more simulated people
01:02:23.320 | with our kinds of experiences than non simulated ones.
01:02:26.320 | Like if you look at the world as a whole
01:02:31.320 | by the end of time as it were, you just count it up.
01:02:34.960 | That would be more simulated ones than non simulated ones.
01:02:39.440 | Then there is an extra step to get from that.
01:02:43.120 | If you assume that suppose for the sake of the argument
01:02:45.160 | that that's true, how do you get from that
01:02:49.320 | to the statement we are probably in a simulation?
01:02:55.320 | So here you're introducing an indexical statement
01:02:57.600 | like it's that this person right now is in a simulation.
01:03:02.600 | There are all these other people
01:03:06.200 | that are in simulations
01:03:08.040 | and some that are not in a simulation.
01:03:09.840 | But what probability should you have that you yourself
01:03:14.240 | are one of the simulated ones in that setup?
01:03:18.240 | So yeah, so I call it the bland principle of indifference,
01:03:21.560 | which is that in cases like this,
01:03:25.920 | when you have two, I guess, sets of observers,
01:03:29.040 | one of which is much larger than the other.
01:03:33.920 | And you can't from any internal evidence you have
01:03:37.800 | tell which set you belong to.
01:03:40.720 | You should assign a probability that's proportional
01:03:45.720 | to the size of these sets.
01:03:48.160 | So that if there are 10,000 people in a simulation,
01:03:51.440 | 10 times more simulated people
01:03:53.680 | with your kinds of experiences,
01:03:55.200 | you would be 10 times more likely to be one of those.
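(A minimal numerical sketch of the bland principle of indifference described above. The observer counts, the 10,000-to-1,000 split, and the function name credence_simulated are illustrative assumptions chosen to match the ten-to-one ratio in the conversation, not figures from Bostrom's papers.)

    # Bland principle of indifference: with no internal evidence distinguishing the
    # two classes of observers, assign credence proportional to class size.
    # The counts below are illustrative assumptions only.

    def credence_simulated(n_simulated: int, n_non_simulated: int) -> float:
        """P(I am simulated) = |simulated| / (|simulated| + |non-simulated|)."""
        return n_simulated / (n_simulated + n_non_simulated)

    # Ten times more simulated observers than non-simulated ones:
    print(credence_simulated(10_000, 1_000))  # ~0.909, i.e. 10-to-1 odds of being simulated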
01:03:58.480 | - Is that as intuitive as it sounds?
01:04:00.680 | I mean, that seems kind of,
01:04:03.360 | if you don't have enough information,
01:04:04.800 | you should rationally just assign a probability
01:04:07.840 | proportional to the size of the set.
01:04:10.840 | - It seems pretty plausible to me.
01:04:15.720 | - Where are the holes in this?
01:04:17.000 | Is it at the very beginning,
01:04:19.720 | the assumption that everything stretches,
01:04:22.360 | sort of you have infinite time, essentially?
01:04:25.360 | - You don't need infinite time.
01:04:26.880 | - You just need, how long does the time--
01:04:29.800 | - Well, however long it takes, I guess,
01:04:31.400 | for a universe to produce an intelligent civilization
01:04:35.840 | that then attains the technology
01:04:37.160 | to run some ancestor simulations.
01:04:38.960 | - Gotcha.
01:04:39.800 | At some point, when the first simulation is created,
01:04:43.040 | that stretch of time, just a little longer
01:04:45.560 | then they'll all start creating simulations.
01:04:48.160 | Kind of like order of magnitude.
01:04:49.000 | - Well, I mean, there might be different,
01:04:51.120 | it might, if you think of there being
01:04:53.880 | a lot of different planets
01:04:54.920 | and some subset of them have life,
01:04:57.720 | and then some subset of those get to intelligent life,
01:05:00.960 | and some of those maybe eventually
01:05:03.080 | start creating simulations,
01:05:05.000 | they might get started at quite different times.
01:05:07.360 | Like maybe on some planet, it takes a billion years longer
01:05:10.360 | before you get monkeys, or before you get even bacteria,
01:05:15.360 | than on another planet.
01:05:17.560 | So this might happen kind of
01:05:21.680 | at different cosmological epochs.
01:05:24.880 | - Is there a connection here to the doomsday argument
01:05:27.360 | and that sampling there?
01:05:29.320 | - Yeah, there is a connection in that
01:05:32.080 | they both involve an application of anthropic reasoning,
01:05:36.960 | that is reasoning about these kind of indexical propositions.
01:05:40.960 | But the assumption you need
01:05:42.640 | in the case of the simulation argument
01:05:47.000 | is much weaker than the assumption you need
01:05:50.080 | to make the doomsday argument go through.
01:05:53.560 | - What is the doomsday argument?
01:05:55.040 | And maybe you can speak to the anthropic reasoning
01:05:57.880 | in more general.
01:05:58.920 | - Yeah, that's a big and interesting topic
01:06:01.240 | in its own right, anthropics.
01:06:02.960 | But the doomsday argument was really first discovered
01:06:07.920 | by Brandon Carter, who was a theoretical physicist
01:06:11.160 | and then developed by philosopher John Leslie.
01:06:16.720 | I think it might've been discovered initially
01:06:18.240 | in the '70s or '80s, and Leslie wrote this book,
01:06:21.080 | I think, in '96.
01:06:23.120 | And there are some other versions as well
01:06:25.640 | by Richard Gott, who's a physicist,
01:06:27.280 | but let's focus on the Carter-Leslie version,
01:06:29.520 | where it's an argument
01:06:33.440 | that we have systematically underestimated
01:06:38.400 | the probability that humanity will go extinct soon.
01:06:44.040 | Now, I should say most people probably think
01:06:47.720 | at the end of the day, there is something wrong
01:06:49.120 | with this doomsday argument that it doesn't really hold.
01:06:52.080 | It's like there's something wrong with it,
01:06:53.480 | but it's proved hard to say exactly what is wrong with it.
01:06:57.200 | And different people have different accounts.
01:06:59.440 | My own view is it seems inconclusive.
01:07:03.560 | And I can say what the argument is.
01:07:06.600 | - Yeah, that would be good.
01:07:07.640 | - Yeah, so maybe it's easiest to explain
01:07:09.960 | via an analogy to sampling from urns.
01:07:14.960 | So imagine you have a big,
01:07:19.000 | imagine you have two urns in front of you,
01:07:22.760 | and they have balls in them that have numbers.
01:07:25.160 | The two urns look the same,
01:07:27.840 | but inside one, there are 10 balls,
01:07:29.800 | ball number one, two, three, up to ball number 10.
01:07:32.360 | And then in the other urn, you have a million balls
01:07:37.120 | numbered one to a million.
01:07:40.360 | And somebody puts one of these urns in front of you
01:07:44.240 | and ask you to guess what's the chance it's the 10 ball urn.
01:07:49.000 | And you say, well, 50-50, I can't tell which urn it is.
01:07:52.000 | But then you're allowed to reach in
01:07:55.320 | and pick a ball at random from the urn.
01:07:57.800 | And let's suppose you find that it's ball number seven.
01:08:00.560 | So that's strong evidence for the 10 ball hypothesis.
01:08:05.400 | Like it's a lot more likely that you would get
01:08:08.360 | such a low numbered ball
01:08:10.640 | if there are only 10 balls in the urn,
01:08:11.880 | like it's in fact 10% then, right?
01:08:13.680 | Then if there are a million balls,
01:08:16.560 | it would be very unlikely you would get number seven.
01:08:19.520 | So you perform a Bayesian update.
01:08:22.520 | And if your prior was 50-50 that it was the 10 ball urn,
01:08:27.120 | you become virtually certain
01:08:28.280 | after finding the random sample was seven
01:08:30.800 | that it only has 10 balls in it.
01:08:33.200 | So in the case of the urns, this is uncontroversial,
01:08:35.200 | just elementary probability theory.
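(A worked version of the two-urn update just described, as a small sketch; the 50-50 prior, the 10-ball and million-ball urns, and ball number seven are taken from the conversation, and the arithmetic is just Bayes' rule.)

    # Two urns: one with 10 balls, one with 1,000,000 balls, prior 50/50.
    # A random draw turns out to be ball number 7; update on that evidence.
    prior_ten = 0.5
    prior_million = 0.5
    likelihood_ten = 1 / 10             # chance of drawing ball #7 from the 10-ball urn
    likelihood_million = 1 / 1_000_000  # chance of drawing ball #7 from the million-ball urn

    posterior_ten = (prior_ten * likelihood_ten) / (
        prior_ten * likelihood_ten + prior_million * likelihood_million
    )
    print(posterior_ten)  # ~0.99999 -- virtually certain it's the 10-ball urn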
01:08:37.360 | The Doomsday Argument says that you should reason
01:08:40.360 | in a similar way with respect to different hypotheses
01:08:44.040 | about how many balls there will be in the urn of humanity.
01:08:49.040 | That is, for how many humans there will ever be
01:08:51.520 | by the time we go extinct.
01:08:52.880 | So to simplify, let's suppose we only consider
01:08:56.640 | two hypotheses, either maybe 200 billion humans in total
01:09:01.440 | or 200 trillion humans in total.
01:09:05.600 | You could fill in more hypotheses,
01:09:07.320 | but it doesn't change the principle here.
01:09:09.280 | So it's easiest to see if we just consider these two.
01:09:12.040 | So you start with some prior
01:09:13.320 | based on ordinary empirical ideas
01:09:15.880 | about threats to civilization and so forth.
01:09:18.800 | And maybe you say it's a 5% chance that we will go extinct
01:09:22.520 | by the time there will have been 200 billion only.
01:09:25.440 | You're kind of optimistic, let's say.
01:09:27.120 | You think probably we'll make it through,
01:09:28.800 | colonize the universe.
01:09:30.000 | But then according to this Doomsday Argument,
01:09:34.400 | you should take off your own birth rank
01:09:37.000 | as a random sample.
01:09:40.080 | So your birth rank is your position in the sequence
01:09:43.160 | of all humans that have ever existed.
01:09:47.680 | It turns out you're about human number 100 billion,
01:09:51.680 | you know, give or take.
01:09:52.520 | That's like roughly how many people
01:09:54.000 | have been born before you.
01:09:55.280 | - That's fascinating 'cause I probably,
01:09:57.440 | we each have a number.
01:09:59.080 | - We would each have a number in this.
01:10:01.160 | I mean, obviously the exact number would depend
01:10:04.000 | on where you started counting,
01:10:05.280 | like which ancestors was human enough to count as human.
01:10:08.920 | But those are not really important.
01:10:10.960 | They're relatively few of them.
01:10:12.880 | So yeah, so you're roughly 100 billion.
01:10:16.080 | Now, if they're only gonna be 200 billion in total,
01:10:18.520 | that's a perfectly unremarkable number.
01:10:20.960 | You're somewhere in the middle, right?
01:10:23.160 | It's run-of-the-mill human, completely unsurprising.
01:10:27.280 | Now, if they're gonna be 200 trillion,
01:10:28.880 | you would be remarkably early.
01:10:31.720 | Like, what are the chances?
01:10:33.880 | Out of these 200 trillion humans,
01:10:35.880 | that you should be human number 100 billion?
01:10:39.800 | That seems it would have a much lower
01:10:42.240 | conditional probability.
01:10:43.680 | And so analogously to how in the urn case,
01:10:47.560 | you thought after finding this low-numbered random sample,
01:10:51.920 | you updated in favor of the urn having few balls.
01:10:54.480 | Similarly, in this case,
01:10:56.160 | you should update in favor of the human species
01:10:59.320 | having a lower total number of members.
01:11:02.400 | That is doom soon.
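(The same kind of update applied to the doomsday example above, as a sketch under the assumptions stated in the conversation: a 5%/95% prior over 200 billion versus 200 trillion total humans, and treating your roughly 100-billionth birth rank as a uniform random draw from all humans who will ever live, which is the self-sampling assumption discussed just below.)

    # Carter-Leslie style doomsday update. Priors and hypotheses follow the example
    # in the conversation; the uniform-rank likelihood encodes the self-sampling assumption.
    prior_doom_soon = 0.05   # hypothesis: 200 billion humans in total
    prior_doom_late = 0.95   # hypothesis: 200 trillion humans in total

    # A birth rank of ~100 billion is possible under both hypotheses; under a
    # uniform draw the likelihood of any particular rank is 1/N for total size N.
    likelihood_soon = 1 / 200e9
    likelihood_late = 1 / 200e12

    posterior_doom_soon = (prior_doom_soon * likelihood_soon) / (
        prior_doom_soon * likelihood_soon + prior_doom_late * likelihood_late
    )
    print(round(posterior_doom_soon, 3))  # ~0.981 -- the striking shift toward "doom soon"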
01:11:04.160 | - You said doom soon?
01:11:05.600 | That's the- - Yeah.
01:11:06.800 | Well, that would be the hypothesis in this case,
01:11:09.120 | that it will end after 100 billion.
01:11:11.680 | - I just like that term for that hypothesis, yeah.
01:11:14.200 | - So what it kind of crucially relies on,
01:11:17.240 | the doomsday argument,
01:11:18.080 | is the idea that you should reason
01:11:21.680 | as if you were a random sample
01:11:23.840 | from the set of all humans that will ever have existed.
01:11:27.400 | If you have that assumption,
01:11:28.400 | then I think the rest kind of follows.
01:11:30.960 | The question then is why should you make that assumption?
01:11:34.240 | In fact, you know you're number 100 billion,
01:11:36.000 | so where do you get this prior?
01:11:38.560 | And then there's like a literature on that
01:11:40.360 | with different ways of supporting that assumption.
01:11:43.680 | - That's just one example of anthropic reasoning, right?
01:11:48.080 | That seems to be kind of convenient
01:11:49.880 | when you think about humanity.
01:11:52.520 | When you think about sort of even like existential threats
01:11:55.840 | and so on, it seems quite natural
01:12:00.240 | that you should assume that you're just an average case.
01:12:03.040 | - Yeah, that you're a kind of typical or random sample.
01:12:08.160 | Now in the case of the doomsday argument,
01:12:09.600 | it seems to lead to what intuitively we think
01:12:12.280 | is the wrong conclusion,
01:12:13.440 | or at least many people have this reaction,
01:12:15.920 | that there's gotta be something fishy about this argument,
01:12:19.040 | because from very, very weak premises,
01:12:21.560 | it gets this very striking implication
01:12:24.960 | that we have almost no chance
01:12:27.000 | of reaching size 200 trillion humans in the future.
01:12:30.880 | And how could we possibly get there
01:12:33.040 | just by reflecting on when we were born?
01:12:35.440 | It seems you would need sophisticated arguments
01:12:37.480 | about the impossibility of space colonization, blah, blah.
01:12:40.480 | So one might be tempted to reject this key assumption,
01:12:43.680 | I call it the self-sampling assumption.
01:12:45.480 | The idea that you should reason
01:12:46.520 | as if you're a random sample from all observers
01:12:48.840 | or in some reference class.
01:12:51.480 | However, it turns out that in other domains,
01:12:56.600 | it looks like we need something
01:12:58.320 | like this self-sampling assumption
01:13:00.120 | to make sense of bona fide scientific inferences.
01:13:04.320 | In contemporary cosmology, for example,
01:13:06.920 | you have these multiverse theories.
01:13:09.120 | And according to a lot of those,
01:13:10.840 | all possible human observations are made.
01:13:14.920 | So I mean, if you have a sufficiently large universe,
01:13:17.480 | you will have a lot of people observing
01:13:18.880 | all kinds of different things.
01:13:20.320 | So if you have two competing theories,
01:13:23.800 | say about the value of some constant,
01:13:26.560 | it could be true according to both of these theories
01:13:32.200 | that there will be some observers observing the value
01:13:35.800 | that corresponds to the other theory
01:13:39.720 | because there will be some observers that have hallucinations
01:13:43.080 | or there's a local fluctuation
01:13:44.960 | or a statistically anomalous measurement,
01:13:47.520 | these things will happen.
01:13:49.200 | And if enough observers make enough different observations,
01:13:52.320 | there will be some that sort of by chance
01:13:53.960 | make these different ones.
01:13:55.760 | And so what we would wanna say is,
01:13:57.640 | well, many more observers,
01:14:02.640 | a larger proportion of the observers
01:14:04.840 | will observe as it were the true value.
01:14:06.800 | And a few will observe the wrong value.
01:14:10.720 | If we think of ourselves as a random sample,
01:14:12.600 | we should expect with high probability
01:14:15.040 | to observe the true value.
01:14:16.000 | And that will then allow us to conclude
01:14:19.200 | that the evidence we actually have
01:14:20.600 | is evidence for the theories we think are supported.
01:14:24.520 | It kind of then is a way of making sense
01:14:29.200 | of these inferences that clearly seem correct
01:14:31.800 | that we can make various observations
01:14:34.840 | and infer what the temperature of the cosmic background is
01:14:39.000 | and the fine structure constant and all of this.
01:14:42.600 | But it seems that without rolling in some assumption
01:14:46.640 | similar to the self-sampling assumption,
01:14:49.520 | this inference just doesn't go through.
01:14:51.560 | And there are other examples.
01:14:53.120 | So there are these scientific contexts
01:14:54.720 | where it looks like this kind of anthropic reasoning
01:14:56.760 | is needed and makes perfect sense.
01:14:59.080 | And yet in the case of the Doomsday Argument,
01:15:01.080 | it has this weird consequence
01:15:02.440 | and people might think there's something wrong with it there.
01:15:05.680 | So there's then this project that would consist
01:15:10.680 | in trying to figure out
01:15:13.200 | what are the legitimate ways of reasoning
01:15:15.880 | about these indexical facts
01:15:18.240 | when observer selection effects are in play.
01:15:20.400 | In other words, developing a theory of anthropics.
01:15:23.080 | And there are different views of looking at that.
01:15:25.920 | And it's a difficult methodological area.
01:15:29.120 | But to tie it back to the simulation argument,
01:15:33.960 | the key assumption there,
01:15:36.440 | this bland principle of indifference,
01:15:39.440 | is much weaker than the self-sampling assumption.
01:15:41.880 | So if you think about in the case of the Doomsday Argument,
01:15:47.400 | it says you should reason as if you're a random sample
01:15:49.600 | from all humans that will ever have lived,
01:15:51.120 | even though in fact, you know that you are
01:15:54.000 | about the 100 billionth human
01:15:57.360 | and you're alive in the year 2020.
01:15:59.680 | Whereas in the case of the simulation argument,
01:16:01.520 | it says that, well, if you actually have no way
01:16:04.560 | of telling which one you are,
01:16:05.720 | then you should assign this kind of uniform probability.
01:16:10.720 | - Yeah, yeah, your role as the observer
01:16:12.840 | in the simulation argument is different, it seems like.
01:16:15.680 | Like, who's the observer?
01:16:17.360 | I mean, I keep assigning the individual consciousness.
01:16:19.640 | - Yeah, I mean, well, there are a lot of observers
01:16:22.200 | in the simulation, in the context
01:16:24.120 | of the simulation argument.
01:16:25.160 | - But they're all observers.
01:16:26.000 | - The relevant observers would be, A,
01:16:27.840 | the people in original histories,
01:16:30.120 | and B, the people in simulations.
01:16:33.320 | So this would be the class of observers that we need.
01:16:35.960 | I mean, there are also maybe the simulators,
01:16:37.400 | but we can set those aside for this.
01:16:40.160 | So the question is, given that class of observers,
01:16:43.080 | a small set of original history observers
01:16:46.240 | and a large class of simulated observers,
01:16:48.560 | which one should you think is you?
01:16:51.120 | Where are you amongst this set of observers?
01:16:53.680 | - I'm maybe having a little bit of trouble
01:16:56.320 | wrapping my head around the intricacies
01:16:59.440 | of what it means to be an observer
01:17:01.120 | in the different instantiations
01:17:06.120 | of the anthropic reasoning cases that we mentioned.
01:17:09.360 | I mean, does it have to be--
01:17:11.400 | - It's like the observer, no, I mean,
01:17:12.600 | it may be an easier way of putting it.
01:17:14.760 | It's just like, are you simulated or are you not simulated?
01:17:18.160 | Given this assumption that these two groups of people exist.
01:17:21.280 | - Yeah, in the simulation case,
01:17:22.400 | it seems pretty straightforward.
01:17:24.600 | - Yeah, so the key point is the methodological assumption
01:17:28.840 | you need to make to get the simulation argument
01:17:32.480 | to where it wants to go is much weaker and less problematic
01:17:36.960 | than the methodological assumption you need to make
01:17:39.520 | to get the doomsday argument to its conclusion.
01:17:42.560 | Maybe the doomsday argument is sound or unsound,
01:17:46.680 | but you need to make a much stronger
01:17:48.000 | and more controversial assumption to make it go through.
01:17:52.040 | In the case of the simulation argument,
01:17:54.520 | I guess one way maybe to pump intuition
01:17:57.640 | to support this bland principle of indifference
01:18:00.880 | is to consider a sequence of different cases
01:18:05.400 | where the fraction of people who are simulated
01:18:08.720 | to non-simulated approaches one.
01:18:12.480 | So in the limiting case where everybody is simulated,
01:18:17.120 | obviously you can deduce with certainty
01:18:22.640 | that you are simulated.
01:18:23.840 | If everybody with your experiences is simulated
01:18:28.360 | and you know you've gotta be one of those,
01:18:30.880 | you don't need a probability at all.
01:18:32.400 | You just kind of logically conclude it, right?
01:18:35.640 | - Right.
01:18:36.480 | - So then as we move from a case where say
01:18:41.480 | 90% of everybody is simulated, to 99%, 99.9%,
01:18:46.480 | it should seem plausible that the probability assigned
01:18:50.920 | should sort of approach one certainty
01:18:54.720 | as the fraction approaches the case
01:18:57.600 | where everybody is in a simulation.
01:19:00.880 | - Yeah, that's exactly--
01:19:01.840 | - Like you wouldn't expect that to be discrete.
01:19:04.760 | Well, if there's one non-simulated person,
01:19:06.560 | then it's 50/50, but if we remove that one,
01:19:08.800 | then it's 100%.
01:19:09.680 | It's like it should kind of...
01:19:11.720 | There are other arguments as well one can use
01:19:14.800 | to support this bland principle of indifference,
01:19:16.800 | but that might be enough to--
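(A quick numerical illustration of the continuity point just made: holding a hypothetical non-simulated population fixed and growing the simulated one, the indifference-based credence rises smoothly toward certainty rather than jumping. The population sizes are arbitrary placeholders.)

    # Continuity check for the bland principle of indifference: credence of being
    # simulated equals the simulated fraction, so it approaches 1 smoothly.
    n_non_simulated = 1_000  # arbitrary fixed "original history" population
    for n_simulated in (1_000, 9_000, 99_000, 999_000):
        credence = n_simulated / (n_simulated + n_non_simulated)
        print(f"simulated fraction {credence:.4f} -> credence {credence:.4f}")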
01:19:19.480 | - But in general, when you start from time equals zero
01:19:22.440 | and go into the future, the fraction of simulated,
01:19:26.560 | if it's possible to create simulated worlds,
01:19:29.160 | the fraction of simulated worlds will go to one.
01:19:31.560 | - Well, it won't--
01:19:33.480 | - I mean, is that an obvious kind of thing?
01:19:35.200 | - Go all the way to one.
01:19:37.240 | - In reality, that would be some ratio,
01:19:40.720 | although maybe a technologically mature civilization
01:19:43.760 | could run a lot of simulations
01:19:47.400 | using a small portion of its resources.
01:19:50.040 | It probably wouldn't be able to run infinitely many.
01:19:53.160 | I mean, if we take, say, the observed,
01:19:56.840 | the physics in the observed universe,
01:19:58.680 | if we assume that that's also the physics
01:20:01.720 | at the level of the simulators,
01:20:03.560 | that would be limits to the amount
01:20:05.520 | of information processing that any one civilization
01:20:09.360 | could perform in its future trajectory.
01:20:12.480 | - Right, I mean--
01:20:15.680 | - Well, first of all, there's a limited amount of matter
01:20:17.800 | you can get your hands on
01:20:18.880 | because with a positive cosmological constant,
01:20:22.440 | the universe is accelerating.
01:20:24.440 | There's a finite sphere of stuff,
01:20:26.480 | even if you travel with the speed of light
01:20:27.960 | that you could ever reach,
01:20:28.800 | you have a finite amount of stuff.
01:20:31.440 | And then if you think there's a lower limit
01:20:34.440 | to the amount of loss you get
01:20:37.600 | when you perform an erasure of a computation,
01:20:40.240 | or if you think, for example,
01:20:41.360 | just matter gradually, over cosmological timescales,
01:20:45.000 | decays, maybe protons decay, other things,
01:20:48.120 | and you radiate out gravitational waves,
01:20:50.480 | like there's all kinds of seemingly unavoidable losses
01:20:54.840 | that occur.
01:20:55.840 | So eventually, we'll have something
01:20:59.240 | like a heat death of the universe
01:21:02.160 | or a cold death or whatever, but yeah.
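(For the "lower limit to the amount of loss" per erased bit mentioned above, the standard reference point is Landauer's principle, E >= k_B * T * ln 2 per irreversible bit erasure. The temperature below is just an illustrative choice near the cosmic microwave background; nothing here is a figure from the conversation.)

    import math

    # Landauer bound: minimum energy dissipated per irreversible bit erasure.
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 2.7             # illustrative operating temperature in kelvin (~CMB today)

    energy_per_erased_bit = k_B * T * math.log(2)
    print(energy_per_erased_bit)  # ~2.6e-23 joules per erased bit at this temperature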
01:21:04.320 | - So it's finite, but of course, we don't know which,
01:21:06.600 | if there's many ancestral simulations,
01:21:11.320 | we don't know which level we are.
01:21:13.600 | So that could be,
01:21:14.680 | couldn't there be like an arbitrary number of simulations
01:21:18.200 | that spawned ours,
01:21:19.880 | and those had more resources
01:21:22.680 | in terms of physical universe to work with?
01:21:26.640 | - Sorry, what do you mean that that could be?
01:21:28.880 | - So sort of, okay, so
01:21:32.680 | if simulations spawn other simulations,
01:21:37.440 | it seems like each new spawn has fewer resources
01:21:41.920 | to work with.
01:21:42.840 | But we don't know at which step along the way we are at.
01:21:49.080 | Any one observer doesn't know whether we're in level 42
01:21:54.600 | or 100 or one,
01:21:57.840 | or does that not matter for the resources?
01:21:59.940 | I mean--
01:22:02.080 | - I mean, it's true that there would be uncertainty as,
01:22:05.680 | you could have stacked simulations.
01:22:07.640 | - Yes, so.
01:22:08.480 | - And that could then be uncertainty
01:22:11.520 | as to which level we are at.
01:22:13.720 | As you remarked also,
01:22:17.480 | all the computations performed
01:22:21.280 | in a simulation within the simulation
01:22:24.680 | also have to be expended at the level of the simulation.
01:22:27.760 | - Right.
01:22:28.600 | - So the computer in basement reality
01:22:30.800 | where all the simulations within simulations
01:22:32.400 | within simulations are taking place,
01:22:33.680 | like that computer ultimately,
01:22:35.320 | it's CPU or whatever it is,
01:22:37.760 | like that has to power this whole tower, right?
01:22:39.880 | So if there is a finite compute power in basement reality,
01:22:44.280 | that would impose a limit to how tall this tower can be.
01:22:48.120 | And if each level kind of imposes a large extra overhead,
01:22:53.040 | you might think maybe the tower would not be very tall,
01:22:55.560 | that most people would be low down in the tower.
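(A small sketch of the "short tower" point: if each nested level can only devote some fraction of its compute to the simulation it runs, resources fall off geometrically with depth. All three numbers below are made-up illustrative assumptions, not estimates from the conversation.)

    # Stacked simulations: compute available at each level decays geometrically,
    # so the tower of simulations-within-simulations cannot be very tall.
    basement_compute = 1e40      # hypothetical compute budget in basement reality (arbitrary units)
    pass_down_fraction = 0.01    # hypothetical share each level spends on its child simulation
    min_compute_needed = 1e21    # hypothetical cost of running one human-history simulation

    depth = 0
    available = basement_compute
    while available * pass_down_fraction >= min_compute_needed:
        available *= pass_down_fraction
        depth += 1
    print(depth)  # 9 with these made-up numbers -- most observers sit low in a short tower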
01:23:00.720 | - I love the term basement reality.
01:23:03.000 | Let me ask, one of the popularizers,
01:23:06.560 | you said there's many through this,
01:23:08.600 | when you look at sort of the last few years
01:23:11.820 | of the simulation hypothesis,
01:23:13.420 | just like you said, it comes up every once in a while,
01:23:15.640 | some new community discovers it and so on.
01:23:17.880 | But I would say one of the biggest popularizers
01:23:20.000 | of this idea is Elon Musk.
01:23:21.640 | Do you have any kind of intuition
01:23:24.080 | about what Elon thinks about
01:23:26.280 | when he thinks about simulation?
01:23:27.760 | Why is this of such interest?
01:23:29.880 | Is it all the things we've talked about
01:23:32.000 | or is there some special kind of intuition
01:23:33.840 | about simulation that he has?
01:23:35.440 | - I mean, you might have a better,
01:23:37.480 | I think, I mean, why it's of interest,
01:23:39.220 | I think it's like seems fairly obvious
01:23:42.120 | why, to the extent that one thinks the argument is credible,
01:23:45.120 | why it would be of interest.
01:23:46.120 | It would, if it's correct,
01:23:47.720 | tell us something very important about the world
01:23:50.000 | in one way or the other,
01:23:50.880 | whichever of the three alternatives for a simulation
01:23:53.320 | that seems like arguably one
01:23:55.300 | of the most fundamental discoveries, right?
01:23:58.480 | Now, interestingly, in the case of somebody like Elon,
01:24:00.600 | so there's like the standard arguments
01:24:02.240 | for why you might wanna take the simulation hypothesis
01:24:04.760 | seriously, the simulation argument, right?
01:24:07.360 | In the case that if you are actually Elon Musk, let us say,
01:24:10.320 | there's a kind of an additional reason
01:24:13.400 | in that what are the chances you would be Elon Musk?
01:24:16.000 | Like, it seems like maybe there would be more interest
01:24:20.680 | in simulating the lives of very unusual
01:24:24.400 | and remarkable people.
01:24:26.160 | So if you consider not just simulations
01:24:29.120 | where all of human history
01:24:31.760 | or the whole of human civilization are simulated,
01:24:34.440 | but also other kinds of simulations,
01:24:36.020 | which only include some subset of people,
01:24:39.160 | like in those simulations that only include a subset,
01:24:43.680 | it might be more likely that that would include subsets
01:24:45.920 | of people with unusually interesting or consequential lives.
01:24:49.320 | - So if you're Elon Musk,
01:24:50.640 | - You gotta wonder, right?
01:24:51.480 | - It's more likely that you're a simulation.
01:24:54.120 | - Like if you're Donald Trump
01:24:55.440 | or if you are Bill Gates,
01:24:56.760 | or you're like some particularly like distinctive character,
01:25:01.760 | you might think that that adds,
01:25:04.080 | I mean, if you just put yourself into those shoes, right?
01:25:07.120 | It's gotta be like an extra reason to think,
01:25:10.120 | that's kind of.
01:25:11.160 | - So interesting.
01:25:12.320 | So on a scale of like farmer in Peru to Elon Musk,
01:25:17.320 | the more you get towards the Elon Musk,
01:25:19.160 | the higher the probability.
01:25:20.600 | - You'd imagine there would be some extra boost from that.
01:25:25.120 | - There's an extra boost.
01:25:26.080 | So he also asked the question of what he would ask an AGI
01:25:30.000 | the question being, what's outside the simulation?
01:25:34.520 | Do you think about the answer to this question,
01:25:37.640 | if we are living in a simulation,
01:25:39.320 | what is outside the simulation?
01:25:41.440 | So the programmer of the simulation?
01:25:45.440 | - Yeah, I mean, I think it connects to the question
01:25:47.640 | of what's inside the simulation in that,
01:25:50.280 | if you had views about the creatures of the simulation,
01:25:53.920 | it might help you make predictions
01:25:57.040 | about what kind of simulation it is,
01:25:59.160 | what might happen, what happens after the simulation,
01:26:03.480 | if there is some after, but also like the kind of setup.
01:26:06.600 | So these two questions would be quite closely intertwined.
01:26:10.720 | - But do you think it would be very surprising to,
01:26:14.960 | like, is the stuff inside the simulation,
01:26:17.880 | is it possible for it to be fundamentally different
01:26:19.880 | than the stuff outside?
01:26:21.840 | - Yeah.
01:26:22.680 | - Like another way to put it,
01:26:25.000 | can the creatures inside the simulation
01:26:28.000 | be smart enough to even understand
01:26:30.200 | or have the cognitive capabilities
01:26:31.800 | or any kind of information processing capabilities
01:26:34.760 | enough to understand the mechanism that's created them?
01:26:39.760 | - They might understand some aspects of it.
01:26:43.080 | I mean, it's a level of,
01:26:45.600 | it's kind of there are levels of explanation,
01:26:49.000 | like degrees to which you can understand.
01:26:51.040 | So does your dog understand what it is to be human?
01:26:53.960 | Well, it's got some idea,
01:26:54.920 | like humans are these physical objects
01:26:57.120 | that move around and do things.
01:26:59.760 | And like a normal human would have a deeper understanding
01:27:03.640 | of what it is to be a human.
01:27:05.600 | And maybe some very experienced psychologist
01:27:10.280 | or a great novelist might understand a little bit more
01:27:12.520 | about what it is to be human.
01:27:14.080 | And maybe a superintelligence could see
01:27:16.360 | right through your soul.
01:27:18.640 | So similarly, I do think that we are quite limited
01:27:23.640 | in our ability to understand all of the relevant aspects
01:27:28.560 | of the larger context that we exist in.
01:27:31.920 | - But there might be hope for some.
01:27:33.400 | - I think we understand some aspects of it,
01:27:35.880 | but how much good is that?
01:27:38.080 | If there's like one key aspect
01:27:41.800 | that changes the significance of all the other aspects.
01:27:44.560 | So we understand maybe seven out of 10 key insights
01:27:48.880 | that you need,
01:27:49.720 | but the answer actually like varies completely
01:27:55.840 | depending on what like number eight, nine, and 10 insight is.
01:27:58.960 | It's like whether you wanna,
01:28:01.520 | suppose that the big task were to guess
01:28:06.520 | whether a certain number was odd or even,
01:28:10.040 | like a 10 digit number.
01:28:12.000 | And if it's even, the best thing for you to do in life
01:28:15.520 | is to go north.
01:28:16.360 | And if it's odd, the best thing for you is to go south.
01:28:19.120 | Now we are in a situation where maybe through our science
01:28:23.640 | and philosophy, we figured out
01:28:25.040 | what the first seven digits are.
01:28:26.960 | So we have a lot of information, right?
01:28:28.440 | Most of it we figured out,
01:28:29.800 | but we are clueless about what the last three digits are.
01:28:34.200 | So we are still completely clueless
01:28:36.520 | about whether the number is odd or even,
01:28:38.240 | and therefore whether we should go north or go south.
01:28:41.160 | I feel that's an analogy,
01:28:42.880 | but I feel we're somewhat in that predicament.
01:28:45.720 | We know a lot about the universe.
01:28:48.000 | We've come maybe more than half of the way there
01:28:51.280 | to kind of fully understanding it,
01:28:52.560 | but the parts we're missing are plausibly ones
01:28:55.320 | that could completely change the overall upshot
01:28:59.160 | of the thing, and including change our overall view
01:29:02.720 | about what the scheme of priorities should be
01:29:05.280 | or which strategic direction would make sense to pursue.
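(A tiny concrete version of the odd-or-even analogy above: the seven-digit prefix is an arbitrary placeholder, and the point is simply that parity depends only on the final digit, so knowing most of the number settles nothing about which way to go.)

    # Parity of a ten-digit number depends only on its last digit, so knowing the
    # first seven digits (arbitrary here) leaves odd-versus-even completely open.
    known_prefix = "1234567"            # the seven digits "we've figured out"
    for last_three in ("000", "001"):   # two of the thousand possible endings
        n = int(known_prefix + last_three)
        print(n, "even -> go north" if n % 2 == 0 else "odd -> go south")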
01:29:07.760 | - Yeah, I think your analogy of us being the dog
01:29:11.160 | trying to understand human beings is an entertaining one,
01:29:15.720 | and probably correct.
01:29:17.560 | As the understanding moves from the dog's viewpoint
01:29:21.840 | to the human psychologist's viewpoint,
01:29:24.720 | the steps along the way there
01:29:26.680 | will have completely transformative ideas
01:29:28.760 | of what it means to be human.
01:29:30.800 | So the dog has a very shallow understanding.
01:29:33.760 | It's interesting to think that,
01:29:36.120 | to analogize that a dog's understanding of a human being
01:29:39.880 | is the same as our current understanding
01:29:42.320 | of the fundamental laws of physics in the universe.
01:29:45.120 | Oh man, okay.
01:29:48.440 | We spent an hour and 40 minutes
01:29:50.320 | talking about the simulation.
01:29:51.600 | I like it.
01:29:52.920 | Let's talk about superintelligence,
01:29:54.800 | at least for a little bit.
01:29:56.920 | And let's start at the basics.
01:29:58.520 | What to you is intelligence?
01:30:00.560 | - Yeah, I tend not to get too stuck
01:30:04.560 | with the definitional question.
01:30:05.960 | I mean, the common sense understanding,
01:30:08.640 | like the ability to solve complex problems,
01:30:11.040 | to learn from experience, to plan, to reason,
01:30:14.280 | some combination of things like that.
01:30:18.520 | - Is consciousness mixed up into that or no?
01:30:21.080 | Is consciousness mixed up into that or is it--
01:30:23.000 | - Well, I don't think,
01:30:24.120 | I think it could be fairly intelligent,
01:30:26.120 | at least without being conscious, probably.
01:30:29.200 | - So then what is superintelligence?
01:30:33.400 | - Yeah, that would be like something that was much more,
01:30:35.880 | - Of that.
01:30:37.560 | - Had much more general cognitive capacity
01:30:40.200 | than we humans have.
01:30:41.600 | So if we talk about general superintelligence,
01:30:45.120 | it would be much faster learner,
01:30:48.000 | be able to reason much better,
01:30:49.640 | make plans that are more effective at achieving its goals,
01:30:53.000 | say in a wide range of complex, challenging environments.
01:30:56.880 | - In terms of, as we turn our eye to the idea
01:31:00.040 | of sort of existential threats from superintelligence,
01:31:03.920 | do you think superintelligence has to exist
01:31:07.400 | in the physical world or can it be digital only?
01:31:10.680 | Sort of, we think of our general intelligence as us humans,
01:31:15.120 | as an intelligence that's associated with a body
01:31:18.520 | that's able to interact with the world,
01:31:20.080 | that's able to affect the world directly with physically.
01:31:23.960 | - I mean, digital only is perfectly fine, I think.
01:31:26.120 | I mean, it's physical in the sense that obviously
01:31:28.920 | the computers and the memories are physical.
01:31:32.040 | But its capability to affect the world sort of-
01:31:34.840 | - Could be very strong,
01:31:35.760 | even if it has a limited set of actuators.
01:31:40.240 | If it can type text on the screen or something like that,
01:31:43.560 | that would be, I think, ample.
01:31:45.720 | - So in terms of the concerns of existential threat of AI,
01:31:50.720 | how can an AI system that's in the digital world
01:31:54.800 | have existential risk sort of,
01:31:58.200 | and what are the attack vectors for a digital system?
01:32:01.800 | - Well, I mean, I guess maybe to take one step back,
01:32:04.160 | so I should emphasize that I also think
01:32:07.840 | there's this huge positive potential
01:32:10.120 | from machine intelligence, including superintelligence.
01:32:13.280 | And I wanna stress that because some of my writing
01:32:18.160 | has focused on what can go wrong.
01:32:20.680 | And when I wrote the book "Superintelligence,"
01:32:23.040 | at that point, I felt that there was a kind of neglect
01:32:28.040 | of what would happen if AI succeeds
01:32:31.680 | and in particular, a need to get
01:32:33.520 | a more granular understanding of where the pitfalls are
01:32:36.240 | so we can avoid them.
01:32:37.440 | I think that since the book came out in 2014,
01:32:43.360 | there has been a much wider recognition of that.
01:32:45.920 | And a number of research groups
01:32:47.840 | are now actually working on developing,
01:32:50.040 | say, AI alignment techniques and so on and so forth.
01:32:52.640 | So I'd like, yeah, I think now it's important
01:32:56.720 | to make sure we bring back onto the table
01:33:01.280 | the upside as well.
01:33:02.320 | - And there's a little bit of a neglect now on the upside,
01:33:05.800 | which is, I mean, if you look at,
01:33:08.040 | I was talking to a friend,
01:33:08.880 | if you look at the amount of information there's available
01:33:11.680 | or people talking and people being excited
01:33:13.720 | about the positive possibilities of general intelligence,
01:33:16.960 | that's not, it's far outnumbered
01:33:20.400 | by the negative possibilities
01:33:22.760 | in terms of our public discourse.
01:33:25.200 | - Possibly, yeah.
01:33:26.840 | It's hard to measure, but--
01:33:28.880 | - What are, can you linger on that for a little bit?
01:33:30.920 | What are some, to you, possible big positive impacts
01:33:35.680 | of general intelligence, superintelligence?
01:33:38.080 | - Well, I mean, so superintelligence,
01:33:39.800 | because I tend to also wanna distinguish
01:33:42.800 | these two different contexts of thinking about AI
01:33:45.920 | and AI impacts, the kind of near-term and long-term,
01:33:49.200 | if you want, both of which I think are legitimate things
01:33:53.000 | to think about, and people should discuss both of them.
01:33:58.000 | But they are different,
01:33:59.120 | and they often get mixed up,
01:34:01.920 | and then I get, you get confusion.
01:34:05.000 | Like, I think you get simultaneously,
01:34:06.400 | like maybe an overhyping of the near-term
01:34:08.440 | and an underhyping of the long-term.
01:34:10.160 | And so I think as long as we keep them apart,
01:34:12.200 | we can have like two good conversations,
01:34:15.120 | but, or we can mix them together
01:34:17.440 | and have one bad conversation.
01:34:18.560 | - Can you clarify just the two things we were talking about,
01:34:21.640 | the near-term and the long-term?
01:34:23.080 | - Yeah, and-- - What are the distinctions?
01:34:24.320 | - Well, it's a blurry distinction,
01:34:27.960 | but say the things I wrote about in this book,
01:34:30.120 | "Superintelligence," long-term,
01:34:33.080 | things people are worrying about today with,
01:34:37.440 | I don't know, algorithmic discrimination,
01:34:39.920 | or even things like self-driving cars and drones and stuff,
01:34:44.360 | more near-term.
01:34:45.360 | And then, of course, you could imagine some medium-term
01:34:50.120 | where they kind of overlap
01:34:51.200 | and one evolves into the other.
01:34:53.160 | But at any rate, I think both, yeah,
01:34:56.640 | the issues look kind of somewhat different
01:34:59.680 | depending on which of these contexts.
01:35:01.480 | - So I think it would be nice
01:35:03.880 | if we can talk about the long-term
01:35:05.600 | and think about a positive impact
01:35:11.600 | or a better world because of the existence
01:35:15.520 | of the long-term superintelligence.
01:35:17.760 | Do you have views of such a world?
01:35:19.240 | - Yeah, I mean, I guess it's a little hard to articulate
01:35:22.200 | because it seems obvious that the world
01:35:24.560 | has a lot of problems as it currently stands.
01:35:27.760 | And it's hard to think of any one of those
01:35:32.320 | which it wouldn't be useful to have
01:35:34.720 | a friendly aligned superintelligence working on.
01:35:39.720 | - So from health to the economic system
01:35:44.920 | to be able to sort of improve the investment
01:35:48.160 | and trade and foreign policy decisions,
01:35:50.320 | all that kind of stuff.
01:35:52.080 | - All that kind of stuff and a lot more.
01:35:55.320 | - I mean, what's the killer app?
01:35:57.960 | - Well, I don't think there is one.
01:35:59.480 | I think AI, especially artificial general intelligence
01:36:04.160 | is really the ultimate general purpose technology.
01:36:07.640 | So it's not that there is this one problem,
01:36:09.600 | this one area where it will have a big impact,
01:36:12.000 | but if and when it succeeds,
01:36:14.720 | it will really apply across the board
01:36:17.560 | in all fields where human creativity and intelligence
01:36:21.200 | and problem solving are useful,
01:36:22.400 | which is pretty much all fields, right?
01:36:24.920 | The thing that it would do
01:36:27.920 | is give us a lot more control over nature.
01:36:30.720 | It wouldn't automatically solve the problems
01:36:32.920 | that arise from conflict between humans,
01:36:35.200 | fundamentally political problems.
01:36:38.080 | Some subset of those might go away
01:36:39.520 | if we just had more resources and cooler tech,
01:36:42.040 | but some subset would require coordination
01:36:46.040 | that is not automatically achieved
01:36:50.040 | just by having more technological capability.
01:36:52.840 | But anything that's not of that sort,
01:36:54.680 | I think you just get like an enormous boost
01:36:56.920 | with this kind of cognitive technology
01:37:00.840 | once it goes all the way.
01:37:02.720 | Now, again, that doesn't mean I'm like thinking,
01:37:05.120 | oh, people don't recognize what's possible
01:37:10.120 | with current technology
01:37:11.920 | and like sometimes things get overhyped,
01:37:14.000 | but I mean, those are perfectly consistent views to hold
01:37:16.840 | the ultimate potential being enormous.
01:37:19.760 | And then it's a very different question
01:37:21.680 | of how far are we from that
01:37:23.160 | or what can we do with near-term technology?
01:37:25.320 | - Yeah, so what's your intuition
01:37:26.400 | about the idea of intelligence explosion?
01:37:29.120 | So there's this,
01:37:30.160 | when you start to think about that leap
01:37:34.040 | from the near term to the long term,
01:37:36.120 | the natural inclination,
01:37:38.120 | like for me sort of building machine learning systems today,
01:37:40.960 | it seems like it's a lot of work
01:37:43.000 | to get to general intelligence,
01:37:44.880 | but there's some intuition of exponential growth,
01:37:47.120 | of exponential improvement of intelligence explosion.
01:37:50.680 | Can you maybe try to elucidate,
01:37:55.400 | try to talk about what's your intuition
01:37:58.920 | about the possibility of intelligence explosion,
01:38:02.800 | that it won't be this gradual, slow process,
01:38:05.160 | there might be a phase shift?
01:38:07.200 | - Yeah, I think it's,
01:38:10.920 | we don't know how explosive it will be.
01:38:13.320 | I think for what it's worth,
01:38:16.160 | it seems fairly likely to me that at some point
01:38:19.240 | there will be some intelligence explosion,
01:38:21.240 | like some period of time where progress in AI
01:38:24.280 | becomes extremely rapid,
01:38:25.840 | roughly in the area where you might say
01:38:30.360 | it's kind of human-ish equivalent
01:38:33.520 | in core cognitive faculties,
01:38:37.320 | though the concept of human equivalence,
01:38:39.880 | like it starts to break down
01:38:41.080 | when you look too closely at it,
01:38:42.960 | and just how explosive does something have
01:38:45.280 | to be for it to be called an intelligence explosion?
01:38:48.920 | Like, does it have to be like overnight, literally,
01:38:50.880 | or a few years?
01:38:52.320 | But overall, I guess,
01:38:55.960 | if you plotted the opinions of different people in the world,
01:39:00.000 | I guess I would put somewhat more probability
01:39:02.680 | towards the intelligence explosion scenario
01:39:05.360 | than probably the average AI researcher, I guess.
01:39:09.480 | - So, and then the other part of the intelligence explosion,
01:39:12.680 | or just forget explosion, just progress,
01:39:15.880 | is once you achieve that gray area
01:39:18.320 | of human-level intelligence,
01:39:20.320 | is it obvious to you that we should be able
01:39:23.040 | to proceed beyond it to get to superintelligence?
01:39:27.040 | - Yeah, that seems, I mean, as much as any of these things
01:39:31.040 | can be obvious, given we've never had one,
01:39:34.960 | people have different views,
01:39:36.000 | smart people have different views,
01:39:37.320 | it's like there's some degree of uncertainty
01:39:40.520 | that always remains for any big, futuristic,
01:39:43.440 | philosophical, grand question,
01:39:46.040 | that just we realize humans are fallible,
01:39:47.880 | especially about these things.
01:39:49.440 | But it does seem, as far as I'm judging things
01:39:52.920 | based on my own impressions,
01:39:55.120 | it seems very unlikely that there would be a ceiling
01:39:59.400 | at or near human cognitive capacity.
01:40:03.960 | - But, and there's such a, I don't know,
01:40:06.840 | there's such a special moment.
01:40:08.760 | It's both terrifying and exciting
01:40:11.400 | to create a system that's beyond our intelligence.
01:40:14.900 | So maybe you can step back and say,
01:40:18.920 | how does that possibility make you feel,
01:40:21.640 | that we can create something,
01:40:24.520 | it feels like there's a line beyond which it steps,
01:40:28.280 | it'll be able to outsmart you,
01:40:31.040 | and therefore it feels like a step where we lose control.
01:40:35.520 | - Well, I don't think the latter follows,
01:40:39.480 | that is, you could imagine,
01:40:41.880 | and in fact, this is what a number of people
01:40:44.200 | are working towards,
01:40:45.040 | making sure that we could ultimately project
01:40:48.800 | higher levels of problem-solving ability,
01:40:51.920 | while still making sure that they are aligned,
01:40:54.800 | like they are in the service of human values.
01:40:57.500 | I mean, so losing control, I think, is not a given,
01:41:05.040 | that that would happen.
01:41:06.280 | Now you asked how it makes me feel,
01:41:08.040 | I mean, to some extent, I've lived with this for so long,
01:41:10.640 | since as long as I can remember,
01:41:14.640 | being an adult or even a teenager,
01:41:16.440 | it seemed to me obvious that at some point,
01:41:18.240 | AI will succeed.
01:41:19.640 | - And so I actually misspoke, I didn't mean control.
01:41:24.640 | I meant, because the control problem is an interesting thing,
01:41:27.920 | and I think the hope is,
01:41:30.720 | at least we should be able to maintain control
01:41:33.040 | over systems that are smarter than us,
01:41:35.280 | but we do lose our specialness.
01:41:39.620 | It's sort of, we'll lose our place
01:41:44.480 | as the smartest, coolest thing on Earth,
01:41:48.240 | and there's an ego involved with that,
01:41:51.840 | that humans aren't very good at dealing with.
01:41:55.720 | I mean, I value my intelligence as a human being.
01:41:59.720 | It seems like a big transformative step
01:42:02.280 | to realize there's something out there
01:42:04.560 | that's more intelligent.
01:42:05.840 | I mean, you don't see that as such a fundamentally--
01:42:09.600 | - I think yes, a lot, I think it would be small.
01:42:13.160 | I mean, I think there are already a lot of things out there
01:42:16.360 | that are, I mean, certainly if you think
01:42:18.160 | the universe is big, there's gonna be other civilizations
01:42:20.440 | that already have super intelligences,
01:42:22.880 | or that just naturally have brains the size of beach balls,
01:42:26.720 | and are like completely leaving us in the dust.
01:42:30.520 | And we haven't come face to face with that.
01:42:33.480 | - We haven't come face to face,
01:42:34.680 | but I mean, that's an open question,
01:42:36.840 | what would happen in a kind of post-human world,
01:42:41.720 | like how much day to day would these super intelligences
01:42:46.720 | be involved in the lives of ordinary?
01:42:49.520 | I mean, you could imagine some scenario
01:42:52.520 | where it would be more like a background thing
01:42:54.200 | that would help protect against some things,
01:42:56.280 | but you wouldn't, like there wouldn't be
01:42:58.880 | this intrusive kind of like making you feel bad
01:43:02.000 | by like making clever jokes at your expense,
01:43:04.560 | like there's like all sorts of things
01:43:05.880 | that maybe in the human context
01:43:07.920 | would feel awkward about that.
01:43:10.600 | You don't wanna be the dumbest kid in your class,
01:43:12.600 | that everybody picks on, like a lot of those things
01:43:15.040 | maybe you need to abstract away from,
01:43:18.000 | if you're thinking about this context
01:43:19.440 | where we have infrastructure that is in some sense
01:43:21.880 | beyond any or all humans.
01:43:27.240 | I mean, it's a little bit like say the scientific community
01:43:29.400 | as a whole, if you think of that as a mind,
01:43:32.280 | it's a little bit of metaphor,
01:43:33.280 | but I mean, obviously it's gotta be like way more capacious
01:43:37.840 | than any individual.
01:43:39.400 | So in some sense, there is this mind like thing
01:43:42.520 | already out there that's just vastly more intelligent
01:43:47.280 | than any individual is.
01:43:49.520 | And we think, okay, that's,
01:43:52.640 | you just accept that as a fact.
01:43:55.240 | - That's the basic fabric of our existence
01:43:57.440 | is that there's a superintelligence.
01:43:58.880 | - Yeah, you get used to a lot of.
01:44:00.560 | - I mean, there's already Google and Twitter and Facebook,
01:44:03.840 | these recommender systems that are the basic fabric of our,
01:44:08.840 | I could see them becoming,
01:44:12.840 | I mean, do you think of the collective intelligence
01:44:14.960 | of these systems as already perhaps
01:44:17.160 | reaching super intelligence level?
01:44:18.960 | - Well, I mean, so here it comes to this,
01:44:21.840 | the concept of intelligence and the scale.
01:44:25.200 | And what human level means,
01:44:27.160 | the kind of vagueness and indeterminacy of those concepts
01:44:32.760 | starts to dominate how you would answer that question.
01:44:37.760 | So like, say the Google search engine
01:44:41.680 | has a very high capacity of a certain kind,
01:44:44.640 | like remembering and retrieving information,
01:44:50.200 | particularly like text or images
01:44:53.880 | that you have a kind of string, a word string key,
01:44:58.680 | like obviously superhuman at that,
01:45:00.320 | but a vast set of other things it can't even do at all,
01:45:05.320 | not just not do well.
01:45:07.400 | So you have these current AI systems
01:45:10.880 | that are superhuman in some limited domain
01:45:14.160 | and then like radically subhuman in all other domains.
01:45:19.160 | Same with chess, like, or just a simple computer
01:45:22.320 | that can multiply really large numbers, right?
01:45:24.280 | So it's gonna have this like one spike of super intelligence
01:45:27.320 | and then a kind of a zero level of capability
01:45:30.160 | across all other cognitive fields.
01:45:32.200 | - Yeah, I don't necessarily think the generalness,
01:45:35.440 | I mean, I'm not so attached to it,
01:45:36.720 | but I could sort of, it's a gray area and it's a feeling,
01:45:40.400 | but to me, sort of AlphaZero
01:45:44.240 | is somehow much more intelligent,
01:45:47.560 | much, much more intelligent than Deep Blue.
01:45:50.400 | And to say which domain,
01:45:52.920 | well, you could say, well, these are both just board games,
01:45:55.080 | they're both just able to play board games,
01:45:56.680 | who cares if they're gonna do better or not?
01:45:59.100 | But there's something about the learning, the self play--
01:46:01.680 | - The learning, yeah. - That makes it,
01:46:03.760 | crosses over into that land of intelligence
01:46:07.560 | that doesn't necessarily need to be general.
01:46:09.680 | In the same way, Google is much closer to Deep Blue
01:46:12.120 | currently in terms of its search engine
01:46:15.240 | than it is to sort of the AlphaZero.
01:46:17.840 | And the moment it becomes,
01:46:19.560 | and the moment these recommender systems
01:46:21.280 | really become more like AlphaZero,
01:46:24.320 | but being able to learn a lot without the constraints
01:46:27.880 | of being heavily constrained by human interaction,
01:46:31.560 | that seems like a special moment in time.
01:46:34.640 | - I mean, certainly learning ability
01:46:37.640 | seems to be an important facet of general intelligence.
01:46:42.920 | That you can take some new domain
01:46:45.720 | that you haven't seen before,
01:46:47.040 | and you weren't specifically pre-programmed for,
01:46:49.120 | and then figure out what's going on there
01:46:51.280 | and eventually become really good at it.
01:46:53.360 | So that's something AlphaZero
01:46:56.160 | has much more of than Deep Blue had.
01:46:58.880 | And in fact, I mean, systems like AlphaZero can learn
01:47:03.840 | not just Go, but other games, and in fact
01:47:05.800 | would probably beat Deep Blue in chess and so forth.
01:47:09.200 | Right, so-- - Not just--
01:47:10.040 | - So you do see this-- - Destroy Deep Blue.
01:47:11.600 | - Generality, and so it matches the intuition.
01:47:13.640 | We feel it's more intelligent,
01:47:15.120 | and it also has more of this
01:47:16.480 | general purpose learning ability.
01:47:18.440 | And if we get systems
01:47:20.800 | that have even more general purpose learning ability,
01:47:22.760 | it might also trigger an even stronger intuition
01:47:24.720 | that they are actually starting to get smart.
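As a concrete illustration of that distinction, here is a minimal self-play sketch in Python. It is a toy construction only, nothing like the internals of DeepMind's AlphaZero or IBM's Deep Blue; the game (tic-tac-toe), the tabular value estimates, and the Monte Carlo update rule are all simplified assumptions. What it shows is the shape of the idea: the learner is handed only the rules of the game and improves purely by playing against itself, with no hand-crafted evaluation function of the kind Deep Blue relied on.

```python
# Toy self-play learner: given only the rules of tic-tac-toe, it learns
# state values by playing against itself and backing up final outcomes.
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def self_play_train(games=20000, alpha=0.2, epsilon=0.1):
    """Learn board values (from X's perspective) purely from self-play."""
    values = defaultdict(float)  # unseen positions start at 0.0
    for _ in range(games):
        board, history, player = [' '] * 9, [], 'X'
        while True:
            moves = legal_moves(board)
            if random.random() < epsilon:
                move = random.choice(moves)            # occasional exploration
            else:
                def score(m):                          # greedy for the mover
                    b = board[:]; b[m] = player
                    v = values[tuple(b)]
                    return v if player == 'X' else -v
                move = max(moves, key=score)
            board[move] = player
            history.append(tuple(board))
            w = winner(board)
            if w or not legal_moves(board):
                outcome = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
                # Back up the final result through every position visited.
                for state in history:
                    values[state] += alpha * (outcome - values[state])
                break
            player = 'O' if player == 'X' else 'X'
    return values

if __name__ == "__main__":
    learned = self_play_train()
    print(f"Positions evaluated purely from self-play: {len(learned)}")
```

The value table is shared by both sides (O simply negates it when choosing moves), which is what makes this self-play rather than play against a fixed, hand-coded opponent.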
01:47:28.000 | - So if you were to pick a future,
01:47:29.520 | what do you think a utopia looks like with AGI systems?
01:47:33.960 | Sort of, is it the neural link,
01:47:37.840 | brain computer interface world,
01:47:39.400 | where we're kind of really closely interlinked
01:47:41.640 | with AI systems?
01:47:43.600 | Is it possibly where AGI systems replace us completely
01:47:48.080 | while maintaining the values and the consciousness?
01:47:53.080 | Is it something like it's a completely invisible fabric,
01:47:55.960 | like you mentioned, a society where it just aids
01:47:58.320 | a lot of the stuff that we do,
01:48:00.480 | like curing diseases and so on?
01:48:02.080 | What is utopia, if you get to pick?
01:48:04.360 | - Yeah, I mean, it's a good question,
01:48:05.920 | and a deep and difficult one.
01:48:09.040 | I'm quite interested in it.
01:48:10.320 | I don't have all the answers yet, or might never have,
01:48:15.000 | but I think there are some different observations
01:48:18.480 | one could make.
01:48:19.320 | One is if this scenario actually did come to pass,
01:48:23.600 | it would open up this vast space of possible modes of being.
01:48:28.600 | On one hand, material and resource constraints
01:48:33.720 | would just be expanded dramatically.
01:48:36.120 | So there would be a lot of, a big pie, let's say.
01:48:40.240 | Also, it would enable us to do things,
01:48:44.920 | including to ourselves,
01:48:48.760 | or like that it would just open up
01:48:51.680 | this much larger design space and option space
01:48:54.680 | than we have ever had access to in human history.
01:48:58.320 | So I think two things follow from that.
01:49:01.200 | One is that we probably would need to make
01:49:04.360 | a fairly fundamental rethink of what ultimately we value.
01:49:09.360 | Like think things through more from first principles.
01:49:11.880 | The context would be so different from the familiar
01:49:13.720 | that we couldn't just take what we've always been doing,
01:49:16.240 | and then like, oh, well, we have this cleaning robot
01:49:19.520 | that cleans the dishes in the sink,
01:49:22.960 | and a few other small things.
01:49:24.600 | Like, I think we would have to go back to first principles.
01:49:26.920 | - So even from the individual level,
01:49:29.000 | go back to the first principles of what is the meaning
01:49:31.640 | of life, what is happiness, what is fulfillment?
01:49:34.120 | - Yeah.
01:49:35.440 | And then also connected to this large space of resources
01:49:39.840 | is that it would be possible,
01:49:43.200 | and I think something we should aim for is to do well
01:49:48.200 | by the lights of more than one value system.
01:49:53.960 | That is, we wouldn't have to choose only one value system.
01:50:00.120 | We wouldn't have to choose only one value criterion
01:50:05.120 | and say, we're gonna do something that scores really high
01:50:08.680 | on the metric of, say, hedonism,
01:50:13.080 | and then is like a zero by other criteria,
01:50:17.640 | like kind of wireheaded brains in vats,
01:50:20.280 | and it's like a lot of pleasure, that's good,
01:50:22.840 | but then like no beauty, no achievement,
01:50:24.920 | or pick it up.
01:50:27.520 | I think to some significant, not unlimited sense,
01:50:30.960 | but a significant sense,
01:50:32.640 | it would be possible to do very well by many criteria.
01:50:36.160 | Like maybe you could get like 98% of the best
01:50:41.160 | according to several criteria at the same time,
01:50:43.960 | given this great expansion of the option space.
01:50:48.960 | And so-
01:50:50.720 | - So have competing value systems,
01:50:52.600 | competing criteria as a sort of forever,
01:50:57.040 | just like our Democrat versus Republican,
01:51:00.160 | there always seem to be these multiple parties
01:51:02.640 | that are useful for our progress in society,
01:51:05.520 | even though it might seem dysfunctional inside the moment,
01:51:08.120 | but having the multiple value systems
01:51:11.200 | seems to be beneficial for, I guess, a balance of power.
01:51:16.200 | - So that's, yeah, not exactly what I have in mind,
01:51:19.120 | that it's, well, although it can be,
01:51:21.000 | maybe in an indirect way it is,
01:51:23.560 | but that if you had the chance to do something
01:51:27.920 | that scored well on several different metrics,
01:51:32.720 | our first instinct should be to do that
01:51:34.640 | rather than immediately leap to the thing,
01:51:38.040 | which ones of these value systems are we gonna screw over?
01:51:40.800 | Like I think our first,
01:51:42.200 | let's first try to do very well by all of them.
01:51:44.440 | Then it might be that you can't get 100% of all,
01:51:47.120 | and you would have to then like have the hard conversation
01:51:49.960 | about which one will only get 97%.
01:51:51.880 | - There you go, there's my cynicism
01:51:53.360 | that all of existence is always a trade-off.
01:51:56.200 | But you're saying, maybe it's not such a bad trade-off.
01:51:58.840 | Let's first at least try to--
01:52:00.120 | - Well, this would be a distinctive context
01:52:02.320 | in which at least some of the constraints would be removed.
01:52:07.320 | - I'll leave you there.
01:52:08.800 | - So there would probably still be trade-offs in the end.
01:52:10.440 | It's just that we should first make sure
01:52:11.960 | we at least take advantage of this abundance.
01:52:15.960 | So in terms of thinking about this, like, yeah,
01:52:19.280 | one should think, I think in this kind of frame of mind
01:52:24.280 | of generosity and inclusiveness to different value systems
01:52:29.640 | and see how far one can get there first.
01:52:33.560 | And I think one could do something that would be very good
01:52:37.760 | according to many different criteria.
01:52:41.760 | - We kind of talked about AGI fundamentally transforming
01:52:46.120 | the value system of our existence, the meaning of life.
01:52:51.120 | But today, what do you think is the meaning of life?
01:52:55.280 | The silliest or perhaps the biggest question,
01:52:58.560 | what's the meaning of life?
01:52:59.560 | What's the meaning of existence?
01:53:01.920 | What makes, what gives your life fulfillment,
01:53:04.800 | purpose, happiness, meaning?
01:53:07.340 | - Yeah, I think these are, I guess,
01:53:10.600 | a bunch of different but related questions in there
01:53:14.760 | that one can ask.
01:53:15.880 | - Happiness, meaning, they're all different.
01:53:19.280 | - I mean, like you could imagine somebody
01:53:21.120 | getting a lot of happiness from something
01:53:22.320 | that they didn't think was meaningful.
01:53:24.920 | Like mindless, like watching reruns
01:53:29.680 | of some television series while eating junk food.
01:53:31.440 | Like maybe for some people that gives pleasure,
01:53:33.320 | but they wouldn't think it had a lot of meaning.
01:53:35.760 | Whereas conversely, something that might be quite
01:53:38.440 | loaded with meaning might not be very fun always.
01:53:41.320 | Like some difficult achievement
01:53:42.920 | that really helps a lot of people,
01:53:44.360 | maybe requires self-sacrifice and hard work.
01:53:47.720 | And so these things can, I think, come apart,
01:53:52.080 | which is something to bear in mind also
01:53:57.200 | if you're thinking about these utopia questions,
01:54:01.000 | where you might actually start to do
01:54:06.000 | some constructive thinking about that.
01:54:07.640 | You might have to isolate and distinguish
01:54:10.920 | these different kinds of things
01:54:13.240 | that might be valuable in different ways.
01:54:15.360 | Make sure you can sort of clearly perceive each one of them.
01:54:18.680 | And then you can think about how you can combine them.
01:54:22.040 | - And just as you said, hopefully come up with a way
01:54:24.840 | to maximize all of them together.
01:54:27.480 | - Yeah, or at least get, I mean, maximize
01:54:29.840 | or get like a very high score on a wide range of them,
01:54:33.760 | even if not literally all.
01:54:35.040 | You can always come up with values
01:54:36.440 | that are exactly opposed to one another, right?
01:54:39.240 | But I think many values are only kind of opposed
01:54:43.440 | if you place them within a certain dimensionality
01:54:47.400 | of your space. Like, there are shapes that
01:54:50.200 | you can't untangle in a given dimensionality,
01:54:55.280 | but if you start adding dimensions,
01:54:56.640 | then it might in many cases just be that they are easy
01:54:59.000 | to pull apart and you could.
01:55:02.000 | So we'll see how much space there is for that.
01:55:04.200 | But I think that there could be a lot
01:55:06.600 | in this context of radical abundance.
01:55:09.200 | If ever we get to that.
01:55:10.640 | - I don't think there's a better way to end it, Nick.
01:55:15.320 | You've influenced a huge number of people
01:55:18.160 | to work on what could very well be
01:55:20.520 | the most important problems of our time.
01:55:22.520 | So it's a huge honor.
01:55:23.520 | Thank you so much for talking to me.
01:55:24.360 | - Well, thank you for coming by, Lex.
01:55:25.560 | That was fun.
01:55:26.440 | Thank you.
01:55:27.960 | - Thanks for listening to this conversation
01:55:29.520 | with Nick Bostrom.
01:55:30.600 | And thank you to our presenting sponsor, Cash App.
01:55:33.680 | Please consider supporting the podcast
01:55:35.440 | by downloading Cash App and using code LexPodcast.
01:55:39.960 | If you enjoy this podcast, subscribe on YouTube,
01:55:42.400 | review it with five stars on Apple Podcast,
01:55:44.640 | support it on Patreon,
01:55:45.960 | or simply connect with me on Twitter @LexFridman.
01:55:49.320 | And now let me leave you with some words from Nick Bostrom.
01:55:53.920 | "Our approach to existential risks
01:55:57.840 | "cannot be one of trial and error.
01:56:00.400 | "There's no opportunity to learn from errors.
01:56:03.000 | "The reactive approach, see what happens,
01:56:05.720 | "limit damages, and learn from experience, is unworkable.
01:56:10.080 | "Rather, we must take a proactive approach.
01:56:12.880 | "This requires foresight to anticipate new types of threats
01:56:16.320 | "and a willingness to take decisive, preventative action
01:56:19.600 | "and to bear the costs, moral and economic, of such actions."
01:56:24.280 | Thank you for listening and hope to see you next time.
01:56:29.200 | (upbeat music)