Why is the Simulation Interesting to Elon Musk? (Nick Bostrom) | AI Podcast Clips


00:00:00.000 | - Let me ask, one of the popularizers,
00:00:05.000 | you said there have been many of them,
00:00:07.280 | when you look at sort of the last few years
00:00:10.580 | of the simulation hypothesis, just like you said,
00:00:12.920 | it comes up every once in a while,
00:00:14.400 | some new community discovers it and so on.
00:00:16.660 | But I would say one of the biggest popularizers
00:00:18.760 | of this idea is Elon Musk.
00:00:20.400 | Do you have any kind of intuition about
00:00:23.440 | what Elon thinks about when he thinks about simulation?
00:00:26.520 | Why is this of such interest?
00:00:28.640 | Is it all the things we've talked about
00:00:30.760 | or is there some special kind of intuition
00:00:32.600 | about simulation that he has?
00:00:35.000 | - I mean, you might have a better idea than me.
00:00:36.240 | Why it's of interest,
00:00:38.000 | I think, seems fairly obvious:
00:00:40.920 | to the extent that one thinks the argument
00:00:43.360 | is credible, it would be of interest.
00:00:44.920 | If it's correct,
00:00:46.480 | it would tell us something very important about the world
00:00:48.760 | one way or the other,
00:00:49.640 | whichever of the three alternatives of the simulation argument holds.
00:00:52.080 | That seems like arguably one of the most
00:00:54.920 | fundamental discoveries, right?
00:00:57.240 | Now, interestingly, in the case of someone like Elon,
00:00:59.360 | so there's like the standard arguments
00:01:01.000 | for why you might wanna take the simulation hypothesis
00:01:03.520 | seriously, the simulation argument, right?
00:01:06.120 | If you are actually Elon Musk, let us say,
00:01:09.120 | there's kind of an additional reason,
00:01:12.180 | in that, what are the chances you would be Elon Musk?
00:01:14.780 | Like, it seems like maybe there would be more interest
00:01:19.480 | in simulating the lives of very unusual
00:01:23.160 | and remarkable people.
00:01:24.920 | So if you consider not just simulations
00:01:27.880 | where all of human history
00:01:30.520 | or the whole of human civilization is simulated,
00:01:33.200 | but also other kinds of simulations,
00:01:34.800 | which only include some subset of people,
00:01:37.920 | like in those simulations that only include a subset,
00:01:42.440 | it might be more likely that that would include subsets
00:01:44.680 | of people with unusually interesting or consequential lives.
00:01:48.080 | - So if you're Elon Musk,
00:01:49.400 | - You gotta wonder, right?
00:01:50.240 | - It's more likely that you're an alien.
00:01:52.880 | - Like if you're Donald Trump,
00:01:54.200 | or if you are Bill Gates,
00:01:55.520 | or some particularly distinctive character,
00:02:00.520 | you might think that that adds,
00:02:02.840 | I mean, if you put yourself into those shoes, right?
00:02:05.880 | It's gotta be like an extra reason to think
00:02:08.880 | that's kind of...
00:02:09.920 | - So interesting.
00:02:11.080 | So on a scale of like farmer in Peru to Elon Musk,
00:02:16.080 | the more you get towards the Elon Musk,
00:02:17.920 | the higher the probability.
00:02:19.360 | - You'd imagine there would be some extra boost from that.
00:02:23.880 | - There's an extra boost.
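One way to read this "extra boost" is as a toy Bayesian update. The sketch below is illustrative only; the prior `sim` and the oversampling factor `k` are assumptions made up for the example, not figures from the conversation:

```python
# Toy model: suppose a fraction `sim` of all observers are simulated, and
# simulations oversample "remarkable" lives by a factor `k` relative to
# base reality. Bayes' rule then gives the posterior probability of being
# simulated, conditional on finding yourself remarkable.
def p_simulated_given_remarkable(sim: float, k: float) -> float:
    base = 1 - sim
    return (k * sim) / (k * sim + base)

print(p_simulated_given_remarkable(sim=0.5, k=1))   # 0.5: no oversampling, no boost
print(p_simulated_given_remarkable(sim=0.5, k=10))  # ~0.91: remarkable lives oversampled
```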
00:02:24.840 | So he also asked the question of what he would ask an AGI,
00:02:28.760 | the question being: what's outside the simulation?
00:02:33.280 | Do you think about the answer to this question,
00:02:36.400 | if we are living in a simulation,
00:02:38.080 | what is outside the simulation?
00:02:40.200 | So the programmer of the simulation?
00:02:44.200 | - Yeah, I mean, I think it connects to the question
00:02:46.400 | of what's inside the simulation in that,
00:02:49.040 | if you had views about the creators of the simulation,
00:02:52.680 | it might help you make predictions
00:02:55.760 | about what kind of simulation it is,
00:02:57.880 | what might happen, what happens after the simulation,
00:03:02.200 | if there is some after, but also like the kind of setup.
00:03:05.320 | So these two questions would be quite closely intertwined.
00:03:09.440 | - But do you think it would be very surprising?
00:03:13.680 | Like, is it possible for the stuff inside the simulation
00:03:16.600 | to be fundamentally different
00:03:18.600 | from the stuff outside?
00:03:20.600 | - Yeah.
00:03:21.440 | - Like another way to put it,
00:03:23.760 | can the creatures inside the simulation
00:03:26.760 | be smart enough to even understand
00:03:28.960 | or have the cognitive capabilities
00:03:30.560 | or any kind of information processing capabilities
00:03:33.520 | enough to understand the mechanism that created them?
00:03:38.520 | - They might understand some aspects of it.
00:03:41.880 | I mean, there are levels of explanation,
00:03:47.760 | like degrees to which you can understand.
00:03:49.800 | So does your dog understand what it is to be human?
00:03:52.720 | Well, it's got some idea,
00:03:53.680 | like humans are these physical objects
00:03:55.880 | that move around and do things.
00:03:58.480 | And like a normal human would have a deeper understanding
00:04:02.360 | of what it is to be a human.
00:04:04.360 | And maybe some very experienced psychologist
00:04:09.040 | or a great novelist might understand a little bit more
00:04:11.240 | about what it is to be human.
00:04:12.800 | And maybe a superintelligence could see
00:04:15.080 | right through your soul.
00:04:17.360 | So similarly, I do think that we are quite limited
00:04:22.360 | in our ability to understand all of the relevant aspects
00:04:27.280 | of the larger context that we exist in.
00:04:30.640 | - But there might be hope for some.
00:04:32.120 | - I think we understand some aspects of it,
00:04:34.600 | but how much good is that,
00:04:36.800 | if there's like one key aspect
00:04:40.480 | that changes the significance of all the other aspects?
00:04:43.320 | So we understand maybe seven out of 10 key insights
00:04:47.680 | that you need,
00:04:48.520 | but the answer actually varies completely
00:04:54.600 | depending on what insights eight, nine, and 10 are.
00:04:57.720 | It's like,
00:05:00.280 | suppose that the big task were to guess
00:05:05.280 | whether a certain number was odd or even,
00:05:08.800 | like a 10-digit number.
00:05:10.760 | And if it's even, the best thing for you to do in life
00:05:14.280 | is to go north.
00:05:15.120 | And if it's odd, the best thing for you is to go south.
00:05:17.880 | Now we are in a situation where maybe through our science
00:05:22.400 | and philosophy, we figured out
00:05:23.800 | what the first seven digits are.
00:05:25.720 | So we have a lot of information, right?
00:05:27.200 | Most of it we figured out,
00:05:28.560 | but we are clueless about what the last three digits are.
00:05:32.960 | So we are still completely clueless
00:05:35.280 | about whether the number is odd or even,
00:05:37.000 | and therefore whether we should go north or go south.
00:05:39.920 | That's just an analogy,
00:05:41.640 | but I feel we're somewhat in that predicament.
00:05:44.480 | We know a lot about the universe.
00:05:46.800 | We've come maybe more than half of the way there
00:05:50.040 | to kind of fully understanding it,
00:05:51.320 | but the parts we are missing are plausibly ones
00:05:54.120 | that could completely change the overall upshot of the thing
00:05:58.600 | including changing our overall view
00:06:01.480 | about what the scheme of priorities should be
00:06:04.040 | or which strategic direction would make sense to pursue.
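The parity analogy is easy to verify numerically. A minimal sketch, assuming the unknown trailing digits are uniformly random: a number's parity is fixed entirely by its last digit, so knowing the first seven digits of a 10-digit number leaves you at essentially 50/50 on odd versus even.

```python
import random

def prob_even(prefix: str, total_digits: int = 10, trials: int = 100_000) -> float:
    """Estimate P(number is even) given only its known leading digits."""
    evens = 0
    for _ in range(trials):
        # Fill the unknown trailing digits uniformly at random.
        suffix = "".join(random.choice("0123456789")
                         for _ in range(total_digits - len(prefix)))
        evens += int(prefix + suffix) % 2 == 0
    return evens / trials

print(prob_even("1234567"))     # seven digits known: ~0.5, still clueless
print(prob_even("1234567890"))  # all ten digits known: 1.0, parity fully determined
```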
00:06:06.520 | - Yeah, I think your analogy of us being the dog
00:06:09.920 | trying to understand human beings is an entertaining one
00:06:14.480 | and probably correct.
00:06:16.320 | As the understanding moves from the dog's viewpoint
00:06:20.600 | toward the human psychologist's viewpoint,
00:06:23.480 | the steps along the way
00:06:25.440 | will involve completely transformative ideas
00:06:27.520 | of what it means to be human.
00:06:29.580 | So a dog has a very shallow understanding.
00:06:32.540 | It's interesting to think
00:06:34.880 | that a dog's understanding of a human being
00:06:38.640 | is analogous to our current understanding
00:06:41.080 | of the fundamental laws of physics in the universe.