Why is the Simulation Interesting to Elon Musk? (Nick Bostrom) | AI Podcast Clips
But I would say one of the biggest popularizers of the simulation hypothesis, just like you said, is Elon Musk. Do you have a sense of what Elon thinks about when he thinks about simulation?
To the extent that one thinks the argument is credible, it would tell us something very important about the world, whichever of the three alternatives of the simulation argument turns out to be true.
Now, interestingly, in the case of someone like Elon, there's an additional reason why you might want to take the simulation hypothesis seriously. If you are actually Elon Musk, let us say, what are the chances you would be Elon Musk?
It seems like maybe there would be more interest in simulating some lives than others. If not every simulation includes everyone, or the whole of human civilization, then in those simulations that only include a subset of people, it might be more likely that the subset would include people with unusually interesting or consequential lives.
So if you're some particularly distinctive character, if you just think yourself into those shoes, right, on a scale of, like, farmer in Peru to Elon Musk...
- You'd imagine there would be some extra boost from that.
So he also asked the question of what he would ask an AGI, the question being: what's outside the simulation? Do you think about the answer to that question?
- Yeah, I mean, I think it connects to the question of what the simulation is. If you had views about the creators of the simulation, that would bear on what might happen, what happens after the simulation, if there is some after, but also on the kind of setup. So these two questions would be quite closely intertwined.
- But do you think it would be very surprising? Is it possible for it to be fundamentally different? And do creatures with our kind of cognitive capabilities, or any kind of information processing capabilities, have enough to understand the mechanism that created them?
- I mean, there are levels of explanation. So does your dog understand what it is to be human? A normal human would have a deeper understanding of what it is to be human, and a great novelist might understand a little bit more.
So similarly, I do think that we are quite limited in our ability to understand all of the relevant aspects, and there may be some aspect that changes the significance of all the other aspects. We understand maybe seven out of ten key insights that we need, but the answer actually varies completely depending on what insights number eight, nine, and ten are.
Imagine some big number where, if it's even, the best thing for you to do in life is to go north, and if it's odd, the best thing for you is to go south. Now we are in a situation where, maybe through our science, we have figured out most of the digits, but we are clueless about what the last three digits are, and therefore about whether we should go north or go south. The analogy isn't exact, but I feel we're somewhat in that predicament.
We've come maybe more than half of the way there, but the parts we are missing are plausibly ones that could completely change the overall upshot of the thing, including our sense of what the scheme of priorities should be or which strategic direction would make sense to pursue.
- Yeah, I think your analogy of us being the dog trying to understand human beings is an entertaining one. The closer the dog's understanding gets to a human's viewpoint, the more transformative each step along the way is. And it's interesting to analogize a dog's understanding of a human being to our understanding of the fundamental laws of physics in the universe.