Nick Bostrom on the Joe Rogan Podcast Conversation About the Simulation | AI Podcast Clips
Chapters
0:00
1:40 The Bland Principle of Indifference
4:10 Doomsday Argument
6:54 The Doomsday Argument
11:01 The Self-Sampling Assumption
00:00:10.260 |
Now I think you had a conversation with Joe Rogan, 00:00:17.400 |
How does that lead to the conclusion that we're likely living in a simulation? 00:00:31.000 |
Why does that mean that we're now in a simulation? 00:00:34.000 |
- What you get to, if you accept alternative three, is that there would be more simulated people 00:00:41.700 |
with our kinds of experiences than non-simulated ones. 00:00:44.700 |
Like if, kind of, if you look at the world as a whole 00:00:49.700 |
by the end of time as it were, you just count it up, 00:00:53.340 |
there would be more simulated ones than non-simulated ones. 00:00:57.840 |
Then there is an extra step to get from that. 00:01:01.500 |
If you assume that, suppose for the sake of the argument that that's true, how do you get from there 00:01:07.720 |
to the statement we are probably in a simulation? 00:01:12.720 |
So here you're introducing an indexical statement, 00:01:18.240 |
like it's that this person right now is in a simulation. 00:01:30.240 |
But what probability should you have that you yourself are one of the simulated ones? 00:01:38.840 |
So yeah, so I call it the bland principle of indifference, 00:01:46.560 |
when you have two, I guess, sets of observers, 00:01:54.520 |
and you can't, from any internal evidence you have, tell which set you belong to, 00:02:01.360 |
you should assign a probability that's proportional to the size of those sets. 00:02:08.560 |
So that if there are 10 times more simulated people with your kinds of experiences than non-simulated ones, 00:02:13.560 |
you would be 10 times more likely to be one of those. 00:02:23.160 |
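A minimal sketch of that proportionality, with made-up observer counts standing in for whatever the real numbers would be:

```python
# Bland principle of indifference: if no internal evidence distinguishes
# simulated from non-simulated observers with your kind of experiences,
# your credence of being simulated should match their fraction of all such observers.

def credence_simulated(num_simulated: int, num_non_simulated: int) -> float:
    """Credence of being simulated, proportional to the observer counts."""
    return num_simulated / (num_simulated + num_non_simulated)

print(credence_simulated(10, 1))  # ~0.909, i.e. 10:1 odds in favor of being simulated
print(credence_simulated(1, 0))   # 1.0, the limiting case where everybody is simulated
```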
you should rationally just assign the same probability 00:02:49.800 |
for a universe to produce an intelligent civilization 00:02:58.200 |
At some point, when the first simulation is created, 00:03:07.400 |
- Yeah, well, I mean, there might be different, 00:03:16.160 |
and then some subset of those get to intelligent life, 00:03:23.400 |
they might get started at quite different times. 00:03:27.560 |
it takes a billion years longer before you get monkeys, 00:03:31.880 |
or before you even get bacteria, than on another planet. 00:03:43.280 |
- Is there a connection here to the doomsday argument 00:03:55.400 |
that is, reasoning about these kinds of indexical propositions. 00:04:13.480 |
and maybe you can speak to the anthropic reasoning 00:04:21.400 |
But the doomsday argument was really first discovered 00:04:26.360 |
by Brandon Carter, who was a theoretical physicist 00:04:29.600 |
and then developed by philosopher John Leslie. 00:04:34.000 |
I think it might've been discovered initially 00:04:45.720 |
but let's focus on the Carter-Leslie version, 00:04:47.920 |
where it's an argument that we have systematically underestimated the probability that humanity will go extinct soon. 00:05:00.760 |
Now, I should say, most people probably think, 00:05:06.160 |
at the end of the day, there is something wrong 00:05:07.560 |
with this doomsday argument, that it doesn't really hold. 00:05:11.920 |
but it's proved hard to say exactly what is wrong with it. 00:05:15.640 |
And different people have different accounts. 00:05:26.120 |
- Yeah, so maybe it's easiest to explain via an analogy 00:05:41.240 |
Suppose you have two urns, and they have balls in them that have numbers. 00:05:48.240 |
Ball number one, two, three, up to ball number 10. 00:05:50.800 |
And then in the other urn, you have a million balls 00:05:58.760 |
And somebody puts one of these urns in front of you 00:06:02.640 |
and asks you to guess what's the chance it's the 10-ball urn. 00:06:07.400 |
And you say, well, 50-50, I can't tell which urn it is. 00:06:16.200 |
And let's suppose you find that it's ball number seven. 00:06:18.960 |
So that's strong evidence for the 10-ball hypothesis. 00:06:35.000 |
If it had been the million-ball urn, it would be very unlikely you would get number seven. 00:06:40.960 |
And if your prior was 50-50 that it was the 10-ball urn, you should now update to be almost certain that it is. 00:06:51.600 |
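For concreteness, here is the Bayes update behind that urn analogy, using the numbers from the example (a 50-50 prior, ball number seven, 10 balls versus a million):

```python
# Bayes' rule for the urn analogy: drawing ball number seven strongly
# favors the 10-ball urn over the million-ball urn.

prior_ten = 0.5                     # prior that it's the 10-ball urn
prior_million = 0.5                 # prior that it's the million-ball urn

likelihood_ten = 1 / 10             # chance of drawing ball #7 from the 10-ball urn
likelihood_million = 1 / 1_000_000  # chance of drawing ball #7 from the million-ball urn

evidence = prior_ten * likelihood_ten + prior_million * likelihood_million
posterior_ten = prior_ten * likelihood_ten / evidence
print(posterior_ten)  # ~0.99999, near certainty that it's the 10-ball urn
```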
So in the case of the urns, this is uncontroversial, 00:06:55.760 |
The Doomsday Argument says that you should reason 00:06:58.800 |
in a similar way with respect to different hypotheses 00:07:11.280 |
So to simplify, let's suppose we only consider 00:07:15.040 |
two hypotheses: either there will be 200 billion humans in total, or 200 trillion humans in total. 00:07:27.640 |
So it's easiest to see if we just consider these two. 00:07:48.400 |
But then according to this Doomsday Argument, 00:07:58.480 |
So your birth rank is your position in the sequence of all humans that have ever lived. 00:08:06.080 |
It turns out you're roughly human number 100 billion, 00:08:19.560 |
I mean, obviously the exact number would depend 00:08:23.680 |
on which ancestors were human enough to count as human. 00:08:34.520 |
Now, if there are only gonna be 200 billion in total, 00:08:41.600 |
that's a run-of-the-mill human, completely unsurprising. 00:08:50.200 |
Like what are the chances, out of these 200 trillion humans, that you would be among the first 100 billion? 00:09:06.040 |
you thought after finding this low numbered random sample, 00:09:10.400 |
you updated in favor of the urn having few balls. 00:09:14.640 |
Similarly, you should update in favor of the human species having the smaller total number of members. 00:09:25.200 |
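The same update run on the doomsday numbers used here, assuming for illustration a 50-50 prior over the two totals and treating your birth rank as a uniform random draw (the self-sampling assumption discussed below):

```python
# Doomsday-style update: a birth rank of ~100 billion is compatible with both
# hypotheses, but it is a far more probable draw if the total number of humans
# is 200 billion than if it is 200 trillion.

prior_small = 0.5                 # 200 billion humans in total
prior_large = 0.5                 # 200 trillion humans in total

likelihood_small = 1 / 200e9      # chance of this particular birth rank if 200 billion total
likelihood_large = 1 / 200e12     # chance of this particular birth rank if 200 trillion total

evidence = prior_small * likelihood_small + prior_large * likelihood_large
posterior_small = prior_small * likelihood_small / evidence
print(posterior_small)  # ~0.999, roughly 1000:1 in favor of the smaller total
```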
- Well, that would be the hypothesis in this case, 00:09:42.240 |
from the set of all humans that will ever have existed. 00:09:49.320 |
The question then is why should you make that assumption? 00:09:58.720 |
with different ways of supporting that assumption. 00:10:03.520 |
- That's just one example of anthropic reasoning, right? 00:10:03.520 |
when you think about sort of even like existential threats 00:10:18.640 |
that you should assume that you're just an average case. 00:10:23.840 |
That you're kind of a typical, randomly sampled case. 00:10:28.000 |
it seems to lead to what intuitively we think 00:10:34.320 |
that there's gotta be something fishy about this argument 00:10:45.360 |
of reaching size 200 trillion humans in the future. 00:10:53.800 |
It seems you would need sophisticated arguments 00:10:55.840 |
about the impossibility of space colonization, blah, blah. 00:10:58.840 |
So one might be tempted to reject this key assumption. 00:11:04.880 |
as if you were a random sample from all observers 00:11:18.440 |
to make sense of bona fide scientific inferences. 00:11:33.280 |
So I mean, if you have a sufficiently large universe, 00:11:44.920 |
it could be true according to both of these theories 00:11:50.560 |
that there will be some observers observing the value 00:11:58.120 |
because there will be some observers that have hallucinations 00:12:07.600 |
And if enough observers make enough different observations, 00:12:38.520 |
is evidence for the theories we think are supported. 00:12:47.120 |
of these inferences that clearly seem correct, 00:12:52.760 |
and infer what the temperature of the cosmic background is 00:12:56.920 |
and the fine structure constant and all of this. 00:13:00.560 |
But it seems that without rolling in some assumption 00:13:12.560 |
where it looks like this kind of anthropic reasoning 00:13:16.920 |
And yet in the case of the Doomsday Argument, 00:13:20.280 |
and people might think there's something wrong with it there. 00:13:38.680 |
In other words, developing a theory of anthropics. 00:13:41.400 |
And there are different ways of looking at that. 00:13:47.400 |
But to tie it back to the simulation argument, 00:13:57.680 |
the assumption you need there, the bland principle of indifference, is much weaker than the self-sampling assumption. 00:14:00.080 |
So if you think about the case of the Doomsday Argument, 00:14:04.600 |
it says you should reason as if you're a random sample 00:14:17.960 |
Whereas in the case of the simulation argument, 00:14:19.800 |
it says that, well, if you actually have no way of telling 00:14:31.200 |
in the simulation argument is different, it seems like. 00:14:35.720 |
I mean, I keep assigning the individual consciousness. 00:14:38.000 |
- Yeah, I mean, well, a lot of observers in the simulation, 00:14:51.680 |
So this would be the class of observers that we need. 00:14:58.520 |
So the question is, given that class of observers, 00:15:24.440 |
of the anthropic reasoning cases that we mentioned. 00:15:27.680 |
I mean, does it have to be-- - It's like the observer, 00:15:30.120 |
no, I mean, it may be an easier way of putting it, 00:15:33.080 |
it's just like, are you simulated or are you not simulated? 00:15:36.480 |
Given this assumption that these two groups of people exist. 00:15:42.920 |
- Yeah, so the key point is the methodological assumption 00:15:47.160 |
you need to make to get the simulation argument 00:15:50.800 |
to where it wants to go is much weaker and less problematic 00:15:55.280 |
than the methodological assumption you need to make 00:15:57.840 |
to get the doomsday argument to its conclusion. 00:16:00.880 |
Maybe the doomsday argument is sound or unsound, but it needs a stronger 00:16:06.360 |
and more controversial assumption to make it go through. 00:16:16.000 |
to support this bland principle of indifference 00:16:23.760 |
where the fraction of people who are simulated 00:16:30.840 |
So in the limiting case where everybody is simulated, 00:16:43.040 |
If everybody with your experiences is simulated 00:16:50.760 |
You just kind of logically conclude it, right? 00:17:04.880 |
it should seem plausible that the probability assigned 00:17:20.120 |
- And so you wouldn't expect that to be discrete. 00:17:29.280 |
There are other arguments as well one can use 00:17:33.160 |
to support this bland principle of indifference, 00:17:37.880 |
- But in general, when you start from time equals zero 00:17:40.800 |
and go into the future, the fraction of simulated, 00:17:47.560 |
the fraction of simulated worlds will go to one. 00:17:49.960 |
- Well, it probably won't go all the way to one. 00:17:59.120 |
although maybe a technologically mature civilization 00:18:08.400 |
It probably wouldn't be able to run infinitely many. 00:18:21.920 |
there would be limits to the amount of information processing 00:18:34.040 |
- Well, first of all, there's a limited amount 00:18:37.240 |
because with a positive cosmological constant, 00:18:49.800 |
And then if you think there is like a lower limit 00:18:55.960 |
when you perform an erasure of a computation, 00:18:59.720 |
just matter gradually over cosmological timescales, 00:19:08.840 |
like there's all kinds of seemingly unavoidable losses 00:19:22.720 |
- So it's finite, but of course we don't know which, 00:19:34.800 |
like an arbitrary number of simulations that spawned ours 00:19:45.040 |
- Sorry, what do you mean that that could be? 00:19:47.240 |
- So sort of, okay, so if simulations spawn other simulations 00:19:52.240 |
it seems like each new spawn has fewer resources 00:20:02.440 |
- But we don't know at which step along the way we are. 00:20:20.480 |
- I mean, it's true that there would be uncertainty 00:20:35.880 |
all the computations performed in a simulation 00:20:41.560 |
within the simulation also have to be expended 00:20:41.560 |
So the computer in basement reality, where all the simulations and the simulations within them are running, 00:20:49.200 |
like that computer, ultimately its CPU or whatever it is, 00:20:52.080 |
like that has to power this whole tower, right? 00:20:58.280 |
So if there is a finite compute power in basement reality 00:21:02.680 |
that would impose a limit to how tall this tower can be. 00:21:06.520 |
And if each level kind of imposes a large extra overhead, 00:21:11.440 |
you might think maybe the tower would not be very tall 00:21:13.960 |
and that most people would be low down in the tower.
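A back-of-the-envelope sketch of that last point, with placeholder numbers that are not from the conversation: a finite compute budget in basement reality plus a per-level overhead factor caps how deep a tower of nested simulations can go.

```python
# Illustrative only: every quantity here is a made-up placeholder.
basement_compute = 10**40   # hypothetical total compute available in basement reality
min_per_sim = 10**20        # hypothetical minimum compute needed to run one simulation
overhead = 10               # hypothetical cost multiplier added by each level of nesting

# Compute available at each deeper level shrinks by the overhead factor,
# so the tower ends once a level can no longer afford a single simulation.
depth = 0
available = basement_compute
while available // overhead >= min_per_sim:
    available //= overhead
    depth += 1

print(depth)  # 20 levels with these placeholder numbers
# Deeper levels also have exponentially less compute, hence fewer simulated observers.
```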