
Nick Bostrom on the Joe Rogan Podcast Conversation About the Simulation | AI Podcast Clips


Chapters

0:00
1:40 The Bland Principle of Indifference
4:10 Doomsday Argument
6:54 The Doomsday Argument
11:01 The Self Sampling Assumption

Transcript

- So part three of the argument says that, so that leads us to a place where eventually somebody creates a simulation. Now I think you had a conversation with Joe Rogan, I think there's some aspect here where you got stuck a little bit. How does that lead to we're likely living in a simulation?

So this kind of probability argument, if somebody eventually creates a simulation, why does that mean that we're now in a simulation? - What you get to, if you accept alternative three first, is there would be more simulated people with our kinds of experiences than non-simulated ones. Like if, kind of, if you look at the world as a whole by the end of time as it were, you just count it up, there would be more simulated ones than non-simulated ones.

Then there is an extra step to get from that. If you assume that, suppose for the sake of the argument that that's true, how do you get from that to the statement we are probably in a simulation? So here you're introducing an indexical statement, like the statement that this person right now is in a simulation.

There are all these other people that are in simulations, and some that are not in a simulation. But what probability should you have that you yourself are one of the simulated ones, right, in this setup? So yeah, so I call it the bland principle of indifference, which is that in cases like this, when you have two, I guess, sets of observers, one of which is much larger than the other, and you can't, from any internal evidence you have, tell which set you belong to, you should assign a probability that's proportional to the size of the sets.
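To make the principle just stated concrete, here is a minimal sketch in Python; the set sizes are illustrative assumptions, not figures from the argument itself.

```python
# Bland principle of indifference: with no internal evidence telling you
# which set of observers you belong to, assign credence proportional to
# the sizes of the sets. The counts below are assumed for illustration.

def indifference_credence(n_simulated: int, n_unsimulated: int) -> float:
    """Credence that you are simulated, given only the two set sizes."""
    return n_simulated / (n_simulated + n_unsimulated)

# If there are 10 times as many simulated observers with your experiences:
print(indifference_credence(10, 1))  # ~0.909, i.e. 10 times more likely simulated
```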

So that if there are 10 times more simulated people with your kinds of experiences, you would be 10 times more likely to be one of those. - Is that as intuitive as it sounds? I mean, that seems kind of reasonable, if you don't have enough information, you should rationally just assign probability proportional to the size of the set.

- It seems pretty plausible to me. - Where are the holes in this? Is it at the very beginning, the assumption that everything stretches, sort of you have infinite time, essentially? - You don't need infinite time. - You just need, how long does the time-- - Well, however long it takes, I guess, for a universe to produce an intelligent civilization that then attains the technology to run some ancestor simulations.

- Gotcha. At some point, when the first simulation is created, it's just a little longer until they'll all start creating simulations. Kind of like order of magnitude. - Yeah, well, I mean, it might be different, if you think of there being a lot of different planets, and some subset of them have life, and then some subset of those get to intelligent life, and some of those maybe eventually start creating simulations, they might get started at quite different times.

Like maybe on some planet, it takes a billion years longer before you get monkeys, or before you even get bacteria, than on another planet. So this might happen kind of at different cosmological epochs. - Is there a connection here to the doomsday argument and that sampling there? - Yeah, there is a connection, in that they both involve an application of anthropic reasoning, that is, reasoning about these kinds of indexical propositions.

But the assumption you need, in the case of the simulation argument, is much weaker than the assumption you need to make the doomsday argument go through. - What is the doomsday argument, and maybe you can speak to the anthropic reasoning more generally. - Yeah, that's a big and interesting topic in its own right, anthropics.

But the doomsday argument was really first discovered by Brandon Carter, who was a theoretical physicist, and then developed by the philosopher John Leslie. I think it might've been discovered initially in the '70s or '80s, and Leslie wrote this book, I think, in '96. And there are some other versions as well, by Richard Gott, who's a physicist, but let's focus on the Carter-Leslie version, where it's an argument that we have systematically underestimated the probability that humanity will go extinct soon.

Now, I should say, most people probably think, at the end of the day, there is something wrong with this doomsday argument, that it doesn't really hold. It's like there's something wrong with it, but it's proved hard to say exactly what is wrong with it. And different people have different accounts.

My own view is it seems inconclusive. But I can say what the argument is. - Yeah, that would be good. - Yeah, so maybe it's easiest to explain via an analogy to sampling from urns. So imagine you have two urns in front of you, and they have balls in them that have numbers.

The two urns look the same, but inside one, there are 10 balls. Ball number one, two, three, up to ball number 10. And then in the other urn, you have a million balls numbered one to a million. And somebody puts one of these urns in front of you and asks you to guess what's the chance it's the 10-ball urn.

And you say, well, 50-50, I can't tell which urn it is. But then you're allowed to reach in and pick a ball at random from the urn. And let's suppose you find that it's ball number seven. So that's strong evidence for the 10-ball hypothesis. Like it's a lot more likely that you would get such a low-numbered ball if there are only 10 balls in the urn.

Like it's in fact 10%, right? Whereas if there are a million balls, it would be very unlikely you would get number seven. So you perform a Bayesian update. And if your prior was 50-50 that it was the 10-ball urn, you become virtually certain, after finding the random sample was seven, that it only has 10 balls in it.
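As a quick check of the arithmetic, here is a minimal sketch of that Bayesian update, using exactly the numbers from the urn example above.

```python
# Two-urn Bayes update: prior 50/50, then you draw ball number 7.
# Likelihood of drawing ball 7 is 1/10 from the 10-ball urn and
# 1/1,000,000 from the million-ball urn.

prior_10 = prior_million = 0.5
like_10 = 1 / 10
like_million = 1 / 1_000_000

posterior_10 = (prior_10 * like_10) / (prior_10 * like_10 + prior_million * like_million)
print(posterior_10)  # ~0.99999, virtually certain it's the 10-ball urn
```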

So in the case of the urns, this is uncontroversial, just elementary probability theory. The doomsday argument says that you should reason in a similar way with respect to different hypotheses about how many balls there will be in the urn of humanity, that is, how many humans there will ever be by the time we go extinct.

So to simplify, let's suppose we only consider two hypotheses, either maybe 200 billion humans in total or 200 trillion humans in total. You could fill in more hypotheses, but it doesn't change the principle here. So it's easiest to see if we just consider these two. So you start with some prior based on ordinary empirical ideas about threats to civilization and so forth.

And maybe you say it's a 5% chance that we will go extinct by the time there will have been 200 billion only, kind of optimistic, let's say, you think probably we'll make it through, colonize the universe. But then according to this doomsday argument, you should think of your own birth rank as a random sample.

So your birth rank is your position in the sequence of all humans that have ever existed. It turns out you're about human number 100 billion, you know, give or take. That's like roughly how many people have been born before you. - That's fascinating, 'cause, I suppose, we each have a number.

- We would each have a number in this. I mean, obviously the exact number would depend on where you started counting, like which ancestors were human enough to count as human. But those are not really important. There are relatively few of them. So yeah, so you're roughly number 100 billion.

Now, if there are only gonna be 200 billion in total, that's a perfectly unremarkable number. You're somewhere in the middle, right? A run-of-the-mill human, completely unsurprising. Now, if there are gonna be 200 trillion, you would be remarkably early. Like what are the chances, out of these 200 trillion humans, that you should be human number 100 billion?

That seems like it would have a much lower conditional probability. And so, analogously to how in the urn case, after finding this low-numbered random sample, you updated in favor of the urn having few balls, similarly, in this case, you should update in favor of the human species having a lower total number of members.

That is doom soon. - You said doom soon? That's the- - Well, that would be the hypothesis in this case, that it will end after 100 billion. - I just like that term for that hypothesis. So what it kind of crucially relies on, the doomsday argument, is the idea that you should reason as if you were a random sample from the set of all humans that will ever have existed.
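For concreteness, here is a minimal sketch of the update that assumption licenses, using the numbers from the conversation (a 5% prior on only 200 billion humans ever, and a birth rank of roughly 100 billion); treating your rank as uniform over the total population is the contentious self-sampling step, not an established fact.

```python
# Doomsday-style update with the numbers used in the conversation:
# prior 5% on "200 billion humans total" (doom soon), 95% on "200 trillion".
# Under the self-sampling assumption, birth rank ~100 billion has likelihood
# 1/N if the total number of humans is N (all ranks up to N equally likely).

prior_soon, prior_late = 0.05, 0.95
n_soon, n_late = 200e9, 200e12

like_soon = 1 / n_soon    # rank 100 billion is possible under both hypotheses
like_late = 1 / n_late

posterior_soon = (prior_soon * like_soon) / (prior_soon * like_soon + prior_late * like_late)
print(posterior_soon)  # ~0.98, so the 5% prior on "doom soon" jumps to roughly 98%
```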

If you have that assumption, then I think the rest kind of follows. The question then is why should you make that assumption? In fact, you know you're number 100 billion or so, so where do you get this prior? And then there is a literature on that, with different ways of supporting that assumption.

And it's- - That's just one example of anthropic reasoning, right? - Yeah. - That seems to be kind of convenient when you think about humanity, when you think about sort of even like existential threats and so on, as it seems quite natural that you should assume that you're just an average case.

- Yeah. That you're kind of a typical, randomly sampled case. Now, in the case of the doomsday argument, it seems to lead to what intuitively we think is the wrong conclusion, or at least many people have this reaction that there's gotta be something fishy about this argument, because from very, very weak premises, it gets this very striking implication that we have almost no chance of reaching a size of 200 trillion humans in the future.

And how could we possibly get there just by reflecting on when we were born? It seems you would need sophisticated arguments about the impossibility of space colonization, blah, blah. So one might be tempted to reject this key assumption. I call it the self-sampling assumption. The idea that you should reason as if you were a random sample from all observers, or from all observers in some reference class.

However, it turns out that in other domains, it looks like we need something like this self-sampling assumption to make sense of bona fide scientific inferences. In contemporary cosmology, for example, you have these multiverse theories. And according to a lot of those, all possible human observations are made. So I mean, if you have a sufficiently large universe, you will have a lot of people observing all kinds of different things.

So if you have two competing theories, say about the value of some constant, it could be true according to both of these theories that there will be some observers observing the value that corresponds to the other theory, because there will be some observers that have hallucinations or there's a local fluctuation or a statistically anomalous measurement, these things will happen.

And if enough observers make enough different observations, there will be some that sort of by chance make these different ones. And so what we would want to say is, well, many more observers, a larger proportion of the observers will observe as it were the true value. And a few will observe the wrong value.

If we think of ourselves as a random sample, we should expect, with high probability, to observe the true value. And that will then allow us to conclude that the evidence we actually have is evidence for the theories we think are supported. It's then a way of making sense of these inferences that clearly seem correct, that we can make various observations and infer what the temperature of the cosmic background is, and the fine-structure constant, and all of this.
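The following is a minimal sketch of how that self-sampling step does the work; the 0.9999 fraction of observers who see the non-anomalous value is an assumption for illustration, not a number from the conversation.

```python
# Self-sampling in the multiverse case: under either theory some observers
# see the "wrong" value (fluctuations, hallucinations), but most see the
# value the true theory predicts. Treating yourself as a random observer
# turns your measurement into evidence. The 0.9999 is an assumed fraction.

p_typical = 0.9999            # assumed fraction of observers who see the true value

prior_A = prior_B = 0.5       # two competing theories about some constant
# You observe the value predicted by theory A:
like_A = p_typical            # if A is true, most observers see A's value
like_B = 1 - p_typical        # if B is true, only anomalous observers see A's value

posterior_A = prior_A * like_A / (prior_A * like_A + prior_B * like_B)
print(posterior_A)  # 0.9999, so the observation supports theory A, as intended
```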

But it seems that without bringing in some assumption similar to the self-sampling assumption, this inference just doesn't go through. And there are other examples. So there are these scientific contexts where it looks like this kind of anthropic reasoning is needed and makes perfect sense. And yet in the case of the doomsday argument, it has this weird consequence, and people might think there's something wrong with it there.

So there's then this project that would consist in trying to figure out what are the legitimate ways of reasoning about these indexical facts when observer selection effects are in play. In other words, developing a theory of anthropics. And there are different ways of looking at that. And it's a difficult methodological area.

But to tie it back to the simulation argument, the key assumption there, this bland principle of indifference, is much weaker than the self-sampling assumption. So if you think about it, in the case of the doomsday argument, it says you should reason as if you're a random sample from all humans that will ever have lived, even though, in fact, you know that you are about the 100 billionth human and you're alive in the year 2020.

Whereas in the case of the simulation argument, it says that, well, if you actually have no way of telling which one you are, then you should assign this kind of uniform probability. - Yeah, yeah, your role as the observer in the simulation argument is different, it seems like. Like, who's the observer?

I mean, I keep assigning the individual consciousness. - Yeah, I mean, well, a lot of observers in the simulation, in the context of the simulation argument, the relevant observers would be, A, the people in original histories, and B, the people in simulations. So this would be the class of observers that we need.

I mean, there are also maybe the simulators, but we can set those aside for this. So the question is, given that class of observers, a small set of original history observers and a large class of simulated observers, which one should you think is you? Where are you amongst this set of observers?

- I'm maybe having a little bit of trouble wrapping my head around the intricacies of what it means to be an observer in this, in the different instantiations of the anthropic reasoning cases that we mentioned. I mean, does it have to be-- - No, I mean, maybe an easier way of putting it is just: are you simulated or are you not simulated?

Given this assumption that these two groups of people exist. - Yeah, in the simulation case, it seems pretty straightforward. - Yeah, so the key point is the methodological assumption you need to make to get the simulation argument to where it wants to go is much weaker and less problematic than the methodological assumption you need to make to get the doomsday argument to its conclusion.

Maybe the doomsday argument is sound or unsound, but you need to make a much stronger and more controversial assumption to make it go through. In the case of the doomsday argument, sorry, the simulation argument, I guess one way to maybe pump intuition in support of this bland principle of indifference is to consider a sequence of different cases where the fraction of people who are simulated, relative to the non-simulated, approaches one.

So in the limiting case where everybody is simulated, obviously you can deduce with certainty that you are simulated. If everybody with your experiences is simulated, and you know you've got to be one of those, you don't need the probability at all. You just kind of logically conclude it, right? - Right.

- So then as we move from a case where, say, 90% of everybody is simulated, to 99%, to 99.9%, it should seem plausible that the probability you assign should sort of approach one, certainty, as the fraction approaches the case where everybody is in a simulation. - Yeah, exactly. - And so you wouldn't expect that to be a discrete jump.

Well, if there's one non-simulated person, then it's 50/50, but if we removed that one, then it's 100%, like it should kind of... There are other arguments as well one can use to support this bland principle of indifference, but that might be enough to... - But in general, when you start from time equals zero and go into the future, the fraction of simulated, if it's possible to create simulated worlds, the fraction of simulated worlds will go to one.

- Well, it probably won't go all the way to one. In reality, there would be some ratio, although maybe a technologically mature civilization could run a lot of simulations using a small portion of its resources. It probably wouldn't be able to run infinitely many. I mean, if we take, say, the physics in the observed universe, if we assume that that's also the physics at the level of the simulators, there would be limits to the amount of information processing that any one civilization could perform in its future trajectory.

- Right, I mean, that's... - Well, first of all, there's a limited amount of matter you can get your hands on, because with a positive cosmological constant, the universe is accelerating. There's like a finite sphere of stuff that you could ever reach, even if you travel at the speed of light, so you have a finite amount of stuff.

And then if you think there is like a lower limit to the amount of loss you get when you perform an erasure of a computation, or if you think, for example, that matter just gradually decays over cosmological timescales, maybe protons decay, other things, and you radiate out gravitational waves, like there's all kinds of seemingly unavoidable losses that occur.
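As a rough illustration of the first kind of loss mentioned there, here is a minimal sketch of Landauer's bound on the energy cost of erasing a bit; the ~2.7 K operating temperature is an assumption (roughly the temperature of the cosmic microwave background today), not a figure from the conversation.

```python
import math

# Landauer's bound: erasing one bit of information dissipates at least
# k_B * T * ln(2) of energy, so a finite energy budget implies a finite
# number of irreversible computational steps.

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 2.7              # assumed operating temperature in kelvin (~CMB today)

energy_per_bit = k_B * T * math.log(2)
print(energy_per_bit)  # ~2.6e-23 joules per irreversible bit erasure

# With total available energy E, the number of erasures is at most
# E / (k_B * T * ln 2), one way the future computation of any
# civilization ends up finite.
```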

So eventually we'll have something like a heat death of the universe, or a cold death or whatever, but yeah. - So it's finite, but of course, if there are many ancestor simulations, we don't know which level we are at. So could there be, couldn't there be like an arbitrary number of simulations that spawned ours, and those had more resources, in terms of physical universe, to work with?

- Sorry, what do you mean, that that could be? - So, sort of, okay, so if simulations spawn other simulations, it seems like each new spawn has fewer resources to work with. - Yeah. - But we don't know which step along the way we are at. - Right.

- Any one observer doesn't know whether we're in level 42 or 100 or one, or does that not matter for the resources? I mean, it's- - I mean, it's true that there would be uncertainty, as you could have stacked simulations. - Yes, that's how- - And there could then be uncertainty as to which level we are at.

As you remarked also, all the computations performed in a simulation within the simulation also have to be expended at the level of the simulation. - Right. - So the computer in basement reality, where all the simulations within the simulations within the simulations are taking place, like that computer, ultimately its CPU or whatever it is, has to power this whole tower, right?

So if there is finite compute power in basement reality, that would impose a limit on how tall this tower can be. And if each level kind of imposes a large extra overhead, you might think maybe the tower would not be very tall, and that most people would be low down in the tower.
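Here is a minimal sketch of that point; the basement compute budget, the per-level overhead factor, and the cost of one simulation are all made-up numbers purely for illustration.

```python
# Every operation at nesting level n must ultimately be carried out by the
# basement-reality computer, so if each level adds a constant overhead
# factor, the compute available shrinks geometrically with depth and the
# tower of simulations-within-simulations stays short.

basement_ops = 10**40      # assumed total operations available in basement reality
overhead = 100             # assumed cost multiplier per level of nesting
min_ops_per_sim = 10**30   # assumed operations needed to run one rich simulation

level, ops = 0, basement_ops
while ops // overhead >= min_ops_per_sim:
    level += 1
    ops //= overhead

print(level)  # deepest nesting level that still has enough compute (5 with these numbers)
```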

- I love the term basement reality.