
Nick Bostrom on the Joe Rogan Podcast Conversation About the Simulation | AI Podcast Clips


Chapters

0:00
1:40 The Bland Principle of Indifference
4:10 Doomsday Argument
6:54 The Doomsday Argument
11:01 The Self-Sampling Assumption

Whisper Transcript

00:00:00.000 | - So part three of the argument says that,
00:00:04.620 | so that leads us to a place where
00:00:06.540 | eventually somebody creates a simulation.
00:00:10.260 | Now I think you had a conversation with Joe Rogan,
00:00:13.960 | I think there's some aspect here
00:00:15.740 | where you got stuck a little bit.
00:00:17.400 | How does that lead to we're likely living in a simulation?
00:00:24.460 | So this kind of probability argument,
00:00:28.760 | if somebody eventually creates a simulation,
00:00:31.000 | why does that mean that we're now in a simulation?
00:00:34.000 | - What you get to if you accept alternative three first
00:00:37.300 | is there would be more simulated people
00:00:41.700 | with our kinds of experiences than non simulated ones.
00:00:44.700 | Like if, kind of, if you look at the world as a whole
00:00:49.700 | by the end of time as it were, you just count it up,
00:00:53.340 | there would be more simulated ones than non simulated ones.
00:00:57.840 | Then there is an extra step to get from that.
00:01:01.500 | If you assume that, suppose for the sake of the argument
00:01:03.560 | that that's true, how do you get from that
00:01:07.720 | to the statement we are probably in a simulation?
00:01:12.720 | So here you're introducing an indexical statement,
00:01:18.240 | like it's that this person right now is in a simulation.
00:01:23.240 | There are all these other people,
00:01:27.080 | that are in simulations and some
00:01:28.800 | that are not in a simulation.
00:01:30.240 | But what probability should you have that you yourself
00:01:34.680 | are one of the simulated ones, right?
00:01:38.000 | In this setup.
00:01:38.840 | So yeah, so I call it the bland principle of indifference,
00:01:42.320 | which is that in cases like this,
00:01:46.560 | when you have two, I guess, sets of observers,
00:01:49.520 | one of which is much larger than the other,
00:01:54.520 | and you can't from any internal evidence you have,
00:01:58.280 | tell which set you belong to,
00:02:01.360 | you should assign a probability that's proportional
00:02:05.880 | to the size of the sets.
00:02:08.560 | So that if there are 10 times more simulated people
00:02:12.040 | with your kinds of experiences,
00:02:13.560 | you would be 10 times more likely to be one of those.
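To make the arithmetic of the bland principle concrete, here is a minimal sketch in Python (the function name and the counts are illustrative, not from the conversation). Note that "10 times more likely" means 10-to-1 odds, i.e., a credence of 10/11:

```python
# Bland principle of indifference: with no internal evidence to tell
# the two sets of observers apart, assign credence proportional to
# the sizes of the sets.
def p_simulated(n_simulated: float, n_unsimulated: float) -> float:
    """Credence that you are simulated, given the two observer counts."""
    return n_simulated / (n_simulated + n_unsimulated)

# Bostrom's example: 10x more simulated observers with your experiences.
print(p_simulated(10, 1))  # 0.909... -- 10-to-1 odds of being simulated
```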
00:02:16.840 | - Is that as intuitive as it sounds?
00:02:19.040 | I mean, that seems kind of,
00:02:21.720 | if you don't have enough information,
00:02:23.160 | you should rationally just assign probability
00:02:26.200 | in proportion to the sizes of the sets.
00:02:29.200 | - It seems pretty plausible to me.
00:02:34.080 | - Where are the holes in this?
00:02:35.360 | Is it at the very beginning,
00:02:38.080 | the assumption that everything stretches,
00:02:40.720 | sort of you have infinite time, essentially?
00:02:43.720 | - You don't need infinite time.
00:02:45.240 | - You just need, how long does the time--
00:02:48.160 | - Well, however long it takes, I guess,
00:02:49.800 | for a universe to produce an intelligent civilization
00:02:54.200 | that then attains the technology
00:02:55.560 | to run some ancestor simulations.
00:02:57.360 | - Gotcha.
00:02:58.200 | At some point, when the first simulation is created,
00:03:01.400 | within a stretch of time just a little longer than that,
00:03:03.920 | they'll all start creating simulations.
00:03:06.560 | Kind of like the same order of magnitude.
00:03:07.400 | - Yeah, well, I mean, there might be different,
00:03:09.560 | it might, if you think of there being
00:03:12.280 | a lot of different planets,
00:03:13.360 | and some subset of them have life,
00:03:16.160 | and then some subset of those get to intelligent life,
00:03:19.360 | and some of those maybe eventually
00:03:21.520 | start creating simulations,
00:03:23.400 | they might get started at quite different times.
00:03:25.760 | Like maybe on some planet,
00:03:27.560 | it takes a billion years longer before you get monkeys,
00:03:31.880 | or before you get even bacteria, than on another planet.
00:03:35.200 | So this might happen kind of
00:03:40.080 | at different cosmological epochs.
00:03:43.280 | - Is there a connection here to the doomsday argument
00:03:45.760 | and that sampling there?
00:03:47.760 | - Yeah, there is a connection,
00:03:49.880 | in that they both involve an application
00:03:53.920 | of anthropic reasoning,
00:03:55.400 | that is, reasoning about these kinds of indexical propositions.
00:03:59.400 | But the assumption you need,
00:04:01.160 | in the case of the simulation argument,
00:04:04.400 | is much weaker than the assumption you need
00:04:08.520 | to make the doomsday argument go through.
00:04:11.960 | - What is the doomsday argument,
00:04:13.480 | and maybe you can speak to anthropic reasoning
00:04:16.280 | more generally.
00:04:17.360 | - Yeah, that's a big and interesting topic
00:04:19.680 | in its own right, anthropics.
00:04:21.400 | But the doomsday argument was really first discovered
00:04:26.360 | by Brandon Carter, who was a theoretical physicist,
00:04:29.600 | and then developed by the philosopher John Leslie.
00:04:34.000 | I think it might've been discovered initially
00:04:36.680 | in the '70s or '80s,
00:04:38.000 | and Leslie wrote this book, I think, in '96.
00:04:41.560 | And there are some other versions as well,
00:04:44.080 | by Richard Gott, who's a physicist,
00:04:45.720 | but let's focus on the Carter-Leslie version,
00:04:47.920 | where it's an argument that we have systematically
00:04:52.920 | underestimated the probability
00:04:57.760 | that humanity will go extinct soon.
00:05:00.760 | Now, I should say, most people probably think,
00:05:06.160 | at the end of the day, there is something wrong
00:05:07.560 | with this doomsday argument, that it doesn't really hold.
00:05:10.520 | It's like there's something wrong with it,
00:05:11.920 | but it's proved hard to say exactly what is wrong with it.
00:05:15.640 | And different people have different accounts.
00:05:17.920 | My own view is it seems inconclusive.
00:05:22.240 | But, and I can say what the argument is.
00:05:25.080 | - Yeah, that would be good.
00:05:26.120 | - Yeah, so maybe it's easiest to explain via an analogy
00:05:31.120 | to sampling from urns.
00:05:36.080 | So you imagine you have a big,
00:05:37.520 | imagine you have two urns in front of you,
00:05:41.240 | and they have balls in them that have numbers.
00:05:44.960 | The two urns look the same,
00:05:46.280 | but inside one, there are 10 balls.
00:05:48.240 | Ball number one, two, three, up to ball number 10.
00:05:50.800 | And then in the other urn, you have a million balls
00:05:55.560 | numbered one to a million.
00:05:58.760 | And somebody puts one of these urns in front of you
00:06:02.640 | and asks you to guess what's the chance it's the 10-ball urn.
00:06:07.400 | And you say, well, 50-50, I can't tell which urn it is.
00:06:10.400 | But then you're allowed to reach in
00:06:13.720 | and pick a ball at random from the urn.
00:06:16.200 | And let's suppose you find that it's ball number seven.
00:06:18.960 | So that's strong evidence for the 10-ball hypothesis.
00:06:23.800 | Like it's a lot more likely
00:06:25.720 | that you would get such a low-numbered ball
00:06:29.040 | if there are only 10 balls in the urn.
00:06:30.320 | Like it's in fact 10% then, right?
00:06:32.120 | Then if there are a million balls,
00:06:35.000 | it would be very unlikely you would get number seven.
00:06:37.920 | So you perform a Bayesian update.
00:06:40.960 | And if your prior was 50-50 that it was the 10-ball urn,
00:06:45.520 | you become virtually certain
00:06:46.720 | after finding the random sample was seven
00:06:49.200 | that it only has 10 balls in it.
00:06:51.600 | So in the case of the urns, this is uncontroversial,
00:06:53.600 | just elementary probability theory.
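Written out as a Bayesian update with the exact numbers from the example (a sketch; the variable names are ours):

```python
# Two-urn example: 50/50 prior, then a random draw comes up ball #7.
prior_small = 0.5          # prior that it's the 10-ball urn
prior_big = 0.5            # prior that it's the million-ball urn
lik_small = 1 / 10         # P(draw ball #7 | 10 balls)
lik_big = 1 / 1_000_000    # P(draw ball #7 | a million balls)

posterior_small = (prior_small * lik_small) / (
    prior_small * lik_small + prior_big * lik_big)
print(posterior_small)  # ~0.99999 -- "virtually certain" it's the 10-ball urn
```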
00:06:55.760 | The Doomsday Argument says that you should reason
00:06:58.800 | in a similar way with respect to different hypotheses
00:07:02.440 | about how many balls there will be
00:07:06.080 | in the urn of humanity, as it were,
00:07:07.840 | for how many humans there will ever be
00:07:09.920 | by the time we go extinct.
00:07:11.280 | So to simplify, let's suppose we only consider
00:07:15.040 | two hypotheses, either maybe 200 billion humans in total
00:07:19.840 | or 200 trillion humans in total.
00:07:22.720 | You could fill in more hypotheses,
00:07:25.680 | but it doesn't change the principle here.
00:07:27.640 | So it's easiest to see if we just consider these two.
00:07:30.440 | So you start with some prior
00:07:31.720 | based on ordinary empirical ideas
00:07:34.280 | about threats to civilization and so forth.
00:07:37.160 | And maybe you say it's a 5% chance
00:07:39.160 | that we will go extinct by the time
00:07:41.680 | there will have been only 200 billion,
00:07:44.040 | kind of optimistic, let's say,
00:07:45.520 | you think probably we'll make it through,
00:07:47.200 | colonize the universe.
00:07:48.400 | But then according to this Doomsday Argument,
00:07:52.800 | you should think of your own birth rank
00:07:55.400 | as a random sample.
00:07:58.480 | So your birth rank is your position in the sequence
00:08:01.560 | of all humans that have ever existed.
00:08:06.080 | It turns out you're about human number 100 billion,
00:08:10.080 | you know, give or take.
00:08:10.920 | That's like roughly how many people
00:08:12.440 | have been born before you.
00:08:13.680 | - That's fascinating 'cause I probably,
00:08:15.880 | we each have a number.
00:08:17.480 | - We would each have a number in this.
00:08:19.560 | I mean, obviously the exact number would depend
00:08:22.400 | on where you started counting,
00:08:23.680 | like which ancestor was human enough to count as human.
00:08:27.320 | But those are not really important.
00:08:29.400 | There are relatively few of them.
00:08:31.320 | So yeah, so you're roughly 100 billion.
00:08:34.520 | Now, if they're only gonna be 200 billion in total,
00:08:36.960 | that's a perfectly unremarkable number.
00:08:39.400 | You're somewhere in the middle, right?
00:08:41.600 | You're a run-of-the-mill human, completely unsurprising.
00:08:45.760 | Now, if they're gonna be 200 trillion,
00:08:47.320 | you would be remarkably early.
00:08:50.200 | Like what are the chances, out of these 200 trillion humans,
00:08:54.360 | that you should be human number 100 billion?
00:08:58.280 | That seems it would have
00:09:00.040 | a much lower conditional probability.
00:09:03.320 | And so analogously to how in the urn case,
00:09:06.040 | you thought after finding this low numbered random sample,
00:09:10.400 | you updated in favor of the urn having few balls.
00:09:12.960 | Similarly, in this case,
00:09:14.640 | you should update in favor of the human species
00:09:17.800 | having a lower total number of members.
00:09:20.880 | That is, doom soon.
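Plugging in the numbers from this passage makes the update explicit (a sketch; the 5% prior and the two hypotheses are the ones just stated):

```python
# Doomsday update: treat your birth rank (~100 billion) as a random
# sample from all humans who will ever live.
prior_doom = 0.05        # prior: humanity ends at 200 billion humans total
prior_late = 0.95        # prior: humanity reaches 200 trillion humans total
lik_doom = 1 / 200e9     # P(rank = 100 billion | 200 billion total)
lik_late = 1 / 200e12    # P(rank = 100 billion | 200 trillion total)

posterior_doom = (prior_doom * lik_doom) / (
    prior_doom * lik_doom + prior_late * lik_late)
print(posterior_doom)  # ~0.98 -- the optimistic 5% prior on doom jumps to ~98%
```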
00:09:22.800 | - You said doom soon?
00:09:24.000 | That's the-
00:09:25.200 | - Well, that would be the hypothesis in this case,
00:09:27.520 | that it will end after 100 billion.
00:09:30.080 | - I just like that term for that hypothesis.
00:09:32.600 | So what it kind of crucially relies on,
00:09:35.600 | the doomsday argument,
00:09:36.440 | is the idea that you should reason
00:09:40.080 | as if you were a random sample
00:09:42.240 | from the set of all humans that will ever have existed.
00:09:45.760 | If you have that assumption,
00:09:46.760 | then I think the rest kind of follows.
00:09:49.320 | The question then is why should you make that assumption?
00:09:52.120 | In fact, you know you're 100 billion,
00:09:54.360 | so where do you get this prior?
00:09:56.920 | And then there is like a literature on that
00:09:58.720 | with different ways of supporting that assumption.
00:10:01.920 | And it's-
00:10:03.520 | - That's just one example of anthropic reasoning, right?
00:10:05.800 | - Yeah.
00:10:06.640 | - That seems to be kind of convenient
00:10:08.280 | when you think about humanity,
00:10:10.920 | when you think about sort of even like existential threats
00:10:14.240 | and so on,
00:10:15.400 | it seems quite natural
00:10:18.640 | that you should assume that you're just an average case.
00:10:21.440 | - Yeah.
00:10:23.840 | That you're kind of a typical, randomly sampled one.
00:10:26.560 | Now, in the case of the doomsday argument,
00:10:28.000 | it seems to lead to what intuitively we think
00:10:30.680 | is the wrong conclusion,
00:10:31.800 | or at least many people have this reaction
00:10:34.320 | that there's gotta be something fishy about this argument
00:10:37.440 | because from very, very weak premises,
00:10:39.920 | it gets this very striking implication
00:10:43.320 | that we have almost no chance
00:10:45.360 | of reaching a size of 200 trillion humans in the future.
00:10:49.240 | And how could we possibly get there
00:10:51.400 | just by reflecting on when we were born?
00:10:53.800 | It seems you would need sophisticated arguments
00:10:55.840 | about the impossibility of space colonization, blah, blah.
00:10:58.840 | So one might be tempted to reject this key assumption.
00:11:02.040 | I call it the self-sampling assumption.
00:11:03.840 | The idea that you should reason
00:11:04.880 | as if you were a random sample from all observers
00:11:07.200 | in some reference class.
00:11:09.840 | However, it turns out that in other domains,
00:11:14.960 | it looks like we need something
00:11:16.680 | like this self-sampling assumption
00:11:18.440 | to make sense of bona fide scientific inferences.
00:11:22.680 | In contemporary cosmology, for example,
00:11:25.280 | you have these multiverse theories.
00:11:27.480 | And according to a lot of those,
00:11:29.200 | all possible human observations are made.
00:11:33.280 | So I mean, if you have a sufficiently large universe,
00:11:35.800 | you will have a lot of people observing
00:11:37.200 | all kinds of different things.
00:11:38.680 | So if you have two competing theories,
00:11:42.120 | say about the value of some constant,
00:11:44.920 | it could be true according to both of these theories
00:11:50.560 | that there will be some observers observing the value
00:11:55.680 | that corresponds to the other theory,
00:11:58.120 | because there will be some observers that have hallucinations
00:12:01.480 | or there's a local fluctuation
00:12:03.360 | or a statistically anomalous measurement,
00:12:05.920 | these things will happen.
00:12:07.600 | And if enough observers make enough different observations,
00:12:10.720 | there will be some that sort of by chance
00:12:12.360 | make these different ones.
00:12:14.160 | And so what we would want to say is,
00:12:16.040 | well, many more observers,
00:12:21.040 | a larger proportion of the observers
00:12:23.240 | will observe as it were the true value.
00:12:26.200 | And a few will observe the wrong value.
00:12:28.600 | If we think of ourselves as a random sample,
00:12:30.520 | we should expect with high probability
00:12:32.920 | to observe the true value.
00:12:33.920 | And that will then allow us to conclude
00:12:37.120 | that the evidence we actually have
00:12:38.520 | is evidence for the theories we think are supported.
00:12:42.480 | It kind of then is a way of making sense
00:12:47.120 | of these inferences that clearly seem correct,
00:12:49.680 | that we can make various observations
00:12:52.760 | and infer what the temperature of the cosmic background is
00:12:56.920 | and the fine structure constant and all of this.
00:13:00.560 | But it seems that without rolling in some assumption
00:13:04.560 | similar to the self-sampling assumption,
00:13:07.440 | this inference just doesn't go through.
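A toy version of the cosmology case shows why something like self-sampling is needed (the observer fractions below are made-up assumptions for illustration, not from the episode):

```python
# Two multiverse theories about a constant; under each, a small fraction
# of observers measure the "wrong" value via hallucinations, local
# fluctuations, or anomalous measurements.
frac_v1_given_T1 = 0.999   # share of T1's observers who measure value v1
frac_v1_given_T2 = 0.001   # share of T2's observers who measure value v1
prior_T1 = prior_T2 = 0.5

# Self-sampling step: treat yourself as a random observer, so the
# observer fractions become likelihoods for your own measurement of v1.
posterior_T1 = (prior_T1 * frac_v1_given_T1) / (
    prior_T1 * frac_v1_given_T1 + prior_T2 * frac_v1_given_T2)
print(posterior_T1)  # ~0.999 -- your measurement now counts as evidence for T1
```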
00:13:09.440 | And there are other examples.
00:13:11.000 | So there are these scientific contexts
00:13:12.560 | where it looks like this kind of anthropic reasoning
00:13:14.600 | is needed and makes perfect sense.
00:13:16.920 | And yet in the case of the Doomsday Argument,
00:13:18.920 | it has this weird consequence
00:13:20.280 | and people might think there's something wrong with it there.
00:13:22.640 | So there's then this project
00:13:27.640 | that would consist in trying to figure out
00:13:31.520 | what are the legitimate ways of reasoning
00:13:34.480 | about these indexical facts
00:13:36.520 | when observer selection effects are in play.
00:13:38.680 | In other words, developing a theory of anthropics.
00:13:41.400 | And there are different views of looking at that.
00:13:44.200 | And it's a difficult methodological area.
00:13:47.400 | But to tie it back to the simulation argument,
00:13:52.240 | the key assumption there,
00:13:54.720 | this bland principle of indifference,
00:13:57.680 | is much weaker than the self-sampling assumption.
00:14:00.080 | So if you think about in the case of the Doomsday Argument,
00:14:04.600 | it says you should reason as if you're a random sample
00:14:07.840 | from all humans that will ever have lived,
00:14:09.400 | even though in fact, you know that you are
00:14:12.240 | about the 100 billionth human
00:14:15.600 | and you're alive in the year 2020.
00:14:17.960 | Whereas in the case of the simulation argument,
00:14:19.800 | it says that, well, if you actually have no way of telling
00:14:23.400 | which one you are, then you should assign
00:14:25.880 | this kind of uniform probability.
00:14:29.360 | - Yeah, yeah, your role as the observer
00:14:31.200 | in the simulation argument is different, it seems like.
00:14:34.040 | Like, who's the observer?
00:14:35.720 | I mean, I keep assigning the individual consciousness.
00:14:38.000 | - Yeah, I mean, well, a lot of observers in the simulation,
00:14:41.880 | in the context of the simulation argument,
00:14:43.520 | the relevant observers would be, A,
00:14:46.240 | the people in original histories,
00:14:48.480 | and B, the people in simulations.
00:14:51.680 | So this would be the class of observers that we need.
00:14:54.320 | I mean, there are also maybe the simulators,
00:14:55.760 | but we can set those aside for this.
00:14:58.520 | So the question is, given that class of observers,
00:15:01.440 | a small set of original history observers
00:15:04.600 | and a large class of simulated observers,
00:15:06.920 | which one should you think is you?
00:15:09.480 | Where are you amongst this set of observers?
00:15:12.040 | - I'm maybe having a little bit of trouble
00:15:14.720 | wrapping my head around the intricacies
00:15:17.800 | of what it means to be an observer in this,
00:15:21.240 | in the different instantiations
00:15:24.440 | of the anthropic reasoning cases that we mentioned.
00:15:27.680 | I mean, does it have to be-- - It's like the observer,
00:13:30.120 | no, I mean, maybe an easier way of putting it
00:13:33.080 | is just like, are you simulated or are you not simulated?
00:15:36.480 | Given this assumption that these two groups of people exist.
00:15:39.600 | - Yeah, in the simulation case,
00:15:40.720 | it seems pretty straightforward.
00:15:42.920 | - Yeah, so the key point is the methodological assumption
00:15:47.160 | you need to make to get the simulation argument
00:15:50.800 | to where it wants to go is much weaker and less problematic
00:15:55.280 | than the methodological assumption you need to make
00:15:57.840 | to get the doomsday argument to its conclusion.
00:16:00.880 | Maybe the doomsday argument is sound or unsound,
00:16:05.000 | but you need to make a much stronger
00:16:06.360 | and more controversial assumption to make it go through.
00:16:10.360 | In the case of the doomsday argument,
00:16:11.680 | sorry, the simulation argument,
00:16:12.840 | I guess maybe one way to pump intuition
00:16:16.000 | in support of this bland principle of indifference
00:16:19.240 | is to consider a sequence of different cases
00:16:23.760 | where the fraction of people who are simulated
00:16:27.080 | to non-simulated approaches one.
00:16:30.840 | So in the limiting case where everybody is simulated,
00:16:35.480 | obviously you can deduce with certainty
00:16:41.000 | that you are simulated.
00:16:43.040 | If everybody with your experiences is simulated
00:16:46.720 | and you know you've got to be one of those,
00:16:49.240 | you don't need the probability at all.
00:16:50.760 | You just kind of logically conclude it, right?
00:16:54.000 | - Right.
00:16:54.840 | - So then as we move from a case where say,
00:16:59.840 | 90% of everybody is simulated, 99%, 99.9%,
00:17:04.880 | it should seem plausible that the probability assigned
00:17:09.320 | should sort of approach one certainty
00:17:13.080 | as the fraction approaches the case
00:17:16.000 | where everybody is in a simulation.
00:17:19.280 | - Yeah, that's exactly.
00:17:20.120 | - And so you wouldn't expect that to be discrete.
00:17:23.160 | Well, if there's one non-simulated person,
00:17:24.960 | then it's 50/50, but if we remove that one,
00:17:27.160 | then it's 100%, like it should kind of...
00:17:29.280 | There are other arguments as well one can use
00:17:33.160 | to support this bland principle of indifference,
00:17:35.160 | but that might be enough to...
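The limiting-case intuition pump can be made concrete (a sketch; under the bland principle, with one non-simulated observer and n simulated ones, the credence is n/(n+1)):

```python
# One non-simulated observer and n simulated ones who share your
# experiences: under the bland principle, credence = n / (n + 1).
# The credence climbs smoothly toward 1 -- no discrete jump between
# "one real person left" and "everybody is simulated".
for n in [1, 9, 99, 999, 999_999]:
    print(n, n / (n + 1))   # 0.5, 0.9, 0.99, 0.999, 0.999999
```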
00:17:37.880 | - But in general, when you start from time equals zero
00:17:40.800 | and go into the future, the fraction of simulated,
00:17:44.960 | if it's possible to create simulated worlds,
00:17:47.560 | the fraction of simulated worlds will go to one.
00:17:49.960 | - Well, it probably won't go all the way to one.
00:17:55.640 | In reality, there would be some ratio,
00:17:59.120 | although maybe a technologically mature civilization
00:18:02.120 | could run a lot of simulations
00:18:05.760 | using a small portion of its resources.
00:18:08.400 | It probably wouldn't be able to run infinitely many.
00:18:11.520 | I mean, if we take, say, the physics
00:18:16.000 | in the observed universe,
00:18:17.040 | if we assume that that's also the physics
00:18:20.080 | at the level of the simulators,
00:18:21.920 | there would be limits to the amount of information processing
00:18:26.120 | that any one civilization could perform
00:18:29.440 | in its future trajectory.
00:18:30.880 | - Right, I mean, that's...
00:18:34.040 | - Well, first of all, there's a limited amount
00:18:35.640 | of matter you can get your hands on,
00:18:37.240 | because with a positive cosmological constant,
00:18:40.800 | the universe is accelerating.
00:18:42.800 | There's like a finite sphere of stuff
00:18:44.840 | that you could ever reach,
00:18:46.320 | even if you travel at the speed of light,
00:18:47.160 | so you have a finite amount of stuff.
00:18:49.800 | And then if you think there is like a lower limit
00:18:52.800 | to the amount of loss you get
00:18:55.960 | when you perform an erasure of a computation,
00:18:58.600 | or if you think, for example,
00:18:59.720 | that matter gradually decays over cosmological timescales,
00:19:03.360 | maybe protons decay, other things,
00:19:06.480 | and you radiate out gravitational waves,
00:19:08.840 | like there's all kinds of seemingly unavoidable losses
00:19:13.200 | that occur.
00:19:14.200 | So eventually we'll have something
00:19:17.600 | like a heat death of the universe
00:19:20.520 | or a cold death or whatever, but yeah.
00:19:22.720 | - So it's finite, but of course we don't know which,
00:19:24.920 | if there are many ancestral simulations,
00:19:29.640 | we don't know which level we are.
00:19:31.920 | So that could be, couldn't there be
00:19:34.800 | like an arbitrary number of simulation that spawned ours
00:19:38.240 | and those had more resources
00:19:41.040 | in terms of physical universe to work with?
00:19:45.040 | - Sorry, what do you mean that that could be?
00:19:47.240 | - So sort of, okay, so if simulations spawn other simulations
00:19:52.240 | it seems like each new spawn has fewer resources
00:20:00.280 | to work with.
00:20:01.160 | - Yeah.
00:20:02.440 | - But we don't know at which step along the way we are at.
00:20:07.440 | - Right.
00:20:09.040 | - Any one observer doesn't know
00:20:10.480 | whether we're in level 42 or 100 or one,
00:20:15.480 | or does that not matter for the resources?
00:20:18.280 | I mean, it's-
00:20:20.480 | - I mean, it's true that there would be uncertainty
00:20:23.560 | as you could have stacked simulations.
00:20:26.000 | - Yes, that's how-
00:20:26.840 | - And there could then be uncertainty
00:20:29.880 | as to which level we are at.
00:20:32.120 | As you remarked also,
00:20:35.880 | all the computations performed in a simulation
00:20:41.560 | within the simulation also have to be expended
00:20:44.640 | at the level of the simulator.
00:20:46.160 | - Right.
00:20:47.000 | - So the computer in basement reality,
00:20:49.200 | where all the simulations within the simulations
00:20:50.800 | within the simulations are taking place,
00:20:52.080 | like that computer, ultimately its CPU or whatever it is,
00:20:56.160 | like that has to power this whole tower, right?
00:20:58.280 | So if there is a finite compute power in basement reality
00:21:02.680 | that would impose a limit to how tall this tower can be.
00:21:06.520 | And if each level kind of imposes a large extra overhead,
00:21:11.440 | you might think maybe the tower would not be very tall
00:21:13.960 | that most people would be low down in the tower.
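A rough sketch of the tower-height point, with made-up numbers (the compute budget, per-level overhead, and usefulness floor are all illustrative assumptions, not from the conversation):

```python
# Hypothetical: each nested simulation level receives only a fraction
# of its parent's compute. All constants here are illustrative.
BASEMENT_FLOPS = 1e50      # assumed compute budget of basement reality
PASS_DOWN = 0.01           # assumed share of compute each level passes down
MIN_USEFUL_FLOPS = 1e20    # assumed floor to simulate one civilization

depth, flops = 0, BASEMENT_FLOPS
while flops * PASS_DOWN >= MIN_USEFUL_FLOPS:
    flops *= PASS_DOWN
    depth += 1
print(depth)  # 15 with these numbers -- heavy overhead keeps the tower short
```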
00:21:17.880 | - I love the term basement reality.