
Robin Hanson: Alien Civilizations, UFOs, and the Future of Humanity | Lex Fridman Podcast #292


Chapters

0:00 Introduction
1:52 Grabby aliens
39:36 War and competition
45:10 Global government
58:01 Humanity's future
68:02 Hello aliens
95:06 UFO sightings
119:43 Conspiracy theories
128:01 Elephant in the brain
141:32 Medicine
154:01 Institutions
180:54 Physics
185:46 Artificial intelligence
203:35 Economics
206:56 Political science
212:45 Advice for young people
221:36 Darkest moments
224:37 Love and loss
233:59 Immortality
237:56 Simulation hypothesis
248:13 Meaning of life

Transcript

we can actually figure out where are the aliens out there in space time by being clever about the few things we can see, one of which is our current date. And so now that you have this living cosmology, we can tell the story that the universe starts out empty and then at some point, things like us appear, very primitive, and then some of those stop being quiet and expand.

And then for a few billion years, they expand and then they meet each other. And then for the next hundred billion years, they commune with each other. That is, the usual models of cosmology say that in roughly 100, 150 billion years, the expansion of the universe will happen so much that all you'll have left is some galaxy clusters that are sort of disconnected from each other.

But before then, they will interact. There will be this community of all the grabby alien civilizations and each one of them will hear about and even meet thousands of others. And we might hope to join them someday and become part of that community. (air whooshing) The following is a conversation with Robin Hanson, an economist at George Mason University and one of the most fascinating, wild, fearless, and fun minds I've ever gotten a chance to accompany for a time in exploring questions of human nature, human civilization, and alien life out there in our impossibly big universe.

He is the co-author of a book titled "The Elephant in the Brain: Hidden Motives in Everyday Life," the author of "The Age of Em: Work, Love, and Life When Robots Rule the Earth," and the author of a fascinating recent paper I recommend on, quote, "grabby aliens," titled "If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare." This is the Lex Fridman Podcast.

To support it, please check out our sponsors in the description. And now, dear friends, here's Robin Hanson. You are working on a book about quote, "Grabby Aliens." This is a technical term, like the Big Bang. So what are grabby aliens? - Grabby aliens expand fast into the universe and they change stuff.

That's the key concept. So if they were out there, we would notice. That's the key idea. So the question is, where are the grabby aliens? So Fermi's question is, where are the aliens? And we could vary that in two terms, right? Where are the quiet, hard to see aliens and where are the big, loud grabby aliens?

So it's actually hard to say where all the quiet ones are, right? There could be a lot of them out there 'cause they're not doing much. They're not making a big difference in the world. But the grabby aliens, by definition, are the ones you would see. We don't know exactly what they do where they've gone, but the idea is they're in some sort of competitive world where each part of them is trying to grab more stuff and do something with it.

And almost surely, whatever is the most competitive thing to do with all the stuff they grab isn't to leave it alone the way it started, right? So we humans, when we go around the Earth and use stuff, we change it. We turn a forest into a farmland, turn a harbor into a city.

So the idea is aliens would do something with it and so we're not exactly sure what it would look like, but it would look different. So somewhere in the sky, we would see big spheres of different activity where things had been changed because they had been there. - Expanding spheres.

- Right. - So as you expand, you aggressively interact and change the environment. So the word grabby versus loud, you're using them sometimes synonymously, sometimes not. Grabby to me is a little bit more aggressive. What does it mean to be loud? What does it mean to be grabby? What's the difference?

And loud in what way? Is it visual, is it sound, is it some other physical phenomenon like gravitational waves? What, are you using this kind of in a broad philosophical sense? Or there's a specific thing that it means to be loud in this universe of ours? - My co-authors and I put together a paper with a particular mathematical model.

And so we used the term grabby aliens to describe that more particular model. And the idea is it's a more particular model of the general concept of loud. So loud would just be the general idea that they would be really obvious. - So grabby is the technical term, is it in the title of the paper?

- It's in the body. The title is actually about loud and quiet. - Right, loud like that. - So the idea is there's, you know, you wanna distinguish your particular model of things from the general category of things everybody else might talk about. So that's how we distinguish. - The paper title is, if loud aliens explain human earliness, quiet aliens are also rare.

If life on Earth, God, this is such a good abstract. If life on Earth had to achieve-- - N hard. - N hard steps to reach humanity's level, then the chance of this event rose as time to the Nth power. So we'll talk about power, we'll talk about linear increase.

So what is the technical definition of grabby? How do you envision grabbiness? And why are, in contrast with humans, why aren't humans grabby? So like where's that line? Is it well definable? What is grabby, what is not grabby? - We have a mathematical model of the distribution of advanced civilizations, i.e.

aliens, in space and time. That model has three parameters, and we can set each one of those parameters from data, and therefore we claim this is actually what we know about where they are in space-time. So the key idea is they appear at some point in space-time, and then after some short delay, they start expanding, and they expand at some speed.

And the speed is one of those parameters. That's one of the three. And the other two parameters are about how they appear in time. That is, they appear at random places, and they appear in time according to a power law, and that power law has two parameters, and we can fit each of those parameters to data.

And so then we can say, now we know. We know the distribution of advanced civilizations in space and time. So we are right now a new civilization, and we have not yet started to expand. But plausibly, we would start to do that within, say, 10 million years of the current moment.

That's plenty of time. And 10 million years is a really short duration in the history of the universe. So we are, at the moment, a sort of random sample of the kind of times at which an advanced civilization might appear, because we may or may not become grabby, but if we do, we'll do it soon.

And so our current date is a sample, and that gives us one of the other parameters. The second parameter is the constant in front of the power law, and that's derived from our current date. - So power law, what is the N in the power law? What is the constant?

- That's a complicated thing to explain. Advanced life appeared by going through a sequence of hard steps. So starting with very simple life, and here we are at the end of this process at pretty advanced life, and so we had to go through some intermediate steps, such as sexual selection, photosynthesis, multicellular animals, and the idea is that each of those steps was hard.

Evolution just took a long time searching in a big space of possibilities to find each of those steps, and the challenge was to achieve all of those steps by a deadline of when the planets would no longer host simple life. And so Earth has been really lucky compared to all the other billions of planets out there, in that we managed to achieve all these steps in the short time of the five billion years that Earth can support simple life.

- So not all steps, but a lot of them, 'cause we don't know how many steps there are before you start the expansion. So these are all the steps from the birth of life to the initiation of major expansion. - Right, so we're pretty sure that it would happen really soon, so it couldn't be the same sort of hard step as the last one, in terms of taking a long time.

So when we look at the history of Earth, we look at the durations of the major things that have happened, that suggests that there's roughly, say, six hard steps that happened, say, between three and 12, and that we have just achieved the last one that would take a long time.

- Which is? - Well, we don't know. - Oh, okay. - But whatever it is, we've just achieved the last one. - Are we talking about humans or aliens here? So let's talk about some of these steps. So Earth is really special in some way. We don't exactly know the level of specialness, we don't really know which steps were the hardest or not, because we just have a sample of one, but you're saying that there's three to 12 steps that we have to go through to get to where we are that are hard steps, hard to find by something that took a long time and is unlikely.

There's a lot of ways to fail. There's a lot more ways to fail than to succeed. - The first step would be sort of the very simplest form of life of any sort, and then we don't know whether that first sort is the first sort that we see in the historical record or not, but then some other steps are, say, the development of photosynthesis, the development of sexual reproduction, there's the development of eukaryotic cells, which are a certain kind of complicated cell that seems to have only appeared once, and then there's multicellularity, that is multiple cells coming together to large organisms like us.

And in this statistical model of trying to fit all these steps into a finite window, the model actually predicts that these steps could be of varying difficulties, that is they could each take different amounts of time on average, but if you're lucky enough that they all appear at a very short time, then the durations between them will be roughly equal, and the time remaining left over in the rest of the window will also be the same length.

So we at the moment have roughly a billion years left on Earth until simple life like us would no longer be possible. Life appeared roughly 400 million years after the very first time when life was possible at the very beginning, so those two numbers right there give you the rough estimate of six hard steps.
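
A minimal numerical sketch of the timing logic described here, not code from the paper: when n very hard steps must all finish inside a window of length L, the completion times conditional on success behave (in the standard limiting approximation) like n sorted uniform draws on [0, L], so the n + 1 gaps, including the leftover time at the end, each average about L / (n + 1). The ~5.4 billion year window below is an assumed illustrative value.

```python
# Sketch of the hard-steps timing claim under the uniform approximation.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 6        # hypothetical number of hard steps
window = 5.4       # billions of years; assumed illustrative habitable window
trials = 100_000

times = np.sort(rng.uniform(0.0, window, size=(trials, n_steps)), axis=1)
padded = np.concatenate(
    [np.zeros((trials, 1)), times, np.full((trials, 1), window)], axis=1
)
gaps = np.diff(padded, axis=1)   # n_steps + 1 gaps per trial

print(gaps.mean(axis=0))         # each gap averages ~ window / (n_steps + 1) ≈ 0.77
print(window / (n_steps + 1))
```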

- Just to build up an intuition here, so we're trying to create a simple mathematical model of how life emerges and expands in the universe. And there's a section in this paper, how many hard steps, question mark. - Right. - The two most plausibly diagnostic Earth durations seem to be the one remaining after now before Earth becomes uninhabitable for complex life.

So you estimate how long Earth lasts, how many hard steps, there's windows for doing different hard steps, and you can sort of like queuing theory, mathematically estimate of like the solution or the passing of the hard steps or the taking of the hard steps. Sort of like coldly mathematical look.

If life, pre-expansionary life requires a number of steps, what is the probability of taking those steps on an Earth that lasts a billion years or two billion years or five billion years or 10 billion years? And you say solving for E using the observed durations of 1.1 and 0.4 then gives E values of 3.9 and 12.5, range 5.7 to 26, suggesting a middle estimate of at least six, that's where you said six hard steps.
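
A back-of-envelope sketch of where figures like 3.9 and 12.5 can come from under the equal-gaps logic above: if each of the E + 1 gaps averages window / (E + 1), then an observed gap of length d suggests E ≈ window / d − 1. The ~5.4 billion year window is an assumed value, chosen here because it reproduces the quoted numbers.

```python
# Back-of-envelope for the quoted hard-step estimates (a sketch, not the paper's code).
window = 5.4                    # Gyr, assumed habitable window for complex life on Earth
for d in (1.1, 0.4):            # Gyr: time remaining, and delay before first life
    print(d, round(window / d - 1, 1))
# -> 1.1 gives ~3.9 and 0.4 gives ~12.5, bracketing a middle estimate of about six steps
```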

- Right. - Just to get to where we are. - Right. - We started at the bottom, now we're here, and that took six steps on average. The key point is on average, these things on any one random planet would take trillions of trillions of years, just a really long time.

And so we're really lucky that they all happened really fast in a short time before our window closed. And the chance of that happening in that short window goes as that time period to the power of the number of steps. And so that's where the power we talked about before came from.

And so that means in the history of the universe, we should overall roughly expect advanced life to appear as a power law in time. So that very early on, there was very little chance of anything appearing, and then later on as things appear, other things are appearing somewhat closer to them in time because they're all going as this power law.

- What is a power law? Can we, for people who are not-- - Sure. - Math inclined, can you describe what a power law is? - So, say the function x is linear, and x squared is quadratic, so it's the power of two. If we make x to the three, that's cubic, or the power of three.

And so x to the sixth is the power of six. And so we'd say life appears in the universe on a planet like Earth in that proportion to the time that it's been ready for life to appear. And that over the universe in general, it'll appear at roughly a power law like that.

- What is the x, what is n? Is it the number of hard steps? - Yes, the number of hard steps. So that's the idea. - Okay, so it's like if you're gambling and you're doubling up every time, this is the probability you just keep winning. (laughing) - So it gets very unlikely very quickly.
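
A small numeric sketch of the "time to the Nth power" claim, assuming for illustration that each hard step is an exponential wait with a mean far longer than the window; then the chance that all n steps finish by time t is a gamma CDF, and for small t it scales like t to the n, so doubling the available time multiplies the odds by roughly 2 to the 6, or 64.

```python
# Numeric check that the success probability scales like t**n for small t.
from scipy.special import gammainc

n = 6
rate = 1e-3                     # per Gyr: each step averages ~1000 Gyr (hypothetical value)
probs = {t: gammainc(n, rate * t) for t in (1.0, 2.0, 4.0)}   # t in Gyr
print(probs)
print(probs[2.0] / probs[1.0])  # ≈ 64, i.e. 2**6
```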

And so we're the result of this unlikely chain of successes. - It's actually a lot like cancer. So the dominant model of cancer in an organism like each of us is that we have all these cells, and in order to become cancerous, a single cell has to go through a number of mutations.

And these are very unlikely mutations, and so any one cell is very unlikely to have all these mutations happen by the time your lifespan's over. But we have enough cells in our body that the chance of any one cell producing cancer by the end of your life is actually pretty high, more like 40%.

And so the chance of cancer appearing in your lifetime also goes as a power law, this power of the number of mutations that's required for any one cell in your body to become cancerous. - So the longer you live, the more likely you are to have cancer. - And the power is also roughly six.

That is, the chance of you getting cancer is roughly the power of six of the time you've been since you were born. - It is perhaps not lost on people that you're comparing power laws of the survival or the arrival of the human species to cancerous cells. The same mathematical model, but of course we might have a different value assumption about the two outcomes.

But of course, from the point of view of cancer, (both laughing) it's more similar. From the point of view of cancer, it's a win-win. We both get to thrive, I suppose. It is interesting to take the point of view of all kinds of life forms on earth, of viruses, of bacteria.

They have a very different view. It's like the Instagram channel, Nature is Metal. The ethic under which nature operates doesn't often correlate with human morals. It seems cold and machine-like in the selection process that it performs. - I am an analyst, I'm a scholar, an intellectual, and I feel I should carefully distinguish predicting what's likely to happen from evaluating or judging what I think would be better to happen.

And it's a little dangerous to mix those up too closely because then we can have wishful thinking. And so I try typically to just analyze what seems likely to happen, regardless of whether I like it or whether we do anything about it. And then once you see a rough picture of what's likely to happen if we do nothing, then we can ask, well, what might we prefer?

And ask where could the levers be to move it at least a little toward what we might prefer. - It's good. - And that's useful, but often doing just that analysis of what's likely to happen if we do nothing offends many people. They find that dehumanizing or cold or metal, as you say, to just say, well, this is what's likely to happen and it's not your favorite, sorry, but maybe we can do something, but maybe we can't do that much.

- This is very interesting, that the cold analysis, whether it's geopolitics, whether it's medicine, whether it's economics, sometimes misses some very specific aspect of human condition. Like for example, when you look at a doctor and the act of a doctor helping a single patient, if you do the analysis of that doctor's time and cost of the medicine or the surgery or the transportation of the patient, this is the Paul Farmer question, is it worth spending 10, 20, $30,000 on this one patient?

When you look at all the people that are suffering in the world, that money could be spent so much better. And yet, there's something about human nature that wants to help the person in front of you, and that is actually the right thing to do, despite the analysis. And sometimes when you do the analysis, there's something about the human mind that allows you to not take that leap, that irrational leap to act in this way, because the analysis explains it away.

Well, it's like, for example, the US government, the DOT, Department of Transportation, puts a value of, I think, like $9 million on a human life. And the moment you put that number on a human life, you can start thinking, well, okay, I can start making decisions about this or that and with a sort of cold economic perspective, and then you might lose, you might deviate from a deeper truth of what it means to be human somehow.

So you have to dance, because if you put too much weight on the anecdotal evidence, on these kinds of human emotions, then you're going to lose something too, you're probably even more likely to deviate from truth. But there's something about that cold analysis. Like I've been listening to a lot of people coldly analyze wars, war in Yemen, war in Syria, Israel-Palestine, war in Ukraine, and there's something lost when you do a cold analysis of why something happened.

When you talk about energy, talking about sort of conflict, competition over resources, when you talk about geopolitics, sort of models of geopolitics and why a certain war happened, you lose something about the suffering that happens. I don't know. It's an interesting thing because you're both, you're exceptionally good at models in all domains, literally, but also there's a humanity to you.

So it's an interesting dance. I don't know if you can comment on that dance. - Sure. It's definitely true as you say that for many people, if you are accurate in your judgment of say, for a medical patient, right? What's the chance that this treatment might help? And what's the cost?

And compare those to each other. And you might say, this looks like a lot of cost for a small medical gain. And at that point, knowing that fact, that might take the wind out of your sails. You might not be willing to do the thing that maybe you feel is right anyway, which is still to pay for it.

And then somebody knowing that might wanna keep that news from you, not tell you about the low chance of success or the high cost in order to save you this tension, this awkward moment where you might fail to do what they and you think is right. But I think the higher calling, the higher standard to hold you to, which many people can be held to is to say, I will look at things accurately, I will know the truth, and then I will also do the right thing with it.

I will be at peace with my judgment about what the right thing is in terms of the truth. I don't need to be lied to in order to figure out what the right thing to do is. And I think if you do think you need to be lied to in order to figure out what the right thing to do is, you're at a great disadvantage because then people will be lying to you, you will be lying to yourself, and you won't be as effective achieving whatever good you were trying to achieve.

- But getting the data, getting the facts is step one, not the final step. - Absolutely. So I would say having a good model, getting the good data is step one, and it's a burden. Because you can't just use that data to arrive at sort of the easy, convenient thing.

You have to really deeply think about what is the right thing. So the dark aspect of data, of models, is you can use it to excuse away actions that aren't ethical. You can use data to basically excuse away anything. - But not looking at data lets you-- - Excuse yourself to pretend and think that you're doing good when you're not.

- Exactly. But it is a burden. It doesn't excuse you from still being human and deeply thinking about what is right. That very kind of gray area, that very subjective area. That's part of the human condition. But let us return for a time to aliens. So you started to define sort of the model, the parameters of grabbiness.

- Right. - Or the, as we approach grabbiness. So what happens? - So again, there was three parameters. - Yes. - There's the speed at which they expand. There's the rate at which they appear in time. And that rate has a constant and a power. So we've talked about the history of life on Earth suggests that power is around six, but maybe three to 12.

We can say that constant comes from our current date, sort of sets the overall rate. And the speed, which is the last parameter, comes from the fact that when we look in the sky, we don't see them. So the model predicts very strongly that if they were expanding slowly, say 1% of the speed of light, our sky would be full of vast spheres that were full of activity.

That is, at a random time when a civilization is first appearing, if it looks out into its sky, it would see many other grabby alien civilizations in the sky. And they would be much bigger than the full moon. They'd be huge spheres in the sky. And they would be visibly different.

We don't see them. - Can we pause for a second? - Okay. - There's a bunch of hard steps that Earth had to pass to arrive at this place we are currently, which we're starting to launch rockets out into space. We're kind of starting to expand. - A bit.

- Very slowly. - Okay. - But this is like the birth. If you look at the entirety of the history of Earth, we're now at this precipice of expansion. - We could. We might not choose to. But if we do, we will do it in the next 10 million years.

- 10 million, wow. Time flies when you're having fun. - I was thinking more like a-- - 10 million is a short time on the cosmological scale. So that is, it might be only 1,000 years. But the point is, even if it's up to 10 million, that hardly makes any difference to the model.

So I might as well give you 10 million. - This makes me feel, I was so stressed about planning what I'm gonna do today. And now-- - Right, you've got plenty of time. - Plenty of time. I just need to be generating some offspring quickly here. Okay. So, and there's this moment.

This 10 million year gap or window when we start expanding. And you're saying, okay, so this is an interesting moment where there's a bunch of other alien civilizations that might, at some point in the history of the universe, have arrived at this moment where we are. They passed all the hard steps. There's a model for how likely it is that that happens.

And then they start expanding. And you think of an expansion as almost like a sphere. - Right. - When you say speed, we're talking about the speed of the radius growth. - Exactly, the surface, how fast the surface expands. - Okay, and so you're saying that there is some speed for that expansion, average speed.

And then we can play with that parameter. And if that speed is super slow, then maybe that explains why we haven't seen anything. If it's super fast-- - Well, slow would create the puzzle: slow predicts we would see them, but we don't see them. - Okay.

- And so the way to explain that is that they're fast. So the idea is if they're moving really fast, then we don't see them until they're almost here. - And okay, this is counterintuitive. All right, hold on a second. So I think this works best when I say a bunch of dumb things.

- Okay. - And then you elucidate the full complexity and the beauty of the dumbness. Okay, so there's these spheres out there in the universe that are made visible because they're sort of using a lot of energy. So they're generating a lot of light. - Doing stuff, they're changing things.

- They're changing things. And change would be visible a long way off. - Yes. - They would take apart stars, rearrange them, restructure galaxies, they would just-- - All kinds of fun. - Big, huge stuff. - Okay, if they're expanding slowly, we would see a lot of them because the universe is old.

Is old enough to where we would see-- - That is, we're assuming we're just typical, maybe at the 50th percentile of them. So like half of them have appeared so far, the other half will still appear later. And the math of our best estimate is that they appear roughly once per million galaxies.

And we would meet them in roughly a billion years if we expanded out to meet them. - So we're looking at a Grabby Aliens model, 3D sim. - Right. - What's, that's the actual name of the video. What, by the time we get to 13.8 billion years, the fun begins.

Okay, so this is, we're watching a three-dimensional sphere rotating, I presume that's the universe, and then Grabby Aliens are expanding and filling that universe-- - Exactly. - With all kinds of fun. - Pretty soon it's all full. - It's full. So that's how the Grabby Aliens come in contact, first of all, with other aliens, and then with us, humans.

The following is a simulation of the Grabby Aliens model of alien civilizations. Civilizations are born, they expand outwards at constant speed. A spherical region of space is shown. By the time we get to 13.8 billion years, this sphere will be about 3,000 times as wide as the distance from the Milky Way to Andromeda.

Okay, this is fun. - It's huge. - Okay, it's huge. All right, so why don't we see, we're one little tiny, tiny, tiny, tiny dot in that giant, giant sphere. - Right. - Why don't we see any of the Grabby Aliens? - Depends on how fast they expand. So you could see that if they expanded at the speed of light, you wouldn't see them until they were here.

So like out there, if somebody is destroying the universe with a vacuum decay, there's this doomsday scenario where somebody somewhere could change the vacuum of the universe, and that would expand at the speed of light and basically destroy everything it hit. But you'd never see that until it got here, 'cause it's expanding at the speed of light.

If you're expanding really slow, then you see it from a long way off. So the fact we don't see anything in the sky tells us they're expanding fast, say over a third the speed of light, and that's really, really fast. But that's what you have to believe if you look out and you don't see anything.
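
For reference, a compact restatement of the three-parameter model as described in this conversation; this is a sketch, not the paper's code, and the numbers are the rough values quoted here rather than fitted estimates (the appearance-rate constant, which the paper sets from our current date, is expressed here as the quoted "once per million galaxies").

```python
# Three quoted parameters of the grabby-aliens model, as discussed above.
from dataclasses import dataclass

@dataclass
class GrabbyAliensModel:
    hard_steps_power: float = 6.0      # n in the t**n appearance law (quoted range ~3 to 12)
    origins_per_galaxy: float = 1e-6   # appearance rate: roughly once per million galaxies
    expansion_speed_c: float = 0.5     # fraction of lightspeed; "over a third" fits the empty sky

print(GrabbyAliensModel())
```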

Now you might say, well, maybe I just don't wanna believe this whole model. Why should I believe this whole model at all? And our best evidence why you should believe this model is our early date. We are right now, almost 14 billion years into the universe, on a planet around a star that's roughly five billion years old.

But the average star out there will last roughly five trillion years. That is 1,000 times longer. And remember that power law. It says that the chance of advanced life appearing on a planet goes as the power of six of the time. So if a planet lasts 1,000 times longer, then the chance of it appearing on that planet, if everything would stay empty at least, is larger by 1,000 to the sixth power, or 10 to the 18.

So enormous, overwhelming chance that if the universe would just, say, sit empty and wait for advanced life to appear, when it would appear would be way at the end of all these planet lifetimes. That is, the long-lived planets near the end of their lifetimes, trillions of years into the future.

But we're really early compared to that. And our explanation is, at the moment, as you saw in the video, the universe is filling up in roughly a billion years. It'll all be full. And at that point, it's too late for advanced life to show up. So you had to show up now before that deadline.
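
The earliness puzzle as one line of arithmetic, assuming the t to the sixth appearance law above: a planet that stays habitable roughly 1,000 times longer than Earth's window would, in an otherwise-empty universe, be roughly 1,000 to the sixth times likelier to be where advanced life eventually shows up.

```python
# The "billion billion" factor, assuming the t**6 scaling quoted above.
lifetime_ratio = 5_000 / 5           # ~5 trillion-year stars vs a ~5 billion-year window
print(f"{lifetime_ratio ** 6:.0e}")  # ≈ 1e18
```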

- Okay, can we break that apart a little bit? Okay, or linger on some of the things you said. So with the power law, the things we've done on Earth, the model you have says that it's very unlikely. Like we're lucky SOBs. Is that mathematically correct to say? - We're crazy early.

- That is. - When early means like-- - In the history of the universe. - In the history, okay, so given this model, how do we make sense of that? If we're super, can we just be the lucky ones? - Well, 10 to the 18 lucky, you know? How lucky do you feel?

So, you know. (laughs) That's pretty lucky, right? 10 to the 18 is a billion billion. So then if you were just being honest and humble, what does that mean? - It means one of the assumptions behind the calculation that says we're crazy early must be wrong. That's what it means.

So the key assumption we suggest is that the universe would stay empty. So most life would appear like 1,000 times later than now if everything would stay empty waiting for it to appear. - So what does non-empty mean? - So the grabby aliens are filling the universe right now.

Roughly at the moment they've filled half of the universe and they've changed it. And when they fill everything, it's too late for stuff like us to appear. - But wait, hold on a second. Did anyone help us get lucky? If it's so difficult, how do, like-- - So it's like cancer, right?

There's all these cells, each of which randomly does or doesn't get cancer. And eventually some cell gets cancer and, you know, we were one of those. - But hold on a second. Okay. But we got it early. We got it-- - Early compared to the prediction with an assumption that's wrong.

So that's how we do a lot of, you know, theoretical analysis. You have a model that makes a prediction that's wrong, then that helps you reject that model. - Okay. Let's try to understand exactly where the wrong is. So the assumption is that the universe is empty. - Stays empty.

- Stays empty. - And waits until this advanced life appears in trillions of years. That is, if the universe would just stay empty, if there was just, you know, nobody else out there, then when you should expect advanced life to appear, if you're the only one in the universe, when should you expect to appear?

You should expect to appear trillions of years in the future. - I see. Right, right. So this is a very sort of nuanced mathematical assumption. I don't think we can intuit it cleanly with words. But if you assume that you're just, the universe stays empty and you're waiting for one life civilization to pop up, then it should happen very late, much later than now.

And if you look at Earth, the way things happen on Earth, it happened much, much, much, much, much earlier than it was supposed to according to this model if you take the initial assumption. Therefore, you can say, well, the initial assumption of the universe staying empty is very unlikely.

- Right. - Okay. - And the other alternative theory is the universe is filling up and will fill up soon. And so we are typical for the origin data of things that can appear before the deadline. - Before the deadline. Okay, it's filling up, so why don't we see anything if it's filling up?

- Because they're expanding really fast. - Close to the speed of light. - Exactly. - So we will only see it when it's here. - Almost here. - Okay. What are the ways in which we might see a quickly expanding? - This is both exciting and terrifying. - It is terrifying.

- It's like watching a truck driving at you at 100 miles an hour. - So we would see spheres in the sky, at least one sphere in the sky, growing very rapidly. - Like very rapidly. - Right, yes, very rapidly. - So there's different, 'cause we were just talking about 10 million years, this would be-- - You might see it 10 million years in advance coming.

I mean, you still might have a long warning. Again, the universe is 14 billion years old. The typical origin times of these things are spread over several billion years. So the chance of one originating very close to you in time is very low. So it still might take millions of years from the time you see it to the time it gets here.

You've got a million years to be terrified of this fast sphere coming at you. - But coming at you very fast, so if they're traveling close to the speed of light-- - But they're coming from a long way away. So remember, the rate at which they appear is one per million galaxies.

- Right. - So they're roughly 100 galaxies away. - I see, so the delta between the speed of light and their actual travel speed is very important? - Right, so if they're going at, say, half the speed of light-- - We'll have a long time. - Then-- - Yeah.

But what if they're traveling exactly at the speed of light? Then we see 'em like-- - Then we wouldn't have much warning, but that's less likely. Well, we can't exclude it. - And they could also be somehow traveling faster than the speed of light. - Well, I think we can exclude that, because if they could go faster than the speed of light, then they would just already be everywhere.

So in a universe where you can travel faster than the speed of light, you can go backwards in space time. So any time you appeared anywhere in space time, you could just fill up everything. - Yeah, and-- - So anybody in the future, whoever appeared, they would have been here by now.
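
A hedged back-of-envelope for the warning-time question above. The distance is an assumed illustrative value (the conversation only says "roughly 100 galaxies away"), and cosmological expansion is ignored: light from a grabby origin reaches us after D / c, the expansion front after D / v, and the warning is the difference.

```python
# Warning time between first seeing an expanding sphere and its front arriving.
def warning_time_myr(distance_mly: float, speed_fraction_of_c: float) -> float:
    """Warning in millions of years for an origin `distance_mly` million
    light-years away, expanding at the given fraction of lightspeed."""
    return distance_mly * (1.0 / speed_fraction_of_c - 1.0)

for v in (0.34, 0.5, 0.9, 0.99):
    print(v, round(warning_time_myr(300.0, v)))   # 300 Mly is a hypothetical distance
# roughly 580, 300, 33, and 3 million years of warning, respectively
```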

- Can you exclude the possibility that those kinds of aliens aren't already here? - Well, we should have a different discussion of that. - Right, okay. Well, let's actually leave that discussion aside just to linger and understand the Grabby alien expansion, which is beautiful and fascinating. Okay. So there's these giant expanding-- - Spheres.

- Spheres of alien civilizations. Now, when those spheres collide, mathematically, it's very likely that we're not the first collision of Grabby alien civilizations, I suppose is one way to say it. So there's, like, the first time the spheres touch each other and recognize each other, they meet. They recognize each other first before they meet.

- They see each other coming. - They see each other coming. And then, so there's a bunch of them, there's a combinatorial thing where they start seeing each other coming, and then there's a third neighbor, it's like, what the hell? And then there's a fourth one. Okay, so what does that, you think, look like?

What lessons, from human nature, that's the only data we have? What can you draw-- - So the story of the history of the universe here is what I would call a living cosmology. So what I'm excited about, in part, by this model, is that it lets us tell a story of cosmology where there are actors who have agendas.

So most ancient peoples, they had cosmologies, the stories they told about where the universe came from and where it's going and what's happening out there. And their stories, they like to have agents and actors, gods or something, out there doing things. And lately, our favorite cosmology is dead, kind of boring.

We're the only activity we know about or see, and everything else just looks dead and empty. But this is now telling us, no, that's not quite right. At the moment, the universe is filling up, and in a few billion years, it'll be all full. And from then on, the history of the universe will be the universe full of aliens.

- Yeah, so that's a really good reminder, a really good way to think about cosmologies. We're surrounded by a vast darkness, and we don't know what's going on in that darkness until the light from whatever generates light arrives here. So we kind of, yeah, we look up at the sky, okay, there's stars, oh, they're pretty.

But you don't think about the giant expanding spheres of aliens. (laughs) - Right, 'cause you don't see them. But now our date, looking at the clock, if you're clever, the clock tells you. - So I like the analogy with the ancient Greeks. So you might think that an ancient Greek staring at the universe couldn't possibly tell how far away the sun was, or how far away the moon is, or how big the earth is.

That all you can see is just big things in the sky you can't tell. But they were clever enough, actually, to be able to figure out the size of the earth and the distance to the moon and the sun and the size of the moon and sun. That is, they could figure those things out, actually, by being clever enough.

And so similarly, we can actually figure out where are the aliens out there in space-time by being clever about the few things we can see, one of which is our current date. And so now that you have this living cosmology, we can tell the story that the universe starts out empty, and then at some point, things like us appear, very primitive, and then some of those stop being quiet and expand.

And then for a few billion years, they expand, and then they meet each other. And then for the next 100 billion years, they commune with each other. That is, the usual models of cosmology say that in roughly 100, 150 billion years, the expansion of the universe will happen so much that all you'll have left is some galaxy clusters that are sort of disconnected from each other.

But before then, for the next 100 billion years, they will interact. There will be this community of all the grabby alien civilizations, and each one of them will hear about and even meet thousands of others. And we might hope to join them someday and become part of that community.

That's an interesting thing to aspire to. - Yes, interesting is an interesting word. Is the universe of alien civilizations defined by war as much as, or more than, war defined human history? - I would say it's defined by competition, and then the question is how much competition implies war. So up until recently, competition defined life on Earth.

Competition between species and organisms and among humans, competitions among individuals and communities, and that competition often took the form of war in the last 10,000 years. Many people now are hoping or even expecting to sort of suppress and end competition in human affairs. They regulate business competition, they prevent military competition, and that's a future I think a lot of people will like to continue and strengthen.

People will like to have something close to world government or world governance or at least a world community, and they will like to suppress war and any forms of business and personal competition over the coming centuries. And they may like that so much that they prevent interstellar colonization, which would become the end of that era.

That is, interstellar colonization would just return severe competition to human or our descendant affairs. And many civilizations may prefer that, and ours may prefer that. But if they choose to allow interstellar colonization, they will have chosen to allow competition to return with great force. That is, there's really not much of a way to centrally govern a rapidly expanding sphere of civilization.

And so I think one of the most solid things we can predict about grabby aliens is they have accepted competition, and they have internal competition, and therefore they have the potential for competition when they meet each other at the borders. But whether that's military competition is more of an open question.

- So military meaning physically destructive. - Right. - So there's a lot to say there. So one idea that you kind of proposed is progress might be maximized through competition, through some kind of healthy competition, some definition of healthy. So like constructive, not destructive competition. So like we would likely, grabby alien civilizations would be likely defined by competition 'cause they can expand faster.

Because competition allows innovation and sort of the battle of ideas. - The way I would take the logic is to say, competition just happens if you can't coordinate to stop it. And you probably can't coordinate to stop it in an expanding interstellar wave. So competition is a fundamental force in the universe.

- It has been so far, and it would be within an expanding grabby alien civilization. But we today have the chance, many people think and hope, of greatly controlling and limiting competition within our civilization for a while. And that's an interesting choice. Whether to allow competition to sort of regain its full force or whether to suppress and manage it.

- Well, one of the open questions that has been raised in the past less than 100 years is whether our desire to lessen the destructive nature of competition or the destructive kind of competition will be outpaced by the destructive power of our weapons. Sort of if nuclear weapons and weapons of that kind become more destructive than our desire for peace, then all it takes is one asshole at the party to ruin the party.

- It takes one asshole to make a delay, but not that much of a delay on the cosmological scales we're talking about. So even a vast nuclear war, if it happened here right now on Earth, it would not kill all humans. It certainly wouldn't kill all life. And so human civilization would return within 100,000 years.

So all the history of atrocities, and if you look at the Black Plague, which is not a human-caused atrocity, or whatever. - There are a lot of military atrocities in history, absolutely. - In the 20th century. Those are challenges to how we think about human nature, but on the cosmic scale of time and space, they do not stop the human spirit, essentially.

The humanity goes on. Through all the atrocities, it goes on. Life goes on. - Most likely. So even a nuclear war isn't enough to destroy us or to stop our potential from expanding, but we could institute a regime of global governance that limited competition, including military and business competition of sorts, and that could prevent our expansion.

Of course, to play devil's advocate, global governance is centralized power, and power corrupts, and absolute power corrupts absolutely. One of the aspects of competition that's been very productive is not letting any one person, any one country, any one center of power become absolutely powerful, because that's another lesson, is it seems to corrupt.

There's something about ego and the human mind that seems to be corrupted by power, so when you say global governance, that terrifies me more than the possibility of war, because it's-- - I think people will be less terrified than you are right now, and let me try to paint the picture from their point of view.

This isn't my point of view, but I think it's going to be a widely shared point of view. - Yes, this is two devil's advocates arguing, two devils. - Okay, so for the last half century and into the continuing future, we actually have had a strong elite global community that shares a lot of values and beliefs and has created a lot of convergence in global policy.

So if you look at electromagnetic spectrum or medical experiments or pandemic policy or nuclear power energy or regulating airplanes or just in a wide range of area, in fact, the world has very similar regulations and rules everywhere, and it's not a coincidence because they are part of a world community where people get together at places like Davos, et cetera, where world elites want to be respected by other world elites, and they have a convergence of opinion, and that produces something like global governance, but without a global center.

And this is sort of what human mobs or communities have done for a long time. That is, humans can coordinate together on shared behavior without a center by having gossip and reputation within a community of elites. And that is what we have been doing and are likely to do a lot more of.

So for example, one of the things that's happening, say, with the war in Ukraine is that this world community of elites has decided that they disapprove of the Russian invasion and they are coordinating to pull resources together from all around the world in order to oppose it, and they are proud of that, sharing that opinion in there, and they feel that they are morally justified in their stance there.

And that's the kind of event that actually brings world elite communities together, where they come together and they push a particular policy and position that they share and that they achieve successes. And the same sort of passion animates global elites with respect to, say, global warming, or global poverty, and other sorts of things.

And they are, in fact, making progress on those sorts of things through shared global community of elites. And in some sense, they are slowly walking toward global governance, slowly strengthening various world institutions of governance, but cautiously, carefully, watching out for the possibility of a single power that might corrupt it.

I think a lot of people over the coming centuries will look at that history and like it. - It's an interesting thought, and thank you for playing that devil's advocate there. But I think the elites too easily lose touch of the morals that the best of human nature and power corrupts.

- Sure, but-- - And everything you just said. - If their view is the one that determines what happens, their view may still end up there, even if you or I might criticize it from that point of view, so. - From a perspective of minimizing human suffering, elites can use topics like the war in Ukraine and climate change and all of those things to sell an idea to the world, with disregard to the amount of suffering their actual actions cause.

So like you can tell all kinds of narratives, that's the way propaganda works. Hitler really sold the idea that in everything Germany was doing, it was the victim, it was defending itself against the cruelty of the world, and it was actually trying to bring about a better world. So every power center thinks they're doing good.

And so this is the positive of competition, of having multiple power centers. This kind of gathering of elites makes me very, very, very nervous. The dinners, the meetings in the closed rooms. I don't know. - But remember we talked about separating our cold analysis of what's likely or possible from what we prefer, and so this isn't exactly enough time for that.

We might say, I would recommend we don't go this route of a strong world governance, and because I would say it'll preclude this possibility of becoming grabby aliens, of filling the nearest million galaxies for the next billion years with vast amounts of activity, and interest, and value of life out there.

That's the thing we would lose by deciding that we wouldn't expand, that we would stay here and keep our comfortable shared governance. - So you, wait, you think that global governance makes it more likely or less likely that we expand out into the universe? - Less. - So okay.

- This is the key point. - Great, right, so screw the elites. (laughing) - Right. - So if we want to, wait, do we want to expand? - So again, I want to separate my neutral analysis from my evaluation and say, first of all, I have an analysis that tells us this is a key choice that we will face, and that it's a key choice other aliens have faced out there.

And it could be that only one in 10 or one in 100 civilizations chooses to expand, and the rest of them stay quiet. And that's how it goes out there. And we face that choice too. And it'll happen sometime in the next 10 million years, maybe the next thousand.

But the key thing to notice from our point of view is that even though you might like our global governance, you might like the fact that we've come together, we no longer have massive wars, and we no longer have destructive competition, and that we could continue that. The cost of continuing that would be to prevent interstellar colonization.

That is, once you allow interstellar colonization, then you've lost control of those colonies, and whatever they change into, they could come back here and compete with you back here as a result of having lost control. And I think if people value that global governance and global community and regulation and all the things it can do enough, they would then want to prevent interstellar colonization.

- I want to have a conversation with those people. I believe that both for humanity, for the good of humanity, for what I believe is good in humanity, and for expansion, exploration, innovation, distributing the centers of power is very beneficial. So this whole meeting of elites, and I've been very fortunate to meet quite a large number of elites, they make me nervous.

Because it's easy to lose touch of reality. I'm nervous about that in myself, to make sure that you never lose touch as you get sort of older, wiser, you know how you generally get disrespectful of kids, kids these days. No, the kids are-- - Okay, but I think you should hear a stronger case for their position, so I'm gonna play that.

- For the elites. - Yes, well, for the limiting of expansion, and for the regulation of behavior. - Just, okay, can I linger on that? So you're saying those two are connected. So the human civilization and alien civilizations come to a crossroads, they have to decide, do we want to expand or not?

And connected to that, do we want to give a lot of power to a central elite, or do we want to distribute the power centers, which is naturally connected to the expansion? When you expand, you distribute the power. - If, say, over the next thousand years, we fill up the solar system, right?

We go out from Earth and we colonize Mars and we change a lot of things. Within a solar system, still, everything is within reach. That is, if there's a rebellious colony around Neptune, you can throw rocks at it and smash it, and teach them discipline, okay? - How does that work for the British?

- A central control over the solar system is feasible. But once you let it escape the solar system, it's no longer feasible. But if you have a solar system that doesn't have a central control, maybe broken into a thousand different political units in the solar system, then if any one part of that allows interstellar colonization, it happens.

That is, interstellar colonization happens when even just one party chooses to do it and is able to do it. So we can just say, in a world of competition, if interstellar colonization is possible, it will happen, and then competition will continue. And that will sort of ensure the continuation of competition into the indefinite future.

- And competition, we don't know, but competition can take violent forms, or can take productive forms. - In many forms. And the case I was going to make is that, I think one of the things that most scares people about competition is not just that it creates holocausts and death on massive scales, it's that it's likely to change who we are, and what we value.

- Yes. So this is the other thing with power. As we grow, as human civilization grows, becomes multi-planetary, multi-solar system, potentially, how does that change us, do you think? - I think the more you think about it, the more you realize it can change us a lot. So, first of all, I would say-- - This is pretty dark, by the way.

- Well, it's-- - It's just honest. - Right, well, I'm trying to get you there. I think the first thing you should say, if you look at history, just human history over the last 10,000 years, if you really understood what people were like a long time ago, you'd realize they were really quite different.

Ancient cultures created people who were really quite different. Most historical fiction lies to you about that. It often offers you modern characters in an ancient world. But if you actually study history, you will see just how different they were, and how differently they thought. And they've changed a lot, many times, and they've changed a lot across time.

So I think the most obvious prediction about the future is, even if you only have the mechanisms of change we've seen in the past, you should still expect a lot of change in the future. But we have a lot bigger mechanisms for change in the future than we had in the past.

So, I have this book called "The Age of Em: Work, Love, and Life When Robots Rule the Earth," and it's about what happens if brain emulations become possible. So a brain emulation is where you take an actual human brain and you scan it in fine spatial and chemical detail to create a computer simulation of that brain.

And then those computer simulations of brains are basically citizens in a new world. They work and they vote and they fall in love and they get mad and they lie to each other. And this is a whole new world. And my book is about analyzing how that world is different than our world, basically using competition as my key lever of analysis.

That is, if that world remains competitive, then I can figure out how they change in that world, what they do differently than we do. And it's very different. And it's different in ways that are shocking sometimes to many people and ways some people don't like. I think it's an okay world, but I have to admit it's quite different.

And that's just one technology. If we add dozens more technologies, changes into the future, we should just expect it's possible to become very different than who we are. I mean, in the space of all possible minds, our minds are a particular architecture, a particular structure, a particular set of habits, and they are only one piece in a vast space of possibilities.

The space of possible minds is really huge. - So yeah, let's linger on the space of possible minds for a moment just to sort of humble ourselves how peculiar our peculiarities are. Like the fact that we like a particular kind of sex and the fact that we eat food through one hole and poop through another hole.

And that seems to be a fundamental aspect of life, is very important to us. And that life is finite in a certain kind of way. We have a meat vehicle. So death is very important to us. I wonder which aspects are fundamental or would be common throughout human history and also throughout, sorry, throughout history of life on Earth and throughout other kinds of lives.

Like what is really useful? You mentioned competition, seems to be one fundamental thing. - I've tried to do analysis of where our distant descendants might go in terms of what are robust features we could predict about our descendants. So again, I have this analysis of sort of the next generation, so the next era after ours.

If you think of human history as having three eras so far, right? There was the forager era, the farmer era and the industry era. Then my attempt in "The Age of Em" is to analyze the next era after that. And it's very different, but of course there could be more and more eras after that.

So, analyzing a particular scenario and thinking it through is one way to try to see how different the future could be, but that doesn't give you some sort of like sense of what's typical. But I have tried to analyze what's typical. And so I have two predictions, I think I can make pretty solidly.

One thing is that we know at the moment that humans discount the future rapidly. So, we discount the future in terms of caring about consequences roughly a factor of two per generation. And there's a solid evolutionary analysis why sexual creatures would do that. 'Cause basically your descendants only share half of your genes and your descendants are a generation away.

- So we only care about our grandchildren. - Basically that's a factor of four, because they're two generations later. So, this actually explains typical interest rates in the economy, that is, interest rates are greatly influenced by our discount rates. And we basically discount the future by a factor of two per generation.
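
A quick sketch of how a factor-of-two discount per generation maps onto familiar interest rates; the ~30-year generation length is an assumption for illustration, not a figure from the conversation.

```python
# Factor-of-two-per-generation discounting expressed as an annual rate.
generation_years = 30.0
annual_rate = 2.0 ** (1.0 / generation_years) - 1.0
print(f"implied discount rate ≈ {annual_rate:.1%} per year")     # ≈ 2.3%
print(f"grandchildren, two generations out, weighted at 1/{2 ** 2} of the present")
```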

But that's a side effect of the way our preferences evolved as sexually selected creatures. We should expect that in the longer run creatures will evolve who don't discount the future. They will care about the long run and they will therefore not neglect the long run. So for example, for things like global warming or things like that, at the moment many commenters are sad that basically ordinary people don't seem to care much, market prices don't seem to care much, and it doesn't really impact ordinary people much, because humans don't care much about the long-term future.

But futurists find it hard to motivate people and to engage people about the long-term future because they just don't care that much. But that's a side effect of this particular way that our preferences evolved about the future. And so in the future, they will neglect the future less.

And that's an interesting thing that we can predict robustly. Eventually, maybe a few centuries, maybe longer, eventually our descendants will care about the future. - Can you speak to the intuition behind that? Is it useful to think more about the future? - Right, if evolution rewards creatures for having many descendants, then if you have decisions that influence how many descendants you have, then that would be good if you made those decisions.

But in order to do that, you'll have to care about them. You'll have to care about that future. - So to push back, that's if you're trying to maximize the number of descendants but the nice thing about not caring too much about the long-term future is you're more likely to take big risks or you're less risk-averse.

And it's possible that both evolution and just life in the universe rewards the risk-takers. - Well, we actually have analysis of the ideal risk preferences too. So there's literature on ideal preferences that evolution should promote. And for example, there's literature on competing investment funds and what the managers of those funds should care about in terms of various kinds of risks and in terms of discounting.

And so managers of investment funds should basically have logarithmic risk preferences with respect to shared, correlated risk, but be very risk-neutral with respect to uncorrelated risk. So that's a feature that's predicted to happen for individual personal choices in biology and also for investment funds. That's also something we can say about the long run.

- What's correlated and uncorrelated risk? - If there's something that would affect all of your descendants, then if you take that risk, you might have more descendants, but you might have zero. And that's just really bad to have zero descendants. But an uncorrelated risk would be a risk that some of your descendants would suffer, but others wouldn't.
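A minimal simulation of the distinction being drawn here (all numbers are made up for illustration): the same positive-expected-value gamble is ruinous when it is correlated across an entire lineage, because one bad draw takes the count of descendants to zero, but is fine when spread as independent bets across many descendants. A logarithmic, growth-rate criterion captures this asymmetry, since the expected log outcome of the correlated gamble is minus infinity while the diversified version grows at nearly the average rate:

```python
import random

random.seed(0)

# A gamble with positive expected value: 2.5x with probability 0.5, else 0x.
WIN_FACTOR, P_WIN = 2.5, 0.5
GENERATIONS, DESCENDANTS = 100, 1000   # assumed sizes, illustrative only

def correlated_lineage() -> float:
    """One shared coin flip per generation: every descendant wins or loses together."""
    size = 1.0
    for _ in range(GENERATIONS):
        size *= WIN_FACTOR if random.random() < P_WIN else 0.0
    return size

def diversified_lineage() -> float:
    """Each generation the same gamble is spread over many independent flips,
    so the lineage grows at close to the 1.25x average factor."""
    size = 1.0
    for _ in range(GENERATIONS):
        wins = sum(random.random() < P_WIN for _ in range(DESCENDANTS))
        size *= wins * WIN_FACTOR / DESCENDANTS
    return size

print("correlated lineage after 100 generations: ", correlated_lineage())   # almost surely 0
print("diversified lineage after 100 generations:", diversified_lineage())  # roughly 1.25 ** 100
```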

And then you have a portfolio of descendants. And so that portfolio ensures you against problems with any one of them. - I like the idea of portfolio of descendants. And we'll talk about portfolios with your idea of you briefly mentioned. We'll return there with M, E-M, the age of E-M.

Work, love, and life when robots rule the earth. E-M, by the way, is emulated minds. So this, one of the-- - M is short for emulations. - M is short for emulations, and it's kind of an idea of how we might create artificial minds, artificial copies of minds, or human-like intelligences.

- I have another dramatic prediction I can make about long-term preferences. - Yes. - Which is at the moment, we reproduce as the result of a hodgepodge of preferences that aren't very well integrated, but sort of in our ancestral environment induced us to reproduce. So we have preferences over being sleepy, and hungry, and thirsty, and wanting to have sex, and wanting excitement, et cetera, right?

And so in our ancestral environment, the packages of preferences that we evolved to have did induce us to have more descendants. That's why we're here. But those packages of preferences are not a robust way to promote having more descendants. They were tied to our ancestral environment, which is no longer true.

So that's one of the reasons we are now having a big fertility decline, because in our current environment, our ancestral preferences are not inducing us to have a lot of kids, which is, from evolution's point of view, a big mistake. We can predict that in the longer run, there will arise creatures who just abstractly know that what they want is more descendants.

A very robust way to have more descendants is to have that as your direct preference. - First of all, your thinking is so clear. I love it. So mathematical, and thank you for thinking so clearly with me and bearing with my interruptions and going on the tangents when we go there.

So you're just clearly saying that successful, long-term civilizations will prefer to have descendants, more descendants. - Not just prefer, consciously and abstractly prefer. That is, it won't be the indirect consequence of other preference. It will just be the thing they know they want. - There'll be a president in the future that says, "We must have more sex." - We must have more descendants and do whatever it takes to do that.

- Whatever. - We must go to the moon and do the other things. - Right. - Not because they're easy, but because they're hard. But instead of the moon, let's have lots of sex. Okay, but there's a lot of ways to have descendants, right? - Right, but so that's the whole point.

When the world gets more complicated and there are many possible strategies, it's having that as your abstract preference that will force you to think through those possibilities and pick the one that's most effective. - So just to clarify, descendants doesn't necessarily mean the narrow definition of descendants, meaning humans having sex and then having babies.

- Exactly. - You can have artificial intelligence systems in whom you instill some capability of cognition and perhaps even consciousness. You can also create through genetics and biology clones of yourself or slightly modified clones, thousands of them. - Right. - So all kinds of descendants. It could be descendants in the space of ideas too, if somehow we no longer exist in this meat vehicle.

It's now just like whatever the definition of a life form is, you have descendants of those life forms. - Yes, and they will be thoughtful about that. They will have thought about what counts as a descendant and that'll be important to them to have the right concept. - So the they there is very interesting, who the they are.

- But the key thing is we're making predictions that I think are somewhat robust about what our distant descendants will be like. Another thing I think you would automatically accept is they will almost entirely be artificial. And I think that would be the obvious prediction about any aliens we would meet.

That is they would long sense have given up reproducing biologically. - Well, it's all, it's like organic or something. It's all real and it's-- - It might be squishy and made out of hydrocarbons, but it would be artificial in the sense of made in factories with designs on CAD things, right?

Factories with scale economies. So the factories we have made on Earth today have much larger scale economies than the factories in our cells. So the factories in our cells, they are marvels, but they don't achieve very many scale economies. They're tiny little factories. - But they're all factories.

- Yes. - Factories on top of factories. So everything, the factories on top-- - But the factories that are designed is different than sort of the factories that have evolved. - I think the nature of the word design is very interesting to uncover there. But let me, in terms of aliens, let me go, let me analyze your Twitter like it's Shakespeare.

- Okay. - There's a tweet that says, define "hello" in quotes, alien civilizations as one that might, in the next million years, identify humans as intelligent and civilized, travel to Earth and say, "Hello," by making their presence and advanced abilities known to us. The next 15 polls, this is a Twitter thread, the next 15 polls ask about such "hello" aliens.

And what these polls ask your Twitter followers is what they think those aliens will be like. Certain particular qualities. So poll number one is, what percent of "hello" aliens evolved from biological species with two main genders? And, you know, the popular vote is above 80%. So most of them have two genders.

What do you think about that? I'll ask you about some of these 'cause they're so interesting. It's such an interesting question. - It is a fun set of questions. - Yes, a fun set of questions. So the genders as we look through evolutionary history, what's the usefulness of that?

As opposed to having just one or like millions. - So there's a question in evolution of life on Earth, there are very few species that have more than two genders. There are some, but they aren't very many. But there's an enormous number of species that do have two genders, much more than one.

And so there's literature on why multiple genders evolved and then sort of what's the point of having males and females versus hermaphrodites. So most plants are hermaphrodites. That is, they would mate male and female, but each plant can play either role. And then most animals have chosen to split into males and females.

And then they're differentiating the two genders. And there's an interesting set of questions about why that happens. - 'Cause you can do selection. You basically have one gender compete for the affection of the other and there's a sexual partnership that creates the offspring. So there's sexual selection. Like at a party, it's nice to have dance partners.

And then each one gets to choose based on certain characteristics. And that's an efficient mechanism for adapting to the environment, being successfully adapted to the environment. - It does look like there's an advantage. If you have males, then the males can take higher variance. And so there can be stronger selection among the males in terms of weeding out genetic mutations because the males have higher variance in their mating success.
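A toy simulation of that selection argument (every parameter below is assumed, purely for illustration): when each child's father is the least mutation-loaded of several competing males, i.e., male mating success has high variance, the population ends up carrying a much lower mutation load than when every male is equally likely to mate:

```python
import random

random.seed(1)

POP, GENS, MUT_PROB = 400, 300, 0.5   # per-sex population, generations, chance of a new mutation per child

def mean_mutation_load(male_competition: int) -> float:
    """male_competition = 1: every male is equally likely to father a child
    (low variance in male success). Larger values: each child's father is the
    least-loaded of several random candidates (high variance, so heavily
    mutated males are weeded out)."""
    males = [0.0] * POP
    females = [0.0] * POP
    for _ in range(GENS):
        def make_child() -> float:
            mother = random.choice(females)
            father = min(random.sample(males, male_competition))
            mutation = 1.0 if random.random() < MUT_PROB else 0.0
            return (mother + father) / 2 + mutation
        new_males = [make_child() for _ in range(POP)]
        new_females = [make_child() for _ in range(POP)]
        males, females = new_males, new_females
    return sum(males + females) / (2 * POP)

print("mean load, no male competition:    ", mean_mutation_load(1))
print("mean load, strong male competition:", mean_mutation_load(8))
```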

- Sure, okay. Question number two, what percent of hello aliens evolved from land animals as opposed to plants or ocean/air organisms? By the way, I did recently see that only 10% of species on Earth are in the ocean. So there's a lot more variety on land. - There is.

- It's interesting. So why is that? I don't even, I can't even intuit exactly why that would be. Maybe survival on land is harder, and so you get a lot-- - So the story that I understand is it's about small niches. So speciation can be promoted by having multiple different species.

So in the ocean, species are larger. That is, there are more creatures in each species because the ocean environments don't vary as much. So if you're good in one place, you're good in many other places. But on land, and especially in rivers, rivers contain an enormous percentage of the kinds of species on land, you see, because they vary so much from place to place.

And so a species can be good in one place, and then other species can't really compete because they came from a different place where things are different. So it's a remarkable fact, actually, that speciation promotes evolution in the long run. That is, more evolution has happened on land because there have been more species on land because each species has been smaller.

And that's actually a warning about something called rot that I've thought a lot about, which is one of the problems with even a world government: large systems of software today just consistently rot and decay with time and have to be replaced. And that plausibly also is a problem for other large systems, including biological systems, legal systems, regulatory systems.

And it seems like large species actually don't evolve as effectively as small ones do. And that's an important thing to notice. And that's actually different from ordinary sort of evolution in economies on Earth in the last few centuries, say. On Earth, more technical evolution and economic growth happens in larger integrated cities and nations.

But in biology, it's the other way around. More evolution happened in the fragmented species. - Yeah, it's such a nuanced discussion 'cause you can also push back in terms of nations and at least companies. It's like large companies seems to evolve less effectively. There is something that, you know, they have more resources, more, they don't even have better resilience.

And when you look at the scale of decades and centuries, it seems like a lot of large companies die. - But still large economies do better. Like large cities grow better than small cities. Large integrated economies like the United States or the European Union do better than small fragmented ones.

So even-- - Yeah, sure. That's a very interesting long discussion. But so most of the people, and obviously votes on Twitter represent the absolute objective truth of things. So most-- - But an interesting question about oceans is, okay, remember I told you about how most planets would last for trillions of years and be later, right?

So people have tried to explain why life appeared on Earth by saying, oh, all those planets are gonna be unqualified for life because of various problems. That is, they're around smaller stars, which last longer, and smaller stars have some things like more solar flares, maybe more tidal locking. But almost all of these problems with longer lived planets aren't problems for ocean worlds.

And a large fraction of planets out there are ocean worlds. So if life can appear on an ocean world, then that pretty much ensures that these planets that last a very long time could have advanced life because most, you know, there's a huge fraction of ocean worlds. - So that's actually an open question.

So when you say, sorry, when you say life appear, you're kind of saying life and intelligent life. So like, so that's an open question. Is land, and that's I suppose the question behind the Twitter poll, which is a grabby alien civilization that comes to say hello. What's the chance that they first began their early steps, the difficult steps they took on land?

What do you think? 80%, most people on Twitter think it's very likely. - Right. - What do you think? - I think people are discounting ocean worlds too much. That is, I think people tend to assume that whatever we did must be the only way it's possible, and I think people aren't giving enough credit for other possible paths, but.

- Dolphins, Waterworld, by the way, people criticize that movie. I love that movie. Kevin Costner can do me no wrong. Okay, next question. What percent of hello aliens once had a nuclear war with greater than 10 nukes fired in anger? So not through incompetence or as an accident.

- Intentional firing of nukes, and less than 20% was the most popular vote. - That just seems wrong to me. - So like, I wonder what, so most people think once you get nukes, we're not gonna fire them. They believe in the power of the game theory. - I think they're assuming that if you had a nuclear war, then that would just end civilization for good.

I think that's the thinking. - That's the main thing. - Right, and I think that's just wrong. I think you could rise again after a nuclear war. It might take 10,000 years or 100,000 years, but it could rise again. - So what do you think about mutually assured destruction as a force to prevent people from firing nuclear weapons?

That's a question that, to a terrifying degree, has been raised now with what's going on. - Well, I mean, clearly it has had an effect. The question is just how strong an effect for how long? I mean, clearly we have not gone wild with nuclear war, and clearly the devastation that you would get if you initiated a nuclear war is part of the reasons people have been reluctant to start a war.

The question is just how reliably will that ensure the absence of a war? - Yeah, the night is still young. - Exactly. - So it's been 70 years or whatever it's been. I mean, but what do you think? Do you think we'll see nuclear war in the century? - I don't know if in the century, but it's the sort of thing that's likely to happen eventually.

- That's a very loose statement. Okay, I understand. Now this is where I pull you out of your mathematical model and ask a human question. Do you think, this particular human question-- - I think we've been lucky that it hasn't happened so far. - But what is the nature of nuclear war?

Let's think about this. There are dictators, there are democracies, miscommunication. How do wars start? World War I, World War II. - So the biggest datum here is that we've had an enormous decline in major war over the last century. So that has to be taken into account. Now, the problem is, war is a process that has a very long tail.

That is, there are rare, very large wars. So the average war is much worse than the median war because of this long tail. And that makes it hard to identify trends over time. So the median war has clearly gone way down in the last century, that is, the median rate of war.

But it could be that's because the tail has gotten thicker, and in fact the average war is just as bad, with most of the damage in the big wars. So that's the thing we're not so sure about. - There's no strong data on wars which, because of the destructive nature of the weapons, kill hundreds of millions of people.

There's no data on this. - Right. - But we can start intuiting. - But we can see that the power law, we can do a power law fit to the rate of wars. And it's a power law with a thick tail. So it's one of those things that you should expect most of the damage to be in the few biggest ones.

So that's also true for pandemics and a few other things. For pandemics, most of the damage is in the few biggest ones. So the median pandemic is less than the average that you should expect in the future. - But that fitting of data is very questionable, because everything you said is correct.

The question is like, what can we infer about the future of civilization threatening pandemics or nuclear war from studying the history of the 20th century? So like, you can't just fit it to the data, the rate of wars and the destructive nature. Like that's not how nuclear war will happen.

Nuclear war happens with two assholes or idiots that have access to a button. - Small wars happen that way too. - No, I understand that. But it's very important, small wars aside, it's very important to understand the dynamics, the human dynamics and the geopolitics of the way nuclear war happens in order to predict how we can minimize the chance of-- - It is a common and useful intellectual strategy to take something that could be really big but is often very small, fit the distribution of the data on the small things, of which you have a lot, and then ask, do I believe the big things are really that different?

- Right, I see. - So sometimes it's reasonable to say like, say with tornadoes or even pandemics or something, the underlying process might not be that different for the big and small ones. It might not be. The fact that mutual assured destruction seems to work to some degree shows you that to some degree it's different than the small wars.

So it's a really important question to understand is are humans capable, one human, like how many humans on earth, if I give them a button now, say you pressing this button will kill everyone on earth, everyone, right? How many humans will press that button? I wanna know those numbers, like day to day, minute to minute, how many people have that much irresponsibility, evil, incompetence, ignorance, whatever word you wanna assign, there's a lot of dynamics in the psychology that leads you to press that button, but how many?

My intuition is, the more destructive that press of a button, the fewer humans you find. That number gets very close to zero very quickly, especially among people who have access to such a button. But that's perhaps more a hope than a reality. And unfortunately we don't have good data on this, which is, like, how destructive are humans willing to be?

- So I think part of this is you just have to ask what time scales you're looking at, right? So if you look at the history of war, we've had a lot of wars pretty consistently over many centuries. So if you ask, will we have a nuclear war in the next 50 years, I might say, well, probably not.

If I say 500 or 5,000 years, like if the same sort of risks are underlying and they just continue, then you have to add that up over time and think the risk is getting a lot larger the longer a time scale we're looking at. - But, okay, let's generalize nuclear war because what I was more referring to is something that kills more than 20% of humans on earth and injures or makes the other 80% suffer horribly, survive but suffer.

That's what I was referring to. So when you look at 500 years from now, there might not be nuclear war, there might be something else that has that destructive effect. And I don't know, these feel like novel questions in the history of humanity. I just don't know. I think since nuclear weapons, there's been engineered pandemics, for example, robotics, nanobots. Here's how I'd phrase the question.

- It just seems like a real new possibility that we have to contend with and we don't have good models, or from my perspective. - So if you look on, say, the last 1,000 years or 10,000 years, we could say we've seen a certain rate at which people are willing to make big destruction in terms of war.

- Yes. - Okay, and if you're willing to project that data forward, then I think if you wanna ask over periods of thousands or tens of thousands of years, you would have a reasonable data set. So the key question is what's changed lately? - Yes. - Okay, and so a big question of which I've given a lot of thought to, what are the major changes that seem to have happened in culture and human attitudes over the last few centuries and what's our best explanation for those so that we can project them forward into the future?

And I have a story about that, which is the story that we have been drifting back toward forager attitudes in the last few centuries as we get rich. So the idea is we spent a million years being a forager, and that was a very sort of standard lifestyle that we know a lot about.

Foragers sort of live in small bands, they make decisions cooperatively, they share food, they don't have much property, et cetera. And humans liked that. And then 10,000 years ago, farming became possible, but it was only possible because we were plastic enough to really change our culture. Farming styles and cultures are very different.

They have slavery, they have war, they have property, they have inequality, they have kings, they stay in one place instead of wandering, they don't have as much diversity of experience or food, they have more disease. Farming life is just very different. But humans were able to sort of introduce conformity and religion and all sorts of things to become just a very different kind of creature as farmers.

Farmers are just really different than foragers in terms of their values and their lives. But the pressures that made foragers into farmers were in part mediated by poverty. Farmers are poor, and if they deviated from the farming norms that people around them supported, they were quite at risk of starving to death.

And then in the last few centuries, we've gotten rich. And as we've gotten rich, the social pressures that turned foragers into farmers have become less persuasive to us. So for example, a farming young woman who was told, "If you have a child out of wedlock, "you and your child may starve," that was a credible threat.

She would see actual examples around her to make that a believable threat. Today, if you say to a young woman, "You shouldn't have a child out of wedlock," she will see other young women around her doing okay that way. We're all rich enough to be able to afford that sort of a thing, and therefore, she's more inclined often to go with her inclinations, her sort of more natural inclinations about such things rather than to be pressured to follow the official farming norms of that you shouldn't do that sort of thing.

And all through our lives, we have been drifting back toward forager attitudes because we've been getting rich. And so aside from at work, which is an exception, but elsewhere, I think this explains trends toward less slavery, more democracy, less religion, less fertility, more promiscuity, more travel, more art, more leisure, fewer work hours.

All of these trends are basically explained by becoming more forager-like. And much science fiction celebrates this. Star Trek or the culture novels, people like this image that we are moving toward this world where basically like foragers, we're peaceful, we share, we make decisions collectively, we have a lot of free time, we are into art.

So forager, you know, forager is a word, and it's a loaded word because it's connected to the actual, what life was actually like at that time. As you mentioned, we sometimes don't do a good job of telling accurately what life was like back then. But you're saying if it's not exactly like foragers, it rhymes in some fundamental way.

- Right. - 'Cause you also said peaceful. Is it obvious that a forager with a nuclear weapon would be peaceful? I don't know if that's 100% obvious. - So we know, again, we know a fair bit about what foragers' lives were like. The main sort of violence they had would be sexual jealousy.

They were relatively promiscuous, and so there'd be a lot of jealousy. But they did not have organized wars with each other. That is, they were at peace with their neighboring forager bands. They didn't have property in land or even in people. They didn't really have marriage. And so they were, in fact, peaceful.

- And when you think about large-scale wars, they don't start large-scale wars. - Right, they didn't have coordinated large-scale wars in the way chimpanzees do. Now, chimpanzees do have wars between one tribe of chimpanzees and others, but human foragers did not. Farmers returned to that, of course, the more chimpanzee-like styles.

- Well, that's a hopeful message. If we could return real quick to the Hello Aliens Twitter thread, one of them is really interesting about language. What percent of Hello Aliens would be able to talk to us in our language? So this is the question of communication. It actually gets to the nature of language.

- It also gets to the nature of how advanced you expect them to be. So I think some people see that we have advanced over the last thousands of years and we aren't reaching any sort of limit, and so they tend to assume it could go on forever. And I actually tend to think that within, say, 10 million years, we will sort of max out on technology.

We will sort of learn everything that's feasible to know, for the most part. And then obstacles to understanding would more be about cultural differences, like ways in which different places have just chosen to do things differently. And so then the question is, is it even possible to communicate across some cultural distances?

And I might think, yeah, I could imagine some maybe advanced aliens who've become so weird and different from each other they can't communicate with each other, but we're probably pretty simple compared to them. So I would think, sure, if they wanted to, they could communicate with us. - So it's the simplicity of the recipient.

I tend to, just to push back, let's explore the possibility where that's not the case. Can we communicate with ants? I find that, like this idea that-- - We're not very good at communicating in general. - Oh, you're saying, all right, I see. You're saying once you get orders of magnitude better at communicating.

- Once they had maxed out on all communication technology in general, and they just understood in general how to communicate with lots of things, and had done that for millions of years. - But you have to be able to, this is so interesting, as somebody who cares a lot about empathy and imagining how other people feel, communication requires empathy, meaning you have to truly understand how the other person, the other organism sees the world.

It's like a four-dimensional species talking to a two-dimensional species. It's not as trivial as, to me at least, as it might at first seem. - So let me reverse my position a little, because I'll say, well, the hello aliens question really combines two different scenarios that we're slipping over.

So one scenario would be that the hello aliens would be like grabby aliens. They would be just fully advanced. They would have been expanding for millions of years. They would have a very advanced civilization, and then they would finally be arriving here after a billion years, perhaps, of expanding, in which case they're gonna be crazy advanced at some maximum level.

But the hello aliens question is also about aliens we might meet soon, which might be sort of UFO aliens, and UFO aliens probably are not grabby aliens. - How do you get here if you're not a grabby alien? - Well, they would have to be able to travel. But they would not be expansive.

So if it's a road trip, it doesn't count as grabby. So we're talking about expanding the comfortable colony. - The question is, if UFOs, some of them are aliens, what kind of aliens would they be? This is sort of the key question you have to ask in order to try to interpret that scenario.

The key fact we would know is that they are here right now, but the universe around us is not full of an alien civilization. So that says right off the bat that they chose not to allow massive expansion of a grabby civilization. - Is it possible that they chose it, but we just don't see them yet?

These are the stragglers, the journeymen, the-- - So the timing coincidence is, it's almost surely if they are here now, they are much older than us. They are many millions of years older than us. And so they could have filled the galaxy in that last millions of years if they had wanted to.

That is, they couldn't just be right at the edge. Very unlikely. Most likely they would have been around waiting for us for a long time. They could have come here anytime in the last millions of years, and they just chosen, they've been waiting around for this, or they just chose to come recently.

But the timing coincidence, it would be crazy unlikely that they just happened to be able to get here, say in the last 100 years. They would no doubt have been able to get here far earlier than that. - Again, we don't know. So this is the trend of, like, UFO sightings on Earth.

We don't know if this kind of increase in sightings has anything to do with actual visitation. - I'm just talking about the timing. They arose at some point in space-time. And it's very unlikely that that was just at the point that they could just barely get here recently.

Almost surely they would have-- - But they might have been here. - They could have gotten here much earlier. - And well, throughout the stretch of several billion years that Earth existed, they could have been here often. - Exactly, so they could have therefore filled the galaxy long time ago if they had wanted to.

- Let's push back on that. The question to me is, isn't it possible that the expansion of a civilization is much harder than the travel, the sphere of the reachable is different than the sphere of the colonized. So isn't it possible that the sphere of places where the stragglers go, the different people that journey out, the explorers, is much, much larger and grows much faster than the civilization?

So in which case, they would visit us. There's a lot of visitors, the grad students of the civilization. They're exploring, they're collecting the data, but we're not yet going to see them. And by yet, I mean across millions of years. - The time delay between when the first thing might arrive and then when colonists could arrive en masse and do a massive amount of work is cosmologically short.

In human history, of course, sure, there might be a century between that, but a century is just a tiny amount of time on the scales we're talking about. - So this is, in computer science, there's ant colony optimization. It's true for ants. So it's like when the first ant shows up, it's likely, and if there's anything of value, it's likely the other ants will follow quickly.

Yeah. - Relatively short. It's also true that traveling over very long distances, probably one of the main ways to make that feasible is that you land somewhere, you colonize a bit, you create new resources that can then allow you to go farther. - Many short hops as opposed to a giant, long journey.

- Exactly, but those hops require that you are able to start a colonization of sorts along those hops, right? You have to be able to stop somewhere, make it into a way station such that you can then support you moving farther. - So what do you think of, there's been a lot of UFO sightings, what do you think about those UFO sightings and what do you think if any of them are of extraterrestrial origin and we don't see giant civilizations out in the sky, how do you make sense of that then?

- I wanna do some clearing of throats, which is people like to do on this topic, right? They wanna make sure you understand they're saying this and not that, right? So I would say the analysis needs both a prior and a likelihood. So the prior is what are the scenarios that are at all plausible in terms of what we know about the universe and then the likelihood is the particular actual sightings, like how hard are those to explain through various means.

I will establish myself as somewhat of an expert on the prior. I would say my studies and the things I've studied make me an expert and I should stand up and have an opinion on that and be able to explain it. The likelihood, however, is not my area of expertise.

That is, I'm not a pilot, I don't do atmospheric studies; I haven't studied in detail the various kinds of atmospheric phenomena or whatever that might be used to explain the particular sightings. I can just say from my amateur stance, the sightings look damn puzzling. They do not look easy to dismiss; the attempts I've seen to easily dismiss them seem to me to fail. It seems like these are pretty puzzling, weird stuff that deserves an expert's attention in terms of considering, asking what the likelihood is.

So analogy I would make is a murder trial, okay? On average, if we say what's the chance any one person murdered another person as a prior probability, maybe one in a thousand people get murdered, maybe each person has a thousand people around them who could plausibly have done it, so the prior probability of a murder is one in a million.

But we allow murder trials because often evidence is sufficient to overcome a one in a million prior because the evidence is often strong enough, right? My guess, rough guess for the UFOs as aliens scenario, at least some of them, is the prior is roughly one in a thousand, much higher than the usual murder trial, plenty high enough that strong physical evidence could put you over the top to think it's more likely than not.
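To make the prior-versus-likelihood framing concrete, here is the update in odds form; only the priors come from the conversation, and the likelihood ratios are hypothetical placeholders standing in for how strongly the physical evidence favors one hypothesis:

```python
def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Murder-trial analogy: a one-in-a-million prior needs evidence with a
# likelihood ratio well over a million to make guilt more likely than not.
print(posterior_probability(1e-6, 2_000_000))   # ~0.67

# UFOs-as-aliens: with a rough one-in-a-thousand prior, evidence "only" about
# 1,500 times more likely under the alien hypothesis already passes 50%.
# (The 1,500 is a hypothetical number, not anything claimed in the conversation.)
print(posterior_probability(1e-3, 1_500))       # ~0.60
```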

But I'm not an expert on that physical evidence, I'm gonna leave that part to someone else. I'm gonna say the prior is pretty high, this isn't a crazy scenario. So then I can elaborate on where my prior comes from. What scenario could make most sense of this data? My scenario to make sense has two main parts.

First is panspermia siblings. So panspermia is the hypothesized process by which life might have arrived on Earth from elsewhere. And a plausible time for that, I mean, it would have to happen very early in Earth's history 'cause we see life early in history. And a plausible time could have been during the stellar nursery where the sun was born with many other stars in the same close proximity with lots of rocks flying around, able to move things from one place to another.

If a rock with life on it, from some planet with life, came into that stellar nursery, it plausibly could have seeded many planets in that stellar nursery all at the same time. They're all born at the same time in the same place, pretty close to each other, lots of rocks flying around.

So a panspermia scenario would then create siblings, i.e. there would be say a few thousand other planets out there. So after the nursery forms, it drifts, it separates, they drift apart. And so out there in the galaxy, there would now be a bunch of other stars all formed at the same time, and we can actually spot them in terms of their spectrum.

And they would have then started on the same path of life as we did with that life being seeded, but they would move at different rates. And most likely, most of them would never reach an advanced level before the deadline, but maybe one other did, and maybe it did before us.

So if they did, they could know all of this, and they could go searching for their siblings. That is, they could look in the sky for the other stars whose spectra match the spectrum of the stars that came from this nursery. They could identify their sibling stars in the galaxy, the thousands of them, and those would be of special interest to them 'cause they would think, well, life might be on those.

And they could go looking for them. - You're just such a brilliant mathematical, philosophical, physical, biological idea of panspermia siblings because we all kind of started at similar time in this local pocket of the universe. And so that changes a lot of the math. - So that would create this correlation between when advanced life might appear.

No longer just random independent spaces in space-time. There'd be this cluster, perhaps. And that allows interaction between-- - The elements of the cluster, yes. - Non-grabby alien civilizations, like kind of primitive alien civilizations like us with others, and they might be a little bit ahead. That's so fascinating. - They would probably be a lot ahead.

So the puzzle is-- - Sure, sure. - If they happen before us, they probably happened hundreds of millions of years before us. - But less than a billion. - Less than a billion, but still plenty of time that they could have become grabby and filled the galaxy and gone beyond.

So there'd be plenty, so the fact is they chose not to become grabby. That would have to be the interpretation. If we have panspermia siblings-- - Plenty of time to become grabby, you said. So it should be fine. - Yes, they had plenty of time and they chose not to.

- Are we sure about this? So 100 million years is enough? - 100 million, so I told you before that I said within 10 million years, our descendants will become grabby or not. - And they'll have that choice, okay. - And so they clearly arose more than 10 million years earlier than us, so they chose not to.

- But still go on vacation, look around, so it's not grabby. - If they chose not to expand, that's going to have to be a rule they set to not allow any part of themselves to do it. If they let any little ship fly away with the ability to create a colony, the game's over.

Then they have failed to prevent it, and the universe becomes grabby from their origin with this one colony. So in order to prevent their civilization being grabby, they have to have a rule they enforce pretty strongly that no part of them can ever try to do that. - Through a global authoritarian regime or through something that's internal to them, meaning it's part of the nature of life that it doesn't want to expand as it becomes advanced-- - Like a political officer in the brain or whatever.

- Yes, there's something in human nature that prevents you from, or like alien nature, that as you get more advanced, you become lazier and lazier in terms of exploration and expansion. - So I would say they would have to have enforced a rule against expanding, and that rule would probably make them reluctant to let people leave very far.

Any one vacation trip far away could risk an expansion from this vacation trip. So they would probably have a pretty tight lid on just allowing any travel out from their origin in order to enforce this rule. - Interesting. - But then we also know, well, they would have chosen to come here.

So clearly they made an exception from their general rule to say, okay, but an expedition to Earth, that should be allowed. - It could be intentional exception or incompetent exception. - But if incompetent, then they couldn't maintain this over 100 million years, this policy of not allowing any expansion.

So we have to see they have successfully, they not just had a policy to try, they succeeded over 100 million years in preventing the expansion. That's a substantial competence. - Let me think about this. So you don't think there could be a barrier in 100 million years, you don't think there could be a barrier to, like, technological barrier to becoming expansionary?

- Imagine the Europeans had tried to prevent anybody from leaving Europe to go to the New World. And imagine what it would have taken to make that happen over 100 million years. - Yeah, it's impossible. - They would have had to have very strict, you know, guards at the borders saying, "No, you can't go." - But just to clarify, you're not suggesting that's actually possible.

- I am suggesting it's possible. - I don't know how you keep, in my silly human brain, maybe it's a brain that values freedom, but I don't know how you can keep, no matter how much force, no matter how much censorship or control or so on, I just don't know how you can keep people from exploring into the mysterious, into the unknown.

- You're thinking of people, we're talking aliens. So remember, there's a vast space of different possible social creatures they could have evolved from, different kinds of cultures they could be in, different kinds of threats. I mean, there are many things, as you talked about, that most of us would feel very reluctant to do.

This isn't one of those, but-- - Okay, so how, if the UFO sightings represent alien visitors, how the heck are they getting here under the Panspermia siblings? - So Panspermia siblings is one part of the scenario, which is that's where they came from. And from that, we can conclude they had this rule against expansion, and they've successfully enforced that.

That also creates a plausible agenda for why they would be here, that is, to enforce that rule on us. That is, if we go out and expanding, then we have defeated the purpose of this rule they set up. - Interesting. - Right, so they would be here to convince us to not expand.

- Convince in quotes. - Right, through various mechanisms. So obviously, one thing we conclude is they didn't just destroy us. That would have been completely possible, right? So the fact that they're here and we are not destroyed means that they chose not to destroy us. They have some degree of empathy or whatever their morals are that would make them reluctant to just destroy us.

They would rather persuade us. - They're, destroy their brethren. And so they may have been, there's a difference between arrival and observation. They may have been observing for a very long time. - Exactly. - And they arrived to try to, not to try, I don't think, to try. - To ensure.

- To ensure that we don't become grabby. - Which is, because we can see that they did not become grabby, they must have enforced a rule against that, and they are therefore here. That's a plausible interpretation of why they would risk this expedition, when they clearly don't risk very many expeditions over this long period, to allow this one exception: because otherwise, if they don't, we may become grabby.

And they could have just destroyed us, but they didn't. - And they're closely monitoring the technological advancing of civilization, like what nuclear weapons is one thing, is that, all right, cool, that might have less to do with nuclear weapons and more with nuclear energy. Maybe they're monitoring fusion closely.

Like, how clever are these apes getting? So no doubt, they have a button that if we get too uppity or risky, they can push the button and ensure that we don't expand, but they'd rather do it some other way. So now, that explains why they're here and why they aren't out there.

But there's another thing that we need to explain. There's another key data we need to explain about UFOs if we're gonna have a hypothesis that explains them. And this is something many people have noticed, which is they had two extreme options they could have chosen and didn't choose. They could have either just remained completely invisible.

Clearly, an advanced civilization could have been completely invisible. There's no reason they need to fly around and be noticed. They could just be in orbit in dark satellites that are completely invisible to us, watching whatever they wanna watch. That would be well within their abilities. That's one thing they could have done.

The other thing they could do is just show up and land on the White House lawn, as they say, and shake hands, like make themselves really obvious. They could have done either of those, and they didn't do either of those. That's the next thing you need to explain about UFOs as aliens.

Why would they take this intermediate approach, hanging out near the edge of visibility with somewhat impressive mechanisms, but not walking up and introducing themselves, nor just being completely invisible? - So, okay, a lot of questions there. So one, do you think it's obvious where the White House is, or the White House lawn-- - Well, it's obvious where there are concentrations of humans that you could go up to and introduce yourself.

- But is humans the most interesting thing about Earth? - Yeah, are you sure about this? Because-- - If they're worried about an expansion, then they would be worried about a civilization that could be capable of expansion. Obviously, humans are the civilization on Earth that's by far the closest to being able to expand.

- I just don't know if aliens obviously see, obviously see humans, like the individual humans, like the meat vehicles, as the center of focus for observing a life on a planet. - They're supposed to be really smart and advanced. Like, this shouldn't be that hard for them. - But I think we're actually the dumb ones, because we think humans are the important things, but it could be our ideas, it could be something about our technologies.

- But that's mediated with us, it's correlated with us. - No, we make it seem like it's mediated by us humans, but the focus for alien civilizations might be the AI systems or the technologies themselves. That might be the organism. Human is the food, the source of the organism that's under observation, versus like-- - So if what they wanted to have close contact with was something that was close to humans, then they would be contacting those, and we would just incidentally see, but we would still see.

- But don't you think, isn't it possible, taking their perspective, isn't it possible that they would want to interact with some fundamental aspect that they're interested in without interfering with it? And that's actually a very, no matter how advanced you are, it's very difficult to do, I think. - But that's puzzling.

So, I mean, the prototypical UFO observation is a shiny, big object in the sky that has very rapid acceleration and no apparent surfaces for using air to manipulate its speed. And the question is, why that, right? Again, if they just, for example, if they just wanted to talk to our computer systems, they could move some sort of a little probe that connects to a wire and reads and sends bits there.

They don't need a shiny thing flying in the sky. - But don't you think they would be, they are, would be looking for the right way to communicate, the right language to communicate? Everything you just said, looking at the computer systems, I mean, that's not a trivial thing. Coming up with a signal that us humans would not freak out too much about, but also understand, might not be that trivial.

How would you talk to things? - Well, so the not-freak-out part is another interesting constraint. So again, I said, like the two obvious strategies are just to remain completely invisible and watch, which would be quite feasible, or to just directly interact, let's come out and be really very direct, right?

I mean, there's big things that you can see around. There's big cities, there's aircraft carriers. There's lots of, if you want to just find a big thing and come right up to it and like tap it on the shoulder or whatever, that would be quite feasible. Then they're not doing that.

So my hypothesis is that one of the other questions there was, do they have a status hierarchy? And I think most animals on earth who are social animals have status hierarchy, and they would reasonably presume that we have a status hierarchy. - Take me to your leader. - Well, I would say their strategy is to be impressive and sort of get us to see them at the top of our status hierarchy.

Just to, you know, that's how, for example, we domesticate dogs, right? We convince dogs we're the leader of their pack, right? And we domesticate many animals that way, by just swapping ourselves into the top of their status hierarchy and saying, we're your top status animal, so you should do what we say, you should follow our lead.

So the idea that would be, they are going to get us to do what they want by being top status. You know, all through history, kings and emperors, et cetera, have tried to impress their citizens and other people by having the bigger palace, the bigger parade, the bigger crown and diamonds, right?

Whatever, maybe building a bigger pyramid, et cetera. It's a very well-established trend: just be high status by being more impressive than the rest. - To push back, when there's an order of, several orders of magnitude of power differential, asymmetry of power, I feel like that status hierarchy no longer applies.

It's like mimetic theory. It's like- - Most emperors are several orders of magnitude more powerful than any one member of their empire. - Let's increase that by even more. So like, if I'm interacting with ants, right? I no longer feel like I need to establish my power with ants.

I actually want to lessen, I want to lower myself to the ants. I want to become the lowest possible ant so that they would welcome me. So I'm less concerned about them worshiping me. I'm more concerned about them welcoming me, integrating me into their world. - But it is important that you be non-threatening and that you be local.

So I think, for example, if the aliens had done something really big in the sky, you know, a hundred light years away, that would be there, not here. - Yes. - And that could seem threatening. So I think their strategy to be the high status would have to be to be visible, but to be here and non-threatening.

- I just don't know if it's obvious how to do that. Like, take your own perspective. You see a planet with relatively intelligent, like complex structures being formed, like, yeah, life forms. We could see this on, say, Titan or something like that, or the moon, you know, Europa. You start to see not just primitive bacterial life, but multicellular life, and it seems to form some very complicated cellular colonies, structures that are dynamic.

There's a lot of stuff going on. Some gigantic cellular automata type of construct. How do you make yourself known to them in an impressive fashion without destroying it? Like, we know how to destroy, potentially. - Right, so if you go touch stuff, you're likely to hurt it, right? There's a good risk of hurting something by getting too close and touching it and interacting, right?

- Yeah, like landing on a White House lawn. - Right, so the claim is that their current strategy of hanging out at the periphery of our vision and just being very clearly physically impressive with very clear physically impressive abilities is at least a plausible strategy they might use to impress us and convince us that we're at the top of their status hierarchy.

And I would say, if they came closer, not only would they risk hurting us in ways that they couldn't really understand, but more plausibly, they would reveal things about themselves we would hate. So if you look at how we treat other civilizations on Earth and other people, we are generally interested in foreigners and people from other lands, and we are generally interested in their varying cultures and customs, et cetera, until we find out that they do something that violates our moral norms, and then we hate them.

(laughs) And these are aliens, for God's sakes, right? - Yeah. - There's just gonna be something about them that we hate. They eat babies, who knows what it is. But something they don't think is offensive, but that they think we might find offensive. And so they would be risking a lot by revealing a lot about themselves.

We would find something we hated. - Interesting, but do you resonate at all with mimetic theory where we only feel this way about things that are very close to us? So aliens are sufficiently different to where we'll be like fascinated, terrified or fascinated, but not like-- - Right, but if they wanna be at the top of our status hierarchy to get us to follow them, they can't be too distant.

They have to be close enough that we would see them that way. - But pretend to be close enough, right, and not reveal much, that mystery, that old Clint Eastwood cowboy say less. - I mean, the point is we're clever enough that we can figure out their agenda. That is just from the fact that we're here.

If we see that they're here, we can figure out, oh, they want us not to expand. And look, they are this huge power and they're very impressive, and a lot of us don't wanna expand, so that could easily tip us over the edge toward we already wanted to not expand.

We already wanted to be able to regulate and have a central community, and here are these very advanced, smart aliens who have survived for 100 million years, and they're telling us not to expand either. - (laughs) This is brilliant. I love this so much. Returning to panspermia siblings, just to clarify one thing, in that framework, who originated, who planted it?

Would it be a grabby alien civilization that planted the siblings or no? - The simple scenario is that life started on some other planet billions of years ago, and it went through part of the stages of evolution to advanced life, but not all the way to advanced life, and then some rock hit it, grabbed a piece of it on the rock, and that rock drifted for maybe a million years until it happened to prawn the stellar nursery, where it then seeded many stars.

- And something about that life, without being super advanced, it was nevertheless resilient to the harsh conditions of space. - There's some graphs that I've been impressed by that show sort of the level of genetic information in various kinds of life on the history of Earth, and basically, we are now more complex than the earlier life, but the earlier life was still pretty damn complex.

And so if you actually project this log graph back in history, it looks like it was many billions of years ago when you get down to zero. So plausibly, you could say there was just a lot of evolution that had to happen before you could get to the simplest life we've ever seen; the earliest life in the history of life on Earth was still pretty damn complicated.

Okay, and so that's always been this puzzle. How could life get to this enormously complicated level in the short period it seems to at the beginning of Earth's history? You know, it appeared within at most 300 million years, and it was already really complicated at that point, so panspermia allows you to explain that complexity by saying, well, it's been another five billion years on another planet, going through lots of earlier stages where it was working its way up to the level of complexity you see at the beginning of Earth.
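A toy version of that "project the log graph back" argument (the data points below are invented stand-ins, not real measurements; published extrapolations in this spirit exist, but this is only meant to show the shape of the reasoning): fit log-complexity against time, ask where the line hits zero, and the extrapolated origin lands well before the formation of Earth:

```python
# Invented, illustrative data: (billions of years ago, log10 of genome complexity).
points = [(3.5, 5.0),   # early prokaryote-like life (assumed value)
          (2.0, 6.5),   # eukaryote-like (assumed value)
          (0.5, 7.8),   # complex animals (assumed value)
          (0.0, 8.5)]   # today (assumed value)

# Least-squares fit of log-complexity as a linear function of time.
n = len(points)
xs = [-t for t, _ in points]   # time axis pointing toward the present
ys = [c for _, c in points]
x_mean, y_mean = sum(xs) / n, sum(ys) / n
num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
den = sum((x - x_mean) ** 2 for x in xs)
slope = num / den
intercept = y_mean - slope * x_mean

origin_bya = intercept / slope   # where the fitted line reaches zero complexity
print(f"growth rate: ~{slope:.2f} orders of magnitude per billion years")
print(f"extrapolated zero-complexity origin: {origin_bya:.1f} billion years ago")
# With these made-up numbers the origin lands around 8-9 billion years ago,
# i.e. well before Earth formed, which is the shape of the argument for panspermia.
```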

- We'll try to talk about other ideas of the origin of life, but let me return to UFO sightings. Are there other explanations possible outside of panspermia siblings that can explain no grabby aliens in the sky, and yet alien arrival on Earth? - Well, the other categories of explanations that most people will use are, well, first of all, just mistakes, like, you know, you're confusing something ordinary for something mysterious, right?

Or some sort of secret organization, like our government is secretly messing with us, and trying to do a false flag ops or whatever, right? They're trying to convince the Russians or the Chinese that there might be aliens and scare them into not attacking or something, right? 'Cause if you know the history of World War II, say the US government did all these big fake operations where they were faking a lot of big things in order to mess with people, so that's a possibility.

The government's been lying and faking things and paying people to lie about what they saw, et cetera. That's a plausible set of explanations for the range of sightings seen. And another explanation people offer is some other hidden organization on Earth, or some secret organization somewhere that has much more advanced capabilities than anybody's given it credit for.

For some reason, it's been keeping secret. I mean, they all sound somewhat implausible, but again, we're looking for maybe one in a thousand sort of priors. Question is, you know, could they be in that level of plausibility? - Can we just linger on this? So you, first of all, you've written, talked about, thought about so many different topics.

You're an incredible mind, and I just thank you for sitting down today. I'm almost like at a loss of which place we explore, but let me, on this topic, ask about conspiracy theories. 'Cause you've written about institutions, authorities. What, this is a bit of a therapy session, but what do we make of conspiracy theories?

- The phrase itself is pushing you in a direction, right? So clearly, in history, we have had many large coordinated keepings of secrets, right? Say the Manhattan Project, right? And there was, what, hundreds of thousands of people working on that over many years. But they kept it a secret, right?

Clearly, many large military operations have kept things secret over even decades with many thousands of people involved. So clearly, it's possible to keep something secret over time periods, but the more people you involve, the more time you're assuming, and the less centralized the organization or the less discipline it has, the harder it gets to believe.

But we're just trying to calibrate, basically, in our minds which kind of secrets can be kept by which groups over what time periods for what purposes, right? - But let me, I don't have enough data. So I'm somebody, I hang out with people, and I love people, I love all things, really.

And I just, I think that most people, even the assholes, have the capacity to be good, and they're beautiful, and I enjoy them. So the kind of data my brain, whatever the chemistry of my brain is that sees the beautiful in things, is maybe collecting a subset of data that doesn't allow me to intuit the competence that humans are able to achieve in constructing a conspiracy.

So for example, one thing that people often talk about is intelligence agencies, this broad thing they say: the CIA, the FSB, the British intelligence agencies. I've, fortunately or unfortunately, never gotten a chance, that I know of, to talk to any member of those intelligence agencies. Nor to take a peek behind the curtain, or the first curtain, I don't know how many levels of curtains there are, and so I can't intuit.

But my interactions with government, I was funded by DOD and DARPA, and I've interacted, been to the Pentagon. With all due respect to my friends, lovely friends in government, and there are a lot of incredible people, but there is a very giant bureaucracy that sometimes suffocates the ingenuity of the human spirit, is one way I can put it.

Meaning, it's difficult for me to imagine extreme competence at a scale of hundreds or thousands of human beings. Now, that's my very anecdotal data on the situation. And so I try to build up my intuition about centralized systems of government: how much conspiracy is possible, how much the intelligence agencies, or some other source, can generate sufficiently robust propaganda that controls the populace.

If you look at World War II, as you mentioned, there have been extremely powerful propaganda machines on the side of Nazi Germany, on the side of the Soviet Union, on the side of the United States, and all these different mechanisms. Sometimes they control the free press through social pressures.

Sometimes they control the press through the threat of violence, as you do in authoritarian regimes. Sometimes it's like deliberately, the dictator writing the news, the headlines and literally announcing it. And something about human psychology forces you to embrace the narrative and believe the narrative. And at scale, that becomes reality when the initial spark was just a propaganda thought in a single individual's mind.

So I can't necessarily intuit what's possible, but I'm skeptical of the power of human institutions to construct conspiracies that cause suffering at scale. Especially in this modern age, when information is becoming more and more accessible to the populace. Anyway, I don't know if you can elucidate-- - You said it would cause suffering at scale, but of course, say during wartime, the people who are managing the various conspiracies, like D-Day or the Manhattan Project, they thought that their conspiracy was avoiding harm rather than causing harm.

So if you can get a lot of people to think that supporting the conspiracy is helpful, then a lot more might do that. And there's just a lot of things that people just don't wanna see. So if you can make your conspiracy the sort of thing that people wouldn't wanna talk about anyway, even if they knew about it, you're most of the way there.

So I have learned over the years, many things that most ordinary people should be interested in, but somehow don't know, even though the data has been very widespread. So I have this book, "The Elephant and the Brain," and one of the chapters is there on medicine. And basically, most people seem ignorant of the very basic fact that when we do randomized trials where we give some people more medicine than others, the people who get more medicine are not healthier.

Just overall, in general, just like induce somebody to get more medicine because you just give them more budget to buy medicine, say. Not a specific medicine, just the whole category. And you would think that would be something most people should know about medicine. You might even think that would be a conspiracy theory to think that would be hidden, but in fact, most people never learn that fact.

- So just to clarify, just as a general high-level statement: the more medicine you take, the less healthy you are? - Randomized experiments don't find that. They do not find that more medicine makes you more healthy. There's just no connection. In randomized experiments, there's no relationship between more medicine and being healthier.

- So it's not a negative relationship, but it's just no relationship. - Right. - And so the conspiracy theory would say that the businesses that sell you medicine don't want you to know that fact. And then you're saying that there's also part of this is that people just don't wanna know.

- They just don't wanna know. And so they don't learn this. So I've lived in the Washington area for several decades now, reading the Washington Post regularly. Every week there was a special section on health and medicine. This fact was never mentioned in that section of the paper in all the 20 years I read it.

- So do you think there is some truth to this caricatured blue pill, red pill, where most people don't want to know the truth? - There are many things about which people don't want to know certain kinds of truths. - Yeah. - That is, bad-looking truths, truths that are discouraging, truths that sort of take away the justification for things they feel passionate about.

- Do you think that's a bad aspect of human nature? That's something we should try to overcome? - Well, as we discussed, my first priority is to just tell people about it, to do the analysis and the cold facts of what's actually happening, and then to try to be careful about how we can improve.

So our book, "The Elephant in the Brain," coauthored with Kevin Simler, is about hidden motives in everyday life. And our first priority there is just to explain to you what are the things that you are not looking at that you are reluctant to look at. And many people try to take that book as a self-help book where they're trying to improve themselves and make sure they look at more things.

And that often goes badly because it's harder to actually do that than you think. - Yeah. - And so, but we at least want you to know that this truth is available if you want to learn about it. - It's the Nietzsche, "If you gaze long into the abyss, the abyss gazes into you." Let's talk about this "Elephant in the Brain." Amazing book.

"The Elephant in the Room" is, quote, "An important issue that people are reluctant to acknowledge or address a social taboo. The elephant in the brain is an important but unacknowledged feature of how our mind works, an introspective taboo. You describe selfishness and self-deception as the core or some of the core elephants, as some of the elephant offspring in the brain.

Selfishness and self-deception. All right. Can you explain why these are the taboos in our brain that we don't want to acknowledge to ourselves? - In your conscious mind, the one that's listening to me, that I'm talking to at the moment, you like to think of yourself as the president or king of your mind, ruling over all that you see, issuing commands that are immediately obeyed.

You are instead better understood as the press secretary of your brain. You don't make decisions, you justify them to an audience. That's what your conscious mind is for. You watch what you're doing and you try to come up with stories that explain what you're doing so that you can avoid accusations of violating norms.

So humans compared to most other animals have norms and this allows us to manage larger groups with our morals and norms about what we should or shouldn't be doing. This is so important to us that we needed to be constantly watching what we were doing in order to make sure we had a good story to avoid norm violations.

So many norms are about motives. So if I hit you on purpose, that's a big violation. If I hit you accidentally, that's okay. I need to be able to explain why it was an accident and not on purpose. - So where does that need come from for your own self-preservation?

- Right, so humans have norms, and we have the norm that if we see anybody violating a norm, we need to tell other people and then coordinate to make them stop and punish them for violating. Such punishments are strong enough and severe enough that we each want to avoid being successfully accused of violating norms.

So for example, hitting someone on purpose is a big clear norm violation. If we do it consistently, we may be thrown out of the group and that would mean we would die. Okay, so we need to be able to convince people we are not going around hitting people on purpose.

If somebody happens to be at the other end of our fist and their face connects, that was an accident and we need to be able to explain that. And similarly for many other norms humans have, we are serious about these norms and we don't want people to violate them. If we find them violating, we're going to accuse them.

But many norms have a motive component and so we are trying to explain ourselves and make sure we have a good motive story about everything we do, which is why we're constantly trying to explain what we're doing and that's what your conscious mind is doing. It is trying to make sure you've got a good motive story for everything you're doing.

And that's why you don't know why you really do things. What you know is what the good story is about why you've been doing things. - And that's the self-deception. And you're saying that there is a machine, the actual dictator is selfish. And then you're just the press secretary who's desperately doesn't want to get fired and is justifying all of the decisions of the dictator.

And that's the self-deception. - Right, now most people actually are willing to believe that this is true in the abstract. So our book has been classified as psychology and it was reviewed by psychologists and the basic way that psychology referees and reviewers responded is to say this is well known.

Most people accept that there's a fair bit of self-deception. - But they don't want to accept it about themselves. - Well, they don't want to accept it about the particular topics that we talk about. So people accept the idea in the abstract that they might be self-deceived or that they might not be honest about various things.

But that hasn't penetrated into the literatures where people are explaining particular things like why we go to school, why we go to the doctor, why we vote, et cetera. So our book is mainly about 10 areas of life and explaining about in each area what our actual motives there are.

And people who study those things have not admitted that hidden motives are explaining those particular areas. - So they haven't taken the leap from theoretical psychology to actual public policy. - Exactly. - And economics and all that kind of stuff. Well, let me just linger on this and bring up my old friends Zingman Freud and Carl Jung.

So how vast is this landscape of the unconscious mind, the power and the scope of the dictator? Is it only dark there? Is it some light? Is there some love? - The vast majority of what's happening in your head you are unaware of. So in a literal sense, the unconscious, the aspects of your mind that you're not conscious of is the overwhelming majority.

But that's just true in a literal engineering sense. Your mind is doing lots of low-level things and you just can't be consciously aware of all that low-level stuff. But there's plenty of room there for lots of things you're not aware of. - But can we try to shine a light at the things we're unaware of, specifically, now again, staying with the philosophical psychology side for a moment.

You know, can you shine a light in the Jungian shadow? Can you, what's going on there? What is this machine like? Like what level of thoughts are happening there? Is it something that we can even interpret? If we somehow could visualize it, is it something that's human interpretable? Or is it just a kind of chaos of like monitoring different systems in the body, making sure you're happy, making sure you're fed, all those kind of basic forces that form abstractions on top of each other and they're not introspective at all?

- We humans are social creatures. Plausibly being social is the main reason we have these unusually large brains. Therefore, most of our brain is devoted to being social. And so the things we are very obsessed with and constantly paying attention to are, how do I look to others? What would others think of me if they knew these various things they might learn about me?

- So that's close to being fundamental to what it means to be human: caring what others think. - Right, to be trying to present a story that would look okay to others. We're very constantly thinking, what do other people think? - So let me ask you this question then about you, Robin Hanson, who in many places, sometimes for fun, sometimes as a basic statement of principle, likes to disagree with what the majority of people think.

So how do you explain that? How are you deceiving yourself in this task? Why is the dictator manipulating you inside your head to be self-critical? There are norms; why do you wanna stand out in this way? Why do you want to challenge the norms in this way?

- Almost by definition, I can't tell you what I'm deceiving myself about, but the more practical strategy that's quite feasible is to ask about what are typical things that most people deceive themselves about and then to own up to those particular things. - Sure, what's a good one? - So for example, I can very much acknowledge that I would like to be well thought of, that I would be seeking attention and glory and praise from my intellectual work and that that would be a major agenda driving my intellectual attempts.

So if there were topics that other people would find less interesting, I might be less interested in those for that reason, for example. I might want to find topics where other people are interested and I might want to go for the glory of finding a big insight rather than a small one and maybe one that was especially surprising.

That's also of course consistent with some more ideal concept of what an intellectual should be, but most intellectuals are relatively risk-averse. They are in some local intellectual tradition and they are adding to that and they are staying conforming to the sort of usual assumptions and usual accepted beliefs and practices of a particular area so that they can be accepted in that area and treated as part of the community.

But you might think for the purpose of the larger intellectual project of understanding the world better, people should be less eager to just add a little bit to some tradition and they should be looking for what's neglected between the major traditions and major questions. They should be looking for assumptions maybe we're making that are wrong.

They should be looking at ways, things that are very surprising, like things that would be, you would have thought a priori unlikely that once you are convinced of it, you find that to be very important and a big update, right? So you could say that one motivation I might have is less motivated to be sort of comfortably accepted into some particular intellectual community and more willing to just go for these more fundamental long shots that should be very important if you could find them.

- Which, if you can find them, would get you appreciated-- - Attention, respect. - Across a larger number of people, across the longer time span of history. - Right. - So like maybe the small local community will say you suck. - Right. - You must conform, but the larger community will see the brilliance of you breaking out of the cage of the small conformity into a larger cage.

It's always a bigger, there's always a bigger cage, and then you'll be remembered by more. Yeah, also that explains your choice of colorful shirt that looks great in a black background, so you definitely stand out. - Right, now of course, you could say, well, you could get all this attention by making false claims of dramatic improvement.

- Sure. - And then wouldn't that be much easier than actually working through all the details-- - Why not? - To make true claims, right? - Let me ask the press secretary, why not? Why, so of course, you spoke several times about how much you value truth and the pursuit of truth.

That's a very nice narrative. - Right. - Hitler and Stalin also talked about the value of truth. Do you worry when you introspect, as broadly as all humans might, that it becomes a drug? This being a martyr, being the person who points out that the emperor wears no clothes, even when the emperor is obviously dressed, just to be the person who points out that the emperor is wearing no clothes.

Do you think about that? - So I think the standards you hold yourself to are dependent on the audience you have in mind. So if you think of your audience as relatively easily fooled or relatively gullible, then you won't bother to generate more complicated, deep arguments and structures and evidence to persuade somebody who has higher standards because why bother?

You can get away with something much easier. And of course, if you are, say, a salesperson, or you make money on sales, then you don't need to convince the top few percent of the most sharp customers. You can just go for the bottom 60% of the most gullible customers and make plenty of sales, right?

So I think intellectuals have to vary. One of the main ways intellectuals vary is in who is their audience in their mind. Who are they trying to impress? Is it the people down the hall? Is it the people who are reading their Twitter feed? Is it their parents? Is it their high school teacher?

Or is it Einstein and Freud and Socrates, right? So I think those of us who are especially arrogant, especially think that we're really big shot or have a chance at being a really big shot, we were naturally gonna pick the big shot audience that we can. We're gonna be trying to impress Socrates and Einstein.

- Is that why you hang out with Tyler Cowen a lot? - Sure, I mean. - Try to convince him and stuff. - You know, and you might think, from the point of view of just making money or having sex or other sorts of things, this is misdirected energy, right?

Trying to impress the very most highest quality minds, that's such a small sample and they can't do that much for you anyway. So I might well have had more ordinary success in life, be more popular, invited to more parties, make more money if I had targeted a lower tier set of intellectuals with the standards they have.

But for some reason, I decided early on that Einstein was my audience, or people like him, and I was gonna impress them. - Yeah, I mean, you pick your set of motivations. You know, convincing, impressing Tyler Cohen is not gonna help you get laid, trust me, I tried. All right.

What are some notable sort of effects of the elephant in the brain in everyday life? So you mentioned when we tried to apply that to economics, to public policy, so when we think about medicine, education, all those kinds of things, what are some things that we just-- - Well, the key thing is, medicine is much less useful health-wise than you think.

So, you know, if you were focused on your health, you would care a lot less about it. And if you were focused on other people's health, you would also care a lot less about it. But if medicine is, as we suggest, more about showing that you care and letting other people show that they care about you, then a lot of priority on medicine can make sense.

So that was our very earliest discussion in the podcast. You were talking about, you know, should you give people a lot of medicine when it's not very effective? And then the answer then is, well, if that's the way that you show that you care about them and you really want them to know you care, then maybe that's what you need to do if you can't find a cheaper, more effective substitute.

- So if we actually just pause on that for a little bit, how do we start to untangle the full set of self-deception happening in the space of medicine? - So we have a method that we use in our book that is what I recommend for people to use in all these sorts of topics.

The straightforward method is, first, don't look at yourself. Look at other people. Look at broad patterns of behavior in other people. And then ask, what are the various theories we could have to explain these patterns of behavior? And then just do the simple matching. Which theory better matches the behavior they have?

And the last step is to assume that's true of you too. Don't assume you're an exception. If you happen to be an exception, that won't go so well, but nevertheless, on average, you aren't very well positioned to judge if you're an exception. So look at what other people do, explain what other people do, and assume that's you too.

- But also, in the case of medicine, there are several parties to consider. So there's the individual person that's receiving the medicine. There are the doctors that are prescribing the medicine. There are drug companies that are selling drugs. There are governments that have regulations, and there are lobbyists. So you can build up a network of categories of humans in this, and they each play their role.

So how do you introspect, sort of analyze the system at a system scale versus at the individual scale? - So it turns out that in general, it's usually much easier to explain producer behavior than consumer behavior. That is, the drug companies or the doctors have relatively clear incentives to give the customers whatever they want.

And similarly say, governments in democratic countries have the incentive to give the voters what they want. So that focuses your attention on the patient and the voter in this equation, and saying, what do they want? They would be driving the rest of the system. Whatever they want, the other parties are willing to give them in order to get paid.

So now we're looking for puzzles in patient and voter behavior. What are they choosing and why do they choose that? - And how much exactly? And then we can explain that potentially, again, returning to the producer, by the producer being incentivized to manipulate the decision-making processes of the voter and the consumer.

- Well now, in almost every industry, producers are, in general, happy to lie and exaggerate in order to get more customers. This is true of auto repair as much as human body repair in medicine. So the differences between these industries can't be explained by the willingness of the producers to give customers what they want or to do various things; we have to, again, go to the customers.

Why are customers treating body repair different than auto repair? - Yeah, and that potentially requires a lot of thinking, a lot of data collection, and potentially looking at historical data too 'cause things don't just happen overnight. Over time, there's trends. - In principle, it does, but actually, it's a lot, actually, easier than you might think.

I think the biggest limitation is just the willingness to consider alternative hypotheses. So many of the patterns that you need to rely on are actually pretty obvious, simple patterns. You just have to notice them and ask yourself, how can I explain those? Often, you don't need to look at the most subtle, most difficult statistical evidence that might be out there.

The simplest patterns are often enough. - All right, so there's a fundamental statement about self-deception in the book. There's the application of that, like we just did in medicine. Can you steel man the argument that many of the foundational ideas in the book are wrong? Meaning there's two that you just made, which is it can be a lot simpler than it looks.

Can you steel man the case that, case by case, it's always super complicated? Like it's a complex system, it's very difficult to have a simple model of, it's very difficult to introspect. And the other one is that the human brain isn't just about self-deception, that there are a lot of motivations at play.

And that we are able to really introspect our own minds, and what's on the surface of the conscious mind is actually quite a good representation of what's going on in the brain. And you're not deceiving yourself; you're able to actually deeply think about where your mind stands and what you think about the world.

And it's less about impressing people and more about being a free thinking individual. - So when a child tries to explain why they don't have their homework assignment, they are sometimes inclined to say, the dog ate my homework. They almost never say the dragon ate my homework. The reason is the dragon is a completely implausible explanation.

Almost always when we make excuses for things, we choose things that are at least in some degree plausible. It could perhaps have happened. That's an obstacle for any explanation of a hidden motive or a hidden feature of human behavior. If people are pretending one thing while really doing another, they're usually gonna pick as a pretense something that's somewhat plausible.

That's gonna be an obstacle to proving that hypothesis if you are focused on sort of the local data that a person would typically have if they were challenged. So if you're just looking at one kid and his lack of homework, maybe you can't tell whether his dog ate his homework or not.

If you happen to know he doesn't have a dog, you might have more confidence, right? You will need to have a wider range of evidence than a typical person would when they're encountering that actual excuse in order to see past the excuse. That will just be a general feature of it.

So if I say, there's this usual story about why we go to the doctor and then there's this other explanation, it'll be true that you'll have to look at wider data in order to see that because people don't usually offer excuses unless in the local context of their excuse, they can get away with it.

That is, it's hard to tell, right? So in the case of medicine, I have to point you to sort of larger sets of data, but in many areas of academia, including health economics, the researchers there also want to support the usual points of view. And so they will have selection effects in their publications and their analysis whereby if they're getting a result too much contrary to the usual point of view everybody wants to have, they will file drawer that paper or redo the analysis until they get an answer that's more to people's liking.

So that means in the health economics literature, there are plenty of people who will claim that in fact, we have evidence that medicine is effective. And when I respond, I will have to point you to our most reliable evidence and ask you to consider the possibility that the literature is biased in that when the evidence isn't as reliable, when they have more degrees of freedom in order to get the answer they want, they do tend to get the answer they want.

But when we get to the kind of evidence that's much harder to mess with, that's where we will see the truth be more revealed. So with respect to medicine, we have millions of papers published in medicine over the years, most of which give the impression that medicine is useful.

There's a small literature on randomized experiments on the aggregate effects of medicine, where there are maybe a half dozen or so papers, where it would be the hardest to hide it, because it's such a straightforward experiment done in a straightforward way that it's hard to manipulate. And that's where I will point you, to show you that there's relatively little correlation between health and medicine.

But even then, people could try to save the phenomenon and say, "Well, it's not hidden motives, it's just ignorance." They could say, for example, medicine's complicated, most people don't know the literature, therefore they can be excused for ignorance. They are just ignorantly assuming that medicine is effective. It's not that they have some other motive that they're trying to achieve.

And then I will have to do, as with a conspiracy theory analysis, I'm saying, "Well, how long has this misperception been going on? How consistently has it happened around the world and across time?" And I would have to say, "Look, if we're talking about, say, a recent new product, like Segway scooters or something, I could say not so many people have seen them or used them.

Maybe they could be confused about their value. If we're talking about a product that's been around for thousands of years, used in roughly the same way all across the world, and we see the same pattern over and over again, this sort of ignorance mistake just doesn't work so well." - It's also a question of how much of the self-deception is prevalent versus foundational.

Because there's a kind of implied thing where it's foundational to human nature versus just a common pitfall. This is a question I have. So maybe human progress is made by people who don't fall into the self-deception. It's like a baser aspect of human nature, but then you escape it easily if you're motivated.

- The motivational hypotheses about the self-deceptions are in terms of how it makes you look to the people around you. Again, the press secretary. So the story would be most people want to look good to the people around them. Therefore, most people present themselves in ways that help them look good to the people around them.

That's sufficient to say there would be a lot of it. It doesn't need to be 100%, right? There's enough variety in people and in circumstances that sometimes taking a contrarian strategy can be in the interest of some minority of the people. So I might, for example, say that that's a strategy I've taken.

I've decided that being contrarian on these things could be winning for me in that there's a room for a small number of people like me who have these sort of messages who can then get more attention, even if there's not room for most people to do that. And that can be explaining sort of the variety, right?

Similarly, you might say, look, just look at the most obvious things. Most people would like to look good, right, in the sense of physically. Just you look good right now. You're wearing a nice suit. You have a haircut. You shaved, right? So, and we-- - I cut my own hair, by the way.

- Okay, well, then, all the more impressive. - That's a counter argument for your claim that most people wanna look good. - Clearly, if we look at most people and their physical appearance, clearly most people are trying to look somewhat nice, right? They shower, they shave, they comb their hair, but we certainly see some people around who are not trying to look so nice, right?

Is that a big challenge, the hypothesis that people wanna look nice? Not that much, right? We can see in those particular people's context more particular reasons why they've chosen to be an exception to the more general rule. - So the general rule does reveal something foundational. - In general, really.

- Right. - That's the way things work. Let me ask you, you wrote a blog post about the accuracy of authorities, since we're talking about this, especially in medicine. Just looking around us, especially during this time of the pandemic, there's been a growing distrust of authorities, of institutions, even the institution of science itself.

What are the pros and cons of authorities, would you say? So what's nice about authorities? What's nice about institutions? And what are their pitfalls? - One standard function of authority is as something you can defer to respectably, without needing to seem too submissive or ignorant or gullible. That is, when you're asking, what should I act on, or what belief should I act on?

You might be worried if I chose something too contrarian, too weird, too speculative, that that would make me look bad so I would just choose something very conservative. So maybe an authority lets you choose something a little less conservative because the authority is your authorization. The authority will let you do it.

And somebody says, why did you do that thing? And you say, the authority authorized it. The authority tells me I should do this; why aren't you doing it? - So the authority is often pushing for the conservative. - Well, no, the authority can do more. I mean, so for example, just think about, I don't know, a pandemic even. You could just think, oh, I'll just stay home and close all the doors, or I'll just ignore it.

You could just think of just some very simple strategy that might be defensible if there were no authorities. But authorities might be able to know more than that. They might be able to look at some evidence, draw a more context-dependent conclusion, declare it as the authority's opinion, and then other people might follow that, and that could be better than doing nothing.

- So you mentioned WHO, the world's most beloved organization. So this is me speaking in general: WHO and the CDC have been kind of, depending on degrees-- - Right. - and details, just not behaving as I would have imagined authorities should act in the best possible evolution of human civilization. They seem to have failed in some fundamental way in terms of leadership in a difficult time for our society.

Can you say what are the pros and cons of this particular authority? - So again, if there were no authorities whatsoever, no accepted authorities, then people would have to sort of randomly pick different local authorities who would conflict with each other, and then they'd be fighting each other about that, or just not believe anybody and just do some initial default action that you would always do without responding to context.

So the potential gain of an authority is that they could know more than just basic ignorance, and if people followed them, they could both be more informed than ignorance and all doing the same thing, so they're each protected from being accused or complained about. That's the idea of an authority.

That would be the good-- - What's the con of that? - Okay. - What's the negative? How does that go wrong? - So the con is that if you think of yourself as the authority and ask, "What's my best strategy as an authority?", it's unfortunately not to be maximally informative.

So you might think the ideal authority would not just tell you more than ignorance, it would tell you as much as possible. Okay, it would give you as much detail as you could possibly listen to and manage to assimilate, and it would update that as frequently as possible or as frequently as you were able to listen and assimilate, and that would be the maximally informative authority.

The problem is there's a conflict between being an authority or being seen as an authority and being maximally informative. That was the point of my blog post that you're pointing out to here. That is, if you look at it from their point of view, they won't long remain the perceived authority if they are too incautious about how they use that authority, and one of the ways to be incautious would be to be too informative.

- Okay, that's still in the pro column for me 'cause you're talking about the tensions that are very data-driven and very honest, and I would hope that authorities struggle with that, how much information to provide to people to maximize outcomes. Now, I'm generally somebody that believes more information is better 'cause I trust in the intelligence of people, but I'd like to mention a bigger con on authorities, which is the human question.

This comes back to global government and so on: there are humans that sit in chairs during meetings in those authorities. They have different titles, humans form hierarchies, and sometimes those titles get to your head a little bit. You start to think, how do I preserve my control over this authority, as opposed to thinking through what the mission of the authority is, the mission of WHO and other such organizations, and how to maximize the implementation of that mission. You start to think, well, I kind of like sitting in this big chair at the head of the table, I'd like to sit there for another few years, or better yet, I want to be remembered as the person who, in a time of crisis, was at the head of this authority and did a lot of good things.

So you stop trying to do good, in the sense of what good means given the mission of the authority, and you start to try to carve a narrative, to manipulate the narrative. First in the meeting room, with everybody around you, just a small little story you tell yourself, then the new interns, the managers, throughout the whole hierarchy of the company.

Okay, once everybody in the company, or in the organization believes this narrative, now you start to control the release of information, not because you're trying to maximize outcomes, but because you're trying to maximize the effectiveness of the narrative, that you are truly a great representative of this authority in human history.

And I just feel like those human forces, whenever you have an authority, start getting to people's heads. For me as a scientist, one of the most disappointing things to see during the pandemic was the use of authority by colleagues of mine to roll their eyes, to dismiss other human beings, just because they got a PhD, just because they're assistant, associate, or full faculty, just because they are deputy head of X organization, NIH, whatever the heck the organization is, just because they got an award of some kind, or won a Best Paper award at a conference seven years ago, and then somebody shook their hand and gave them a medal, maybe it was a president. And it's been 20, 30 years that people have been patting them on the back, saying how special they are, especially when they're controlling money and getting sucked up to by other scientists who really want the money, in a self-deception kind of way; they don't actually really care about your performance. All of that gets to your head, and no longer are you the authority that's trying to do good and lessen the suffering in the world; you become an authority that just wants to preserve itself, sitting on a throne of power.

- So this is core to sort of what it is to be an economist, I'm a professor of economics. - There you go, with the authority again. - No, so it's about saying-- - Just joking, yes. - We often have a situation where we see a world of behavior, and then we see ways in which particular behaviors are not sort of maximally socially useful.

- Yes. - And we have a variety of reactions to that, so one kind of reaction is to sort of morally blame each individual for not doing the maximally socially useful thing, under perhaps the idea that people could be identified and shamed for that, and maybe induced into doing the better thing if only enough people were calling them out on it, right?

But another way to think about it is to think that people sit in institutions with certain stable institutional structures, and that institutions create particular incentives for individuals, and that individuals are typically doing whatever is in their local interest in the context of that institution, and then perhaps to less blame individuals for winning their local institutional game, and more blaming the world for having the wrong institutions.

So economists are often like wondering what other institutions we could have instead of the ones we have, and which of them might promote better behavior, and this is a common thing we do all across human behavior, is to think of what are the institutions we're in, and what are the alternative variations we could imagine, and then to say which institutions would be most productive.

I would agree with you that our information institutions, that is the institutions by which we collect information and aggregate it and share it with people, are especially broken in the sense of far from the ideal of what would be the most cost-effective way to collect and share information, but then the challenge is to try to produce better institutions.

And as an academic, I'm aware that academia is particularly broken in the sense that we give people incentives to do research that's not very interesting or important, because basically they're being impressive, and we actually care more about whether academics are impressive than whether they're interesting or useful. And I'm happy to go into detail with lots of different known institutions and their known institutional failings, ways in which those institutions produce incentives that are mistaken, and that was the point of the post we started with talking about the authorities.

If I need to be seen as an authority, that's at odds with my being informative, and I might choose to be the authority instead of being informative, 'cause that's my institutional incentives. - And if I may, I'd like to, given that beautiful picture of incentives and individuals that you just painted, let me just apologize for a couple of things.

One, I often put too much blame on leaders of institutions versus the incentives that govern those institutions. And as a result of that, I've been, I believe, too critical of Anthony Fauci, too emotional about my criticism of Anthony Fauci, and I'd like to apologize for that, because I think there's deeper truths to think about, there's deeper incentives to think about.

That said, I do sort of, I'm a romantic creature by nature. I romanticize Winston Churchill, and when I think about Nazi Germany, I think about Hitler more than I do about the individual people of Nazi Germany. You think about leaders, you think about individuals, not necessarily the parameters, the incentives that govern the system, 'cause it's harder.

It's harder to think through deeply about the models from which those individuals arise, but that's the right thing to do. But also, I don't apologize for being emotional sometimes, and being-- - I'm happy to blame the individual leaders in the sense that I might say, well, you should be trying to reform these institutions if you're just there to get promoted and look good at being at the top, but maybe I can blame you for your motives and your priorities in there.

But I can understand why the people at the top would be the people who are selected for having the priority of primarily trying to get to the top. I get that. - Can I maybe ask you about universities in particular? They've received, like science has received, an increase in distrust overall as an institution, which breaks my heart, because I think science is beautiful, maybe not as an institution, but as one of the journeys that humans have taken on.

The other one is the university. I think the university, for me at least, in the way I see it, is a place of freedom for exploring ideas, scientific ideas, engineering ideas, more than a corporation, more than a company, more than a lot of domains in life. Not just in its ideal, but in its implementation, it's a place where you can be a kid for your whole life and play with ideas.

And with all the criticism that universities currently receive, I don't think that criticism is representative of universities. It focuses on very anecdotal evidence about particular departments, particular people. But I still feel like there's a lot of room for freedom of thought, at least at MIT, at least in the fields I care about, in particular kinds of science, particular kinds of technical fields: mathematics, computer science, physics, engineering, so robotics, artificial intelligence.

This is a place where you get to be a kid. Yet there is bureaucracy that's rising up. There are more rules, more meetings, and more administration with PowerPoint presentations, when to me you should be more of a renegade explorer of ideas. And meetings destroy, they suffocate, that radical thought that happens when you're an undergraduate student and can do all kinds of wild things, and when you're a graduate student.

Anyway, all that to say, you've thought about this aspect too. Is there something positive, insightful you could say about how we can make for better universities in the decades to come, this particular institution? How can we improve them? - I hear that centuries ago, many scientists and intellectuals were aristocrats.

They had time and could, if they chose, be intellectuals. That's a feature of the combination: they had some source of resources that allowed them leisure, and the kind of competition they faced among aristocrats allowed that sort of self-indulgence or self-pursuit, at least at some point in their lives.

So the analogous observation is that university professors often have sort of the freedom and space to do a wide range of things. And I am certainly enjoying that as a tenured professor. - You're a really, sorry to interrupt, a really good representative of that. Just the exploration you're doing, the depth of thought, like most people are afraid to do the kind of broad thinking that you're doing, which is great.

- The fact that that can happen is the combination of these two things analogously. One is that we have fierce competition to become a tenured professor, but then once you become tenured, we give you the freedom to do what you like. And that's a happenstance. It didn't have to be that way.

And in many other walks of life, even though people have a lot of resources, et cetera, they don't have that kind of freedom set up. So I think I'm kind of lucky that tenure exists and that I'm enjoying it. But I can't be too enthusiastic about this unless I can approve of sort of the source of the resources that's paying for all this.

So for the aristocrat, if you thought they stole it in war or something, you wouldn't be so pleased, whereas if you thought they had earned it or their ancestors had earned this money that they were spending as an aristocrat, then you could be more okay with that. So for universities, I have to ask, where are the main sources of resources that are going to the universities and are they getting their money's worth?

Or are they getting a good value for that payment? So first of all, they're students. And the question is, are students getting good value for their education? And each person is getting value in the sense that they are identified and shown to be a more capable person, which is then worth more salary as an employee later.

But there is a case for saying there's a big waste in the system, because we aren't actually changing the students or educating them, we're more sorting them or labeling them. And that's a very expensive process to produce that outcome. And part of the expense is the freedom of tenure, I guess.

So I feel like I can't be too proud of that because it's basically a tax on all these young students to pay this enormous amount of money in order to be labeled as better, whereas I feel like we should be able to find cheaper ways of doing that. The other main customers are research patrons like the government or other foundations.

And then the question is, are they getting their money's worth out of the money they're paying for research to happen? And my analysis is they don't actually care about the research progress. They are mainly buying an affiliation with credentialed impressiveness on the part of the researchers. They mainly pay money to researchers who are impressive and have impressive affiliations, and they don't really much care what research project happens as a result.

- Is that cynical? So there's a deep truth to that cynical perspective. Is there a less cynical perspective, that they do care about the long-term investment into the progress of science and humanity? - They might personally care, but they're stuck in an equilibrium. - Sure. - Wherein, at basically most foundations, like governments or research funders, or like the Ford Foundation, the individuals there are rated based on the prestige they bring to that organization.

- Yeah. - And even if they might personally want to produce more intellectual progress, they are in a competitive game where they don't have tenure and they need to produce this prestige. And so once they give grant money to prestigious people, that is the thing that shows that they have achieved prestige for the organization, and that's what they need to do in order to retain their position.

- And you do hope that there's a correlation between prestige and actual competence. - Of course there is a correlation. The question is just, could we do this better some other way? - Yes. - I think it's pretty clear we could. What it's harder to do is move the world to a new equilibrium where we do that instead.

- What are the components of the better ways to do it? Is it money? So the sources of money and how the money is allocated to give the individual researchers freedom? - Years ago, I started studying this topic exactly because this was my issue and this was many decades ago now, and I spent a long time, and my best guess still is prediction markets, betting markets.

So if you as a research patron want to know the answer to a particular question, like what's the mass of the electron neutrino, then what you can do is just subsidize a betting market in that question, and that will induce more research into answering that question because the people who then answer that question can then make money in that betting market with the new information they gain.
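To make the subsidy mechanism concrete, here is a minimal sketch of one way a patron could subsidize such a market: an automated market maker using a logarithmic market scoring rule, which is Hanson's own design for subsidized prediction markets, though the conversation does not specify a mechanism. The liquidity parameter, outcomes, and trades below are illustrative assumptions, not anything from the transcript.

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule market maker.

    The patron's subsidy is the market maker's worst-case loss,
    roughly b * ln(number of outcomes).
    """
    def __init__(self, outcomes, b=100.0):
        self.b = b                                   # liquidity / subsidy parameter
        self.q = {o: 0.0 for o in outcomes}          # shares sold per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q.values()))

    def price(self, outcome):
        """Current market probability for an outcome."""
        denom = sum(math.exp(x / self.b) for x in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the cost the trader pays.

        Each share pays $1 if that outcome turns out to be true.
        """
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

# Illustrative question: is the neutrino mass below some threshold?
market = LMSRMarket(["below_threshold", "above_threshold"], b=100.0)
print(market.price("below_threshold"))       # 0.5 before any trades
cost = market.buy("below_threshold", 50)     # a researcher trades on new evidence
print(round(cost, 2), round(market.price("below_threshold"), 3))
```

If the researcher's evidence later proves the outcome true, the 50 shares pay out $50 against the roughly $28 paid, so digging up information is directly rewarded; the patron funds this by bearing the market maker's bounded worst-case loss.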

So that's a robust way to induce more information on a topic. If you want to induce an accomplishment, you can create prizes, and there's of course a long history of prizes to induce accomplishments. And we moved away from prizes, even though we once used them far more often than we do today.

And there's a history to that. And for the customers who want to be affiliated with impressive academics, which is what most of the customers want, students, journalists, and patrons, I think there's a better way of doing that, which I just wrote about in my second most recent blog post.

- Can you explain? - Sure. What we do today is we take sort of recent acceptance by other academics as our best indication of their deserved prestige. That is, recent publications, recent job and institutional affiliations, recent invitations to speak, recent grants. We are today taking other impressive academics' recent choices to affiliate with them as our best guesstimate of their prestige.

I would say we could do better by creating betting markets in what the distant future will judge to have been their deserved prestige, looking back on them. I think most intellectuals, for example, think that if we looked back two centuries, say to intellectuals from two centuries ago, and tried to look in detail at their research and how it influenced future research and which path it was on, we could much more accurately judge their actual deserved prestige.

That is who was actually on the right track, who actually helped, which will be different than what people at the time judged using the immediate indications of the time or which position they had or which publications they had or things like that. - In this way, if you think from the perspective of multiple centuries, you would higher prioritize true novelty, you would disregard the temporal proximity, like how recent the thing is, and you would think like what is the brave, the bold, the big, novel idea that this, and you would actually-- - You would be able to rate that 'cause you could see the path with which ideas took, which things had dead ends, which led to what other followings.

You could, looking back centuries later, have a much better estimate of who actually had what long-term effects on intellectual progress. So my proposal is we actually pay people several centuries from now to do this historical analysis, and we have prediction markets today where we buy and sell assets which will later pay off in terms of those final evaluations.

So now we'll be inducing people today to make their best estimate of those things by actually looking at the details of people and setting the prices accordingly. So my proposal would be that we rate people today on those prices. So instead of looking at their list of publications or affiliations, you look at the actual price of assets that represent people's best guess of what the future will say about them.
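As a toy illustration of rating researchers by such prices, here is a minimal sketch with entirely made-up names and numbers: each researcher has an asset that pays out, say, $1 times a retrospective prestige score assigned by evaluators centuries later, and today's market price of that asset becomes the ranking signal.

```python
from dataclasses import dataclass

@dataclass
class PrestigeFuture:
    """Asset paying $1 * (retrospective prestige score assigned far in the future)."""
    researcher: str
    market_price: float   # today's trading price = market's expectation of that score

# Hypothetical order-book snapshot (illustrative numbers only).
futures = [
    PrestigeFuture("Researcher A", market_price=12.4),   # few citations now, bold ideas
    PrestigeFuture("Researcher B", market_price=3.1),    # many recent publications
    PrestigeFuture("Researcher C", market_price=7.8),
]

# Rank by market price instead of by recent publications or affiliations.
for rank, f in enumerate(sorted(futures, key=lambda f: f.market_price, reverse=True), 1):
    print(rank, f.researcher, f.market_price)
```

The point of the design is only that the ranking input is a tradable price: anyone who thinks a researcher is underrated can buy the asset, push the price up, and profit if the far-future evaluation agrees.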

- That's brilliant. So this concept of idea futures, can you elaborate what this would entail? - I've been elaborating two versions of it here. So one is if there's a particular question, say the mass of the electron neutrino, and what you as a patron wanna do is get an answer to that question, then what you would do is subsidize a betting market in that question under the assumption that eventually we'll just know the answer and we can pay off the bets that way.

And that is a plausible assumption for many kinds of concrete intellectual questions like what's the mass of the electron neutrino. - In this hypothetical world that you're constructing that may be a real world, do you mean literally financial? - Yes, literal, very literal. Very cash, very direct and literal, yes.

- Or crypto. - Well, crypto is money. - Yes, true. - So the idea would be that research labs would be for-profit. They would have as their expense paying researchers to study things, and then their profit would come from using the insights the researchers gain to trade in these financial markets.

Just like hedge funds today make money by paying researchers to study firms and then making their profits by trading on that insight in the ordinary financial markets. - And the market, if it's efficient, would be able to become better and better at predicting the powerful ideas that individuals are able to generate.

- The variance around the mass of the electron neutrino would decrease with time as we learned that value of that parameter better and any other parameters that we wanted to estimate. - You don't think those markets would also respond to recency of prestige and all those kinds of things?

- They would respond, but they might respond incorrectly, and if you think they're doing it incorrectly, you have a profit opportunity. - There'll be a correction mechanism. - You can go fix it. So we'd be inviting everybody to ask whether they can find any biases or errors in the current ways in which people are estimating these things from whatever clues they have.

- Right, there's a big incentive for the correction mechanism. In academia currently, there's not, it's the safe choice to go with the prestige. - Exactly. - And there's no-- - Even if you privately think that the prestige is overrated. - Even if you privately think strongly that it's overrated.

- Still, you don't have an incentive to defy that publicly. - You're going to lose a lot, unless you're a contrarian who writes brilliant blogs and can talk about it on a podcast. - Right. I mean, this was my initial concept, having these betting markets on these key parameters.

What I then realized over time was that that's more what people pretend to care about. What they really mostly care about is just who's how good. And that's what most of the system is built on, is trying to rate people and rank them. And so I designed this other alternative based on historical evaluation centuries later, just about who's how good, because that's what I think most of the customers really care about.

- Customers, I like the word customers here, humans. - Right, well, every major area of life, which has specialists who get paid to do that thing, must have some customers from elsewhere who are paying for it. - Well, who are the customers for the mass of the neutrino? Yes, I understand, in a sense, people who are willing to pay for a thing.

- That's an important thing to understand about anything, who are the customers, and what's the product, like medicine, education, academia, military, et cetera. That's part of the hidden motives analysis. Often people have a thing they say about what the product is and who the customer is, and maybe you need to dig a little deeper to find out what's really going on.

- Or a lot deeper. You've written that you seek out, quote, view quakes. As an intelligent black box word-generating machine, you're able to generate a lot of sexy words. I like it, I love it. View quakes, which are insights which dramatically changed my worldview, your worldview.

You write, I loved science fiction as a child, studied physics and artificial intelligence for a long time each, and now study economics and political science, all fields full of such insights. So, let me ask, what are some view quakes, or a beautiful, surprising idea to you from each of those fields, physics, AI, economics, political science?

I know it's a tough question, something that springs to mind about physics, for example, that just is beautiful to me. - Right from the beginning, say, special relativity was a big surprise. Most of us have a simple concept of time, and it seems perfectly adequate for everything we've ever seen.

And to have it explained to you that you need to sort of have a mixture concept of time and space, where you put it into the space-time construct, how it looks different from different perspectives, that was quite a shock. And that was such a shock that it makes you think, what else do I know that isn't the way it seems?

Certainly, quantum mechanics is another enormous shock, in terms of, you know, you have this idea that there's a space, and then there's particles at points, and maybe fields in between. And quantum mechanics is just a whole different representation. It looks nothing like what you would have thought as sort of the basic representation of the physical world.

And that was quite a surprise. - What would you say is the catalyst for the view quake in theoretical physics in the 20th century? Where does that come from? So the interesting thing about Einstein, it seems like a lot of that came from like almost thought experiments. It was almost not experimentally driven.

And with, actually, I don't know the full story of quantum mechanics, how much of it is experiment, like where, if you look at the full trace of idea generation there, of all the weird stuff that falls out of quantum mechanics, how much of that was the experimentalists, how much was it the theoreticians?

But usually, in theoretical physics, the theories lead the way. So maybe can you elucidate, like what is the catalyst for these? - The remarkable thing about physics and about many other areas of academic intellectual life is that it just seems way over-determined. That is, if it hadn't been for Einstein or if it hadn't been for Heisenberg, certainly within a half a century, somebody else would have come up with essentially the same things.

- Is that something you believe? - Yes. - Or is that something? - Yes, so I think when you look at sort of just the history of physics and the history of other areas, some areas like that, there's just this enormous convergence. That the different kind of evidence that was being collected was so redundant in the sense that so many different things revealed the same things that eventually you just kind of have to accept it because it just gets obvious.

So if you look at the details, of course, Einstein did it before somebody else, and it's well worth celebrating Einstein for that. And we, by celebrating the particular people who did something first or came across something first, we are encouraging all the rest to move a little faster, to try to push us all a little faster, which is great, but I still think we would have gotten roughly to the same place within a half century.

So sometimes people are special because of how much longer it would have taken. So some people say general relativity would have taken longer without Einstein than other things. I mean, Heisenberg, quantum mechanics, I mean, there were several different formulations of quantum mechanics all around the same few years, which means no one of them made that much of a difference.

We would have had pretty much the same thing regardless of which of them did it exactly when. Nevertheless, I'm happy to celebrate them all. But this is a choice I make in my research. That is, when there's an area where there's lots of people working together, who are sort of scooping each other and getting a result just before somebody else does, you ask, well, how much of a difference would I make there?

At most, I could make something happen a few months before somebody else. And so I'm less worried about them missing things. So when I'm trying to help the world, like doing research, I'm looking for neglected things. I'm looking for things that nobody's doing. If I didn't do it, nobody would do it.

- Nobody would do it. - Or at least for a long time. - In the next 10, 20 years kind of thing. - Exactly. - Same with general relativity, just, you know, the whole field would do it. It might take another 10, 20, 30, 50 years. - So that's the place where you can have the biggest impact, is finding the things that nobody would do unless you did them.

- And then that's when you get the big view quake, the insight. So what about artificial intelligence? Would it be the EMs, the emulated minds? What idea, whether that struck you in the shower one day, or are they you just observed? - Clearly, the biggest view quake in artificial intelligence is the realization of just how complicated our human minds are.

So most people who come to artificial intelligence from other fields or from relative ignorance, a very common phenomenon, which you must be familiar with, is that they come up with some concept and then they think that must be it. Once we implement this new concept, we will have it.

We will have full human level or higher artificial intelligence, right? And they're just not appreciating just how big the problem is, how long the road is, just how much is involved, because that's actually hard to appreciate. When we just think, it seems really simple. And studying artificial intelligence, going through many particular problems, looking in each problem, all the different things you need to be able to do to solve a problem like that makes you realize all the things your minds are doing that you are not aware of.

That's that vast subconscious that you're not aware of. That's the biggest view quake from artificial intelligence by far, for most people who study artificial intelligence, is to see just how hard it is. - I think that's a good point. But I think it's a very early view quake. It's when the Dunning-Kruger crashes hard.

It's the first realization that humans are actually quite incredible. The human mind, the human body is quite incredible. - There's a lot of different parts to it. - But then, see, it's already been so long for me that I've experienced that view quake, that for me, I now experience the view quakes of holy shit, this little thing is actually quite powerful.

Like neural networks, I'm amazed. 'Cause you've become more cynical after that first view quake of like, this is so hard. Like evolution did some incredible work to create the human mind. But then you realize, just as you have, you've talked about a bunch of simple models, that simple things can actually be extremely powerful.

That maybe emulating the human mind is extremely difficult, but you can go a long way with a large neural network. You can go a long way with a dumb solution. It's that Stuart Russell thing with the reinforcement learning. Holy crap, you can go quite a long way with a simple thing.

- But we still have a very long road to go, but nevertheless. - I can't, I refuse to sort of know. The road is full of surprises. So long is an interesting word, like you said, with the six hard steps that humans had to take to arrive at where we are from the origin of life on Earth.

So it's long maybe in the statistical improbability of the steps that have to be taken. But in terms of how quickly those steps could be taken, I don't know if my intuition says it's hundreds of years away or a couple of years away. I prefer to measure-- - Pretty confident it's at least a decade.

And mildly confident it's at least three decades. - I can steel man either direction. I prefer to measure that journey in Elon Musks. That's a new-- - We don't get Elon Musk very often, so that's a long timescale. - For now, I don't know, maybe you can clone, or maybe multiply, or I don't even know what Elon Musk, what that is, what is that?

- That's a good question, exactly. Well, that's an excellent question. - How does that fit into the model of the three parameters that are required for becoming a grabby alien civilization? - That's the question of how much any individual makes in the long path of civilization over time. Yes, and it's a favorite topic of historians and people to try to focus on individuals and how much of a difference they make.

And certainly some individuals make a substantial difference in the modest term, right? Like, certainly without Hitler being Hitler in the role he took, European history would have taken a different path for a while there. But if we're looking over many centuries longer term things, most individuals do fade in their individual influence.

- So, I mean-- - Even Einstein. - Even Einstein. No matter how sexy your hair is, you will also be forgotten in the long arc of history. So you said at least 10 years, so let's talk a little bit about this AI point of how we achieve, how hard is the problem of solving intelligence by engineering artificial intelligence that achieves human-level, human-like qualities that we associate with intelligence?

How hard is this? What are the different trajectories that take us there? - One way to think about it is in terms of the scope of the technology space you're talking about. So let's take the biggest possible scope, all of human technology, right? The entire human economy. So the entire economy is composed of many industries, each of which have many products with many different technologies supporting each one.

At that scale, I think we can accept that most innovations are a small fraction of the total. That is, usually you have relatively gradual overall progress, and individual innovations that have a substantial effect on that total are rare, and their total effect is still a small percentage of the total economy, right?

There's very few individual innovations that made a substantial difference to the whole economy. What are we talking, steam engine, shipping containers, a few things. - Shipping containers? - Shipping containers deserves to be up there with steam engines, honestly. - Can you say exactly why shipping containers? - Shipping containers revolutionized shipping.

Shipping is very important. - But placing that at shipping containers, so you're saying you wouldn't have some of the magic of the supply chain and all that without shipping containers? - Made a big difference, absolutely. - Interesting, that's something we'll look into. We shouldn't take that tangent, although I'm tempted to.

But anyway, so there's a few, just a few innovations. - Right, so at the scale of the whole economy, right? Now, as you move down to a much smaller scale, you will see individual innovations having a bigger effect, right? So if you look at, I don't know, lawnmowers or something, I don't know about the innovations of lawnmower, but there were probably like steps where you just had a new kind of lawnmower, and that made a big difference to mowing lawns, because you're focusing on a smaller part of the whole technology space, right?

So, and you know, sometimes, like military technology, there's a lot of military technologies, a lot of small ones, but every once in a while, a particular military weapon makes a big difference. But still, even so, mostly overall, they're making modest differences to something that's improving relatively steadily; like, the US military has been the strongest in the world consistently for a while.

No one weapon in the last 70 years has like made a big difference in terms of the overall prominence of the US military, right? 'Cause that's just saying, even though every once in a while, even the recent Soviet hypersonic missiles or whatever they are, they aren't changing the overall balance dramatically, right?

So when we get to AI, now I can frame the question, how big is AI? Basically, if, so one way of thinking about AI is it's just all mental tasks. And then you ask what fraction of tasks are mental tasks? And then I go, a lot. And then if I think of AI as like half of everything, then I think, well, it's gotta be composed of lots of parts where any one innovation is only a small impact, right?

Now, if you think, no, no, no, AI is like AGI. And then you think AGI is a small thing, right? There's only a small number of key innovations that will enable it. Now you're thinking there could be a bigger chunk that you might find that would have a bigger impact.

So the way I would ask you to frame these things is in terms of the chunkiness of different areas of technology, in part in terms of how big they are. So if you take 10 chunky areas and you add them together, the total is less chunky. - Yeah, but are you able, until you solve the fundamental core parts of the problem, to estimate the chunkiness of that problem?

- Well, if you have a history of prior chunkiness, that could be your best estimate for future chunkiness. So for example, I mean, even at the level of the world economy, right? We've had this, what, 10,000 years of civilization. Well, that's only a short time. You might say, oh, that doesn't predict future chunkiness.

But it looks relatively steady and consistent. We can say, even in computer science, we've had 70 years of computer science. We have enough data to look at chunkiness in computer science. Like, when were there algorithms or approaches that made a big, chunky difference? And how large a fraction of the total was that?

And I'd say, mostly in computer science, most innovation has been relatively small chunks. The bigger chunks have been rare. - Well, this is the interesting thing. This is about AI and just algorithms in general, is PageRank. So Google's algorithm, right? So sometimes it's a simple algorithm that by itself is not that useful, but the scale-- - Context.

- And in a context that's scalable, like depending on the, yeah, depending on the context, is all of a sudden the power is revealed. And there's something, I guess that's the nature of chunkiness, is that things that can reach a lot of people simply can be quite chunky.

- So one standard story about algorithms is to say, algorithms have a fixed cost plus a marginal cost. And so in history, when you had computers that were very small, you tried, all the algorithms had low fixed costs, and you look for the best of those. But over time, as computers got bigger, you could afford to do larger fixed costs and try those.

And some of those had more effective algorithms in terms of their marginal cost. And that, in fact, that roughly explains the long-term history where, in fact, the rate of algorithmic improvement is about the same as the rate of hardware improvement, which is a remarkable coincidence. But it would be explained by saying, well, there's all these better algorithms you can't try until you have a big enough computer to pay the fixed cost of doing some trials to find out if that algorithm actually saves you on the marginal cost.

And so that's an explanation for this relatively continuous history where, so we have a good story about why hardware is so continuous, right? And you might think, why would software be so continuous with the hardware? But if there's a distribution of algorithms in terms of their fixed costs, and it's, say, spread out in a wide log-normal distribution, then we could be sort of marching through that log-normal distribution, trying out algorithms with larger fixed costs and finding the ones that have lower marginal costs.
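(A toy numerical sketch of that fixed-cost story, with made-up numbers: algorithm fixed costs are drawn from a wide log-normal, larger fixed costs tend on average to buy lower marginal costs, and each era's bigger hardware budget lets you try more of them, so the best usable marginal cost falls roughly in step with hardware growth.)

```python
import random

random.seed(0)

# Hypothetical pool of algorithms: fixed cost ~ wide log-normal; on average,
# a larger fixed cost buys a lower marginal cost, with some noise.
algorithms = []
for _ in range(100_000):
    fixed = random.lognormvariate(5.0, 3.0)
    marginal = random.lognormvariate(0.0, 0.5) / (fixed ** 0.5)
    algorithms.append((fixed, marginal))

budget = 1.0
for era in range(8):
    budget *= 100.0   # hardware budget grows ~100x per era (illustrative)
    affordable = [m for f, m in algorithms if f <= budget]
    best = min(affordable) if affordable else float("inf")
    print(f"era {era}: budget {budget:.0e}, best affordable marginal cost {best:.2e}")
```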

- So would you say AGI, human-level AI, even EM, M, emulated minds, is chunky? Like a few breakthroughs can take us there? - So an M is by its nature chunky, in the sense that if you have an emulated brain and you're 25% effective at emulating it, that's crap. That's nothing.

You pretty much need to emulate a full human brain. - Is that obvious? Is that obvious that the 25%- - I think it's pretty obvious. I'm talking about like, you know, so the key thing is you're emulating various brain cells, and so you have to emulate the input-output pattern of those cells.

So if you get that pattern somewhat close, but not close enough, then the whole system just doesn't have the overall behavior you're looking for, right? - But it could have, functionally, some of the power of the overall system. - So there'll be some threshold. The point is, when you get close enough, then it goes over the threshold.

It's like taking a computer chip and deleting 1% of the gates, right? - No, that's very chunky. But the hope is that emulating the human brain, I mean, the human brain itself is not- - Right, so it has a certain level of redundancy and a certain level of robustness.

And so there's some threshold. When you get close to that level of redundancy and robustness, then it starts to work. But until you get to that level, it's just gonna be crap, right? It's gonna be just a big thing that isn't working well. So we can be pretty sure that emulations is a big chunk in an economic sense, right?

At some point, you'll be able to make one that's actually effectively able to substitute for humans, and then that will be this huge economic product that people will try to buy like crazy. - You'll bring a lot of value to people's lives, so they'll be willing to pay for it.

- But it could be that the first emulation costs a billion dollars each, right? And then we have them, but we can't really use them, they're too expensive. And then the cost slowly comes down, and now we have less of a chunky adoption, right? That as the cost comes down, then we use more and more of them in more and more contexts.

And that's a more continuous curve. So it's only if the first emulations are relatively cheap that you get a more sudden disruption to society. And that could happen if sort of the algorithm is the last thing you figure out how to do or something. - What about robots that capture some magic in terms of social connection?

The robots, like we have a robot dog on the carpet right there, robots that are able to capture some magic of human connection as they interact with humans, but are not emulating the brain. What about those, how far away? - So we're thinking about chunkiness or distance now. So if you ask how chunky is the task of making an emulatable robot or something, - Which chunkiness and time are correlated.

- Right, but it's about how far away it is or how suddenly it would happen. Chunkiness is how suddenly, and difficulty is just how far away it is. But it could be a continuous difficulty. It could just be far away, but we'll slowly steadily get there. Or there could be these thresholds where we reach a threshold and suddenly we can do a lot better.

- Yeah, that's a good question for both. I tend to believe that all of it, not just the M, but AGI too is chunky. And human level intelligence embodied in robots is also chunky. The history of computer science and chunkiness so far seems to be my rough best guess for the chunkiness of AGI.

That is, it is chunky. Modestly chunky, not that chunky. Right? (laughing) Our ability to use computers to do many things in the economy has been moving relatively steadily. Overall, in terms of our use of computers in society, they have been relatively steadily improving for 70 years. - No, but I would say that's hard.

Well, yeah, okay. Okay, I would have to really think about that 'cause neural networks are quite surprising. - Sure, but every once in a while we have a new thing that's surprising. But if you stand back, we see something like that every 10 years or so. Some new innovations-- - The progress is gradual.

- That has a big effect. - So moderately chunky. Huh, yeah. - The history of the level of disruption we've seen in the past would be a rough estimate of the level of disruption in the future. Unless the future is we're gonna hit a chunky territory, much chunkier than we've seen in the past.

- Well, I do think there's, it's like Kuhnian, like revolution type. It seems like the data, especially on AI, is difficult to reason with because it's so recent. It's such a recent field. - Well, I've been around for 50 years. - I mean, 50, 60, 70, 80 years being recent.

- Okay. - That's how I'm-- - It's enough time to see a lot of trends. - A few trends, a few trends. I think the internet, computing, there's really a lot of interesting stuff that's happened over the past 30 years that I think the possibility of revolutions is likelier than it was in the-- - I think for the last 70 years, there have always been a lot of things that looked like they had a potential for revolution.

- So we can't reason well about this. - I mean, we can reason well by looking at the past trends. I would say the past trend is roughly your best guess for the future. - No, but if I look back at the things that might have looked like revolutions in the '70s and '80s and '90s, they are less like the revolutions that appear to be happening now, or the capacity of revolution that appear to be there now.

First of all, there's a lot more money to be made. So there's a lot more incentive for markets to do a lot of kind of innovation, it seems like, in the AI space. But then again, there's a history of winters and summers and so on. So maybe we're just like riding a nice wave right now.

- One of the biggest issues is the difference between impressive demos and commercial value. - Yes. - So we often, through the history of AI, we saw very impressive demos that never really translated much into commercial value. - As somebody who works on and cares about autonomous and semi-autonomous vehicles, tell me about it.

And there again, we return to the number of Elon Musks per Earth per year generated. That's the E.M. Coincidentally, same initials as the em. - Yeah. - Very suspicious, very suspicious. We're gonna have to look into that. All right, two more fields that I would like to force and twist your arm to.

- All right. - To look for view quakes and for beautiful ideas, economics. What is a beautiful idea to you about economics? You've mentioned a lot of them already. - Sure, so as you said before, there's gonna be the first view quake most people encounter that makes the biggest difference on average in the world, 'cause that's the only thing most people ever see is the first one.

And so, with AI, the first one is just how big the problem is, but once you get past that, you'll find others. Certainly for economics, the first one is just the power of markets. You might've thought it was just really hard to figure out how to optimize in a big, complicated space, and markets just do a good first pass for an awful lot of stuff.

And they are really quite robust and powerful. And that's just quite the view quake, where you just say, you know, just let a, if you wanna get in the ballpark, just let a market handle it and step back. And that's true for a wide range of things. It's not true for everything, but it's a very good first approximation.

And most people's intuitions for how they should limit markets are actually messing them up. They're that good, in a sense, right? Most people, when you go, I don't know if we wanna trust that, well, you should be trusting that. - What about, what are markets? Like just a couple of words.

- So the idea is if people want something, then let other companies form to try to supply that thing. Let those people pay for their cost of whatever they're making and try to offer that product to those people. Let many people, many such firms enter that industry and let the customers decide which ones they want.

And if the firm goes out of business, let it go bankrupt and let other people invest in whichever ventures they wanna try to attract customers to their version of the product. And that just works for a wide range of products and services. - And through all of this, there's a free exchange of information too.

There's a hope that there's no manipulation of information and so on, that there, you're making-- - Even when those things happen, still just the simple market solution is usually better than the things you'll try to do to fix it. - Than the alternative. - That's a view quake. It's surprising.

It's not what you would have initially thought. - That's one of the great, I guess, inventions of human civilization, trust in the markets. - Now, another view quake that I learned in my research, that's not all of economics but something more specialized, is the rationality of disagreement. That is, basically, people who are trying to believe what's true in a complicated situation would not actually disagree.

And of course, humans disagree all the time, so it was quite the striking fact for me to learn in grad school that actually, rational agents would not knowingly disagree. And so, that makes disagreement more puzzling and it makes you less willing to disagree. - Humans are, to some degree, rational and are able to-- - Their priorities are different than just figuring out the truth.

Which might not be the same as being irrational. - That's another tangent that could take an hour. In the space of human affairs, political science, what is a beautiful, foundational, interesting idea to you, a view, Craig, in the space of political science? - The main thing that goes wrong in politics is people not agreeing on what the best thing to do is.

- That's a wrong thing. - So that's what goes wrong, that is, when you say what's fundamentally behind most political failures, it's that people are ignorant of what the consequences of policy are. And that's surprising because it's actually feasible to solve that problem, which we aren't solving. - So it's a bug, not a feature, that there's an inability to arrive at a consensus.

- So most political systems, if everybody looked to some authority, say, on a question, and that authority told them the answer, then most political systems are capable of just doing that thing. That is, and so it's the failure to have trustworthy authorities that is sort of the underlying failure behind most political failure.

We fail, we invade Iraq, say, when we don't have an authority to tell us that's a really stupid thing to do. And it is possible to create more informative, trustworthy authorities, and that's a remarkable fact about the world of institutions, that we could do that, but we aren't.

- Yeah, that's surprising. We could and we aren't. - Right, another big view quake about politics is from the elephant in the brain, that most people, when they're interacting with politics, they say they want to make the world better, make their city better, their country better, but that's not their priority.

- What is it? - They want to show loyalty to their allies. They wanna show their people they're on their side, yes, or the various tribes they're in. That's their primary priority, and they do accomplish that. - Yeah, and the tribes are usually color-coded, conveniently enough. What would you say, you know, it's the Churchill question, democracy's the crappiest form of government, but it's the best one we got.

What's the best form of government for this, our, seven billion human civilization, and the, maybe, as we get farther and farther, you mentioned a lot of stuff that's fascinating about human history as we become more forager-like, and looking out beyond, what's the best form of government in the next 50, 100 years as we become a multi-planetary species?

- So, the key failing is that we have existing political institutions and related institutions, like media institutions and other authority institutions, and these institutions sit in a vast space of possible institutions. And the key failing, we're just not exploring that space. So, I have made my proposals in that space, and I think I can identify many promising solutions, and many other people have made many other promising proposals in that space, but the key thing is we're just not pursuing those proposals, we're not trying them out on small scales, we're not doing tests, we're not exploring the space of these options.

That is the key thing we're failing to do. And if we did that, I am confident we would find much better institutions than the one we're using now, but we would have to actually try. (Lex laughing) - So, a lot of those topics, I do hope we get a chance to talk again.

You're a fascinating human being, so I'm skipping a lot of tangents on purpose that I would love to take. You're such a brilliant person on so many different topics. Let me take a stroll into the deep human psyche of Robin Hanson himself. So first-- - May not be that deep.

(both laughing) I might just be all on the surface. What you see, what you get, there might not be much hiding behind it. - Some of the fun is on the surface. - I actually think this is true of many of the most successful, most interesting people you see in the world.

That is, they have put so much effort into the surface that they've constructed, and that's where they put all their energy. So somebody might be a statesman or an actor or something else, and people wanna interview them, and they wanna say, "What are you behind the scenes? "What do you do in your free time?" You know what?

Those people don't have free time. They don't have another life behind the scenes. They put all their energy into that surface, the one we admire, the one we're fascinated by, and they kinda have to make up the stuff behind the scenes to supply it for you, but it's not really there.

- Well, there's several ways of phrasing this. So one of it is authenticity, which is if you become the thing you are on the surface, if the depths mirror the surface, then that's what authenticity is. You're not hiding something. You're not concealing something. To push back on the idea of actors, they actually have often a manufactured surface that they put on, and they try on different masks, and the depths are very different from the surface, and that's actually what makes them very not interesting to interview.

If you're an actor who actually lives the role that you play, so like, I don't know, a Clint Eastwood-type character who clearly represents the cowboy, I mean, at least rhymes or echoes the person you play on the surface, that's authenticity. - Some people are typecasts, and they have basically one persona.

They play in all of their movies and TV shows, and so those people, it probably is the actual persona that they are, or it has become that over time. Clint Eastwood would be one. I think of Tom Hanks as another. I think they just always play the same person.

- And you and I are just both surface players. You're the fun, brilliant thinker, and I am the suit-wearing idiot full of silly questions. All right, that said, let's put on your wise sage hat and ask you what advice would you give to young people today in high school and college about life, about how to live a successful life in career or just in general that they can be proud of?

- Most young people, when they actually ask you that question, what they usually mean is how can I be successful by usual standards. - Yeah. - I'm not very good at giving advice about that 'cause that's not how I tried to live my life. So I would more flip it around and say, you live in a rich society, you will have a long life, you have many resources available to you.

Whatever career you take, you'll have plenty of time to make progress on something else. Yes, it might be better if you find a way to combine your career and your interests in a way that gives you more time and energy, but there are often big compromises there as well.

So if you have a passion about some topic or some thing that you think just is worth pursuing, you can just do it, you don't need other people's approval. And you can just start doing whatever it is you think is worth doing. It might take you decades, but decades are enough to make enormous progress on most all interesting things.

- And don't worry about the commitment of it. I mean, that's a lot of what people worry about is, well, there's so many options, and if I choose a thing and I stick with it, I sacrifice all the other paths I could have taken. - But I mean, so I switched my career at the age of 34 with two kids, age zero and two, went back to grad school in social science after being a research software engineer.

So it's quite possible to change your mind later in life. - How can you have an age of zero? - Oh, less than one. - Okay, so, oh, oh, you indexed with zero, I got it, okay. - Right, and you know, like people also ask what to read, and I say textbooks.

And until you've read lots of textbooks, or maybe review articles, I'm not so sure you should be reading blog posts and Twitter feeds and even podcasts. I would say at the beginning, read the textbooks; this is humanity's best summary of how to learn things, crammed into textbooks.

- Yeah, especially the ones on introduction to biology. - Yeah, everything, introduction to everything. Just read all the textbooks. - Algorithms. - Read as many textbooks as you can stomach and then maybe if you wanna know more about a subject, find review articles. You don't need to read the latest stuff for most topics.

- Yeah, and actually textbooks often have the prettiest pictures. - There you go. - And depending on the field, if it's technical, then doing the homework problems at the end is actually extremely, extremely useful. Extremely powerful way to understand something if you allow it. You know, I actually think of like high school and college, which you kind of remind me of.

People don't often think of it that way, but you will almost never again get an opportunity to spend the time with a fundamental subject. - Bring up lots of stuff. - And like, no. - All the basics. - And everybody's forcing you, like everybody wants you to do it.

And like you'll never get that chance again to sit there, even though it's outside of your interest, biology. Like in high school I took AP biology, AP chemistry. I'm thinking of subjects I never again really visited seriously. And it was so nice to be forced into anatomy and physiology, to be forced into that world, to stay with it, to look at the pretty pictures, to certain moments to actually for a moment enjoy the beauty of these, of like how a cell works and all those kinds of things.

And somehow that stays, like the ripples of that fascination that stays with you even if you never do those, even if you never utilize those learnings in your actual work. - A common problem, at least of many young people I meet, is that they're like feeling idealistic and altruistic, but in a rush.

- Yes. - So the usual human tradition that goes back hundreds of thousands of years is that people's productivity rises with time and maybe peaks around the age of 40 or 50. The age of 40, 50 is when you will be having the highest income, you'll have the most contacts, you will sort of be wise about how the world works.

Expect to have your biggest impact then. Before then, you can have impacts, but you're also mainly building up your resources and abilities. That's the usual human trajectory, expect that to be true of you too. Don't be in such a rush to like accomplish enormous things at the age of 18 or whatever.

I mean, you might as well practice trying to do things, but that's mostly about learning how to do things by practicing. There's a lot of things you can't do unless you just keep trying them. - And when all else fails, try to maximize the number of offspring however way you can.

- That's certainly something I've neglected. I would tell my younger version of myself, try to have more descendants. Yes, absolutely. It matters more than I realized at the time. - Both in terms of making copies of yourself in mutated form and just the joy of raising them? - Sure.

I mean, the meaning even. So in the literature on the value people get out of life, there's a key distinction between happiness and meaning. So happiness is how do you feel right now about right now, and meaning is how do you feel about your whole life? And many things that produce happiness don't produce meaning as reliably, and if you have to choose between them, you'd rather have meaning.

And meaning goes along with sacrificing happiness sometimes. And children are an example of that. You get a lot more meaning out of children, even if they're a lot more work. - Why do you think kids, children are so magical, like raising kids? 'Cause I would love to have kids, and whenever I work with robots, there's some of the same magic when there's an entity that comes to life.

And in that case, I'm not trying to draw too many parallels, but there is some echo to it, which is when you program a robot, there's some aspect of your intellect that is now instilled in this other moving being that's kind of magical. Well, why do you think that's magical?

And you said happiness and meaning. - Meaningful. - As opposed to just happiness, why is it meaningful? - Overdetermined, like I can give you several different reasons, all of which are sufficient. And so the question is, we don't know which ones are the correct reasons. - Such a technical term, it's overdetermined, look it up.

- So I meet a lot of people interested in the future, interested in thinking about the future. They're thinking about how can I influence the future? But overwhelmingly in history so far, the main way people have influenced the future is by having children, overwhelmingly. And that's just not an incidental fact.

You are built for that. That is, you're the sequence of thousands of generations, each of which successfully had a descendant. And that affected who you are. You just have to expect, and it's true, that who you are is built to expect to have a child, to want to have a child, to have that be a natural and meaningful interaction for you.

And it's just true. It's just one of those things you just should have expected, and it's not a surprise. - Well, to push back and sort of, in terms of influencing the future, as we get more and more technology, more and more of us are able to influence the future in all kinds of other ways, right?

Being a teacher, educator. - Even so, though, still most of our influence on the future has probably happened through having kids, even though we've accumulated more other ways to do it. - You mean at scale. I guess the depth of influence, like really how much effort, how much of yourself you really put into another human being.

Do you mean both the raising of a kid, or do you mean raw genetic information? - Well, both, but raw genetics is probably more than half of it. - More than half. More than half, even in this modern world? - Yeah. - Genetics. Let me ask some dark, difficult questions, if I might.

Let's take a stroll into that place that may or may not exist, according to you. What's the darkest place you've ever gone to in your mind, in your life? A dark time, a challenging time in your life that you had to overcome? - Probably just feeling strongly rejected. And so I'm apparently somewhat emotionally scarred by just being very rejection-averse, which must have happened because some rejections were just very scarring.

- At a scale, in what kinds of communities? On the individual scale? - I mean, lots of different scales, yeah. All the different, many different scales. Still, that rejection stings. - Hold on a second, but you're a contrarian thinker. You challenge the norms. Why, if you were scarred by rejection, why welcome it in so many ways at a much larger scale, constantly with your ideas?

- It could be that I'm just stupid. Or that I've just categorized them differently than I should or something. Most rejection that I've faced hasn't been because of my intellectual ideas. So. - Oh, so that once-- - The intellectual ideas haven't been the thing to risk the rejection. - The one that, the things that challenge your mind, taking you to a dark place, the more psychological rejections.

- You just asked me what took me to a dark place. You didn't specify it as sort of an intellectual dark place, I guess. Yeah, I just meant like what-- - So intellectual is disjoint, or at least at a more surface level than something emotional. - Yeah, I would just think there are times in your life when you're just in a dark place and that can have many different causes.

Most intellectuals are still just people and most of the things that will affect them are the kinds of things that affect people. They aren't that different necessarily. I mean, that's gonna be true for, like I presume most basketball players are still just people. If you ask them what was the worst part of their life, it's gonna be this kind of thing that was the worst part of life for most people.

- So rejection early in life? - Yeah, I mean, not in grade school probably, but yeah, sort of being a young nerdy guy and feeling not in much demand or interest or later on lots of different kinds of rejection. But yeah, but I think that's, most of us like to pretend we don't that much need other people, we don't care what they think.

It's a common sort of stance if somebody rejects you and says, "Oh, I didn't care about them anyway." But I think to be honest, people really do care. - Yeah, we do seek that connection, that love. What do you think is the role of love in the human condition?

- Opacity in part. That is, love is one of those things where we know at some level it's important to us, but it's not very clearly shown to us exactly how or why or in what ways. There are some kinds of things we want where we can just clearly see that we want it and why that we want it, right?

We know when we're thirsty and we know why we're thirsty and we know what to do about being thirsty and we know when it's over that we're no longer thirsty. Love isn't like that. - It's like, what do we seek from this? We're drawn to it, but we do not understand why we're drawn exactly, because it's not just affection, because if it was just affection, we don't seem to be drawn to pure affection.

We don't seem to be drawn to somebody who's like a servant. We don't seem to be necessarily drawn to somebody that satisfies all your needs or something like that. - So it's clearly something we want or need, but we're not exactly very clear about it, and that is kind of important to it.

So I've also noticed there are some kinds of things you can't imagine very well. So if you imagine a situation, there's some aspects of the situation that you can clearly, you can imagine it being bright or dim, you can imagine it being windy, or you can imagine it being hot or cold, but there's some aspects about your emotional stance in a situation that's actually just hard to imagine or even remember.

It's hard to, like, you can often remember an emotion only when you're in a similar sort of emotional situation, and otherwise you just can't bring the emotion to your mind, and you can't even imagine it, right? So there's certain kinds of emotions you can have, and when you're in that emotion, you can know that you have it, and you can have a name associated with it, but later on I tell you, you know, remember joy, and it doesn't come to mind.

- Not able to replay it. - Right, and it's sort of a reason why we have, one of the reasons that pushes us to reconsume it and reproduce it is that we can't reimagine it. - Well, there's a, it's interesting, 'cause there's a Daniel Kahneman type of thing of like reliving memories, 'cause I'm able to summon some aspect of that emotion, again, by thinking of that situation from which that emotion came.

- Right. - So like a certain song, you can listen to it, and you can feel the same way you felt the first time you remembered that song associated with a certain-- - Right, but you need to remember that situation in some sort of complete package. - Yes, and then-- - You can't just take one part of it; if you get the whole package again, you remember the whole feeling.

- Yes, or some fundamental aspect of that whole experience that aroused, from which the feeling arose, and actually the feeling is probably different in some way. It could be more pleasant or less pleasant than the feeling you felt originally, and that morphs over time, every time you replay that memory.

It is interesting, you're not able to replay the feeling perfectly. You don't remember the feeling, you remember the facts of the events. - So there's a sense in which, over time, we expand our vocabulary as a community of language, and that allows us to sort of have more feelings and know that we are feeling them.

'Cause you can have a feeling, but not have a word for it, and then you don't know how to categorize it, or even what it is, and whether it's the same as something else, but once you have a word for it, you can sort of pull it together more easily.

And so I think, over time, we are having a richer palette of feelings, 'cause we have more words for them. - What has been a painful loss in your life? Maybe somebody or something that's no longer in your life, but played an important part in your life. - Youth?

(laughs) - That's a concept, no, it has to be-- But I was once younger, I had more health, and I had vitality, I was handsomer, I mean, you know, I've lost that over time. - Do you see that as a different person? Maybe you've lost that person? - Certainly, yes, absolutely, I'm a different person than I was when I was younger, and I'm not who, I don't even remember exactly what he was.

So I don't remember as many things from the past as many people do, so in some sense, I've just lost a lot of my history by not remembering it. - Does that-- - And I'm not that person anymore, that person's gone, and I don't have any of their abilities.

- Is that a painful loss? Is it a painful loss, though? - Yeah. - Or is it a, why is it painful? 'Cause you're wiser, you're, I mean, there's so many things that are beneficial to getting older. - Right, but-- - Or you just call it-- - I just was this person, and I felt assured that I could continue to be that person.

- And you're no longer that person. - And he's gone, and I'm not him anymore, and he died without fanfare or a funeral. - And that the person you are today, talking to me, that person will be changed, too. - Yes, and in 20 years, he won't be there anymore.

- And a future person, you have to, we'll look back, a future version of you-- - For M's, this'll be less of a problem. For M's, they would be able to save an archived copy of themselves at each different age, and they could turn it on periodically and go back and talk to it.

- To replay. You think some of that will be, so with emulated minds, with M's, there's a digital cloning that happens, and do you think that makes your, you less special, if you're clonable? Like, does that make you the experience of life, the experience of a moment, the scarcity of that moment, the scarcity of that experience, isn't that a fundamental part of what makes that experience so delicious, so rich of feeling?

- I think if you think of a song that lots of people listen to that are copies all over the world, we're gonna call that a more special song. - Yeah, yeah. So there's a perspective on copying and cloning where you're just scaling happiness versus degrading it. Each copy of a song is less special if there are many copies, but the song itself is more special if there are many copies.

- And on mass, right, you're actually spreading the happiness even if it diminishes over a larger number of people at scale, and that increases the overall happiness in the world. And then you're able to do that with multiple songs. - Is a person who has an identical twin more or less special?

- Well, the problem with identical twins is it's just two. With Ems-- - But two is different than one. - But there's a diminishing-- - I think an identical twin's life is richer for having this other identical twin, somebody who understands them better than anybody else can. From the point of view of an identical twin, I think they have a richer life for being part of this couple, each of which is very similar.

Now if you said, will the world, if we lose one of the identical twins, will the world miss it as much because you've got the other one and they're pretty similar? Maybe from the rest of the world's point of view, they suffer less of a loss when they lose one of the identical twins, but from the point of view of the identical twin themselves, their life is enriched by having a twin.

- See, but the identical twin copying happens at the place of birth. It's different than copying after you've done some of the environment, like the nurture at the teenage or in the 20s after going to college. - Yes, that'll be an interesting thing for M's to find out, all the different ways that they can have different relationships to different people who have different degrees of similarity to them in time.

- Yeah. Yeah, man. - But it seems like a rich space to explore and I don't feel sorry for them. This seems like interesting world to live in. - And there could be some ethical conundrums there. - There will be many new choices to make that they don't make now.

We discussed it, and I discussed that in the book "Age of M." Say you have a lover and you make a copy of yourself, but the lover doesn't make a copy. Well, now, which one of you, or are both still related to the lover? - Socially entitled to show up.

- Yes, so you'll have to make choices then when you split yourself. Which of you inherit which unique things? - Yeah, and of course, there'll be an equivalent increase in lawyers. Well, I guess you can clone the lawyers to help manage some of these negotiations of how to split property.

The nature of owning, I mean, property is connected to individuals, right? - You only really need lawyers for this with an inefficient, awkward law that is not very transparent and able to do things. So, for example, an operating system of a computer is a law for that computer. When the operating system is simple and clean, you don't need to hire a lawyer to make a key choice with the operating system.

- You don't need a human in the loop. - You just make a choice. - Codified rules, yeah. - Right, so ideally, we want a legal system that makes the common choices easy and not require much overhead. - And the digitization of things further enables that. So the loss of a younger self.

What about the loss of your life overall? Do you ponder your death, your mortality? Are you afraid of it? - I am a cryonics customer. That's what this little tag around my neck says. It says that if you find me in a medical situation, you should call these people to enable the cryonics transfer.

So I am taking a long-shot chance at living a much longer life. - Can you explain what cryonics is? - So when medical science gives up on me in this world, instead of burning me or letting worms eat me, they will freeze me, or at least freeze my head.

And there's damage that happens in the process of freezing the head, but once it's frozen, it won't change for a very long time. Chemically, it'll just be completely exactly the same. So future technology might be able to revive me. And in fact, I would be mainly counting on the brain emulation scenario, which doesn't require reviving my entire biological body.

It means I would be in a computer simulation. And so that's, I think I've got at least a 5% shot at that. And that's immortality. - Are you, can you still-- - Most likely it won't happen, and therefore I'm sad that it won't happen. - Do you think immortality is something that you would like to have?

- Well, I mean, just like infinity, I mean, you can't know until forever, which means never, right? So all you can really, the better choice is, at each moment, do you wanna keep going? So I would like at every moment to have the option to keep going. - The interesting thing about human experience is that the way you phrase it is exactly right.

At every moment, I would like to keep going. But the thing that happens, you know, I'll leave them wanting more of whatever that phrase is. The thing that happens is over time, it's possible for certain experiences to become bland, and you become tired of them. And that actually makes life really unpleasant.

Sorry, it makes that experience really unpleasant. And perhaps you can generalize that to life itself if you have a long enough horizon. And so-- - Might happen, but might as well wait and find out. But then you're ending on suffering, you know? - So in the world of brain emulations, I have more options.

- You can return yourself to-- - That is, I can make copies of myself, archive copies at various ages. And at a later age, I could decide that I'd rather replace myself with a new copy from a younger age. - So does a brain emulation still operate in physical space?

So can we do, what do you think about like the metaverse and operating in virtual reality? So we can conjure up, not just emulate, not just your own brain and body, but the entirety of the environment. - Most brain emulations will in fact spend most of their time in virtual reality.

But they wouldn't think of it as virtual reality, they would just think of it as their usual reality. I mean, the thing to notice, I think, in our world, most of us spend most time indoors. And indoors, we are surrounded by walls covered with paint and floors covered with tile or rugs.

Most of our environment is artificial. It's constructed to be convenient for us, it's not the natural world that was there before. A virtual reality is basically just like that. It is the environment that's comfortable and convenient for you. And when it's the right environment for you, it's real for you, just like the room you're in right now, most likely, is very real for you.

You're not focused on the fact that the paint is hiding the actual studs behind the wall and the actual wires and pipes and everything else. The fact that we're hiding that from you doesn't make it fake or unreal. - What are the chances that we're actually in the very kind of system that you're describing, where the environment and the brain is being emulated and you're just replaying an experience from when you first did a podcast with Lex, and now, you know, the person that originally launched this already did hundreds of podcasts with Lex.

This is just the first time. And you like this time because there's so much uncertainty. There's nerves, it could have gone any direction. - At the moment, we don't have the technical ability to create that emulation. So we'd have to be postulating that in the future, we have that ability, and then they choose to emulate this moment now, to simulate it.

- Don't you think we could be in the simulation of that exact experience right now and we wouldn't be able to know? - So one scenario would be this never really happened. This only happens as a reconstruction later on. - Yeah. - That's different than the scenario that this did happen the first time and now it's happening again as a reconstruction.

That second scenario is harder to put together because it requires this coincidence where between the two times we produce the ability to do it. - No, but don't you think replay of memories, poor replay of memories is something that-- - That might be a possible thing in the future.

- So you're saying it's harder than to conjure up things from scratch? - It's certainly possible. So the main way I would think about it is in terms of the demand for simulation versus other kinds of things. So I've given this a lot of thought because I first wrote about this long ago when Bostrom first wrote his papers about simulation argument and I wrote about how to live in a simulation.

And so the key issue is the fraction of creatures in the universe that are really experiencing what you appear to be really experiencing, relative to the fraction that are experiencing it in a simulated way. So then the key parameter is, at any one moment in time, of the creatures at that time, most of them are presumably really experiencing what they're experiencing, but some fraction of them are experiencing some past time, where that past time is being recreated via their simulation.

So to figure out this ratio, what we need to think about is basically two functions. One is how fast in time does the number of creatures grow? And then how fast in time does the interest in the past decline? Because at any one time, people will be simulating different periods in the past with different emphasis based on-- - I love the way you think so much.

That's exactly right, yeah. - So if the first function grows slower than the second one declines, then in fact, your chances of being simulated are low. So the key question is how fast does interest in the past decline relative to the rate at which the population grows with time?

- Does this correlate to, you earlier suggested that the interest in the future increases over time. Are those correlated, interest in the future versus interest in the past? Like why are we interested in the past? - But the simple way to do it is, as you know, like Google Ngrams has a way to type in a word and see how interest in it declines or rises over time.

Right? - Yeah, yeah. - You can just type in a year and get the answer for that. If you type in a particular year like 1900 or 1950, you can see with Google Ngram how interest in that year increased up until that date and decreased after it. And you can see that interest in a date declines faster than does the population grow with time.
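
To make this two-function argument concrete, here is a minimal numerical sketch, assuming (hypothetically) that the real population grows exponentially at rate g, that per-capita interest in simulating a given past year decays exponentially at rate d, and that each person devotes a small simulation budget s to that year. The simulated_fraction function and every number below are illustrative assumptions, not figures from the conversation.

import math

# Toy model: real observers at a future time T number exp(g*T); their per-capita
# interest in re-running year 0 decays as exp(-d*T); s is a small per-capita
# simulation budget. Simulated year-0 observers are then the integral over T of
# s * exp(g*T) * exp(-d*T).
def simulated_fraction(g, d, s, horizon, steps=100_000):
    """Fraction of year-0 observers who are simulated, by crude numeric integration."""
    real = 1.0                    # real year-0 observers, normalized to 1
    dt = horizon / steps
    sims = 0.0
    for i in range(steps):
        T = (i + 0.5) * dt        # a future moment
        sims += s * math.exp(g * T) * math.exp(-d * T) * dt
    return sims / (sims + real)

# Interest in the past decays faster than the population grows (d > g):
# the total stays bounded, so the simulated fraction stays small (about 1% here).
print(simulated_fraction(g=0.01, d=0.10, s=0.001, horizon=10_000))
# Population growth outpaces the decay of interest (g > d): the total keeps
# growing as the horizon extends, so simulated observers come to dominate.
print(simulated_fraction(g=0.10, d=0.01, s=0.001, horizon=200))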

- That is brilliant. - And so-- - That is so interesting. - You have the answer. - Wow. And that was your argument, not against, but about this particular aspect of the simulation: how much past simulation there will be, the replay of past memories. - First of all, we assume that simulation of the past is a small fraction of all the creatures at that moment, right?

- Yes. - And then it's about how fast. Now, some people have argued plausibly that maybe most interest in the past falls with this fast function, but some unusual category of interest in the past won't fall that quickly, and then that eventually would dominate. So that's another hypothesis.

- Some category. So that very outlier specific kind of, yeah, okay. Yeah, yeah, yeah. Like really popular kinds of memories, but like probably sexual-- - In a trillion years, there's some small research institute that tries to randomly select from all possible people in history or something to simulate. - Yeah, yeah, yeah.

- So the question is how big is this research institute, and how big is the future in a trillion years, right? And that would be hard to say. But if we just look at the ordinary process by which people simulate the recent past, and I think this is also true for movies and plays and video games, overwhelmingly they're interested in the recent past.

There are very few video games where you play someone in the Roman Empire. - Right. - Even fewer where you play someone in the ancient Egyptian Empire. - Yeah, just different-- - It just declines very quickly. - But every once in a while, that's brought back. But yeah, you're right.

I mean, if you just look at the mass of entertainment, movies and games, it's focused on the present and recent past. And maybe... I mean, where does science fiction fit into this? Because, what is science fiction? It's a mix of the past and the present, and some kind of manipulation of that to make it more efficient for us to ask deep philosophical questions about humanity.

- The closest genre to science fiction is clearly fantasy. Fantasy and science fiction in many bookstores and even Netflix or whatever categories, they're just lumped together. So clearly they have a similar function. So the function of fantasy is more transparent than the function of science fiction. So use that as your guide.

What's fantasy for? It's just to take away the constraints of the ordinary world and imagine stories with far fewer constraints. That's what fantasy is. You're much less constrained. - What's the purpose of removing constraints? Is it to escape from the harshness of the constraints of the real world? Or is it to just remove constraints in order to explore, to get a deeper understanding of our world?

What is it? I mean, why do people read fantasy? - I'm not a cheap-fantasy-reading kind of person, so I need to... - One story that sounds plausible to me is that there are sort of these deep story structures that we love and want to realize, and then many details of the world get in their way.

Fantasy takes all those obstacles out of the way and lets you tell the essential hero story or the essential love story, whatever essential story you want to tell. The reality and constraints are not in the way. - And so science fiction can be thought of as like fantasy, except you're not willing to admit that it can't be true.

So the future gives the excuse of saying, "Well, it could happen." And you accept some more reality constraints for the illusion, at least, that maybe it could really happen. - Maybe it could happen, and it stimulates the imagination. The imagination is something really interesting about human beings, and it seems also to be an important part of creating really special things, to be able to first imagine them.

With you and Nick Bostrom, where do you land on the simulation, all the mathematical ways of thinking about it, and just the thought experiment of it? Are we living in a simulation? - That was the discussion we just had. That is, you should grant the possibility of being in a simulation.

You shouldn't be 100% confident that you're not. You should certainly grant a small probability. The question is, how large is that probability? - Are you saying we would be, I misunderstood because I thought our discussion was about replaying things that have already happened. - Right, but the whole question is, right now, is that what I am?

Am I actually a replay from some distant future? - But it doesn't necessarily need to be a replay. It could be totally new. You don't have to be an NPC. - Right, but clearly, I'm in a certain era with a certain kind of world around me, right? So either this is a complete fantasy or it's the past of somebody else in the future.

But no, it could be a complete fantasy, though. - It could be, right, but then you have to talk about what's the fraction of complete fantasies, right? - I would say it's easier to generate a fantasy than to replay a memory, right? - Sure, but if we just look at the entire history of everything, we should say, sure, but most things are real.

Most things aren't fantasies, right? Therefore, the chances are that my thing is real, right? So the simulation argument works more strongly about, sort of, the past. We say, ah, but there are more future people than there are today. So you being in the past of the future makes you special relative to them, which makes you more likely to be in a simulation, right?

If we're just taking the full count and saying, of all creatures ever, what percentage are in simulations? Probably no more than 10%. - So what's a good argument for that? That most things are real? - Yeah. - 'Cause Bostrom argues the other way, right? - In a competitive world, a world where people have to work and have to get things done, they have a limited budget for leisure.

And so, you know, leisure things are less common than work things, like real things, right? That's just-- - But if you look at the stretch of history in the universe, doesn't the ratio of leisure increase? Isn't that where we, isn't that the-- - Right, but now we're looking at the fraction of leisure, which takes the form of something where the person doing the leisure doesn't realize it.

Now there could be some fraction of it, but that's much smaller, right? - Yeah. Clueless foragers. - Or somebody is clueless in the process of supporting this leisure, right? It might not be the person leisuring; it's somebody who is a supporting character or something. But still, that's gotta be a pretty small fraction of leisure.

- What, you mentioned that children are one of the things that are a source of meaning, broadly speaking. And let me ask the big question, what's the meaning of this whole thing? - The-- - Robin, meaning of life. What is the meaning of life? We talked about alien civilizations, but this is the one we got.

Where are the aliens? Where are the humans? We seem to be conscious, able to introspect. Why are we here? - This is the thing I told you before about how we can predict that future creatures will be different from us. Our preferences are this amalgam of various sorts of randomly patched-together preferences about thirst and sex and sleep and attention and all these sorts of things.

So we don't understand that very well. It's not very transparent and it's a mess, right? That is the source of our motivation. That is how we were made and how we are induced to do things. But we can't summarize it very well and we don't even understand it very well.

That's who we are. And often we find ourselves in a situation where we don't feel very motivated, we don't know why. In other situations, we find ourselves very motivated and we don't know why either. And so that's the nature of being a human of the sort that we are, because even though we can think abstractly and reason abstractly, this package of motivations is just opaque and a mess.

And that's what it means to be a human today, with this motivation. We can't very well tell the meaning of our life; it is this mess. But our descendants will be different. They will actually know exactly what they want, and it will be to have more descendants. That will be the meaning for them.

- Well, it's funny that you have the certainty. You have more certainty, you have more transparency about our descendants than you do about your own self. - Right. - So it's really interesting to think, 'cause you mentioned this about love, that something that's fundamental about love is this opaqueness, that we're not able to really introspect what the heck it is.

Or all the feelings, the complex feelings involved. - And that's true about many of our motivations. - And that's what it means to be human of the 20th and 21st century variety. Why is that not a feature that we want, that we will choose to have persist in civilization, then? This opaqueness, or put another way, mystery.

Maintaining a sense of mystery about ourselves and about those around us. Maybe that's a really nice thing to have, maybe. - So, I mean, this is the fundamental issue in analyzing the future, what will set the future? One theory about what will set the future is what do we want the future to be?

So under that theory, we should sit and talk about what we want the future to be, have some conferences, have some conventions, discuss things, vote on it maybe, and then hand it off to the implementation people to make the future the way we've decided it should be. That's not the actual process that's changed the world over history up to this point.

It has not been the result of us deciding what we want and making it happen. In our individual lives, we can do that. We might decide what career we want, or where we want to live, or who we want to live with. In our individual lives, we often do slowly make our lives better according to our plans, but that's not the whole world.

The whole world so far has mostly been a competitive world where things happen if anybody anywhere chooses to adopt them and they have an advantage. And then it spreads and other people are forced to adopt it by competitive pressures. So that's the kind of analysis I can use to predict the future, and I do use that to predict the future.

It doesn't tell us it'll be a future we like. It just tells us what it'll be. - And it'll be one where we're trying to maximize the number of our descendants. - And we know that abstractly and directly, and it's not opaque. - With some non-zero probability, that will lead us to become grabby, expanding aggressively out into the cosmos until we meet other aliens.

- The timing isn't clear. We might become grabby and then this happens. Grabbiness and this are both results of competition, but it's less clear which happens first. - Does this future excite you, scare you? How do you feel about this whole thing? - Well, again, I told you, compared to sort of a dead cosmology, at least it's energizing to have a living story with real actors and characters and agendas, right?

- Yeah, and that's one hell of a fun universe to live in. - Robin, you're one of the most fascinating, fun people to talk to, brilliant, humble, systematic in your analysis. - Hold on to my wallet here. What's he looking for? - I already stole your wallet long ago.

I really, really appreciate you spending your valuable time with me. I hope we get a chance to talk many more times in the future. Thank you so much for sitting down. - Thank you. - Thanks for listening to this conversation with Robin Hanson. To support this podcast, please check out our sponsors in the description.

And now let me leave you with some words from Ray Bradbury. We are an impossibility in an impossible universe. Thank you for listening and hope to see you next time. (upbeat music)