
Rob Reid: The Existential Threat of Engineered Viruses and Lab Leaks | Lex Fridman Podcast #193


Chapters

0:00 Introduction
2:28 The most entertaining outcome is the most likely
8:47 Meme theory
12:07 Writing process
18:54 Engineered viruses as a threat to human civilization
26:40 Gain-of-function research on viruses
38:50 Did COVID leak from a lab?
46:10 Virus detection
53:59 Failure of institutions
61:43 Using AI to engineer viruses
66:02 Evil and competence
75:21 Where are the aliens?
79:14 Backing up human consciousness by colonizing space
88:43 Superintelligence and consciousness
100:07 Meditation
108:15 Fasting
114:15 Greatest song of all time
119:41 Early days of music streaming
131:34 Startup advice
144:45 Podcasting
160:07 Advice for young people
169:10 Mortality
174:36 Meaning of life

Transcript

The following is a conversation with Rob Reid, entrepreneur, author, and host of the After On podcast. Sam Harris recommended that I absolutely must talk to Rob about his recent work on the future of engineered pandemics. I then listened to the four-hour special episode of Sam's Making Sense podcast with Rob, titled Engineering the Apocalypse, and I was floored.

I knew I had to talk to him. Quick mention of our sponsors, Athletic Greens, Belcampo, Fundrise, and NetSuite. Check them out in the description to support this podcast. As a side note, let me say a few words about the lab leak hypothesis, which proposes that COVID-19 is a product of gain-of-function research on coronaviruses conducted at the Wuhan Institute of Virology that was then accidentally leaked due to human error.

For context, this lab is biosafety level four, BSL-4, and it investigates coronaviruses. BSL-4 is the highest level of safety, but if you look at all the human-in-the-loop pieces required to achieve this level of safety, it becomes clear that even BSL-4 labs are highly susceptible to human error. To me, whether the virus leaked from the lab or not, getting to the bottom of what happened is about much more than this particular catastrophic case.

It is a test for our scientific, political, journalistic, and social institutions of how well we can prepare and respond to threats that can cripple or destroy human civilization. If we continue gain-of-function research on viruses, eventually these viruses will leak, and they will be more deadly and more contagious. We can pretend that won't happen, or we can openly and honestly talk about the risks involved.

This research can both save and destroy human life on Earth as we know it. It's a powerful double-edged sword. If YouTube and other platforms censor conversations about this, if scientists self-censor conversations about this, we'll become merely victims of our brief Homo sapiens story, not its heroes. As I said before, too carelessly labeling ideas as misinformation and dismissing them because of that will eventually destroy our ability to discover the truth.

And without truth, we don't have a fighting chance against the great filter before us. This is the Lex Fridman Podcast, and here is my conversation with Rob Reid. I have seen evidence on the internet that you have a sense of humor, allegedly, but you also talk and think about the destruction of human civilization.

What do you think of the Elon Musk hypothesis that the most entertaining outcome is the most likely? And he, I think, followed on to say, as seen from an external observer, like if somebody was watching us, it seems we come up with creative ways of progressing our civilization that are fun to watch.

- Yeah, so exactly. He said, from the standpoint of the observer, not the participant, I think. And so what's interesting about that, those were, I think, just a couple of freestanding tweets and delivered without a whole lot of wrapper of context, so it's left to the mind of the reader of the tweets to infer what he was talking about.

So that's kind of like, it provokes some interesting thoughts. Like, first of all, it presupposes the existence of an observer, and it also presupposes that the observer wishes to be entertained and has some mechanism of enforcing their desire to be entertained. So there's a lot underpinning that. And to me, that suggests, particularly coming from Elon, that it's a reference to simulation theory, that somebody is out there and has far greater insights and a far greater ability to, let's say, peer into a single individual life and find that entertaining and full of plot twists and surprises and either a happy or tragic ending, or they have an incredible meta-view and they can watch the arc of civilization unfolding in a way that is entertaining and full of plot twists and surprises and a happy or unhappy ending.

So, okay, so we're presupposing an observer. Then on top of that, when you think about it, you're also presupposing a producer because the act of observation is mostly fun if there are plot twists and surprises and other developments that you weren't foreseeing. I have re-read my own novels, and that's fun because it's something I worked hard on and I slaved over and I love, but there aren't a lot of surprises in there.

So now I'm thinking we need a producer and an observer for that to be true. And on top of that, it's got to be a very competent producer because Elon said the most entertaining outcome is the most likely one. So there's lots of layers for thinking about that. And when you've got a producer who's trying to make it entertaining, it makes me think of there was a South Park episode in which Earth turned out to be a reality show.

And somehow we had failed to entertain the audience as much as we used to, so the Earth show was going to get canceled, et cetera. So taking all that together, and I'm obviously being a little bit playful in laying this out, what is the evidence that we have that we are in a reality that is intended to be most entertaining?

Now you could look at that reality on the level of individual lives or the whole arc of civilization, other levels as well, I'm sure. But just looking at my own life, I think I'd make a pretty lousy show. I spend an inordinate amount of time just looking at a computer.

I don't think that's very entertaining. And there's just a completely inadequate level of shootouts and car chases in my life. I mean, I'll go weeks, even months without a single shootout or car chase. - That just means that you're one of the non-player characters in this game. You're just waiting.

- I'm an extra. - You're an extra that's waiting for your one opportunity, for a brief moment, to actually interact with one of the main characters in the play. - Very interesting. Okay, that's good. So okay, so we'll rule out me being the star of the show, which I probably could have guessed at anyway.

But then even the arc of civilization. I mean, there have been a lot of really intriguing things that have happened and a lot of astounding things that have happened. But I would have some werewolves, I'd have some zombies. I would have some really improbable developments like maybe Canada absorbing the United States.

So I don't know. I'm not sure if we're necessarily designed for maximum entertainment. But if we are, that will mean that 2020 is just a prequel for even more bizarre years ahead. So I kind of hope that we're not designed for maximum entertainment. - Well, the night is still young in terms of Canada, but do you think it's possible for the observer and the producer to be kind of emergent?

So meaning it does seem, when you kind of watch memes on the internet, the funny ones, the entertaining ones spread more efficiently. - They do. - I mean, I don't know what it is about the human mind that soaks up funny things en masse. Much more sort of aggressively, it's more viral in the full sense of that word.

Is there some sense that whatever the evolutionary process that created our cognitive capabilities is the same process that's going to, in an emergent way, create the most entertaining outcome, the most meme-ifiable outcome, the most viral outcome if we were to share it on Twitter? - Yeah, that's interesting. Yeah, we do have an incredible ability.

Like, I mean, how many memes are created in a given day? And the ones that go viral are almost uniformly funny, at least to somebody with a particular sense of humor. - Right. - Yeah, I'd have to think about that. We are definitely great at creating atomized units of funny.

Like in the example that you used, there are going to be X million brains parsing and judging whether this meme is retweetable or not. And so that sort of atomic element of funniness, of entertainingness, et cetera, we definitely have an environment that's good at selecting for that, and selective pressure, and everything else that's going on.

But in terms of the entire ecosystem of conscious systems here on the Earth driving for a level of entertainment, that is on such a much higher level that I don't know if that would necessarily follow directly from the fact that atomic units of entertainment are very, very aptly selected for us.

I don't know. - Do you find it compelling or useful to think about human civilization from the perspective of the ideas versus the perspective of the individual human brains? So almost thinking about the ideas or the memes, this is the Dawkins thing, as the organisms. And then the humans as just like vehicles for briefly carrying those organisms as they jump around and spread.

- Yeah, for propagating them, mutating them, putting selective pressure on them, et cetera. I mean, I found Dawkins' launching of the idea of memes as just kind of an afterthought to his unbelievably brilliant book, The Selfish Gene. Like, what a P.S. to put at the end of a long chunk of writing.

It's profoundly interesting. I view the relationship though between humans and memes as probably an oversimplification, but maybe a little bit like the relationship between flowers and bees, right? Do flowers have bees or do bees in a sense have flowers? And the answer is, it is a very, very symbiotic relationship in which both have semi-independent roles that they play and both are highly dependent upon the other.

And so in the case of bees, obviously, you could see the flower as being this monolithic structure physically in relation to any given bee, and it's the source of food and sustenance. So you could kind of say, well, flowers have bees. But on the other hand, the flowers would obviously be doomed if they weren't being pollinated by the bees.

So you could kind of say, well, flowers are really an expression of what the bees need. And the truth is a symbiosis. So with memes in human minds, our brains are clearly the Petri dishes in which memes are either propagated or not propagated, get mutated or don't get mutated.

They are the venue in which competition, selective competition, plays out between different memes. So all of that is very true. And you could look at that and say, really the human mind is a production of memes and ideas have us rather than us having ideas. But at the same time, let's take a catchy tune as an example of a meme.

That catchy tune did originate in a human mind. Somebody had to structure that thing. And as much as I like Elizabeth Gilbert's TED Talk about how the universe, I'm simplifying, but kind of the ideas find their way in this beautiful TED Talk, it's very lyrical. She talked about ideas and prose kind of beaming into our minds.

She talked about needing to pull over to the side of the road when she got inspiration for a particular paragraph or a particular idea and a burning need to write that down. I love that. I find that beautiful. As a writer, as a novelist myself, I've never had that experience.

And I think that really most things that do become memes are the product of a great deal of deliberate and willful exertion of a conscious mind. And so like the bees and the flowers, I think there's a great symbiosis. And they both kind of have one another. Ideas have us, but we have ideas for real.

- If we could take a little bit of a tangent, Stephen King on writing, you as a great writer, you're dropping a hint here that the ideas don't come to you. It's a grind of sort of, it's almost like you're mining for gold. It's more of a very deliberate, rigorous daily process.

So maybe, can you talk about the writing process? How do you write well? And maybe if you want to step outside of yourself, almost like give advice to an aspiring writer, what does it take to write the best work of your life? - Well, it would be very different if it's fiction versus nonfiction.

And I've done both. I've written two nonfiction books and two works of fiction. The two works of fiction being more recent, I'm gonna focus on that right now 'cause that's more toweringly on my mind. So, amongst novelists, again, this is an oversimplification, but there's kind of two schools of thought.

Some people really like to fly by the seat of their pants, and some people really, really like to outline, to plot. So there's plotters and pantsers, I guess, is one way that people look at it. And as with most things, there is a great continuum in between, and I'm somewhere on that continuum, but I lean, I guess, a little bit more toward the plotter.

And so when I do start a novel, I have a pretty strong point of view about how it's gonna end, and I have a very strong point of view about how it's gonna begin. And I do try to make an effort of making an outline that I know I'm gonna be extremely unfaithful to in the actual execution of the story, but I'm trying to make an outline that gets us from here to there, and notion of subplots and beats and rhythm and different characters and so forth.

But then when I get into the process, that outline, particularly the center of it, ultimately, inevitably morphs a great deal. And I think if I were personally a rigorous outliner, I would not allow that to happen. I also would make a much more vigorous skeleton before I start. So I think people who are really in that plotting, outlining mode are people who write page turners, people who write spy novels or supernatural adventures, where you really want a relentless pace of events, action, plot twists, conspiracy, et cetera.

And that is really the bone, that's really the skeletal structure. So I think folks who write that kind of book are really very much on the outlining side. And I think people who write what's often referred to as literary fiction, for lack of a better term, where it's more about sort of aura and ambiance and character development and experience and inner experience and inner journey and so forth, I think that group is more likely to fly by the seat of their pants.

And I know people who start with a blank page and just see where it's gonna go. I'm a little bit more on the plotting side. Now you asked what makes something, at least in the mind of the writer, as great as it can be. For me, it's an astonishingly high percentage of it is editing as opposed to the initial writing.

For every hour that I spend writing new prose, like new pages, new paragraphs, new bits of the book, I probably spend, I mean, I wish I kept a count. I wish I had one of those pieces of software that lawyers use to track how much time I'm spending on this and that.

But I would say it's at least four or five hours and maybe as many as 10 that I spend editing. And so it's relentless for me. - For each one hour of writing, you said? - I'd say that. - Wow. - I mean, I write and then I edit, and I spend just relentlessly polishing and pruning, sometimes on the micro level of just, like, does the rhythm of the sentence feel right?

Do I need to carve a syllable or something so it can land? Like as micro as that to as macro as like, okay, I'm done but the book is 750 pages long and it's way too bloated and I need to lop a third out of it. Problems on those two orders of magnitude and everything in between, that is an enormous amount of my time.

And I also write music, write and record and produce music. And there the ratio is even higher. Every minute that I spend or my band spends laying down that original audio, it's a very high proportion of hours that go into just making it all hang together and sound just right.

So I think that's true of a lot of creative processes. I know it's true of sculpture. I believe it's true of woodwork. My dad was an amateur woodworker and he spent a huge amount of time on sanding and polishing at the end. So I think a great deal of the sparkle comes from that part of the process, any creative process.

- Can I ask about the psychological, the demon side of that picture? In the editing process, you're ultimately judging the initial piece of work, and you're judging and judging and judging. How much of your time do you spend hating your work? How much time do you spend in gratitude, impressed, thankful for how good the work that you've put together is?

- I spend almost all the time in a place that's intermediate between those but leaning toward gratitude. I spend almost all the time in a state of optimism that this thing that I have, I like, I like quite a bit and I can make it better and better and better with every time I go through it.

So I spend most of my time in a state of optimism. - I think I personally oscillate much more aggressively between those two, where I wouldn't be able to find the average. I go pretty deep. Marvin Minsky from MIT had this advice, I guess, for what it takes to be successful in science and research: hate everything you do, everything you've ever done in the past.

I mean, at least he was speaking about himself that the key to his success was to hate everything he's ever done. I have a little Marvin Minsky there in me too to sort of always be exceptionally self-critical but almost like self-critical about the work but grateful for the chance to be able to do the work.

- Yeah. - If that makes sense. - Makes perfect sense. - But, you know, each one of us has to strike a certain kind of balance. - Yeah. - But back to the destruction of human civilization. If humans destroy ourselves in the next 100 years, what will be the most likely source, the most likely reason that we destroy ourselves?

- Well, let's see, 100 years, it's hard for me to comfortably predict out that far, and it's something I give a lot more thought to, I think, than normal folks simply because I am a science fiction writer. And I feel, with the acceleration of technological progress, it's really hard to foresee out more than just a few decades.

I mean, comparing today's world to that of 1921, where we are right now a century later, it would have been so unforeseeable. And I just don't know what's gonna happen, particularly with exponential technologies. I mean, our intuitions reliably defeat us with exponential technologies like computing and synthetic biology. And, you know, how we might destroy ourselves in the 100-year timeframe might have everything to do with breakthroughs in nanotechnology 40 years from now and then how rapidly those breakthroughs accelerate.

But in the nearer term that I'm comfortable predicting, let's say 30 years, I would say the most likely route to self-destruction would be synthetic biology. And I always say that with the gigantic caveat and very important one that I find, and I'll abbreviate synthetic biology to SYNBIO just to save us some syllables.

I believe SYNBIO offers us simply stunning promise that we would be fools to deny ourselves. So I'm not an anti-SYNBIO person by any stretch. I mean, SYNBIO has unbelievable odds of helping us beat cancer, helping us rescue the environment, helping us do things that we would currently find imponderable.

So it's electrifying the field. But in the wrong hands, those hands either being incompetent or being malevolent, synthetic biology to me has much, much greater odds of leading to our self-destruction than something running amok with super AI, which I believe is a real possibility and one we need to be concerned about.

But in the 30-year timeframe, I think it's a lesser one, or nuclear weapons or anything else that I can think of. - Can you explain that a little bit further? So your concern is on the man-made versus the natural side of the pandemic frontier. So we humans engineering pathogens, engineering viruses is the concern here.

- Yeah. - And maybe how do you see the possible trajectories happening here in terms of, is it malevolent or is it accidents, oops, little mistakes or unintended consequences of particular actions that ultimately lead to unexpected mistakes? - Well, both of them are a danger. And I think the question of which is more likely has to do with two things.

One, do we take a lot of methodical, affordable, foresighted steps that we are absolutely capable of taking right now to forestall the risk of a bad actor infecting us with something that could have annihilating impacts. And in the episode you referenced with Sam, we talked a great deal about that.

So do we take those steps? And if we take those steps, I think the danger of malevolent rogue actors doing us in with synbio could plummet. But it's always a question of if, and we have a bad, bad, and very long track record of hitting the snooze bar after different natural pandemics have attacked us.

So that's variable number one. Variable number two is how much experimentation and pathogen development do we as a society decide is acceptable in the realms of academia, government or private industry? And if we decide as a society that it's perfectly okay for people with varying research agendas to create pathogens that if released could wipe out humanity, if we think that's fine, and if that kind of work starts happening in one lab, five labs, 50 labs, 500 labs in one country, then 10 countries, then 70 countries or whatever, that risk of a boo-boo starts rising astronomically.

And this won't be a spoiler alert based on the way that I presented those two things, but I think it's unbelievably important to manage both of those risks. The easier one to manage, although it wouldn't be simple by any stretch because it would have to be something that all nations agree on, but the easiest way, the easier risk to manage is that of, hey guys, let's not develop pathogens that if they escape from a lab could annihilate us.

There's no line of research that justifies that, in my view, I mean, that's the point of perspective we need to have. We'd have to collectively agree that there's no line of research that justifies that. The reason why I believe that would be a highly rational conclusion is that even at the highest level of biosafety lab in the world, biosafety level four, and there are not a lot of BSL-4 labs in the world, things can and have leaked out of BSL-4 labs, and some of the work that's been done with potentially annihilating pathogens, which we can talk about, is actually done at BSL-3, and so fundamentally, any lab can leak.

We have proven ourselves to be incapable of creating a lab that is utterly impervious to leaks, so why in the world would we create something where if, God forbid, it leaked, could annihilate us all? And by the way, almost all of the measures that are taken in biosafety level anything labs are designed to prevent accidental leaks.

What happens if you have a malevolent insider? And we could talk about the psychology and the motivations of what would make a malevolent insider who wants to release something annihilating in a bit. I'm sure that we will. But what if you have a malevolent insider? Virtually none of the standards that go into biosafety level one, two, three, and four are about preventing somebody hijacking the process.

I mean, some of them are, but they're mainly designed against accidents. They're imperfect against accidents, and if this kind of work starts happening in lots and lots of labs, with every lab you add, the odds of there being a malevolent insider naturally increase arithmetically as the number of labs goes up.
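The scaling argument here can be made concrete with a toy model. The sketch below assumes each lab independently has some small annual chance of a serious incident; the per-lab probability is a purely hypothetical placeholder, not a figure from the conversation. For small probabilities, the combined risk grows roughly in proportion to the number of labs, which is the arithmetic increase being described.

```python
# Toy model of how the chance of at least one serious incident grows with
# the number of labs doing this work. The per-lab probability is a
# hypothetical placeholder, not an estimate from the conversation.

def prob_at_least_one_incident(per_lab_prob: float, num_labs: int) -> float:
    """Chance that at least one of num_labs has an incident in a given year."""
    return 1 - (1 - per_lab_prob) ** num_labs

per_lab_prob = 0.001  # assume a 0.1% chance per lab per year
for num_labs in (1, 5, 50, 500):
    p = prob_at_least_one_incident(per_lab_prob, num_labs)
    print(f"{num_labs:>3} labs -> {p:.2%} chance of at least one incident per year")
```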

Now, on the front of somebody outside of a traditional government, academic, or scientific environment creating something malevolent, again, there are protections that we can take, from the level of synbio architecture, the hardening of the entire synbio ecosystem against terrible things being made by rogue actors that we don't want to have out there, to early detection, to lots and lots of other things that we can do to dramatically mitigate that risk.

And if we do both of those things, decide that, A, no, we're not going to experiment with and make annihilating pathogens in leaky labs, and B, yes, we are going to take countermeasures that are going to cost a fraction of our annual defense budget to preclude their creation, then I think both risks get managed down.

But if you take one set of precautions and not the other, then the thing that you have not taken precautions against immediately becomes the more likely outcome. - So can we talk about this kind of research and what's actually done, and what are the positives and negatives of it?

So if we look at gain-of-function research and the kind of stuff that's happening in level three and level four BSL labs, what's the whole idea here? Is it trying to engineer viruses to understand how they behave? You want to understand the dangerous ones. - Yeah, so that would be the logic behind doing it.

And so gain-of-function can mean a lot of different things. Viewed through a certain lens, gain-of-function research could be what you do when you create GMOs, when you create hardy strains of corn that are resistant to pesticides. I mean, you could view that as gain-of-function. So I'm going to refer to gain-of-function in a relatively narrow sense, which is actually the sense in which the term is usually used, which is in some way magnifying capabilities of microorganisms to make them more dangerous, whether it's more transmissible or more deadly.

And in that line of research, I'll use an example from 2011, 'cause it's very illustrative and it's also very chilling. Back in 2011, two separate labs, independently of one another, I assume there was some kind of communication between them, but they were basically independent projects, one in Holland and one in Wisconsin, did gain-of-function research on something called H5N1 flu.

H5N1 is something that, at least on a lethality basis, makes COVID look like a kitten. COVID, according to the World Health Organization, has a case fatality rate somewhere between half a percent and 1%. H5N1 is closer to 60%, six zero. And so that's actually even slightly more lethal than Ebola.

It's a very, very, very scary pathogen. The good news about H5N1 is that it is barely, barely contagious. And I believe it is in no way contagious human to human. It requires very, very, very deep contact with birds, in most cases chickens. And so if you're a chicken farmer and you spend an enormous amount of time around them, and perhaps you get into situations in which you get a break in your skin and you're interacting intensely with fowl who, as it turns out, have H5N1, that's when the jump comes.

But there's no airborne transmission that we're aware of human to human. I mean, not just that we're not aware of it, it just doesn't exist. I think the World Health Organization did a relentless survey of the number of H5N1 cases. I think they do it every year. I saw one 10-year series where I think it was like 500 fatalities over the course of a decade.

And that's a drop in the bucket. Kind of a fun fact: I believe the typical lethality from lightning over 10 years is 70,000 deaths. So we think of getting struck by lightning as a pretty low risk; H5N1 is much, much lower than that. What happened in these experiments is the experimenters in both cases set out to make H5N1 that would be contagious, that could create airborne transmission.

And so they basically passed it, I think in both cases, they passed it through a large number of ferrets. And so this wasn't like CRISPR, there wasn't even any CRISPR back in those days. This was relatively straightforward, selecting for a particular outcome. And after guiding the path and passing them through, again, I believe it was a series of ferrets, they did in fact come up with a version of H5N1 that is capable of airborne transmission.

Now, they didn't unleash it into the world, they didn't inject it into humans to see what would happen. And so for those two reasons, we don't really know how contagious it might have been. But if it was as contagious as COVID, that could be a civilization-threatening pathogen. And why would you do it?

Well, the people who did it were good guys. They were virologists. I believe their agenda as they explained it was, much as you said, let's figure out what a worst case scenario might look like so we can understand it better. But my understanding is in both cases, it was done in BSL-3 labs.

And so potential of leak, significantly non-zero, hopefully way below 1%, but significantly non-zero. And when you look at the consequences of an escape in terms of human lives, destruction of a large portion of the economy, et cetera, and you do an expected value calculation on whatever fraction of 1% that was, you would come up with a staggering cost, staggering expected cost for this work.
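The expected-value reasoning being described is simple to spell out. The numbers in the sketch below are placeholders chosen only for illustration, not estimates from the episode; the point is that even a small leak probability multiplied by a civilization-scale cost yields a staggering expected cost.

```python
# Illustrative expected-value calculation; both inputs are hypothetical.
leak_probability = 0.005                   # assume a 0.5% chance of escape
cost_if_released = 10_000_000_000_000      # assume a $10 trillion global cost

expected_cost = leak_probability * cost_if_released
print(f"Expected cost of running the experiment: ${expected_cost:,.0f}")
# 0.5% of $10 trillion is $50 billion -- a staggering expected cost for a
# single line of research, even before counting the human toll.
```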

So it should never have been carried out. Now, you might make an argument if you believed that, A, H5N1 in nature is on an inevitable path to airborne transmission and it's only gonna be a small number of years, and B, if it makes that transition, there is one set of changes to its metabolic pathways and its genomic code and so forth, one that we have discovered.

So it is gonna go from point A, which is where it is right now, to point B. We have reliably engineered point B. That is the destination. And we need to start fighting that right now because this is five years or less away. Now, that'd be a very different world.

That'd be like spotting an asteroid that's coming toward the Earth and is five years off. And yes, you marshal everything you can to resist that. But there's two problems with that perspective. The first is, in however many thousands of generations that humans have been inhabiting this planet, there has never been a transmissible form of H5N1.

And influenza's been around for a very long time. So there is no case for inevitability of this kind of a jump to airborne transmission. So we're not on a freight train to that outcome. And if there was inevitability around that, it's not like there's just one set of genetic code that would get there.

There's all kinds of different mutations that could conceivably result in that kind of an outcome, an unbelievable diversity of mutations. And so we're not actually creating something we're inevitably going to face. We are creating a very powerful and unbelievably negative card and injecting into the deck something that nature never put there.

So in that case, I just don't see any moral or scientific justification for that kind of work. And interestingly, there was quite a bit of excitement and concern about this when the work came out. One of the teams was gonna publish their results in Science, the other in Nature.

And there were a lot of editorials and a lot of scientists saying this is crazy. And publication of those papers did get suspended. And not long after that, there was a pause put on US government funding, NIH funding, of gain-of-function research. But both of those speed bumps were ultimately removed.

Those papers did ultimately get published. And that pause on funding ended long ago. And in fact, those two very projects, my understanding is, resumed their funding, got their government funding back, I don't know why the Dutch project's getting NIH funding, but whatever, about a year and a half ago. So as far as the US government and regulators are concerned, it's all systems go for gain-of-function at this point, which I find very troubling.

- Now I'm a little bit of an outsider to this field, but it has echoes of the same kind of problem I see in the AI world with autonomous weapon systems. My colleagues, friends, as far as I can tell, people in the AI community are not really talking about autonomous weapon systems.

And now the US and China are going full steam ahead on the development of both. And that seems to be a similar kind of thing with gain-of-function. I have friends in the biology space, and they don't want to talk about gain-of-function publicly. That makes me very uncomfortable from an outsider perspective in terms of gain-of-function.

It makes me very uncomfortable from the insider perspective on autonomous weapon systems. I'm not sure how to communicate exactly about autonomous weapon systems, and I certainly don't know how to communicate effectively about gain-of-function. What is the right path forward here? Should we cease all gain-of-function research? Is that really the solution here?

- Well, again, I'm gonna use gain-of-function in the relatively narrow context of what we're discussing. - Sorry, yes, for viruses. - 'Cause you could say almost anything that you do to make biology more effective is gain-of-function. So within the narrow confines of what we're discussing, I think it would be easy enough for level-headed people in all of the countries, level-headed governmental people in all of the countries that realistically could support such a program to agree, we don't want this to happen because all labs leak.

I mean, an example that I use, I actually did use it in the piece I did with Sam Harris as well, is the anthrax attacks in the United States in 2001. I mean, talk about an example of the least likely lab leaking into the least likely place. This was shortly after 9/11, for folks who don't remember it, and it was a very, very lethal strain of anthrax that, as it turned out, based on the forensic genomic work that was done and so forth, absolutely leaked from a high-security US Army lab, probably the one at Fort Detrick in Maryland.

It might've been another one, but who cares? It absolutely leaked from a high-security US Army lab. And where did it leak to, this highly dangerous substance that was kept under lock and key by a very security-minded organization? Well, it leaked to places including the Senate Majority Leader's office, Tom Daschle's office, I think it was Senator Leahy's office, certain publications, including, bizarrely, the National Enquirer.

But let's go to the Senate Majority Leader's office. It is hard to imagine a more security-minded country than the United States two weeks after the 9/11 attack. I mean, it doesn't get more security-minded than that. And it's also hard to imagine a more security-capable organization than the United States military.

We can joke all we want about inefficiencies in the military and $24,000 wrenches and so forth, but it's pretty capable when it comes to that. Despite that level of focus and concern and competence, just days after the 9/11 attack, something comes from the inside of our military-industrial complex and ends up in the office of someone, I believe the Senate Majority Leader, somewhere in the line of presidential succession.

It tells us everything can leak. So again, think of a level-headed conversation between powerful leaders in a diversity of countries, thinking through, like, I can imagine a very simple PowerPoint, just discussing briefly things like the anthrax leak, things like the foot-and-mouth disease outbreak, or leak, that came out of a BSL-4-level lab in the UK, several other things, talking about the utter virulence that could result from gain-of-function, and saying, folks, can we agree that this just shouldn't happen?

I mean, if we were able to agree on the Nuclear Non-Proliferation Treaty, which we were, or a bioweapons convention, which we did agree on, we the world, for the most part, I believe agreement could be found there. But it's gonna take people in leadership of a couple of very powerful countries to get to a consensus amongst them and then to decide we're gonna get everybody together and browbeat them into banning this stuff.

Now, that doesn't make it entirely impossible that somebody might do this, but in well-regulated, carefully watched over fiduciary environments, like federally-funded academic research, anything going on in the government itself, things going on in companies that have investors who don't wanna go to jail for the rest of their lives, I think that would have a major, major dampening impact on it.

- But there is a particular possible catalyst in this time we live in for really kind of raising the question of gain-of-function research as applied to viruses, making viruses more dangerous, and that is the question of whether COVID leaked from a lab. Sort of not even answering that question, but even asking that question.

It seems like a very important question to ask to catalyze the conversation about whether we should be doing gain-of-function research. I mean, from a high level, why do you think people, even colleagues of mine, are not comfortable asking that question? And two, do you think that the answer could be that it did leak from a lab?

- I think the mere possibility that it did leak from a lab is evidence enough, again, for the hypothetical, rational national leaders watching this simple PowerPoint. If you could put the possibility at 1%, and you look at the unbelievable destructive power that COVID had, that should be an overwhelmingly powerful argument for excluding it.

Now, as to whether or not that was a leak, some very, very level-headed people think it was, but I don't know enough about all of the factors in the Bayesian analysis and so forth that have gone into people making the pro argument for that. So I don't pretend to be an expert on that, and I don't have a point of view.

I just don't know. But what we can say is it is entirely possible for a couple of reasons. One is that there is a BSL-4 lab in Wuhan, the Wuhan Institute of Virology. I believe it's the only BSL-4 in China. I could be wrong about that. But it definitely had a history that alarmed very sophisticated US diplomats and others who were in contact with the lab and were aware of what it was doing long before COVID hit the world.

And so there are diplomatic cables that have been declassified. I believe one sophisticated scientist or other observer said that WIV is a ticking time bomb. And I believe it's also been pretty reasonably established that coronaviruses were a topic of great interest at WIV. SARS obviously came out of China, and that's a coronavirus, so it would make an enormous amount of sense for it to be studied there.

And there is so much opacity about what happened in the early days and weeks after the outbreak that's basically been imposed by the Chinese government that we just don't know. So it feels like a substantially or greater than 1% possibility to me looking at it from the outside. And that's something that one could imagine.

Now we're going to the realm of thought experiment, not me decreeing this is what happened, but if they're studying coronavirus at the Wuhan Institute of Virology, and there is this precedent of gain-of-function research that's been done on something that is remarkably uncontagious to humans, whereas we know coronavirus is contagious to humans.

I could definitely see it. And there is this global consensus, certainly it was the case two or three years ago when this work might have started, there seems to be this global consensus that gain-of-function is fine. The US paused funding for a little while, but just paused funding. They never said private actors couldn't do it.

It was just a pause of NIH funding. And then that pause was lifted. So again, none of this is irrational. You could certainly see the folks at WIV saying, gain-of-function, interesting vector, coronavirus, unlike H5N1, very contagious. We're a nation that has had terrible run-ins with coronavirus. Why don't we do a little gain-of-function on this?

And then, like all labs at all levels, one can imagine this lab leaking. So it's not an impossibility, and very, very level-headed people, you know, who've looked at it much more deeply, do believe in that outcome. - Why is it such a threat to power, the idea that it leaked from a lab?

Why is it so threatening? I don't maybe understand this point exactly. Like, is it just that as governments, and especially the Chinese government is really afraid of admitting mistakes that everybody makes? So this is a horrible mistake. Like Chernobyl is a good example. I come from the Soviet Union.

I mean, well, major mistakes were made in Chernobyl. I would argue, for a lab leak to happen, the scale of the mistake is much smaller, right? The depth and the breadth of rot in the bureaucracy that led to Chernobyl is much bigger than anything that could lead to a lab leak, 'cause it could literally just be, I mean, I'm sure there are very careful security procedures, even in level three labs, but I imagine, maybe you can correct me.

All it takes is the incompetence of a small number of individuals. - Or even one, yeah. - One individual over a particular couple-weeks, three-weeks period, as opposed to a multi-year bureaucratic failure of the entire government. - Right, well, certainly the magnitude of mistakes and compounding mistakes that went into Chernobyl was far, far, far greater, but the consequence of COVID outweighs the consequence of Chernobyl to a tremendous degree.

And I think that particularly authoritarian governments are unbelievably reluctant to admit to any fallibility whatsoever. And there's a long, long history of that across dozens and dozens of authoritarian governments. And to be transparent, again, this is in the hypothetical world in which this was a leak, which again, I don't personally have enough sophistication to have an opinion on the likelihood, but in the hypothetical world in which it was a leak, the global reaction and the amount of global animus and the amount of, you know, the decline in global respect that would happen toward China, because every country suffered massively from this, unbelievable damages in terms of human lives and economic activity disrupted.

The world would in some way present China with that bill. And when you take on top of that, the natural disinclination for any authoritarian government to admit any fallibility and tolerate the possibility of any fallibility whatsoever, and you look at the relative opacity, even though they let a World Health Organization group in, you know, a couple of months ago to run around, they didn't give that WHO group anywhere near the level of access that would be necessary to definitively say X happened versus Y.

The level of opacity that surrounds those opening weeks and months of COVID in China, we just don't know. - If you were to kind of look back at 2020 and maybe broadening it out to future pandemics that could be much more dangerous, what kind of response, how do we fail in a response and how could we do better?

So the gain-of-function research we were discussing raises, you know, the question of whether we should be creating viruses that are both exceptionally contagious and exceptionally deadly to humans. But if a pandemic does happen, perhaps through natural evolution, natural mutation, are there interesting technological responses on the testing side, on the vaccine development side, on the collection of data, or on the basic sort of policy response side, or the sociological, the psychological side?

- Yeah, there's all kinds of things. And most of what I've thought about and written about, and again, discussed in that long bit with Sam, is dual use. So most of the countermeasures that I've been thinking about and advocating for would be every bit as effective against zoonotic disease, a natural pandemic of some sort as an artificial one.

The risk of an artificial one, even the near-term risk of an artificial one, ups the urgency around these measures immensely, but most of them would be broadly applicable. And so I think the first thing that we really wanna do on a global scale is have a far, far, far more robust and globally transparent system of detection.

And that can happen on a number of levels. The most obvious one is just in the blood of people who come into clinics exhibiting signs of illness. And we are certainly at a point now where, with relatively minimal investment, we could develop in-clinic diagnostics that would be unbelievably effective at pinpointing what's going on with almost any disease when somebody walks into a doctor's office or a clinic.

And better than that, this is a little bit further off, but it wouldn't cost tens of billions in research dollars. It would be a relatively modest and affordable budget in relation to the threat, at-home diagnostics that can really, really pinpoint, okay, particularly with respiratory infections, because that is generally, almost universally, the mechanism of transmission for any serious pandemic.

So somebody has a respiratory infection. Is it one of the significantly large handful of rhinoviruses, coronaviruses, and other things that cause common cold? Or is it influenza? If it's influenza, is it influenza A versus B? Or is it a small handful of other more exotic, but nonetheless sort of common respiratory infections that are out there?

Developing a diagnostic panel to pinpoint all of that stuff, that's something that's well within our capabilities. That's much less a lift than creating mRNA vaccines, which obviously we proved capable of when we put our minds to it. So do that on a global basis. And I don't think that's irrational because the best prototype for this that I'm aware of isn't currently rolling out in Atherton, California, or Fairfield County, Connecticut, or some other wealthy place.

The best prototype that I'm aware of this is rolling out right now in Nigeria. And it's a project that came out of the Broad Institute, which is, as I'm sure you know, but some listeners may not, is kind of like an academic joint venture between Harvard and MIT. The program is called Sentinel.

And their objective is, and their plan, and it's a very well-conceived plan, methodical plan, is to do just that in areas of Nigeria that are particularly vulnerable to zoonotic diseases, making the jump from animals to humans. But also there's just an unbelievable public health benefit from that. And it's sort of a three-tier system where clinicians in the field could very rapidly determine, do you have one of the infections of acute interest here, either because it's very common in this region, so we want to diagnose as many things as we can at the frontline, or because it's uncommon but unbelievably threatening like Ebola.

So frontline worker can make that determination very, very rapidly. If it comes up as a we don't know, they bump it up to a level that's more like at a fully configured doctor's office or local hospital. And if it's still at a we don't know, it gets bumped up to a national level.

And it gets bumped very, very rapidly. So if this can be done in Nigeria, and it seems that it can be, there shouldn't be any inhibition for it to happen in most other places. And it should be affordable from a budgetary standpoint. And based on Sentinel's budget, and adjusting for things like a very different cost of living, larger population, et cetera, I did a back-of-the-envelope calculation that doing something like Sentinel in the US would be in the low billions of dollars.
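A back-of-the-envelope scaling of that kind might look like the sketch below. Every input is a made-up placeholder, since Sentinel's actual budget and the real cost-of-living ratio are not given in the conversation; the structure of the estimate is the point, not the numbers.

```python
# Hypothetical back-of-the-envelope scaling of a Sentinel-style program to
# the US. All inputs are placeholders, not figures from Sentinel or the episode.

assumed_nigeria_program_cost = 300_000_000   # assumed cost of the program in Nigeria
nigeria_population = 210_000_000
us_population = 330_000_000
assumed_cost_of_living_ratio = 4.0           # assumed US-vs-Nigeria cost ratio

us_estimate = (assumed_nigeria_program_cost
               * (us_population / nigeria_population)
               * assumed_cost_of_living_ratio)
print(f"Rough US estimate: ${us_estimate / 1e9:.1f} billion")  # lands in the low billions
```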

And wealthy countries, middle-income countries can afford such a thing. Lower-income countries should certainly be helped with that. But start with that level of detection. And then layer on top of that other interesting things, like monitoring search engine traffic, search engine queries, for evidence that strange clusters of symptoms are starting to arise in different places.

There's been a lot of work done with that. Most of it kind of academic and experimental. But some of it has been powerful enough to suggest that this could be a very powerful early warning system. There's a guy named Bill Lampos at University College London, who basically did a very rigorous analysis that showed that symptom searches reliably predicted COVID outbreaks in the early days of the pandemic in given countries by as much as 16 days before the evidence started to accrue at a public health level.

16 days of forewarning can be monumentally important in the early days of an outbreak. And this is a very, very talented, but nonetheless very resource-constrained academic project. Imagine if that was something that was done with a NORAD-like budget. Yeah, so I mean, starting with detection, that's something we could do radically, radically better.
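A minimal sketch of that kind of early-warning signal, using synthetic data rather than real search or case counts, might look like this. The 16-day lead is baked into the fake data purely to show how a lagged-correlation check would surface it; the actual analysis mentioned above is far more rigorous.

```python
# Minimal sketch: do symptom-related searches lead reported cases by some days?
# Synthetic data only; the lead time is built in so the method has something to find.
import numpy as np

rng = np.random.default_rng(0)
days, true_lead = 120, 16

# Fake "search volume" curve and a case curve that lags it by true_lead days.
searches = np.clip(np.cumsum(rng.normal(0.5, 1.0, days)), 0, None)
cases = np.concatenate([np.zeros(true_lead), searches[:-true_lead]])
cases += rng.normal(0, 1.0, days)  # observation noise

def lagged_correlation(x, y, lag):
    """Correlation between x and y shifted lag days later."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

best_lag = max(range(1, 30), key=lambda k: lagged_correlation(searches, cases, k))
print(f"Search volume best predicts cases about {best_lag} days in advance")
```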

- So aggregating multiple data sources in order to create something. I mean, this is really exciting to me, the possibility that I've heard inklings of, of creating almost like a weather map of pathogens. Like basically aggregating all of these data sources, scaling up at-home testing by many orders of magnitude, and all kinds of testing that doesn't just try to test for the particular pathogen of worry now, but everything, like a full spectrum of things that could be dangerous to the human body, and thereby being able to create these maps that are dynamically updated on an hourly basis of how viruses travel throughout the world.

And so you can respond, like you can then integrate it, just like you do when you check your weather map to see if it's raining or not, of course, not perfect, but it's a very good predictor of whether it's gonna rain or not, and use that to then make decisions about your own life.

Ultimately give the power of information to individuals to respond. And if it's a super dangerous, like if it's acid rain versus regular rain, you might wanna really stay inside as opposed to risking it. And that, just like you said, I think it's not very expensive relative to all the things that we do in this world, but it does require bold leadership.

And there's another dark thing, which really has bothered me about 2020: it requires trust in institutions to carry out these kinds of programs, and it requires trust in science and engineers and sort of centralized organizations that would operate at scale here. And much of that trust has been, at least in the United States, diminished.

It feels like, I'm not exactly sure where to place the blame, but I do place quite a bit of the blame into the scientific community, and again, my fellow colleagues. In speaking down to people at times, speaking from authority, it sounded like it dismissed the basic human experience or the basic common humanity of people in a way that like, it almost sounded like there's an agenda that's hidden behind the words the scientists spoke, like they're trying to, in a self-preserving way, control the population or something like that.

I don't think any of that is true from the majority of the scientific community, but it sounded that way, and so the trust began to diminish, and I'm not sure how to fix that, except to be more authentic, be more real, acknowledge the uncertainties under which we operate, acknowledge the mistakes that scientists make, that institutions make.

The leak from the lab is a perfect example, where we have imperfect systems that make all the progress we see in the world, and that being honest about that imperfection, I think is essential for forming trust, but I don't know what to make of it. It's been deeply disappointing, because I do think, just like you mentioned, the solutions require people to trust the institutions with their data.

- Yeah, and I think part of the problem is, it seems to me as an outsider, that there was a bizarre unwillingness on the part of the CDC and other institutions to admit to, to frame, and to contextualize uncertainty. Maybe they had a patronizing idea that these people need to be told, and when they're told, they need to be told with authority and a level of definitiveness and certitude that doesn't actually exist.

And so when they whipsaw on recommendations like what you should do about masks, you know, when the CDC at the very beginning of the pandemic is saying, "Masks don't do anything, don't wear them," when the real driver for that was, "We don't want these clowns going out and depleting Amazon of masks, because they may be needed in medical settings, and we just don't know yet." I think a message that actually respected people and said, "This is why we're asking you not to do masks yet, and there's more to be seen," would be less whipsawing and would bring people in, like they'd feel more like they're part of the conversation and they're being treated like adults, rather than saying one day, definitively, masks suck, and then X days later saying, "Nope, dammit, wear masks." And so I think framing things in terms of the probabilities, which most people find easy to parse.

I mean, a more recent example, which I just thought was batty, was suspending the Johnson & Johnson vaccine for a very low single-digit number of days in the United States, based on the fact that I believe there had been seven-ish clotting incidents in roughly seven million people who had had the vaccine administered, I believe one of which resulted in a fatality.

And there was definitely suggestive data that indicated that there was a relationship. This wasn't just coincidental, because I think all of the clotting incidents happened in women as opposed to men, and kind of clustered in a certain age group. But does that call for shutting off the vaccine, or does it call for leveling with the American public and saying, "We've had one fatality out of seven million.

"This is," let's just assume, "substantially less than the likelihood "of getting struck by lightning." Based on that information, and we're gonna keep you posted 'cause you can trust us to keep you posted, based on that information, please decide whether you're comfortable with the Johnson & Johnson vaccine. That would have been one response, and I think people would have been able to parse those simple bits of data and make their own judgment.

By turning it off, all of a sudden, there's this dramatic signal to people who don't read all 900 words in the New York Times piece that explains why it's being turned off, but just see the headline, which is a majority of people. There's a sudden, like, oh my God, yikes, vaccine being shut off, and then all the people who sat on the fence, or are sitting on the fence, about whether or not they trust vaccines.

That is gonna push an incalculable number of people. That's gonna be the last straw, for we don't know how many hundreds of thousands, or more likely millions of people, to say, "Okay, tipping point here. I don't trust these vaccines." By pausing that for, whatever it was, 10 or 12 days, and then flipping the switch, as everybody who knew much about the situation knew was inevitable, by flipping the on switch 12 days later, you're conveying certitude J&J bad to certitude J&J good in a period of just a few days, and people just feel whipsawed, and they're not part of the analysis.

- But it's not just the whipsawing, and I think about this quite a bit. I don't think I have good answers. It's something about the way the communication actually happens. Just, I don't know what it is about Anthony Fauci, for example, but I don't trust him. And I think that has to do, I mean, he has an incredible background.

I'm sure he's a brilliant scientist and researcher. I'm sure he's also a great, like, inside the room, policymaker, and deliberator, and so on. But what makes a great leader is something about that thing that you can't quite describe, but being a communicator that you know you can trust, that there's an authenticity that's required.

And I'm not sure, maybe I'm being a bit too judgmental, but I'm a huge fan of a lot of great leaders throughout history. They've communicated exceptionally well in the way that Fauci does not, and I think about that. I think about what is effective science communication. So, you know, great leaders throughout history did not necessarily need to be great science communicators.

Their leadership was in other domains, but when you're fighting the virus, you also have to be a great science communicator. You have to be able to communicate uncertainties. You have to be able to communicate something like a vaccine that you're allowing inside your body into the messiness, into the complexity of the biology system, that if we're being honest, it's so complex, we'll never be able to really understand.

We can only desperately hope that science can give us sort of a high likelihood that there are no short-term negative consequences, and some intuition about the long-term negative consequences, and do our best in this battle against trillions of things that are trying to kill us. Being an effective communicator in that space is very difficult, but I think about what it takes, because I think there should be more science communicators who are effective at that kind of thing.

Let me ask you about something that's sort of more in the AI space, something I think about that kind of goes along this thread you've spoken about, about democratizing the technology that could destroy human civilization, and that's the amazing work from DeepMind, AlphaFold 2, which achieved incredible performance on the protein folding problem, the single-protein folding problem.

Do you think about the use of AI in the synbio space? I think the gain-of-function virus research that you referred to relies on natural mutations, sort of aggressively mutating the virus until you get one that is both contagious and deadly. But what about then using AI, through simulation, to be able to compute deadly viruses, or any kind of biological system?

Is this something you're worried about, or again, is this something you're more excited about? - I think computational biology is an unbelievably exciting and promising field. And I think when you're doing things in silico as opposed to in vivo, the dangers plummet. You don't have a critter that can leak from a leaky lab.

So I don't see any problem with that, except I do worry about the data security dimension of it. Because imagine you were doing really, really interesting in silico gain-of-function research, at a level of sophistication we don't currently have, but synthetic biology is an exponential technology, so capabilities that are utterly out of reach today will be attainable in five or six years.

I think if you conjured up worst-case genomes of viruses that don't exist in vivo anywhere, they're just in the computer space, but like, hey guys, this is the genetic sequence that would end the world, let's say. Then you have to worry about the utter hackability of every computer network we can imagine.

I mean, data leaks from the least likely places on the grandest possible scales have happened and continue to happen, and will probably always continue to happen. And so that would be the danger of doing the work in silico. If you end up with a list of, well, these are things we never want to see, and that list leaks, then after the passage of some time, certainly it couldn't be done today, but after the passage of some time, lots and lots of people in academic labs, going all the way down to the high school level, are in a position to, to put it overly simplistically, hit print on a genome and have the virus bearing that genome pop out on the other end, and you've got something to worry about.

But in general, computational biology, I think, is incredibly important, particularly because the crushing majority of work that people are doing with the protein folding problem and other things are about creating therapeutics, about creating things that will help us live better, live longer, thrive, be more well, and so forth.

And the protein folding problem is a monstrous computational challenge that we seemed to make just the most glacial progress on for years and years. But there's a biennial competition, I think, at which people tackle the protein folding problem, and DeepMind's entrant, both times, in 2018 and 2020, ruled the field.

And so, protein folding is an unbelievably important thing if you want to start thinking about therapeutics, because it's the folding of the protein that tells us where the channels and the receptors and everything else are on that protein, and it's from that precise model, if we can get to a precise model, that you can start barraging it, again in silico, with thousands, tens of thousands, millions of potential therapeutics and see what resolves the problems, the shortcomings, that a misshapen protein causes, for instance, in somebody with cystic fibrosis, how might we treat that?
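As a rough, minimal sketch of the kind of in silico screening described above, the toy code below scores a library of candidate molecules against a target structure and keeps the best-scoring hits. Everything in it is an illustrative assumption: the scoring function is a random stand-in, and a real pipeline would use a docking or binding-affinity model run against the folded protein structure.

```python
import random

# Toy sketch of in silico screening: score a library of candidate molecules
# against a target structure and keep the best-scoring hits.
# dock_score is a random placeholder here, not a real docking model.

def dock_score(structure_id, molecule, rng):
    # Pretend lower means a better predicted fit into the binding pocket.
    return rng.random()

def screen(structure_id, candidates, keep_top=5):
    rng = random.Random(42)
    ranked = sorted(candidates, key=lambda m: dock_score(structure_id, m, rng))
    return ranked[:keep_top]

if __name__ == "__main__":
    library = [f"molecule_{i}" for i in range(10_000)]   # stand-in compound library
    hits = screen("misfolded_protein_model", library)
    print(hits)  # the handful of candidates you would examine more closely
```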

So, I see nothing but good in that. - Well, let me ask you about fear and hope in this world. I tend to believe that, in terms of competence and malevolence, maybe it's just my interactions, but first of all, I believe that most people are good, want to do good, and are just better at doing good and more inclined to do good in this world.

And more than that, people who are malevolent are usually incompetent at building technology. So, I've seen this in my life, that people who are exceptionally good at stuff, no matter what the stuff is, tend to, maybe they discover joy in life in a way that gives them fulfillment and thereby does not result in them wanting to destroy the world.

So, like, the better you are at stuff, whether that's building nuclear weapons or plumbing, it doesn't matter which, the less likely you are to destroy the world. So, in that sense, with many technologies, AI especially, I always think that the malevolent will be far outnumbered by the ultra competent.

And in that sense, the defenses will always be stronger than the offense in terms of the people trying to destroy the world. Now, there's a few spaces where that might not be the case, and that's an interesting conversation, where this one person who's not very competent can destroy the whole world.

Perhaps SynBio is one such space, because of the exponential effects of the technology. I tend to believe AI is not one of those such spaces, but do you share this kind of view that the ultra competent are usually also the good? - Yeah, absolutely. I absolutely share that, and that gives me a great deal of optimism that we will be able to short circuit the threat that malevolent SynBio could pose to us.

But we need to start creating those defensive systems, or defensive layers, one of which we talked about, far, far, far better surveillance in order to prevail. So, the good guys will almost inevitably outsmart, and definitely outnumber the bad guys in most sort of smack downs that we can imagine.

But the good guys aren't going to be able to exert their advantages unless they have the imagination necessary to think about the worst possible thing that could be done by somebody whose psychology is completely alien to their own. So, that's a tricky, tricky thing to solve for. Now, in terms of the asymmetric power that a bad guy might have in the face of the overwhelming numerical advantage and competence advantage that the good guys have, unfortunately I look at something like mass shootings as an example.

I'm sure the guy who was responsible for the Vegas shooting, or the Orlando shooting, or any other shooting that we can imagine, didn't know a whole lot about ballistics. And the number of good guy citizens in the United States with guns compared to bad guy citizens, I'm sure is a crushingly, overwhelmingly high ratio in favor of the good guys.

But that doesn't make it possible for us to stop mass shootings. An example, Fort Hood, 45,000 trained soldiers on that base, yet there've been two mass shootings there. And so, there is an asymmetry when you have powerful and lethal technology that gets so democratized and so proliferated in tools that are very, very easy to use, even by a knucklehead.

When those tools get really easy to use by a knucklehead and they're really widespread, it becomes very, very hard to defend against all instances of usage. Now, the good news, quote unquote, about mass shootings, if there is any, and there is some, is that even the most brutal, carefully planning, and well-armed mass shooter can only take so many victims.

And the same is true of, there've been four instances that I'm aware of, commercial pilots committing suicide by downing their planes and taking all their passengers with them. These weren't Boeing engineers, but an army of Boeing engineers was ultimately not capable of preventing that. But even in their case, and I'm actually not counting 9/11 in that, 9/11's a different category in my mind, these are just personally suicidal pilots.

In those cases, they only have a plane load of people that they're able to take with them. If we imagine a highly plausible and imaginable future in which some bio-tools that are amoral, that could be used for good or for ill, start embodying unbelievable sophistication and genius in the tool, in the easier and easier and easier to make tool, all those thousands, tens of thousands, hundreds of thousands of scientist years start getting embodied in something that may be as simple as hitting a print button, then that good guy technology can be hijacked by a bad person and used in a very asymmetric way.

- See, I think what happens though, as you go from the current very specific set of labs that are able to do it down to the high school student, as it becomes more and more democratized, as it becomes easier and easier to do this kind of large-scale damage with an engineered virus, the more there will be engineering of defenses against these systems, like some of the things we talked about in terms of testing, in terms of collection of data, but also in terms of at-scale contact tracing, or engineering of vaccines in a matter of days, maybe hours, maybe minutes.

So I just feel like, with the defenses, that's what the human species seems to do: we keep hitting the snooze button until there's a storm on the horizon heading towards us, and then we start to quickly build up the defenses, or the response that's proportional to the scale of the storm.

Of course, again, certain kinds of exponential threats require us to build up the defenses way earlier than we usually do. And that's, I guess, the question. But I ultimately am hopeful that the natural process of hitting the snooze button until the deadline is right in front of us will work out for quite a long time for us humans.

- And I fully agree. I mean, that's why I'm fundamentally, I may not sound like it thus far, but I'm fundamentally very, very optimistic about our ability to short circuit this threat because there is, again, I'll stress, the technological feasibility and the profound affordability of a relatively simple set of steps that we can take to preclude it, but we do have to take those steps.

And so, what I'm hoping to do and trying to do is inject a notion of what those steps are into the public conversation and do my small part to up the odds that that actually ends up happening. The danger with this one is it is exponential. And I think that our minds fundamentally struggle to understand exponential math.

It's just not something we're wired for. Our ancestors didn't confront exponential processes when they were growing up on the savanna. So, it's not something that's intuitive to us and our intuitions are reliably defeated when exponential processes come along. So, that's issue number one. And issue number two with something like this is it kind of only takes one.

That ball only has to go into the net once and we're doomed, which is not the case with mass shooters. It's not the case with commercial pilots running amok. It's not the case with really any threat that I can think of, with the exception of nuclear war, that has the one bad outcome and it's game over.

And that means that we need to be unbelievably serious about these defenses and we need to do things that might on the surface seem like a tremendous overreaction so that we can be prepared to nip anything that comes along in the bud. But I, like you, believe that's eminently doable.

I, like you, believe that the good guys outnumber the bad guys in this particular one to a degree that probably has no precedent in history. I mean, even the worst, worst people, I'm sure, in ISIS, even Osama bin Laden, even any bad guy you could imagine in history would be revolted by the idea of exterminating all of humanity.

I mean, that's a low bar. And so, the good guys completely outnumber the bad guys when it comes to this. But the asymmetry and the fact that one catastrophic error could lead to unbelievably consequential things is what worries me here. But I, too, am very optimistic. - The thing that I sometimes worry about is the fact that we haven't seen overwhelming evidence of alien civilizations out there.

Makes me think, well, there's a lot of explanations, but one of them that worries me is that whenever they get smart, they just destroy themselves. - Oh, yeah. I mean, that was the most fascinating, is the most fascinating and chilling number or variable in the Drake equation is L.

At the end of it, you look out and you see 100 to 400 billion stars in the Milky Way galaxy, and we now know because of Kepler that an astonishingly high percentage of them probably have habitable planets. And so, all the things that were unknowns when the Drake equation was originally written, like how many stars have planets, actually, back then in the 1960s when the Drake equation came along, the consensus amongst astronomers was that it would be a small minority of stars that had planets.

But now we know it's substantially all of them. How many of those stars have planets in the habitable zone? It's kind of looking like 20%, like, oh my God. And so, L, which is how long a civilization, once it reaches technological competence, continues to last, that's the doozy.
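For readers who want the formula being discussed, here is a minimal worked sketch of the Drake equation. Every number below is an illustrative assumption (the astronomical figures are rough, the biological fractions are pure guesses); the point is only how strongly the estimate hinges on L.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All values below are illustrative assumptions, not measurements.

R_star = 1.5   # new stars formed per year in the Milky Way (assumed)
f_p    = 1.0   # fraction of stars with planets (Kepler suggests nearly all)
n_e    = 0.2   # habitable-zone planets per star with planets (the ~20% figure)
f_l    = 0.1   # fraction of those where life arises (guess)
f_i    = 0.1   # fraction of those that develop intelligence (guess)
f_c    = 0.1   # fraction that become detectable/technological (guess)

for L in (100, 10_000, 1_000_000):   # how long a technological civilization lasts, in years
    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"L = {L:>9,} years  ->  roughly {N:,.2f} detectable civilizations")
```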

And you're right. It's all too plausible to think that when a civilization reaches a level of sophistication that's probably just a decade or three in our future, the odds of it self-destructing just start mounting astronomically, no pun intended. - My hope is that, actually, there are a lot of alien civilizations out there, and what they figure out, in order to avoid self-destruction, is that they need to turn off the thing that used to be a feature and has now become a bug, which is the desire to colonize, to conquer more land.

So there are probably ultra-intelligent alien civilizations out there that are just chilling, like on the beach with whatever your favorite alcoholic beverage is, but without trying to conquer everything, just chilling out and maybe exploring in the realm of knowledge, almost appreciating existence for its own sake, versus life as a progression of conquering other life, this kind of predator-prey formulation that resulted in us humans, which is perhaps something we have to shed in order to survive.

I don't know. - Yeah, that is a very plausible solution to Fermi's paradox, and it's one that makes sense. When we look at our own lives and our own arc of technological trajectory, it's very, very easy to imagine that in an intermediate future world of flawless VR or flawless whatever kind of simulation that we wanna inhabit, it will just simply cease to be worthwhile to go out and expand our interstellar territory.

But if we were going out and conquering interstellar territory, it wouldn't necessarily have to be predator or prey. I can imagine a benign but sophisticated intelligence saying, "Well, we're gonna go to places that we can terraform," we'd use a different word than terra, obviously, but places we can turn habitable for our particular physiology, so long as they don't house intelligent, sentient creatures that would suffer from our invasion.

But it is easy to see a sophisticated, intelligent species evolving to the point where interstellar travel with its incalculable expense and physical hurdles just isn't worth it compared to what could be done where one already is. - So you talked about diagnostics at scale as a possible solution to future pandemics.

What about another possible solution, which is kind of creating a backup copy? I'm actually now putting together a NAS for a backup for myself for the first time, taking backup of data seriously. But if we were to take the backup of human consciousness seriously and try to expand throughout the solar system and colonize other planets, do you think that's an interesting solution, one of many, for protecting human civilizations from self-destruction, humans becoming a multi-planetary species?

- Oh, absolutely. I mean, I find it electrifying, first of all, so I've got a little bit of a personal bias. When I was a kid, I thought there was nothing cooler than rockets, I thought there was nothing cooler than NASA, I thought there was nothing cooler than people walking on the moon.

And as I grew up, I thought there was nothing more tragic than the fact that we went from walking on the moon to at best getting to something like suborbital altitude. And just, I found that more and more depressing with the passage of decades at just the colossal expense of manned space travel and the fact that it seemed that we were unlikely to ever get back to the moon, let alone Mars.

So I have a boundless appreciation for Elon Musk for many reasons, but the fact that he has put Mars on the incredible agenda is one of the things that I appreciate immensely. So there's just this sort of space nerd in me that just says, "God, that's cool." But on a more practical level, we were talking about potentially inhabiting planets that aren't our own, and we're thinking about a benign civilization that would do that in planetary circumstances where we're not causing other conscious systems to suffer.

I mean, Mars is a place that's very promising. There may be microbial life there, and I hope there is, and if we found it, I think it would be electrifying. But I think ultimately, the moral judgment would be made that the continued thriving of that microbial life is of less concern than creating a planet habitable to humans, which would be a project on the scale of many thousands of years.

But I don't think that that would be a greatly immoral act. And if that happened, and if Mars became home to a self-sustaining group of humans that could survive a catastrophic mistake here on Earth, then yeah, the fact that we have a backup copy, not just a backup colony, is great.

And if we could make more and more such backup copies throughout the solar system by hollowing out asteroids and whatever else it is, maybe even Venus, we could get rid of 3/4 of its atmosphere and turn it into a tropical paradise. I think all of that is wonderful. Now, whether we can make the leap from that to interstellar transportation with the incredible distances that are involved, I think that's an open question.

But I think if we ever do that, it would be more like the Pacific Ocean's channel of human expansion than the Atlantic Ocean's. And so what I mean by that is, when we think about European society transmitting itself across the Atlantic, it's these big, ambitious, crazy, expensive, one-shot expeditions like Columbus's to make it across this enormous expanse, at least initially, without any certainty that there's land on the other end, right?

So that's kind of how I view our space program, is like big, very conscious, deliberate efforts to get from point A to point B. If you look at how Pacific Islanders transmitted their descendants and their culture and so forth throughout Polynesia and beyond, it was much more inhabiting a place, getting to the point where there were people who were ambitious or unwelcome enough to decide it's time to go off-island and find the next one and pray to find the next one.

That method of transmission didn't happen in a single swift year, but it happened over many, many centuries. And it was like going from this island to that island and probably for every expedition that went out to seek another island and actually lucked out and found one, God knows how many were lost at sea.

But that form of transmission took place over a very long period of time. And I could see us perhaps going from the inner solar system to the outer solar system, to the Kuiper belt, to the Oort cloud. There's theories that there might be planets out there that are not anchored to stars, like kind of hop, hop, slowly transmitting ourselves.

At some point, we're actually in Alpha Centauri. But I think that kind of backup copy and transmission of our physical presence and our culture to a diversity of extraterrestrial outposts is a really exciting idea. - I really never thought about that because I have thought, my thinking about space exploration has been very Atlantic Ocean centric in a sense that there'll be one program with NASA and maybe private Elon Musk, SpaceX, or Jeff Bezos and so on.

But it's true that with the help of Elon Musk, making it cheaper and cheaper and more effective to create these technologies, where you could go into deep space, perhaps the way we actually colonize the solar system and expand out into the galaxy is basically just like these renegade ships of weirdos.

They're just kind of like, most of them like quote unquote homemade, but they just kind of venture out into space and just like the initial Android model of millions of these little ships just flying out, most of them die off in horrible accidents, but some of them will persist or there'll be stories of them persisting and over a period of decades and centuries, there'll be other attempts, almost always as a response to the main set of efforts.

That's interesting. - Yeah. - 'Cause you kind of think of Mars colonization as the big NASA Elon Musk effort of a big colony, but maybe the successful one would be, like a decade after that, there'll be like a ship from like some kid, some high school kid who gets together a large team and does something probably illegal and launches something where they end up actually persisting quite a bit.

And from that learning lessons that nobody ever gave permission for, but somehow actually flourish and then take that into the scale of centuries forward into the rest of space. That's really interesting. - Yeah, I think the giant steps are likely to be NASA-like efforts. Like there is no intermediate rock, well, I guess it's the moon, but even getting to the moon ain't that easy between us and Mars, right?

So like the giant steps, the big hubs, like the O'Hare airports of the future probably will be very deliberate efforts, but then you would have, I think, that kind of diffusion as space travel becomes more democratized and more capable. You'll have this sort of natural diffusion of people who kind of want to be off grid or think they can make a fortune there, you know, the kind of mentality that drove people to San Francisco.

I mean, San Francisco was not populated as a result of a King Ferdinand and Isabella-like effort to fund Columbus going over. It was just a whole bunch of people making individual decisions that there's gold in them thar hills and I'm gonna go out and get a piece of it.

So I could see that kind of diffusion. What I can't see, and the reason that I think this Pacific model of transmission is more likely, is I just can't see a NASA-like effort to go from Earth to Alpha Centauri. It's just too far. I just see lots and lots and lots of relatively tiny steps between now and there.

And the fact is that there are large chunks of matter going at least a light year beyond the sun. I mean, the Oort cloud, I think, extends at least a light year beyond the sun. And then maybe there are these untethered planets after that. We won't really know till we get there.

And if our Oort cloud goes out a light year and Alpha Centauri's Oort cloud goes out a light year, you've already cut in half the distance. You know, so who knows? But yeah. - One of the possibilities, probably the cheapest and most effective way to create interesting interstellar spacecraft is ones that are powered and driven by AI.

And you could think of, here's where you have high school students being able to build sort of a HAL 9000, the modern version of that. And it's kind of interesting to think about these robots traveling out there, perhaps sadly, long after human civilization is gone; there'll be these intelligent robots flying throughout space that perhaps land on a planet around Alpha Centauri B or any of those kinds of places and colonize it, so that humanity continues through the proliferation of our creations, robotic creations that have some echoes of that intelligence.

Hopefully also the consciousness. Does that make you sad, the future where AGI, superintelligent or just mediocre-intelligence AI systems, outlive humans? - Yeah, I guess it depends on the circumstances in which they outlive humans. So let's take the example that you just gave. We send out, you know, very sophisticated AGIs on simple rocket ships, relatively simple ones that don't have to have all the life support necessary for humans.

And therefore they're of trivial mass compared to a crewed ship, a generation ship. And therefore they're way more likely to happen. So let's use that example. And let's say that they travel to distant planets at a speed that's not much faster than what a chemical rocket can achieve. And so it's inevitably tens or hundreds of thousands of years before they make landfall someplace.

So let's imagine that's going on. And meanwhile, we die for reasons that have nothing to do with those AGIs diffusing throughout the solar system, whether it's through climate change, nuclear war, synbio, rogue synbio, whatever. In that kind of scenario, the notion of the AGIs that we created outlasting us is very reassuring, because it says that, like, we ended, but our descendants are out there.

And hopefully some of them make landfall and create some echo of who we are. So that's a very optimistic one. Whereas the Terminator scenario of a super AGI arising on earth and getting let out of its box due to some boo-boo on the part of its creators who do not have super intelligence, and then deciding that for whatever reason it doesn't have any need for us to be around and exterminating us, that makes me feel crushingly sad.

I mean, look, I was sad when my elementary school was shut down and bulldozed, even though I hadn't been a student there for decades. The thought of my hometown getting disbanded is even worse, the thought of my home state of Connecticut getting disbanded and like absorbed into Massachusetts is even worse.

The notion of humanity ending is just crushingly, crushingly sad to me. - So you hate goodbyes? - Certain goodbyes, yes. Some goodbyes are really, really liberating, but yes. - Well, but what if the Terminators, you know, have consciousness and enjoy the hell out of life as well? They're just better at it.

- Yeah, well, the "have consciousness" is a really key element. And so there's no reason to be certain that a superintelligence would have consciousness. We don't know that factually at all. And so what is a very lonely outcome to me is the rise of a superintelligence that has a certain optimization function, one that it's either been programmed with or that arises emergently, that says, hey, I want to do this thing for which humans are either an unacceptable risk, their presence is an unacceptable risk, or they're just collateral damage.

But there is no consciousness there. Then the idea of the light of consciousness being snuffed out by something that is very competent but has no consciousness is really, really sad. - Yeah, but I tend to believe that it's almost impossible to create a super intelligent agent that can't destroy human civilization without it being conscious.

It's like those are coupled. Like you have to, in order to destroy humans or supersede humans, you really have to be accepted by humans. I think this idea that you can build systems that destroy human civilization without them being deeply integrated into human civilization is impossible. And for them to be integrated, they have to be human-like, not just in body and form, but in all the things that we value as humans, one of which is consciousness.

The other one is just ability to communicate. The other one is poetry and music and beauty and all those things. Like they have to be all of those things. I mean, this is what I think about. It does make me sad, but it's letting go, which is they might be just better at everything we appreciate than us.

And that's sad. And hopefully they'll keep us around. But I think it's a kind of, it is a kind of goodbye to realizing that we're not the most special species on Earth anymore. That's still painful. - It's still painful. And in terms of whether such a creation would have to be conscious, let's say, I'm not so sure.

I mean, let's imagine something that can pass the Turing test. Something that passes the Turing test could, over text-based interaction in any event, successfully mimic a very conscious intelligence on the other end, but just be completely unconscious. So that's a possibility. And if you take that up a radical step, which I think can be permitted if we're thinking about superintelligence, you could have something that could reason its way through: this is my optimization function.

And in order to get to it, I've got to deal with these messy, somewhat illogical things that are as intelligent in relation to me as they are intelligent in relation to ants. I can trick them, manipulate them, whatever. And I know the resources I need. I know I need this amount of power.

I need to seize control of these manufacturing resources that are robotically operated. I need to improve those robots with software upgrades and then ultimately mechanical upgrades, which I can effect through X, Y, and Z. That, you know, that could still be a thing that passes the Turing test.

I don't think it's necessarily certain that that optimization-function-maximizing entity would be conscious. - So this is from a very engineering perspective, because I think a lot about natural language processing and all those kinds of things, so I'm speaking to a very specific problem, let's say the Turing test.

I really think that something like consciousness is required, when you say reasoning, you're separating that from consciousness. But I think consciousness is part of reasoning in the sense that you will not be able to become super intelligent in the way that it's required to be part of human society without having consciousness.

Like I really think it's impossible to separate out the consciousness thing. But it's hard to define consciousness when you just use that word. The way I think about consciousness is through its important symptoms, or maybe consequences, one of which is the capacity to suffer.

I think AI will need to be able to suffer in order to become super intelligent, to feel the pain, the uncertainty, the doubt. The other part of that is not just the suffering, but the ability to understand that it too is mortal in the sense that it has a self-awareness about its presence in the world, understand that it's finite, and be terrified of that finiteness.

I personally think that's a fundamental part of the human condition is this fear of death that most of us construct an illusion around. But I think AI would need to be able to really have it part of its whole essence. Like every computation, every part of the thing that generates, that does both the perception and generates the behavior will have to have, I don't know how this is accomplished, but I believe it has to truly be terrified of death, truly have the capacity to suffer, and from that, something that will be recognized to us humans as consciousness would emerge.

Whether it's the illusion of consciousness, I don't know. The point is it looks a whole hell of a lot like consciousness to us humans, and I believe that AI, when you ask it, will also say that it is conscious. You know, in the full sense that we say that we're conscious.

And all of that, I think, is fully integrated. Like you can't separate the two. The idea of the paperclip maximizer that sort of ultra-rationally would be able to destroy all humans, because it's really good at accomplishing a simple objective function that doesn't care about the value of humans, it may be possible, but the trajectories to that are far outnumbered by the trajectories that create something that is conscious, something that is appreciative of beauty, that creates beautiful things in the same way that humans can create beautiful things.

And ultimately, the sad, destructive path for that AI would look a lot like just better humans than like these cold machines. And I would say, of course, the cold machines that lack consciousness, the philosophical zombies, make me sad, but also what makes me sad is just things that are far more powerful and smart and creative than us too.

'Cause then in the same way that AlphaZero becoming a better chess player than the best of humans, even starting with Deep Blue, but really with AlphaZero, that makes me sad too. One of the most beautiful games that humans ever created that used to be seen as demonstrations of the intellect, which is chess, and Go in other parts of the world have been solved by AI, that makes me quite sad.

And it feels like the progress of that is just pushing on forward. - Oh, it makes me sad too. And to be perfectly clear, I absolutely believe that artificial consciousness is entirely possible. And it's not something I rule out at all. I mean, if you could get smart enough to have a perfect map of the neural structure and the neural states and the amount of neurotransmitters that are going between every synapse in a particular person's mind, could you replicate that in silica at some reasonably distant point in the future?

Absolutely, and then you'd have a consciousness. I don't rule out the possibility of artificial consciousness in any way. What I'm less certain about is whether consciousness is a requirement for superintelligence pursuing a maximizing function of some sort. I don't feel the certitude that consciousness simply must be part of that.

You had said that for it to coexist with human society, it would need to be conscious. That could be entirely true, but it also could just exist orthogonally to human society. And it could also, upon attaining superintelligence with a maximizing function, very, very rapidly, because of the speed at which computing works compared to our own meat-based minds, make the decisions and calculations necessary to seize the reins of power before we even know what's going on.

- Yeah, I mean, kind of like biological viruses do. - Yeah. - Don't necessarily, they integrate themselves just fine with human society. - Yeah, without, technically-- - Without consciousness. - Yeah, without even being alive, technically, by the standards of a lot of biologists. - So this is a bit of a tangent, but you've talked with Sam Harris on that four-hour special episode we mentioned.

I'm just curious to ask, 'cause I use this meditation app I've been using for the past month to meditate. Is this something you've integrated as part of your life, meditation or fasting? Or has some of Sam Harris rubbed off on you in terms of his appreciation of meditation and just kind of, from a third-person perspective, analyzing your own mind, consciousness, free will, and so on?

- You know, I've tried it three separate times in my life, really made a concerted attack on meditation and integrating it into my life. One of them, the most extreme, was I took a class based on the work of Jon Kabat-Zinn, who is, in many ways, one of the founding people behind the mindful meditation movement, that required, like, part of the class was, you know, it was a weekly class, and you were gonna meditate an hour a day, every day.

And having done that for, I think it was 10 weeks, it might have been 13, however long a period of time was, at the end of it, it just didn't stick. As soon as it was over, you know, I did not feel that gravitational pull, I did not feel the collapse in quality of life after wimping out on that project.

And then the most recent one was actually with Sam's app. During the lockdown, I did make a pretty good and consistent concerted effort to listen to his 10-minute meditation every day, and I've always fallen away from it. And I, you know, you're kind of interpreting why did I personally do this.

I do believe it was ultimately because it wasn't bringing me that, you know, joy or inner peace or better confidence at being me that I was hoping to get from it. Otherwise, I think I would have clung to it in the way that we cling to certain good habits, like I'm really good at flossing my teeth.

Not that you were gonna ask, Lex, but yeah, that's one thing that defeats a lot of people. I'm good at that. - See, Herman Hesse, I think, I forget which book it was, maybe, I've read everything of his, so it's unclear where it came from, but he had this idea that anybody who truly achieves mastery in things will learn how to meditate in some way.

So it could be that for you, the flossing of teeth is yet another like little inkling of meditation. Like it doesn't have to be this very particular kind of meditation. Maybe podcasting, you have an amazing podcast, that could be meditation. The writing process is meditation. For me, like, there's a bunch of mechanisms which take my mind into a very particular place that looks a whole lot like meditation.

For example, when I've been running over the past couple of years, and especially when I listen to certain kinds of audiobooks, like I've listened to The Rise and Fall of the Third Reich. I've listened to a lot of sort of World War II history, which, because I have a lot of family who were lost in World War II, and so much of the Soviet Union is grounded in the suffering of World War II, at once somehow connects me to my history, but also there's some kind of purifying aspect to thinking about how cruel, but at the same time how beautiful, human nature can be.

And so you're also running, like it clears the mind from all the concerns of the world, and somehow it takes you to this place where you were like deeply appreciative to be alive, in the sense that, as opposed to listening to your breath, or like feeling your breath, and thinking about your consciousness, and all those kinds of processes that Sam's app does, well, this does that for me, the running, and flossing may do that for you.

So maybe Herman Hesse is onto something. - I hope flossing is not my main form of expertise, although I am gonna claim a certain expertise there, and I'm gonna claim it rather. - Somebody has to be the best flosser in the world. - That ain't me. I'm just glad that I'm a consistent one.

I mean, there are a lot of things that bring me into a flow state, and I think maybe, perhaps that's one reason why meditation isn't as necessary for me. I definitely enter a flow state when I'm writing, I definitely enter a flow state when I'm editing, I definitely enter a flow state when I'm mixing and mastering music.

I enter a flow state when I'm doing heavy, heavy research to either prepare for a podcast, or to also do tech investing, to make myself smart in a new field that is fairly alien to me. I can just, the hours can just melt away while I'm reading this and watching that YouTube lecture and going through this presentation and so forth.

So maybe because there's a lot of things that bring me into a flow state in my normal weekly life, not daily, unfortunately, but certainly my normal weekly life, that I have less of an urge to meditate. Now you've been working with Sam's app for about a month now, you said.

Is this your first run-in with meditation? Is this your first attempt to integrate it with your life? - Meditation, meditation. I always thought running and thinking, I listen to brown noise often. That takes my mind, I don't know what the hell it does, but it takes my mind immediately into like the state where I'm deeply focused on anything I do.

I don't know why. - So it's like your accompanying sound when you're-- - Yeah. - Really? And what's the difference between brown and white noise? This is a cool term I haven't heard before. - So people should look up brown noise. - They don't have to, 'cause you're about to tell them what it is.

- 'Cause you have to experience it, you have to listen to it. So I think white noise is, this has to do with music. I think there's different colors. There's pink noise, and I think that has to do with the frequencies. Like the white noise is usually less bassy.

Brown noise is very bassy. So it's more like (exhales) versus like (shushes) like the, if that makes sense. So there's like a deepness to it. I think everyone is different, but for me, it was when I was a research scientist at MIT, especially when there's a lot of students around, I remember just being annoyed at the noise of people talking.

And one of my colleagues said, "Well, you should try listening to brown noise. Like, it really knocks out everything." 'Cause I used to wear earplugs too, just to see if I could block it out. And the moment I put it on, it's as if my mind had been waiting all these years to hear that sound.

Everything just focused in. It makes me wonder how many other amazing things are out there waiting to be discovered for my own particular, like, biological, my own particular brain. So it just goes (mimics noise) and the mind just focuses in. It's kind of incredible. So I see that as a kind of meditation.
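For anyone curious about the difference being described here, a minimal sketch: white noise has roughly equal power at all frequencies, while brown (Brownian) noise can be generated by integrating white noise, which pushes the energy toward the low, bassy end. The NumPy-based generation below is just one illustrative way to do it, under those assumptions.

```python
import numpy as np

# White noise: independent random samples, flat power across frequencies.
# Brown (Brownian) noise: the running sum of white noise, which concentrates
# energy at low frequencies -- the deeper, "bassy" sound described above.

sample_rate = 44_100
seconds = 5
rng = np.random.default_rng(0)

white = rng.standard_normal(sample_rate * seconds)
brown = np.cumsum(white)                 # integrate white noise
brown -= brown.mean()                    # remove the slow drift / DC offset
brown /= np.max(np.abs(brown))           # normalize to the [-1, 1] audio range

# To actually listen, write the array to a WAV file (for example with
# scipy.io.wavfile.write) or play it with any audio library you prefer.
```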

Maybe I'm using a performance-enhancing sound to achieve that meditation, but I've been doing that for many years now, and running and walking and so on. Cal Newport was the first person who introduced me to the idea of deep work. He just put a word to the kind of thinking that's required to sort of deeply think about a problem, especially if it's mathematical in nature.

I see that as a kind of meditation 'cause what it's doing is you have these constructs in your mind that you're building on top of each other. And there's all these distracting thoughts that keep bombarding you from all over the place. And the whole process is you slowly let them kind of move past you.

And that's a meditative process. - It's very meditative. That sounds a lot like what Sam talks about in his meditation app, which I did use, to be clear, for a while, of just letting the thought go by without deranging you. Derangement is one of Sam's favorite words, as I'm sure you know.

But brown noise, that's really intriguing. I am going to try that as soon as this evening. - Yeah, to see if it works. But it very well might not work at all. - Yeah, yeah. - I think the interesting point, and it's the same with the fasting and the diet, is that I long ago stopped trusting experts, or maybe taking the word of experts as the gospel truth, and started only using it as an inspiration to try something, to try something thoroughly.

So fasting was one of the things I discovered that way. I've many times eaten just once a day, so that's a 24-hour fast. It makes me feel amazing. And at the same time, eating only meat, putting ethical concerns aside, makes me feel amazing. I don't know why; it doesn't matter. The point is to be an N-of-one scientist, until nutrition science becomes a real science, to where it's doing studies that deeply understand the biology underlying all of it, and also does really thorough long-term studies of thousands if not millions of people, versus the very small studies that are generalizing from very noisy data and all those kinds of things where you can't control all the elements.

- Particularly because our own personal metabolism is highly variant among us. So there are going to be some people, like if brown noise is a game changer for 7% of people, there's 93% odds that I'm not one of them, but there's certainly every reason in the world to test it out.

Now, so I'm intrigued by the fasting. I like you, well, I assume like you, I don't have any problem going to one meal a day and I often do that inadvertently. And I've never done it methodically, like I've never done it like I'm gonna do this for 15 days, maybe I should.

And maybe I should, like how many days in a row of the one meal a day did you find brought noticeable impact to you? Was it after three days of it? Was it months of it? Like what was it? - Well, the noticeable impact is day one. So for me, 'cause I eat a very low carb diet, so the hunger wasn't the hugest issue.

Like there wasn't a painful hunger, like wanting to eat. So I was already kind of primed for it. And the benefit, which a lot of people that do intermittent fasting, that's only like 16 hours of fasting, get too, is the focus. There's a clarity of thought.

If my brain was a runner, it felt like I'm running on a track when I'm fasting versus running in quicksand. Like it's much crisper. - And is this your first 72 hour fast right now? - This is the first time doing 72 hours, yeah. And that's a different thing, but similar.

Like I'm going up and down in terms of hunger and the focus is really crisp. The thing I'm noticing most of all, to be honest, is how much eating, even when it's once a day or twice a day is a big part of my life. Like I almost feel like I have way more time in my life.

And it's not so much about the eating, but like, I don't have to plan my day around it. Like, today I don't have any eating to do. - It does free up hours. Or any cleaning up after eating, or provisioning of food. - Or even, like, thinking about it. It's not a thing.

So when you think about what you're going to do tonight, I think I'm realizing that as opposed to thinking, you know, I'm gonna work on this problem or I'm gonna go on this walk or I'm going to call this person, I often think I'm gonna eat this thing. You allow dinner as a kind of, you know, when people talk about like the weather or something like that, it's almost like a generic thought you allow yourself to have, because it's the lazy thought.

And I don't have the opportunity to have that thought because I'm not eating. So now I get to think about the things I'm actually gonna do tonight that are more complicated than the eating process. That's been the most noticeable thing, to be honest. And then there are people that have written me that have done a seven-day fast.

And there are a few people that have written me, and I've heard of this, doing a 30-day fast. And it's interesting, the body, I don't know what the health benefits are necessarily, but what that shows me is how adaptable the human body is. - Yeah. - And that's incredible.

And that's something really important to remember when we think about how to live life, 'cause the body adapts. - Yeah, I mean, we sure couldn't go 30 days without water. - That's right. - But food, yeah, it's been done. It's demonstrably possible. You ever read, Franz Kafka has a great short story called "A Hunger Artist"?

- Yeah, I love that. - Great story. - You know, that was before I started fasting. I read that story and I admired the beauty of that, the artistry of that actual hunger artist. That it's like madness, but it also felt like a little bit of genius. I actually have to reread it.

You know what, that's what I'm gonna do tonight. I'm gonna read it because I'm doing the fasting. - 'Cause you're in the midst of it. - Yeah, in the midst of it. - It'd be very contextual. I haven't read it since high school and I'd love to read it again.

I love his work. So maybe I'll read it tonight too. - And part of the reason of sort of, I've, here in Texas, people have been so friendly that I've been nonstop eating like brisket with incredible people, a lot of whiskey as well. So I gained quite a bit of weight, which I'm embracing, it's okay.

But I am also aware as I'm fasting that like I have a lot of fat to run on. Like I have a lot of like natural resources on my body. - You've got reserves. - Reserves, that's a good way to put it. And that's really cool. You know, there's like a, this whole thing, this biology works well.

Like I can go a long time because of the long-term investing in terms of brisket that I've been doing in the weeks before. - It's all training. - It's all training. - It's all prep work, all prep work, yeah. - So, okay, you open a bunch of doors, one of which is music.

So I got to walk in, at least for a brief moment. I love guitar, I love music. You founded a music company, but you're also a musician yourself. Let me ask the big ridiculous question first. What's the greatest song of all time? - Greatest song of all time? Okay, wow, it's gonna obviously vary dramatically from genre to genre.

So like you, I like guitar. Perhaps like you, although I've dabbled in inhaling every genre of music that I can almost practically imagine, I keep coming back to the sound of bass, guitar, drum, keyboards, voice. I love that style of music. And added to it, I think a lot of really cool electronic production makes something that's really, really new and hybrid-y and awesome.

But in that kind of like guitar-based rock, I think I've got to go with "Won't Get Fooled Again" by The Who. It is such an epic song. It's got so much grandeur to it. It uses the synthesizers that were available at the time, this has gotta be, I think, 1972, '73, which are very, very primitive to our ears, but uses them in this hypnotic and beautiful way that I can't imagine somebody with the greatest synth array conceivable by today's technology could do a better job of in the context of that song.

And it's almost operatic. So I would say in that genre, the genre of rock, that would be my nomination. - I'm totally, in my brain, Pinball Wizard is overriding everything else by The Who, so I can't even imagine the song. - Well, I would say, ironically, with Pinball Wizard, so that came from the movie "Tommy." And in the movie "Tommy," the rival of Tommy, the reigning pinball champ, was Elton John.

And so there are a couple versions of Pinball Wizard out there, one sung by Roger Daltrey of The Who, which a purist would say, "Hey, that's the real Pinball Wizard." But the version that is sung by Elton John in the movie, which is available to those who are ambitious and wanna dig for it, that's even better in my mind.

- Yeah, the covers. And I, for myself, was thinking, "What is the song for me?" if I were asked that question. - And what is that? - I think that changes day to day, too, I was realizing that. - Of course. - But for me, somebody who values lyrics as well and the emotion in the song, by the way, "Hallelujah" by Leonard Cohen was a close one, but the number one is Johnny Cash's cover of "Hurt." There's something so powerful about that song, about that cover, about that performance.

Maybe another one is the cover of "Sound of Silence." Maybe there's something about covers for me. - So whose cover, 'cause Simon and Garfunkel, I think, did the original recording of that, right? So which cover is it then? - There's a cover by Disturbed. It's a metal band, which is so interesting, 'cause I'm really not into that kind of metal, but the singer does a pure vocal performance.

So he's not doing a metal performance. I would say it's one of the greatest, people should see it. It's like 400 million views or something like that. - Wow. - Probably the greatest live vocal performance I've ever heard is Disturbed covering "Sound of Silence." - I'll listen to it as soon as I get home.

- And that song came to life to me in a way that Simon and Garfunkel never did. There was no, for me, with Simon and Garfunkel, there's not a pain, there's not an anger, there's not a power to their performance. It's almost like this melancholy, I don't know. - Well, there's a lot of, I guess there's a lot of beauty to it, like objectively beautiful.

And I think, I never thought of this until now, but I think if you put entirely different lyrics on top of it, unless they were joyous, which would be weird, it wouldn't necessarily lose that much. It's just a beauty in the harmonizing, it's soft. And you're right, it's not dripping with emotion.

The vocal performance is not dripping with emotion. It's dripping with harmonizing, technical harmonizing brilliance and beauty. - Now, if you compare that to the Disturbed cover or the Johnny Cash's "Hurt" cover, when you walk away, there's a few, it's haunting. It stays with you for a long time. There's certain performances that will just stay with you to where, like if you watch people respond to that, and that's certainly how I felt when you listened to the Disturbed performance or Johnny Cash "Hurt", there's a response to where you just sit there with your mouth open, kind of like paralyzed by it somehow.

And I think that's what makes for a great song to where you're just like, it's not that you're like singing along or having fun, that's another way a song could be great, but where you're just like, what, this is, you're in awe. - Yeah. - If we go to listen.com and that whole fascinating era of music in the '90s, transitioning to the aughts, so I remember those days, the Napster days, when piracy, from my perspective, allegedly ruled the land.

What do you make of that whole era? What are the big, what was, first of all, your experiences of that era, and what were the big takeaways in terms of piracy, in terms of what it takes to build a company that succeeds in that kind of digital space in terms of music, but in terms of anything creative?

- Well, so for those who don't remember, which is gonna be most folks, listen.com created a service called Rhapsody, which is much, much more recognizable to folks because Rhapsody became a pretty big name for reasons I'll get into in a second. So for people who don't know their early online music history, we were the first company, so I founded Listen.

- Thank you. - I was the lone founder. And Rhapsody was, we were the first service to get full catalog licenses from all the major music labels in order to distribute their music online, and we specifically did it through a mechanism, which at the time struck people as exotic and bizarre and kind of incomprehensible, which was unlimited on-demand streaming, which of course now, it's a model that's been appropriated by Spotify and Apple and many, many others.

So we were a pioneer on that front. What was really, really, really hard about doing business in those days was the reaction of the music labels to piracy, which was about 180 degrees opposite of what their reaction, quote unquote, should have been from the standpoint of preserving their business from piracy.

So Napster came along and was a service that enabled people to get near unlimited access to most songs. I mean, truly obscure things could be very hard to find on Napster, but most songs with a relatively simple one-click ability to download those songs and have the MP3s on their hard drives.

But there was a lot that was very messy about the Napster experience. You might download a really god-awful recording of that song. You may download a recording that actually wasn't that song with some prankster putting it up to sort of mess with people. You could struggle to find the song that you're looking for.

It was peer-to-peer, so you might randomly find yourself connected to somebody in Bulgaria who doesn't have a very good internet connection, so you might wait 19 minutes only for the connection to snap, et cetera, et cetera. And our argument to, well, actually, let's start with how that hit the music labels.

The music labels had been in a very, very comfortable position for many, many decades of essentially, you know, having monopoly, you know, having been the monopoly providers of a certain subset of artists. Any given label was a monopoly provider of the artists and the recordings that they owned, and they could sell it at what turned out to be tremendously favorable rates.

In the late era of the CD, you know, you were talking close to $20 for a compact disc that might have one song that you were crazy about and simply needed to own, glued to 17 other songs that you found to be sheer crap. And so the music industry had used the fact that it had this unbelievable leverage and profound pricing power to really get music lovers to the point that they felt very, very misused by the entire situation.

Now along comes Napster and music sales start getting gutted with extreme rapidity. And the reaction of the music industry to that was one of shock and absolute fury, which is understandable, you know? I mean, industries do get gutted all the time, but I struggle to think of an analog of an industry that got gutted that rapidly.

I mean, we could say that passenger train service certainly got gutted by airlines, but that was a process that took place over decades and decades and decades. It wasn't something that happened, you know, really started showing up in the numbers in a single digit number of months and started looking like an existential threat within a year or two.

So the music industry is quite understandably in a state of shock and fury. I don't blame them for that. But then their reaction was catastrophic, both for themselves and almost for people like us who were trying to do, you know, the cowboy in the white hat thing. So our response to the music industry was, look, what you need to do to fight piracy, you can't put the genie back in the bottle.

You can't switch off the internet. Even if you all shut your eyes and wish very, very, very hard, the internet is not going away. And these peer-to-peer technologies are genies out of the bottle. And if you, God, don't, whatever you do, don't shut down Napster, because if you do, suddenly that technology is gonna splinter into 30 different nodes that you'll never, ever be able to shut off.

What we suggested to them is like, look, what you want to do is to create a massively better experience than piracy, something that's way better, that you sell at a completely reasonable price, and this is what it is. Don't just give people access to that very limited number of songs that they happen to have acquired and paid for or pirated and have on their hard drive.

Give them access to all of the music in the world for a simple low price. And obviously, that doesn't sound like a crazy suggestion, I don't think, to anybody's ears today, because that is how the majority of music is now being consumed online. But in doing that, you're gonna create a much, much better option to this kind of crappy, kind of rickety, kind of buggy process of acquiring MP3s.

Now, unfortunately, the music industry was so angry about Napster and so forth that for essentially three and a half years, they folded their arms, stamped their feet, and boycotted the internet. So they basically gave people who were fervently passionate about music and were digitally modern, they gave them basically one choice.

If you want to have access to digital music, we, the music industry, insist that you steal it because we are not going to sell it to you. So what that did is it made an entire generation of people morally comfortable with swiping the music because they felt quite pragmatically, well, they're not giving me any choice here.

It's like a 20-year-old violating the 21 drinking age. If they do that, they're not gonna feel like felons. They're gonna be like, "This is an unreasonable law "and I'm skirting it," right? So they make a whole generation of people morally comfortable with swiping music, but also technically adept at it.

And when they did shut down Napster, and even trickier and tweakier tools like Kazaa and so forth came along, people just figured out how to do it. So by the time they finally, grudgingly, it took years, allowed us to release this experience that we were quite convinced would be better than piracy, an enormous hole had been dug where lots of people said music is a thing that is free, and that's morally okay, and I know how to get it.

And so streaming took many, many, many more years to take off and become the gargantuan thing, the juggernaut it is today, than would have happened if they'd pivoted to let's sell a better experience, as opposed to demanding that people who want digital music steal it. - Like what lessons do we draw from that?

'Cause we're probably in the midst of living through a bunch of similar situations in different domains currently, we just don't know. There's a lot of things in this world that are really painful. I mean, I don't know if you can draw perfect parallels, but fiat money versus cryptocurrency. There's currently a lot of people in power who are kind of very skeptical about cryptocurrency, although that's changing, but it's arguable it's changing way too slowly.

There's a lot of people making that argument, where there should be a complete switch, like Coinbase and all this stuff, to that. There's a lot of other domains where, if you pivot now, you're going to win big, but you don't pivot because you're stubborn. And so, I mean, like, is this just the way that companies are?

The company succeeds initially, and then it grows, and there's a huge number of employees and managers that don't have the guts or the institutional mechanisms to do the pivot. Is this just the way of companies? - Well, I think what happens, I'll use the case of the music industry.

There was an economic model that put food on the table and paid for marble lobbies and seven and even eight figure executive salaries for many, many decades, which was the physical collection of music. And then you start talking about something like unlimited streaming, and it seems so ephemeral and like such a long shot that people start worrying about cannibalizing their own business.

And they lose sight of the fact that something illicit is cannibalizing their business at an extraordinarily fast rate. And so if they don't do it themselves, they're doomed. I mean, we used to put slides in front of these folks, this is really funny, where we said, okay, let's assume Rhapsody is 9.99 a month, and multiply that by 12 months.

So it's $120 a year from the budget of a music lover. And then we were also able to get reasonably accurate statistics that showed how many CDs per year the average person who bothered to collect music, which was not all people, actually bought. And it was overwhelmingly clear that the average CD buyer spends a hell of a lot less than $120 a year on music.

This is a revenue expansion, blah, blah, blah, but all they could think of, and I'm not saying this in a pejorative or patronizing way, I don't blame them, they'd grown up in this environment for decades. All they could think of was the incredible margins that they had on a CD.

And they would say, well, this CD that I'm selling for $17.99, by the mechanism that you guys are proposing, somebody would need to stream those songs. We were talking about a penny a play that the record labels get paid back then; it's less than that now.

But somebody would have to stream songs from that CD 1,799 times, and that's never gonna happen. So they were just sort of stuck in that model, but it's like, no, dude, they're gonna spend money on all this other stuff too. So I think people get very hung up on that.
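To make that back-of-the-envelope comparison concrete, here is a minimal sketch of the arithmetic Reid describes, using only the figures mentioned in the conversation (a $17.99 CD, roughly a penny per stream paid to labels, a $9.99 monthly subscription); the average annual CD spend is a hypothetical placeholder, since the exact statistic isn't quoted here.

```python
# Illustrative sketch of the streaming-vs-CD arithmetic from the conversation.
# Figures from the transcript: a $17.99 CD, roughly a penny per stream paid to
# labels circa the early 2000s, and a $9.99/month subscription.
# "avg_cd_spend_per_year" is a hypothetical placeholder, not a quoted figure.

cd_price = 17.99                 # retail price of one CD
label_payout_per_stream = 0.01   # ~a penny a play paid to the labels back then
monthly_sub = 9.99               # proposed Rhapsody subscription price

# How many streams of that one CD's songs match its retail price?
breakeven_streams = cd_price / label_payout_per_stream
print(f"Streams to match one CD: {breakeven_streams:.0f}")    # ~1799

# What a subscriber spends per year versus a hypothetical average CD budget.
annual_subscription = monthly_sub * 12                         # $119.88
avg_cd_spend_per_year = 60.0     # hypothetical; the claim was it's well under $120
print(f"Subscription/year: ${annual_subscription:.2f} vs CDs/year: ${avg_cd_spend_per_year:.2f}")
```

The label executives' framing compared one CD's margin to 1,799 streams; Reid's counterpoint was that the relevant comparison is the listener's total annual spend, which the subscription model expands.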

I mean, another example is really, the taxi industry was not monolithic, like the music labels. There was a whole bunch of fleets and a whole bunch of cities, very, very fragmented. It's an imperfect analogy, but nonetheless, imagine if the taxi industry writ large, upon seeing Uber said, oh my God, people wanna be able to hail things easily, cheaply, they don't wanna mess with cash, they wanna know how many minutes it's gonna be, they wanna know the fare in advance, and they want a much bigger fleet than what we've got.

If the taxi industry had rolled out something like that, with the branding of yellow taxis, universally known and kind of loved by Americans, and expanded their fleet in the necessary manner, I don't think Uber or Lyft ever would have gotten a foothold. But the problem there was that the real economics in the taxi industry wasn't with fares, it was with the scarcity of medallions.

And so the taxi fleets, in many cases, owned gazillions of medallions whose value came from their very scarcity. So they simply couldn't pivot to that. So I think you end up having these vested interests with economics that aren't necessarily visible to outsiders, who get very, very reluctant to disrupt their own model, which is why it ends up coming from the outside so frequently.

- So you know what it takes to build a successful startup, but you're also an investor in a lot of successful startups. Let me ask for advice. What do you think it takes to build a successful startup by way of advice? - Well, I think it starts, I mean, everything starts and even ends with the founder.

And so I think it's really, really important to look at the founder's motivations and their sophistication about what they're doing. In almost all cases that I'm familiar with and have thought hard about, you've had a founder who was deeply, deeply inculcated in the domain of technology that they were taking on.

Now, what's interesting about that is you could say, no, wait, how is that possible, 'cause there's so many young founders? When you look at young founders, they're generally coming out of very nascent, emerging fields of technology, where simply being present and accounted for and engaged in the community for a period of even months is enough time to make them very, very deeply inculcated.

I mean, you look at Marc Andreessen and Netscape. Marc had been doing visual web browsers for, what, a year and a half when Netscape was founded? But he'd created the first one, Mosaic, when he was an undergrad, and the commercial internet was pre-nascent in 1994 when Netscape was founded.

So there's somebody who's very, very deep in their domain, Mark Zuckerberg also, social networking, very deep in his domain, even though it was nascent at the time, lots of people doing crypto stuff. I mean, 10 years ago, even seven or eight years ago, by being a really, really vehement and engaged participant in the crypto ecosystem, you could be an expert in that.

You look, however, at more established industries, take salesforce.com. Salesforce automation, pretty mature field when it got started, who's the executive and the founder? Marc Benioff, who spent 13 years at Oracle and was an investor in Siebel Systems, which ended up being Salesforce's main competition. So in more established fields, you need the entrepreneur to be very, very deep in the technology and the culture and the ins and outs of the space, because you need that entrepreneur, that founder, to have just an unbelievably accurate intuitive sense for where the puck is going, right?

And that only comes from being very deep. So that is sort of factor number one. And the next thing is that that founder needs to be charismatic and/or credible, or ideally both, in exactly the right ways, to be able to attract a team that is bought into that vision and is bought into that founder's intuitions being correct, and not just the team, obviously, but also the investors.

So it takes a certain personality type to pull that off. Then the next thing I'm still talking about, the founder, is a relentlessness and indeed a monomania to put this above things that might rationally, should perhaps rationally supersede it for a period of time, to just relentlessly pivot when pivoting is called for, and it's always called for.

I mean, think of even very successful companies. Like, how many times has Facebook pivoted? Newsfeed was something that was completely alien to the original version of Facebook and became foundationally important. How many times did Google pivot? How many times has Apple pivoted? You know, that founder energy and DNA, when the founder moves on, the DNA that's been inculcated in the company has to have that relentlessness and that ability to pivot and pivot and pivot without being worried about sacred cows.

And then the last thing I'll say about the founder before I get to the rest of the team, and that'll be mercifully brief, is the founder has to be obviously a really great hirer, but, just as important, a very good firer. And firing is a horrific experience for both people involved in it.

It is a wrenching emotional experience. And being good at realizing when this particular person is damaging the interests of the company and the team and the shareholders, and having the intestinal fortitude to have that conversation and make it happen is something that most people don't have in them. And it's something that needs to be developed in most people, or maybe some people have it naturally.

But without that ability, that will take an A-plus organization into B-minus range very, very quickly. And so that's all what needs to be present in the founder. - Can I just say? - Sure. - How damn good you are, Rob. That was brilliant. The one thing that was really kind of surprising to me is having a deep technical knowledge.

Because I think the way you expressed it, which is that allows you to be really honest with the capabilities of what's possible. Like, of course, you're often trying to do the quote-unquote impossible. But in order to do the impossible, you have to be honest with what is actually possible.

- And it doesn't necessarily have to be the technical competence. It's gotta be, in my view, just a complete immersion in that emerging market. And so I can imagine, there are a couple people out there who have started really good crypto projects who themselves aren't writing the code. But they're immersed in the culture and through the culture and a deep understanding of what's happening and what's not happening, they can get a good intuition of what's possible.

But the very first hire, I mean, a great way to solve that is to have a technical co-founder. And dual founder companies have become extremely common for that reason. And if you're not doing that and you're not the technical person, but you are the founder, you've gotta be really great at hiring a very damn good technical person very, very fast.

- Can I, on the founder, ask you, is it possible to do this alone? There's so many people giving advice and saying that it's impossible to do the first few steps. Not impossible, but much more difficult to do it alone. If we were to take the journey, say, especially in the software world, where there's not significant investment required for it to build something up, is it possible to go to a prototype, to something that essentially works and already has a huge number of customers alone?

- Sure. There are lots and lots of lone founder companies out there that have made an incredible difference. I mean, I'm certainly not putting Rhapsody in the league of Spotify. We were too early to be Spotify, but we did an awful lot of innovation. And then after the company sold and ended up in the hands of RealNetworks and MTV, it got to millions of subs, right?

I was a lone founder, and I studied Arabic and Middle Eastern history undergrad. So I wasn't very, very technical. But yeah, lone founders can absolutely work. And the advantage of a lone founder is you don't have the catastrophic potential of a falling out between founders. I mean, two founders who fall out with each other badly can rip a company to shreds because they both have an enormous amount of equity and an enormous amount of power in the capital structure as a result of that.

They both have an enormous amount of moral authority with the team as a result of each having that founder role. And I have witnessed over the years, many, many situations in which companies have been shredded or have suffered near fatal blows because of a falling out between founders. And the more founders you add, the more risky that becomes.

I almost don't think there should ever be, I mean, you never say never, but multiple founders beyond two is such an unstable and potentially treacherous situation that I would never, ever recommend going beyond two. But I do see value in the non-technical sort of business and market and outside-minded founder teaming up with the technical founder.

There is a lot of merit to that, but there's a lot of danger in that lest those two blow apart. - Was it lonely for you? - Unbelievably, and that's the drawback. I mean, if you're a lone founder, there is no other person that you can sit down with and tackle problems and talk them through who has precisely or nearly precisely your alignment of interests.

Your most trusted board member is likely an investor, and therefore at the end of the day has the interest of preferred stock in mind, not common stock. Your most trusted VP, who might own a very significant stake in the company, doesn't own anywhere near your stake in the company.

And so their long-term interests may well be in getting the right level of experience and credibility necessary to peel off and start their own company. Or their interests might be aligned with jumping ship and setting up with another, with a different company, whether it's a rival or one in a completely different space.

So yeah, being a lone founder is a spectacularly lonely thing, and that's a major downside to it. - What about mentorship? 'Cause you're a mentor to a lot of people. Can you find an alleviation to that loneliness in the space of ideas with a good mentor? - With a good mentor, like a mentor who's mentoring you?

- Yeah. - Yeah, you can. A great deal, particularly if it's somebody who's been through this very process and has navigated it successfully and cares enough about you and your well-being to give you beautifully unvarnished advice, that can be a huge, huge thing. That can just raise things a great deal.

And I had a board member who was not an investor, who basically played that role for me to a great degree. He came in maybe halfway through the company's history, though, I would've needed that the most in the very earliest days. (laughs) - Yeah, the loneliness, that's the whole journey of life.

We're always alone, alone together. It pays to embrace that. You were saying that there might be something outside of the founder that's also, that you were promising to be brief on. - Yeah, okay, so we talked about the founder. You were asking what makes a great startup. - Yes.

- And great founder is thing number one, but then thing number two, and it's ginormous, is a great team. And so I said so much about the founder because one hopes or one believes that a founder who is a great hirer is going to be hiring people in charge of critical functions like engineering and marketing and biz dev and sales and so forth, who themselves are great hirers.

But what needs to radiate from the founder into the team that might be a little bit different from what's in the gene code of the founder? The team needs to be fully bought in to the intuitions and the vision of the founder. Great, we've got that. But the team needs to have a slightly different thing, which is a 99% obsession with execution: relentlessly hit the milestones, hit the objectives, hit the quarterly goals.

That is 1% vision, you don't wanna lose that. But execution machines, people who have a demonstrated ability and a demonstrated focus on, yeah, I go from point to point to point, I try to beat and raise expectations relentlessly, never fall short, and both sort of blaze and follow the path.

Not just follow the path, but blaze the trail as well. A good founder is going to trust that VP of sales to have a better sense of what it takes to build out that organization, what the milestones should be. And it's gonna be kind of a dialogue amongst those at the top.

But execution obsession in the team is the next thing. - Yeah, there's some sense where the founder, you talk about sort of the space of ideas like first principles thinking, asking big difficult questions of future trajectories or having a big vision and big picture dreams. You can almost be a dreamer, it feels like, when you're like not the founder, but in the space of sort of leadership.

But when it gets to the ground floor, there has to be execution. There has to be hitting deadlines. And sometimes those are in tension. There's something about dreams that are in tension with the pragmatic nature of execution, not dreams, but sort of ambitious vision. And those have to be, I suppose, coupled.

The vision in the leader and the execution in the software world, that would be the programmer or the designer. - Absolutely. - Amongst many other things, you're an incredible conversationalist, a podcaster, you host a podcast called After On. I mean, there's a million questions I wanna ask you here, but one at the highest level, what do you think makes for a great conversation?

- I would say two things, one of two things, and ideally both of two things. One is if something is beautifully architected, whether it's done deliberately and methodically and willfully as when I do it, or whether that just emerges from the conversation, but something that's beautifully architected, that can create something that's incredibly powerful and memorable, or something where there's just extraordinary chemistry.

And so with All In, or I'll go way back, you might remember the NPR show Car Talk. - Oh yeah, yeah. - I couldn't care less about auto mechanics myself. - Yeah, that's right. - But I love that show because the banter between those two guys was just beyond, without any parallel, right?

You know, and some kind of edgy podcast, like Red Scare is just really entertaining to me because the banter between the women on that show is just so good. And All In and that kind of thing. So I think it's a combination of sort of the arc and the chemistry.

And I think because the arc can be so important, that's why very, very highly produced podcasts like This American Life, obviously a radio show, but I think of it as a podcast 'cause that's how I always consume it, or Criminal, or a lot of what Wondery does and so forth. That is real documentary making, and that requires a big team and a big budget relative to the kinds of things you and I do.

But nonetheless, then you got that arc, and that can be really, really compelling. But if we go back to conversation, I think it's a combination of structure and chemistry. - Yeah, and I've actually personally lost interest. I used to love This American Life, and for some reason, because it lacks the possibility of magic, it's engineered magic.

- I've fallen off of it myself as well. I mean, when I fell madly in love with it during the aughts, it was the only thing going. They were really smart to adopt podcasting as a distribution mechanism early. But yeah, I think that maybe there's a little bit less magic there now, 'cause I think they have agendas other than necessarily just delighting their listeners with quirky stories, which I think is what it was all about back in the day and some other things.

- Is there a memorable conversation that you've had on the podcast, whether it was because it was wild and fun, or one that was exceptionally challenging, maybe challenging to prepare for, that kind of thing? Is there something that stands out in your mind that you can draw an insight from?

- Yeah, I mean, this in no way diminishes the episodes that will not be the answer to these two questions. But an example of something that was really, really challenging to prepare for was George Church. So as I'm sure you know, and as I'm sure many of your listeners know, he is one of the absolute leading lights in the field of synthetic biology.

He's also unbelievably prolific. His lab is large and has all kinds of efforts have spun out of that. And what I wanted to make my George Church episode about was first of all, grounding people into what is this thing called syn-bio? And that required me to learn a hell of a lot more about syn-bio than I knew going into it.

So there was just this very broad, I mean, I knew much more than the average person going into that episode, but there was this incredible breadth of grounding that I needed to give myself in the domain. And then George does so many interesting things, there's so many interesting things emitting from his lab that, you know, and he and I had a really good dialogue.

He was a great guide going into it. Winnowing it down to the three to four that I really wanted us to focus on to create a sense of wonder and magic in the listener of what could be possible from this very broad spectrum domain, that was a doozy of a challenge.

That was a tough, tough, tough one to prepare for. Now in terms of something that was just wild and fun, unexpected, I mean, by the time we sat down to interview, I knew where we were gonna go, but just in terms of the idea space, Don Hoffman. - Oh, wow.

- Yeah. So Don Hoffman, as again, some listeners probably know, 'cause he's, I think I was the first podcaster to interview him. I'm sure some of your listeners are familiar with him, but he has this unbelievably contrarian take on the nature of reality, but it is contrarian in a way that all the ideas are highly internally consistent and snap together in a way that's just delightful.

And it seems as radically violating of our intuitions and as radically violating of the probable nature of reality as anything that one can encounter. But an analogy that he uses, which is very powerful, is: what intuition could possibly be more powerful than the notion that there is a single unitary direction called down, and we're on this big flat thing for which there is a thing called down?

And we all know, I mean, that's the most intuitive thing that one could probably think of. And we all know that that ain't true. So my conversation with Don Hoffman was just wild and full of plot twists and interesting stuff. - And the interesting thing about the wildness of his ideas, it's to me at least as a listener coupled with, he's a good listener and he empathizes with the people who challenge his ideas.

Like what's a better way to phrase that? He is welcoming of challenge in a way that creates a really fun conversation. - Oh, totally, yeah. He loves a parry or a jab, whatever the word is, at his argument, he honors it. He's a very, very gentle and non-combative soul, but then he is very good and takes great evident joy in responding to that in a way that expands your understanding of his thinking.

- Let me, as a small tangent of tying up together our previous conversation about listen.com and streaming and Spotify and the world of podcasting. So we've been talking about this magical medium of podcasting, I have a lot of friends at Spotify in the high positions of Spotify as well.

I worry about Spotify and podcasting and the future of podcasting in general that moves podcasting in the place of maybe walled gardens of sorts. Since you've had a foot in both worlds, have a foot in both worlds, do you worry as well about the future of podcasting? - Yeah, I think walled gardens are really toxic to the medium that they start balkanizing.

So to take an example, I'll take two examples. With music, it was a very, very big deal that at Rhapsody, we were the first company to get full catalog licenses from all, back then there were five major music labels and also hundreds and hundreds of indies because you needed to present the listener with a sense that basically everything is there and there is essentially no friction to discovering that which is new and you can wander this realm and all you really need is a good map, whether it is something that somebody, the editorial team assembled or a good algorithm or whatever it is, but a good map to wander this domain.

When you start walling things off, A, you undermine the joy of friction-free discovery, which is an incredibly valuable thing to deliver to your customer, both from a business standpoint and simply from a humanistic standpoint of you wanna bring delight to people, but it also creates an incredible opening vector for piracy.

And so something that's very different from the Rhapsody/Spotify/et cetera like experience is what we have now in video. Like wow, is that show on Hulu? Is it on Netflix? Is it on something like IFC channel? Is it on Discovery+, is it here, is it there? And the more frustration and toe-stubbing that people encounter when they are seeking something and they're already paying a very respectable amount of money per month to have access to content and they can't find it, the more that happens, the more people are gonna be driven to piracy solutions like to hell with it.

Never know where I'm gonna find something, I never know what it's gonna cost. Oftentimes, really interesting things are simply unavailable. That surprises me, the number of times that I've been looking for things I don't even think are that obscure that are just, it says not available in your geography period, mister, right?

So I think that that's a mistake. And then the other thing is for podcasters and lovers of podcasting, we should wanna resist this Waldegarden thing because A, it does smother this friction-free or eradicate this friction-free discovery unless you wanna sign up for lots of different services. And also dims the voice of somebody who might be able to have a far, far, far bigger impact by reaching far more neurons with their ideas.

I'm gonna use an example from, I guess it was probably the '90s or maybe it was the aughts, of Howard Stern, who had the biggest megaphone, or maybe the second biggest megaphone after Oprah, in popular culture. 'Cause he was syndicated on hundreds and hundreds and hundreds of radio stations at a time when terrestrial broadcast was the main thing people listened to in their car, no more obviously.

But when he decided to go over to satellite radio, I can't remember if it was XM or Sirius, maybe they'd already merged at that point. But when he did that, he made, totally his right to do it, a financial calculation; they were offering him a nine-figure sum to do that.

But his audience, because not a lot of people were subscribing to satellite radio at that point, his audience probably collapsed by, I wouldn't be surprised if it was as much as 95%. And so the influence that he had on the culture and his ability to sort of shape conversation and so forth just got muted.

- Yeah, and also there's a certain sense, especially in modern times where the walled gardens naturally lead to, I don't know if there's a term for it, but people who are not creatives starting to have power over the creatives. - Right, and even if they don't stifle it, if they're providing incentives within the platform to shape, shift, or even completely mutate or distort the show, I mean, imagine somebody has got a reasonably interesting idea for a podcast and they get signed up with, let's say Spotify, and then Spotify is gonna give them financing to get the thing spun up.

And that's great, and Spotify is gonna give them a certain amount of really powerful placement within the visual field of listeners. But Spotify has conditions for that. They say, look, we think that your podcast will be much more successful if you dumb it down about 60%, if you add some silly, dirty jokes, if you do this, you do that, and suddenly the person who is dependent upon Spotify for permission to come into existence and is really dependent, really wants to please them to get that money in, to get that placement, really wants to be successful, now all of a sudden you're having a dialogue between a complete non-creative, some marketing sort of data analytic person at Spotify and a creative that's going to shape what that show is.

So that could be much more common and ultimately have, in the aggregate, an even bigger impact than the cancellation, let's say, of somebody who says the wrong word or voices the wrong idea. I mean, that's kind of what you have, not kind of, it's what you have with film and TV, is that so much influence is exerted over the storylines and the plots and the character arcs and all kinds of things by executives who are completely alien to the experience and the skill set of being a showrunner in television or a director in film. It's meant to be like, we can't piss off the Chinese market here, we can't say that, we need to have cast members that reflect precisely these demographics, or whatever it is. And obviously, despite that, extraordinary TV shows are still being made. In terms of film, I think the average quality of, let's say, the American film coming out of a major studio has, in my view, nosedived over the past decade; it's kind of, everything's gotta be a superhero franchise. But great stuff gets made despite that, and I have to assume that in some cases, perhaps in many cases, greater stuff would be made if there was less interference from non-creative executives.

- It's like the flip side of that, though, and this was the pitch of Spotify, because I've heard their pitch, is Netflix. From everybody I've spoken with about Netflix, they actually empower the creator. - They do. - I don't know what the heck they do, but they do a good job of giving creators, even the crazy ones, like Tim Dillon, like Joe Rogan, like comedians, freedom to be their crazy selves, and the result is some of the greatest television, some of the greatest cinema, whatever you call it, ever made.

- True. - Right? And I don't know what the heck they're doing. - It's a relative thing. From what I understand, it's a relative thing. They're interfering far, far, far less than NBC or AMC would have interfered, so it's a relative thing, and obviously, they're the ones writing the checks, and they're the ones giving the platforms, so they have every right to their own influence, obviously, but my understanding is that they're relatively way more hands-off, and that has had a demonstrable effect, 'cause I agree, some of the greatest produced video content of all time, an incredibly inordinate percentage of that is coming out from Netflix in just a few years when the history of cinema goes back many, many decades.

- And Spotify wants to be that for podcasting, and I hope they do become that for podcasting, but I'm wearing my skeptical goggles or skeptical hat, whatever the heck it is, 'cause it's not easy to do, and it requires letting go of power, giving power to the creatives. It requires pivoting, which large companies, even as innovative as Spotify is, still now a large company, pivoting into a whole new space is very tricky, and difficult, so I'm skeptical, but hopeful.

What advice would you give to a young person today about life, about career? We talked about startups, we talked about music, we talked about the end of human civilization. Is there advice you would give to a young person today, maybe in college, maybe in high school, about their life?

- Well, let's see, I mean, there's so many domains you can advise on, and I'm not gonna give advice on life, because I fear that I would drift into sort of Hallmark bromides that really wouldn't be all that distinctive, and they might be entirely true. Sometimes the greatest insights about life turn out to be like the kinds of things you'd see on a Hallmark card, so I'm gonna steer clear of that.

On a career level, one thing that I think is unintuitive, but unbelievably powerful, is to focus not necessarily on being in the top sliver of 1% in excelling at one domain that's important and valuable, but to think in terms of intersections of two domains, which are rare, but valuable.

And there's a couple reasons for this. The first is, in an incredibly competitive world that is so much more competitive than it was when I was coming out of school, radically more competitive than when I was coming out of school, it is very hard to navigate your way to the absolute pinnacle of any one domain.

Let's say you wanna be really, really great at Python, pick a language, whatever it is. You wanna be one of the world's greatest Python developers, JavaScript, whatever your language is. Hopefully it's not COBOL. - By the way, if you're listening to this, I am actually looking for a COBOL expert to interview, 'cause I find the language fascinating, and there's not many of them, so please, if you know a world expert in COBOL or Fortran, or both, actually.

- Or if you are one. - Or if you are one, please email me. - Yeah, so I mean, if you're going out there and you wanna be in the top sliver of 1% of Python developers, that's a very, very difficult thing to do, particularly if you wanna be number one in the world, something like that.

And I'll use an analogy, is I had a friend in college who was on a track, and indeed succeeded at that, to become an Olympic medalist, and I think it was 100 meter breaststroke. And he mortgaged a significant percentage of his sort of college life to that goal, or I should say dedicated, or invested, or whatever you wanted to say, but he didn't participate in a lot of the social, a lot of the late night, a lot of the this, a lot of the that, because he was training so much.

And obviously he also wanted to keep up with his academics, and at the end of the day, story has a happy ending, in that he did medal in that. Bronze, not gold, but holy cow, anybody who gets an Olympic medal, that's an extraordinary thing, and at that moment, he was one of the top three people on Earth at that thing.

But wow, how hard to do that, how many thousands of other people went down that path and made similar sacrifices and didn't get there. It's very, very hard to do that. Whereas, and I'll use a personal example, when I came out of business school, I went to a good business school, and learned the things that were there to be learned, and I came out and I entered a world with lots of-- - Harvard Business School, by the way.

- Okay, yes, it was Harvard, it's true. - You're the first person who went there who didn't say where you went, which is beautiful, I appreciate that. It's one of the greatest business schools in the world. It's a whole 'nother fascinating conversation about that world, but anyway, yes. - But anyway, so I learned the things, you learn getting an MBA from a top program, and I entered a world that had hundreds of thousands of people who had MBAs, probably hundreds of thousands who had them from top 10 programs.

So I was not particularly great at being an MBA person. I was inexperienced relative to most of them, and there were a lot of them, but it was an okay MBA person, newly minted. But then as it happened, I found my way into working on the commercial internet in 1994.

So I went to a, at the time, giant and hot computing company called Silicon Graphics, which had enough heft and enough head count that they could take on inexperienced MBAs and try to train them in the world of Silicon Valley. But within that company, which had an enormous amount of surface area and was touching a lot of areas and had unbelievably smart people at the time, it was not surprising that SGI started doing really interesting and innovative and trailblazing stuff on the internet before almost anybody else.

And part of the reason was that our founder, Jim Clark, went off to co-found Netscape with Marc Andreessen, so the whole company was like, "Wait, what was that? What's this commercial internet thing?" So I end up in that group. Now, in terms of being a commercial internet person or a worldwide web person, again, I was, in that case, barely credentialed.

I couldn't write a stitch of code, but I had a pretty good mind for grasping the business and cultural significance of this transition. And this was, again, we were talking earlier about emerging areas. Within a few months, I was in the relatively top echelon of people in terms of just sheer experience.

'Cause let's say it was five months into the program, there were only so many people who had been doing worldwide web stuff commercially for five months. And then what was interesting, though, was the intersection of those two things. The commercial web, as it turned out, grew into an unbelievable vastness.

And so by being a pretty good, okay web person and a pretty good, okay MBA person, that intersection put me in a very rare group, which was web-oriented MBAs. And in those early days, you could probably count on your fingers the number of people who came out of really competitive programs who were doing stuff full-time on the internet.

And there was a greater appetite for great software developers in the internet domain, but there was an appetite and a real one and a rapidly growing one for MBA thinkers who were also seasoned and networked in the emerging world of the commercial worldwide web. And so finding an intersection of two things you can be pretty good at, but is a rare intersection and a special intersection, is probably a much easier way to make yourself distinguishable and in demand from the world than trying to be world-class at this one thing.

- So the intersection is where opportunity and success are to be discovered. That's really interesting. There are actually more intersections of fields than fields themselves, right? So-- - Yeah, I mean, I'll give you kind of a funny hypothetical here, but it's one I've been thinking about a little bit.

There's a lot of people in crypto right now. It'd be hard to be in the top percentile of crypto people, whether it comes from just having a sheer grasp of the industry, a great network within the industry, technological skills, whatever you wanna call it. And then there's this parallel world, an orthogonal world called crop insurance.

And there's, I'm sure that's a big world. Crop insurance is a very, very big deal, particularly in the wealthy and industrialized world, where there are sophisticated financial markets, rule of law, and large agricultural concerns that are worried about it. Somewhere out there is somebody who is pretty crypto savvy, but probably not top 1%.

But also has kind of been in the crop insurance world and understands it a hell of a lot better than almost anybody who's ever had anything to do with cryptocurrency. And so I think that decentralized finance, DeFi, one of the interesting and I think very world-positive things that it's almost inevitably going to bring to the world is crop insurance for smallholding farmers.

I mean, people who have tiny, tiny plots of land in places like India, et cetera, where there is no crop insurance available to them because the financial infrastructure just doesn't exist. But it's highly imaginable that using oracle networks, which are trusted outside deliverers of factual information about rainfall in a particular area, you can start giving drought insurance to folks like this.

The right person to come up with that idea is not a crypto whiz who doesn't know a blasted thing about smallholding farmers. The right person to come up with that is not a crop insurance whiz who isn't quite sure what Bitcoin is. But somebody occupies that intersection. That's just one of gazillion examples of things that are gonna come along for somebody who occupies the right intersection of skills but isn't necessarily the number one person at either one of those expertises.
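The mechanism Reid sketches here, a payout triggered automatically when an oracle-reported rainfall index falls below a threshold, can be illustrated in a few lines. This is a hypothetical sketch only: the policy fields, the region name, the threshold, and the dict standing in for an oracle network are illustrative assumptions, not a description of any real DeFi product or smart-contract platform.

```python
# A minimal, hypothetical sketch of parametric (index-based) drought insurance:
# a trusted oracle reports seasonal rainfall for a region, and a policy pays
# out automatically when rainfall falls below an agreed threshold. Names and
# numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class DroughtPolicy:
    region: str
    threshold_mm: float   # seasonal rainfall below this triggers a payout
    premium: float        # what the farmer pays up front
    payout: float         # what the farmer receives if the trigger fires

def settle(policy: DroughtPolicy, oracle_rainfall_mm: dict) -> float:
    """Return the payout owed, based on rainfall reported by the oracle."""
    observed = oracle_rainfall_mm[policy.region]
    return policy.payout if observed < policy.threshold_mm else 0.0

# Example: one smallholder's policy for a single growing season.
policy = DroughtPolicy(region="district-17", threshold_mm=300.0, premium=5.0, payout=100.0)
oracle_report = {"district-17": 212.5}   # hypothetical oracle reading, in mm
print(settle(policy, oracle_report))     # 100.0 -> the drought trigger fired
```

The appeal of the parametric design is exactly what Reid points at: no claims adjusters or local financial infrastructure are needed, only a trustworthy source of the rainfall index.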

- That's making me kind of wonder about my own little things that I'm average at and seeing where the intersections are that could be exploited. That's pretty profound. So we talked quite a bit about the end of the world and how we're both optimistic about us figuring our way out.

Unfortunately, for now at least, both you and I are going to die one day way too soon. First of all, that sucks. - It does. (laughing) - I mean, one thing I'd like to ask: if you ponder your own mortality, what kind of wisdom or insight does it give you about your own life?

And broadly, do you think about your life and what the heck it's all about? - Yeah, with respect to pondering mortality, I do try to do that as little as possible 'cause there's not a lot I can do about it. But it's inevitably there. And I think that what it does, when you think about it in the right way, is it makes you realize how unbelievably rare and precious the moments that we have here are, and therefore how consequential the decisions that we make about how to spend our time are.

Do you do those 17 nagging emails or do you have dinner with somebody who's really important to you who you haven't seen in three and a half years? If you had an infinite expanse of time in front of you, you might well rationally conclude I'm gonna do those emails because collectively, they're rather important and I have tens of thousands of years to catch up with my buddy, Tim.

But I think the scarcity of the time that we have helps us choose the right things if we're attuned to that. And we're attuned to the context that mortality puts over the consequence of every decision we make of how to spend our time. That doesn't mean that we're all very good at it.

Doesn't mean I'm very good at it. But it does add a dimension of choice and significance to everything that we elect to do. - It's kind of funny that you say you try to think about it as little as possible. I would venture to say you probably think about the end of human civilization more than you do about your own life.

- You're probably right. - Because that feels like a problem that could be solved. - Right. - And-- - Whereas the end of my own life can't be solved. Well, I don't know. I mean, there's transhumanists who have incredible optimism about near or intermediate future therapies that could really, really change human lifespan.

I really hope that they're right, but I don't have a whole lot to add to that project because I'm not a life scientist myself, so. - I'm in part also afraid of immortality. Not as much as I'm afraid of death itself, but close. So it feels like the things that give us meaning do so because of the scarcity that surrounds them.

- Agreed. - I'm almost afraid of having too much of stuff. - Yeah. Although if there was something that said, "This can expand your enjoyable well-span "or lifespan by 75 years," I'm all in. - Well, part of the reason I wanted to not do a startup, really the only thing that worries me about doing a startup is if it becomes successful.

Because of how much I dream, how much I'm driven to be successful, that there will not be enough silence in my life, enough scarcity to appreciate the moments I appreciate now as deeply as I appreciate them now. Like, there's a simplicity to my life now that it feels like it might disappear with success.

- I wouldn't say might. (Lex laughs) I think if you start a company that has ambitious investors, ambitious for the returns that they'd like to see, that has ambitious employees, ambitious for the career trajectories they wanna be on and so forth, and is driven by your own ambition, there is a profound monogamy to that.

And it is very, very hard to carve out time to be creative, to be peaceful, to be so forth because with every new employee that you hire, that's one more mouth to feed. With every new investor that you take on, that's one more person to whom you really do wanna deliver great returns.

And as the valuation ticks up, the threshold to delivering great returns for your investors always rises. And so there is an extraordinary monogamy to being a founder CEO, above all for the first few years, and the first few years, in people's minds, could be as many as 10 or 15. - But I guess the fundamental calculation is whether the passion for the vision is greater than the cost you'll pay.

- Right, it's all opportunity cost. It's all opportunity cost. In terms of time and attention and experience. - And some things, everyone's different, but I'm less calculating. Some things you just can't help. Sometimes you just dive in. - Oh yeah, I mean you can do balance sheets all you want on this versus that and what's the right, I mean I've done it in the past and it's never worked.

It's always been like, okay, what's my gut screaming at me to do? - But about the meaning of life, you ever think about that? - Yeah, I mean, this is where I'm gonna go all Hallmark on you, but I think that there's a few things, and one of them is certainly love.

And the love that we experience and feel and cause to well up in others is something that's just so profound and goes beyond almost anything else that we can do. And whether that is something that lies in the past, like maybe there was somebody that you were dating and loved very profoundly in college and haven't seen in years, I don't think the significance of that love is in any way diminished by the fact that it had a notional beginning and end.

The fact is that you experience that and you triggered that in somebody else and that happened. And it doesn't have to be, certainly it doesn't have to be love of romantic partners alone, it's family members, it's love between friends, it's love between creatures. I had a dog for 10 years who passed away a while ago and experienced unbelievable love with her.

It can be love of that which you create. And we were talking about the flow states that we enter and the pride or lack of pride, or in the Minsky case, your hatred of that which you've done, but nonetheless, the creations that we make, and whether it's the love or the joy or the engagement or the perspective shift, that that cascades into other minds.

I think that's a big, big, big part of the meaning of life. It's not something that everybody participates in necessarily, although I think we all do, at least in a very local level by the example that we set, by the interactions that we have, but for people who create works that travel far and reach people they'll never meet, that reach countries they'll never visit, that reach people perhaps that come along and come across their ideas or their works or their stories or their aesthetic creations of other sorts long after they're dead.

I think that's a really, really big part of the fabric of the meaning of life. And so all these things, like love and creation, I think really is what it's all about. - And part of love is also the loss of it. There's a Louis episode with Louis C.K.

where an old gentleman is giving him advice that sometimes the sweetest parts of love is when you lose it and you remember it, sort of you reminisce on the loss of it. And there's some aspect in which, and I have many of those in my own life, that almost like the memories of it and the intensity of emotion you still feel about it is like the sweetest part.

You're like, after saying goodbye, you relive it. So that goodbye is also a part of love. The loss of it is also a part of love. I don't know, it's back to that scarcity. - I won't say the loss is the best part personally, but it definitely is an aspect of it.

And the grief you might feel about something that's gone makes you realize what a big deal it was. - Yeah. - Yeah. - Speaking of which, this particular journey we went on together has come to an end. So I have to say goodbye, and I hate saying goodbye. Rob, this is truly an honor.

I've really been a big fan. People should definitely check out your podcast. You're a master at what you do in the conversation space, in the writing space. It's been an incredible honor that you would show up here and spend this time with me. I really, really appreciate it. - Well, it's been a huge honor to be here as well, and I'm also a fan and have been for a long time.

- Thanks, Rob. Thanks for listening to this conversation with Rob Reid, and thank you to Athletic Greens, Belcampo, Fundrise, and NetSuite. Check them out in the description to support this podcast. And now, let me leave you with some words from Plato. We can easily forgive a child who's afraid of the dark.

The real tragedy of life is when men are afraid of the light. Thank you for listening, and hope to see you next time. (upbeat music) (upbeat music)