
Daniel Schmachtenberger: Steering Civilization Away from Self-Destruction | Lex Fridman Podcast #191


Chapters

0:00 Introduction
1:31 Aliens and UFOs
20:15 Collective intelligence of human civilization
28:12 Consciousness
39:33 How much computation does the human brain perform?
43:12 Humans vs ants
50:30 Humans are apex predators
57:34 Girard's Mimetic Theory of Desire
77:31 We can never completely understand reality
80:54 Self-terminating systems
91:18 Catastrophic risk
121:30 Adding more love to the world
148:55 How to build a better world
166:07 Meaning of life
173:49 Death
179:29 The role of government in society
196:54 Exponential growth of technology
242:35 Lessons from my father
248:11 Even suffering is filled with beauty

Transcript

The following is a conversation with Daniel Schmachtenberger, a founding member of the Consilience Project that is aimed at improving public sensemaking and dialogue. He's interested in understanding how we humans can be the best version of ourselves as individuals and as collectives at all scales. Quick mention of our sponsors, Ground News, NetSuite, Four Sigmatic, Magic Spoon, and BetterHelp.

Check them out in the description to support this podcast. As a side note, let me say that I got a chance to talk to Daniel on and off the mic for a couple of days. We took a long walk the day before our conversation. I really enjoyed meeting him, just on a basic human level.

We talked about the world around us with words that carried hope for us individual ants actually contributing something of value to the colony. These conversations are the reasons I love human beings, our insatiable striving to lessen the suffering in the world. But more than that, there's a simple magic to two strangers meeting for the first time and sharing ideas, becoming fast friends, and creating something that is far greater than the sum of our parts.

I've gotten to experience some of that same magic here in Austin with a few new friends and in random bars in my travels across this country, where a conversation leaves me with a big stupid smile on my face and a new appreciation of this too short, too beautiful life.

This is the Lex Fridman Podcast, and here is my conversation with Daniel Schmachtenberger. If aliens were observing Earth through its entire history, just watching us, and were tasked with summarizing what happened until now, what do you think they would say? What do you think they would write up in that summary?

Like it has to be pretty short, less than a page. Like in "Hitchhiker's Guide," (Daniel laughing) there's I think like a paragraph or a couple sentences. How would you summarize, sorry, how would the alien summarize, do you think, all of human civilization? - My first thoughts take more than a page.

They'd probably distill it. 'Cause if they watched, well, I mean, first, I have no idea if their senses are even attuned to similar stuff to what our senses are attuned to, or what the nature of their consciousness is like relative to ours. And so let's assume that they're kind of like us, just technologically more advanced to get here from wherever they are.

That's the first kind of constraint on the thought experiment. And then if they've watched throughout all of history, they saw the burning of Alexandria, they saw that 2000 years ago in Greece, we were producing things like clocks, the Antikythera mechanism, and then that technology got lost. They saw that there wasn't just a steady dialectic of progress.

- So every once in a while, there's a giant fire that destroys a lot of things. There's a giant like commotion that destroys a lot of things. - Yeah, and it's usually self-induced. They would have seen that. And so as they're looking at us now, as we move past the nuclear weapons age into the full globalization, Anthropocene exponential tech age, still making our decisions relatively similarly to how we did in the stone age as far as rivalry game theory type stuff.

I think they would think that this is probably most likely one of the planets that is not gonna make it to being intergalactic 'cause we blow ourselves up in the technological adolescence. And if we are going to, we're gonna need some major progress rapidly in the social technologies that can guide and bind and direct the physical technologies so that we are safe vessels for the amount of power we're getting.

- Actually, "Hitchhiker's Guide" has a estimation about how much of a risk this particular thing poses to the rest of the galaxy. And I think, I forget what it was, I think it was medium or low. So their estimation would be that this species of ant-like creatures is not gonna survive long.

There's ups and downs in terms of technological innovation. The fundamental nature of their behavior from a game theory perspective hasn't really changed. They have not learned in any fundamental way how to control and properly incentivize or properly do the mechanism design of games to ensure long-term survival. And then they move on to another planet.

In a slightly more serious question, do you think there's some number, perhaps a very, very large number, of intelligent alien civilizations out there? - Yes, it would be hard to think otherwise. I know, I think Bostrom had a new article not that long ago on why that might not be the case, that the Drake equation might not be the kind of end story on it.
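
[Editor's aside: the Drake equation referenced here is just a product of seven factors estimating the number of detectable civilizations in the galaxy, N = R* x fp x ne x fl x fi x fc x L. A minimal Python sketch follows; every parameter value is an illustrative guess, not a figure from the conversation.]

    # Drake equation: N = R* * fp * ne * fl * fi * fc * L
    # Every value below is an illustrative guess, not a claim.
    R_star = 1.5     # average rate of star formation in our galaxy (stars/year)
    f_p    = 1.0     # fraction of stars that host planets
    f_l    = 0.1     # fraction of habitable planets where life appears
    n_e    = 0.2     # potentially habitable planets per star with planets
    f_i    = 0.01    # fraction of life-bearing planets that evolve intelligence
    f_c    = 0.1     # fraction of intelligent species emitting detectable signals
    L      = 10_000  # years such a civilization remains detectable

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"N = {N:.2f} detectable civilizations")  # 0.30 with these guesses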

But when I look at the total number of Kepler planets that we're aware of just galactically, and also when those life forms were discovered in Mono Lake that didn't have the same six primary atoms, I think it had arsenic replacing phosphorus as one of the primary aspects of its energy metabolism, we get to think that the building blocks might be more different.

So the physical constraints even that the planets have to have might be more different. It seems really unlikely, not to mention interesting things that we've observed that are still unexplained. As you've had guests on your show discussing Tic Tac and-- - Oh, the ones that have visited. - Yeah.

- Well, let's dive right into that. What do you make of the rich human psychology of there being hundreds of thousands, probably millions of witnesses of UFOs of different kinds on Earth, most of which, I presume, are conjured up by the human mind through the perceptual system. Some number might be real, some number might be reflective of actual physical objects, whether it's drones, or secret military technology being tested, or otherworldly technology.

What do you make of all of that? Because it's gained quite a bit of popularity recently. There's some sense in which that's us humans being hopeful and dreaming of otherworldly creatures as a way to escape the dreariness of the human condition. But in another sense, it really could be something truly exciting that science should turn its eye towards.

So where do you place it? - Speaking of turning an eye towards things, this is one of those super fascinating, actually possibly super consequential topics that I wish I had more time to study and just haven't allocated. So I don't have firm beliefs on this, 'cause I haven't got to study it as much as I want.

So what I'm gonna say comes from a superficial assessment. While we know there are plenty of things that people thought of as UFO sightings that we can fully write off, we have other better explanations for them. What we're interested in is the ones that we don't have better explanations for and then not just immediately jumping to a theory of what it is, but holding it as unidentified and being curious and earnest.

I think the Tic Tac one is quite interesting and made it into major media recently, but I don't know if you ever saw the Disclosure Project. A guy named Steven Greer organized a bunch of mostly US military and some commercial flight people who had direct observation and classified information, disclosing it at a CNN briefing.

And so you saw high-ranking generals, admirals, fighter pilots all describing things that they saw on radar with visual, with their own eyes or cameras, and also describing some phenomena that had some consistency across different people. And I find this interesting enough that I think it would be silly to just dismiss it.

And specifically, we can ask the question, how much of it is natural phenomena, ball lightning or something like that? And this is why I'm more interested in what fighter pilots and astronauts and people who are trained in being able to identify flying objects and atmospheric phenomena have to say about it.

I think the thing, then you could say, well, are they more advanced military craft? Is it some kind of human craft? The interesting thing that a number of them describe is something that's kind of like right angles at speed, or if not right angles, acute angles at speed, but something that looks like a different relationship to inertia than physics makes sense for us.

I don't think that there are any human technologies that are doing that even in really deep underground black projects. Now, one could say, okay, well, could it be a hologram? Well, would it show up on radar if radar is also seeing it? And so I don't know. I think there's enough.

I mean, and for that to be a massive coordinated PSYOP is as interesting and ridiculous in a way as the idea that it's UFOs from some extra planetary source. So it's up there on the interesting topics. - To me, if it is at all alien technology, it is the dumbest version of alien technologies.

It's so far away. It's like the old, old crappy VHS tapes of alien technology. These are like crappy drones that just floated, or even, like, at the level of space junk, because it is so close to our human technology. We talk about how it moves in ways that are unlike what we understand about physics, but it still has very similar kinds of geometric notions and is something that we humans can perceive with our eyes, all those kinds of things.

I feel like alien technology most likely would be something that we would not be able to perceive, not because they're hiding, but because it's so far advanced that it would be beyond the cognitive capabilities of us humans. Just as you were saying, as per your answer for aliens summarizing Earth, the starting assumption is they have similar perception systems.

They have similar cognitive capabilities and that very well may not be the case. Let me ask you about staying in aliens for just a little longer because I think it's a good transition in talking about governments and human societies. Do you think if a US government or any government was in possession of an alien spacecraft or of information related to alien spacecraft, they would have the capacity structurally?

Would they have the processes? Would they be able to communicate that to the public effectively? Or would they keep it secret in a room and do nothing with it, both to try to preserve military secrets, but also because of the incompetence that's inherent to bureaucracies, or either? - Well, we can certainly see, when certain things become declassified 25 or 50 years later, that there were things that the public might've wanted to know that were kept secret for a very long time for reasons of, at least supposedly, national security, which is also a nice source of plausible deniability for people covering their ass for doing things that would be problematic, and for other purposes.

There's a scientist at Stanford who supposedly got some material that was recovered from an Area 51 type area, did analysis on it using, I believe, electron microscopy and a couple other methods, and came to the idea that it was a nanotech alloy that was something we didn't currently have the ability to make and was not naturally occurring.

So I've heard some things. And again, like I said, I'm not gonna stand behind any of these 'cause I haven't done the level of study to have high confidence. I think what you said also about would it be super low-tech alien craft, like would they necessarily move their atoms around in space or might they do something more interesting than that?

Might they be able to have a different relationship to the concept of space or information or consciousness? One of the things that the craft supposedly do is not only accelerate and turn in a way that looks non-inertial, but also disappear. So there's a question there; the two are not necessarily mutually exclusive, and some people run a hypothesis that they create intentional amounts of exposure as an invitation of a particular kind.

Who knows? Interesting field. - We tend to assume, like SETI, that's listening out for aliens out there. I've just been recently reading more and more about gravitational waves, and you have orbiting black holes that orbit each other. They generate ripples in space-time. For fun at night when I lay in bed, I think about what it would be like to ride those waves, not at the low magnitude they have when they reach Earth, but closer to the black holes, because it would basically be shrinking and expanding us in all dimensions, including time.

So it's actually ripples through space-time that they generate. Why is it that you couldn't use that as it travels at the speed of light? Travels at a speed, which is a very weird thing to say when you're morphing space-time. You could argue it's faster than the speed of light.

So if you're able to summon enough energy to generate black holes and to force them to orbit each other, why not travel as the ripples in space-time, whatever the hell that means, somehow combined with wormholes. We don't think of gravitational waves as something you can communicate with, because the radio would have to be very large and very dense, but perhaps that's it.

Perhaps that's one way to communicate. It's a very effective way. And that would explain, we wouldn't even be able to make sense of that, of the physics that results in an alien species that's able to control gravity at that scale. - I think you just jumped up the Kardashev scale so far.

You're not just harnessing the power of a star, but harnessing the power of mutually rotating black holes. That's way above my physics pay grade to think about, including even non-rotating black hole versions of transwarp travel. I think, you can talk with Eric more about that. I think he has better ideas on it than I do.

My hope for the future of humanity mostly does not rest in the near term on our ability to get to other habitable planets in time. - And even more than that, in the list of possible solutions of how to improve human civilization, orbiting black holes is not on the first page for you.

- Not on the first page. - Okay, I bet you did not expect us to start this conversation here. But I'm glad the places it went. I am excited, on a much smaller scale, about Mars, Europa, Titan, or Venus potentially having very bacteria-like life forms. Just on a small human level, it's a little bit scary, but mostly really exciting that there might be life elsewhere, in the volcanoes, in the oceans, all around us, teeming, having little societies.

And whether there's properties about that kind of life that are somehow different than ours. I don't know what would be more exciting about those colonies of single-cell type organisms: if they're different or if they're the same? If they're the same, that means through the rest of the universe there's life forms like us, something like us, everywhere.

If they're different, that's also really exciting 'cause there's life forms everywhere that are not like us. That's a little bit scary. - I don't know what's scarier, actually. (both laughing) I think-- - It's both scary and exciting no matter what, right? - The idea that they could be very different is philosophically very interesting for us to open our aperture on what life and consciousness and self-replicating possibilities could look like.

The question of are they different or the same, obviously there's lots of life here that is the same in some ways and different in other ways. When you take the thing that we call an invasive species, it's something that's still pretty much the same hydrocarbon-based thing, but it co-evolved with co-selective pressures in a certain environment; we move it to another environment, and it might be devastating to that whole ecosystem, 'cause it's just different enough that it messes up the self-stabilizing dynamics of that ecosystem.

So the question of would they be different in ways where we could still figure out a way to inhabit a biosphere together or fundamentally not, fundamentally the nature of how they operate and the nature of how we operate would be incommensurable is a deep question. - Well, we offline talked about mimetic theory, right?

It seems like if they were sufficiently different, where we could coexist on different planes, it seems like a good thing. If we're close enough together to where we'd be competing, then you're getting into the world of viruses and pathogens and all those kinds of things, to where one of us would die off quickly through basically mass murder without-- - Even accidentally.

- Even accidentally. - If we just had a self-replicating, single-celled kind of creature that happened to not work well for the hydrocarbon life that was here, that got introduced because it either output something that was toxic or used up the same resource too quickly, and it just replicated faster and mutated faster.

It wouldn't be a mimetic theory, conflict theory kind of harm, it would just be a von Neumann machine, a self-replicating machine that was fundamentally incompatible with these kinds of self-replicating systems with faster OODA loops. - For one final time, putting your alien/god hat on and you look at human civilization, do you think about the 7.8 billion people on Earth as individual little creatures, individual little organisms, or do you think of us as one organism with a collective intelligence?

What's the proper framework through which to analyze it, again, as an alien? - So that I know where you're coming from: would you have asked the question the same way before the Industrial Revolution, before the Agricultural Revolution, when there were half a billion people and no telecommunications connecting them?

- I would indeed ask the question the same way, but I would be less confident about your conclusions. It would be an actually more interesting way to ask the question at that time, but I would nevertheless ask it the same way, yes. - Well, let's go back further and smaller then.

Rather than just a single human or the entire human species, let's look at a relatively isolated tribe. In the relatively isolated, probably sub Dunbar number, sub 150 people tribe, do I look at that as one entity where evolution is selecting for it based on group selection, or do I think of it as 150 individuals that are interacting in some way?

Well, could those individuals exist without the group? No. The evolutionary adaptiveness of humans critically involved group selection, and individual humans alone trying to figure out stone tools and protection and whatever aren't what was selected for. And so I think the or is the wrong frame. I think it's: individuals are affecting the group that they're a part of.

They're also dependent upon and being affected by the group that they're part of. And so this now starts to get deep into political theories also, which is: theories that orient towards the collective at different scales, whether a tribal scale or an empire or a nation state or something.

And ones that orient towards the individual, liberalism and stuff like that. And I think there's very obvious failure modes on both sides. And so the relationship between them is more interesting to me than either of them. The relationship between the individual and the collective, and the question around how to have a virtuous process between those.

So a good social system would be one where the organism of the individual and the organism of the group of individuals are both synergistic with each other. So what is best for the individuals and what's best for the whole is aligned. - But there is nevertheless an individual.

They're not, it's a matter of degrees, I suppose. But what defines a human more? The social network within which they've been brought up through which they've developed their intelligence or is it their own sovereign individual self? Like what's your intuition of how much, not just for evolutionary survival, but as intellectual beings, how much do we need others for our development?

- Yeah, I think we have a weird sense of this today relative to most previous periods of sapien history. I think the vast majority of sapien history is tribal. Like, depending upon your early human model, 200,000 or 300,000 years of Homo sapiens in little tribes where they depended upon that tribe for survival, and excommunication from the tribe was fatal.

I think our whole evolutionary genetic history is in that environment. And the amount of time we've been out of it is relatively so tiny. And then we still depended upon extended families and local communities more. And then there's the big kind of giant market complex, where I can provide something to the market to get money, to be able to get other things from the market, where it seems like I don't need anyone.

It's almost like disintermediating our sense of need, even though your and my ability to talk to each other using these mics and the phones that we coordinated on took millions of people over six continents to be able to run the supply chains that made all the stuff that we depend on, but we don't notice that we depend upon them.

They all seem fungible. If you take a baby, obviously you didn't even get to a baby without a mom. It depended, we depended upon each other, right? Without two parents at minimum, and they depended upon other people. But if we take that baby and we put it out in the wild, it obviously dies.

So if we let it grow up for a little while, the minimum amount of time where it starts to have some autonomy and then we put it out in the wild and this has happened a few times, it doesn't learn language and it doesn't learn the small motor articulation that we learn.

It doesn't learn the type of consciousness that we end up having that is socialized. So I think we take for granted how much conditioning affects us. - Is it possible that it affects basically 90% of us or basically 99.9 or maybe the whole thing? The whole thing is the connection between us humans and that we're no better than apes without our human connections.

Because thinking of it that way forces us to think very differently about human society and how to progress forward if the connections are fundamental. - I just have to object to the no better than apes 'cause better here, I think you mean a specific thing which means have capacities that are fundamentally different than I think apes also depend upon troops.

And I think the idea of humans as better than nature in some kind of ethical sense ends up having heaps of problems. We'll table that, we can come back to it. But when we say what is unique about homo sapien capacity relative to the other animals we currently inhabit the biosphere with?

And I'm saying it that way because there were other early hominids that had some of these capacities. We believe our tool creation and our language creation and our coordination are all kind of the results of a certain type of capacity for abstraction. And other animals will use tools but they don't evolve the tools they use.

They keep using the same types of tools that they basically can find. So a chimp will notice that a rock can cut a vine that it wants to. And it'll even notice that a sharper rock will cut it better. And experientially it'll use the sharper rock. And if you even give it a knife it'll probably use the knife 'cause it's experiencing the effectiveness.

But it doesn't make stone tools because that requires understanding why one is sharper than the other. What is the abstract principle called sharpness to then be able to invent a sharper thing? That same abstraction makes language and the ability for abstract representation which makes the ability to coordinate in a more advanced set of ways.

So I do think our ability to coordinate with each other is pretty fundamental to the selection of what we are as a species. - I wonder if that coordination, that connection is actually the thing that gives birth to consciousness. That gives birth to, well let's start with self-awareness. - More like theory of mind.

- Theory of mind, yeah. I mean, I suppose there's experiments that show that there's other mammals that have a very crude theory of mind. I'm not sure, maybe dogs, something like that. But actually with dogs it probably has to do with the fact that they co-evolved with humans. See, it'd be interesting if that theory of mind is what leads to consciousness in the way we think about it.

As in the richness of the subjective experience that is consciousness. I have an inkling sense that that only exists because we're social creatures. That doesn't come with the hardware and the software in the beginning. That's learned as an effective tool for communication, almost. I think we think that consciousness is fundamental.

Maybe it's not. A bunch of folks kind of criticize the idea that the illusion of consciousness is consciousness. That it is just a facade we use to help us construct theories of mind. You almost put yourself in the world as a subjective being. And that experience, you want to richly experience it as an individual person so that I could empathize with your experience.

I find that notion compelling. Mostly because it allows you to then create robots that become conscious not by being quote unquote conscious but by just learning to fake it 'til they make it. Present a facade of consciousness with the task of making that facade very convincing to us humans and thereby it will become conscious.

I have a sense that in some way that will make them conscious, if they're sufficiently convincing to humans. Is there some element of that that you find convincing? - This is a much harder set of questions, and a deeper end of the pool, than starting with the aliens was. We went from aliens to consciousness.

This is not the trajectory I was expecting, nor you. But let us walk a while. We can walk a while and I don't think we will do it justice. So what do we mean by consciousness versus conscious self-reflective awareness? What do we mean by awareness, qualia, theory of mind?

There's a lot of terms that we think of as slightly different things: subjectivity, first person. I don't remember exactly the quote, but I remember reading, when Sam Harris wrote the book "Free Will" and then Dennett critiqued it, there was some writing back and forth between the two, because normally they're on the same side, kind of arguing for critical thinking and logical fallacies and philosophy of science against supernatural ideas.

And here Dennett believed there is something like free will; he is a determinist and compatibilist, but no consciousness, a radical eliminativist. And Sam was saying, no, there is consciousness, but there's no free will. And that's like the most fundamental kinds of axiomatic senses they disagreed on, but neither of them could say it was 'cause the other one didn't understand the philosophy of science or logical fallacies.

And they kind of spoke past each other. And at the end, if I remember correctly, Sam said something that I thought was quite insightful which was to the effect of, it seems 'cause they weren't making any progress in shared understanding. It seems that we simply have different intuitions about this.

And what you could see was that what the words meant, right, at the level of symbol grounding, might be quite different. One of them might've had deeply different enough life experiences that what is being referenced, and also the associations of what the words mean, differed. This is why, when trying to address these things, Charles Sanders Peirce said, "The first philosophy has to be semiotics, because if you don't get semiotics right, we end up importing different ideas and bad ideas right into the nature of the language that we're using." And then it's very hard to do epistemology or ontology together.

So I'm saying this to say why I don't think we're gonna get very far is I think we would have to go very slowly in terms of defining what we mean by words and fundamental concepts. - Well, and also allowing our minds to drift together for a time so that our definitions of these terms align.

I think there's a beauty that some people enjoy with Sam that he is quite stubborn on his definitions of terms without often clearly revealing that definition. So in his mind, he can, like, you could sense that he can deeply understand what he means exactly by a term like free will and consciousness.

And you're right. He's very specific in fascinating ways that not only does he think that free will is an illusion, he thinks he's able, not thinks, he says he's able to just remove himself from the experience of free will and just be like for minutes at a time, hours at a time, like really experience as if he has no free will.

Like he's a leaf flowing down the river. And given that, he's very sure that consciousness is fundamental. So here's this conscious leaf that's subjectively experiencing the floating, and yet has no ability to control and make any decisions for itself. It's only that the decisions have all been made. There's some aspect to which the terminology there perhaps is the problem.

- So that's a particular kind of meditative experience. And the people in the Vedantic tradition and some of the Buddhist traditions thousands of years ago described similar experiences and somewhat similar conclusions, some slightly different. There are other types of phenomenal experience that are the phenomenal experience of pure agency.

And like the Catholic theologian but evolutionary theorist Teilhard de Chardin describes this. And that rather than a creator agent God in the beginning, there's a creative impulse or a creative process. And he would go into a type of meditation that identified with the pure essence of that kind of creative process.

And I think, one, the types of experiences we've had make a big difference to the nature of how we do symbol grounding. The other thing is, the types of experiences we have can't not be interpreted through our existing interpretive frames. And most of the time our interpretive frames are unknown even to us, some of them.

And so this is a tricky topic. So I guess there's a bunch of directions we could go with it, but I wanna come back to what the impulse was that was interesting around what is consciousness and how does it relate to us as social beings? And how does it relate to the possibility of consciousness with AIs?

- Right, you're keeping us on track, which is wonderful. You're a wonderful hiking partner. - Okay. - Yes. Let's go back to the initial impulse of what is consciousness and how does the social impulse connect to consciousness? Is consciousness a consequence of that social connection? - I'm gonna state a position and not argue it 'cause it's honestly, like it's a long, hard thing to argue and we can totally do it another time if you want.

I don't subscribe to consciousness as an emergent property of biology or neural networks. Obviously, a lot of people do. Obviously, the philosophy of science orients towards that, not absolutely, but largely. I think of the nature of first person, the universe of first person, of qualia, as experience, sensation, desire, emotion, phenomenology, but the felt sense; not the... we say emotion and we think of a neurochemical pattern or an endocrine pattern.

But all of the physical stuff, the third person stuff, has position and momentum and charge and stuff like that that is measurable, repeatable. I think of the nature of first person and third person as ontologically orthogonal to each other, not reducible to each other. They're different kinds of stuff.

So I think about the evolution of third person that we're quite used to thinking about, from subatomic particles to atoms to molecules to on and on. I think about a similar kind of corresponding evolution in the domain of first person, from the way Whitehead talked about prehension or proto-qualia in earlier phases of self-organization, to higher orders of it, and that there's correspondence. But we neither, like the idealists, reduce third person to first person.

Nor, like the physicalists, do we reduce first person to third person. Obviously, Bohm talked about an implicate order that was deeper than and gave rise to the explicate order of both. Nagel talks about something like that. I have a slightly different sense of that, but again, I'll just kind of not argue how that occurs for a moment.

So rather than say, does consciousness emerge from, I'll talk about do higher capacities of consciousness emerge in relationship with? So it's not first person as a category emerging from third person, but increased complexity within the nature of first person and third person co-evolving. Do I think that it seems relatively likely that more advanced neural networks have deeper phenomenology, more complex, where it goes just from basic sensation to emotion, to social awareness, to abstract cognition, to self-reflexive abstract cognition?

Yeah, but I wouldn't say that's the emergence of consciousness. I would say it's increased complexity within the domain of first person corresponding to increased complexity in third person. And the correspondence should not automatically be seen as causal. We can get into the arguments for why that often is the case. So would I say that obviously the sapien brain is pretty unique and a single sapien now has that, right?

Even if it took sapiens evolving in tribes based on group selection to make that brain. So the group made it, now that brain is there. Now, if I take a single person with that brain out of the group and try to raise them in a box, they'll still not be very interesting even with the brain.

But the brain does give hardware capacities that if conditioned in relationship can have interesting things emerge. So do I think that the human biology, types of human consciousness and types of social interaction all co-emerged and co-evolved? Yes. - As a small aside, as you're talking about the biology, let me comment that I spent, this is what I do.

This is what I do with my life. This is why I will never accomplish anything: I spent much of the morning trying to do research on how many computations the brain performs and how much energy it uses, versus the state-of-the-art CPUs and GPUs. Arriving at about 20 quadrillion.

So that's 2 times 10 to the 16 computations, so synaptic firings per second, that the brain does. And that's about a million times faster than, let's say, the 20-thread state-of-the-art Intel CPU, the 10th generation. And then there's a similar calculation for the GPU, and I also ended up trying to compute that it takes about 10 watts to run the brain.

And then what does that mean in terms of calories per day, kilocalories? For an average human brain, that's about 250 to 300 calories a day. And so it ended up being a calculation where you're doing about 20 quadrillion calculations that are fueled by something like, depending on your diet, three bananas.
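
[Editor's aside: the back-of-the-envelope arithmetic above can be checked in a few lines of Python. The CPU throughput and banana calorie figures below are assumptions added for illustration; the rest are the numbers quoted in the conversation.]

    # Back-of-the-envelope check of the figures quoted above.
    SYNAPTIC_OPS_PER_SEC = 2e16  # "20 quadrillion" synaptic firings per second
    CPU_OPS_PER_SEC = 2e10       # assumed throughput of a 20-thread desktop CPU
    BRAIN_WATTS = 10             # the ~10 W figure quoted above
    KCAL_PER_BANANA = 105        # typical medium banana (assumption)

    SECONDS_PER_DAY = 86_400
    JOULES_PER_KCAL = 4_184

    speedup = SYNAPTIC_OPS_PER_SEC / CPU_OPS_PER_SEC
    kcal_per_day = BRAIN_WATTS * SECONDS_PER_DAY / JOULES_PER_KCAL
    bananas_per_day = kcal_per_day / KCAL_PER_BANANA

    print(f"brain vs CPU: ~{speedup:,.0f}x")              # ~1,000,000x
    print(f"brain energy: ~{kcal_per_day:.0f} kcal/day")  # ~207 kcal at 10 W
    print(f"fuel: ~{bananas_per_day:.1f} bananas/day")    # ~2.0; the quoted
    # 250-300 kcal/day figure would correspond to roughly 12-15 W instead.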

So three bananas results in a computation that's about a million times more powerful than the current state of the art computers. - Now let's take that one step further. There's some assumptions built in there. The assumption is that one, what the brain is doing is just computation. Two, the relevant computations are synaptic firings and that there's nothing other than synaptic firings that we have to factor.

So I'm forgetting his name right now, but there's a very famous neuroscientist at Stanford, who just passed away recently, who did a lot of the pioneering work on glial cells and showed that, in his assessment, glial cells did a huge amount of the thinking, not just neurons. And it opened up this entirely different field of what the brain is and what consciousness is.

You look at Damasio's work on embodied cognition and how much of what we would consider consciousness or feeling is happening outside of the nervous system completely, happening in endocrine processes involving lots of other cells and signal communication. You talk to somebody like Penrose, who you've had on the show.

And even though the Penrose-Hameroff conjecture is probably not right, is there something like that that might be the case, where we're actually having to look at stuff happening at the level of quantum computation in microtubules? I'm not arguing for any of those. I'm arguing that we don't know how big the unknown unknown set is.

- Well, at the very least, this has become like an infomercial for the human brain. But wait, there's more. At the very least, the three bananas buys you a million times-- - At the very least. - At the very least. - It's impressive. - And then you could have, and then the synaptic firings we're referring to is strictly the electrical signals.

It could be the mechanical transmission of information, there's chemical transmission of information, there's all kinds of other stuff going on. And there's memory that's built in that's also all tied in. Not to mention, which I'm learning more and more about, it's not just about the neurons. It's also about the immune system that's somehow helping with the computation.

So it's the entirety, and the entire body is helping with the computation. So the three bananas-- - It could buy you a lot. - It could buy you a lot. But on the topic of sort of the greater degrees of complexity emerging in consciousness, I think few things are as beautiful and inspiring as taking a step outside of the human brain, just looking at systems where simple rules create incredible complexity.

Not create, incredible complexity emerges. So one of the simplest things to do that with is cellular automata. And there's, I don't know what it is, and maybe you can speak to it. We can certainly, we will certainly talk about the implications of this, but there's so few things that are as awe-inspiring to me as knowing the rules of a system and not being able to predict what the heck it looks like.

And it creates incredibly beautiful complexity that when zoomed out on, looks like there's actual organisms doing things that are much, that operate on a scale much higher than the underlying mechanism. So with cellular automata, that's cells that are born and die, born and die, and they only know about each other's neighbors.

And there's simple rules that govern that interaction of birth and death. And then they create, at scale, organisms that look like they take up hundreds or thousands of cells, and they're moving. They're moving around, they're communicating, they're sending signals to each other. And you forget, at moments at a time, before you remember, that the simple rules on cells is all that it took to create that.
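
[Editor's aside: the simplest concrete version of the system described here is Conway's Game of Life, a cellular automaton with exactly this birth/death/neighbor structure. A minimal sketch, assuming NumPy; the glider is one of the "organisms" that emerges from the purely local rules.]

    import numpy as np

    def life_step(grid: np.ndarray) -> np.ndarray:
        """One step of Conway's Game of Life on a wrap-around grid."""
        # Each cell sees only its eight neighbors: sum shifted copies of the grid.
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Birth: dead cell with exactly 3 live neighbors.
        # Survival: live cell with 2 or 3 live neighbors.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

    # A "glider": five live cells that, under these purely local rules, crawl
    # diagonally forever, an "organism" nowhere written into the rules themselves.
    grid = np.zeros((16, 16), dtype=int)
    for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[y, x] = 1
    for _ in range(4):  # after 4 steps the same shape reappears, shifted by (1, 1)
        grid = life_step(grid)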

It's sad in that we can't come up with a simple description of that system that generalizes the behavior of the large organisms. We can only hope to come up with the fundamental physics, or the fundamental rules, of that system, I suppose. It's sad that we can't predict.

Everything we know about the mathematics of those systems, it seems like we can't really, in a nice way, like economics tries to do, predict how this whole thing will unroll. But it's beautiful because of how simple it is underneath it all. So what do you make of the emergence of complexity from simple rules?

What the hell is that about? - Yeah, well, we can see that something like flocking behavior, the murmuration, can be computer-coded. It's not a very hard set of rules to be able to see some of those really amazing types of complexity. And the whole field of complexity science and some of the sub-disciplines like stigmergy are studying how, following fairly simple responses to a pheromone signal, ant colonies do this amazing thing where what you might describe as the organizational or computational capacity of the colony is so profound relative to what each individual ant is doing.
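
[Editor's aside: the flocking rules referenced here are classically Craig Reynolds' "boids": each agent responds only to nearby neighbors through separation, alignment, and cohesion. A minimal sketch follows; the weights and radius are arbitrary illustrative values, not from the conversation.]

    import numpy as np

    def boids_step(pos, vel, radius=2.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=0.1):
        """One update of Reynolds-style flocking for N agents in 2D."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            dist = np.linalg.norm(pos - pos[i], axis=1)
            nbr = (dist > 0) & (dist < radius)  # react to nearby neighbors only
            if not nbr.any():
                continue
            sep = (pos[i] - pos[nbr]).mean(axis=0)  # steer away from crowding
            ali = vel[nbr].mean(axis=0) - vel[i]    # match neighbors' heading
            coh = pos[nbr].mean(axis=0) - pos[i]    # drift toward local center
            new_vel[i] = vel[i] + w_sep * sep + w_ali * ali + w_coh * coh
        return pos + dt * new_vel, new_vel

    # Usage: a random flock, stepped forward; murmuration-like structure emerges.
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 10, size=(100, 2))
    vel = rng.normal(0, 0.5, size=(100, 2))
    for _ in range(500):
        pos, vel = boids_step(pos, vel)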

I am not anywhere near as well versed in the cutting edge of cellular automata as I would like. Unfortunately, it's in the set of topics that I would like to get to and haven't. Wolfram's "A New Kind of Science" I have only skimmed and read reviews of, and have not read the whole thing, or his newer work since.

But his idea of the four basic categories of emergent phenomena that can come from cellular automata, and that one of them is kind of interesting and looks a lot like complexity, rather than just chaos or homogeneity or self-termination or whatever, I think is very interesting. It does not instantly make me think that biology is operating on a similarly small set of rules or that human consciousness is.

I'm not that reductionistically oriented. And so if you look at, say, the Santa Fe Institute, one of the co-founders, Stuart Kauffman, and his work; you should really get him on your show. For a lot of the questions that you like: one of Kauffman's more recent books, after "Investigations" and some of the real fundamental stuff, was called "Reinventing the Sacred," and it had to do with some of these exact questions in a kind of non-reductionist approach, but one that is not just silly hippie-ism.

And he was very interested in highly non-ergodic systems, where you couldn't take a lot of behavior over a small period of time and predict what the behavior of subsets over a longer period of time would do. And then going further, someone who spent some time at Santa Fe Institute and then kind of made a whole new field, who you should have on, is Dave Snowden, who some people call the father of anthro-complexity, or the complexity unique to humans.

He says something to the effect that modeling humans as termites really doesn't cut it. Like, we don't respond exactly identically to the same pheromone stimulus. Stigmergy works for flows of traffic and some very simple human behaviors, but it really doesn't work for trying to make sense of the Sistine Chapel and Picasso and the creation of general relativity and stuff like that.

And it's because the termites are not doing abstraction, forecasting deep into the future and making choices now based on forecasts of the future, not just adaptive signals in the moment and evolutionary code from history. That's really different, right? Like making choices now that can factor deep modeling of the future.

And with humans, our uniqueness one to the next in terms of response to similar stimuli is much higher than it is with a termite. One of the interesting things there is that their uniqueness is extremely low. They're basically fungible within a class, right? There's different classes, but within a class they're basically fungible and their system uses that, very high numbers and lots of loss, right?

Lots of death and loss. - Do you think the termite feels that way? Don't you think we humans are deceiving ourselves about our uniqueness? Perhaps it doesn't just, isn't there some sense in which this emergence just creates different higher and higher levels of abstraction where at every layer, each organism feels unique?

Is that possible? That we're all equally dumb but at different scales? - No, I think uniqueness is evolving. I think that hydrogen atoms are more similar to each other than cells of the same type are. And I think that cells are more similar to each other than humans are.

And I think that highly K-selected species are more unique than r-selected species. They're different evolutionary processes. With r-selected species, where you have a lot of death and very high birth rates, you're not looking for as much individuality, or possible individual expression to cover the evolutionary search space, within an individual.

You're looking at it more in terms of a numbers game. So yeah, I would say there's probably more difference between one orca and the next than there is between one Cape buffalo and the next. - Given that, it would be interesting to get your thoughts about mimetic theory where we're imitating each other in the context of this idea of uniqueness.

How much truth is there to that? How compelling is this worldview to you of Girardian mimetic theory of desire where maybe you can explain it from your perspective, but it seems like imitating each other is the fundamental property of the behavior of human civilization. - Well, imitation is not unique to humans, right?

Monkeys imitate. So a certain amount of learning through observing is not unique to humans. Humans do more of it. It's actually kind of worth speaking to this for a moment. Monkeys can learn new behaviors, new... We've even seen teaching an ape sign language and then the ape teaching other apes sign language.

So that's a kind of mimesis, right? A kind of learning through imitation. And that needs to happen if they need to learn or develop capacities that are not just coded by their genetics, right? So within the same genome, they're learning new things based on the environment, based on someone else having learned something first, and then others pick it up.

And so let's pick it up. How much a creature is the result of just its genetic programming and how much it's learning is a very interesting question. And I think this is a place where humans really show up radically different than everything else. And you can see it in the neoteny, how long we're basically fetal.

That the closest ancestors to us, if we look at a chimp, a chimp can hold on to its mother's fur while she moves around day one. And obviously we see horses up and walking within 20 minutes. The fact that it takes a human a year to be walking and it takes a horse 20 minutes and you say how many multiples of 20 minutes go into a year?

Like, that's a long period of helplessness that wouldn't work for a horse, right? Or for anything else. And not only can we not hold on to mom in the first day, it's three months before we can move our head volitionally. So it's like, why are we embryonic for so long?

Basically, it's like we're still fetal on the outside. It had to be, because we couldn't keep growing inside and actually ever get out, with big heads and narrower hips from going upright. So here's a place where there's a co-evolution of the pattern of humans, specifically our neoteny and what that portends for learning, with our being tool-making and environment-modifying creatures.

Which is because we have the abstraction to make tools, we change our environments more than other creatures change their environments. The next most environment modifying creature to us is like a beaver. And then you were in LA, you fly into LAX and you look at the just orthogonal grid going on forever in all directions.

And we've recently come into the Anthropocene where the surface of the earth is changing more from human activity than geological activity and then beavers. And you're like, okay, wow, we're really in a class of our own in terms of environment modifying. So as soon as we started tool making, we were able to change our environments much more radically.

We could put on clothes and go to a cold place. And this is really important because we actually went and became apex predators in every environment. We functioned like apex predators. The polar bear can't leave the Arctic, right? And the lion can't leave the Savannah and an orca can't leave the ocean.

And we went and became apex predators in all those environments because of our tool creation capacity. We could become better predators than them adapted to the environment or at least with our tools adapted to the environment. - So then every aspect towards any organism in any environment, we're incredibly good at becoming apex predators.

- Yes, and nothing else can do that kind of thing. There is no other apex predator that can. But see, the other apex predator is only getting better at being a predator through an evolutionary process that's super slow. And that super slow process creates a co-selective process with its environment. So as the predator becomes a tiny bit faster, it eats more of the slow prey, the genes of the fast prey breed, and the prey becomes faster.

And so there's this kind of balancing. Because of our tool making, we increased our predatory capacity faster than anything else could increase its resilience to it. As a result, we started outstripping the environment and extincting species, following stone tools and going and becoming apex predator everywhere. This is why we can't keep applying apex predator theories, 'cause we're not just an apex predator.

We're an apex predator, but we're something much more than that. Like just for an example, the top apex predator in the world, an orca. An orca can eat one big fish at a time, like one tuna, and it'll miss most of the time or one seal. And we can put a mile long drift net out on a single boat and pull up an entire school of them, right?

We can deplete the entire oceans of them. That's not an orca, right? Like that's not an apex predator. And that's not even including that we can then genetically engineer different creatures. We can extinct species, we can devastate whole ecosystems. We can make built worlds that have no natural things that are just human built worlds.

We can build new types of natural creatures, synthetic life. So we are much more like little gods than we are like apex predators now, but we're still behaving as apex predators. And little gods that behave as apex predators cause a problem; that's kind of core to my assessment of the world.

- So what does it mean to be a predator? Is a predator somebody that effectively can mine the resources from a place for their survival, or is it also just purely, like, higher-level objectives of violence? And can predators be predators towards each other, towards the same species?

Like, are we using the word predator sort of generally, which then connects to conflict and military conflict, violent conflict, in the space of the human species? - Obviously we can say that plants are mining the resources of their environment in a particular way, using photosynthesis to be able to pull minerals out of the soil and nitrogen and carbon out of the air and like that.

And we can say herbivores are able to mine and concentrate that. So I wouldn't say mining the environment is unique to predators. Predator is, you know... - Violence. - Generally defined as mining other animals, right? We don't consider herbivores predators, but mining animals requires some type of violence capacity, because animals move, plants don't move.

So it requires some capacity to overtake something that can move and try to get away. We'll go back to the Girard thing, then we'll come back here. Why are we neotenous? Why are we embryonic for so long? Did we just move from the Savannah to the Arctic, and do we need to learn new stuff?

If we came genetically programmed, we would not be able to do that. Are we throwing spears or are we fishing or are we running an industrial supply chain or are we texting? What is the adaptive behavior? Horses today in the wild and horses 10,000 years ago were doing pretty much the same stuff.

And so since we make tools and we evolve our tools and then change our environment so quickly and other animals are largely the result of their environment, but we're environment modifying so rapidly, we need to come without too much programming so we can learn the environment we're in, learn the language, right?

Which is gonna be very important, to learn the toolmaking. And so we have a very long period of relative helplessness, because we aren't coded how to behave yet, because we're imprinting a lot of software on how to behave that is useful to that particular time. So our mimesis is not unique to humans, but the total amount of it is really unique.

And this is also where the uniqueness can go up, right? Because we are less just the result of the genetics, meaning the kind of learning through history that got coded in genetics, and more the result of... it's almost like our hardware selected for software, right?

Like, if evolution is kind of doing this, think of it as hardware selection. I have problems with computer metaphors for biology, but I'll use this one here. We have not had hardware changes since the beginning of sapiens, but our world is really, really different. And that's all changes in software, right?

Changes, on the same fundamental genetic substrate, in what we're doing with these brains and minds and bodies and social groups and like that. And so now, Girard specifically was looking at: when we watch other people talking, we learn language. You and I would have a hard time learning Mandarin today, or it'd take a lot of work.

We'd be learning how to conjugate verbs and stuff, but a baby learns it instantly without anyone even really trying to teach it just through mimesis. So it's a powerful thing. They're obviously more neuroplastic than we are when they're doing that and all their attention is allocated to that. But they're also learning how to move their bodies and they're learning all kinds of stuff through mimesis.

One of the things that Girard says is they're also learning what to want. They learn desire by watching what other people want. And so, intrinsic to this, people end up wanting what other people want. And if we can't have what other people have without taking it away from them, then that becomes a source of conflict.

So the mimesis of desire is the fundamental generator of conflict and that then the conflict energy within a group of people will build over time. This is a very, very crude interpretation of the theory. - Can we just pause on that? For people who are not familiar and for me who hasn't, I'm loosely familiar but haven't internalized it, but every time I think about it, it's a very compelling view of the world, whether it's true or not.

It's quite, it's like when you take everything Freud says as truth, it's a very interesting way to think about the world. In the same way, thinking about the mimetic theory of desire, that everything we want is imitation of other people's wants. We don't have any original wants. We're constantly imitating others.

And not just others, but others we're exposed to. So there's these little local pockets, however locally defined, of people imitating each other. And that's super empowering, because then you can pick which group you join. Like, what do you wanna imitate? (laughs) It's the old: whoever your friends are, that's what your life is gonna be like.

That's really powerful. I mean, it's depressing that we're so unoriginal, but it's also liberating in that, if this holds true, that we can choose our life by choosing the people we hang out with. - So, okay. Thoughts that are very compelling, that seem like they're more absolute than they actually are, end up also being dangerous.

We wanna-- - Communism? (laughs) - I'm gonna discuss here where I think we need to amend this particular theory. But specifically, you just said something that everyone who's paid attention knows is true experientially, which is who you're around affects who you become. And as libertarian and self-determining and sovereign as we'd like to be, everybody, I think, knows that if you got put in a maximum security prison, aspects of your personality would have to adapt or you wouldn't survive there, right?

You would become different. If you grew up in Darfur versus Finland, you would be different with your same genetics. Like, just, there's no real question about that. And that even today, if you hang out in a place with ultra marathoners as your roommates or all people who are obese as your roommates, the statistical likelihood of what happens to your fitness is pretty clear, right?

Like the behavioral science of this is pretty clear. So, the whole saying we are the average of the five people we spend the most time around. I think the more self-reflective someone is and the more time they spend by themselves in self-reflection, the less this is true, but it's still true.

So, one of the best things someone can do to become more self-determined is be self-determined about the environments they wanna put themselves in. Because to the degree that there is some self-determination and some determination by the environment, don't be fighting an environment that is predisposing you in bad directions.

Try to put yourself in an environment that is predisposing the things that you want. In turn, try to affect the environment in ways that predispose positive things for those around you. - Or perhaps also, there are probably interesting ways to play with this. You could probably put yourself, like, form connections that have this perfect tension in all directions, to where you're actually free to decide whatever the heck you want, because the set of wants within your circle of interactions is so conflicting that you're free to choose whichever one.

So, if there's enough tension, as opposed to everybody aligned like a flock of birds. - Yeah, I mean, you definitely want all of the dialectics to be balanced. So, if you have someone who is extremely oriented to self-empowerment and someone who's extremely oriented to kind of empathy and compassion, the dialectic of those two is better than either of them on their own.

If you have both of them inhabited, better than you inhabit them, by the same person, spending time around that person will probably do you well. I think the thing you just mentioned is super important when it comes to cognitive skills, which is: I think one of the fastest things people can do to improve their learning, and not just their cognitive learning, but their meaningful problem-solving, communication, and civic capacity, their capacity to participate as a citizen with other people in making the world better, is to be seeking dialectical synthesis all the time.

And so, in the Hegelian sense, if you have a thesis, you have an antithesis. So, maybe we have libertarianism on one side and Marxist kind of communism on the other side. And one is arguing that the individual is the unit of choice. And so, we want to increase the freedom and support of individual choice, because as they make more agentic choices, it'll produce a better whole for everybody.

The other side is saying, well, individuals are conditioned by their environment. Who would choose to be born into Darfur rather than Finland? So we actually need to collectively make environments that are good, because the environment conditions individuals. So you have a thesis and an antithesis. And then in Hegel's idea, you have a synthesis, which is a kind of higher order truth that understands how those relate in a way that neither of them does.

And so, it is actually at a higher order of complexity. So, the first part would be, can I steel man each of these? Can I argue each one well enough that the proponents of it are like, totally, you got that? And not just argue it rhetorically, but can I inhabit it where I can try to see and feel the world the way someone seeing and feeling the world that way would?

'Cause once I do, then I don't want to screw those people because there's truth in it, right? And I'm not gonna go back to war with them. I'm gonna go to finding solutions that could actually work at a higher order. If I don't go to a higher order, then there's war.

But then the higher order thing would be, well, it seems like the individual does affect the commons and the collective and other people. It also seems like the collective conditions individuals, at least statistically. And I can cherry-pick the one guy who got out of the ghetto and pulled himself up by his bootstraps.

But I can also say, statistically, that most people born into the ghetto show up differently than most people born into the Hamptons. And so, unless you wanna argue that, would you take your child from the Hamptons and put them in the ghetto? Like, come on, be realistic about this thing.

So how do we make... we don't want social systems that make weak, dependent individuals, right? The welfare argument. But we also don't want there to be no social system that supports individuals to do better. We don't want individuals whose self-expression and agency fucks the environment and everybody else and employs slave labor and whatever.

So can we make it to where individuals are creating wholes that are better for conditioning other individuals? Can we make it to where we have wholes that are conditioning increased agency and sovereignty? Right, that would be the synthesis. So the thing that I'm coming to here is, if people have that as a frame, and sometimes it's not just thesis and antithesis, it's like eight different views, right?

Can I steel man each view? This is not just, can I take the perspective, but am I seeking them? Am I actively trying to inhabit other people's perspective? Then can I really try to essentialize it and argue the best points of it, both the sense-making about reality and the values, why these values actually matter?

Then, just like I wanna seek those perspectives, then I wanna seek, is there a higher order set of understandings that could fulfill the values of and synthesize the sense-making of all of them simultaneously? Maybe I won't get it, but I wanna be seeking it, and I wanna be seeking progressively better ones.

So this is perspective seeking, driving perspective taking, and then seeking synthesis. I think that one cognitive disposition might be the most helpful thing. - Would you put a title of dialectic synthesis on that process? 'Cause that seems to be such a part of it, this rigorous empathy. Like, it's not just empathy, it's empathy with rigor.

Like you really want to understand and embody different worldviews, and then try to find a higher order synthesis. - Okay, so I remember last night you told me, when we first met, you said that you looked in somebody's eyes and you felt that you had suffered in some ways that they had suffered, and so you could trust them.

Shared pathos, right, creates a certain sense of shared bonding and shared intimacy. So empathy is actually feeling the suffering of somebody else, and feeling the depth of their sentience. I don't wanna fuck them over anymore, I don't wanna hurt them. I don't want my proposition to go through if, when I go and inhabit the perspective of the other people, I feel it's really gonna mess them up, right?

And so the rigorous empathy, it's different than just compassion, which is I generally care. Like I have a generalized care, but I don't know what it's like to be them. I can never know what it's like to be them perfectly, and there's a humility you have to have, which is my most rigorous attempt is still not it.

My most rigorous attempt, mine, to know what it's like to be a woman is still not it. I have no question that if I was actually a woman, it would be different than my best guesses. I have no question if I was actually black, it'd be different than my best guesses.

So there's a humility in that which keeps me listening, 'cause I don't think that I know fully, but I want to, and I'm gonna keep trying to do better. And then I wanna do that across all of them, and then I wanna say, is there a way we can move forward together and not have to be at war?

It has to be something that could meet the values that everyone holds, that could reconcile the partial sensemaking that everyone holds, and that could offer a way forward that is more agreeable than the partial perspectives at war with each other. - But so the more you succeed at this empathy with humility, the more you're carrying the burden of other people's pain, essentially.

- Now this goes back to the question of, do I see us as one being or 7.8 billion? If I'm overwhelmed with my own pain, I can't empathize that much, because I don't have the bandwidth, I don't have the capacity. If I don't feel like I can do something about a particular problem in the world, it's hard to feel it, 'cause it's just too devastating.

And so a lot of people go numb and even go nihilistic, because they just don't feel the agency. So as I actually become more empowered as an individual and have more sense of agency, I also become more empowered to be more empathetic for others, and be more connected to that shared burden, and want to be able to make choices on behalf of, and in benefit of, others.

- So this way of living seems like a way of living that would solve a lot of problems in society from a cellular automata perspective. So if you have a bunch of little agents behaving in this way, my intuition, there'll be interesting complexities that emerge, but my intuition is it will create a society that's very different and recognizably better than the one we have today.

How much, oh wait, hold that question, 'cause I want to come back to it, but this brings us back to Girard, which we didn't answer. The conflict theory. - Yes. - About how to get past the conflict theory. - Yes. You know the Robert Frost poem about the two paths, and you never have time to turn back to the other.

We're gonna have to do that quite a lot. We're gonna be living that poem over and over again. But yes, how to get past it. Let's return back. - Okay, so the rest of the argument goes: you learn to want what other people want, therefore there's fundamental conflict based in our desire, because we want the thing that somebody else has.

And then people are in conflict over trying to get the same stuff: power, status, attention, physical stuff, a mate, whatever it is. And then we learn the conflict by watching. And so then the conflict becomes mimetic. And so, you know, we end up on the Palestinian side or the Israeli side, or the communist or capitalist side, or the left or right politically, or whatever it is.

Until eventually the conflict energy in the system builds up so much that some type of violence is needed to get the bad guy, whoever it is that we're gonna blame. And you know, Girard talks about why scapegoating was kind of a mechanism to minimize the amount of violence.

Let's blame a scapegoat as being more relevant than they really were. But if we all believe it, then we can all kind of calm down the conflict energy. - It's a really interesting concept, by the way. I mean, you beautifully summarized it, but the idea that there's a scapegoat, that this kind of thing naturally leads to a conflict.

And then they find the other, some group that's the other, that's either real or artificial, as the cause of the conflict. - Well, it's always artificial, because the cause of the conflict, in Girard, is the mimesis of desire itself. And how do we attack that? How do we attack the fact that it's our own desire?

So this now gets to something more like what Buddha said, right? Which was: desire is the cause of suffering. Girard and Buddha would kind of agree in this way. - So, but that explains, I mean, again, it's a compelling description of human history, that we do tend to come up with the other.

And-- - Okay, kind of. I just had such a funny experience with someone critiquing Girard the other day in such an elegant and beautiful and simple way. It's a friend who grew up Aboriginal Australian and is a scholar of Aboriginal social technologies. And he's like, nah, man, Girard just made shit up about how tribes work.

Like we come from a tribe, we've got tens of thousands of years, and we didn't have increasing conflict and then scapegoat and kill someone. We'd have a little bit of conflict and then we would dance and then everybody'd be fine. Like we'd dance around the campfire, everyone would like kind of physically get the energy out.

We'd look in each other's eyes, we'd have positive bonding, and then we're fine. And nobody, no scapegoats. And-- - I think that's called the Joe Rogan theory of desire, which is he's like all of human problems have to do with the fact that you don't do enough hard shit in your day.

So maybe you could just dance it out, 'cause he says, like, doing exercise and running on the treadmill gets all the demons out. Maybe just dancing gets all the demons out. - So this is why I say we have to be careful with taking an idea that seems too explanatory, taking it as a given, and then saying, well, now we're stuck with the fact that conflict is inexorable because of mimetic desire, and therefore, how do we deal with the inexorability of the conflict and how to sublimate violence?

Well, no, the whole thing might actually be gibberish. Meaning, it's only true in certain conditions, and in other conditions it's not true. So the deeper question is: under which conditions is that true? Under which conditions is it not true? What do those other conditions make possible and look like? - And in general, we should stay away from really compelling models of reality, because there's something about our brains where these models become sticky and we can't even think outside of them.

So-- - It's not that we stay away from them, it's that we know that the model of reality is never reality. That's the key thing. - Humility again. It goes back to just having the humility that you don't have a perfect model of reality. - The model of reality can never be reality.

The process of modeling is inherently information reduction. And I can never show that the unknown-unknown set has been factored. - It's back to the cellular automata. You can't put the genie back in the bottle. Like when you realize it's, unfortunately, sadly, impossible to create a model of a cellular automaton that predicts, to any degree of accuracy, how that system will evolve, even if you know the basic rules. Which is fascinating mathematically, sorry.

I think about it quite a lot. It's very annoying. Wolfram has this rule 30. Like, you should be able to predict it, it's so simple, but you can't. There's a problem he defines where you try to predict some aspect of the middle column of the system.

Just anything about it, what's gonna happen in the future. And you can't, you can't. It sucks. 'Cause then we can't make sense of this world, you know, of reality in a definitive way. It's always like in the striving, like we're always striving. - Yeah, I don't think this sucks.
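(For the curious reader: rule 30 really is that simple to write down. Here's a minimal Python sketch, an editorial illustration rather than anything discussed on the podcast, that evolves the automaton from a single cell and prints the center column, the sequence Wolfram's prize problems ask about.)

```python
# Rule 30: a cell's next state depends only on itself and its two
# neighbors; the closed form is left XOR (center OR right).
def rule30_step(row):
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

width, steps = 101, 50
row = [0] * width
row[width // 2] = 1  # start from a single "on" cell

center_column = []
for _ in range(steps):
    center_column.append(row[width // 2])
    row = rule30_step(row)

# Despite the trivial update rule, no known shortcut predicts this
# sequence without running the simulation itself.
print("".join(map(str, center_column)))
```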

- So that's a feature, not a bug? - Well, that's assuming a designer. I would say, I don't think it sucks. I think it's not only beautiful, but maybe necessary for beauty. - The mess. So you disagree with Jordan Peterson: you should clean up your room.

You like the room messy. - It's essential for beauty. - It's not that. Okay, I have no idea if it was intended this way, so I'm just interpreting it in a way I like: the commandment about having no false idols. The way I interpret that meaningfully is that reality is sacred to me.

I have a reverence for reality, but I know my best understanding of it is never complete. I know my best model of it is a model where I tried to make some kind of predictive capacity by reducing the complexity of it to a set of stuff that I could observe.

And then a subset of that stuff that I thought was the causal dynamics and then some set of mechanisms that are involved. And what we find is that it can be super useful. Like Newtonian gravity can help us do ballistic curves and all kinds of super useful stuff. And then we get to the place where it doesn't explain what's happening at the cosmological scale or at a quantum scale.

And each time, what we're finding is we excluded stuff. And it also doesn't explain the reconciliation of gravity with quantum mechanics and the other kinds of fundamental laws. So models can be useful, but they're never true with a capital T. Meaning they're never an actual, real, full, complete description of what's happening in real systems.

They can be a complete description of what's happening in an artificial system that was the result of applying a model. So the model of a circuit board and the circuit board are the same thing. But I would argue that the model of a cell and the cell are not the same thing.

And I would say this is key to what we call complexity versus the complicated, which is a distinction Dave Snowden made well in defining the difference between simple, complicated, complex and chaotic systems. But one of the definers in complex systems is that no matter how you model the complex system, it will still have some emergent behavior not predicted by the model.

- Can you elaborate on the complex versus the complicated? - Complicated means we can fully explicate the phase space of all the things that it can do. We can program it. Human-built things, not all, but for the most part, human-built things are complicated. They don't self-organize. They don't self-repair.

They're not self-evolving. And we can make a blueprint for them. - Sorry, for human systems? - For human technologies. - Human technologies, I'm sorry. Okay, so non-biological systems. - That are basically the application of models. And engineering is kind of applied science, science as the modeling process. But with- - But humans are complex.

- Complex stuff, biological-type stuff and sociological-type stuff, more has generator functions. And even those can't be fully explicated; our explication can't prove that it has closure over what would be in the unknown-unknown set. Where we keep finding, like, oh, it's just the genome.

Oh, well now it's the genome and the epigenome. And then a recursive change on the epigenome because of the proteome. And then there's mitochondrial DNA, and then viruses affect it, and, fuck, right? So it's like we get overexcited when we think we found the thing. - So on Facebook, you know how you can list your relationship as complicated?

It should actually say it's complex. That's the more accurate description. Self-terminating is a really interesting idea that you talk about quite a bit. First of all, what is a self-terminating system? And I think you have a sense, correct me if I'm wrong, that human civilization as it currently is, is a self-terminating system.

Why do you have that intuition? Combine it with the definition of what self-terminating means. - Okay, so if we look at human societies historically, human civilizations, it's not that hard to realize that most of the major civilizations and empires of the past don't exist anymore. So they had a life cycle, they died for some reason.

So we don't still have the early Egyptian empire or Inca or Maya or Aztec or any of those, right? And so they terminated. Sometimes it seems like they were terminated from the outside, and more often, it seems like they self-terminated. When we look at Easter Island, it was a self-termination.

So let's go ahead and take an island situation. If I have an island and we are consuming the resources on that island faster than the resources can replicate themselves, and there's a finite space there, that system is gonna self-terminate. It's not gonna be able to keep doing that thing, 'cause you'll get to a place where there's no resources left, and then you get a...

So now if I'm utilizing the resources faster than they can replicate, or faster than they can replenish, and I'm actually growing our population in the process, I'm even increasing the rate of the utilization of resources, I might get an exponential curve and then hit a wall and then just collapse the exponential curve rather than do an S curve or some other kind of thing.
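(An editorial aside: the contrast between the S curve and the overshoot-and-collapse curve is easy to see in a toy model. Everything below, the stock sizes, regeneration rate, and extraction cap, is a made-up illustration, not a model from the conversation: when extraction stays below the resource's regeneration rate, the population saturates; when it exceeds it, the population spikes and then crashes.)

```python
# Toy island: a population eats from a resource stock that regenerates
# logistically. `extraction` caps what fraction of the standing stock
# can be taken per step. All parameters are illustrative.
def simulate(extraction, steps=600):
    resource, capacity, regen = 900.0, 1000.0, 0.08
    pop, peak = 5.0, 0.0
    for _ in range(steps):
        harvest = min(2.5 * pop, extraction * resource)  # demand vs. access cap
        resource = max(resource + regen * resource * (1 - resource / capacity) - harvest, 0.0)
        per_capita = harvest / pop if pop > 0 else 0.0
        # births scale with food found; deaths are a flat 10% per step
        pop = max(pop * (1 + 0.05 * per_capita - 0.10), 0.0)
        peak = max(peak, pop)
    return peak, pop

for extraction in (0.05, 0.30):  # below vs. far above the regeneration rate
    peak, final = simulate(extraction)
    print(f"extraction {extraction:.0%}/step: peak pop {peak:.1f}, final pop {final:.1f}")
```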

So a self-terminating system is any system that is debasing its own substrate, that is debasing what it depends upon. - So you're right that if you look at empires, they rise and fall throughout human history, but not this time, bro. This one's gonna last forever.

- I like that idea. I think that if we don't understand why all the previous ones failed, we can't ensure that. And so I think it's very important to understand it well, so that we can have that be a designed outcome with somewhat decent probability. - So, in terms of consuming the resources on the island: we're a clever bunch, especially when there's a termination point on the horizon.

We keep coming up with clever ways of avoiding disaster, of avoiding collapse. This is where technological innovation, where growth, comes in: coming up with different ways to improve productivity and the way society functions, such that we consume less resources or get a lot more from the resources we have.

So there's some sense in which human ingenuity is a source for optimism about the future of this particular system, that it may not be self-terminating if there's more innovation than there is consumption. - So overconsumption of resources is just one way a thing can self-terminate. We're just kind of starting here, but there are reasons for optimism and pessimism, and they're both worth understanding.

And there's failure modes in understanding either without the other. As we mentioned previously, there's what I would call naive techno-optimism, naive techno-capital optimism, that says stuff just has been getting better and better, and we wouldn't wanna live in the dark ages, and tech has done all this awesome stuff.

And we know the proponents of those models: that stuff is gonna kind of keep getting better. Of course there are problems, but human ingenuity rises to them, supply and demand will solve the problems, whatever. - Would you put Ray Kurzweil in that bucket as well?

Are there specific people you have in mind, or is naive optimism truly naive, to where you essentially just have an optimism that's blind to the realities of the way technology progresses? - I don't think that anyone who thinks about it and writes about it is perfectly naive.

- Gotcha. - But there might be-- - It's a platonic ideal. - There might be a bias in the nature of the assessment. I would also say there's kind of naive techno-pessimism and there are critics of technology. I mean, you read the Unabomber's manifesto on why technology can't not result in our self-termination.

So we have to take it out before it gets any further. But also if you read a lot of the X-risk community, Bostrom and friends, it's like our total number of existential risks and the total probability of them is going up. And so I think that there are, we have to hold together where our positive possibilities and our risk possibilities are both increasing and then say for the positive possibilities to be realized long-term, all of the catastrophic risks have to not happen.

Any of the catastrophic risks happening is enough to keep that positive outcome from occurring. So how do we ensure that none of them happen? If we want to say, let's have a civilization that doesn't collapse. So again, collapse theory. It's worth looking at books like "The Collapse of Complex Societies" by Joseph Tainter.

It does an analysis showing that many of the societies fell for internal institutional decay, civilizational decay reasons. Baudrillard, in "Simulacra and Simulation," looks at a very different way of understanding how institutional decay in the collective intelligence of a system happens, and how it becomes kind of more internally parasitic on itself.

Obviously, Jared Diamond made a more popular book called "Collapse." And as we were mentioning, the Antikythera mechanism has been getting attention in the news lately. It's like a 2000 year old clock, right? Like metal gears. And does that mean we lost like 1500 years of technological progress? And from a society that was relatively technologically advanced.

So what I'm interested in here is being able to say, okay, well, why did previous societies fail? Can we understand that abstractly enough that we can make a civilizational model that isn't just trying to solve one type of failure, but solve the underlying things that generate the failures as a whole?

Are there some underlying generator functions or patterns that would make a system self-terminating? And can we solve those and have that be the kernel of a new civilizational model that is not self-terminating? And can we then actually look at the categories of X-risks we're aware of and see that we actually have resilience in the presence of those, not just resilience, but anti-fragility?

And I would say, for the optimism to be grounded, it has to actually be able to understand the risk space well and have adequate solutions for it. - So can we try to dig into some basic intuitions about the underlying sources of catastrophic failures of the system, and the overconsumption that's built into self-terminating systems?

So both the overconsumption, which is like the slow death, and then there's the fast death of nuclear war and all those kinds of things, AGI, biotech, bioengineering, nanotechnology, my favorite, nanobots. Nanobots are my favorite because it sounds so cool to me that I could just know that I would be one of the scientists that would be full steam ahead in building them without sufficiently thinking about the negative consequences.

I would definitely be podcasting all about the negative consequences, but when I go back home, in my heart, I'd just know the amount of excitement of a dumb descendant of apes. No offense to apes, so I wanna backtrack on my previous negative comments about apes.

I have that sense of excitement that would result in problems. So, sorry, a lot of things said, but can we start to pull at that thread? 'Cause you've also provided kind of a beautiful, general approach to this, which is this dialectic synthesis, or just rigorous empathy. Whatever word we wanna put to it, that seems to be, from the individual perspective, one way to live in the world as we try to figure out how to construct non-self-terminating systems.

So what are some underlying sources? - Yeah, first I have to say, I actually really respect Drexler for emphasizing Grey Goo in "Engines of Creation" back in the day to make sure the world was paying adequate attention to the risks of the nanotech. As someone who was right at the cutting edge of what could be, there's definitely game theoretic advantage to those who focus on the opportunities and don't focus on the risks or pretend there aren't risks because they get to market first and then they externalize all of the costs through limited liability or whatever it is to the commons or wherever happen to have it.

Other people are gonna have to solve those, but now they have the power and capital associated. The person who looked at the risks and tried to do better design and go slower is probably not gonna move into positions of as much power or influence as quickly. So this is one of the issues we have to deal with is some of the bad game theoretic dispositions in the system relative to its own stability.

- And the key aspect to that, sorry to interrupt, is the externalities generated. - Yes. - What flavors of catastrophic risk are we talking about here? What's your favorite flavor in terms of ice cream? So mine is coconut. - Nobody seems to like coconut ice cream. So ice cream aside, what do you most worry about in terms of catastrophic risk that will help us kind of make concrete the discussion we're having about how to fix this whole thing?

- Yeah, I think it's worth taking a historical perspective briefly to just kind of orient everyone to it. We don't have to go all the way back to the aliens who've seen all of civilization, but to just recognize that for all of human history, as far as we're aware, there were existential risks to civilizations and they happened, right?

Like, there were civilizations that were killed in war, tribes that were killed in tribal warfare, whatever. So people faced existential risk to the group that they identified with. It's just, those were local phenomena, right? It wasn't a fully global phenomenon. So an empire could fall and surrounding empires didn't fall.

Maybe they came in and filled the space. The first time that we were able to think about catastrophic risk, not from like a solar flare or something that we couldn't control, but from something that humans would actually create at a global level was World War II and the bomb.

Because it was the first time that we had tech big enough that could actually mess up everything at a global level. It could mess up habitability. We just weren't powerful enough to do that before. It's not that we didn't behave in ways that would have done it. We just only behaved in those ways at the scale we could affect.

And so it's important to get that there's the entire world before World War II, where we don't have the ability to make a non-habitable biosphere, non-habitable for us. And then there's World War II and the beginning of a completely new phase where global human induced catastrophic risk is now a real thing.

And that was such a big deal that it changed the entire world in a really fundamental way. When you study history, it's amazing how big a percentage of history is studying war, right? The history of wars: you study European history, whatever, it's generals and wars and empire expansions.

And so the major empires near each other never had really long periods of time where they weren't engaged in war, or preparation for war, or something like that. Humans don't have a good precedent, in the post-tribal phase, the civilization phase, of being able to solve conflicts without war for very long.

World War II was the first time where we could have a war that no one could win. And so the superpowers couldn't fight again. They couldn't do a real kinetic war. They could do diplomatic wars and cold war type stuff, and they could fight proxy wars through other countries that didn't have the big weapons.

And so mutually assured destruction and like coming out of World War II, we actually realized that nation states couldn't prevent world war. And so we needed a new type of supervening government in addition to nation states, which was the whole Bretton Woods world, the United Nations, the World Bank, the IMF, the globalization trade type agreements, mutually assured destruction.

That was, how do we have some coordination beyond just nation states between them since we have to stop war between at least the superpowers? And it was pretty successful, given that we've had like 75 years of no superpower on superpower war. We've had lots of proxy wars during that time.

We've had cold war. And I would say we're in a new phase now where the Bretton Woods solution is basically over, or almost over. - Can you describe the Bretton Woods solution? - Yeah, so the Bretton Woods, the series of agreements for how the nations would be able to engage with each other in a solution other than war was these IGOs, these intergovernmental organizations, and was the idea of globalization.

Since we could have global effects, we needed to be able to think about things globally, where we had trade relationships with each other, where it would not be profitable to war with each other. It'd be more profitable to actually be able to trade with each other. So our own self-interest was gonna drive our non-war interest.

And so this started to look like, and obviously this couldn't have happened that much earlier either, because industrialization hadn't gotten far enough to be able to do massive global industrial supply chains and ship stuff around quickly. But like we were mentioning earlier, almost all the electronics that we use today, just basic cheap stuff for us, is made on six continents, made in many countries.

There's no single country in the world that could actually make many of the things that we have, and from the raw material extraction to the plastics and polymers and the et cetera. And so the idea that we made a world that could do that kind of trade and create massive GDP growth, we could all work together to be able to mine natural resources and grow stuff.

With the rapid GDP growth, there was the idea that everybody could keep having more without having to take each other's stuff. And so that was part of kind of the Bretton Woods post-World War II model. The other was that we'd be so economically interdependent that blowing each other up would never make sense.

That worked for a while. Now, it also brought us up into planetary boundaries faster, the unrenewable use of resource and turning those resources into pollution on the other side of the supply chain. So obviously that faster GDP growth meant the overfishing of the oceans and the cutting down of the trees and the climate change and the toxic mining tailings going into the water and the mountaintop removal mining and all those types of things.

- That's the overconsumption side of the risk that we're talking about. - And so the answer of "let's do positive GDP growth" obviously rapidly and exponentially accelerated the planetary boundary side. And that was thought about for a long time, but it started to be modeled with the Club of Rome and "The Limits to Growth."

But it's just very obvious to say if you have a linear materials economy where you take stuff out of the earth faster, whether it's fish or trees or oil, you take it out of the earth faster than it can replenish itself. And you turn it into trash after using it for a short period of time, you put the trash in the environment faster than it can process itself.

And there's toxicity associated with both sides of this. You can't run an exponentially growing linear materials economy on a finite planet forever. That's not a hard thing to figure out. And it has to be exponential if there's an exponentiation in the monetary supply because of interest and then fractional reserve banking.

And then, to be able to keep up with the growing monetary supply, you have to have growth of goods and services. So that's the kind of thing that has happened. But you also see that when you get these supply chains that are so interconnected across the world, you get increased fragility, 'cause a collapse or a problem in one area then affects the whole world in a much bigger way, as opposed to the issues being local.

So we got to see with COVID and an issue that started in one part of China affecting the whole world so much more rapidly than would have happened before Bretton Woods, right? Before international travel supply chains, you know, that whole kind of thing. And with a bunch of second and third order effects that people wouldn't have predicted, okay?

We have to stop certain kinds of travel because of viral contaminants, but the countries doing agriculture depend upon fertilizer they don't produce that is shipped into them and depend upon pesticides they don't produce. So we got both crop failures and crops being eaten by locusts in scale in Northern Africa and Iran and things like that because they couldn't get the supplies of stuff in.

So then you get massive starvation, or future kinds of hunger issues, because of supply chain shutdowns. So you get this increased fragility and cascade dynamics, where a small problem can end up leading to cascade effects. And we also went from two superpowers with one catastrophe weapon to now, where more countries have that same catastrophe weapon, eight or nine countries have it.

And there's a lot more types of catastrophe weapons. We now have catastrophe weapons with weaponized drones that can hit infrastructure targets, with bio, with... in fact, every new type of tech has created an arms race. So even with the UN and the other kind of intergovernmental organizations, we haven't been able to really do nuclear deproliferation.

We've actually had more countries get nukes and keep getting faster nukes, the race to hypersonics and things like that. And every new type of technology that has emerged has created an arms race. And so you can't do mutually assured destruction with multiple agents the way you can with two agents.

Two agents, it's much easier to create a stable Nash equilibrium that's forced. But the ability to monitor and say, if these guys shoot, who do I shoot? Do I shoot them? Do I shoot everybody? And so you get a three body problem. You get a very complex type of thing when you have multiple agents and multiple different types of catastrophe weapons, including ones that can be much more easily produced than nukes.
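(A back-of-the-envelope editorial illustration of why the two-party logic doesn't transfer: every pair of armed actors is its own deterrence relationship to stabilize, so the count grows quadratically, before you even multiply by the number of distinct weapon classes or factor in the attribution problem.)

```python
from math import comb

# Pairwise deterrence relationships among n armed actors: n choose 2.
for actors in (2, 9, 30):
    print(f"{actors} actors -> {comb(actors, 2)} pairwise relationships")
# 2 -> 1, 9 -> 36, 30 -> 435; two-party MAD was the easy special case.
```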

Nukes are really hard to produce. There's only uranium in a few areas. Uranium enrichment is hard. ICBMs are hard. But weaponized drones hitting smart targets is not so hard. There's a lot of other things where, basically, the scale needed to manufacture them is going way, way down, to where even non-state actors can have them.

And so when we talk about exponential tech and the decentralization of exponential tech, what that means is decentralized catastrophe weapon capacity. And especially in a world of increasing numbers of people feeling disenfranchised, frantic, whatever, for different reasons. So I would say the Bretton Woods world doesn't prepare us to be able to deal with lots of different agents, having lots of different types of catastrophe weapons you can't put mutually assured destruction on, where you can't keep doing growth of the materials economy in the same way because of hitting planetary boundaries and where the fragility dynamics are actually now their own source of catastrophic risk.

So, like, there was all the world until World War II. And World War II, from a civilizational timescale point of view, was just a second ago. It seems like a long time, but it really is not. We get a short period of relative peace at the level of superpowers, while building up the military capacity for much, much, much worse war the entire time.

And now we're at this new phase, where the things that allowed us to make it through the nuclear age are not the same systems that will let us make it through the next stage. So what is this next post-Bretton Woods phase? How do we become safe vessels, safe stewards, of many different types of exponential technology? That is a key question when we're thinking about X-risk.

- Okay, so I'd like to try to answer the how a few ways, but first, on the mutually assured destruction: do you give credit for two superpowers not blowing each other up with nuclear weapons to the simple game theoretic model of mutually assured destruction? Or to something you've said previously, this idea of an inverse correlation, which I tend to believe in. Now, you were talking about tech, but I think it's maybe broadly true.

The inverse correlation between competence and propensity for destruction. So you don't use the bigger weapons, not because you're afraid of mutually assured self-destruction, but because we're human beings, and there's a deep moral fortitude there that's somehow aligned with competence, with being good at your job. Like, it's very hard to be a psychopath and be good at killing at scale.

Do you share any of that intuition? - Kind of. I think most people would say that Alexander the Great and Genghis Khan and Napoleon were effective people that were good at their job. That were actually maybe asymmetrically good at being able to organize people and do certain kinds of things that were pretty oriented towards certain types of destruction.

Or pretty willing to... maybe they would say they were oriented towards empire expansion, but pretty willing to commit certain acts of destruction in the name of it. - Who are you worried about? Genghis Khan, though you could argue he's not a psychopath? Are you worried about Hitler? Or are you worried about a terrorist who has a very different ethic, one that's not even trying to preserve and build and expand their community?

It's more about just the destruction in itself is the goal. - I think the thing that you're looking at that I do agree with is that there's a psychological disposition towards construction and a psychological disposition more towards destruction. Obviously everybody has both and can toggle between both. And oftentimes one is willing to destroy certain things.

We have this idea of creative destruction, right? Willing to destroy certain things to create other things. And utilitarianism and trolley problems are all about exploring that space. And the idea of war is all about that. I am trying to create something for our people and it requires destroying some other people.

Sociopathy is a funny topic, 'cause it's possible to have very high fealty to your in-group and work on perfecting the methods of torture for the out-group at the same time, 'cause you can dehumanize and then remove empathy. The thing that gives hope about the orientations towards construction and destruction being a little different psychologically is what it takes to build really catastrophic tech. Even today, where it doesn't take what it took to make a nuke, and a small group of people could do it, it still takes some real technical knowledge that requires having studied for a while, and then some building capacity.

And there's a question of: is that psychologically inversely correlated with the desire to damage civilization meaningfully? A little bit, a little bit, I think. - I think a lot. I mean, this is the conversation I had, I think offline, with Dan Carlin, which is: it's pretty easy for anyone competent... I can come up with a lot of ways to hurt a lot of people.

And it's pretty easy. Like I alone could do it. And there's a lot of people as smart or smarter than me, at least in their creation of explosives. Why are we not seeing more insane mass murder? - I think there is something fascinating and beautiful about this. - Yes.

- And it does have to do with some deeply pro-social types of characteristics in humans. But when you're dealing with very large numbers, you don't need a whole lot of the phenomenon. And so then you start to say, well, what's the probability that X won't happen this year, then won't happen in the next two years, three years, four years?
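(The arithmetic behind "very large numbers" is worth making explicit; an editorial sketch, with illustrative probabilities: if some class of catastrophe has a fixed yearly probability p, independently each year, the chance of getting through n years untouched is (1 - p)^n, and it erodes fast.)

```python
# Probability of no event over n years at fixed yearly probability p,
# assuming independence across years.
for p in (0.001, 0.01, 0.05):
    for years in (10, 100, 500):
        print(f"p={p:.1%}/yr over {years:>3} yrs: {(1 - p) ** years:.1%} chance of none")
# Even a 1%-per-year risk leaves only ~37% odds of a catastrophe-free century.
```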

And then how many people are doing destructive things with lower tech? And then how many of them can get access to higher tech that they didn't have to figure out how to build? So when I can get commercial tech, and maybe I don't understand tech very well, but I understand it well enough to utilize it, not to create it, and I can repurpose it.

When we saw that commercial drone with a homemade thermite bomb hit the Ukrainian munitions factory and do the equivalent of an incendiary bomb level of damage, that was just home tech. That's just simple kind of thing. And so the question is not, does it stay being a small percentage of the population?

The question is, can you bind that phenomenon nearly completely? And especially now, as you start to get into bigger things, CRISPR gene drive technologies and various things like that, can you bind it completely, long-term? Over what period of time? - Not perfectly, though. That's the thing. I'm trying to say that there is some, let's call it, a random word, love, that's inherent and core to human nature, that's preventing destruction at scale.

And you're saying, yeah, but there's a lot of humans. There's gonna be eight plus billion, and then there's a lot of seconds in the day to come up with stuff. There's a lot of pain in the world that can lead to a distorted view of the world such that you want to channel that pain into the destruction.

All those kinds of things. And it's only a matter of time before any one individual can do large damage, especially as we create more and more democratized, decentralized ways to deliver that damage, even if you don't know how to build the initial weapon. But the thing is, it seems like it's a race between the cheapening of destructive weapons and the capacity of humans to express their love towards each other.

And it's a race that so far, I know on Twitter, it's not popular to say, but love is winning, okay? So what is the argument that love is going to lose here against nuclear weapons and biotech and AI and drones? - Okay, I'm gonna come at the end of this to a how love wins.

So I just want you to know that that's where I'm oriented. - That's the end, okay. - But I'm gonna argue against why that is a given, because it's not a given, I don't believe. And I think that-- - This is like a good romantic comedy. So you're gonna create drama right now, but it will end in a happy ending.

- Well, it's because it's only a happy ending if we actually understand the issues well enough and take responsibility to shift it. There's a reason why there's so much more dystopic sci-fi than protopic sci-fi, and why the protopic sci-fi we do have usually requires magic, or at least magical tech, right?

Dilithium crystals and warp drives and stuff. Because it's very hard to imagine people like the people we have been in the history books with exponential type technology and power that don't eventually blow themselves up, that make good enough choices as stewards of their environment and their commons and each other and et cetera.

So like, it's easier to think of scenarios where we blow ourselves up than it is to think of scenarios where we avoid every single scenario where we blow ourselves up. And when I say blow ourselves up, I mean the environmental versions, the terrorist versions, the war versions, the cumulative externalities versions.

- And I'm sorry if I'm interrupting your flow of thought, but why is it easier? Could it be a weird psychological thing, where we're just more capable of visualizing explosions and destruction? And then the sicker thought, which is that we kind of enjoy, for some weird reason, thinking about that kind of stuff, even though we wouldn't actually act on it.

It's almost like some weird... like, I love playing shooter games, first person shooters. And especially if it's like murdering zombies, Doom, you're shooting demons. I played one of my favorite games, Diablo, like, slashing through different monsters, and the screaming and pain and the hellfire. And then I go out into the real world to eat my coconut ice cream and I'm all about love.

So like, can we trust our ability to visualize how it all goes to shit as an actual rational way of thinking? - I think it's a fair question to say to what degree is there just kind of perverse fantasy and morbid exploration and whatever else that happens in our imagination.

But I don't think that's the whole of it. I think there is also a reality to the combinatorial possibility space and the difference in the probabilities that there's a lot of ways I could try to put the 70 trillion cells of your body together that don't make you. There's not that many ways I can put them together that make you.

There's a lot of ways I could try to connect the organs together that make some weird kind of group of organs on a desk but that doesn't actually make a functioning human. And you can kill an adult human in a second, but you can't get one in a second.

It takes 20 years to grow one and a lot of things to happen right. I could destroy this building in a couple minutes with demolition, but it took a year or a couple years to build it. There is-- - Calm down, Cole, this is just an example. It's not, he doesn't mean it.

- There's a gradient where entropy is easier and that there's a lot more ways to put a set of things together that don't work than the few that really do produce higher order synergies. And so, when we look at a history of war and then we look at exponentially more powerful warfare, an arms race that drives that in all these directions, and when we look at a history of environmental destruction and exponentially more powerful tech that makes exponential externalities multiplied by the total number of agents that are doing it and the cumulative effects, there's a lot of ways the whole thing can break, like a lot of different ways.

And for it to get ahead, it has to have none of those happen. And so, there's just a probability space where it's easier to imagine that thing. So, to say how do we have a protopic future, we have to say, well, one criteria must be that it avoids all of the catastrophic risks.

So, can we understand, can we inventory all the catastrophic risks? Can we inventory the patterns of human behavior that give rise to them? And could we try to solve for that? And could we have that be the essence of the social technology that we're thinking about to be able to guide, bind, and direct a new physical technology?

'Cause so far, our physical technology, like we were talking about with the Genghis Khans and such, they obviously used certain kinds of physical technology and armaments, and also social technology and unconventional warfare, for a particular set of purposes. But we have things that don't look like warfare, like Rockefeller and Standard Oil.

And it looked like a constructive mindset to be able to bring this new energy resource to the world. And it did. And the second order effects of that are climate change and all of the oil spills that have happened and will happen. And all of the wars in the Middle East over the oil that had been there and the massive political clusterfuck and human life issues that are associated with it and on and on, right?

And so it's also that the orientation to construct a thing can have a narrow focus on what I'm trying to construct, but be affecting a lot of other things through second and third order effects I'm not taking responsibility for. - And on another tangent, you often mention second, third, and fourth order effects.

- And nth order. - And nth order. - Cascading. - Which is really fascinating. Like, starting with third order and up, it gets really interesting, 'cause we don't even acknowledge the second order effects. - Right. - But those could get bigger and bigger and bigger in ways we were not anticipating.

So it sounds like part of the thing that you're thinking through in terms of a solution, how to create an anti-fragile, resilient society, is to make explicit, acknowledge, and understand the externalities: the second order, third order, fourth order, nth order effects.

How do we start to think about those effects? - Yeah, the war application is harm we're trying to cause, or that we're aware we're causing, right? The externality is harm that, at least supposedly, we're not aware we're causing, or at minimum, it's not our intention, right? Maybe we're either totally unaware of it, or we're aware of it, but it is a side effect of what our intention is.

It's not the intention itself. There are catastrophic risks from both types. The direct application of increased technological power to a rivalrous intent, which is gonna cause harm for some out-group for some in-group to win, but the out-group is also working on growing the tech, and if they don't lose completely, they reverse engineer the tech, upregulate it, come back with more capacity.

So there's the exponential tech arms race side of in-group, out-group rivalry using exponential tech that is one set of risks. And the other set of risks is the application of exponentially more powerful tech, not intentionally to try and beat an out-group, but to try to achieve some goal that we have, but to produce a second and third order effects that do have harm to the commons, to other people, to environment, to other groups, that might actually be bigger problems than the problem we were originally trying to solve with the thing we were building.

When Facebook was building a dating app and then building a social app where people could tag pictures, they weren't trying to build a democracy destroying app that would maximize time on site as part of its ad model through AI optimization of a newsfeed to the thing that made people spend most time on site, which is usually them being limbically hijacked more than something else, which ends up appealing to people's cognitive biases and group identities, and creates no sense of shared reality.

They weren't trying to do that, but it was a second order effect. And it's a pretty fucking powerful second order effect, and a pretty fast one, 'cause the rate of tech is obviously able to get distributed to much larger scale, much faster, and with a bigger jump in terms of total vertical capacity; that's what it means to get to the verticalizing part of an exponential curve.

So just like we can see that oil had these second order environmental effects, and also social and political effects, and war... so much of the whole, like, the total amount of oil used has a proportionality to total global GDP, and this way we have the petrodollar. And so the oil thing also had, as externalities, a major aspect of what happened with the military industrial complex and things like that.

But we can see the same thing with more current technologies, with Facebook and Google and other things. So I don't think we can run from this: the more powerful the tech is, we build it for reason X, whatever reason X is. Maybe X is three things, maybe it's one thing, right?

We're doing the oil thing because we wanna make cars because it's a better method of individual transportation. We're building the Facebook thing 'cause we're gonna connect people socially in a personal sphere. But it interacts with complex systems, with ecologies, economies, psychologies, cultures. And so it has effects on other than the thing we're intending.

Some of those effects can end up being negative effects. But because this technology, if we make it to solve a problem, has to overcome the problem, and the problem's been around for a while, it's gonna overcome it in a short period of time. So it usually has greater scale, a greater rate of magnitude in some way.

That also means that the externalities that it creates might be bigger problems. And you can say, well, but then that's the new problem and humanity will innovate its way out of that. Well, I don't think that's paying attention to the fact that we can't keep up with exponential curves like that, nor do finite spaces allow exponential externalities forever.

And this is why a lot of the smartest people thinking about this are thinking, well, no, I think we're totally screwed unless we can make a benevolent AI singleton that rules all of us. Guys like Bostrom and others are thinking in those directions, 'cause they're like, how do humans do multipolarity and make it work?

And I have a different answer of what I think it looks like that does have more to do with the love, but some applied social tech aligned with love. - 'Cause I have a bunch of really dumb ideas. I'd prefer to hear- - I'd like to hear some of them first.

I think the idea I would have is to be a bit more rigorous in trying to measure the amount of love you add or subtract from the world in second, third, fourth, fifth order effects. It's actually, I think, especially in the world of tech, quite doable. You just might not like, the shareholders may not like that kind of metric, but it's pretty easy to measure.

That's not even... I'm perhaps half joking about love, but we could talk about just happiness and well-being, long-term well-being. That's pretty easy for Facebook, for YouTube, for all these companies to measure. They do a lot of kinds of surveys. I mean, there are very simple solutions here where you could just survey people. Surveys are in some sense useless, because they only sample a subset of the population.

You're just trying to get a sense, a very loose kind of understanding, but integrated deeply as part of the technology. Most of our tech is recommender systems. Most of the, sorry, not tech, most online interaction is driven by recommender systems that learn very little data about you and, mostly based on traces of your previous behavior, use that data to suggest future things.

This is how Twitter works, this is how Facebook works, this is how Google's AdSense works, this is how Netflix and YouTube work, and so on. And for them, as opposed to just tracking engagement, how much time you spend on a particular video or a particular site, to also give you the technology to self-report what makes you feel good, what makes you grow as a person, what makes you the best version of yourself, the Rogan idea of being the hero of your own movie.
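(A minimal sketch of the kind of objective change being described here; the item names, the well-being score, and the blend weight are hypothetical illustrations, not any real platform's API: rank items by a mix of predicted engagement and the user's own retrospective rating instead of engagement alone.)

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float     # e.g. normalized expected watch time, 0..1
    self_reported_wellbeing: float  # user's own rating, -1 (worse) .. +1 (better)

def score(item: Item, wellbeing_weight: float = 0.5) -> float:
    # weight 0 reproduces pure engagement optimization; raising it trades
    # time-on-site for what the user says made them better.
    return (1 - wellbeing_weight) * item.predicted_engagement \
        + wellbeing_weight * item.self_reported_wellbeing

feed = [
    Item("lecture on complex systems", 0.4, 0.9),
    Item("outrage compilation", 0.9, -0.7),
    Item("how to cook a steak", 0.6, 0.1),
]
for item in sorted(feed, key=score, reverse=True):
    print(f"{score(item):+.2f}  {item.title}")
```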

And just add that little bit of information. You have these happiness surveys: how did you feel about the last five days, how would you report your experience? You can lay out a set of videos. This is kind of fascinating to watch. I don't know if you ever look at YouTube, at the history of videos you've looked at.

It's fascinating, and it's very embarrassing for me. It'll be like a lecture, and then a set of videos that I don't want anyone to know about, which would be like, I don't know, maybe five videos in a row where it looks like I watched the whole thing, which I probably did, about how to cook a steak, or just the best chefs in the world cooking steaks, and I'm just sitting there watching it for no purpose whatsoever, wasting away my life. Or funny cat videos, which, legit, that's always a good one.

And I could look back and rate which videos made me a better person and which did not. And on a more serious note, there's a bunch of conversations, podcasts, or lectures I've watched that made me a better person, and some that made me a worse person. Quite honestly, not for stupid reasons like feeling dumber, but because I have a sense that they started me on a path of not being kind to other people.

For example, and I'm sorry for ranting, but maybe there's some usefulness to this kind of exploration of self. When I focus on creating, on programming, on science, I become a much deeper thinker and a kinder person to others. When I listen to too many podcasts or videos, a little bit is good, but too many, about how our world is melting down, or criticizing ridiculous people, the worst of the quote-unquote woke, for example.

There's all these groups that are misbehaving in fascinating ways because they've been corrupted by power. The more I watch criticism of them, the worse I become. And I'm aware of this, but I'm also aware that for some reason it's pleasant to watch those sometimes. And so for me to be able to self-report that to the YouTube algorithm, to the systems around me, and they ultimately try to optimize to make me the best version of myself, which I personally believe would make YouTube a lot more money because I'd be much more willing to spend time on YouTube and give YouTube a lot more of my money.

That's great for business and great for humanity, because it'll make me a kinder person. It'll increase the love quotient, the love metric, and it'll make them a lot of money. I feel like everything's aligned. And you should do that not just for the YouTube algorithm, but also for military strategy and whether to go to war or not. Because one externality of going to war, which I think we talked about offline, is that we often go to war with governments, not with the people.

You have to think about the kids of those countries, who see a soldier and, because of what they experience in their interaction with that soldier, hate is born. When you're eight years old, six years old, and you lose your dad, your mom, a friend, somebody close to you. One really powerful externality that could be reduced to love, positive and negative, is the hate that's born from the decisions you make.

And that's going to come to fruition: that little seed is going to become a tree that then leads to the kind of destruction that we talk about. So, in my sense, it's possible to reduce everything to a measure of how much love this adds to the world.

All that to say, do you have ideas of how we practically build systems that create a resilient society? - There were a lot of good things that you shared where there's like 15 different ways that we could enter this that are all interesting. So I'm trying to see which one will probably be most useful.

- Pick the one or two things that are least ridiculous. - When you were mentioning if we could see some of the second order effects or externalities that we aren't used to seeing, specifically the one of a kid being radicalized somewhere else, which engenders enmity in them towards us, which decreases our own future security.

Even if you don't care about the kid. And if you do care about the kid, it's a whole other thing. Yeah, I mean, we saw this when Jane Fonda and others went to Vietnam and took photos and videos of what was happening, and you got to see the pictures of the kids with napalm on them.

The anti-war effort was bolstered by that in a way it couldn't have been without those images. Until we can see the images, we can't have a mirror neuron effect in the same way. And when we can, that starts to have a powerful effect. I think there's a deep principle you're sharing there, which is that we can have a rivalrous intent, where our in-group, whatever it is, maybe it's our political party wanting to win within the US, maybe it's our nation-state wanting to win a war or an economic contest over resources, but if we don't obliterate the other people completely, they don't go away.

They're not engendered to like us more. They didn't become less smart. So they have more enmity towards us, and whatever technologies we employed to be successful, they will now reverse engineer, iterate on, and come back with. And so you drive an arms race, which is why you can see wars over history employing ever more lethal weaponry.

And not just the kinetic war, but the information war and the narrative war and the economic war, right? It just increased capacity on all of those fronts. And so what seems like a win to us in the short term might actually produce losses in the long term. And what's in our own best interest in the long term is probably more aligned with everyone else's, 'cause we inter-affect each other.

And I think the thing about globalization and exponential tech and the rate at which we affect each other, and the rate at which we affect the biosphere that we're all affected by, is that this kind of proverbial spiritual idea that we're all interconnected, and need to think about that in some way, was easy for tribes to get, because everyone in the tribe so clearly saw their interconnection and dependence on each other.

But at the global level, the speed at which we are actually interconnected, the speed at which harm happening in Wuhan affects the rest of the world, or a new technology developed somewhere affects the entire world, or an environmental issue does, is making it to where we either actually all get it, not as a spiritual idea, just even as physics, right?

We all get the interconnectedness of everything, and we either all consider that and see how to make it through more effectively together, or failures anywhere end up becoming decreased quality of life, further failures, and increased risk everywhere. - Don't you think people are beginning to experience that at the individual level?

So governments are resisting it. They're trying to make us not empathize with each other, not feel connected. But don't you think people are beginning to feel more and more connected? Isn't that exactly what the technology is enabling? Social networks, we tend to criticize them, but isn't there a sense in which we're experiencing that?

- When you watch those videos that are criticizing, whether it's the woke Antifa side or the QAnon Trump supporter side, does it seem like they have increased empathy for people that are outside of their ideological camp? - No, not at all. So I may be conflating my own experience of the world and that of the populace.

I tend to see those videos as feeding something that's a relic of the past. They figured out that drama fuels clicks, but whether I'm right or wrong, I don't know. I tend to sense that that hunger for drama is not fundamental to human beings. That we actually want to understand Antifa, and we want to empathize.

We want to take radical ideas and be able to empathize with them and synthesize it all. - Okay, let's look at cultural outliers in terms of violence versus compassion. We can see that a lot of cultures have relatively lower in-group violence, bigger out-group violence. And there's some variance in them and variance at different times based on the scarcity or abundance of resource and other things.

But you can look at, say, Jains, whose whole religion is so centered on nonviolence that they don't even hurt plants; they only take fruits that have already fallen. Or, to go to a larger population, take Buddhists, where for the most part, with a few exceptions, across three millennia and lots of different countries and geographies, you have 10 million people, plus or minus, who don't hurt bugs.

The whole spectrum of genetic variance is happening within a culture of that many people, head traumas and whatever, and nobody hurts bugs. And then you look at a group where the kids grew up as child soldiers in Liberia or Darfur, where to make it to adulthood, pretty much everybody has killed people, hand to hand, including people who were civilians or innocents.

And you say, okay, so we are very neotenous. We can be conditioned by our environments, such that almost all the humans show up in these two different bell curves. It doesn't mean that the Buddhists had no violence. It doesn't mean that these people had no compassion. But they're very different Gaussian distributions.

And so one of the important things that I like to do is look at examples from populations: what Buddhism shows regarding compassion, or what Judaism shows around education, the average level of education that everybody gets because of a culture that is really working on conditioning it.

What are the positive deviants beyond the statistical norm, to see what is actually possible? And then say, what are the conditioning factors? And can we condition for a few of them simultaneously? And could we build a civilization like that? It becomes a very interesting question. So there's this kind of realpolitik idea that humans are violent.

Large groups of humans become violent and irrational, specifically those two things: rivalrous and violent, and irrational. And so in order to minimize the total amount of violence and have some good decisions, they need to be ruled somehow. And on this view, not getting that is some kind of naive utopianism that doesn't understand human nature yet.

This gets back to mimetic desire as an inexorable thing. I think the idea of the masses is actually a kind of propaganda, useful for the classes that control, to popularize the idea that most people are too violent, lazy, undisciplined and irrational to make good choices.

And therefore their choices should be sublimated in some kind of way. I think that if we look at these conditioning environments, we can say, okay, so some kids go to a really fancy school and have a good developmental environment, like Exeter Academy. There's still a Gaussian distribution of how well they do on any particular metric, but on average, they become senators.

And the worst ones become high-end lawyers or whatever. And then I look at the inner-city school with a totally different set of conditioning factors, and I see a very differently placed Gaussian distribution. So then I say, the masses? Well, if all those kids who were part of the masses got to go to Exeter and have that family and whatever, would they still be the masses?

Could we actually condition more social virtue, more civic virtue, more orientation towards dialectical synthesis, more empathy, more rationality, widely? Yes. Would that lead to better capacity for something like participatory governance, a democracy or republic? - Yes. - Yes. Is it necessary for it, actually?

Yes. And is it good for class interests? Not really. - By the way, when you say class interests, this is the powerful leading over the less powerful, that kind of idea? - Anyone that benefits from asymmetries of power doesn't necessarily benefit from decreasing those asymmetries of power and kind of increasing the capacity of people more widely.

And so when we talk about power, we're talking about asymmetries in agency, influence and control. - You think that hunger for power is fundamental to human nature? I think we should get that straight before we talk about other stuff. Like this pickup line that I use at a bar often: power corrupts, and absolute power corrupts absolutely.

Is that true, or is that just a fancy thing to say? And have we changed as societies over time in terms of how much we crave power? - That there is an impulse towards power that is innate in people and can be conditioned one way or the other, yes.

But you can see that Buddhist society does a very different thing with it at scale. You don't end up seeing the emergence of the same types of sociopathic behavior, and particularly the creation of sociopathic institutions. And so it's like eating the foods that were rare in our evolutionary environment, salt, fat, sugar, which give us more of a dopamine hit because they were rare then, and they're not anymore.

Is there something pleasurable about those, such that humans have an orientation to overeat if they can? Well, the fact that there is that possibility doesn't mean everyone will obligately be obese and die of obesity, right? It's possible to have a particular impulse, to be able to understand it, to have other ones, and to balance them.

And so to say that power dynamics are obligate in humans and we can't do anything about it is, to me, very similar to saying everyone is going to be obligately obese. - Yeah, so there's some degree to which the control of those impulses has to do with conditioning early in life.

- Yes, and the culture that creates the environment to be able to do that and then the recursion on that. - Okay, so what if we were to, just bear with me, just asking for a friend, if we were to kill all humans on earth and then start over, is there ideas about how to build up, okay, we don't have to kill it, let's leave the humans on earth, they're fine, and go to Mars and start a new society.

Are there ways to construct systems of conditioning, of education, of how we live with each other, that would incentivize us properly? To not seek power, to not construct systems of asymmetric power, and to create systems that are resilient to all kinds of terrorist attacks, to all kinds of destruction?

- I believe so. - So are there some inklings? Of course you probably don't have all the answers, but you have insights about what that looks like. I mean, is it just rigorous practice of dialectical synthesis, essentially conversations with assholes of various flavors until they're not assholes anymore, because you've become deeply empathetic with their experience?

- Okay, so there's a lot of things that we would need to construct to come back to this, like what is the basis of rivalry? How do you bind it? How does it relate to tech? If you have a culture that is doing less rivalry, does it always lose in war to those who do war better?

And what does the enactment of how to get there from here look like? - Great, great. So what's rivalry? Is rivalry bad or good? Is another word for rivalry competition? - Yes, roughly yes. And I think bad and good are kind of silly concepts here.

Good for some things, bad for other things. - For resilience. - In some contexts and not others. Even that. Let me give you an example that relates back to the Facebook measuring thing you were mentioning a moment ago. First, I think what you're saying is actually aligned with the right direction and what I wanna get to in a moment, but the devil is in the details here.

So-- - I enjoy praise. It feeds my ego. I grow stronger, so I appreciate that. - I will make sure to include one piece every 15 minutes as we go. - Thank you. - So, comfort is easier to measure. There are problems with this argument, but there's also utility to it.

So let's take it for the utility it has first. It's harder to measure happiness than it is to measure comfort. We can measure with technology that the shocks in a car are making the car bounce less, that the bed is softer and material science and those types of things.

And happiness is actually hard for philosophers to define because some people find that there's certain kinds of overcoming suffering that are necessary for happiness. There's happiness that feels more like contentment and happiness that feels more like passion. Is passion the source of all suffering or the source of all creativity?

There's deep stuff there, and it's mostly first-person, not measurable third-person stuff, even if maybe it corresponds to third-person stuff to some degree. But we also see examples, some of our favorite examples, of people who are in the worst environments who end up finding happiness, right? Where the third-person stuff looks to be less conducive, and there's Viktor Frankl, Nelson Mandela, whatever.

But it's pretty easy to measure comfort, and it's pretty universal. And I think we can see that the industrial revolution started to replace happiness with comfort quite heavily as the thing it was optimizing for. And we can see what happens when increased comfort is given, maybe because of the evolutionary disposition that expending extra calories was not a safe thing to do when, for the majority of our history, we didn't have extra calories.

Who knows why? When extra comfort is given, it's very easy to take that path, even if it's not the path that supports overall wellbeing long-term. And so when you look at the techno-optimist idea that we have better lives than Egyptian pharaohs and kings, what they're largely looking at is how comfortable our beds are and how comfortable our transportation systems are and things like that, in which case there's massive improvement.

But we also see that in some of the nations where people have access to the most comfort, suicide and mental illness are the highest. And we also see that some of the happiest cultures are actually some of the ones that are in materially lame environments. And so there's a very interesting question here.

And if I understand correctly, you do cold showers. And Joe Rogan was talking about how he needs some fairly intensive kind of struggle, a non-comfort, to actually become better as a person. This is the concept of hormesis: stressing an adaptive system increases its adaptive capacity. And the happiness of a system has something to do with its adaptive capacity, its overall resilience, health, wellbeing, which requires a decent bit of discomfort.

And yet in the presence of the comfort solution, it's very hard not to choose it. And as you choose it regularly, you actually down-regulate your overall adaptive capacity. And so when we start saying, can we make tech where we're measuring for the things it produces beyond just GDP, or whatever particular measures look like the revenue or profit generation of my business, are all the meaningful things measurable?

And what are the right measures? And what are the externalities of optimizing for that measurement set? What meaningful things aren't included in that measurement set that might have their own externalities? These are some of the questions we actually have to take seriously. - Yeah, and I think they're answerable questions, right?

- Progressively better, not perfect. - Right. So first of all, let me throw happiness and comfort out of the discussion; that distinction now seems useless. 'Cause I said they're useful, and wellbeing is useful, but I take it back. I propose new metrics in this brainstorm session. One is personal growth, which includes intellectual growth.

I think we're able to make that concrete for ourselves. Are you a better person than you were a week ago, or a worse person? I think we can self-report that and understand what it means. It's a gray area, and we struggle to define it, but I think we humans are pretty good at it, because we have an idealistic sense of the person we might be able to become.

We all dream of becoming a certain kind of person, and I think we have a sense of whether we're getting closer to that person or not. Maybe this is not a great metric, fine. The other one is love, actually. Fuck whether you're happy or not, or comfortable or not: how much love do you have towards your fellow human beings?

I feel like if you try to optimize for that and increase it, that's a good metric. How many times a day, sorry, if I can quantify it, how many times a day have you thought positively of another human being? Just put that down as a number, and increase that number.

- I think the process of saying, okay, so let's not take GDP or GDP per capita as the metric we wanna optimize for, because GDP goes up during war, and it goes up with more healthcare spending from sicker people and various things that we wouldn't say correlate to quality of life.

Addiction drives GDP awesomely. - By the way, when I said growth, I wasn't referring to GDP. - I know, I'm giving an example now of the primary metric we use and why it's not an adequate metric, 'cause we're exploring other ones. So the idea of saying, what would the metrics for a good civilization be?

If I had to pick a set of metrics, what would the best ones be, if I was gonna optimize for those? And then really try to run the thought experiment more deeply and say, okay, so what happens if we optimize for that? Try to think through the first and second and third order effects of what happens that's positive, and then also say what negative things can happen from optimizing that?

What actually matters that is not included in that, or in that way of defining it? Because with love versus number of positive thoughts per day, I could just make a long list of names and say one positive thing about each. It could be all very superficial, not include animals or the rest of life, have a very shallow total amount of it, but I'm optimizing the number, and I get credit for the number.

So this is what I meant when I said the model of reality isn't reality. When you make a set of metrics and say we're gonna optimize for this, whatever parts of reality are not included in those metrics can be the areas where harm occurs. Which is why I would say that wisdom is something like the discernment that leads to right choices beyond what metrics-based optimization would offer.
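[Editor's note: to make this metric-gaming failure mode concrete, here is a toy illustration in code. The metric, the logs, and the names are all invented; the point is only that a naive proxy for love is maximized by exactly the degenerate list-of-names strategy described above, while everything the metric leaves out goes unmeasured.]

```python
# Toy illustration of Goodhart-style metric gaming; all data invented.

def positive_thoughts_metric(log: list[str]) -> int:
    """Naive proxy for 'love': count entries that mention someone positively."""
    return len(log)

# Degenerate strategy: recite a long list of names with token positive thoughts.
gamed_log = [f"nice thought about person_{i}" for i in range(1000)]

# Sincere strategy: one deep, costly act of care the metric barely registers.
sincere_log = ["spent the afternoon helping a grieving friend"]

print(positive_thoughts_metric(gamed_log))    # 1000 -- metric maxed out
print(positive_thoughts_metric(sincere_log))  # 1    -- barely moves
```

[The proxy diverges from the thing it stood in for, which is why the conversation lands on wisdom as discernment beyond any fixed metric set.]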

- Yeah, but another way to say that is wisdom is a constantly expanding and evolving set of metrics. - Which means that there is something in you that is recognizing a new metric that's important that isn't part of that metric set. So, there's a certain kind of connection, discernment, awareness, and this is-- - It's iterative game theory.

- There's Gödel's incompleteness theorem, right? If the set of axioms is consistent, it won't be complete. So we're gonna keep adding to it, which is why we were saying earlier, I don't think that's not beautiful. And especially if, as you were just saying, one of the metrics you wanna optimize for at the individual level is becoming, right?

That we're becoming more. Well, that then becomes true for the civilization and our metric sets as well. And our definition of how to think about a meaningful life and a meaningful civilization. I can tell you what some of my favorite metrics are. - What's that? - Well, love is obviously not a metric.

- Oh, what you can bench? - Yeah. - It's a good metric. - Yeah, I wanna optimize that across the entire population, starting with infants. So, in the same way that love isn't a metric, but you could make metrics that look at certain parts of it. The thing I'm about to say isn't a metric, but it's a consideration.

'Cause I thought about this a lot. I don't think there is a single right metric. I think that every metric by itself, without this thing we talked about of continuous improvement, becomes a paperclip maximizer. I think that's what the idea of a false idol means, in terms of the model of reality not being reality.

Then my sacred relationship is to reality itself, which also binds me to the unknown forever. To the known, but also to the unknown. And there's a sense of sacredness connected to the unknown that creates an epistemic humility that is always seeking not just to optimize the thing I know, but to learn new stuff.

And to be open to perceive reality directly. So, my model never becomes sacred. My model is useful. - So, the model can't be the false idol. - Correct. And this is why the first verse of the Tao Te Ching is the Tao that is nameable is not the eternal Tao.

The naming then can become the source of the 10,000 things, which, if you get too carried away with it, can actually keep you from paying attention to reality beyond the models. - It sounds a lot like Stephen Wolfram, but in a different language, much more poetic. - I can imagine that.

- No, I'm joking, I'm referring to the echoes of cellular automata, which you can't name. You can't construct a good model of cellular automata; you can only watch in awe. I apologize, I'm distracting your train of thought horribly and miserably. That, by the way, is something robots aren't good at.

And dealing with the uncertainty of uneven ground. You've been okay so far. You've been doing wonderfully. So what are your favorite metrics? - Okay, so-- - That's why I know you're not a robot. You passed an interim Turing test. - So one metric, and there are problems with this, but one that I like to consider as a thought experiment, 'cause you're actually asking, I mean, I know you ask your guests about the meaning of life.

'Cause ultimately, when you're asking what is a desirable civilization, you can't answer that without answering what is a meaningful human life, 'cause it's gonna be in relationship to that, right? And then, whatever your answer is, how do you know? What is the epistemic basis for postulating it?

What is the epistemic basis for postulating that? - There's also a whole 'nother reason for asking that question. I don't, I mean, that doesn't even apply to you whatsoever, which is it's interesting how few people have been asked questions like it. We joke about these questions as silly. - Right.

- It's funny to watch a person. And if I were more of an asshole, I would really stick on that question. - Right. - It's a silly question in some sense, but we haven't really considered what it means. A more concrete version of the question is: what is a better world?

What is the kind of world we're trying to create? Really? Have you really thought about the kind of world? - I'll give you some kind of simple answers to that that are meaningful to me. But let me do the societal indices first, 'cause they're fun. - Yes. - We should take a note of this meaningful thing, 'cause it's important to come back to.

- Are you reminding me to ask you about the meaning of life? Noted. - I know. - Let me jot that down. - 'Cause I think I stopped tracking at like 25 open threads. Okay. - Let it all burn. - One index that I find very interesting is inversely correlated with addiction within a society.

The more a society produces addiction in the people within it, the less healthy I think the society is, as a pretty fundamental metric. The healthier the society, the more the individuals feel that there are fewer compulsions compelling them to behave in ways that are destructive to their own values.

And insofar as a civilization is conditioning and influencing the individuals within it, the inverse of addiction. - Broadly defined. - Correct. - Addiction, what is it? - Compulsive behavior that is destructive towards things that we value. - Yeah. - I think that's a very interesting one to think about. - That's a really interesting one, yeah.

- And this is then also where comfort and addiction start to get very close. And the ability to go in the other direction from addiction is the ability to be exposed to hypernormal stimuli and not go down the path of desensitizing to other stimuli and needing that hypernormal stimuli, which does involve a kind of hormesis.

So I do think the civilization of the future has to create something like ritualized discomfort. - Ritualized discomfort. - Yeah. I think that's what the sweat lodge and the vision quest and the solo journey and the ayahuasca journey and the Sundance were. I think it's even a big part of what yoga asana was: to make beings that are resilient and strong, they have to overcome some things.

To make beings that can control their own mind and fear, they have to face some fears. But we don't want to put everybody through war or real trauma. And yet we can see that the most fucked-up people we know had childhoods with a lot of trauma, but some of the most incredible people we know also had childhoods with a lot of trauma, depending on whether they happened to make it through and overcome it or not.

So how do we get the benefits of the steeling of character, and the resilience, and whatever else came from the difficulty, without traumatizing people? A certain kind of ritualized discomfort that not only has us overcome something by ourselves, but overcome it together with each other, where nobody bails when it gets hard 'cause the other people are there.

So it's both a resilience of the individuals and a resilience of the bonding. So I think we'll keep getting more and more comfortable stuff, but we have to also develop resilience in the presence of that for the anti-addiction direction and the fullness of character and the trustworthiness to others.

- So you have to be consistently injecting discomfort into the system, ritualized. I mean, this sounds like you have to imagine Sisyphus happy. You have to imagine Sisyphus with his rock, optimally resilient from a metrics perspective. So we want to constantly be throwing rocks at ourselves. - Not constantly.

You didn't have to do- - Frequently. - Periodically. - Periodically. - And there are different levels of intensity, different periodicities. Now, I do not think this should be imposed by states. I think it should emerge from cultures, with the cultures developing people who understand the value of it.

So there is both a cultural cohesion to it, but also a voluntarism, because the people value the thing that is being developed; they understand it. - And that's what conditioning is, so it's conditioning some of these values. - Conditioning is a bad word because we like our idea of sovereignty. But when we recognize that the language we speak, the words we think in, the patterns of thought built into that language, the aesthetics we like, so much is conditioned in us just based on where we're born, you can't not condition people.

So all you can do is take more responsibility for what the conditioning factors are. And then you have to think about this question of what is a meaningful human life, because unlike the other animals, born into environments they're genetically adapted for, we're building new environments that we were not adapted for, and then we're being affected by them.

So then we have to say, well, what kinds of environments, digital environments, physical environments, social environments would we want to create that would develop the healthiest, happiest, most moral, noble, meaningful people? What are even those sets of things that matter? So you end up getting deep existential consideration at the heart of civilization design when you start to realize how powerful we're becoming and how much what we're building it in service towards matters.

- Before I pull on the, I think, three threads you just laid down, is there another metric index that you're interested in? - I'll tell you one more that I really like. There are a number, but for the next one that comes to mind, I have to make a very quick model.

In healthy human bonding, say we were in a tribal-type setting, my positive emotional states and your positive emotional states would most of the time be correlated, and likewise your negative emotional states and mine. And so you start laughing, I start laughing; you start crying, my eyes might tear up. And we would call that the compassion-compersion axis.

This is a model I find useful. So compassion is when you're feeling something negative, I feel some pain, I feel some empathy, something in relationship. Compersion is when you do well, I'm stoked for you. I actually feel happiness at your happiness. - I like compersion.

- Yeah, the fact that it's such an uncommon word in English is actually a problem culturally. - 'Cause I feel that often, and I think that's a really good feeling to feel and to maximize for, actually. - That's actually the metric I was going to say. - Oh, wow. - The compassion-compersion axis is the thing I would optimize for.

Now, there is a state where my emotional states and your emotional states are just totally decoupled. And that is like sociopathy. I don't want to hurt you, but I don't care if I do or for you to do well or whatever. But there's a worse state and it's extremely common, which is where they're inversely coupled, where my positive emotions correspond to your negative ones and vice versa.

And that is what I would call the jealousy-sadism axis. The jealousy side is: when you're doing really well, I feel something bad. I feel taken away from, less than, upset, envious, whatever. And that's so common. But I think of it as a kind of low-grade psychopathology that we've just normalized.

The idea that I'm actually upset at the happiness or fulfillment or success of another is like a profoundly fucked up thing. No, we shouldn't shame it and repress it so it gets worse. We should study it. Where does it come from? And it comes from our own insecurities and stuff.

But then the next part, which everybody knows is really fucked up, is just the same axis inverted. Where jealousy or envy is feeling badly when you're doing well, the sadism side is that I actually feel good when you lose, or when you're in pain, I feel some happiness associated with it.

And you can see that when someone feels jealous, sometimes with a partner, they want that partner to get it; revenge comes up, or something like it. So jealousy is one step on the path to sadism, away from the healthy compassion-compersion axis. So I would like to see a society that is conditioning jealousy and sadism down, right?

The lower that amount, and the more the compassion-compersion, the better. And if I had to summarize it very simply, I'd say optimize for compersion. 'Cause notice, that's not just saying love for you, where I might be self-sacrificing and miserable, loving people but killing myself, which nobody thinks is a great idea.

Or happiness, where I might be sociopathically happy while causing problems all over the place, or even sadistically happy. It's a coupling, right? I'm actually feeling happiness in relationship to yours, even in causal relationship, where my own agentic desire to get happier wants to support you too.
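[Editor's note: the coupling model here lends itself to a simple, hedged operationalization: treat two people's emotional states as time series and look at the sign of their correlation. Positive coupling maps to the compassion-compersion axis, near-zero to sociopathic decoupling, and negative to the jealousy-sadism axis. The mood data and thresholds below are invented for illustration; this is a sketch of the idea, not a validated psychological measure.]

```python
# Sketch: classifying emotional-state coupling (illustrative thresholds).
import statistics

def coupling(a: list[float], b: list[float]) -> float:
    """Pearson correlation between two emotional-state series (Python 3.10+)."""
    return statistics.correlation(a, b)

def axis(r: float) -> str:
    if r > 0.3:
        return "compassion-compersion (positively coupled)"
    if r < -0.3:
        return "jealousy-sadism (inversely coupled)"
    return "decoupled (sociopathy-like indifference)"

# Invented daily mood scores (-1 = terrible day, +1 = great day).
you = [0.8, -0.4, 0.5, 0.9, -0.7, 0.2]
me_compersive = [0.7, -0.3, 0.6, 0.8, -0.6, 0.1]   # I feel with you
me_jealous = [-0.6, 0.5, -0.4, -0.8, 0.7, -0.1]    # your wins hurt me

print(axis(coupling(you, me_compersive)))  # compassion-compersion
print(axis(coupling(you, me_jealous)))     # jealousy-sadism
```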

- That's actually, speaking of another pickup line, quite honestly what I look for in a relationship. As a guy who's single, this is gonna come out very ridiculous, 'cause it's like, oh yeah, where's your girlfriend, bro? But that's what I look for, 'cause it's such an amazing life when you actually get joy from another person's success and they get joy from your success.

And then you don't actually need to succeed much for that to create a cycle of happiness that just increases exponentially. It's weird. Just enjoying the happiness of others, the success of others. So let's name this, 'cause the first person that drilled it into my head is Rogan, Joe Rogan.

He was the embodiment of that, 'cause I saw somebody who was successful, rich, and nonstop, genuinely, and you can tell when somebody's full of shit and when somebody's not, enjoying the success of his friends. That was weird to me. That was interesting. And the way you're speaking to it, the reason Joe stood out to me is that I guess I haven't witnessed genuine expression of that often in this culture, of just real joy for others.

I mean, part of that has to do with the fact that there haven't been many channels where you can watch or listen to people being their authentic selves. So I'm sure there are a bunch of people who live life with compersion; they probably don't seek public attention either. But if there was any word that could express what I've learned from Joe, and why he's been a really inspiring figure, it's that compersion.

And I wish our world had a lot more of that. I mean, sorry to go on a small tangent, but you're speaking about how society should function, and I feel like if you optimize for that metric in your own personal life, you're going to live a truly fulfilling life.

I don't know what the right word to use is, but that's a really good way to live life. - You will also learn what gets in the way of it and how to work with it, so that if you wanted to help build systems at scale, or apply Facebook or exponential technologies to do that, you would have more actual depth of real knowledge of what that takes.

And this is, as you mentioned, that there's this virtuous cycle between when you get stoked on other people doing well and then they have a similar relationship to you and everyone is in the process of building each other up. And this is what I would say the healthy version of competition is versus the unhealthy version.

The healthy version, right: the root of 'compete', I believe, is a Latin word that means to strive together. And it's that impulse of becoming, where I want to become more, but I recognize that there's actually a hormesis, a challenge, that is needed for me to be able to do that.

But that means that, yes, there's an impulse where I'm trying to get ahead, maybe I'm even trying to win, but I actually want a good opponent and I want them to get ahead too because that is where my ongoing becoming happens. And the win itself will get boring very quickly.

The ongoing becoming is where there's aliveness. And for the ongoing becoming, they need to have it too. And that's the strive together. So in the healthy competition, I'm stoked when they're doing really well 'cause my becoming is supported by it. - Now, this is actually a very nice segue into a model I like about what a meaningful human life is, if you want to go there.

- Let's go there. - We can go somewhere else if you want. - Well, I have three things I'm going elsewhere with, but if we were first, let us take a short stroll through the park of the meaning of life. Daniel, what is a meaningful life? - I think the semantics end up mattering 'cause a lot of people will take the word meaning and the word purpose almost interchangeably.

And they'll think kind of what is the meaning of my life? What is the meaning of human life? What is the meaning of life? What's the meaning of the universe? And what is the meaning of existence rather than non-existence? So there's a lot of kind of existential considerations there.

And I think there's some cognitive mistakes that are very easy. Like taking the idea of purpose. - Which is like a goal. - Which is a utilitarian concept. The purpose of one thing is defined in relationship to other things that have assumed value. And to say, what is the purpose of everything?

Well, purpose is too small of a question. It's fundamentally a relative question within everything: what is the purpose of one thing relative to another? What is the purpose of everything? There's nothing outside of it with which to say. You've actually just hit the limits of the utility of the concept of purpose.

It doesn't mean it's purposeless in the sense of something inside of it being purposeless; it means the concept is too small. Which is why you end up, like in Taoism, talking about the nature of it. There's a fundamental what, where the why can't go deeper. It is the nature of it.

But I'm gonna try to speak to a much simpler part, which is how people think about what a meaningful human life is. If we were to optimize for something at the level of the individual life, how does optimizing for it also lead to the best society, insofar as people living that way affect others and, long-term, the world as a whole?

And how would we then make a civilization that was trying to think about these things? 'Cause you can see that there are a lot of dialectics where there's value on two sides, individualism and collectivism, or the ability to accept things and the ability to push harder and whatever. And there's failure modes on both sides.

And so when you were starting to say, okay, individual happiness, and you're like, wait, fuck, sadists can be happy while hurting people. It's not individual happiness, it's love. But wait, some people can self-sacrifice out of love in a way that actually ends up just creating codependency for everybody. Or, okay, so how do we think about all those things together?

This kind of came to me as a simple way to relate to it: a meaningful life involves the mode of being, the mode of doing, and the mode of becoming, and a virtuous relationship between those three. Any of those modes on its own has failure modes that are not a meaningful life.

The mode of being, the way I would describe it, if we're talking about the essence of it, is about taking in and appreciating the beauty of life that is now. It's a mode that is in the moment, and that is largely about being with what is. It's fundamentally grounded in the nature of experience and the meaningfulness of experience.

The prima facie meaningfulness of experience: when I'm having this experience, I'm not actually asking what the meaning of life is. I'm full of it. I'm full of experiencing it. - The momentary experience. - Yes. So, being is taking in the beauty of life. Doing is adding to the beauty of life.

I'm gonna produce some art. I'm gonna produce some technology that will make life easier, more beautiful for somebody else. I'm going to do some science that will end up leading to better insights or other people's ability to appreciate the beauty of life more because they understand more about it, or whatever it is, or protect it.

I'm gonna protect it in some way. But that's adding to, or being in service of the beauty of life through our doing. And becoming is getting better at both of those. Being able to deepen our being, which is to be able to take in the beauty of life more profoundly, be more moved by it, touched by it.

And increasing our capacity with doing to add to the beauty of life more. And so, I hold that a meaningful life has to be all three of those. And where they're not in conflict with each other, ultimately, it grounds in being. It grounds in the intrinsic meaningfulness of experience.

And then my doing is ultimately something that will be able to increase the possibility of the quality of experience for others. And my becoming is a deepening on those. So it grounds in experience, and also the evolutionary possibility of experience. - And the point is to oscillate between these, never getting stuck on any one.

- Yeah. - Or I suppose in parallel, well, you can't really, attention is a thing. You can only allocate attention. I want moments where I am absorbed in the sunset and I'm not thinking about what to do next. And then the fullness of that can make it to where my doing doesn't come from what's in it for me.

'Cause I actually feel overwhelmingly full already. And then it's like, how can I make life better for other people that don't have as much opportunities I had? How can I add something wonderful? How can I just be in the creative process? And so I think where the doing comes from matters.

And if the doing comes from a fullness of being, it's inherently going to be paying attention to externalities. Or it's more oriented to do that than if it comes from some emptiness that is trying to get full in some way that is willing to cause sacrifices other places and where a chunk of its attention is internally focused.

And so when Buddha said, "Desire is the cause of all suffering," and then later the vow of the Bodhisattva was to show up for all sentient beings in the universe forever, that's a pretty intense thing to desire. I would say there is a kind of desire, if we think of desire as a basis for movement, like a flow or a gradient, that comes from something missing inside, seeking fulfillment of it in the world.

That ends up being the cause of actions that perpetuate suffering. But there's also not just non-desire; there's a kind of desire that comes from feeling full at the beauty of life and wanting to add to it, a flow in the other direction. And I don't think that is the cause of suffering.

I think that's where the Western traditions come in, right? The Eastern traditions focused on that kind of unconditional happiness, in the moment, outside of time. The Western traditions said, "No, actually desire is the source of creativity, and we are here to be made in the image and likeness of the creator.

"We're here to be fundamentally creative." But creating from where and in service of what? Creating from a sense of connection to everything and wholeness in service of the wellbeing of all of it is very different. Which is back to that compassion, compersion axis. - Being, doing, becoming. It's pretty powerful.

It could also potentially be turned into an algorithm for a robot. But where does death come into this? Being is forgetting the concept of time completely, while there's a sense in which doing and becoming have a deadline built in, an urgency built in. Do you think death is fundamental to this, to a meaningful life?

Acknowledging or feeling the terror of death, like Ernest Becker, or just acknowledging the uncertainty, the mystery, the melancholy nature of the fact that the ride ends? Is that part of this equation, or is it not necessary? - Okay, let's look at how it could be related. I've experienced fear of death.

I've also experienced times where I thought I was gonna die, and it felt extremely peaceful and beautiful. And it's funny, because we can be afraid of death because we're afraid of hell or a bad reincarnation or the bardo or some idea of the afterlife we have, or we're projecting some kind of sentient suffering.

But if we're afraid of just non-experience: I notice that every time I stay up late enough that I'm really tired, I'm longing for deep sleep and non-experience, right? I'm actually longing for experience to stop. And it's not morbid, it's not a bummer. And I don't mind falling asleep.

And I sometimes when I wake up, wanna go back into it. And then when it's done, I'm happy to come out of it. So when we think about death and having finite time here, and we could talk about if we live for a thousand years instead of a hundred or something like that, it would still be finite time.

The one bummer about the age at which we die is that I generally find people mostly start to emotionally mature just shortly before they die. And if I get to live forever, I can just stay focused on what's in it for me forever. But if life continues, and consciousness and sentience and people appreciating beauty and adding to it and becoming continue, then even though my life doesn't, my life can have effects that continue well beyond it.

Then life with a capital L starts mattering more to me than my life. My life gets to be a part of it, and in service to it. There's the whole thing about old men planting trees whose shade they'll never get to sit in. I remember the first time I read this poem by Hafez, the Sufi poet, written in like the 13th century or something like that.

And he talked about how, if you're lonely, to think of him, and he was kind of leaning his spirit into yours across the distance of a millennium, coming for you with these poems. Here he was, thinking about people a millennium later, caring about their experience, what they'd be suffering, whether they'd be lonely, and whether he could offer something that could touch them.

And it's just fucking beautiful. The most beautiful parts of humans have to do with something that transcends what's in it for me, and death forces you to that. - So not only does death create the urgency, the urgency of doing. You're very right. It also has a sense in which it incentivizes the compersion and the compassion.

- And the widening. You remember Einstein had that quote, something to the effect of: it's an optical delusion of consciousness to believe there are separate things. There's this one thing we call universe, and something about us being inside a prison of perception that can only see a very narrow little bit of it.

But this might be just some weird disposition of mine, but when I think about the future after I'm dead and I think about consciousness, I think about young people falling in love for the first time and their experience. And I think about people being awed by sunsets and I think about all of it, right?

I can't not feel connected to that. - Do you feel some sadness to the very high likelihood that you will be forgotten completely by all of human history? You, Daniel, the name, that which cannot be named? - Systems like to self perpetuate. Egos do that. The idea that I might do something meaningful that future people will appreciate, of course there's like a certain sweetness to that idea.

But I know how many people did things that I wouldn't be here without, and that my life would be less without, whose names I will never know. And I feel a gratitude to them, a closeness; I feel touched by that. And to the degree that future people are conscious enough, well, a lot of traditions had this idea of being good ancestors, and of respect for the ancestors beyond their names.

I think that's a very healthy idea. - But let me return to a much less beautiful and much less pleasant conversation. You mentioned prison. - Back to X-Risk, okay. - And conditioning. You mentioned something about the state. So what role, let's talk about companies, governments, parents, all the mechanisms that can be a source of conditioning.

Which flavor of ice cream do you like? Do you think the state is the right thing for the future? Governments that are elected, democratic systems, representative democracy. Is there some kind of political system of governance that you find appealing? Or is it parents, meaning very close-knit tribes, whose conditioning is the most essential?

Or then you and Michael Malice would happily agree that it's anarchy, where the state should be dissolved or destroyed, or burned to the ground if you're Michael Malice, giggling, holding the torch as the fire burns. So which is it? Can the state be good, or is the state bad for the conditioning of a beautiful world?

A or B, this is like an SAT test. - You like to give these simplified good-or-bad choices. Would I like the state that we live in currently, the United States federal government, to stop existing today? No, I would really not like that. I think that'd be quite bad for the world in a lot of ways.

Do I think that it's an optimal social system, maximally just and humane and all those things, and do I want it to continue as is? No, also not that. But I am much more interested in it being able to evolve into a better thing without going through the catastrophe phase that I think its sudden non-existence would bring.

- So what size of state is good? In a sense, as human society becomes more globalized, should we be constantly striving to reduce... we can literally put on a map, right now, the centers of power in the world. Some of them are tech companies, some of them are governments.

Should we be trying, as much as possible, to decentralize power, to where it's very difficult to point on the map to the centers of power? And that means making the state smaller. There are a bunch of different ways to do that: in the United States, it could be reducing the funding for the government, the set of responsibilities, the set of powers.

It could be, I mean, this is far out, but making more nations, or maybe nations defined not by geographic location but in the space of ideas, which is what anarchy is about. Anarchy is about forming collectives based on sets of ideas, and doing so dynamically, not based on where you were born, and so on.

- I think we can say that the natural state of humans, if we want to describe such a thing, was to live in tribes that were below the Dunbar number, meaning that for a few hundred thousand years of human history, all of the groups of humans mostly stayed under that size.

And whenever a group would get up to that size, it would end up cleaving. So there seems to be a pretty strong pattern there. But there weren't individual humans out in the wild doing really well, right? We were a group animal, but with groups that had a specific size. So we could say, in a way, humans were being domesticated by those groups.

They were learning how to have certain rules to participate with the group without which you'd get kicked out, but that's still the wild state of people. - And maybe it's useful to do as a side statement, which I've recently looked at a bunch of papers around Dunbar's number, where the mean is actually 150.

If you actually look at the original papers-- - It's a range. - It's really a range. It's actually somewhere under a thousand, a range of like two to 500 or whatever it is. I think the range is actually two to 520, something like that.

And the mean is taken crudely; it's not a very good paper, numerically speaking. But it'd be interesting if there were a bunch of Dunbar numbers that could be computed for particular environments, particular conditions, and so on. It is very true that it's likely to be something small, you know, under a million.

But it'd be interesting if we could expand that number in interesting ways that would change the fabric of this conversation. I just want to throw that in there. I don't know if the 150 is somehow baked into the hardware. - We can talk about some of the things that it probably has to do with.

Up to a certain number of people, and this is gonna be variable based on the social technologies that mediate it to some degree. We can talk about that in a minute. Up to a certain number of people, everybody can know everybody else pretty intimately. So let's go ahead and just take 150 as an average number.

Everybody can know everyone intimately enough that if your actions made anyone else do poorly, it's your extended family, you're stuck living with them, and you know who they are. There are no anonymous people, no "them over there." And that's one part of what leads to a kind of tribal process where what's good for the individual and what's good for the whole are coupled.

Also, below that scale, everyone is somewhat aware of what everybody else is doing. There aren't groups that are very siloed. And as a result, it's actually very hard to get away with bad behavior. There's a kind of forced transparency. And so you don't need the state in that way, and lying to people doesn't actually get you ahead.

Sociopathic behavior doesn't get you ahead because it gets seen. And so there's a conditioning environment where the individual behaving in a way that is aligned with the interests of the tribe is what gets conditioned. When it gets to be a much larger system, it becomes easier to hide certain things from the group as a whole, as well as to be less emotionally bound to a bunch of anonymous people.

I would say there's also a communication protocol, where up to about that number of people, we could all sit around a tribal council and be part of a conversation around a really big decision. Do we migrate? Do we not migrate? Do we get rid of this person? Something like that.

And why would I want to agree to be a part of a larger group where everyone can't be part of that council? And so I am going to now be subject to law that I have no say in. If I could be part of a smaller group that could still survive and I get a say in the law that I'm subject to.

So I think with the cleaving, a way we can look at it beyond the Dunbar number, too, is that a civilization has binding energy that is holding it together and has cleaving energy. And if the binding energy exceeds the cleaving energy, that civilization will last.
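As a bare inequality (a minimal formalization of the claim as stated, with the two energies as loose aggregate quantities rather than anything measured):

\[ E_{\text{binding}} > E_{\text{cleaving}} \ \Rightarrow\ \text{the civilization holds together}; \qquad E_{\text{binding}} < E_{\text{cleaving}} \ \Rightarrow\ \text{it cleaves}. \]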

And so there are things that we can do to decrease the cleaving energy within the society, things we can do to increase the binding energy. I think naturally we saw that it had certain characteristics up to a certain size, a kind of tribalism. That ended with a few things. It ended with people having migrated enough that when you started to get resource wars, you couldn't just migrate away easily.

And so tribal warfare became more obligate. It involved the plow and the beginning of real economic surplus. So there were a few different kinds of forcing functions. But we're talking about what size it should be, right? What size should a society be? And I think, if we think about your body for a moment as a self-organizing complex system that is multi-scaled, we think about-- - Our body is a wonderland.

- Our body is a wonderland, yeah. You have-- - That's a John Mayer song, I apologize. But yes, so if we think about our body and the billions of cells that are in it. - Well, think about how ridiculous it would be to try to have all the tens of trillions of cells in it with no internal organizational structure, right?

Just like a sea of protoplasm, it wouldn't work. - Pure democracy. - And so you have cells and tissues, and then tissues and organs, and organs and organ systems. And so you have these layers of organization. And then obviously the individual and a tribe and an ecosystem.

And each of the higher layers are both based on the lower layers, but also influencing them. I think the future of civilization will be similar, which is there's a level of governance that happens at the level of the individual, my own governance of my own choice. I think there's a level that happens at the level of a family.

We're making decisions together, we're inter-influencing each other and affecting each other, taking responsibility for each other. Then the idea of an extended family. And you can see that for a lot of human history, we had an extended family. We had a local community, a local church, or whatever it was. We had these intermediate structures.

Whereas right now there's kind of like the individual producer, consumer, taxpayer, voter, and the massive nation state global complex. And not that much in the way of intermediate structures that we relate with, and not that much in the way of real personal dynamics, all impersonalized, made fungible. And so I think that we have to have global governance.

Meaning, I think we have to have governance at the scale we affect stuff. And if anybody is messing up the oceans, that matters for everybody. So that can't only be national or only local. Everyone is scared of the idea of global governance 'cause we think about some top-down system of imposition that now has no checks and balances on power.

I'm scared of that same version. So I'm not talking about that kind of global governance. It's why I'm even using the word governance as a process rather than government as an imposed phenomena. So I think we have to have global governance, but I think we also have to have local governance.

And there have to be relationships between them where there are both checks and balances and flows of information and power. So I think governance at the level of cities will be a bigger deal in the future than governance at the level of nation states. 'Cause I think nation states are largely fictitious things that are defined by wars and agreements to stop wars and things like that.

I think cities are based on real things that will keep being real, where the physical proximity of things together gives increased value to those things. So you look at Geoffrey West's work on scale, and the finding that companies and nation states, things that have a kind of complicated agreement structure, get diminishing returns of production per capita as the total number of people increases beyond about the tribal scale, but the city actually gets increasing productivity per capita. And it's not designed.

It's kind of this organic thing, right? So there should be governance at the level of cities because people can sense and actually have some agency there. Probably neighborhoods and smaller scales within it and also verticals. And some of it won't be geographic, it'll be network based, right? Networks of affinities.
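The scaling result being referenced, stated from memory of Geoffrey West's published work (so treat the exponents as approximate rather than as figures given in this conversation): outputs scale as a power law in population,

\[ Y(N) \approx Y_0\,N^{\beta} \quad\Rightarrow\quad \frac{Y}{N} \approx Y_0\,N^{\beta - 1} \]

with \(\beta \approx 1.15\) for socioeconomic outputs of cities (superlinear, so per-capita productivity rises with size) and \(\beta < 1\) for typical measures of companies and organisms (sublinear, so per-capita output falls as they grow).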

So I don't think the future is one type of governance. Now, what we can say more broadly is say, when we're talking about groups of people that inter-affect each other, the idea of a civilization is that we can figure out how to coordinate our choice-making to not be at war with each other and hopefully increase total productive capacity in a way that's good for everybody.

Division of labor and specialty, so we all get more, better stuff and whatever. But it's a coordination of our choice-making. I think we can look at civilizations failing on the side of not having enough coordination of choice-making, so they fail on the side of chaos and then they cleave and an internal war comes about or whatever.

Or they can't make smart decisions and they overuse their resources or whatever. Or it can fail on the side of trying to get order via imposition, via force. And so it fails on the side of oppression, which ends up being for a while, functional-ish for the thing as a whole, but miserable for most people in it until it fails either because of revolt or because it can't innovate enough or something like that.

And so there's this toggling between order via oppression and chaos. And I think the idea of democracy, not the way we've implemented it, but the idea of it, whether we're talking about a representative democracy or a direct digital democracy, liquid democracy, a republic, or whatever, the idea of an open society, participatory governance, is can we have order that is emergent rather than imposed so that we aren't stuck with chaos and infighting and inability to coordinate, and we're also not stuck with oppression?

And what would it take to have emergent order? This is the most kind of central question for me these days, because if we look at what different nation states are doing around the world, and we see nation states that are more authoritarian, that in some ways are actually coordinating much more effectively.

So for instance, we can see that China has built high-speed rail, not just through its country, but around the world, and the US hasn't built any high-speed rail yet. You can see that it brought 300 million people out of poverty in a time where we've had increasing economic inequality happening.

You can see that if there was a single country that could make all of its own stuff, if the global supply chains failed, China would be the closest one to being able to start to go closed loop on fundamental things. Belt and Road Initiative, supply chain on rare earth metals, transistor manufacturing, that is like, oh, they're actually coordinating more effectively in some important ways.

In the last, call it, 30 years. And that's imposed order. - Imposed order. - And we can see why, in the US. Let's look at it real quick. We know why we created term limits: so that we wouldn't have forever monarchs. That's the thing we were trying to get away from, and so that there would be checks and balances on power and that kind of thing.

But that also has created a negative second order effect, which is that nobody does long-term planning. Because somebody comes in who's got four years, and they want to get reelected. They don't do anything that doesn't create a return within four years that will end up getting them reelected. And so the 30 year industrial development to build high speed trains or the new kind of fusion energy or whatever it is just doesn't get invested in.

And then if you have left versus right, where whatever someone does for four years, then the other guy gets in and undoes it for four years. And most of the energy goes into campaigning against each other. This system is just dissipating as heat. It's just burning up as heat.

And the system that has no term limits and no internal friction or infighting, because they got rid of those people, can actually coordinate better. But I would argue it has its own fail states eventually, and dystopic properties that are not the thing we want. - So the goal is to create a system that does long-term planning without the negative effects of a monarch or dictator who stays there for the long term, and to accomplish that not through the imposition of a single leader, but through emergence.

So perhaps, first of all, the technology itself seems to allow for different possibilities here, which is to make primary the system, not the humans. So the basic medium on which the democracy happens, like a platform where people can make decisions, do the choice-making, the coordination of the choice-making, where some kind of order emerges, something that applies at the scale of the family, the extended family, the city, the country, the continent, the whole world.

And then it does that dynamically, constantly changing based on the needs of the people, sort of always evolving. And it would all be owned by Google. So first of all, you're optimistic that technology can save us?

Technology at creating platforms, and by technology I mean software network platforms that allow humans to deliberate, to make government together dynamically, without the need for a leader that's on a podium screaming stuff. That's one. And two, if you're optimistic about that, are you also optimistic about the CEOs of such platforms?

- The idea that technology is values-neutral, values-agnostic, that people can use it for constructive or destructive purposes but it doesn't predispose anything, is just silly and naive. Technology elicits patterns of human behavior, because those who utilize it and get ahead end up behaving differently because of their utilization of it.

And then other people, then they end up shaping the world or other people race to also get the power of the technology. And so there's whole schools of anthropology that look at the effect on social systems and the minds of people of the change in our tooling. Marvin Harris's work called "Cultural Materialism" looked at this deeply.

Obviously, Marshall McLuhan looked specifically at the way that information technologies change the nature of our beliefs, minds, values, social systems. I will not try to do this rigorously, because there are academics who will disagree on the subtle details, but I'll do it kind of illustratively. Think about the emergence of the ox-drawn plow and the beginning of agriculture that came with it, where before that you had hunter-gatherers, and then you had horticulture, kind of a digging stick, but not the plow.

Well, the world changed a lot with that, right? And a few of the changes that at least some theorists believe in is when the ox drawn plow started to proliferate, any culture that utilized it was able to start to actually cultivate grain 'cause just with a digging stick, you couldn't get enough grain for it to matter.

Grain was a storable caloric surplus. They could make it through the famines. They could grow their population. So the ones that used it got so far ahead that it became obligate, and everybody used it. And corresponding with the use of the plow, animism went away everywhere it existed, because you can't talk about the spirit of the buffalo while beating the cow all day long to pull a plow.

So the moment that we do animal husbandry of that kind, where you have to beat the cow all day, you have to say, it's just a dumb animal, man has dominion over earth. And the nature of even our religious and spiritual ideas changed. You went from women primarily using the digging stick to do the horticulture, or gathering before that, and men doing the hunting, to men having to use the plow, because upper body strength actually really mattered.

Women would have miscarriages when they would do it while pregnant. So all the caloric supply started to come from men, where it had been from both before, and the ratio of male to female gods changed to being mostly male gods following that. That particular line of thought then also says that feminism followed the tractor: the rise of feminism in the West started to follow women being able to say, we can do what men can, because male upper body strength wasn't differential once the internal combustion engine was much stronger, and we could drive a tractor.

So I don't think to try to trace complex things to one cause is a good idea. So I think this is a reductionist view, but it has truth in it. And so the idea that technology is values agnostic is silly. Technology codes patterns of behavior that code rationalizing those patterns of behavior and believing in them.

The plow also was the beginning of the Anthropocene, right? It was the beginning of us changing the environment radically, clear-cutting areas to just make them useful for people, which also meant a change in the view that we're just a part of the web of life, et cetera.

So all those types of things. - That's brilliantly put, but by the way, that was just brilliant. But the question is, so it's not agnostic, but-- - So we have to look at what the psychological effects of specific tech applied in certain ways are, and be able to say, it's not just doing the first order thing you intended.

It's doing things like the effect on patriarchy and animism, the end of tribal culture, the beginning of empire and the class systems that came with that. We can go on and on about what the plow did. The beginning of surplus was the beginning of inheritance, which then became the capital model, and lots of things like that.

So when we're looking at the tech, we have to ask: what are the values built into the way the tech is being built that are not obvious? - Right, so you always have to consider externalities. - Yes. - And this is no matter what. - And the externalities are not just physical, to the environment; they're also to how the people are being conditioned and how the relationality between them is being conditioned.

- The question I'm asking you, so I personally would rather be led by a plow and a tractor than by Stalin, okay? That's the question I'm asking you. In creating an emergent government, where there's a democracy that's dynamic, that makes choices, that does governance in a very kind of liquid way, where there's a bunch of fine resolution layers of abstraction of governance happening at all scales, right, and doing so dynamically, where no one person has power at any one time that can dominate and impose rule, okay?

As opposed to the Stalin version. I'm saying, isn't the alternative, the emergent one, empowered or made possible by the modern version of the plow and the tractor, which is the internet, the digital space, the monetary system, where you have cryptocurrency and so on, but much more importantly, to me at least, just basic social interaction, the mechanisms of humans transacting with each other in the space of ideas?

So yes, it's not agnostic, definitely not agnostic. You've had a brilliant rant there. The tractor has effects, but isn't that the way we achieve an emergent system of governance? - Yes, but I wouldn't say we're on track. - You haven't seen anything promising. - It's not that I haven't seen anything promising.

It's that to be on track requires understanding and guiding some of the things differently than is currently happening, and it's possible. That's actually what I really care about. So you couldn't have had a Stalin without having certain technologies emerge. He couldn't have ruled such a big area without transportation technologies, without the train, without the communication tech that made it possible.

So when you say you'd rather have a tractor or a plow than a Stalin, there's a relationship between them that is more recursive, which is new physical technologies allow rulers to rule with more power over larger distances, historically. Some things are more responsible for that than others. Like Stalin also ate stuff for breakfast, but the thing he ate for breakfast is less responsible for the starvation of millions than the train.

The train is more responsible for that. And then the weapons of war are more responsible. - So with some technology, let's not throw it all in the same bin. You're saying technology has a responsibility here, but some is better than others. - I'm saying that people's use of technology will change their behavior.

So it has behavioral dispositions built in. The change of the behavior will also change the values in the society. - It's very complicated, right? - It will also, as a result, make people who have different kinds of predispositions with regard to rulership, and give them different kinds of new capacities.

And so we have to think about these things. It's kind of well understood that the printing press and then in early industrialism ended feudalism and created kind of nation states. So one thing I would say as a long trend that we can look at is that whenever there is a step function, a major leap in technology, physical technology, the underlying techno-industrial base with which we do stuff, it ends up coding for, it ends up predisposing a whole bunch of human behavioral patterns that the previous social system had not emerged to try to solve.

And so it usually ends up breaking the previous social systems, the way the plow broke the tribal system, the way that the industrial revolution broke the feudal system. And then new social systems have to emerge that can deal with that, the new powers, the new dispositions, whatever with that tech.

Obviously the nuke broke nation state governance being adequate and said, we can't ever have that again. So then it created this international governance apparatus world. So I guess what I'm saying is that the solution is not exponential tech following the current path of what the market incentivizes exponential tech to do, market being a previous social tech.

I would say that, with exponential tech, if we look at different types of social tech, let's just briefly look at how democracy tried to do the emergent order thing. At least that's the story. And this is why, if you look, this is the important part to build first. - It's kind of doing it, it's just doing it poorly, you're saying.

I mean, it is emergent order in some sense. I mean, that's the hope of democracy versus other forms of government. - Correct. I mean, I said at least the story because obviously it didn't do it for women and slaves early on, it doesn't do it for all classes equally, et cetera.

But the idea of democracy is participatory governance. And so you notice that the modern democracies emerged out of the European enlightenment. And specifically because of the idea that a lot of people, some huge number, not a tribal number, a huge number of anonymous people who don't know each other, are not bonded to each other, who believe different things, who grew up in different ways, can all work together to make collective decisions.

Decisions that affect everybody, and where some of them will make compromises on the thing that matters to them for what matters to other strangers. That's actually wild. Like, it's a wild idea that that would even be possible. And it was kind of the result of this high enlightenment idea that we could all do the philosophy of science and we could all do the Hegelian dialectic.

Those ideas had emerged, right? And it was that we could all, so our choice-making, 'cause we've said a society is trying to coordinate choice-making. The emergent order is the order of the choices that we're making, not just at the level of the individuals, but what groups of individuals, corporations, nations, states, whatever do.

Our choices are based on, our choice-making is based on our sense-making and our meaning-making. Our sense-making is what do we believe is happening in the world? And what do we believe the effects of a particular thing would be? Our meaning-making is what do we care about, right? Our values generation, what do we care about that we're trying to move the world in the direction of?

If you ultimately are trying to move the world in a direction that is really, really different than the direction I'm trying to, we have very different values, we're gonna have a hard time. And if you think the world is a very different world, right, if you think that systemic racism is rampant everywhere and one of the worst problems, and I think it's not even a thing, if you think climate change is almost existential and I think it's not even a thing, we're gonna have a really hard time coordinating.

And so we have to be able to have shared sense-making of can we come to understand just what is happening together? And then can we do shared values generation? Okay, maybe I'm emphasizing a particular value more than you, but I can see how, I can take your perspective and I can see how the thing that you value is worth valuing, and I can see how it's affected by this thing.

So can we take all the values and try to come up with a proposition that benefits all of them, better than the proposition I created just to benefit these ones, that harms the ones that you care about, which is why you're opposing my proposition? We don't even try that in the process of crafting a proposition currently. And this is the reason that the proposition, when we vote on it, gets half the votes almost all the time.

It almost never gets 90% of the votes, because it benefits some things and harms other things. We can say that's the theory of trade-offs, but we didn't even try to say, could we see what everybody cares about and see if there was a better solution? So- - How do we fix that 'try'?

I wonder, is it as simple as the social technology education? - Well, no. The proposition crafting and refinement process has to be key to a democracy or to a government, and it's not currently. - But isn't that the humans creating that situation? So one way, there's two ways to fix that.

One is to fix the individual humans, which is the education early in life. And the second is to create somehow systems that- - Yeah, it's both. - So I understand the education part, but creating systems, that's why I mentioned the technologies, is creating social networks, essentially. - Yes, that's actually necessary.

Okay, so let's go to the first part and then we'll come to the second part. So democracy emerged as an enlightenment era idea that we could all do a dialectic and come to understand what other people valued. And so that we could actually come up with a cooperative solution rather than just, fuck you, we're gonna get our thing in war, right?

And that we could sense make together. We could all apply the philosophy of science and you weren't gonna stick to your guns on what the speed of sound is if we measured it and we found out what it was. And there's a unifying element to the objectivity in that way.

And so this is why I believe Jefferson said, I'm paraphrasing, if you could give me a perfect newspaper and a broken government, or a perfect government and a broken newspaper, I wouldn't hesitate to take the perfect newspaper. Because if the people understand what's going on, they can build a new government.

If they don't understand what's going on, they can't possibly make good choices. And Washington, I'm paraphrasing again, first president said, the number one aim of the federal government should be the comprehensive education of every citizen in the science of government. Science of government was the term of art. Think about what that means, right?

Science of government would be game theory, coordination theory (they wouldn't have called it game theory yet), history, sociology, economics, right? All the things that lead to how we understand human coordination. I think it's so profound that he didn't say the number one aim of the federal government is rule of law.

And he didn't say it's protecting the border from enemies. Because if the number one aim was to protect the border from enemies, it could do that as military dictatorship quite effectively. And if the goal was rule of law, it could do it as a dictatorship, as a police state.

And so if the number one goal is anything other than the comprehensive education of all the citizens in the science of government, it won't stay a democracy long. So you can see both education and the fourth estate are key. Education: can I make sense of the world?

Am I trained to make sense of the world? The fourth estate is what's actually going on currently, the news: do I have good, unbiased information about it? Those are both considered prerequisite institutions for democracy to even be a possibility. And then at the scale it was initially suggested here, the town hall was the key phenomenon. It wasn't that a special interest group crafted a proposition.

It wasn't that the first thing I ever saw was the proposition, knowing nothing about it, and I just got to vote yes or no. It was in the town hall, we all got to talk about it, and the proposition could get crafted in real time through the conversation, which is why there was that founding father statement that voting is the death of democracy.

Voting fundamentally is polarizing the population in some kind of sublimated war, and we'll do that as the last step. But what we wanna do first is to say, how does the thing that you care about that seems damaged by this proposition, how could that turn into a solution to make this proposition better?

Or so this proposition still tends to the thing it's trying to tend to, and tends to it better. Can we work on this together? And in a town hall, we could have that. As the scale increased, we lost the ability to do that. Now, as you mentioned, the internet could change that.

We had representatives that had to ride a horse from one town hall to the next to see what the colony would do, and we stopped having this kind of propositional development process when the town hall ended. The fact that we have not used the internet to recreate this is somewhere between insane and aligned with class interests.

- I would push back to say that the internet has those things, it just has a lot of other things. I feel like the internet has places that encourage synthesis of competing ideas and sense-making, which is what we're talking about. It's just that it's also flooded with a bunch of other systems that perhaps are outcompeting it under current incentives, which perhaps has to do with capitalism and the market.

- Sure, Linux is awesome, right? And Wikipedia, and places where you have, and they have problems, but places where you have open source sharing of information, vetting of information towards collective building. But how much has that affected our court systems or our policing systems or our military systems or our-- - First of all, I think a lot, but not enough.

I think, as I told you offline yesterday, perhaps it's a whole other discussion, but I don't think we're quite quantifying the positive impact of Wikipedia on the world. You said policing. I mean, I just think that knowledge can't help but lead to empathy.

Okay, I'll give you some pieces of information. Knowing how many people died in various wars, that delta, when you have millions of people with that knowledge, it's a little slap in the face: oh, my boyfriend or girlfriend breaking up with me is not such a big deal when millions of people were tortured. You know, just a little bit.

And a lot of people know that because of Wikipedia, or through second order effects of Wikipedia. It's not necessarily that people read Wikipedia; it's that YouTubers who don't really know a subject that well will thoroughly read a Wikipedia article and create a compelling video describing it, and then millions of people watch and understand: holy shit, there was such a thing as World War I and World War II. They can at least learn about it. They can learn about slavery, which was recent. They can learn about all kinds of injustices in the world.

And that, I think, has a lot of effects on the way, whether you're a police officer, a lawyer, a judge, a jury member, or just a regular civilian citizen, you approach every other communication you engage in, even if the system of that communication is very much flawed.

So I think there's a huge positive effect of Wikipedia. That's my case for Wikipedia. So you should donate to Wikipedia. I'm a huge fan, but there's very few systems like it, which is sad to me. - So I think it would be a useful exercise for any listener of the show to really try to run the dialectical synthesis process with regard to a topic like this: take the techno-concern perspective with regard to information tech that folks like Tristan Harris take, and say, what are all of the things that are getting worse?

Are any of them following an exponential curve, and how much worse, how quickly, could that get? And do that fully, without mitigating it. Then take the techno-optimist perspective and see what things are getting better, in the way that Kurzweil or Diamandis or someone might do, and try to take that perspective fully and say, are some of those things exponential?

And what could that portend? And then try to hold all that at the same time. And I think there are ways in which, depending upon the metrics we're looking at, things are getting worse on exponential curves and better on exponential curves for different metrics at the same time, which I hold as the destabilization of the previous system.

And either an emergence to a better system or a collapse to a lower order are both possible. And so I want my optimism not to be about my assessment. I want my assessment to be just as fucking clear as it can be. I want my optimism to be what inspires the solution process on that clear assessment.

So I never want to apply optimism in the sense making. I want to just try to be clear. If anything, I want to make sure that the challenges are really well understood. But that's in service of an optimism that there are good potentials, even if I don't know what they are, that are worth seeking.

There's kind of a, there is some sense of optimism that's required to even try to innovate really hard problems. But then I want to take my pessimism and red team my own optimism to see is that solution not gonna work? Does it have second order effects? And then not get upset by that because I then come back to how to make it better.

So that's the relationship between optimism and pessimism, and the dialectic of how they can work together. Of course, we can say that Wikipedia is a pretty awesome example of a thing. We can also look at the places where it has limits or has failed, where on a celebrity topic or a corporate interest topic, you can pay Wikipedia editors to edit more frequently, and various things like that.

But you can also see where there's a lot of information that was kind of decentrally created, that is good information, that is more easily accessible to people than everybody buying their own Encyclopedia Britannica or walking down to the library and that can be updated in real time faster. And I think you're very right that the business model is a big difference because Wikipedia is not a for-profit corporation.

It's tending to the information commons, and it doesn't have an agenda other than tending to the information commons. And I think the two masters issue is a tricky one: when I'm trying to optimize for very different kinds of things, where I have to sacrifice one for the other and I can't find synergistic satisfiers, which one wins?

And if I have a fiduciary responsibility to shareholder profit maximization, what does that end up creating? Take the ad model that Silicon Valley adopted. Jaron Lanier, I don't know if you've had him on the show, but he has an interesting assessment of the nature of the ad model.

Silicon Valley wanted to support capitalism and entrepreneurs making things, but also the belief that information should be free, and also the network dynamics, where you got increased value per user as more people got on. So you didn't want to do anything to slow the rate of adoption.

In some places, actually, PayPal paid people money to join the network, because there'd be a Metcalfe-like dynamic where the value of the network would be proportional to the square of the total number of users. So the ad model made sense of, how do we make it free, but also be a business, and get everybody on, without really thinking through what it would mean. And this is now the whole idea that if you aren't paying for the product, you are the product.
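The back-of-envelope behind that (Metcalfe's heuristic, a rough model rather than an empirical law): a broadcast network's value grows with its audience, while a many-to-many network's value grows with its possible connections,

\[ V_{\text{broadcast}} \propto n, \qquad V_{\text{network}} \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}, \]

so each marginal user adds roughly \(n\) potential connections rather than one, which is why paying people to join could pencil out.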

If they have a fiduciary responsibility to their shareholders to maximize profit, their customer is the advertiser, and the user it's ostensibly being built for is the subject of behavioral modification on behalf of those advertisers. That's a whole different thing than what that same type of tech could have been if applied with a different business model or a different purpose.

Because Facebook and Google and other information and communication platforms end up harvesting data about user behavior, they can model who the people are in a way that gives them more specific information and behavioral information than even a therapist or a doctor or a lawyer or a priest might have in a different setting.

They basically are accessing privileged information. There should be a fiduciary responsibility. And in normal fiduciary law, if there's this principal agent thing, if you are a principal and I'm an agent on your behalf, I don't have a game theoretic relationship with you. If you're sharing something with me and I'm the priest or I'm the therapist, I'm never gonna use that information to try to sell you a used car or whatever the thing is.

But Facebook is gathering massive amounts of privileged information and using it to modify people's behavior, not toward what the people signed up for or want, but toward what the corporation wants. So I think this is an example of the physical tech evolving in the context of the previous social tech, where it's being shaped in particular ways.

And here, unlike Wikipedia that evolved for the information commons, this evolved for fulfilling particular agentic purpose. Most people, when they're on Facebook, think it's just a tool that they're using. They don't realize it's an agent. It is a corporation with a profit motive and as I'm interacting with it, it has a goal for me different than my goal for myself.

And I might wanna be on for a short period of time; its goal is to maximize time on site. And so there is a rivalry where there should be a fiduciary contract. I think that's actually a huge deal. And I think if we said, could we apply Facebook-like technology to develop people's citizenry capacity, right?

To develop their personal health and wellbeing and habits, as well as their cognitive understanding, the complexity with which they can process the health of their relationships, that would be amazing to start to explore. And this is now the thesis that we started to discuss before is, every time there is a major step function in the physical tech, it obsoletes the previous social tech and the new social tech has to emerge.

What I would say is that when we look at the nation state level of the world today, as the exponential tech, the digital technology, started to emerge, the more top-down authoritarian nation states were in a position for better long-term planning and better coordination. And so the authoritarian states started applying the exponential tech intentionally to make more effective authoritarian states.

And that's everything from an internet of things surveillance system, going into machine learning systems, to the Sesame Credit system, to all those types of things. And so they're upgrading their social tech using the exponential tech. Whereas within a nation state like the US, or democratic open societies generally, the states are not directing the technology in a way that makes a better open society, meaning better emergent order.

They're saying, well, the corporations are doing that and the state is doing the relatively little thing it would do aligned with the previous corporate law that no longer is relevant 'cause there wasn't fiduciary responsibility for things like that. There wasn't antitrust because this creates functional monopolies because of network dynamics, right?

Where YouTube has more users than Vimeo and every other video player together. Amazon has a bigger percentage of market share than all of the other marketplaces together. You get one big dog per vertical because of network effect, which is a kind of organic monopoly that the previous antitrust law didn't even have a place for.

That wasn't the thing it addressed; anti-monopoly was only something that emerged in the space of government contracts. So what we see is the new exponential technology being directed by authoritarian nation states to make better authoritarian nation states, and by corporations to make more powerful corporations. And powerful corporations: when we think about the Scottish enlightenment, when the modern ideas of markets were being advanced, the biggest corporation was tiny compared to what the biggest corporation today is.

So the asymmetry of it relative to people was tiny. And the asymmetry now, in terms of the total technology it employs, total amount of money, total amount of information processing, is so many orders of magnitude greater. And rather than there being demand for an authentic thing that creates a basis for supply, as supply started to get way more coordinated and powerful, and the demand wasn't coordinated, 'cause you don't have a labor union of all the customers working together, but you do have coordination on the supply side, supply started to recognize that it could manufacture demand.

It could make people want shit that they didn't want before, that maybe wouldn't increase their happiness in a meaningful way, but might increase addiction. Addiction is a very good way to manufacture demand. And so as soon as manufactured demand started, through 'this is the cool thing and you have to have it for status' or whatever it is, the intelligence of the market was breaking.

Now it's no longer a collective intelligence system that is upregulating real desire for things that are really meaningful. You were able to hijack the lower angels of our nature rather than the higher ones, have the addictive patterns drive those, and have people want shit that doesn't actually make them happier or make the world better.

And so we really also have to update our theory of markets, because behavioral econ showed that homo economicus, the rational actor, is not really a thing, and particularly at greater and greater scale can't really be a thing. Voluntarism isn't really a thing either: if my company doesn't wanna advertise on Facebook, I just will lose to the companies that do, 'cause that's where all the fucking attention is.

And so then I can say it's voluntary, but it's not really if there's a functional monopoly. Same if I'm gonna sell on Amazon or things like that. So what I would say is these corporations are becoming more powerful than nation states in some ways. And they are also debasing the integrity of the nation states, the open societies.

So the democracies are getting weaker as a result of exponential tech and the kind of new tech companies that are kind of a new feudalism, tech feudalism, 'cause it's not a democracy inside of a tech company or the supply and demand relationship when you have manufactured demand and kind of monopoly type functions.

And so we have basically a new feudalism controlling exponential tech and authoritarian nation states controlling it. And those attractors are both shitty. And so I'm interested in the application of exponential tech to making better social tech that makes emergent order possible. And where then that emergent order can bind and direct the exponential tech in fundamentally healthy, not X risk oriented directions.

I think the relationship of social tech and physical tech can make it. I think we can actually use the physical tech to make better social tech, but it's not given that we do. If we don't make better social tech, then I think the physical tech empowers really shitty social tech that is not a world that we want.

- I don't know if it's the road we wanna go down, but I tend to believe that the market will create exactly the thing you're talking about, because I feel like there's a lot of money to be made in creating a social tech that creates a better citizen, that creates a better human being.

Your description of Facebook and so on, which is a system that creates addiction, which manufactures demand, is not obviously inherently the consequence of the markets. I feel like that's the first stage of us, like baby deer trying to figure out how to use the internet. I feel like there's much more money to be made with something that creates compersion and love, honestly.

I mean, I really, we can have this, I can make the business case for it. I don't think we wanna really have that discussion, but do you have some hope that that's the case? And I guess if not, then how do we fix the system of markets that work so well for the United States for so long?

- Like I said, every social tech worked for a while. Tribalism worked well for 200,000 or 300,000 years. I think social tech has to keep evolving. The social technologies with which we organize and coordinate our behavior have to keep evolving as our physical tech does. So I think the thing that we call markets, of course we can try to say, oh, even biology runs on markets, but the thing that we call markets, the underlying theory, homo economicus, demand, driving supply, that thing broke.

It broke with scale in particular, and a few other things. So it needs updating in a really fundamental way. I think there's something even deeper than money-making happening that in some ways will obsolete money-making. I think capitalism is not about business. So if you think about business, I'm gonna produce a good or a service that people want and bring it to the market so that people get access to that good or service.

That's the world of business, but that's not capitalism. Capitalism is the management and allocation of capital. Financial services, which was a tiny percentage of the total market, has become a huge percentage of the total market. It's a different creature. So if I was in business producing a good or service, and I was saving up enough money that I started to be able to invest that money and gain interest or do things like that, I start realizing I'm making more money on my money than I'm making on producing the goods and services.

So I stop even paying attention to goods and services and start paying attention to making money on money. And how do I utilize capital to create more capital? And capital gives me more optionality 'cause I can buy anything with it than a particular good or service that only some people want.

In capitalism, more capital ended up meaning more control. I could put more people under my employment. I could buy larger pieces of land, novel access to resources, mines, and put more technology under my employment. So it meant increased agency and also increased control. I think attentionalism is even more powerful.

So rather than enslave people where the people kind of always want to get away and put in the least work they can, there's a way in which economic servitude was just more profitable than slavery, right? Have the people work even harder voluntarily 'cause they wanna get ahead and nobody has to be there to whip them or control them or whatever.

This is a cynical take, but a meaningful take. So capital ends up being a way to influence human behavior, right? And yet where people still feel free in some meaningful way, they're not feeling like they're gonna be punished by the state if they don't do something. It's like punished by the market via homelessness or something.

But the market is this invisible thing I can't pin an agent on, so it feels free. And so if you want to affect people's behavior and still have them feel free, capital ends up being a way to do that. But I think affecting their attention is even deeper, 'cause if I can affect their attention, I can affect what they want and what they believe and what they feel.

And we statistically know this very clearly. Facebook has done studies showing that, based on changing the feed, it can change beliefs, emotional dispositions, et cetera. And so I think there's a way in which the harvesting and directing of attention is an even more powerful system than capitalism. It is effective within capitalism for generating capital, but I think it also generates influence beyond what capital can do.
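As a toy illustration of the objective choice being described, here is a minimal Python sketch. The items, scores, and both objectives are entirely hypothetical; this is not Facebook's actual ranking system, just a way to see how the same content pool produces different feeds under a time-on-site objective versus a fiduciary, user-value objective:

    # Toy feed ranking: same items, two different objectives.
    # All titles and numbers are made up for illustration.
    items = [
        # (title, predicted_engagement, user_stated_value)
        ("outrage post",         0.9, 0.2),
        ("friend's life update", 0.5, 0.8),
        ("long-form explainer",  0.3, 0.9),
        ("celebrity gossip",     0.8, 0.3),
    ]

    # Platform objective: maximize time on site -> rank by predicted engagement.
    time_on_site_feed = sorted(items, key=lambda it: it[1], reverse=True)

    # Fiduciary objective: act as the user's agent -> rank by user-stated value.
    fiduciary_feed = sorted(items, key=lambda it: it[2], reverse=True)

    print("time-on-site feed:", [title for title, _, _ in time_on_site_feed])
    print("fiduciary feed:   ", [title for title, _, _ in fiduciary_feed])

The point is only that the ranking objective, not the content pool, determines where attention gets directed.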

And so do we want to have some groups utilizing that type of tech to direct other people's attention? If so, towards what? Towards what metrics of what a good civilization and good human life would be? What's the oversight process? What is the- - Transparency. I can answer all the things you're mentioning.

I guarantee you, if I'm not such a lazy ass, I'll be part of the many people doing this. It's transparency and control, like giving control to individual people. - Okay, so maybe the corporation has coordination on its goals that all of its customers or users together don't have. So there's some asymmetry of its goals, but maybe I could actually help all of the customers to coordinate, almost like a labor union or whatever, by informing and educating them adequately about the effects, the externalities, on them.

This is not toxic waste going into the ocean or the atmosphere; it's their minds, their beings, their families, their relationships, such that they will, as a group, change their behavior. And one way of saying what you're saying, I think, is that you think you can rescue homo economicus, the rational actor that will pursue all the goods and services and choose the best one at the best price, the kind of Rand, von Mises, Hayek picture; that you can rescue that from Dan Ariely and behavioral econ, which says that's actually not how people make choices; they make them based on status hacking, largely, whether it's good for them or not in the long term.

And the large asymmetric corporation can run propaganda and narrative warfare that hits people's status buttons and their limbic hijacks and lots of other things, in ways that they can't even perceive are happening. They're not paying attention to that; the site is employing psychologists and split testing and whatever else.

So you're saying, I think we can recover homo economicus. - And not just through a single mechanism technology, there's the, not to keep mentioning the guy, but platforms like Joe Rogan and so on that help make viral the ways the education of negative externalities can become viral in this world.

- So interestingly, I actually agree with you that-- - Got 'em, four and a half hours in. - That we can-- - Tech can do some good. - Well, see, what you're talking about is the application of tech here, broadcast tech, where you can speak to a lot of people. And that's not gonna be strong enough, 'cause different people need to be spoken to differently, which means it has to be different voices that get amplified to those audiences, more like Facebook's tech. But nonetheless, we'll start with broadcast tech.

- It plants the first seed, and then word of mouth is a powerful thing. You need to do the first broadcast shotgun, and then it lands, a catapult or whatever, I don't know what the right weapon is, but then it just spreads by word of mouth through all kinds of tech, including Facebook.

- So let's come back to the fundamental thing. The fundamental thing is we want a kind of order at various scales from the conflicting parts of ourself, actually having more harmony than they might have to family, extended family, local, all the way up to global. We want emergent order where our choices have more alignment, right?

We want that to be emergent rather than imposed or rather than we want fundamentally different things or make totally different sense of the world where warfare of some kind becomes the only solution. Emergent order requires us in our choice-making, requires us being able to have related sense-making and related meaning-making processes.

Can we apply digital technologies and exponential tech in general to try to increase the capacity to do that? Where the technology called a town hall, the social tech that we'd all get together and talk, obviously is very scale limited and it's also oriented to geography rather than networks of aligned interest.

Can we build new, better versions of those types of things? And going back to the idea that a democracy or participatory governance depends upon comprehensive education in the science of government, which include being able to understand things like asymmetric information warfare on the side of governments and how the people can organize adequately.

Can you utilize some of the technologies now to be able to support increased comprehensive education of the people and maybe comprehensive informant-ness? So both fixing the decay in both education and the fourth estate that have happened so that people can start self-organizing to then influence the corporations, the nation states to do different things and/or build new ones themselves.

Yeah, fundamentally, that's the thing that has to happen. The exponential tech gives us a novel problem landscape that the world never had. The nuke gave us a novel problem landscape. And so that required this whole Bretton Woods world. The exponential tech gives us novel problem landscape. Our existing problem-solving processes aren't doing a good job.

We have had more countries get nukes. We haven't had nuclear deproliferation. We haven't achieved any of the UN Sustainable Development Goals. We haven't kept any of the new categories of tech from creating arms races. So our global coordination is not adequate to the problem landscape. We need fundamentally better problem-solving processes; a market or a state is a problem-solving process.

We need better ones that can handle the speed and scale of the current issues. Speed is one of the other big things: by the time we regulated DDT out of existence, or regulated cigarettes to not be for people under 18, they'd already killed so many people, and we'd let the market do the thing.

But as Elon has made the point, that won't work for AI. By the time we recognize, after the fact, that we have an autopoietic AI that's a problem, you won't be able to reverse it. There are a number of cases where, when you're dealing with tech that is either self-replicating and disintermediates humans, meaning it doesn't need humans to keep going, or tech that just has exponentially fast effects, your regulation has to come early.

It can't come after the effects have happened, the negative effects have happened 'cause the negative effects could be too big too quickly. So we basically need new problem-solving processes that do better at being able to internalize externality, solve the problems on the right time scale and the right geographic scale.

And those new processes, to not be imposed, have to emerge from people wanting them and being able to participate in their development, which is what I would call a kind of new cultural enlightenment or renaissance that has to happen, where people start understanding the new power that exponential tech offers, the way that it is actually damaging current governance structures that we care about and creating an x-risk landscape, but could also be redirected towards more protopic purposes, and then saying, how do we rebuild new social institutions?

What are adequate social institutions where we can do participatory governance at scale and in time? And how can the people actually participate to build those things? The solution that I see working requires a process like that. - And the result maximizes love. So again, Elon would be right that love is the answer.

Let me take you back from the scale of societies to the scale that's far, far more important, which is the scale of family. You've written a blog post about your dad. We have various flavors of relationships with our fathers. What have you learned about life from your dad? - Well, people can read the blog post and see a lot of individual things that I learned that I really appreciated.

If I was to kind of summarize at a high level, I had a really incredible dad, a very, very unusually positive set of experiences. We were homeschooled, and he was committed to working from home to be available and to prioritize fathering in a really deep way. And as a super gifted, super loving, very unique man, he also had his unique issues that were part of what crafted the unique brilliance, and those things often go together.

And I say that because I think I had some unusual gifts and also some unusual difficulties, and I think it's useful for everybody to know their path probably has both of those. But if I was to get at the essence of one of the things my dad taught me across a lot of lessons, it was like the intersection of self-empowerment and virtue: ideas and practices that self-empower towards collective good, towards some virtuous purpose beyond the self.

And he both said that a million different ways and taught it in a million different ways. When we were doing construction and he was teaching me how to build a house, we were putting the wires through the walls before the drywall went on. He made sure that the way that we put the wires through was beautiful.

Like that the height of the holes was similar, that we twisted the wires in a particular way. And it's like, no one's ever gonna see it. And he's like, if a job's worth doing, it's worth doing well and excellence is its own reward and those types of ideas. And if there was a really shitty job to do, he'd say, see the job, do the job, stay out of the misery.

Just don't indulge any negativity; do the things that need done. And so there's an empowerment and a nobility together. And yeah, extraordinarily fortunate. - Are there ways you think you could have been a better son? Are there things you regret? - That's an interesting question. - Let me first say, just as a bit of criticism: what kind of man do you think you are, not wearing a suit and tie?

A real man should. (laughing) Exactly, I agree with your dad on that point. You mentioned offline that he suggested a real man should wear a suit and tie. But outside of that, are there ways you could have been a better son? - Maybe next time on your show, I'll wear a suit and tie.

(laughing) My dad would be happy about that. - Please. - I can answer the question later in life, not early. I had just a huge amount of respect and reverence for my dad when I was young. So I was asking myself that question a lot. So there weren't a lot of things I knew that I wasn't seeking to apply.

There was a phase when I went through my kind of individuation differentiation where I had to make him excessively wrong about too many things. I don't think I had to, but I did. And he had a lot of kind of non-standard model beliefs about things, whether early kind of ancient civilizations or ideas on evolutionary theory or alternate models of physics.

They weren't irrational, but they didn't all have the standard of epistemic proof that I would need. And some of them were kind of spiritual ideas as well. I went through a phase in my early 20s where I kind of had the attitude that Dawkins or Christopher Hitchens have, one that can be excessively certain and sanctimonious, applying a reductionist philosophy of science to everything and being brutally dismissive.

I'm embarrassed by that phase. Not to say anything about those men and their paths, but for myself. And so during that time, I was more dismissive of my dad's epistemology than I would have liked to have been. I got to correct that later and apologize for it. But that was the first thought that came to mind.

- You've written the following. I've had the experience countless times, making love, watching a sunset, listening to music, feeling the breeze, that I would sign up for this whole life and all of its pains just to experience this exact moment. This is a kind of wordless knowing. It's the most important and real truth I know, that experience itself is infinitely meaningful and pain is temporary.

And seen clearly, even the suffering is filled with beauty. I've experienced countless lives worth of moments worthy of life, such an unreasonable fortune. A few words of gratitude from you, beautifully written. You've now experienced countless lives worth of those moments, but are there some you can go back to and relive in your darker moments, to remind yourself that the whole ride is worthwhile?

Maybe skip the making love part. We don't wanna know about that. - I mean, I feel unreasonably fortunate that it is such a humongous list. I mean, I feel fortunate to have had exposure to practices and philosophies and ways of seeing things that make me see things that way.

So I can take responsibility for seeing things in that way and not taking really wonderful things for granted, but I can't take credit for being exposed to the philosophies that even gave me that possibility. And it's not just with my wife, it's with every person who I really love: when we're talking, I look at their face.

I, in the context of a conversation, feel overwhelmed by how lucky I am to get to know them. And like there's never been someone like them in all of history and there never will be again. And they might be gone tomorrow, I might be gone tomorrow. And like, I get this moment with them.

And when you take in the uniqueness of that fully and the beauty of it, it's overwhelmingly beautiful. And I remember the first time I did a big dose of mushrooms and I was looking at a tree for a long time. And I was just crying with overwhelm at how beautiful the tree was.

And it was a tree outside the front of my house that I'd walked by a million times and never looked at like this. And it wasn't the dose of mushrooms where I was hallucinating, where the tree was purple. The tree still looked like, if I had to describe it, I'd say it's green and it has leaves, looks like this, but it was way fucking more beautiful, more captivating than it normally was.

And I'm like, why is it so beautiful if I would describe it the same way? And I realized I had no thoughts taking me anywhere else. - Yeah. - It seemed like what the mushrooms were doing was just shutting off the narrative that would have me be distracted, so I could really see the tree.

And then I'm like, fuck, when I get off these mushrooms, I'm gonna practice seeing the tree because it's always that beautiful and I just miss it. And so I practice being with it and quieting the rest of the mind and then being like, wow. And if it's not mushrooms, like people will have peak experiences where they'll see life and how incredible it is.

It's always there. - It's funny that I had this exact same experience on quite a lot of mushrooms, just sitting alone and looking at a tree and, exactly as you described it, appreciating the undistorted beauty of it. And it's funny to me that here are two humans, very different, with very different journeys, who at some moment in time were both looking at a tree like idiots for hours (both laughing) just in awe and happy to be alive.

And yeah, even just that moment alone is worth living for. But you did say humans, and we have a moment together as two humans, and you mentioned shots. (both laughing) I have to ask, what are we looking at? - When I went to go get a smoothie before coming here, I got you a keto smoothie that you didn't want 'cause you're not just keto but fasting.

But I saw the thing with you and your dad where you did shots together. And this place happened to have shots of ginger turmeric cayenne juice of some kind. - With some Himalayan salt. - I didn't necessarily plan it for being on the show. I just brought it. - Wow.

- But we can do it that way. - I think we shall toast like heroes, Daniel. It's a huge honor. - What do we toast to? - We toast to this moment, this unique moment that we get to share together. - I'm very grateful to be here in this moment with you.

And yeah, I'm grateful that you invited me here. We met for the first time and I will never be the same for the good and the bad. - I am. - That is really interesting. That feels way healthier than the vodka my dad and I were drinking. So I feel like a better man already.

Daniel, this is one of the best conversations I've ever had. I can't wait to have many more. - Likewise. - This has been an amazing experience. Thank you for wasting all your time today. I wanna say, in terms of what you mentioned: you work in machine learning with an optimism that wants to look at the issues, but also at how this increased technological power could be applied to solving them.

And even thinking about the broadcast as, can I help people understand the issues better and help organize them? Fundamentally, from what I see, you're oriented like Wikipedia: really trying to tend to the information commons without another agentic interest distorting it. And you're able to get guys like Lee Smolin and Roger Penrose, like the greatest thinkers that are alive, and have them on the show.

Most people would never be exposed to them, and you talk about their ideas in a way that people can understand. I think it's an incredible service. I think you're doing great work. So I was really happy to hear from you. - Thank you, Daniel. Thanks for listening to this conversation with Daniel Schmachtenberger.

And thank you to Ground News, NetSuite, Four Sigmatic, Magic Spoon and BetterHelp. Check them out in the description to support this podcast. And now let me leave you with some words from Albert Einstein: "I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones." Thank you for listening and hope to see you next time.

(upbeat music)