
Michael Levin: Biology, Life, Aliens, Evolution, Embryogenesis & Xenobots | Lex Fridman Podcast #325


Chapters

0:00 Introduction
1:40 Embryogenesis
9:08 Xenobots: biological robots
22:55 Sense of self
32:26 Multi-scale competency architecture
43:57 Free will
53:27 Bioelectricity
1:06:44 Planaria
1:18:33 Building xenobots
1:42:08 Unconventional cognition
2:06:39 Origin of evolution
2:13:41 Synthetic organisms
2:20:27 Regenerative medicine
2:24:13 Cancer suppression
2:28:15 Viruses
2:33:28 Cognitive light cones
2:38:03 Advice for young people
2:42:47 Death
2:52:17 Meaning of life

Whisper Transcript

00:00:00.000 | It turns out that if you train a planarian and then cut their heads off,
00:00:03.440 | the tail will regenerate a brand new brain that still remembers the original information.
00:00:07.600 | I think planaria hold the answer to pretty much every deep question of life. For one thing,
00:00:13.040 | they're similar to our ancestors. So they have true symmetry, they have a true brain,
00:00:16.240 | they're not like earthworms. They're, you know, a much more advanced life form.
00:00:18.960 | They have lots of different internal organs, but they're these little creatures, about, you know,
00:00:22.000 | maybe one to two centimeters in size. They have a head and a tail.
00:00:27.120 | And the first thing is planaria are immortal. So they do not age. There's no such thing as
00:00:31.520 | an old planarian. So that right there tells you that these theories of thermodynamic
00:00:35.120 | limitations on lifespan are wrong. It's not that, well, over time everything degrades. No,
00:00:40.640 | planaria can keep it going for probably, you know, how long have they been around? 400 million years.
00:00:45.840 | Right? So these are the actual, so the planaria in our lab are actually in physical continuity
00:00:50.560 | with planaria that were here 400 million years ago. The following is a conversation with Michael
00:00:56.880 | Levin, one of the most fascinating and brilliant biologists I've ever talked to. He and his lab at
00:01:03.440 | Tufts University work on novel ways to understand and control complex pattern formation in biological
00:01:10.400 | systems. Andrej Karpathy, a world-class AI researcher, is the person who first introduced
00:01:16.160 | me to Michael Levin's work. I bring this up because these two people make me realize that
00:01:22.480 | biology has a lot to teach us about AI, and AI might have a lot to teach us about biology.
00:01:29.200 | This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description.
00:01:35.600 | And now, dear friends, here's Michael Levin.
00:01:39.040 | Embryogenesis is the process of building the human body from a single cell. I think it's
00:01:44.720 | one of the most incredible things that exists on earth from a single embryo. So how does this
00:01:50.400 | process work? Yeah, it is an incredible process. I think it's maybe the most magical process there
00:01:56.800 | is. And I think one of the most fundamentally interesting things about it is that it shows
00:02:02.080 | that each of us takes the journey from so-called just physics to mind, right? Because we all start
00:02:07.600 | life as a single quiescent, unfertilized oocyte, and it's basically a bag of chemicals, and you
00:02:13.120 | look at that and you say, "Okay, this is chemistry and physics." And then nine months and some years
00:02:17.200 | later, you have an organism with high-level cognition and preferences and an inner life
00:02:22.400 | and so on. And what embryogenesis tells us is that that transformation from physics to mind is
00:02:27.520 | gradual. It's smooth. There is no special place where a lightning bolt says, "Boom, now you've
00:02:32.960 | gone from physics to true cognition." That doesn't happen. And so we can see in this process the
00:02:38.240 | whole mystery, the biggest mystery of the universe, basically: how you get mind from matter.
00:02:43.360 | - From "just physics," in quotes. So where's the magic in this thing? How do we get from
00:02:49.680 | information encoded in DNA and make physical reality out of that information?
00:02:55.040 | - So one of the things that I think is really important if we're gonna bring in
00:02:58.880 | DNA into this picture is to think about the fact that what DNA encodes is the hardware of life.
00:03:05.120 | DNA contains the instructions for the kind of micro-level hardware that every cell gets to
00:03:09.440 | play with. So all the proteins, all the signaling factors, the ion channels, all the cool little
00:03:13.840 | pieces of hardware that cells have, that's what's in the DNA. The rest of it is in so-called generic
00:03:20.000 | laws. And these are laws of mathematics. These are laws of computation. These are laws of
00:03:24.400 | physics, of all kinds of interesting things that are not directly in the DNA. And that process,
00:03:31.760 | you know, I think the reason I always put "just physics" in quotes is because I don't think there
00:03:36.400 | is such a thing as just physics. I think that thinking about these things in binary categories,
00:03:41.040 | like this is physics, this is true cognition, this is as if, it's only faking, all these kinds of
00:03:45.200 | things. I think that's what gets us in trouble. I think that we really have to understand that it's
00:03:48.880 | a continuum and we have to work up the scaling, the laws of scaling. And we can certainly talk
00:03:53.280 | about that. There's a lot of really interesting thoughts to be had there.
00:03:56.240 | - So the physics is deeply integrated with the information. So the DNA doesn't exist on its own.
00:04:03.120 | The DNA is integrated in some sense in response to the laws of physics at every scale, the laws
00:04:10.880 | of the environment it exists in. - Yeah, the environment and also the laws of the universe.
00:04:16.320 | I mean, the thing about the DNA is that once evolution discovers a certain kind of machine,
00:04:23.040 | that if the physical implementation is appropriate, it's sort of, and this is hard to talk
00:04:28.160 | about because we don't have a good vocabulary for this yet, but it's a very kind of platonic notion
00:04:32.960 | that if the machine is there, it pulls down interesting things that you do not have to
00:04:40.800 | evolve from scratch because the laws of physics give it to you for free. So just as a really
00:04:44.960 | stupid example, if you're trying to evolve a particular triangle, you can evolve the first
00:04:48.880 | angle and you evolve the second angle, but you don't need to evolve the third. You know what it
00:04:52.000 | is already. Now, why do you know? That's a gift for free from geometry in a particular space. You
00:04:56.080 | know what that angle has to be. And if you evolve an ion channel, and ion channels are basically
00:05:00.560 | transistors, right? They're voltage-gated current conductances. If you evolve that ion channel,
00:05:05.360 | you immediately get to use things like truth tables. You get logic functions. You don't have
00:05:08.960 | to evolve the logic function. You don't have to evolve a truth table. It doesn't have to be in
00:05:12.080 | the DNA. You get it for free, right? And the fact that if you have NAND gates, you can build
00:05:16.240 | anything you want. You get that for free. All you have to evolve is that first step, that first
00:05:20.720 | little machine that enables you to couple to those laws. And there's laws of adhesion and many other
00:05:26.080 | things. And this is all that interplay between the hardware that's set up by the genetics and
00:05:32.160 | the software that's made, right? The physiological software that basically does all the computation
00:05:37.040 | and the cognition and everything else is a real interplay between the information and the DNA and
00:05:42.000 | the laws of physics, of computation, and so on.
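
(A minimal sketch, not from the conversation, to make the point above concrete: once a single NAND primitive exists, every other logic function and its truth table can be composed from it for free. All function names below are illustrative.)

```python
# Once evolution (or an engineer) has one NAND primitive, every other Boolean
# function comes along "for free" as pure composition; nothing new has to be evolved.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Everything below is built only out of the primitive above.
def inv(a):
    return nand(a, a)

def and_(a, b):
    return inv(nand(a, b))

def or_(a, b):
    return nand(inv(a), inv(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

if __name__ == "__main__":
    # The truth tables come with the primitive; they were never separately specified.
    for a in (False, True):
        for b in (False, True):
            print(a, b, "AND:", and_(a, b), "OR:", or_(a, b), "XOR:", xor(a, b))
```
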
00:05:47.120 | - So is it fair to say that, just like this idea that the laws of mathematics are discovered, they're latent within the fabric of the universe, in that
00:05:53.520 | same way the laws of biology are kind of discovered? - Yeah, I think that's absolutely right. And it's
00:05:58.080 | probably not a popular view, but I think that's right on the money. Yeah. - Well, I think that's
00:06:01.680 | a really deep idea. Then embryogenesis is the process of revealing, of embodying, of manifesting
00:06:13.040 | these laws. You're not building the laws. You're just creating the capacity to reveal. - Yes. I
00:06:21.520 | think, again, not the standard view of molecular biology by any means, but I think that's right on
00:06:26.000 | the money. I'll give you a simple example. Some of our latest work with these xenobots, right? So
00:06:30.160 | what we've done is to take some skin cells off of an early frog embryo and basically ask about
00:06:34.720 | their plasticity. If we give you a chance to sort of reboot your multicellularity in a different
00:06:39.120 | context, what would you do? Because what you might assume by looking at embryogenesis
00:06:44.000 | is that it's super reliable, right? It's very robust. And that really obscures some of its most
00:06:49.600 | interesting features. We get used to it. We get used to the fact that acorns make oak trees and
00:06:53.680 | frog eggs make frogs. And we say, "Well, what else is it going to make?" That's what it makes. That's
00:06:57.120 | a standard story. But the reality is... And so you look at these skin cells and you say, "Well,
00:07:03.520 | what do they know how to do?" Well, they know how to be a passive, boring, two-dimensional outer
00:07:07.760 | layer keeping the bacteria from getting into the embryo. That's what they know how to do.
00:07:10.960 | Well, it turns out that if you take these skin cells and you remove the rest of the embryo,
00:07:15.840 | so you remove all of the rest of the cells and you say, "Well, you're by yourself now. What do
00:07:20.080 | you want to do?" So what they do is they form this little multicellular creature that runs around the dish.
00:07:25.760 | They have all kinds of incredible capacities. They navigate through mazes. They have various
00:07:30.080 | behaviors that they do both independently and together. Basically, they implement von Neumann's
00:07:37.280 | dream of self-replication. Because if you sprinkle a bunch of loose cells into the dish, what they do
00:07:41.920 | is they run around, they collect those cells into little piles. They sort of mush them together
00:07:46.160 | until those little piles become the next generation of xenobots. So you've got this machine that
00:07:50.080 | builds copies of itself from loose material in its environment. None of these are things that you
00:07:55.840 | would have expected from the frog genome. In fact, the genome is wild-type. There's nothing wrong
00:08:00.320 | with their genetics. Nothing has been added, no nanomaterials, no genomic editing, nothing.
00:08:04.560 | And so what we have done there is engineer by subtraction. What you've done is you've removed
00:08:10.640 | the other cells that normally basically bully these cells into being skin cells. And you find
00:08:14.960 | out that what they really want to do is to be this... Their default behavior is to be a xenobot.
00:08:20.880 | But in vivo, in the embryo, they get told to be skin by these other cell types.
00:08:25.680 | And so now here comes this really interesting question that you just posed.
00:08:30.080 | When you ask where does the form of the tadpole and the frog come from, the standard answer is,
00:08:35.680 | well, it's selection. So over millions of years, it's been shaped to produce the specific body
00:08:42.560 | that's fit for froggy environments. Where does the shape of the xenobot come from? There's never been
00:08:47.280 | any xenobots. There's never been selection to be a good xenobot. These cells find themselves in the
00:08:51.200 | new environment. In 48 hours, they figure out how to be an entirely different proto-organism with
00:08:56.960 | new capacities like kinematic self-replication. That's not how frogs or tadpoles replicate.
00:09:01.040 | We've made it impossible for them to replicate their normal way. Within a couple of days,
00:09:04.640 | these guys find a new way of doing it that's not done anywhere else in the biosphere.
00:09:07.760 | - Well, actually, let's step back and define what are xenobots?
00:09:11.440 | - So a xenobot is a self-assembling little proto-organism. It's also a biological robot.
00:09:17.920 | Those things are not distinct. It's a member of both classes.
00:09:20.800 | - How much is it biology? How much is it robot?
00:09:24.400 | - At this point, most of it is biology because what we're doing is we're discovering natural
00:09:30.880 | behaviors of the cells and also of the cell collectives. Now, one of the really important
00:09:36.400 | parts of this was that we're working together with Josh Bongard's group at University of Vermont.
00:09:41.040 | They're computer scientists. They do AI. And they've basically been able to use an evolutionary,
00:09:47.120 | a simulated evolution approach to ask, "How can we manipulate these cells, give them signals,
00:09:52.080 | not rewire their DNA, so not hardware, but experiences, signals? So can we remove some
00:09:56.400 | cells? Can we add some cells? Can we poke them in different ways to get them to do other things?"
00:10:00.640 | So in the future, there's going to be—we're now, and this is future unpublished work, but
00:10:05.040 | we're doing all sorts of interesting ways to reprogram them to new behaviors. But before
00:10:09.200 | you can start to reprogram these things, you have to understand what their innate capacities are.
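
(A hedged sketch of the kind of simulated-evolution loop described above: searching over signals and interventions rather than rewiring DNA. The fitness model, parameter count, and names are illustrative stand-ins, not the actual Levin/Bongard pipeline.)

```python
import random

# Bare-bones evolutionary search over a vector of "signals" (interventions).
# simulate_behavior is a placeholder for a real cell or physics simulator that
# scores how close the resulting behavior is to the behavior we want.

def simulate_behavior(signals):
    target = [0.2, -0.5, 0.9, 0.0]          # pretend ideal intervention (illustrative)
    return -sum((s - t) ** 2 for s, t in zip(signals, target))   # higher is better

def mutate(signals, rate=0.1):
    return [s + random.gauss(0, rate) for s in signals]

def evolve(pop_size=30, generations=100, n_params=4):
    population = [[random.uniform(-1, 1) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulate_behavior, reverse=True)     # best first
        survivors = population[: pop_size // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=simulate_behavior)

print(evolve())   # best intervention found by the simulated-evolution loop
```
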
00:10:13.440 | - Okay, so that means engineering, programming, you're engineering them in the future.
00:10:19.920 | And in some sense, the definition of a robot is something you in part engineer versus evolve.
00:10:28.640 | I mean, it's such a fuzzy definition anyway. In some sense, many of the organisms within our body
00:10:35.920 | are kinds of robots. And I think robots is a weird line because we tend to see robots as the other.
00:10:44.000 | I think there will be a time in the future when there's going to be something akin to the civil
00:10:48.800 | rights movements for robots, but we'll talk about that later perhaps. Anyway, so how do you—can
00:10:56.480 | we just linger on it? How do you build a xenobot? What are we talking about here?
00:11:00.560 | Where does it start, and how does it become the glorious xenobot?
00:11:08.800 | - Yeah. So just to take one step back, one of the things that a lot of people get stuck on is
00:11:14.240 | they say, "Well, engineering requires new DNA circuits, or it requires new nanomaterials."
00:11:22.240 | The thing is, we are now moving from old school engineering, which used passive materials,
00:11:27.520 | right? The things like wood, metal, things like this, that basically the only thing you could
00:11:30.960 | depend on is that they were going to keep their shape. That's it. They don't do anything else.
00:11:33.840 | It's on you as an engineer to make them do everything they're going to do. And then there
00:11:37.680 | were active materials and now computational materials. This is a whole new era. These are
00:11:41.680 | agential materials. You're now collaborating with your substrate because your material has an agenda.
00:11:47.200 | These cells have billions of years of evolution. They have goals. They have preferences. They're
00:11:51.280 | not just going to sit where you put them. That's hilarious that you have to talk your
00:11:55.040 | material into keeping its shape. Yeah, that is exactly right. That is exactly right.
00:11:59.440 | Stay there. It's like getting a bunch of cats or something and trying to organize the shape out of
00:12:04.640 | them. It's funny. We're on the same page here because in a paper that has just been
00:12:09.040 | accepted at Nature Bioengineering, one of the figures I have is building a tower out of Legos
00:12:13.680 | versus dogs, right? So think about the difference, right? If you build out of Legos, you have full
00:12:18.400 | control over where it's going to go. But if somebody knocks it over, it's game over. With
00:12:22.960 | the dogs, you cannot just come and stack them. They're not going to stay that way. But the good
00:12:26.480 | news is that if you train them, then when somebody knocks it over, they'll get right back up.
00:12:30.000 | So it's all right. So as an engineer, what you really want to know is what can I depend on this
00:12:34.080 | thing to do, right? A lot of people have definitions of robots as far as what they're made of or how
00:12:39.120 | they got here, design versus evolve, whatever. I don't think any of that is useful. I think as an
00:12:43.600 | engineer, what you want to know is how much can I depend on this thing to do when I'm not around
00:12:48.400 | to micromanage it? What level of dependency can I give this thing? How much agency does it have?
00:12:54.400 | Which then tells you what techniques do you use? So do you use micromanagement? Like you put
00:12:57.840 | everything where it goes? Do you train it? Do you give it signals? Do you try to convince it to do
00:13:02.080 | things, right? How intelligent is your substrate? And so now we're moving into this area where
00:13:07.200 | you're working with agential materials. That's a collaboration. That's not old style engineering.
00:13:12.640 | >> What's the word you're using? Agential?
00:13:14.320 | >> Agential, yeah.
00:13:15.040 | >> What's that mean?
00:13:15.520 | >> Agency. It comes from the word agency. So basically the material has agency, meaning that
00:13:19.520 | it has some level of, obviously not human level, but some level of preferences, goals,
00:13:26.080 | memories, ability to remember things, to compute into the future, meaning anticipate.
00:13:30.080 | When you're working with cells, they have all of that to various degrees.
00:13:34.400 | >> Is that empowering or limiting, having material that has a mind of its own, literally?
00:13:39.680 | >> I think it's both, right? So it raises difficulties because it means that
00:13:43.120 | if you're using the old mindset, which is a linear kind of extrapolation of what's going
00:13:48.880 | to happen, you're going to be surprised and shocked all the time because biology does not
00:13:54.240 | do what we linearly expect materials to do. On the other hand, it's massively liberating. And so
00:13:59.600 | in the following way, I've argued that advances in regenerative medicine require us to take
00:14:04.240 | advantage of this because what it means is that you can get the material to do things that you
00:14:09.040 | don't know how to micromanage. So just as a simple example, right? If you had a rat and you wanted
00:14:15.040 | this rat to do a circus trick, put a ball in the little hoop, you can do it the micromanagement
00:14:19.760 | way, which is try to control every neuron and try to play the thing like a puppet, right? And maybe
00:14:23.440 | someday that'll be possible, maybe. Or you can train the rat. And this is why humanity for
00:14:28.080 | thousands of years before we knew any neuroscience, we had no idea what's between the ears of any
00:14:32.320 | animal, we were able to train these animals because once you recognize the level of agency of a
00:14:37.280 | certain system, you can use appropriate techniques. If you know the currency of motivation, reward,
00:14:42.080 | and punishment, you know how smart it is, you know what kinds of things it likes to do.
00:14:45.280 | You are searching a much more, much smoother, much nicer problem space than if you try to
00:14:50.400 | micromanage the thing. And in regenerative medicine, when you're trying to get, let's say,
00:14:54.320 | an arm to grow back, or an eye to repair, or to fix a birth defect or something, do you really want to
00:14:58.880 | be controlling tens of thousands of genes at each point to try to micromanage it? Or do you want to
00:15:04.640 | find the high-level modular controls that say, build an arm here? You already know how to build
00:15:09.360 | an arm, you did it before, do it again. So that's, I think it's both. It's both difficult and it
00:15:14.320 | challenges us to develop new ways of engineering, and it's hugely empowering.
00:15:18.560 | Okay, so how do you do, I mean, maybe sticking with the metaphor of dogs and cats,
00:15:24.000 | I presume you have to figure out the, find the dogs and dispose of the cats.
00:15:33.760 | Because, you know, it's like the old issue of herding cats. So you may be able to train dogs,
00:15:39.120 | I suspect you will not be able to train cats. Or if you do, you're never going to be able to trust
00:15:45.520 | them. So is there a way to figure out which material is amenable to herding? Is it in the lab
00:15:53.840 | work or is it in simulation? - Right now it's largely in the lab because our simulations do not
00:15:59.840 | yet capture the most interesting and powerful things about biology. So
00:16:04.960 | what we're pretty good at simulating are feed-forward emergent types of things, right?
00:16:10.720 | So cellular automata, if you have simple rules and you sort of roll those forward for every agent or
00:16:16.400 | every cell in the simulation, then complex things happen, you know, ant colony algorithms, things
00:16:20.480 | like that. We're good at that and that's fine. The difficulty with all of that is that it's
00:16:25.120 | incredibly hard to reverse. So this is a really hard inverse problem, right? If you look at a
00:16:28.960 | bunch of termites and they make a thing with a single chimney and you say, "Well, I like it,
00:16:32.720 | but I'd like two chimneys." How do you change the rules of behavior of the termites so they make two
00:16:37.200 | chimneys, right? Or if you say, "Here are a bunch of cells that are creating this kind of organism.
00:16:41.920 | I don't think that's optimal. I'd like to repair that birth defect." How do you control all the
00:16:47.040 | individual low-level rules, right? All the protein interactions and everything else. Rolling it back
00:16:51.120 | from the anatomy that you want to the low-level hardware rules is in general intractable. It's
00:16:56.080 | an inverse problem that's generally not solvable. So right now it's mostly in the lab because what
00:17:01.600 | we need to do is we need to understand how biology uses top-down controls. So the idea is not bottom
00:17:06.720 | up emergence, but the idea of things like goal-directed test-operate-test-exit (TOTE) kinds of loops,
00:17:13.520 | where it's basically an error minimization function over a new space. It's not a space
00:17:17.840 | of gene expression, but for example, a space of anatomy. So just as a simple example, if you have
00:17:22.560 | a salamander and it's got an arm, you can amputate that arm anywhere along the length. It will grow
00:17:28.560 | exactly what's needed and then it stops. That's the most amazing thing about regeneration is that
00:17:32.400 | it stops. It knows when to stop. When does it stop? It stops when a correct salamander arm has
00:17:36.480 | been completed. So that tells you that's a means-ends kind of analysis where it has to know
00:17:43.040 | what the correct limb is supposed to look like, right? So it has a way to ascertain the current
00:17:47.520 | shape. It has a way to measure that delta from what shape it's supposed to be. And then it will
00:17:51.600 | keep taking actions, meaning remodeling and growing and everything else until that's complete.
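
(A minimal sketch of the homeostatic, test-operate-test-exit style loop just described: measure the current "shape," compare it against a stored setpoint, keep acting until the error is within tolerance, then stop. All quantities are illustrative; this is not a model of real salamander regeneration.)

```python
# The "it knows when to stop" property as an error-minimization loop.

TARGET_LIMB_LENGTH = 10.0   # stored setpoint: what a correct limb looks like
TOLERANCE = 0.05

def measure(state):
    return state["limb_length"]            # Test: ascertain the current shape

def act(state, error):
    state["limb_length"] += 0.3 * error    # Operate: grow / remodel toward the target

def regenerate(state):
    while True:
        error = TARGET_LIMB_LENGTH - measure(state)   # Test again
        if abs(error) < TOLERANCE:
            break                                     # Exit: correct shape reached, stop
        act(state, error)
    return state

print(regenerate({"limb_length": 4.0}))    # e.g. a limb amputated partway along its length
```
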
00:17:55.600 | So once you know that, and we've taken advantage of this in the lab to do some really wild things
00:17:59.440 | with both planaria and frog embryos and so on. Once you know that, you can start playing with
00:18:04.880 | that homeostatic cycle. You can ask, for example, "Well, how does it remember what the correct shape
00:18:09.680 | is and can we mess with that memory? Can we give it a false memory of what the shape should be and
00:18:13.120 | let the cells build something else? Or can we mess with the measurement apparatus?" Right?
00:18:16.800 | So it gives you those kinds of... So the idea is to basically appropriate a lot of the
00:18:24.160 | approaches and concepts from cognitive neuroscience and behavioral science into things that previously
00:18:31.280 | were taken to be dumb materials. And you'd get yelled at in class for being anthropomorphic
00:18:36.480 | if you said, "Well, my cells want to do this and my cells want to do that." And I think that's a
00:18:40.560 | major mistake that leaves a ton of capabilities on the table. - So thinking about biological systems
00:18:45.200 | as things that have memory, have almost something like cognitive ability, but I mean,
00:18:53.520 | how incredible is it that the salamander arm is being rebuilt not with a dictator. It's kind of
00:19:03.440 | like the cellular automata system. All the individual workers are doing their own thing.
00:19:07.280 | So where's that top-down signal that does the control coming from? How can you find it?
00:19:14.560 | - Yeah. - Like, why does it stop growing?
00:19:16.720 | How does it know the shape? How does it have memory of the shape? And how does it tell everybody to
00:19:22.000 | be like, "Whoa, whoa, whoa, slow down, we're done." - So the first thing to think about, I think,
00:19:26.480 | is that there are no examples anywhere of a central dictator in this kind of science, because
00:19:34.320 | everything is made of parts. And so we, even though we feel as a unified central sort of
00:19:41.120 | intelligence and kind of point of cognition, we are a bag of neurons, right? All intelligence is
00:19:46.640 | collective intelligence. This is important to kind of think about, because a lot of people think,
00:19:52.000 | "Okay, there's real intelligence, like me, and then there's collective intelligence, which is
00:19:56.480 | ants and flocks of birds and termites and things like that. And maybe it's appropriate to think of
00:20:02.880 | them as an individual, and maybe it's not, and a lot of people are skeptical about that and so on.
00:20:07.840 | But you've got to realize that we are not, there's no such thing as this indivisible diamond
00:20:12.720 | of intelligence that's like this one central thing that's not made of parts. We are all made of parts.
00:20:17.120 | And so if you believe, which I think is hard to get around, that we in fact have a centralized
00:20:24.320 | set of goals and preferences and we plan and we do things and so on, you are already committed to the
00:20:29.840 | fact that a collection of cells is able to do this, because we are a collection of cells. There's no
00:20:33.760 | getting around that. In our case, what we do is we navigate the three-dimensional world,
00:20:37.680 | and we have behavior. >> This is blowing my mind right now,
00:20:40.240 | because we are just a collection of cells. >> Oh, yeah, yeah.
00:20:42.480 | >> So when I'm moving this arm, I feel like I'm the central dictator of that action. But there's a lot
00:20:51.440 | of stuff going on. All the cells here are collaborating in some interesting way. They're
00:20:58.000 | getting signal from the central nervous system. >> Well, even the central nervous system is
00:21:02.720 | misleadingly named, because it isn't really central. Again, it's what-
00:21:07.040 | >> It's just a bunch of cells. >> It's just a bunch of cells. I mean,
00:21:09.280 | there are no singular indivisible intelligences anywhere. Every example that we've ever seen
00:21:16.880 | is a collective of something. It's just that we're used to it. We're used to that, we're used to,
00:21:21.360 | okay, this thing is kind of a single thing, but it's really not. You zoom in, you know what you
00:21:24.400 | see. You see a bunch of cells running around. >> Is there some unifying, I mean, we're
00:21:29.360 | jumping around, but is that something that you look at: the bioelectrical signal versus the
00:21:36.000 | biochemical, the chemistry versus the electricity? Maybe the life is in that, versus the cells.
00:21:46.880 | There's an orchestra playing, and the resulting music is the dictator.
00:21:56.480 | >> That's not bad. That's Denis Noble's view of things. He has two really good books where he
00:22:02.880 | talks about this musical analogy. I like it. >> Is it wrong, though?
00:22:08.640 | >> No, I don't think it's wrong. I don't think it's wrong. I think the important thing about it
00:22:15.200 | is that we have to come to grips with the fact that a true, proper cognitive intelligence can
00:22:24.080 | still be made of parts. Those things are, and in fact, it has to be. I think it's a real shame,
00:22:28.480 | but I see this all the time. When you have a collective like this, whether it be a group of
00:22:34.240 | robots or a collection of cells or neurons or whatever, as soon as we gain some insight into
00:22:41.440 | how it works, meaning that, oh, I see, in order to take this action, here's the information that
00:22:45.920 | got processed via this chemical mechanism or whatever, immediately people say, oh, well,
00:22:50.720 | then that's not real cognition. That's just physics. I think this is fundamentally flawed
00:22:55.200 | because if you zoom into anything, what are you going to see? Of course, you're just going to see
00:22:58.880 | physics. What else could be underneath? It's not going to be fairy dust. It's going to be physics
00:23:02.000 | and chemistry. But that doesn't take away from the magic of the fact that there are certain ways to
00:23:06.400 | arrange that physics and chemistry, and in particular, the bioelectricity, which I like a lot,
00:23:10.320 | to give you an emergent collective with goals and preferences and memories and anticipations
00:23:18.640 | that do not belong to any of the subunits. So, I think what we're getting into here,
00:23:22.160 | and we can talk about how this happens during embryogenesis and so on, what we're getting into
00:23:26.640 | is the origin of a self with a capital S. So, we are selves. There are many other kinds of selves,
00:23:33.680 | and we can tell some really interesting stories about where selves come from and how they become
00:23:37.440 | unified. - Yeah, is this the first,
00:23:39.520 | or at least humans tend to think that this is the level at which the self with a capital S is first
00:23:45.680 | born, and we really don't want to see human civilization or Earth itself as one living
00:23:54.080 | organism. That's very uncomfortable to us. - It is, yeah.
00:23:57.840 | - But where's the self born? - We have to grow up past that. So,
00:24:02.800 | what I like to do is, I'll tell you two quick stories about that. I like to roll backwards.
00:24:07.440 | So, if you start and you say, "Okay, here's a paramecium," and you see it, it's a single cell
00:24:12.800 | organism, you see it doing various things, and people will say, "Okay, I'm sure there's some
00:24:16.640 | chemical story to be told about how it's doing it, so that's not true cognition," and people
00:24:20.640 | will argue about that. I like to work it backwards. I say, "Let's agree that you and I, as we sit here,
00:24:26.800 | are examples of true cognition, if anything is, if there's anything that's true cognition,
00:24:30.400 | we are examples of it." Now, let's just roll back slowly. So, you roll back to the time
00:24:34.400 | when you were a small child and used to doing whatever, and then just sort of day by day,
00:24:38.640 | you roll back, and eventually, you become more or less that paramecium, and then you're sort of even
00:24:43.360 | below that, as an unfertilized oocyte. So, to my knowledge, no one has come up with any convincing
00:24:52.640 | discrete step at which my cognitive powers disappear. Biology doesn't offer any specific
00:24:59.920 | step. It's incredibly smooth and slow and continuous. And so, I think this idea that
00:25:04.560 | it just sort of magically shows up at one point, and then humans have true selves that don't exist
00:25:10.960 | elsewhere, I think it runs against everything we know about evolution, everything we know about
00:25:14.640 | developmental biology. These are all slow, continuous. And the other really important
00:25:19.040 | story I want to tell is where embryos come from. So, think about this for a second. Amniote embryos,
00:25:24.000 | so this is mammals and birds, humans and so on. Imagine a flat disk of cells,
00:25:30.000 | so there's maybe 50,000 cells. And in that, so when you get a fertilized egg, let's say
00:25:35.760 | you buy a fertilized egg from a farm, right? That egg will have about 50,000 cells in a flat disk,
00:25:43.200 | looks like a little tiny little frisbee. And in that flat disk, what'll happen is
00:25:48.160 | one set of cells will become special, and it will tell all the other cells, "I'm going
00:25:56.480 | to be the head, you guys don't be the head." And so, through symmetry breaking and amplification,
00:26:00.640 | you get one embryo. There's some neural tissue and some other stuff that forms.
00:26:04.000 | Now, you say, "Okay, I had one egg and one embryo, and there you go, what else could it be?"
00:26:09.680 | Well, the reality is, and I used to, I did all of this as a grad student, if you take a little needle
00:26:15.520 | and you make a scratch in that blastoderm, in that disk, such that the cells can't talk to each other
00:26:20.480 | for a while, it heals up, but for a while, they can't talk to each other. What'll happen is that
00:26:24.800 | both regions will decide that they can be the embryo, and there'll be two of them. And then
00:26:29.280 | when they heal up, they become conjoined twins, and you can make two, you can make three, you can
00:26:32.800 | make lots. So the question of how many selves are in there cannot be answered until it's actually
00:26:40.240 | played all the way through. It isn't necessarily that there's just one, there can be many.
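
(A toy sketch of this point, assuming a simple self-amplification plus mutual-inhibition rule: a sheet of near-identical cells normally settles on one organizing center, but cutting communication between the two halves (the scratch) yields one winner per half. Purely illustrative; not a model of real blastoderm signaling.)

```python
import random

# Each cell amplifies its own "organizer" signal while competing, via normalization,
# with every cell it can still communicate with. Tiny initial differences get amplified
# until one winner remains per communicating region.

def settle(n_cells=20, cut=None, steps=30, seed=1):
    rng = random.Random(seed)
    signal = [1.0 + 1e-3 * rng.random() for _ in range(n_cells)]   # near-identical cells
    regions = [range(n_cells)] if cut is None else [range(cut), range(cut, n_cells)]
    for _ in range(steps):
        for region in regions:                       # communication only within a region
            total = sum(signal[i] ** 2 for i in region)
            for i in region:
                signal[i] = signal[i] ** 2 / total   # self-amplification plus mutual inhibition
    return [i for i, s in enumerate(signal) if s > 0.5]             # dominant "head" cells

print("intact disk:", settle())          # one organizing center
print("scratched  :", settle(cut=10))    # one organizing center per half
```
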
00:26:43.760 | So what you have is you have this medium, this undifferentiated, I'm sure there's a psychological
00:26:48.880 | version of this somewhere, I don't know the proper terminology, but you have this
00:26:53.280 | kind of ocean of potentiality. You have these thousands of cells, and some number of individuals
00:26:59.120 | are going to be formed out of it, usually one, sometimes zero, sometimes several. And they form
00:27:05.280 | out of these cells because a region of these cells organizes into a collective that will have goals,
00:27:11.280 | goals that individual cells don't have. For example, make a limb, make an eye, how many eyes?
00:27:16.160 | Well, exactly two. So individual cells don't know what an eye is. They don't know how many eyes
00:27:19.760 | you're supposed to have, but the collective does. The collective has goals and memories and
00:27:23.360 | anticipations that the individual cells don't. And the establishment of that boundary with its own
00:27:28.400 | ability to pursue certain goals, that's the origin of selfhood.
00:27:34.640 | - But is that goal in there somewhere? Are they always destined? Are they discovering that goal?
00:27:45.600 | Where the hell did evolution discover this? When you went from the prokaryotes to eukaryotic cells,
00:27:52.640 | and then they started making groups. And when you make a certain group, you make it sound like
00:27:59.760 | such a tricky thing to try to understand. You make it sound like the cells didn't get together
00:28:07.840 | and came up with a goal. But the very act of them getting together revealed the goal that was
00:28:16.640 | always there. There was always that potential for that goal.
00:28:19.440 | - So the first thing to say is that there are way more questions here than certainties. Okay,
00:28:23.520 | so everything I'm telling you is cutting edge, developing stuff. So it's not as if any of us
00:28:28.480 | know the answer to this. But here's my opinion on this. I don't think that evolution produces
00:28:36.160 | solutions to specific problems. In other words, specific environments. Like here's a frog that
00:28:40.080 | can live well in a froggy environment. I think what evolution produces is problem solving machines
00:28:46.160 | that will solve problems in different spaces. So not just three-dimensional space. This goes
00:28:51.200 | back to what we were talking about before. The brain is evolutionarily a late development.
00:28:56.640 | It's a system that is able to pursue goals in three-dimensional space by giving commands to
00:29:02.240 | muscles. Where did that system come from? That system evolved from a much more ancient,
00:29:06.160 | evolutionarily much more ancient system where collections of cells gave instructions for cell
00:29:12.880 | behaviors, meaning cells move to divide, to die, to change into different cell types,
00:29:18.640 | to navigate morphospace, the space of anatomies, the space of all possible anatomies.
00:29:23.440 | And before that, cells were navigating transcriptional space, which is a space
00:29:27.280 | of all possible gene expressions. And before that, metabolic space. So what evolution has done,
00:29:31.840 | I think, is produced hardware that is very good at navigating different spaces using a bag of tricks,
00:29:39.440 | right? Which I'm sure many of them we can steal for autonomous vehicles and robotics and various
00:29:43.360 | things. And what happens is that they navigate these spaces without a whole lot of commitment
00:29:48.480 | to what the space is. In fact, they don't know what the space is, right? We are all brains in a
00:29:52.240 | vat, so to speak. Every cell does not know, right? Every cell is some other cell's external environment,
00:29:59.280 | right? So where does that border between you and the outside world, you don't really know where
00:30:04.000 | that is, right? Every collection of cells has to figure that out from scratch. And the fact that
00:30:08.640 | evolution requires all of these things to figure out what they are, what effectors they have,
00:30:13.760 | what sensors they have, where does it make sense to draw a boundary between me and the outside world?
00:30:17.760 | The fact that you have to build all that from scratch, this autopoiesis, is what defines the
00:30:22.720 | border of a self. Now, biology uses a multi-scale competency architecture, meaning that every level
00:30:30.160 | has goals. So molecular networks have goals, cells have goals, tissues, organs, colonies.
00:30:36.160 | And it's the interplay of all of those that enable biology to solve problems in new ways,
00:30:42.640 | for example, in xenobots and various other things. It's exactly as you said, in many ways,
00:30:50.640 | the cells are discovering new ways of being, but at the same time, evolution certainly shapes all
00:30:56.080 | this. So evolution is very good at this agential bioengineering, right? When evolution is discovering
00:31:02.640 | a new way of being an animal, an animal or a plant or something, sometimes it's by changing
00:31:07.120 | the hardware, you know, protein, changing protein structure and so on. But much of the time,
00:31:12.160 | it's not by changing the hardware, it's by changing the signals that the cells give to each other.
00:31:15.840 | It's doing what we as engineers do, which is try to convince the cells to do various things by
00:31:19.760 | using signals, experiences, stimuli. That's what biology does. It has to, because it's not dealing
00:31:24.800 | with a blank slate. Every time, as you know, if you're evolution and you're trying to make an
00:31:30.720 | organism, you're not dealing with a passive material that is fresh and you have to specify.
00:31:35.520 | It already wants to do certain things. So the easiest way to do that search, to find whatever
00:31:40.000 | is going to be adaptive, is to find the signals that are going to convince the cells to do various
00:31:45.280 | things, right? - Your sense is that evolution operates both in the software and the hardware,
00:31:50.080 | and it's just easier and more efficient to operate in the software. - Yes, and I should also say,
00:31:56.080 | I don't think the distinction is sharp. In other words, I think it's a continuum,
00:32:00.080 | but I think it's a meaningful distinction where you can make changes to a particular protein and
00:32:05.840 | now the enzymatic function is different and it metabolizes differently and whatever, and that
00:32:09.440 | will have implications for fitness. Or you can change the huge amount of information in the
00:32:17.120 | genome that isn't structural at all. It's signaling. It's when and how do cells say
00:32:22.080 | certain things to each other, and that can have massive changes as far as how it's going to solve
00:32:26.400 | problems. - I mean, this idea of multi-hierarchical competence architecture, which is incredible to
00:32:31.920 | think about. So this hierarchy that evolution builds, I don't know who's responsible for this,
00:32:39.440 | I also see the incompetence of bureaucracies of humans when they get together.
00:32:45.840 | So how the hell does evolution build this? Where at every level, only the best get to stick around,
00:32:54.880 | they somehow figure out how to do their job without knowing the bigger picture,
00:32:58.240 | and then there's the bosses that do the bigger thing somehow. Or you can now abstract away the
00:33:06.720 | small group of cells as an organ or something, and then that organ does something bigger
00:33:13.040 | in the context of the full body or something like this. How is that built? Is there some
00:33:19.920 | intuition you can kind of provide of how that's constructed, that hierarchical competence
00:33:26.480 | architecture? I love that. Competence, just the word competence is pretty cool in this context,
00:33:31.920 | because everybody's good at their job somehow. - Yeah, no, it's really key. And the other nice
00:33:36.240 | thing about competency is that, so my central belief in all of this is that engineering is the
00:33:42.400 | right perspective on all of this stuff, because it gets you away from subjective terms. People
00:33:48.960 | talk about sentience and this and that, those things are very hard to define, people argue
00:33:53.440 | about them philosophically. I think that engineering terms like competency, like
00:33:58.400 | pursuit of goals, all of these things are empirically incredibly useful, because you
00:34:05.520 | know it when you see it. And if it helps you build, if I can pick the right level, I say,
00:34:10.480 | this thing has, I believe this is X level of competency, I think it's like a thermostat,
00:34:16.480 | or I think it's like a better thermostat, or I think it's various other kinds of,
00:34:22.960 | many different kinds of complex systems. If that helps me to control and predict and build such
00:34:28.240 | systems, then that's all there is to say, there's no more philosophy to argue about.
00:34:31.520 | So I like competency in that way, because you can quantify, you have to, in fact, you have to,
00:34:35.360 | you have to make a claim, competent at what? And then, or if I say, if I tell you it has a goal,
00:34:39.200 | the question is, what's the goal and how do you know? And I say, well, because every time I deviated
00:34:43.440 | from this particular state, that's what it spends energy to get back to, that's the goal, and we can
00:34:47.120 | quantify it and we can be objective about it. So we're not used to thinking about this, I give a
00:34:53.760 | talk sometimes called, why don't robots get cancer? And the reason robots don't get cancer is because
00:34:58.160 | generally speaking, with a few exceptions, our architectures have been, you've got a bunch of
00:35:02.160 | dumb parts and you hope that if you put them together, the overlying machine will have some
00:35:08.080 | intelligence and do something or other, right? But the individual parts don't care, they don't have
00:35:11.360 | an agenda. Biology isn't like that, every level has an agenda and the final outcome is the result
00:35:18.800 | of cooperation and competition, both within and across levels. So for example, during embryogenesis,
00:35:24.240 | your tissues and organs are competing with each other, and it's actually a really important part
00:35:27.920 | of development, there's a reason they compete with each other, they're not all just sort of
00:35:32.160 | helping each other, they're also competing for information and for limited metabolic
00:35:38.000 | constraints. But to get back to your other point, which is, this seems like really efficient and
00:35:45.360 | good and so on compared to some of our human efforts, we also have to keep in mind that
00:35:50.240 | what happens here is that each level bends the option space for the level beneath, so that your
00:35:57.440 | parts basically, they don't see the geometry, so I'm using, and I think I take this seriously,
00:36:05.280 | terminology from like relativity, right, where the space is literally bent. So the option space
00:36:12.240 | is deformed by the higher level so that the lower levels, all they really have to do is go down their
00:36:16.640 | concentration gradient, they don't have to, in fact, they can't know what the big picture is.
00:36:20.720 | But if you bend the space just right, if they do what locally seems right, they end up doing your
00:36:25.520 | bidding, they end up doing things that are optimal in the higher space. Conversely, because the
00:36:31.520 | components are good at getting their job done, you as the higher level don't need to try to compute
00:36:37.520 | all the low-level controls, all you're doing is bending the space, you don't know or care how
00:36:41.280 | they're going to do it. Give you a super simple example, in the tadpole, we found that, okay, so
00:36:46.160 | tadpoles need to become frogs and to go from a tadpole head to a frog head, you have to rearrange
00:36:51.440 | the face, so the eyes have to move forward, the jaws have to come out, the nostrils move,
00:36:54.880 | everything moves. It used to be thought that because all tadpoles look the same and all frogs
00:36:59.600 | look the same, if you just remember, if every piece just moves in the right direction, the right
00:37:02.800 | amount, then you get your frog, right? So we decided to test, I had this hypothesis that I
00:37:08.000 | thought actually the system is probably more intelligent than that, so what did we do?
00:37:11.600 | We made what we call Picasso tadpoles. So everything is scrambled, so the eyes are on the
00:37:15.920 | back of the head, the jaws are off to the side, everything is scrambled. Well, guess what they
00:37:18.960 | make? They make pretty normal frogs because all the different things move around in novel paths,
00:37:24.000 | configurations, until they get to the correct froggy, sort of frog face configuration,
00:37:28.240 | then they stop. So the thing about that is now imagine evolution, right? So you make some sort
00:37:34.240 | of mutation and it does, like every mutation, it does many things. So something good comes of it,
00:37:40.560 | but also it moves your mouth off to the side, right? Now, if there wasn't this multi-scale
00:37:46.160 | competency, you can see where this is going, if there wasn't this multi-scale competency,
00:37:49.360 | the organism would be dead, your fitness is zero because you can't eat, and you would never get to
00:37:53.200 | explore the other beneficial consequences of that mutation. You'd have to wait until you find some
00:37:57.680 | other way of doing it without moving the mouth, that's really hard. So the fitness landscape would
00:38:02.000 | be incredibly rugged, evolution would take forever. The reason it works, well, one of the reasons it
00:38:06.400 | works so well is because you do that, no worries, the mouth will find its way where it belongs,
00:38:11.920 | right? So now you get to explore. So what that means is that all of these mutations that otherwise
00:38:16.240 | would be deleterious are now neutral because the competency of the parts make up for all kinds of
00:38:22.640 | things. So all the noise of development, all the variability in the environment, all these things,
00:38:27.520 | the competency of the parts makes up for it. So that's all fantastic, right? That's all great.
00:38:34.160 | The only other thing to remember when we compare this to human efforts is this,
00:38:37.680 | every component has its own goals in various spaces, usually with very little regard for
00:38:43.040 | the welfare of the other levels. So as a simple example, you as a complex system, you will go out
00:38:50.320 | and you will do jujitsu or whatever, you'll have some goal, you go rock climbing and scrape
00:38:54.560 | a bunch of cells off your hands, and then you're happy as a system, right? You come back and you've
00:38:58.800 | accomplished some goals and you're really happy. Those cells are dead, they're gone, right? Did
00:39:02.480 | you think about those cells? Not really, right? You had some bruising, you know.
00:39:05.600 | >> Selfish SOB.
00:39:06.880 | >> Right? That's it. And so that's the thing to remember is that, and we know this from history,
00:39:13.280 | is that just being a collective isn't enough because what the goals of that collective will
00:39:19.520 | be relative to the welfare of the individual parts is a massively open question.
00:39:23.040 | >> The ends justify the means. I'm telling you, Stalin was onto something.
00:39:26.720 | >> So that's the danger.
00:39:28.160 | >> Exactly, that's the danger of, for us humans, we have to construct ethical systems
00:39:35.280 | under which we don't take seriously the full mechanism of biology and apply it to the way
00:39:42.640 | the world functions, which is an interesting line we've drawn. The world that built us
00:39:49.440 | is the one we reject in some sense when we construct human societies. The idea that this
00:39:57.760 | country was founded on that all men are created equal, that's such a fascinating idea. It's like
00:40:04.160 | you're fighting against nature and saying, well, there's something bigger here than
00:40:12.480 | a hierarchical competency architecture.
00:40:15.360 | >> Yeah.
00:40:16.240 | >> But there's so many interesting things you said. So from an algorithmic perspective,
00:40:21.280 | the act of bending the option space, that's really profound. Because if you look at the way
00:40:31.280 | AI systems are built today, there's a big system, like you said, with robots, and it has a goal,
00:40:38.560 | and it gets better and better at optimizing that goal, at accomplishing that goal.
00:40:42.080 | But if biology built a hierarchical system where everything is doing computation,
00:40:49.120 | and everything is accomplishing the goal, not only that, it's kind of dumb,
00:40:55.200 | with the limited, with the bent option space, it's just doing the thing that's the easiest thing for
00:41:04.480 | it in some sense. And somehow that allows you to have turtles on top of turtles, literally,
00:41:12.640 | dumb systems on top of dumb systems, that as a whole creates something incredibly smart.
00:41:17.920 | >> Yeah, I mean, every system has some degree of intelligence in its own problem domain. So
00:41:24.560 | cells will have problems they're trying to solve in physiological space and transcriptional space,
00:41:30.640 | and then I could give you some cool examples of that. But the collective is trying to solve
00:41:34.720 | problems in anatomical space, right, and forming a creature and growing your blood vessels and so on.
00:41:40.240 | And then the whole body is solving yet other problems. They may be in social space and
00:41:46.080 | linguistic space and three-dimensional space. And who knows, the group might be solving problems in
00:41:50.880 | I don't know, some sort of financial space or something. So one of the major differences with
00:41:59.520 | most AIs today is A, the kind of flatness of the architecture, but also of the fact that
00:42:06.160 | they are constructed from outside their borders. So to a large extent, and of course, there are
00:42:16.880 | counter examples now, but to a large extent, our technology has been such that you create a machine
00:42:21.200 | or a robot, it knows what its sensors are, it knows what its effectors are, it knows the boundary
00:42:26.880 | between it and the outside world, all of this is given from the outside. Biology constructs this
00:42:31.600 | from scratch. Now, the best example of this, originally in robotics, was actually Josh
00:42:37.680 | Bongard's work in 2006, where he made these robots that did not know their shape to start with. So
00:42:42.800 | like a baby, they sort of floundered around, they made some hypotheses, well, I did this and I moved
00:42:46.720 | in this way, well, maybe I'm a whatever, maybe I have wheels, or maybe I have six legs or whatever,
00:42:51.040 | right, and they would make a model and eventually would crawl around. So that's, I mean, that's
00:42:54.320 | really good, that's part of the autopoiesis, but we can go a step further, and some people are doing
00:42:58.320 | this, and then we're sort of working on some of this too, is this idea that let's even go back
00:43:02.800 | further, you don't even know what sensors you have, you don't know where you end and the outside world
00:43:07.040 | begins. All you have is certain things like active inference, meaning you're trying to minimize
00:43:11.520 | surprise, right? You have some metabolic constraints, you don't have all the energy you
00:43:15.280 | need, you don't have all the time in the world to think about everything you want to think about.
00:43:18.880 | So that means that you can't afford to be a micro-reductionist, you know, all this data
00:43:23.040 | coming in, you have to coarse-grain it and say, I'm going to take all this stuff, I'm going to
00:43:26.560 | call that a cat, I'm going to take all this, I'm going to call that the edge of the table I don't
00:43:29.600 | want to fall off of, and I don't want to know anything about the microstates, what I want to
00:43:32.960 | know is what is the optimal way to cut up my world, and by the way, this thing over here, that's me,
00:43:37.520 | and the reason that's me is because I have more control over this than I have over any of this
00:43:41.280 | other stuff, and so now you can begin to, right, so that's self-construction, that figuring out,
00:43:45.760 | making models of the outside world, and then turning that inwards and starting to make a
00:43:49.280 | model of yourself, right, which immediately starts to get into issues of agency and control, because
00:43:55.040 | in order to, if you are under metabolic constraints, meaning you don't have the energy,
00:44:00.800 | right, all the energy in the world, you have to be efficient, that immediately forces you to start
00:44:05.840 | telling stories about coarse-grained agents that do things, right, you don't have the energy to,
00:44:10.480 | like Laplace's demon, you know, calculate every possible state that's going to happen, you have
00:44:16.000 | to, you have to coarse-grain, and you have to say, that is the kind of creature that does things,
00:44:20.720 | either things that I avoid or things that I will go towards, that's a mate or food or whatever
00:44:24.400 | it's going to be, and so right at the base of simple, very simple organisms starting to make
00:44:30.480 | models of agents doing things, that is the origin of models of free will, basically, right,
00:44:38.800 | because you see the world around you as having agency, and then you turn that on yourself,
00:44:42.560 | and you say, wait, I have agency too, I do things, right, and then you make decisions about what
00:44:47.360 | you're going to do, so all of this, one model is to view all of those kinds of things as
00:44:52.640 | being driven by that early need to determine what you are and to do so, and to then take actions in
00:45:00.160 | the most energetically efficient space possible, right.
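
(A cartoon of the "that's me because I have more control over this" idea under those constraints: an agent emits random motor commands and tags as "self" the coarse-grained channels whose readings track its commands, and as "world" the ones that do not. All channel names and numbers are illustrative assumptions.)

```python
import random

def read_sensors(command, rng):
    # A toy world: one channel tightly coupled to the agent's action, two that are not.
    return {
        "arm_position": 0.9 * command + 0.1 * rng.gauss(0, 1),
        "room_temperature": 20.0 + rng.gauss(0, 0.5),
        "light_level": 0.2 * command + 2.0 * rng.gauss(0, 1),
    }

def infer_self(trials=500, threshold=0.5, seed=0):
    rng = random.Random(seed)
    commands, logs = [], {}
    for _ in range(trials):
        c = rng.uniform(-1, 1)               # wiggle the effectors at random
        commands.append(c)
        for name, value in read_sensors(c, rng).items():
            logs.setdefault(name, []).append(value)

    def corr(xs, ys):                        # plain Pearson correlation
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Channels the agent reliably controls get labeled "self"; the rest are "world".
    return {name: ("self" if abs(corr(commands, vals)) > threshold else "world")
            for name, vals in logs.items()}

print(infer_self())   # expected: arm_position -> "self", the other channels -> "world"
```
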
00:45:05.680 | - So free will emerges when you try to simplify, to tell a nice narrative about your environment. - I think that's very plausible,
00:45:10.640 | yeah. - Do you think free will is an illusion? So you're kind of implying that it's a useful hack.
00:45:18.080 | - Well, I'll say two things. The first thing is I think it's very plausible to say that
00:45:23.680 | any organism, or any agent, whether it's biological or not, any agent that
00:45:30.560 | self-constructs under energy constraints is going to believe in free will. We'll get to whether it
00:45:36.960 | has free will momentarily, but I think what it definitely drives is a view of yourself and the
00:45:41.840 | outside world as an agential view. I think that's inescapable. - So that's true for even primitive
00:45:46.800 | organisms. - I think so. Now, obviously, you have to scale down, right, so they don't have the kinds
00:45:53.920 | of complex metacognition that we have, so they can do long-term planning and thinking about free will
00:45:58.560 | and so on, but-- - But the sense of agency is really useful to accomplish tasks, simple or
00:46:04.320 | complicated. - That's right, in all kinds of spaces, not just in obvious three-dimensional
00:46:08.640 | space. I mean, we're very good at, the thing is, humans are very good at detecting agency
00:46:13.680 | of medium-sized objects moving at medium speeds in the three-dimensional world, right? We see a
00:46:19.600 | bowling ball and we see a mouse and we immediately know what the difference is, right, and how we're
00:46:22.880 | gonna-- - Mostly things you can eat or get eaten by. - Yeah, yeah, that's our training set, right?
00:46:28.000 | From the time you're little, your training set is visual data on this little chunk of your experience,
00:46:33.200 | but imagine if, from the time that we were born, we had innate senses of your blood chemistry,
00:46:39.680 | if you could feel your blood chemistry the way you can see, right, you had a high bandwidth connection,
00:46:43.600 | and you could feel your blood chemistry and you could see, you could sense all the things that
00:46:47.120 | your organs were doing, so your pancreas, your liver, all the things. If we had that, we would
00:46:52.240 | be very good at detecting intelligence in physiological space. We would know the level of
00:46:57.360 | intelligence that our various organs were deploying to deal with things that were coming, to anticipate
00:47:01.440 | the stimuli, but we're just terrible at that. We don't. In fact, when people talk about
00:47:05.840 | intelligence in these other spaces, a lot of people think that's
00:47:09.520 | just crazy because all we know is motion. - We do have access to that information, so it's
00:47:15.200 | actually possible that, so evolution could, if we wanted to, construct an organism that's able to
00:47:21.200 | perceive the flow of blood through your body. The way you see an old friend and say, "Yo, what's up?
00:47:29.760 | How's the wife and the kids?" In that same way, you would feel a connection to the liver.
00:47:36.320 | - Yeah, yeah. I think, you know-- - Maybe other people's liver, or no,
00:47:39.600 | just your own? Because you don't have access to other people's liver.
00:47:42.800 | - Not yet, but you could imagine some really interesting connection, right?
00:47:45.840 | - Like sexual selection, like, "Ooh, that girl's got a nice liver."
00:47:49.600 | - Well, that's what-- - The way her blood flows,
00:47:52.880 | the dynamics of the blood is very interesting. It's novel. I've never seen one of those.
00:47:58.960 | - But you know, that's exactly what we're trying to half-ass when we make
00:48:02.560 | judgments of beauty by facial symmetry and so on. That's a half-assed assessment of
00:48:08.800 | exactly that. Because if your cells could not cooperate enough to keep your organism symmetrical,
00:48:13.520 | you can make some inferences about what else is wrong, right? That's a very basic--
00:48:18.720 | - Interesting, yeah. So that, in some deep sense, actually, that is what we're doing. We're
00:48:25.280 | trying to infer how, we use the word healthy, but basically, how functional is this biological
00:48:34.720 | system I'm looking at so I can hook up with that one and make offspring?
00:48:40.960 | - Yeah, yeah. Well, what kind of hardware might their genomics give me that might be useful
00:48:45.840 | in the future? - I wonder why evolution didn't give us
00:48:48.720 | higher resolution signal. Like, why the whole peacock thing with the feathers? It doesn't seem,
00:48:55.040 | it's a very low bandwidth signal for sexual selection.
00:48:59.600 | - It is. I'm gonna, and I'm not an expert on this stuff, but--
00:49:02.160 | - On peacocks? - Well, no, but I'll take a stab at the
00:49:06.320 | reason. I think that it's because it's an arms race. You see, you don't want everybody to know
00:49:11.040 | everything about you. So I think that as much as, and in fact, there's another interesting part of
00:49:16.880 | this arms race, which is, if you think about this, the most adaptive, evolvable system is one that
00:49:24.320 | has the most level of top-down control, right? If it's really easy to say to a bunch of cells,
00:49:29.920 | "Make another finger," versus, "Okay, here's 10,000 gene expression changes that you need to do to
00:49:35.840 | change your finger," right? The system with good top-down control that has memory, and
00:49:41.040 | we need to get back to that, by the way, that's a question I neglected to answer about where the
00:49:44.960 | memory is and so on. A system that uses all of that is really highly evolvable, and that's
00:49:50.480 | fantastic. But guess what? It's also highly subject to hijacking by parasites, by cheaters
00:49:57.680 | of various kinds, by conspecifics. We found that, and that goes back to the story of the pattern
00:50:03.520 | memory in these planaria, there's a bacterium that lives on these planaria. That bacterium has an
00:50:08.240 | input into how many heads the worm is gonna have, because it hijacks that control system, and it's
00:50:14.000 | able to make a chemical that basically interfaces with the system that calculates how many heads
00:50:18.320 | you're supposed to have, and they can make them have two heads. And so you can imagine that if
00:50:21.920 | you are too, so you wanna be understandable for your own parts to understand each other,
00:50:25.520 | but you don't wanna be too understandable, because you'll be too easily controllable.
00:50:28.880 | And so I think that, my guess is that that opposing pressure keeps us from being a super
00:50:36.400 | high bandwidth kind of thing where we can just look at somebody and know everything about them.
00:50:40.240 | - So it's a kind of biological game of Texas Hold 'Em.
00:50:43.200 | - Yeah.
00:50:43.600 | - You're showing some cards and you're hiding other cards, and that's part of it,
00:50:47.440 | and there's bluffing and there's, and all that, and then there's probably whole species that
00:50:52.880 | would do way too much bluffing. That's probably where peacocks fall. There's a book that,
00:50:59.440 | I don't remember if I read or if I read summaries of the book, but it's about evolution of beauty
00:51:07.200 | in birds. Where's that from? Is that a book, or does Richard Dawkins talk about it? But basically,
00:51:12.000 | there are some species that start to over-select for beauty. Not over-select, they just,
00:51:18.000 | for some reason, select for beauty. There is a case to be made, actually, now I'm starting to remember,
00:51:23.120 | I think Darwin himself made a case that you can select based on beauty alone.
00:51:28.480 | - Yeah.
00:51:29.520 | - So that beauty, there's a point where beauty doesn't represent some underlying biological
00:51:35.680 | truth. You start to select for beauty itself, and I think the deep question is there some
00:51:42.000 | evolutionary value to beauty? But it's an interesting kind of thought that this,
00:51:50.400 | can we deviate completely from the deep biological truth to actually appreciate some kind of,
00:51:57.600 | the summarization in itself? Let me get back to memory, 'cause this is a really interesting idea.
00:52:05.280 | How do a collection of cells remember anything? How do biological systems remember anything?
00:52:12.080 | How is that akin to the kind of memory we think of humans as having within our big cognitive engine?
00:52:18.400 | - Yeah. One of the ways to start thinking about bioelectricity is to ask ourselves,
00:52:23.040 | where did neurons, and all these cool tricks that the brain uses to run these amazing
00:52:31.120 | problem-solving abilities on basically an electrical network, come from, right? Where did that come
00:52:35.440 | from? That didn't just evolve up here out of nowhere, it must have evolved from something.
00:52:39.040 | And what it evolved from was a much more ancient ability of cells to form networks to solve other
00:52:45.520 | kinds of problems, for example, to navigate morphospace, to control the body's shape.
00:52:49.040 | And so all of the components of neurons, so ion channels, neurotransmitter machinery,
00:52:56.960 | electrical synapses, all this stuff is way older than brains, way older than neurons. In fact,
00:53:00.960 | older than multicellularity. And so it was already there, even bacterial biofilms,
00:53:05.920 | there's some beautiful work from UCSD on brain-like dynamics and bacterial biofilms.
00:53:11.120 | So evolution figured out very early on that electrical networks are amazing at having
00:53:15.680 | memories, at integrating information across distance, at different kinds of optimization
00:53:19.760 | tasks, image recognition and so on, long before there were brains.
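
For readers who want the concrete version of "electrical networks are amazing at having memories," the textbook illustration of this kind of network memory is a Hopfield-style associative memory. Below is a minimal sketch with made-up patterns and sizes, chosen purely for illustration, not anything specific from Levin's lab: a network of simple threshold units, with no neurons or brain required, stores patterns in its couplings and recalls them from partial, corrupted input.

```python
# Illustrative sketch: a tiny Hopfield-style associative memory. The patterns
# and network size here are invented for demonstration purposes only.
import numpy as np

rng = np.random.default_rng(0)

# Two stored "memories": +/-1 patterns over 16 units.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
])

# Hebbian couplings: units that agree across stored patterns get positive weights.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=10):
    """Asynchronously update units for a few sweeps so the network settles into an attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt the first memory by flipping 4 of its 16 units...
noisy = patterns[0].copy()
noisy[[0, 5, 9, 14]] *= -1

# ...and the network completes the pattern from the degraded cue.
print("recovered original:", np.array_equal(recall(noisy), patterns[0]))
```

Flip a few units and the dynamics relax back into the stored attractor; the memory lives in the pattern of couplings and the collective state, not in any single component.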
00:53:23.680 | - Can you actually step back and we'll return to it? What is bioelectricity? What is biochemistry?
00:53:30.080 | What are electrical networks? I think a lot of the biology community focuses on
00:53:36.160 | the chemicals as the signaling mechanisms that make the whole thing work. You have, I think,
00:53:45.760 | to a large degree, uniquely, maybe you can correct me on that, have focused on the
00:53:51.360 | bioelectricity, which is using electricity for signaling. There's also probably mechanical--
00:53:58.400 | - Sure, sure, back again.
00:53:59.440 | - Knocking on the door. So what's the difference and what's an electrical network?
00:54:06.160 | - Yeah, so I wanna make sure and kind of give credit where credit is due. So as far back as
00:54:11.920 | 1903 and probably late 1800s already, people were thinking about the importance of electrical
00:54:18.080 | phenomena in life. So I'm for sure not the first person to stress the importance of electricity.
00:54:23.760 | People, there were waves of research in the 30s, in the 40s, and then again, in the kind of 70s,
00:54:31.680 | 80s and 90s of sort of the pioneers of bioelectricity who did some amazing work on
00:54:36.080 | all this. I think what we've done that's new is to step away from this idea that, and I'll describe
00:54:42.560 | what the bioelectricity is, is step away from the idea that, well, here's another piece of physics
00:54:46.560 | that you need to keep track of to understand physiology and development, and to really start
00:54:51.040 | looking at this as saying, no, this is a privileged computational layer that gives you access to the
00:54:56.960 | actual cognition of the tissue of basal cognition. So merging that developmental biophysics with
00:55:02.160 | ideas and cognition of computation and so on, I think that's what we've done that's new.
00:55:05.920 | But people have been talking about bioelectricity for a really long time, and so I'll define that.
00:55:10.560 | So what happens is that if you have a single cell, cell has a membrane, in that membrane are
00:55:17.680 | proteins called ion channels, and those proteins allow charged molecules, potassium, sodium,
00:55:22.400 | chloride, to go in and out under certain circumstances. And when there's an imbalance
00:55:28.000 | of those ions, there becomes a voltage gradient across that membrane. And so all cells, all living
00:55:34.160 | cells, try to hold a particular kind of voltage difference across the membrane, and they spend a
00:55:39.520 | lot of energy to do so. So that's a single cell. When you have multiple cells, the cell sitting
00:55:47.600 | next to each other, they can communicate their voltage state to each other via a number of
00:55:52.320 | different ways, but one of them is this thing called a gap junction, which is basically like a
00:55:55.760 | little submarine hatch that just kind of docks, right? And the ions from one side can flow to the
00:56:00.480 | other side and vice versa. - Isn't it incredible that this evolved?
00:56:04.640 | Isn't that wild? 'Cause that didn't exist. - Correct, this had to be evolved.
00:56:11.440 | - It had to be invented. - That's right.
00:56:13.040 | - So somebody invented electricity in the ocean. When did this get invented?
00:56:17.200 | - Yeah, so I mean, it is incredible. The guy who discovered gap junctions,
00:56:22.800 | Werner Lowenstein, I visited him, he was really old.
00:56:25.280 | - A human being? - He discovered them.
00:56:27.280 | - 'Cause you know what, 'cause who really discovered them lived probably four billion years ago.
00:56:32.480 | - Good point. - So give credit where credit is due.
00:56:35.040 | - Good point. He rediscovered gap junctions. But when I visited him in Woods Hole maybe 20 years
00:56:42.720 | ago now, he told me that he was writing, and unfortunately he passed away and I think this
00:56:48.320 | book never got written. He was writing a book on gap junctions and consciousness. And I think it
00:56:53.120 | would have been an incredible book because gap junctions are magic. I'll explain why in a minute.
00:56:57.840 | What happens is that, just imagine, the thing about both these ion channels and these gap
00:57:03.600 | junctions is that many of them are themselves voltage sensitive. So that's a voltage sensitive
00:57:09.760 | current conductance, that's a transistor. And as soon as you've invented one, immediately you now
00:57:14.960 | get access to, from this platonic space of mathematical truths, you get access to all of
00:57:21.040 | the cool things that transistors do. So now when you have a network of cells, not only do they talk
00:57:26.800 | to each other, but they can send messages to each other and the differences of voltage can propagate.
00:57:31.200 | Now to neuroscientists, this is old hat because you see this in the brain, right? This action
00:57:34.960 | potential is the electricity. They have these awesome movies where you can take a transparent
00:57:41.760 | animal, like a zebrafish, and you can literally look down and you can see all the firings as the
00:57:47.040 | fish is making decisions about what to eat and things like this, right? It's amazing. Well,
00:57:50.240 | your whole body is doing that all the time, just much slower. So there are very few things that
00:57:55.440 | neurons do that all the cells in your body don't do. They all do very similar things, just on a
00:58:00.560 | much slower timescale. And whereas your brain is thinking about how to solve problems in three
00:58:05.520 | dimensional space, the cells in the embryo are thinking about how to solve problems in anatomical
00:58:10.640 | space. They're trying to have memories like, "Hey, how many fingers are we supposed to have? Well,
00:58:13.680 | how many do we have now? What do we do to get from here to there?" That's the kind of problems
00:58:17.360 | they're thinking about. And the reason that gap junctions are magic is, imagine, right, from the
00:58:24.480 | earliest time. Here are two cells. This cell, how can they communicate? Well, the simple version is,
00:58:32.080 | this cell could send a chemical signal, it floats over and it hits a receptor on this cell, right?
00:58:37.120 | Because it comes from outside, this cell can very easily tell that that came from outside.
00:58:41.280 | Whatever information is coming, that's not my information. That information is coming from
00:58:45.760 | the outside. So I can trust it, I can ignore it, I can do various things with it, whatever,
00:58:50.000 | but I know it comes from the outside. Now, imagine instead that you have two cells with a gap
00:58:53.680 | junction between them. Something happens, let's say this cell gets poked, there's a calcium spike,
00:58:57.680 | and the calcium spike or whatever small molecule signal propagates through the gap junction to this
00:59:02.560 | cell. There's no ownership metadata on that signal. This cell does not know now that it came
00:59:08.880 | from outside because it looks exactly like its own memories would have looked like of whatever
00:59:13.600 | had happened, right? So gap junctions, to some extent, wipe ownership information on data,
00:59:19.280 | which means that if you and I are sharing memories and we can't quite tell who the
00:59:24.560 | memories belong to, that's the beginning of a mind meld. That's the beginning of a scale up
00:59:28.640 | of cognition from here's me and here's you to no, now there's just us.
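
A purely illustrative toy of the "no ownership metadata" point (this is not any real cell-biology API, just a way to make the contrast concrete): a receptor-mediated message arrives tagged as coming from outside, while a gap-junction transfer is written straight into the receiving cell's state, indistinguishable from its own history.

```python
# Illustrative toy only: contrast a receptor-mediated message, which carries
# provenance, with a gap-junction transfer, which just becomes part of the
# receiving cell's own state.
class Cell:
    def __init__(self, name):
        self.name = name
        self.state = []                      # the cell's "memories"

    def receive_via_receptor(self, signal, sender):
        # the signal crosses the membrane from outside, so provenance is known
        self.state.append({"signal": signal, "source": sender})

    def receive_via_gap_junction(self, signal):
        # ions and small molecules flow straight in: no provenance, it looks
        # just like something that happened to this cell itself
        self.state.append({"signal": signal, "source": self.name})

a, b = Cell("A"), Cell("B")
b.receive_via_receptor("calcium spike", sender="A")
b.receive_via_gap_junction("calcium spike")
for memory in b.state:
    print(memory)
# {'signal': 'calcium spike', 'source': 'A'}   <- clearly from outside
# {'signal': 'calcium spike', 'source': 'B'}   <- indistinguishable from B's own activity
```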
00:59:32.720 | - So they enforce a collective intelligence, the gap junctions.
00:59:35.920 | - That's right. It helps, it's the beginning. It's not the whole story by any means,
00:59:38.640 | but it's the start. - Where's the state of the system stored?
00:59:44.800 | Is it in part in the gap junctions themselves? Is it in the cells?
00:59:48.640 | - There are many, many layers to this as always in biology. So there are chemical networks. So
00:59:55.520 | for example, gene regulatory networks, right, which are basically any kind of chemical pathway
01:00:00.320 | where different chemicals activate and repress each other, they can store memories. So in a
01:00:04.480 | dynamical system sense, they can store memories. They can get into stable states that are hard to
01:00:09.120 | pull them out of, right? So that becomes, once they get in, that's a memory, a permanent memory
01:00:12.720 | of some or semi-permanent memory of something that's happened. There are cytoskeletal structures,
01:00:17.200 | right, that are physically, they store memories in physical configuration. There are electrical
01:00:24.080 | memories like flip-flops where there is no physical change, right? So if you look, I show my
01:00:29.680 | students this example of a flip-flop, and the reason that it stores a zero or a one is not because a
01:00:34.800 | piece of the hardware moved. It's because there's a cycling of the current in one side of the thing.
01:00:42.160 | If I come over and I hold the other side to a high voltage for a brief period of time,
01:00:48.800 | it flips over and now it's here, but none of the hardware moved. The information is in a stable,
01:00:54.000 | dynamical sense. And if you were to x-ray the thing, you couldn't tell me if it was zero or
01:00:57.760 | one because all you would see is where the hardware is. You wouldn't see the energetic
01:01:00.960 | state of the system. So there are bioelectrical states that are held in that exact way, like
01:01:07.280 | volatile RAM basically, like in the electrical state of the system.
01:01:10.640 | It's very akin to the different ways the memory is stored in a computer.
01:01:14.880 | So there's RAM, there's hard drives.
01:01:18.640 | You can make that mapping, right? So I think the interesting thing is that based on the biology,
01:01:23.520 | we can have a more sophisticated, you know, I think we can revise some of our computer engineering
01:01:30.640 | methods because there are some interesting things that biology does that we haven't done yet,
01:01:35.520 | but that mapping is not bad. I mean, I think it works in many ways.
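
To make the flip-flop analogy concrete, here is a minimal sketch of a dynamical memory of the kind being described: two components that repress each other, integrated with simple Euler steps. The equations and parameters are made up (a standard toggle-switch toy, not a model of any specific circuit from the conversation). The stored bit lives entirely in the dynamical state: a transient input pulse flips which stable state the system sits in, while the "hardware" (the equations and parameters) never changes, which is why "x-raying" the parts would not reveal what is stored.

```python
# Made-up toggle-switch toy: x and y repress each other, so the system has two
# stable states ("x high" and "y high"). A brief drive on y rewrites the stored
# bit without changing any parameter -- the memory is in the dynamics.
def simulate(pulse_on_y=False, steps=20000, dt=0.01):
    x, y = 2.0, 0.1                                  # start in the "x high" state
    for t in range(steps):
        drive_y = 1.5 if (pulse_on_y and 5000 <= t < 8000) else 0.0
        dx = 3.0 / (1.0 + y ** 2) - x                # x is repressed by y
        dy = 3.0 / (1.0 + x ** 2) - y + drive_y      # y is repressed by x, plus optional pulse
        x, y = x + dx * dt, y + dy * dt
    return "x high" if x > y else "y high"

print("no pulse:       ", simulate(pulse_on_y=False))   # stays in the original state
print("transient pulse:", simulate(pulse_on_y=True))    # latches into the other state
```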
01:01:38.400 | Yeah, I wonder, because I mean, the way we build computers, at the root of computer science is the
01:01:43.280 | idea of proof of correctness. We program things to be perfect, reliable. You know, this idea of
01:01:52.240 | resilience and robustness to unknown conditions is not as important. So that's what biology is
01:01:58.000 | really good at. So I don't know what kind of systems, I don't know how we go from a computer
01:02:03.760 | to a biological system in the future. - Yeah, I think that, you know, the thing about
01:02:08.640 | biology, like is all about making really important decisions really quickly on very
01:02:13.920 | limited information. I mean, that's what biology is all about. You have to act, you have to act now,
01:02:18.080 | the stakes are very high, and you don't know most of what you need to know to be perfect.
01:02:22.400 | And so there's not even an attempt to be perfect or to get it right in any sense.
01:02:26.960 | There are just things like active inference, minimize surprise, optimize some efficiency,
01:02:33.840 | and some things like this, that guides the whole business. - I mentioned to you offline that
01:02:39.600 | somebody who's a fan of your work is Andrej Karpathy, and he's, amongst many things, also
01:02:47.360 | writes occasionally a great blog. He came up with this idea, I don't know if he coined the term,
01:02:55.360 | but of Software 2.0, where the programming is done in the space of configuring these artificial
01:03:04.720 | neural networks. Is there some sense in which that would be the future of programming for us humans,
01:03:10.640 | where we're less doing like Python-like programming and more, how would that look like?
01:03:21.520 | But basically doing the hyperparameters of something akin to a biological system,
01:03:28.000 | and watching it go, and keeping adjusting it, and creating some kind of feedback loop within
01:03:34.400 | the system so it corrects itself. And then we watch it over time accomplish the goals we want
01:03:41.680 | it to accomplish. Is that kind of the dream of the dogs that you described in the Nature paper?
01:03:47.200 | - Yeah, I mean, what you just painted is a very good description of our efforts at regenerative
01:03:55.440 | medicine as a kind of somatic psychiatry. So the idea is that you're not trying to micromanage. I
01:04:01.920 | mean, think about the limitations of a lot of the medicines today. We try to interact down at the
01:04:10.160 | level of pathways, right? So we're trying to micromanage it. What's the problem? Well, one
01:04:16.000 | problem is that for almost every medicine other than antibiotics, once you stop it, the problem
01:04:22.480 | comes right back. You haven't fixed anything. You were addressing symptoms. You weren't actually
01:04:25.760 | curing anything, again, except for antibiotics. That's one problem. The other problem is you have
01:04:30.560 | massive amount of side effects because you were trying to interact at the lowest level, right?
01:04:36.080 | It's like, I'm gonna try to program this computer by changing the melting point of copper. Maybe
01:04:43.360 | you can do things that way, but my God, it's hard to program at the hardware level. So what I think
01:04:50.560 | we're starting to understand is that, and by the way, this goes back to what you were saying before
01:04:55.520 | that we could have access to our internal state, right? So people who practice that kind of stuff,
01:05:00.640 | right? So yoga and biofeedback and those, those are all the people that uniformly will say things
01:05:05.760 | like, well, the body has an intelligence and this and that, right? Those two sets overlap perfectly
01:05:10.080 | because that's exactly right. Because once you start thinking about it that way, you realize that
01:05:15.280 | the better locus of control is not always at the lowest level. This is why we don't all program
01:05:20.080 | with a soldering iron, right? We take advantage of the high level intelligences that are there,
01:05:26.320 | which means trying to figure out, okay, which of your tissues can learn? What can they learn?
01:05:30.080 | Why is it that certain drugs stop working after you take them for a while with this habituation,
01:05:36.320 | right? And so can we understand habituation, sensitization, associative learning,
01:05:40.560 | these kinds of things in chemical pathways, we're going to have a completely different way, I think,
01:05:45.600 | we're going to have a completely different way of using drugs and of medicine in general,
01:05:50.320 | when we start focusing on the goal states and on the intelligence of our subsystems, as opposed to
01:05:56.000 | treating everything as if the only path was micromanagement from chemistry upwards.
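
As a toy illustration of what "habituation in a chemical pathway" could look like (a made-up two-variable model, not anything from the conversation or from Levin's lab): a response variable is driven by a stimulus, but activity slowly builds an adaptation variable that damps it, so the same dose produces a shrinking response with repetition and partially recovers after a long break.

```python
# Made-up habituation toy: response R is driven by stimulus S but damped by a
# slowly accumulating adaptation variable A. Repeating the same dose gives a
# shrinking peak response; a long rest lets A decay and the response recovers.
def pulse_response(R, A, dt=0.1, pulse_len=5.0, rest_len=45.0):
    """Deliver one stimulus pulse, then rest; return the peak response and final state."""
    peak, t = 0.0, 0.0
    while t < pulse_len + rest_len:
        S = 1.0 if t < pulse_len else 0.0
        dR = S / (1.0 + 5.0 * A) - 0.5 * R   # drive, damped by adaptation A
        dA = 0.02 * R - 0.005 * A            # A builds with activity, decays slowly
        R, A = R + dR * dt, A + dA * dt
        peak = max(peak, R)
        t += dt
    return peak, R, A

R, A, peaks = 0.0, 0.0, []
for _ in range(5):                           # five identical doses in a row
    peak, R, A = pulse_response(R, A)
    peaks.append(round(peak, 2))

for _ in range(6000):                        # long break: adaptation decays
    A += (-0.005 * A) * 0.1

peak, _, _ = pulse_response(R, A)            # same dose again after the break
print("responses to repeated doses:", peaks)
print("response after a long break:", round(peak, 2))
```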
01:05:59.840 | - Well, can you speak to this idea of somatic psychiatry? What are somatic cells? How do they
01:06:05.760 | form networks that use bioelectricity to have memory and all those kinds of things? What are
01:06:12.640 | somatic cells, like basics here? - Somatic cells just means the cells of your body, so much just
01:06:16.720 | means body, right? So somatic cells are just the, I'm not even specifically making a distinction
01:06:20.480 | between somatic cells and stem cells or anything like that. I mean, basically all the cells in your
01:06:24.400 | body, not just neurons, but all the cells in your body. They form electrical networks during
01:06:29.280 | embryogenesis, during regeneration. What those networks are doing in part is processing information
01:06:35.840 | about what our current shape is and what the goal shape is. Now, how do I know this? Because I can
01:06:42.000 | give you a couple of examples. One example is when we started studying this, we said, "Okay,
01:06:46.880 | here's a planarian. A planarian is a flatworm. It has one head and one tail normally." And the
01:06:52.160 | amazing, there's several amazing things about planaria, but basically they kind of, I think
01:06:56.640 | planaria hold the answer to pretty much every deep question of life. For one thing, they're similar to
01:07:02.560 | our ancestors. So they have true symmetry, they have a true brain. They're not like earthworms.
01:07:06.080 | They're a much more advanced life form. They have lots of different internal organs, but they're
01:07:09.600 | these little, they're about maybe a centimeter to two in size. They have a
01:07:14.720 | brain, a head and a tail. And the first thing is planaria are immortal. So they do not age. There's
01:07:19.920 | no such thing as an old planarian. So that right there tells you that these theories of thermodynamic
01:07:24.720 | limitations on lifespan are wrong. It's not that, well, over time everything degrades. No,
01:07:29.680 | planaria can keep it going for, probably, how long have they been around? 400 million years.
01:07:34.800 | Right? So the planaria in our lab are actually in physical continuity with planaria that were
01:07:40.320 | here 400 million years ago. - So there's planaria that have lived that long, essentially. What does
01:07:46.240 | it mean, physical continuity? - Because what they do is they split in half. The way they reproduce
01:07:51.280 | is they split in half. So the planaria, the back end grabs the petri dish, the front end takes off,
01:07:56.640 | and they rip themselves in half. - But isn't it some sense where
01:08:00.480 | like you are a physical continuation? - Yes, except that we go through a bottleneck of one cell,
01:08:07.920 | which is the egg. They do not. I mean, they can. There's certain planaria that- - Got it. So we go
01:08:11.760 | through a very ruthless compression process, and they don't. - Yes, like an autoencoder, you know,
01:08:17.600 | squash down to one cell and then back out. These guys just tear themselves in half, and then each,
01:08:22.640 | and then, and so the other amazing thing about them is they regenerate. So you can cut them
01:08:25.840 | into pieces. The record is, I think, 276 or something like that by Thomas Hunt Morgan.
01:08:30.240 | And each piece regrows a perfect little worm. They know exactly, every piece knows exactly
01:08:35.920 | what's missing, what needs to happen. In fact, if you chop it in half, as it grows the other half,
01:08:43.600 | the original tissue shrinks so that when the new tiny head shows up, they're proportional.
01:08:48.000 | So it keeps perfect proportion. If you starve them, they shrink. If you feed them again,
01:08:52.560 | they expand. Their control, their anatomical control is just insane. - Somebody cut them
01:08:57.440 | into over 200 pieces? - Yeah, yeah, yeah. Thomas Hunt Morgan did. - Hashtag science. - Yep, amazing.
01:09:02.560 | Yeah, and maybe more. I mean, they didn't have antibiotics back then. I bet he lost some due
01:09:05.600 | to infection. I bet it's actually more than that. I bet you could do more than that. - Humans can't
01:09:09.600 | do that. - Well, yes, I mean, again, true, except that-- - Maybe you can at the embryonic level. - Well,
01:09:17.120 | that's the thing, right? So when I talk about this, I say, just remember that as amazing as it
01:09:22.240 | is to grow a whole planarian from a tiny fragment, half of the human population can grow a full body
01:09:26.960 | from one cell, right? So development is really, you can look at development as just an example
01:09:33.360 | of regeneration. - Yeah, to think, we'll talk about regenerative medicine, but there's some
01:09:38.960 | sense it would be like that worm in like 500 years, where I can just go, regrow a hand. - Yep,
01:09:46.480 | given time, it takes time to grow large things, but-- - For now. - Yeah, I think so. - You can
01:09:52.080 | probably, why not accelerate? Oh, biology takes its time? - I'm not gonna say anything is impossible,
01:09:58.240 | but I don't know of a way to accelerate these processes. I think it's possible. I think we are
01:10:01.840 | going to be regenerative, but I don't know of a way to make it fast. - I can just think, people
01:10:06.320 | from a few centuries from now will be like, well, they used to have to wait a week for the hand to
01:10:13.200 | regrow. It's like when the microwave was invented. You can toast your, what's that called when you
01:10:20.400 | put a cheese on a toast? (laughs) It's delicious is all I know. I'm blanking, anywho. All right,
01:10:29.280 | so planaria, why were we talking about the magical planaria, that they have the mystery of life?
01:10:34.320 | - Yeah, so the reason we're talking about planaria is not only are they immortal, okay,
01:10:37.680 | not only do they regenerate every part of the body, they generally don't get cancer, right,
01:10:43.680 | so which we can talk about why that's important. They're smart, they can learn things, so you can
01:10:47.280 | train them, and it turns out that if you train a planaria and then cut their heads off, the tail
01:10:52.320 | will regenerate a brand new brain that still remembers the original information. - Do they
01:10:56.400 | have a bioelectrical network going on or no? - Yes, yes. - So their somatic cells are forming
01:11:02.080 | a network, and that's what you mean by true brain? What's the requirement for a true brain?
01:11:06.560 | - Like everything else, it's a continuum, but a true brain has certain characteristics as far
01:11:11.760 | as the density, like a localized density of neurons that guides behavior. - In the head.
01:11:16.240 | - Exactly. - Connected to the head.
01:11:17.520 | - Exactly, if you cut their head off, the tail doesn't do anything, it just sits there until
01:11:22.320 | the new brain is, until a new brain regenerates. They have all the same neurotransmitters that you
01:11:27.120 | and I have, but here's why we're talking about them in this context. So here's your planaria,
01:11:32.240 | you cut off the head, you cut off the tail, you have a middle fragment. That middle fragment has
01:11:35.440 | to make one head and one tail. How does it know how many of each to make, and where do they go?
01:11:39.680 | How come it doesn't switch? How come, right? So we did a very simple thing, and we said, okay,
01:11:46.240 | let's make the hypothesis that there's a somatic electrical network that remembers the correct
01:11:52.000 | pattern, and that what it's doing is recalling that memory and building to that pattern. So what
01:11:56.000 | we did was we used a way to visualize electrical activity in these cells, right? It's a variant of
01:12:02.400 | what people use to look for electricity in the brain. And we saw that that fragment has a very
01:12:07.120 | particular electrical pattern, you can literally see it once we developed the technique. It has a
01:12:13.200 | very particular electrical pattern that shows you where the head and the tail goes, right? You can
01:12:18.640 | just see it. And then we said, okay, well, now let's test the idea that that's a memory that
01:12:22.880 | actually controls where the head and the tail goes. Let's change that pattern. So basically,
01:12:26.320 | incept the false memory. And so what you can do is you can do that in many different ways. One way is
01:12:30.480 | with drugs that target ion channels to say, and so you pick these drugs and you say, okay,
01:12:35.120 | I'm going to do it so that instead of this one head, one tail electrical pattern,
01:12:40.080 | you have a two-headed pattern, right? You're just editing the electrical information in the network.
01:12:44.560 | When you do that, guess what the cells build? They build a two-headed worm. And the coolest
01:12:48.080 | thing about it, now, no genetic changes, so we haven't touched the genome. The genome is totally
01:12:51.760 | wild type. But the amazing thing about it is that when you take these two-headed animals and you cut
01:12:55.760 | them into pieces again, some of those pieces will continue to make two-headed animals.
01:13:00.560 | So that information, that memory, that electrical circuit, not only does it hold the information for
01:13:06.800 | how many heads, not only does it use that information to tell the cells what to do to
01:13:10.480 | regenerate, but it stores it. Once you've reset it, it keeps. And we can go back. We can take a
01:13:14.800 | two-headed animal and put it back to one-headed. So now imagine, so there's a couple of interesting
01:13:19.520 | things here that have implications for understanding what genomes do, and things like that.
01:13:23.280 | Imagine I take this two-headed animal. Oh, and by the way, when they reproduce, when they tear
01:13:28.000 | themselves in half, you still get two-headed animals. So imagine I take them and I throw them
01:13:31.840 | in the Charles River over here. So a hundred years later, some scientists come along and they scoop
01:13:35.360 | up some samples and they go, "Oh, there's a single-headed form and a two-headed form. Wow,
01:13:39.200 | a speciation event. Cool. Let's sequence the genome and see why, what happened." Genomes are
01:13:43.680 | identical. There's nothing wrong with the genome. So if you ask the question, how does, so this goes
01:13:47.680 | back to your very first question is where do body plans come from, right? How does the planarian know
01:13:52.000 | how many heads it's supposed to have? Now it's interesting because you could say DNA, but what
01:13:57.360 | as it turns out, the DNA produces a piece of hardware that by default says one head. The way
01:14:05.120 | that when you turn on a calculator, by default, it's a zero every single time, right? When you
01:14:08.400 | turn it on, it just says zero. But it's a programmable calculator as it turns out. So
01:14:12.080 | once you've changed that, next time it won't say zero. It'll say something else. And the same thing
01:14:16.400 | here. So you can make one-headed, two-headed, you can make no-headed worms. We've done some other
01:14:20.560 | things along these lines, some other really weird constructs. So this question of, right, so again,
01:14:27.040 | it's really important. The hardware software distinction is really important because the
01:14:31.920 | hardware is essential because without proper hardware, you're never going to get to the right
01:14:35.440 | physiology of having that memory. But once you have it, it doesn't fully determine what the
01:14:40.560 | information is going to be. You can have other information in there and it's reprogrammable by
01:14:44.320 | us, by bacteria, by various parasites probably, things like that. The other amazing thing about
01:14:49.520 | these planarias, think about this, most animals, when we get a mutation in our bodies, our children
01:14:54.800 | don't inherit it, right? So you could go on, you could run around for 50, 60 years getting mutations,
01:14:58.880 | your children don't have those mutations because we go through the egg stage. Planaria tear
01:15:02.960 | themselves in half and that's how they reproduce. So for 400 million years, they keep every mutation
01:15:08.240 | that they've had that doesn't kill the cell that it's in. So when you look at these planaria,
01:15:12.640 | their bodies are what's called mixoploid, meaning that every cell might have a different number of
01:15:16.000 | chromosomes. They look like a tumor. If you look at the genome, it's an incredible mess because
01:15:21.360 | they accumulate all this stuff and yet their body structure is, they are the best regenerators on
01:15:27.040 | the planet. Their anatomy is rock solid even though their genome is all kinds of crap. So this is
01:15:31.920 | kind of a scandal, right? We learn that genomes determine your body, okay?
01:15:37.920 | Then why does the animal with the worst genome have the best anatomical control, why is it the most cancer
01:15:41.840 | resistant, the most regenerative, right? Really, we're just beginning to start to understand this
01:15:46.800 | relationship between the genomically determined hardware and by the way, just as of a couple of
01:15:52.160 | months ago, I think I now somewhat understand why this is, but it's really a major puzzle.
01:15:58.240 | I mean, that really throws a wrench into the whole nature versus nurture because you usually associate
01:16:07.600 | electricity with the nurture and the hardware with the nature and there's just this weird
01:16:15.200 | integrated mess that propagates through generations.
01:16:19.360 | Yeah, it's much more fluid. It's much more complex. You can imagine what's happening here.
01:16:25.840 | It's just imagine the evolution of an animal like this, that multiscale, this goes back to this
01:16:30.720 | multiscale competency, right? Imagine that you have an animal where its tissues have some degree
01:16:38.800 | of multiscale competency. So for example, like we saw in the tadpole, if you put an eye on its tail,
01:16:44.080 | they can still see out of that eye, right? There's incredible plasticity. So if you have an animal
01:16:49.040 | and it comes up for selection and the fitness is quite good, evolution doesn't know whether the
01:16:55.760 | fitness is good because the genome was awesome or because the genome was kind of junky, but the
01:16:59.920 | competency made up for it, right? And things kind of ended up good. So what that means is that the
01:17:04.560 | more competency you have, the harder it is for selection to pick the best genomes. It hides
01:17:09.760 | information, right? And so what happens is that evolution basically starts putting all the
01:17:16.880 | hard work into increasing the competency, because it's harder and harder to see the genomes.
01:17:22.000 | And so I think in planaria, what happened is that there's this runaway phenomenon where all the
01:17:26.320 | effort went into the algorithm such that we know you've got a crappy genome, we can't clean up the
01:17:32.000 | genome, we can't keep track of it. So what's going to happen is what survives are the algorithms that
01:17:37.120 | can create a great worm no matter what the genome is. So everything went into the algorithm, which
01:17:42.480 | of course then reduces the pressure on keeping a clean genome. So this idea of, right, and different
01:17:48.560 | animals have this to different levels, but this idea of putting energy into an algorithm that
01:17:54.640 | does not overtrain on priors, right? It can't assume, I mean, I think biology is this way in
01:17:58.960 | general, evolution doesn't take the past too seriously because it makes these basically
01:18:04.080 | problem-solving machines as opposed to exactly what, to deal with exactly what happened last time.
01:18:10.160 | - Yeah, problem-solving versus memory recall. So a little memory, but a lot of problem-solving.
01:18:15.520 | - I think so, yeah, in many cases, yeah. - Problem-solving.
01:18:22.160 | I mean, it's incredible that those kinds of systems are able to be constructed,
01:18:25.600 | especially how much they contrast with the way we build problem-solving systems in the AI world.
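
A deliberately crude toy of the earlier point that competency hides information from selection (my own made-up numbers, not a model from Levin's papers): each individual has a genome of some quality, a competency mechanism corrects part of the resulting phenotypic error at runtime, and selection only sees the corrected phenotype. As the correction gets better, fitness says less and less about the underlying genome.

```python
# Made-up toy of "competency hides the genome from selection": bad genomes
# produce larger raw phenotypic errors, a competency mechanism corrects part
# of that error at runtime, and selection only sees the corrected phenotype.
import numpy as np

rng = np.random.default_rng(1)

def genome_fitness_correlation(competency, n=5000, noise_sd=0.3):
    genome_error = np.abs(rng.normal(0.0, 1.0, n))        # worse genome -> bigger raw error
    residual = (1.0 - competency) * genome_error          # competency fixes part of it
    phenotype_error = residual + np.abs(rng.normal(0.0, noise_sd, n))  # plus developmental noise
    fitness = -phenotype_error                            # selection sees only the phenotype
    return np.corrcoef(-genome_error, fitness)[0, 1]      # how much fitness reveals the genome

for c in [0.0, 0.5, 0.9, 0.99]:
    print(f"competency {c:0.2f}: corr(genome quality, fitness) = "
          f"{genome_fitness_correlation(c):.2f}")
```

With no competency, fitness tracks genome quality closely; with strong competency the correlation collapses, so selection can no longer tell good genomes from bad ones, which is the runaway pressure toward "everything goes into the algorithm" described above.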
01:18:32.240 | Back to xenobots. I'm not sure if we ever described how xenobots are built, but you have a
01:18:41.120 | paper titled "Biological Robots, Perspectives on an Emerging Interdisciplinary Field," and in the
01:18:47.360 | beginning, you mentioned that the word xenobots is controversial. Do you guys get in trouble for
01:18:53.680 | using xenobots, or what, do people not like the word xenobots? Are you trying to be provocative
01:18:59.360 | with the word xenobots versus biological robots? I don't know. Is there some drama that we should
01:19:04.880 | be aware of? - There's a little bit of drama.
01:19:06.880 | I think the drama is basically related to people having very fixed ideas about what terms mean,
01:19:16.480 | and I think in many cases, these ideas are completely out of date with where science is now,
01:19:23.840 | and for sure, they're out of date with what's going to be. These concepts are not going to
01:19:31.200 | survive the next couple of decades. So if you ask a person, and including a lot of people in biology
01:19:36.720 | who kind of want to keep a sharp distinction between biologicals and robots, right? So what's
01:19:40.560 | a robot? Well, a robot, it comes out of a factory, it's made by humans, it is boring, meaning
01:19:45.600 | that you can predict everything it's going to do, it's made of metal and certain other inorganic
01:19:49.520 | materials, living organisms are magical, they arise, right? And so on. So there's these
01:19:53.920 | distinctions. I think these distinctions, I think, were never good, but they're going to be completely
01:20:01.200 | useless going forward. And so part of this, a couple of papers, that's one paper, and there's
01:20:05.360 | another one that Josh Bongard and I wrote where we really attack the terminology, and we say these
01:20:10.160 | binary categories are based on very non-essential kind of surface limitations of technology and
01:20:18.960 | imagination that were true before, but they've got to go. And so we call them xenobots. So
01:20:23.920 | xeno for Xenopus laevis, which is the frog that these guys are made of, but we think it's an
01:20:29.360 | example of a biobot technology, because ultimately, once we understand how to communicate and
01:20:39.040 | manipulate the inputs to these cells, we will be able to get them to build whatever we want them to
01:20:45.440 | build. And that's robotics, right? It's the rational construction of machines that have useful
01:20:49.680 | purposes. I absolutely think that this is a robotics platform, whereas some biologists don't.
01:20:54.800 | - But it's built in a way that all the different components are doing their own computation,
01:21:02.080 | so in a way that we've been talking about. So you're trying to do top-down control on that
01:21:05.840 | biological system. - That's exactly right. And in the future,
01:21:07.760 | all of this will merge together, because of course, at some point, we're going to throw
01:21:11.040 | in synthetic biology circuits, right? New transcriptional circuits to get them to do
01:21:16.000 | new things. Of course, we'll throw some of that in, but we specifically stayed away from all of
01:21:19.440 | that because in the first few papers, and there's some more coming down the pike that are, I think,
01:21:23.840 | going to be pretty dynamite, that we want to show what the native cells are made of. Because what
01:21:30.160 | happens is, if you engineer the heck out of them, right? If we were to put in new transcription
01:21:34.640 | factors and some new metabolic machinery and whatever, people will say, "Okay, you engineered
01:21:39.040 | this and you made it do whatever, and fine." I wanted to show, and the whole team wanted to show
01:21:46.640 | the plasticity and the intelligence in the biology. What does it do that's surprising before
01:21:52.480 | you even start manipulating the hardware in that way? - Yeah, don't try to over-control the thing.
01:21:59.600 | Let it flourish. The full beauty of the biological system. Why Xenopus laevis? How do you pronounce
01:22:06.720 | it? The frog. - Xenopus laevis, yeah.
01:22:08.640 | - Why this frog? - It's been used since, I think,
01:22:11.520 | the '50s. It's just very convenient because we keep the adults in this very fine frog habitat.
01:22:18.560 | They lay eggs. They lay tens of thousands of eggs at a time. The eggs develop right in front of your
01:22:24.480 | eyes. It's the most magical thing you can see because normally, if you were to deal with mice
01:22:30.000 | or rabbits or whatever, you don't see the early stages because everything's inside the mother.
01:22:33.520 | Everything's in a petri dish at room temperature. You have an egg, it's fertilized, and you can just
01:22:37.920 | watch it divide and divide and divide, and all the organs form, you just see it. At that point,
01:22:42.240 | the community has developed lots of different tools for understanding what's going on and also
01:22:48.720 | for manipulating. People use it for understanding birth defects and neurobiology and cancer
01:22:54.640 | immunology also. - So you get the whole
01:22:56.080 | embryogenesis in the petri dish. That's so cool to watch. Is there videos of this?
01:23:02.400 | - Oh, yeah. Yeah, yeah. There's amazing videos online. I mean, mammalian embryos are super cool
01:23:08.160 | too. For example, monozygotic twins are what happens when you cut a mammalian embryo in half.
01:23:12.560 | You don't get two half bodies. You get two perfectly normal bodies because it's a regeneration
01:23:16.400 | event. Development is just the kind of regeneration, really.
01:23:19.840 | - And why this particular frog? It's just because they were doing it in the '50s and...
01:23:25.120 | - It breeds well in... It's easy to raise in the laboratory, and it's very prolific. And all the
01:23:33.840 | tools, basically, for decades, people have been developing tools. There's other... Some people use
01:23:37.760 | other frogs, but I have to say, this is important. Xenobots are fundamentally not anything about
01:23:43.360 | frogs. So I can't say too much about this because it's not published in peer reviewed yet, but we've
01:23:48.480 | made xenobots out of other things that have nothing to do with frogs. This is not a frog
01:23:52.640 | phenomenon. We started with frog because it's so convenient, but this plasticity is not a frog.
01:23:58.880 | It's not related to the fact that they're frogs. - What happens when you kiss it? Does it turn to
01:24:03.040 | a prince? No. Or princess? Which way? Prince. Yeah, prince. - It should be a prince. Yeah.
01:24:07.680 | That's an experiment that I don't believe we've done. And if we have, I don't want to know about.
01:24:10.720 | - Well, we can collaborate. I can take the lead on that effort. Okay, cool. How do the
01:24:18.240 | cells coordinate? Let's focus in on just the embryogenesis. So there's one cell. So it divides,
01:24:25.600 | doesn't it have to be very careful about what each cell starts doing once they divide?
01:24:32.160 | - Yes. - And when there's three of them,
01:24:35.520 | it's like the co-founders or whatever, like, "Slow down. You're responsible for this." When do they
01:24:43.040 | become specialized and how do they coordinate that specialization? - So this is the basic science of
01:24:48.960 | developmental biology. There's a lot known about all of that. But I'll tell you what I think is
01:24:54.960 | kind of the most important part, which is, yes, it's very important who does what. However,
01:25:01.200 | because going back to this issue of why I made this claim that biology doesn't take the past
01:25:07.440 | too seriously. And what I mean by that is it doesn't assume that everything is the way it's
01:25:12.800 | expected to be. And here's an example of that. This was done. This was an old experiment going
01:25:18.320 | back to the '40s. But basically, imagine it's a newt, a salamander. And it's got these little
01:25:24.160 | tubules that go to the kidneys, right? This little tube. Take a cross-section of that tube,
01:25:28.000 | you see eight to 10 cells that have cooperated to make this little tube and cross-section, right?
01:25:32.880 | So one amazing thing you can do is you can mess with the very early cell division to make the
01:25:40.800 | cells gigantic, bigger. You can make them different sizes. You can force them to be
01:25:44.240 | different sizes. So if you make the cells different sizes, the whole newt is still the same size.
01:25:50.160 | So if you take a cross-section through that tubule, instead of eight to 10 cells, you might have four
01:25:54.640 | or five, or you might have three, until you make the cell so enormous that one single cell wraps
01:26:02.000 | around itself and gives you that same large-scale structure with a completely different molecular
01:26:07.920 | mechanism. So now instead of cell-to-cell communication to make a tubule, instead of that,
01:26:12.800 | it's one cell using the cytoskeleton to bend itself around. So think about what that means.
01:26:16.800 | In the service of a large-scale... Talk about top-down control, right? In the service of a
01:26:21.360 | large-scale anatomical feature, different molecular mechanisms get called up. So now,
01:26:26.320 | think about this. You're a newt cell and trying to make an embryo. If you had a fixed idea of who
01:26:31.760 | was supposed to do what, you'd be screwed because now your cells are gigantic. Nothing would work.
01:26:35.680 | There's an incredible tolerance for changes in the size of the parts and the amount of DNA in those
01:26:42.080 | parts. All sorts of stuff. Life is highly interoperable. You can put electrodes in there,
01:26:47.680 | you can put weird nanomaterials, it still works. This is that problem-solving action, right? It's
01:26:53.680 | able to do what it needs to do even when circumstances change. That is the hallmark
01:26:59.600 | of intelligence, right? William James defined intelligence as the ability to get to the same
01:27:03.360 | goal by different means. That's this. You get to the same goal by completely different means.
01:27:07.840 | And so why am I bringing this up? It's just to say that, yeah, it's important for the cells to
01:27:12.080 | do the right stuff, but they have incredible tolerances for things not being what you expect
01:27:16.960 | and to still get their job done. So if you're, you know, all of these things are not hardwired.
01:27:23.840 | There are organisms that might be hardwired. For example, the nematode C. elegans.
01:27:27.200 | In that organism, every cell is numbered, meaning that every C. elegans has exactly the same number
01:27:32.480 | of cells as every other C. elegans. They're all in the same place. They all divide. There's literally
01:27:35.680 | a map of how it works. In that sort of system, it's much more cookie cutter. But most organisms
01:27:43.120 | are incredibly plastic in that way. - Is there something particularly magical to you about the
01:27:48.960 | whole developmental biology process? Is there something you could say? 'Cause you just said it.
01:27:55.520 | They're very good at accomplishing the goal, the job they need to do, the competency thing,
01:28:00.400 | but you get a freaking organism from one cell. It's like, I mean, it's very hard to intuit that
01:28:10.320 | whole process. To even think about reverse engineering that process. - Right, very hard.
01:28:16.080 | To the point where I often, just imagine, I sometimes ask my students to do this thought
01:28:20.640 | experiment. Imagine you were shrunk down to the scale of a single cell and you were in the middle
01:28:25.120 | of an embryo and you were looking around at what's going on. And the cells running around, some cells
01:28:28.800 | are dying. Every time you look, it's kind of a different number of cells for most organisms.
01:28:32.800 | And so I think that if you didn't know what embryonic development was, you would have no clue
01:28:38.480 | that what you're seeing is always gonna make the same thing. Nevermind knowing what that is. Nevermind
01:28:43.440 | being able to say, even with full genomic information, being able to say, what the hell
01:28:46.640 | are they building? We have no way to do that. But just even to guess that, wow, the outcome of all
01:28:52.480 | this activity is, it's always gonna build the same thing. - The imperative to create the final you
01:29:00.080 | as you are now is there already. So if you start from the same embryo,
01:29:06.240 | you create a very similar organism. - Yeah, except for cases like the xenobots, when you give them a
01:29:14.640 | different environment, they come up with a different way to be adaptive in that environment.
01:29:18.320 | But overall, I mean, so I think to kind of summarize it, I think what evolution is really
01:29:25.920 | good at is creating hardware that has a very stable baseline mode, meaning that left to its
01:29:33.440 | own devices, it's very good at doing the same thing, but it has a bunch of problem solving
01:29:38.000 | capacity such that if any assumptions don't hold, if your cells are a weird size, or you get the
01:29:42.480 | wrong number of cells, or there's a, you know, somebody stuck an electrode halfway through the
01:29:46.480 | body, whatever, it will still get most of what it needs to do done. - You've talked about the magic
01:29:54.080 | and the power of biology here. If we look at the human brain, how special is the brain in this
01:29:59.200 | context? You're kind of minimizing the importance of the brain, or lessening its, we think of all
01:30:06.400 | the special computation happens in the brain, everything else is like the help. You're kind
01:30:12.080 | of saying that the whole thing is doing computation. But nevertheless, how special is the human brain
01:30:20.400 | in this full context of biology? - Yeah, I mean, look, there's no getting away from the fact that
01:30:26.240 | the human brain allows us to do things that we could not do without it. - You can say the same
01:30:30.960 | thing about the liver. - Yeah, no, this is true. And so, you know, my goal is not, no, you're right,
01:30:39.120 | my goal is not-- - You're just being polite to the brain right now. - Well-- - You're being a
01:30:42.400 | politician, like, listen, everybody has a use. - Everybody has a role, yeah. - It's a very
01:30:46.880 | important role. - That's right. - We have to acknowledge the importance of the brain, you know?
01:30:50.880 | There are more than enough people who are cheerleading the brain, right? So I don't feel
01:30:58.000 | like, nothing I say is going to reduce people's excitement about the human brain. And so,
01:31:02.320 | I emphasize other things. - You think it gets too much credit. - I don't think it gets too much credit,
01:31:07.040 | I think other things don't get enough credit. I think the brain is, the human brain is incredible
01:31:11.760 | and special and all that. I think other things need more credit. And I also think that
01:31:17.760 | this, and I'm sort of this way about everything, I don't like binary categories about almost
01:31:21.680 | anything, I like a continuum. And the thing about the human brain is that by accepting that
01:31:27.760 | as some kind of an important category or essential thing, we end up with all kinds of weird
01:31:35.360 | pseudo-problems and conundrums. So for example, when we talk about it, you know, if you want to
01:31:40.960 | talk about ethics and other things like that, and what, you know, this idea that
01:31:48.480 | surely if we look out into the universe, surely we don't believe that this human brain is the only
01:31:53.440 | way to be sentient, right? Surely we don't, you know, and to have high level cognition. I just,
01:31:57.760 | I can't even wrap my mind around this idea that that is the only way to do it. No doubt there are
01:32:02.640 | other architectures made of completely different principles that achieve the same thing.
01:32:07.520 | And once we believe that, then that tells us something important. It tells us that things that
01:32:13.120 | are not quite human brains or chimeras of human brains and other tissue or human brains or other
01:32:19.520 | kinds of brains and novel configurations, or things that are sort of brains, but not really,
01:32:23.840 | or plants or embryos or whatever, might also have important cognitive status. So that's the only
01:32:31.040 | thing. I think we have to be really careful about treating the human brain as if it was some kind of
01:32:35.280 | like sharp binary category, you know, you are or you aren't. I don't believe that exists.
01:32:40.240 | So when we look at all the beautiful variety of semi-biological architectures out there in the
01:32:48.320 | universe, how many intelligent alien civilizations do you think are out there?
01:32:54.080 | Ah, boy, I have no expertise in that whatsoever.
01:32:58.400 | You haven't met any?
01:32:59.440 | I have met the ones we've made. I think that…
01:33:03.600 | I mean, exactly, in some sense, with synthetic biology, are you not creating aliens?
01:33:10.000 | I absolutely think so. Because look, all of life, all standard model systems, are an N of 1, the
01:33:17.760 | course of evolution on Earth, right? And trying to make conclusions about biology from looking at
01:33:24.640 | life on Earth is like testing your theory on the same data that generated it. It's all kind of like
01:33:30.560 | locked in. So we absolutely have to create novel examples that have no history on Earth. You know,
01:33:40.000 | xenobots have no history of selection to be a good xenobot. The cells have selection for
01:33:43.920 | various things, but the xenobot itself never existed before. And so we can make chimeras,
01:33:48.080 | you know, we make frogalotls that are sort of half frog, half axolotl. You can make all sorts
01:33:52.640 | of hybrid constructions of living tissue with robots and whatever. We need to be making these
01:33:57.920 | things until we find actual aliens, because otherwise we're just looking at an N of 1 set
01:34:03.120 | of examples, all kinds of frozen accidents of evolution and so on. We need to go beyond that
01:34:07.840 | to really understand biology. - But we're still, even when you do
01:34:11.200 | a synthetic biology, you're locked in to the basic components of the way biology is done on this Earth.
01:34:19.360 | - Yeah, yeah, yeah, yeah. Still limited. - And also the basic constraints of the
01:34:24.640 | environment, even artificial environments that are constructed in the lab are tied up to the
01:34:28.560 | environment. I mean, what do you, okay, let's say there is, I mean, what I think is there's
01:34:34.880 | a nearly infinite number of intelligent civilizations living or dead out there.
01:34:42.640 | If you pick one out of the box, what do you think it would look like?
01:34:50.720 | So when you think about synthetic biology or creating synthetic organisms,
01:34:57.440 | how hard is it to create something that's very different?
01:35:02.240 | - Yeah, I think it's very hard to create something that's very different, right?
01:35:06.720 | We are just locked in both experimentally and in terms of our imagination, right? It's very hard.
01:35:15.520 | - And you also emphasized several times the idea of shape. The individual cell get together with
01:35:21.520 | other cells and they're gonna build a shape. So it's shape and function, but shape is a critical
01:35:28.240 | thing. - Yeah. So here, I'll take a stab. I mean, I agree with you to whatever extent that we can
01:35:33.920 | say anything. I do think that there's probably an infinite number of different architectures
01:35:41.200 | with interesting cognitive properties out there. What can we say about them? I think that
01:35:46.000 | the only things that are going, I don't think we can rely on any of the typical stuff, carbon-based,
01:35:52.880 | like I think all of that is just us having a lack of imagination. But I think the things that
01:36:01.200 | are going to be universal, if anything is, are things, for example, driven by resource limitation,
01:36:08.320 | the fact that you are fighting a hostile world and you have to draw a boundary between yourself
01:36:13.520 | and the world somewhere. The fact that that boundary is not given to you by anybody, you have
01:36:17.280 | to assume it, estimate it yourself. And the fact that you have to coarse grain your experience and
01:36:23.280 | the fact that you're gonna try to minimize surprise. And the fact that, like these are the
01:36:27.040 | things that I think are fundamental about biology. None of the facts about the genetic code or even
01:36:31.760 | the fact that we have genes or the biochemistry of it. I don't think any of those things are
01:36:35.040 | fundamental, but it's gonna be a lot more about the information and about the creation of the self.
01:36:39.920 | The fact that, so in my framework, selves are demarcated by the scale of the goals that they
01:36:46.320 | can pursue. So from little tiny local goals to like massive planetary scale goals for certain
01:36:51.520 | humans and everything in between. So you can draw this like cognitive light cone that determines
01:36:57.520 | the scale of the goals you could possibly pursue. I think those kinds of frameworks like that,
01:37:04.160 | like active inference and so on are going to be universally applicable, but none of the other
01:37:08.400 | things that are typically discussed. - Quick pause, Dean DeBettenberg.
01:37:12.800 | - We were just talking about, you know, aliens and all that. That's a funny thing, which is,
01:37:17.280 | I don't know if you've seen them. There's a kind of debate that goes on about cognition and plants
01:37:21.840 | and what can you say about different kinds of computation and cognition and plants. And I
01:37:25.360 | always look at that some way. If you're weirded out by cognition and plants, you're not ready
01:37:30.880 | for exobiology, right? If something that's that similar here on earth is already like freaking
01:37:36.240 | you out, then I think there's going to be all kinds of cognitive life out there that we're
01:37:40.160 | going to have a really hard time recognizing. - I think robots will help us like expand our
01:37:47.280 | mind about cognition. Either that or like xenobots, and maybe they become the same thing,
01:37:57.600 | is really when the human engineers the thing, at least in part, and then is able to achieve
01:38:06.160 | some kind of cognition that's different than what you're used to, then you start to understand like,
01:38:11.280 | oh, every living organism's capable of cognition. Oh, I need to kind of broaden my understanding
01:38:18.960 | of what cognition is. But do you think plants, like when you eat them, are they screaming?
01:38:24.720 | - I don't know about screaming. I think you have to-- - That's what I think when I eat a salad.
01:38:28.320 | - Yeah. - Good.
01:38:29.200 | - Yeah. I think you have to scale down the expectations in terms of, right? So probably
01:38:34.160 | they're not screaming in the way that we would be screaming. However, there's plenty of data on
01:38:38.560 | plants being able to do anticipation and certain kinds of memory and so on. I think what you just
01:38:46.880 | said about robots, I hope you're right, and I hope that's, but there's two ways that people
01:38:51.760 | can take that, right? So one way is exactly what you just said to try to kind of expand their
01:38:55.360 | notions for that category. The other way people often go is they just sort of define the term,
01:39:04.320 | if it's not a natural product, it's just faking, right? It's not really intelligence if it was
01:39:10.000 | made by somebody else, because it's that same thing. They can see how it's done. And once you
01:39:14.720 | see how it's, it's like a magic trick when you see how it's done, it's not as fun anymore. And
01:39:20.480 | I think people have a real tendency for that. And they sort of, which I find really strange in the
01:39:24.240 | sense that if somebody said to me, we have this sort of blind, like a hill climbing search,
01:39:31.920 | and then we have a really smart team of engineers, which one do you think is going to produce
01:39:37.680 | a system that has good intelligence? I think it's really weird to say that it only comes from the
01:39:42.160 | blind search, right? It can't be done by people who, by the way, can also use evolutionary
01:39:46.160 | techniques if they want to, but also rational design. I think it's really weird to say that
01:39:50.080 | real intelligence only comes from natural evolution. So I hope you're right. I hope
01:39:55.360 | people take it the other way. >> There's a nice shortcut. So I work with
01:39:59.760 | legged robots a lot now for my own personal pleasure. Not in that way, internet. So,
01:40:10.400 | four legs. And one of the things that changes my experience of the robots a lot is
01:40:17.440 | when I can't understand why it did a certain thing. And there's a lot of ways to engineer that.
01:40:23.280 | Me, the person that created the software that runs it, there's a lot of ways for me to build
01:40:30.400 | that software in such a way that I don't exactly know why it made a certain basic decision. Of
01:40:36.960 | course, as an engineer, you can go in and start to look at logs. You can log all kinds of data,
01:40:41.760 | sensory data, the decisions it made, all the outputs of your neural networks and so on.
01:40:46.960 | But I also try to really experience that surprise, to really experience it as another person would
01:40:54.320 | who totally doesn't know how it's built. And I think the magic is there in not knowing how it
01:40:59.680 | works. I think biology does that for you through the layers of abstraction. Because nobody really
01:41:10.720 | knows what's going on inside the biologicals. Each one component is clueless about the big picture.
01:41:17.600 | >> I think there's actually really cheap systems that can illustrate that kind of thing, which is
01:41:22.960 | even like fractals. You have a very small, short formula in Z and you see it and there's no magic,
01:41:32.080 | you're just going to crank through Z squared plus C, whatever, you're just going to crank through
01:41:35.840 | it. But the result of it is this incredibly rich, beautiful image that just like, wow,
01:41:42.960 | all of that was in this 10 character long string, amazing. So the fact that you can know everything
01:41:50.480 | there is to know about the details and the process and all the parts and everything,
01:41:54.800 | there's literally no magic of any kind there. And yet the outcome is something that you would never
01:42:01.120 | have expected. And it's just, you know, incredibly rich and complex and beautiful. So there's a lot of that.
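A minimal sketch of the iteration being described here: repeatedly apply z -> z*z + c and mark the points that never escape. The grid bounds, escape radius, and iteration cap below are arbitrary illustrative choices, not anything specified in the conversation.

```python
# Minimal sketch: the whole image comes from iterating z -> z*z + c.
# Grid bounds, resolution, escape radius, and iteration cap are arbitrary choices.

def escape_time(c: complex, max_iter: int = 50) -> int:
    """Count how many iterations of z -> z*z + c stay bounded (|z| <= 2)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Crude ASCII rendering: '#' marks points that never escaped within the cap.
for y in range(10, -11, -1):
    row = ""
    for x in range(-40, 11):
        c = complex(x / 20.0, y / 10.0)
        row += "#" if escape_time(c) == 50 else " "
    print(row)
```

Running it prints a rough ASCII silhouette of the Mandelbrot set; the richness is not added anywhere in the code, it falls out of the short formula.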
01:42:06.480 | >> You write that you work on developing
01:42:11.120 | conceptual frameworks for understanding unconventional cognition. So the kind of
01:42:15.440 | thing we've been talking about, I just like the term unconventional cognition.
01:42:20.400 | And you want to figure out how to detect, study and communicate with the thing.
01:42:23.760 | You've already mentioned a few examples, but what is unconventional cognition? Is it as simply as
01:42:29.920 | everything else outside of what we define usually as cognition, cognitive science, the stuff going
01:42:35.520 | on between our ears, or is there some deeper way to get at the fundamentals of what is cognition?
01:42:42.400 | >> Yeah, I think like, and I'm certainly not the only person who works in unconventional
01:42:49.520 | cognition. >> So it's the term used.
01:42:51.440 | >> Yeah, that's one that I, so I've coined a number of weird terms, but that's not one of
01:42:55.120 | mine. Like that's an existing thing. So for example, somebody like Andy Adamatsky, who I
01:42:59.360 | don't know if you've had him on, if you haven't, you should. He's a very interesting guy. He's a
01:43:05.200 | computer scientist and he does unconventional cognition in slime molds and all kinds of weird,
01:43:09.440 | he's a real weird cat, really interesting. Anyway, so that's, you know, there's a bunch of terms that
01:43:15.280 | I've come up with, but that's not one of mine. So I think like many terms, that one is really
01:43:21.600 | defined by the times, meaning that unconventional, things that are unconventional cognition today are
01:43:27.120 | not going to be considered unconventional cognition at some point. It's one of those things.
01:43:33.200 | And so it's, you know, it's this really deep question of how do you recognize, communicate
01:43:49.120 | with, and classify cognition when you cannot rely on the typical milestones, right? So typically,
01:43:49.120 | you know, again, if you stick with the history of life on earth, like these exact model systems,
01:43:55.200 | you would say, ah, here's a particular structure of the brain. And this one has fewer of those.
01:43:58.640 | And this one has a bigger frontal cortex and this one, right? So these are landmarks that we're
01:44:03.840 | used to and it allows us to make very kind of rapid judgments about things. But if you can't
01:44:10.000 | rely on that, either because you're looking at a synthetic thing or an engineered thing or an alien
01:44:15.360 | thing, then what do you do, right? How do you, and so that's what I'm really interested in. I'm
01:44:19.600 | interested in mind in all of its possible implementations, not just the obvious ones
01:44:25.040 | that we know from looking at brains here on earth. - Whenever I think about something like
01:44:31.040 | unconventional cognition, I think about cellular automata. I'm just captivated by the beauty of
01:44:36.800 | the thing. The fact that from simple little objects, you can create some such beautiful
01:44:45.440 | complexity that very quickly you forget about the individual objects and you see the things that it
01:44:52.400 | creates as its own organisms. That blows my mind every time. Like, honestly, I could full-time just
01:45:01.920 | eat mushrooms and watch cellular automata. Don't even have to do mushrooms. Just cellular automata.
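A minimal sketch of the kind of system being referred to, an elementary one-dimensional cellular automaton; Rule 110 and the grid width and step count are arbitrary picks for illustration.

```python
# Minimal sketch of an elementary (one-dimensional) cellular automaton.
# Rule 110 and the width and number of steps are arbitrary illustrative choices.

RULE = 110  # 8-bit lookup table: bit k gives the next state for neighborhood pattern k

def step(cells):
    """Update every cell from itself and its two neighbors (wrap-around edges)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 79 + [1]  # start with a single live cell
for _ in range(40):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Each cell only ever consults itself and its two neighbors, yet the printed history quickly develops structures that read as objects in their own right.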
01:45:08.800 | It feels like, I mean, from the engineering perspective, I love when a very simple system
01:45:15.840 | captures something really powerful because then you can study that system to understand something
01:45:21.280 | fundamental about complexity, about life on earth. Anyway, how do I communicate with a thing?
01:45:28.000 | If a cellular automata can do cognition, if a plant can do cognition, if a xenobot can do
01:45:36.720 | cognition, how do I whisper in its ear and get an answer back to how do I have a conversation?
01:45:43.600 | How do I have a xenobot on a podcast? - That's a really interesting line of
01:45:49.920 | investigation that that opens up. I mean, we've thought about this. So, you need a few things.
01:45:55.120 | You need to understand the space in which they live. So, not just the physical modality, like,
01:46:01.520 | can they see light? Can they feel vibration? I mean, that's important, of course, because that's
01:46:04.320 | how you deliver your message. But not just the ideas for a communication medium, not just the
01:46:09.200 | physical medium, but saliency, right? So, what are important to this? What's important to this
01:46:16.000 | system? And systems have all kinds of different levels of sophistication of what you could expect
01:46:22.080 | to get back. And I think what's really important, I call this the spectrum of persuadability,
01:46:28.080 | which is this idea that when you're looking at a system, you can't assume where on the spectrum it
01:46:33.360 | is. You have to do experiments. And so, for example, if you look at a gene regulatory network,
01:46:41.440 | which is just a bunch of nodes that turn each other on and off at various rates, you might look
01:46:45.920 | at that and you say, "Wow, there's no magic here. I mean, clearly this thing is as deterministic as
01:46:50.400 | it gets. It's a piece of hardware. The only way we're going to be able to control it is by rewiring
01:46:54.800 | it, which is the way molecular biology works, right? We can add nodes, remove nodes, whatever."
01:46:58.400 | Well, so we've done simulations, and now we're doing this in the lab, showing that
01:47:03.600 | biological networks like that have associative memory. So, they can actually learn,
01:47:08.880 | they can learn from experience. They have habituation, they have sensitization,
01:47:11.840 | they have associative memory, which you wouldn't have known if you assumed that they have to be on the left side of that spectrum.
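As an illustration of the kind of result being described, here is a toy sketch, not any specific published model, of a two-node network in which repeated stimulation produces a progressively weaker response, the simplest of those learning behaviors (habituation). The wiring, weights, and decay constant are invented for illustration.

```python
# Toy sketch only: a two-node "gene network" in which a slow inhibitory node
# integrates repeated stimulation, so the fast node's response shrinks over time.
# The wiring, weights, and decay constant are invented and are not taken from
# any specific published model.

def habituation(pulses: int):
    inhibitor = 0.0          # slow node: accumulates with every stimulus
    responses = []
    for _ in range(pulses):
        stimulus = 1.0
        response = max(0.0, stimulus - inhibitor)      # fast node, dampened by the inhibitor
        inhibitor = 0.7 * inhibitor + 0.3 * stimulus   # inhibitor decays but keeps integrating
        responses.append(round(response, 2))
    return responses

print(habituation(8))  # e.g. [1.0, 0.7, 0.49, 0.34, ...]: the response habituates
```

The point is only that a fixed, fully specified network of nodes turning each other on and off can still show experience-dependent behavior, which is why its position on that spectrum is an empirical question.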
01:47:15.680 | So, when you're going to communicate with something, and we've
01:47:19.280 | even, Charles Abramson and I have written a paper on behaviorist approaches to synthetic organisms,
01:47:26.080 | meaning that if you're given something, you have no idea what it is or what it can do,
01:47:29.600 | how do you figure out what its psychology is, what its level is, what does it... And so,
01:47:33.840 | we literally lay out a set of protocols, starting with the simplest things and then moving up to
01:47:37.920 | more complex things where you can make no assumptions about what this thing can do,
01:47:41.200 | right? Just from you, you have to start and you'll find out. So, when you're going to... So,
01:47:45.680 | here's a simple... I mean, here's one way to communicate with something. If you can train it,
01:47:49.680 | that's a way of communicating. So, if you can provide... If you can figure out what the currency
01:47:53.440 | of reward of positive and negative reinforcement is, right? And you can get it to do something it
01:47:58.800 | wasn't doing before based on experiences you've given it, you have taught it one thing. You have
01:48:03.760 | communicated one thing, that such and such an action is good, some other action is not good.
01:48:08.720 | That's like a basic atom of... A primitive atom of communication. - What about, in some sense,
01:48:15.840 | if it gets you to do something you haven't done before, is it answering back?
01:48:20.240 | - Yeah, most certainly. And I've seen cartoons, I think maybe Gary Larson or somebody had a cartoon
01:48:26.240 | of these rats in the maze, and one rat says to the other, "Look at this, every time I walk
01:48:31.520 | over here, he starts scribbling on that clipboard he has." It's awesome.
01:48:35.440 | - If we step outside ourselves and really measure how much... Like, if I actually measure how much
01:48:44.400 | I've changed because of my interaction with certain cellular automata, I mean, you really
01:48:50.640 | have to take that into consideration about, like, well, these things are changing you too.
01:48:56.400 | I know you know how it works and so on, but you're being changed by the thing.
01:49:01.760 | - Yeah, absolutely. I think I read, I don't know any details, but I think I read something about
01:49:06.400 | how wheat and other things have domesticated humans in terms of, right? But by their properties
01:49:12.480 | change the way that the human behavior and societal structure is.
01:49:15.440 | - So in that sense, cats are running the world. 'Cause they took over the... So first of all,
01:49:22.480 | so first they, while not giving a shit about humans, clearly, with every ounce of their being,
01:49:30.400 | they've somehow got just millions and millions of humans to take them home and feed them.
01:49:39.440 | And then not only the physical space that they take over, they took over the digital space.
01:49:44.960 | They dominate the internet in terms of cuteness, in terms of memability. And so they're like,
01:49:52.560 | they got themselves literally inside the memes that become viral and spread on the internet.
01:49:58.160 | And they're the ones that are probably controlling humans. That's my theory.
01:50:02.480 | Another, that's a follow-up paper after the frog kissing. Okay. I mean, you mentioned
01:50:07.840 | sentience and consciousness. You have a paper titled "Generalizing Frameworks for Sentience
01:50:18.880 | Beyond Natural Species." So beyond normal cognition, if we look at sentience and consciousness,
01:50:31.040 | and I wonder if you draw an interesting distinction between those two, elsewhere,
01:50:35.840 | outside of humans and maybe outside of earth, you think aliens have sentience. And if they do,
01:50:46.800 | how do we think about it? So when you have this framework, what is this paper? What is the way
01:50:52.240 | you propose to think about sentience? - Yeah, that particular paper was a very
01:50:57.280 | short commentary on another paper that was written about crabs. It was a really good paper laying out
01:51:02.640 | a rubric of different types of behaviors that could be applied to
01:51:08.960 | different creatures and they're trying to apply it to crabs and so on. Consciousness,
01:51:14.400 | we can talk about it if you want, but it's a whole separate kettle of fish. I almost never
01:51:19.040 | talk about consciousness. - Kettle of crabs.
01:51:20.240 | - In this case, yes. I almost never talk about consciousness per se. I've said very little about
01:51:26.000 | it, but we can talk about it if you want. Mostly what I talk about is cognition, because I think
01:51:31.440 | that that's much easier to deal with in a rigorous experimental way. I think that all of these terms
01:51:41.280 | have, you know, sentience and so on, have different definitions. And fundamentally,
01:51:49.040 | I think that people can, as long as they specify what they mean ahead of time, I think people can
01:51:55.200 | define them in various ways. The only thing that I really kind of insist on is that the right way
01:52:03.120 | to think about all this stuff is from an engineering perspective. What does it help me
01:52:09.360 | to control, predict, and does it help me do my next experiment? So that's not a universal
01:52:17.120 | perspective. So some people have philosophical kind of underpinnings and those are primary.
01:52:23.200 | And if anything runs against that, then it must automatically be wrong. So some people will say,
01:52:28.080 | "I don't care what else. If your theory says to me that thermostats have little tiny goals,
01:52:33.520 | I'm not." So that's it. I just did like, that's my philosophical, you know, preconception. It's
01:52:39.360 | like thermostats do not have goals and that's it. So that's one way of doing it. And some people do
01:52:43.760 | it that way. I do not do it that way. And I think that we can't, I don't think we can know much of
01:52:48.960 | anything from a philosophical armchair. I think that all of these theories and ways of doing
01:52:54.320 | things stand or fall based on just basically one set of criteria. Does it help you run a
01:52:59.920 | rich research program? That's it. - Yeah. I agree with you totally. But
01:53:03.520 | so forget philosophy, what about the poetry of ambiguity? What about at the limits of the things
01:53:10.640 | you can engineer using terms that can be defined in multiple ways and living within that?
01:53:18.720 | Uncertainty in order to play with words until something lands that you can engineer. I mean,
01:53:25.440 | that's to me where consciousness sits currently. Nobody really understands the heart problem of
01:53:31.360 | consciousness, the subject, what it feels like. Because it really feels like, it feels like
01:53:36.480 | something to be this biological system. This conglomerate of a bunch of cells in this hierarchy
01:53:42.160 | of competencies feels like something. And yeah, I feel like one thing. And is that just,
01:53:48.080 | is that just a side effect of a complex system? Or is there something more that humans have?
01:54:00.160 | Or is there something more that any biological system has? Some kind of magic, some kind of,
01:54:04.960 | not just a sense of agency, but a real sense with a capital letter S of agency.
01:54:11.760 | - Yeah. Boy, yeah, that's a deep question. - Is there room for poetry and engineering
01:54:17.520 | or no? - No, there definitely is. And a lot of the poetry comes in when we realize that none of
01:54:23.520 | the categories we deal with are sharp as we think they are, right? And so in the different areas of
01:54:30.800 | all these spectra are where a lot of the poetry sits. I have many new theories about things,
01:54:35.440 | but I in fact do not have a good theory about consciousness that I plan to trot out.
01:54:39.600 | - And you almost don't see it as useful for your current work to think about consciousness.
01:54:44.000 | - I think it will come. I have some thoughts about it, but I don't feel like they're going
01:54:46.960 | to move the needle yet on that. - And you want to ground it in engineering
01:54:51.600 | always. - Well, I mean, so if we really tackle
01:54:57.680 | consciousness per se in terms of the hard problem, that isn't necessarily going to be
01:55:03.520 | groundable in engineering, right? That aspect of cognition is, but actual consciousness per se,
01:55:09.920 | first person perspective, I'm not sure that that's groundable in engineering. And I think
01:55:14.160 | specifically what's different about it is there's a couple of things. So let's, you know, here we go.
01:55:19.760 | I'll say a couple of things about consciousness. One thing is that what makes it different is that
01:55:25.760 | for every other aspect of science, when we think about having a correct or a good theory of it,
01:55:35.200 | we have some idea of what format that theory makes predictions in. So whether those be numbers or
01:55:41.680 | whatever, we have some idea. We may not know the answer. We may not have the theory, but we know
01:55:45.680 | that when we get the theory, here's what it's going to output. And then we'll know if it's right or
01:55:49.600 | wrong. For actual consciousness, not behavior, not neural correlates, but actual first person
01:55:54.880 | consciousness, if we had a correct theory of consciousness or even a good one, what the hell
01:56:00.240 | would, what format would it make predictions in, right? Because all the things that we know about
01:56:06.800 | basically boil down to observable behaviors. So the only thing I can think of when I think
01:56:11.280 | about that is that it'll be poetry, or it'll be something like this:
01:56:17.920 | if I ask you, okay, you've got a great theory of consciousness, and here's this
01:56:22.720 | creature, maybe it's a natural one, maybe it's an engineered one, whatever, and I want you to tell me
01:56:26.960 | what your theory says about this being, what it's like to be this being. The only thing I
01:56:34.000 | can imagine you giving me is some piece of art, a poem or something that, once I've taken it in,
01:56:40.560 | I now have a similar state as whatever, right? That's about as good
01:56:47.120 | as I can come up with. - Well, it's possible that once you have a good understanding of consciousness,
01:56:53.120 | it would be mapped to some things that are more measurable. So for example, it's possible that
01:56:59.120 | a conscious being is one that's able to suffer. So you start to look at pain and suffering.
01:57:09.360 | You can start to connect it closer to things that you can measure that in terms of how they reflect
01:57:19.760 | themselves in behavior and problem solving and creation and attainment of goals, for example,
01:57:28.400 | which I think suffering is one of the, you know, life is suffering. It's one of
01:57:33.520 | the big aspects of the human condition. And so if consciousness is somehow,
01:57:41.200 | maybe, at least a catalyst for suffering, you could start to get like echoes of it. And
01:57:48.960 | you start to see the actual effects of consciousness on behavior, that it's not just about
01:57:54.480 | subjective experience. It's like, it's really deeply integrated in the problem solving,
01:58:00.080 | decision making of a system, something like this. But also it's possible that we realize,
01:58:06.240 | this is not a philosophical statement. Philosophers can write their books. I welcome it.
01:58:12.080 | You know, I take the Turing test really seriously. I don't know why people really don't like it
01:58:18.800 | when a robot convinces you that it's intelligent. I think that's a really incredible accomplishment.
01:58:26.960 | And there's some deep sense in which that is intelligence. If it looks like it's intelligent,
01:58:32.640 | it is intelligent. And I think there's some deep aspect of a system that appears to be conscious.
01:58:42.480 | In some deep sense, it is conscious. At least for me, we have to consider that possibility.
01:58:51.680 | And a system that appears to be conscious is an engineering challenge.
01:58:57.440 | Yeah, I don't disagree with any of that. I mean, especially intelligence, I think is a publicly
01:59:03.040 | observable thing. I, and I mean, you know, science fiction has dealt with this for a century or more,
01:59:10.720 | much more maybe, this idea that when you are confronted with something that just doesn't meet
01:59:16.560 | any of your typical assumptions, so you can't look in the skull and say, oh, well, there's that
01:59:21.120 | frontal cortex, so then I guess we're good. So this thing lands on your front lawn and this,
01:59:26.480 | you know, the little door opens and something trundles out and it's sort of like, you know,
01:59:31.440 | kind of shiny and aluminum looking and it hands you this poem that it wrote while it was on,
01:59:36.720 | you know, flying over and how happy it is to meet you. Like, what's going to be your criteria,
01:59:41.200 | right? For whether you get to take it apart and see what makes it tick or whether you have to,
01:59:44.960 | you know, be nice to it and whatever, right? Like all the criteria that we have now
01:59:49.760 | and, you know, that people are using and as you said, a lot of people are down on the Turing
01:59:54.000 | test and things like this, but what else have we got? You know, because measuring the cortex size
01:59:58.880 | isn't going to cut it, right, in the broader scheme of things. So I think this is, it's a
02:00:04.800 | wide open problem that, right, that we, you know, our solution to the problem of other minds,
02:00:11.040 | it's very simplistic, right? We give each other credit for having minds just because we sort of
02:00:15.520 | on a, you know, on an anatomical level we're pretty similar and then so that's good enough, but how
02:00:20.160 | far is that going to go? So I think that's really primitive. So yeah, I think it's a major unsolved
02:00:25.760 | problem. - It's a really challenging
02:00:27.520 | direction of thought to the human race that you talked about, like embodied minds. If you start
02:00:36.800 | to think that other things other than humans have minds, that's really challenging. Because all men
02:00:44.560 | are created equal starts being like, all right, well, we should probably treat not just cows with
02:00:53.680 | respect, but like plants and not just plants, but some kind of organized conglomerates of cells
02:01:04.400 | in a petri dish. In fact, some of the work we're doing, like you're doing, and the whole community
02:01:10.960 | of science is doing with biology, people might be like, we were really mean to viruses.
02:01:15.760 | - Yeah. I mean, yeah, the thing is, you're right, and I certainly get phone calls from
02:01:22.240 | people complaining about frog skin and so on, but I think we have to separate the sort of deep
02:01:28.640 | philosophical aspects versus what actually happens. So what actually happens on earth is that people
02:01:33.920 | with exactly the same anatomical structure kill each other, you know, on a daily basis, right?
02:01:39.840 | So I think it's clear that simply knowing that something else is equally, or maybe more
02:01:45.680 | cognitive or conscious than you are, is not a guarantee of kind behavior, that much we know of.
02:01:52.960 | And so then we look at a commercial farming of mammals and various other things. And so I think
02:01:59.120 | on a practical basis, long before we get to worrying about things like frog skin,
02:02:05.760 | we have to ask ourselves, why are we, what can we do about the way that we've been behaving
02:02:10.800 | towards creatures, which we know for a fact are, because of our similarities, are basically just
02:02:16.000 | like us. You know, that's kind of a whole other social thing. But fundamentally, you know, of
02:02:21.360 | course you're absolutely right in that we are also, think about this, we are on this planet
02:02:26.560 | in some way, incredibly lucky, it's just dumb luck, that we really only have one dominant species.
02:02:34.160 | It didn't have to work out that way. So you could easily imagine that there could be a planet
02:02:38.320 | somewhere with more than one equally, or maybe near equally intelligent species, and then,
02:02:44.880 | but they may not look anything like each other, right? So there may be multiple ecosystems where
02:02:50.000 | there are things of similar to human-like intelligence, and then you'd have all kinds
02:02:55.200 | of issues about, you know, how do you relate to them when they're physically not like you at all,
02:03:00.320 | but yet, you know, in terms of behavior and culture and whatever, it's pretty obvious that
02:03:05.200 | they've got as much on the ball as you have. Or maybe imagine that there was another
02:03:09.600 | group of beings that was like, on average, you know, 40 IQ points lower, right? Like, we're
02:03:15.920 | pretty lucky in many ways, we don't really have, even though we sort of, you know, we still act
02:03:20.720 | badly in many ways, but the fact is, you know, all humans are more or less in that same range,
02:03:26.960 | but it didn't have to work out that way. - Well, but I think that's part of the way
02:03:31.520 | life works on Earth, or maybe human civilization works, is it seems like we want ourselves to be
02:03:39.520 | quite similar, and then within that, you know, where everybody's about the same,
02:03:45.360 | relatively IQ, intelligence, problem-solving capabilities, even physical characteristics,
02:03:50.960 | but then we'll find some aspect of that that's different. And that seems to be like,
02:03:57.440 | I mean, it's really dark to say, but it seems to be not even a bug, but like a feature
02:04:07.680 | of the early development of human civilization. You pick the other, your tribe versus the other
02:04:16.080 | tribe, and you war, it's a kind of evolution in the space of memes, the space of ideas, I think,
02:04:23.760 | and you war with each other. So we're very good at finding the other, even when the characteristics
02:04:29.360 | are really the same. And that's, I don't know what, that, I mean, I'm sure so many of these
02:04:35.760 | things echo in the biological world in some way. - Yeah, yeah. There's a fun experiment that I did,
02:04:42.240 | my son actually came up with this, we did a biology unit together, he's homeschooled,
02:04:47.600 | and so we did this a couple of years ago, we did this thing where, imagine, so you got this
02:04:51.200 | slime mold, right, Physarum polycephalum, and it grows on a petri dish of agar, and it sort of
02:04:57.120 | spreads out, and it's a single-celled protist, but it's like this giant thing. And so you put down a
02:05:02.800 | piece of oat, and it wants to go get the oat, and it sort of grows towards the oat. So what you do
02:05:06.560 | is you take a razor blade, and you just separate the piece of the whole culture that's growing
02:05:11.120 | towards the oat, you just kind of separate it. And so now, think about the interesting decision
02:05:16.640 | making calculus for that little piece. I can go get the oat, and therefore I won't have to share
02:05:22.640 | those nutrients with this giant mass over there, so the nutrients per unit volume is gonna be
02:05:26.880 | amazing, so I should go eat the oat. But if I first rejoin, because Physarum, once you cut it,
02:05:32.000 | has the ability to join back up, if I first rejoin, then that whole calculus becomes impossible,
02:05:37.760 | because there is no more me anymore, there's just we, and then we will go eat this thing.
02:05:41.920 | So this interesting, you can imagine a kind of game theory where the number of agents isn't fixed,
02:05:48.400 | and that it's not just cooperate or defect, but it's actually merge and whatever.
02:05:52.080 | So that kind of computation, how does it do that decision making?
02:05:56.320 | Yeah, so it's really interesting. And so empirically, what we found is that it tends
02:06:02.240 | to merge first. It tends to merge first, and then the whole thing goes. But it's really interesting
02:06:06.560 | that that calculus, do we even have, I mean, I'm not an expert in the economic game theory and all
02:06:11.600 | that, but maybe there's some sort of hyperbolic discounting or something. But maybe
02:06:15.680 | this idea that the actions you take not only change your payoff, but they change who or what you are,
02:06:24.560 | and that you may not, you could take an action after which you don't exist anymore,
02:06:28.400 | or you are radically changed, or you are merged with somebody else. As far as I know,
02:06:34.880 | we're still missing a formalism for even knowing how to model any of that.
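Since the formalism is described as missing, what follows is only a back-of-the-envelope toy with made-up numbers, showing where the standard setup strains: a classical payoff matrix assumes a fixed set of players, but the "merge first" move removes one of the players before any payoff is collected.

```python
# Back-of-the-envelope toy only; a real formalism for this is described as missing.
# All quantities (oat nutrients, fragment and collective volumes) are made-up numbers.

OAT_NUTRIENTS = 10.0
FRAGMENT_VOLUME = 1.0
COLLECTIVE_VOLUME = 20.0

# Option 1: stay separate and eat the oat alone.
# High payoff per unit volume, and there is still a "me" to collect it.
eat_alone = OAT_NUTRIENTS / FRAGMENT_VOLUME

# Option 2: rejoin first, then the enlarged collective eats the oat.
# The same nutrients spread over a much larger body, and the original player
# no longer exists as a separate agent to receive any payoff at all.
merge_first = OAT_NUTRIENTS / (FRAGMENT_VOLUME + COLLECTIVE_VOLUME)

print(f"eat alone  : {eat_alone:.2f} nutrients per unit volume (agent persists)")
print(f"merge first: {merge_first:.2f} nutrients per unit volume (agent dissolves into the collective)")
```

Whatever the eventual formalism looks like, it has to score actions that change which agents exist to receive the score.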
02:06:39.200 | Do you see evolution, by the way, as a process that applies here on Earth? Or is it some,
02:06:43.840 | where did evolution come from? Yeah. So this thing that from the very origin of life that
02:06:50.960 | took us to today, what the heck is that? I think evolution is inevitable in the sense that
02:06:58.240 | if you combine, and basically, I think one of the most useful things that was done in early
02:07:03.360 | computing, I guess in the 60s, it started with evolutionary computation and just showing how
02:07:08.080 | simple it is that if you have imperfect heredity and competition together, those two things,
02:07:17.520 | well, three things, right? So heredity, imperfect heredity, and competition or selection, those
02:07:22.240 | three things, and that's it. Now you're off to the races, right? And so that can be, it's not
02:07:28.160 | just on Earth, because it can be done in the computer, it can be done in chemical systems,
02:07:31.360 | it can be done in, Lee Smolin says it works on cosmic scales. So I think that that kind of thing
02:07:40.000 | is incredibly pervasive and general, it's a general feature of life. It's interesting to think about,
02:07:47.520 | the standard thought about this is that it's blind, right? Meaning that the intelligence of
02:07:55.280 | the process is zero, it's stumbling around. And I think that back in the day when the options were,
02:08:03.360 | it's dumb like machines, or it's smart like humans, then of course, the scientists went
02:08:07.680 | in this direction because nobody wanted creationism. And so they said, "Okay, it's got to be
02:08:10.880 | completely blind." I'm not actually sure, right? Because I think that everything is a continuum.
02:08:16.880 | And I think that it doesn't have to be smart with foresight like us, but it doesn't have to be
02:08:21.600 | completely blind either. I think there may be aspects of it, and in particular, this kind of
02:08:26.560 | multi-scale competency might give it a little bit of look ahead maybe, or a little bit of problem
02:08:32.400 | solving sort of baked in. But that's going to be completely different in different systems.
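That basic recipe, heredity, imperfect copying, and selection, is short enough to run almost anywhere. A minimal sketch, using the classic "weasel" toy with an arbitrary target string, population size, and mutation rate standing in for a real fitness landscape:

```python
import random

# Minimal sketch of the three-ingredient recipe: heredity (copying), imperfect
# heredity (mutation), and selection (keep the better copies). The target string,
# population size, and mutation rate are arbitrary choices for illustration.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.02) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch for ch in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Selection plus imperfect heredity: the top 20 each leave 5 slightly noisy copies.
    population = [mutate(parent) for parent in population[:20] for _ in range(5)]

print(generation, repr(population[0]))
```

Nothing in the loop has foresight; copying with occasional errors plus keeping the better copies is enough to climb toward the target, which is part of why the same recipe can run in computers, chemistry, or cells.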
02:08:38.400 | But I do think it's general, I don't think it's just on Earth, I think it's a very fundamental
02:08:43.120 | thing. - And it does seem to have a kind of direction that it's taking us, that's somehow,
02:08:49.680 | perhaps is defined by the environment itself. It feels like we're headed towards something.
02:08:54.640 | Like we're playing out a script that was just like a single cell defines the entire organism.
02:09:01.920 | It feels like from the origin of Earth itself, it's playing out a kind of script.
02:09:08.880 | - Yeah. - Like we can't really go any other way.
02:09:12.000 | - I mean, so this is very controversial, and I don't know the answer, but people have argued that
02:09:18.080 | this is called sort of rewinding the tape of life, right? And some people have argued, I think Conway
02:09:24.640 | Morris maybe has argued that there's a deep attractor, for example, to the human kind of
02:09:32.640 | structure, and that if you were to rewind it again, you'd basically get more or less the same
02:09:36.640 | thing. And then other people have argued that, no, it's incredibly sensitive to frozen accidents,
02:09:40.800 | and that once certain stochastic decisions are made downstream, everything is going to be
02:09:45.280 | different. I don't know. I don't know. We're very bad at predicting attractors in the space of
02:09:51.920 | complex systems, generally speaking, right? We don't know. So maybe evolution on Earth has these
02:09:57.200 | deep attractors that no matter what has happened, pretty much would likely to end up there,
02:10:01.760 | or maybe not. I don't know. - Well, it's a really difficult idea
02:10:04.560 | to imagine that if you ran Earth a million times, 500,000 times, you would get Hitler.
02:10:11.760 | - Yeah. - We don't like to think like that. We think
02:10:16.320 | because at least maybe in America, you like to think that individual decisions can change the
02:10:23.600 | world, and if individual decisions can change the world, then surely any perturbation results in a
02:10:31.680 | totally different trajectory. But maybe there's, in this competency hierarchy,
02:10:38.240 | it's a self-correcting system that just ultimately, there's a bunch of chaos that ultimately is
02:10:44.000 | leading towards something like a superintelligent artificial intelligence system. The answer is 42.
02:10:50.400 | I mean, there might be a kind of imperative for life that it's headed to, and we're too focused
02:11:00.000 | on our day-to-day life of getting coffee and snacks and having sex and getting a promotion at work,
02:11:09.040 | not to see the big imperative of life on Earth that it's headed towards something.
02:11:14.000 | - Yeah, maybe, maybe. It's difficult. I think one of the things that's important about
02:11:21.040 | chimeric bioengineering technologies, all of those things, are that we have to start
02:11:28.880 | developing a better science of predicting the cognitive goals of composite systems. So we're
02:11:34.960 | just not very good at it, right? We don't know if I create a composite system, and this could
02:11:41.200 | be Internet of Things or swarm robotics or a cellular swarm or whatever, what is the emergent
02:11:48.640 | intelligence of this thing? First of all, what level is it going to be at? And if it has goal
02:11:52.480 | directed capacity, what are the goals going to be? Like we are just not very good at predicting that
02:11:57.440 | yet. And I think that it's an existential level need for us to be able to, because we're building
02:12:07.520 | these things all the time, right? We're building both physical structures like swarm robotics,
02:12:12.720 | and we're building social financial structures and so on with very little ability to predict what
02:12:20.080 | sort of autonomous goals that system is going to have, of which we are now cogs. And so, right,
02:12:24.720 | so learning to predict and control those things is going to be critical. So if you're right,
02:12:30.480 | and there is some kind of attractor to evolution, it would be nice to know what that is, and then
02:12:35.760 | to make a rational decision of whether we're going to go along or we're going to pop out of it or try
02:12:39.600 | to pop out of it, because there's no guarantee. I mean, that's the other kind of important thing.
02:12:44.400 | A lot of people, I get a lot of complaints from people emailing and say, "What you're doing,
02:12:50.720 | it isn't natural." And I'll say, "Look, natural, that'd be nice if somebody was making sure that
02:12:56.880 | natural was matched up to our values, but no one's doing that." Evolution optimizes for biomass,
02:13:03.760 | that's it. Nobody's optimizing, it's not optimizing for your happiness, I don't think necessarily it's
02:13:08.320 | optimizing for intelligence or fairness or any of that stuff. - I'm going to find that person that
02:13:14.080 | emailed you, beat them up, take their place, steal everything they own, and say, "Now this is
02:13:21.440 | natural." - This is natural, yeah, exactly, because it comes from an old world view where you could
02:13:27.600 | assume that whatever is natural, that that's probably for the best, and I think we're long
02:13:32.000 | out of that Garden of Eden kind of view. So I think we can do better, and we have to, right?
02:13:37.920 | Natural just isn't great for a lot of life forms. - What are some cool synthetic organisms that
02:13:44.000 | you think about, you dream about? When you think about embodied mind, what do you imagine? What do
02:13:50.240 | you hope to build? - Yeah, on a practical level, what I really hope to do is to gain enough of
02:13:56.880 | an understanding of the embodied intelligence of organs and tissues such that we can achieve
02:14:03.840 | a radically different regenerative medicine, so that we can say, basically, and I think about it
02:14:11.280 | in terms of, okay, what's the goal, end game for this whole thing? To me, the end game is something
02:14:20.080 | that you would call an anatomical compiler. So the idea is you would sit down in front of the
02:14:24.000 | computer, and you would draw the body or the organ that you wanted. Not molecular details,
02:14:30.640 | but this is what I want. I want a six-legged frog with a propeller on top, or I want a heart that
02:14:35.920 | looks like this, or I want a leg that looks like this. And what it would do, if we knew what we
02:14:39.760 | were doing, is put out, convert that anatomical description into a set of stimuli that would have
02:14:47.040 | to be given to cells to convince them to build exactly that thing. I probably won't live to see
02:14:51.840 | it, but I think it's achievable. And I think with that, if we can have that, then that is basically
02:14:58.320 | the solution to all of medicine except for infectious disease. So birth defects, traumatic
02:15:04.800 | injury, cancer, aging, degenerative disease. If we knew how to tell cells what to build,
02:15:09.200 | all of those things go away. And the positive feedback spiral of
02:15:15.760 | economic costs, where all of the advances are increasingly more heroic and expensive
02:15:21.200 | interventions of a sinking ship when you're like 90 and so on, right? All of that goes away,
02:15:25.680 | because basically instead of trying to fix you up as you degrade, you progressively regenerate.
02:15:32.480 | You apply the regenerative medicine early before things degrade. So I think that'll have massive
02:15:37.600 | economic impacts over what we're trying to do now, which is not at all sustainable. And that's what I
02:15:43.440 | hope. I hope that we get...so to me, yes, the xenobots will be doing useful things, cleaning
02:15:50.000 | up the environment, cleaning out your joints and all that kind of stuff. But more important than
02:15:55.520 | that, I think we can use these synthetic systems to try to understand, to develop a science of
02:16:04.320 | detecting and manipulating the goals of collective intelligences of cells, specifically for regenerative
02:16:09.840 | medicine. And then sort of beyond that, if we sort of think further beyond that, what I hope
02:16:15.200 | is that, kind of like what you said, all of this drives a reconsideration of how we formulate
02:16:21.440 | ethical norms. Because this old school...so in the olden days, what you could do is,
02:16:26.960 | as you were confronted with something, you could tap on it, right? And if you heard a metallic
02:16:32.240 | clanging sound, you'd said, "Ah, fine," right? So you could conclude it was made in a factory,
02:16:36.080 | I can take it apart, I can do whatever, right? If you did that and you got a sort of a squishy
02:16:40.320 | kind of warm sensation, you'd say, "Ah, I need to be more or less nice to it," and whatever.
02:16:45.040 | That's not going to be feasible. It was never really feasible, but it was good enough because
02:16:49.120 | we didn't know any better. That needs to go. And I think that by breaking down those artificial
02:16:56.880 | barriers, someday we can try to build a system of ethical norms that does not rely on these
02:17:04.800 | completely contingent facts of our earthly history, but on something much, much deeper that
02:17:09.760 | really takes agency and the capacity to suffer and all that, takes that seriously.
02:17:16.160 | The capacity to suffer. And the deep questions I would ask of a system are, "Can I eat it and can
02:17:21.600 | I have sex with it?" Which are the two fundamental tests of, again, the human condition. So I can
02:17:30.480 | basically do what Dali does in the physical space. So print out, like a 3D print, Pepe the Frog with
02:17:40.640 | a propeller head, propeller hat, is the dream. Well, I want to, yes and no. I mean, I want to
02:17:48.000 | get away from the 3D printing thing because that will be available for some things much earlier. I
02:17:53.280 | mean, we can already do bladders and ears and things like that because it's micro-level control.
02:17:58.240 | When you 3D print, you are in charge of where every cell goes. And for some things,
02:18:01.840 | for like this thing, they had that, I think, 20 years ago or maybe earlier than that, you could
02:18:06.000 | do that. So yeah, I would like to emphasize the Dali part where you provide a few words
02:18:11.200 | and it generates a painting. So here you say, "I want a frog with these features," and then it would
02:18:18.960 | go direct a complex biological system to construct something like that.
02:18:24.880 | Yeah. The main magic would be, I mean, I think from looking at Dali and so on, it looks like
02:18:30.080 | the first part is kind of solved now where you go from the words to the image. Like that seems more
02:18:34.960 | or less solved. The next step is really hard. This is what keeps things like CRISPR and genomic
02:18:41.120 | editing and so on. This is what limits all the impacts for regenerative medicine because going
02:18:48.880 | back to, "Okay, this is the knee joint that I want," or, "This is the eye that I want. Now,
02:18:53.120 | what genes do I edit to make that happen?" Right? Going back in that direction is really hard. So
02:18:57.680 | instead of that, it's going to be, "Okay, I understand how to motivate cells to build particular
02:19:01.520 | structures. Can I rewrite the memory of what they think they're supposed to be building such that
02:19:05.760 | then I can take my hands off the wheel and let them do their thing?"
02:19:09.760 | So some of that is experiment, but some of that maybe AI can help too. Just like with protein
02:19:14.320 | folding, this is exactly the problem that protein folding, in the simplest medium, tried and has
02:19:24.640 | solved with AlphaFold, which is how does the sequence of letters result in this three-dimensional
02:19:33.360 | shape? I guess it didn't solve it because you have to, if you say, "I want this shape, how do I then
02:19:40.800 | have a sequence of letters?" Yeah. The reverse engineering step is really tricky.
02:19:46.160 | It is. I think where, and we're doing some of this now, is to use AI to try and build
02:19:54.400 | actionable models of the intelligence of the cellular collectives. So try to help us
02:19:58.480 | gain models that... And we've had some success in this. We did something like this for
02:20:04.480 | repairing birth defects of the brain in frog. We've done some of this for normalizing melanoma,
02:20:12.400 | where you can really start to use AI to make models of how would I impact this thing if I
02:20:19.760 | wanted to, given all the complexities and given all the controls that it knows how to do.
02:20:27.280 | So when you say regenerative medicine, so we talked about creating biological organisms, but
02:20:34.000 | if you regrow a hand, that information is already there. The biological system has that information.
02:20:42.320 | So how does regenerative medicine work today? How do you hope it works? What's the hope there?
02:20:49.440 | How do you make it happen?
02:20:51.520 | Well, today there's a set of popular approaches. So one is 3D printing. So the idea is I'm going
02:20:57.760 | to make a scaffold of the thing that I want. I'm going to seed it with cells and then there it is.
02:21:01.520 | Right? So kind of direct, and then that works for certain things. You can make a bladder that way,
02:21:05.280 | or an ear, something like that. The other idea is some sort of stem cell transplant. The idea is
02:21:12.240 | if we put in stem cells with appropriate factors, we can get them to generate certain kinds of
02:21:17.040 | neurons for certain diseases and so on. All of those things are good for relatively simple
02:21:23.440 | structures, but when you want an eye or a hand or something else, I think, and this may be an
02:21:29.440 | unpopular opinion, I think the only hope we have in any reasonable kind of timeframe is to understand
02:21:36.160 | how the thing was motivated to get made in the first place. So what is it that made those cells
02:21:41.920 | in the beginning create a particular arm with a particular set of sizes and shapes and number
02:21:48.720 | of fingers and all that? And why is it that a salamander can keep losing theirs and keep
02:21:51.920 | regrowing theirs and a planarian can do the same, even more so? To me, the ultimate
02:21:58.160 | regenerative medicine is when you can tell the cells to build whatever it is you need them to
02:22:03.360 | build. Right? And so that we can all be like planaria, basically.
02:22:07.600 | Do you have to start at the very beginning or can you do a shortcut? Because if you're
02:22:13.520 | growing a hand, you already got the whole organism.
02:22:17.120 | Yeah. So here's what we've done, right? So we've more or less solved that in frogs. So frogs,
02:22:22.480 | unlike salamanders, do not regenerate their legs as adults. And so we've shown that with a very
02:22:28.800 | kind of simple intervention. So what we do is there's two things. You need to have a signal
02:22:36.400 | that tells the cells what to do, and then you need some way of delivering it. And so this is
02:22:40.080 | worked together with David Kaplan. And I should do a disclosure here. We have a company called
02:22:44.640 | Morphoceuticals, I think it's a spin-off, where we're trying to address limb regeneration. So
02:22:51.280 | we've solved it in the frog and we're now in trials in mice. So now we're in mammals now.
02:22:55.920 | I can't say anything about how it's going, but the frog thing is solved. So what you do is after-
02:22:59.920 | You can have a little frog Luke Skywalker with a regrowing hand.
02:23:03.440 | Yeah, basically. So we did it with legs instead of forearms. And what you do is
02:23:08.400 | after amputation, normally they don't regenerate, you put on a wearable bioreactor. So it's this
02:23:13.280 | thing that goes on and Dave Kaplan's lab makes these things. And inside, it's a very controlled
02:23:20.480 | environment. It is a silk gel that carries some drugs, for example, ion channel drugs.
02:23:25.600 | And what you're doing is you're saying to these cells, "You should regrow what normally goes here."
02:23:31.680 | So that whole thing is on for 24 hours. Then you take it off and you don't touch the leg again.
02:23:37.520 | This is really important because what we're not looking for is a set of micromanagement,
02:23:42.240 | printing or controlling the cells. We want to trigger. We want to interact with it early on
02:23:46.960 | and then not touch it again, because we don't know how to make a frog leg, but the frog knows
02:23:50.720 | how to make a frog leg. So 24 hours, 18 months of leg growth after that, without us touching it
02:23:56.720 | again. And after 18 months, you get a pretty good leg. That kind of shows this proof of concept that
02:24:01.520 | early on when the cells, right after injury, when they're first making a decision about what they're
02:24:05.040 | going to do, you can impact them. And once they've decided to make a leg, they don't need you after
02:24:09.680 | that. They can do their own thing. So that's an approach that we're now taking. - What about
02:24:14.240 | cancer suppression? That's something you mentioned earlier. How can all of these ideas help with
02:24:19.200 | cancer suppression? - So let's go back to the beginning and ask what cancer is. So I think
02:24:25.040 | asking why there's cancer is the wrong question. I think the right question is, why is there ever
02:24:29.360 | anything but cancer? So in the normal state, you have a bunch of cells that are all cooperating
02:24:35.200 | towards a large scale goal. If that process of cooperation breaks down and you've got a cell
02:24:40.320 | that is isolated from that electrical network that lets you remember what the big goal is,
02:24:44.640 | you revert back to your unicellular lifestyle. Now think about that border between self and world,
02:24:49.840 | right? Normally when all these cells are connected by gap junctions into an electrical network,
02:24:54.160 | they are all one self, right? Meaning that their goals, they have these large tissue level goals
02:25:01.360 | and so on. As soon as a cell is disconnected from that, the self is tiny, right? And so at that
02:25:06.640 | point, a lot of people model cancer cells as being more selfish and all that. They're not
02:25:12.080 | more selfish. They're equally selfish. It's just that their self is smaller. Normally the self is
02:25:15.840 | huge. Now they've got tiny little selves. Now what are the goals of tiny little selves? Well,
02:25:19.760 | proliferate and migrate to wherever life is good. And that's metastasis. That's proliferation and
02:25:24.160 | metastasis. So one thing we found, and people noticed years ago, is that when cells convert to
02:25:30.400 | cancer, the first thing you see is they close the gap junctions. And it's a lot like, I think,
02:25:35.440 | it's a lot like that experiment with the slime mold where until you close that gap junction,
02:25:39.920 | you can't even entertain the idea of leaving the collective because there is no you at that point,
02:25:44.160 | right? Your mind is melded with this whole other network. But as soon as the gap junction is
02:25:48.160 | closed, there's now a boundary between you and the rest of the body, which is just outside environment to
02:25:53.680 | you. You're just a unicellular organism and the rest of the body is environment.
02:25:58.320 | So we studied this process and we worked out a way to artificially control the bioelectric
02:26:05.760 | state of these cells to physically force them to remain in that network. And so then what that
02:26:11.520 | means is that nasty mutations like KRAS and things like that, these really tough oncogenic mutations
02:26:17.760 | that cause tumors, if you introduce them but then artificially control the bioelectrics,
02:26:26.160 | you greatly reduce tumorigenesis or normalize cells that had already begun to convert. They
02:26:33.040 | basically go back to being normal cells. And so this is another, much like with the planaria,
02:26:37.520 | this is another way in which the bioelectric state kind of dominates what the genetic state is. So
02:26:43.680 | if you sequence the nucleic acid, you'll see the KRAS mutation. You'll say, "Ah, well, that's going
02:26:48.720 | to be a tumor." But there isn't a tumor because bioelectrically you've kept the cells connected
02:26:53.360 | and they're just working on making nice skin and kidneys and whatever else. So we've started
02:26:58.320 | moving that to human glioblastoma cells, and we're hoping in the future for
02:27:04.640 | interaction with patients. - So is this one of the possible ways in which we may "cure cancer"?
02:27:12.160 | - I think so. Yeah, I think so. I think the actual cure, I mean, there are other technologies,
02:27:16.960 | you know, immunotherapy, I think that's a great technology. Chemotherapy, I don't think is a good
02:27:22.240 | technology. I think we got to get off of that. - So chemotherapy just kills cells?
02:27:27.440 | - Yeah, well, chemotherapy hopes to kill more of the tumor cells than of your cells. That's it.
02:27:32.960 | It's a fine balance. The problem is the cells are very similar because they are your cells.
02:27:37.120 | And so if you don't have a very tight way of distinguishing between them, then the toll that
02:27:43.760 | chemo takes on the rest of the body is just unbelievable. - And immunotherapy tries to get
02:27:47.760 | the immune system to do some of the work. - Exactly. Yeah, I think that's potentially a
02:27:51.760 | very good approach. If the immune system can be taught to recognize enough of the cancer cells,
02:27:59.280 | that's a pretty good approach. But I think our approach is in a way more fundamental
02:28:03.840 | because if you can keep the cells harnessed towards organ level goals as opposed to
02:28:09.360 | individual cell goals, then nobody will be making a tumor or metastasizing and so on.
02:28:14.240 | - So we've been living through a pandemic. What do you think about viruses in this full,
02:28:21.280 | beautiful, biological context we've been talking about? Are they beautiful to you? Are they
02:28:27.120 | terrifying? Also, maybe, let's say, since we've been making distinctions this whole
02:28:35.920 | conversation, are they living? Are they embodied minds? Embodied minds that are assholes.
02:28:42.480 | - As far as I know, and I haven't been able to find this paper again, but somewhere I saw in the
02:28:48.320 | last couple of months, there was some paper showing an example of a virus that actually had physiology.
02:28:53.680 | So there was something going on, I think proton flux or something, on the virus itself.
02:28:57.520 | But barring that, generally speaking, viruses are very passive. They don't do anything by
02:29:03.200 | themselves. And so I don't see any particular reason to attribute much of a mind to them. I
02:29:09.360 | think they represent a way to hijack other minds for sure, like cells and other things.
02:29:18.240 | - But that's an interesting interplay though. If they're hijacking other minds,
02:29:23.520 | you know, the way we were talking about living organisms, that they can interact with each other
02:29:28.640 | and alter each other's trajectory by having interacted. I mean, that's a deep,
02:29:37.120 | meaningful connection between a virus and a cell. And I think both are transformed by the
02:29:45.920 | experience. And so in that sense, both are living. - Yeah, yeah. You know, the whole category,
02:29:53.280 | I don't, this question of what's living and what's not living, I really, I'm not sure. And I know
02:29:58.880 | there's people that work on this and I don't want to piss anybody off, but I have not found that
02:30:04.240 | particularly useful as to try and make that a binary kind of distinction. I think level of
02:30:10.880 | cognition is very interesting as a continuum, but living and non-living, I really don't know what to do
02:30:18.000 | with that. I don't know what you do next after making that distinction. - That's why I make the
02:30:23.360 | very binary distinction. Can I have sex with it or not? Can I eat it or not? Those, 'cause those are
02:30:29.360 | actionable, right? - Yeah. Well, I think that's a critical point that you brought up because how you
02:30:33.520 | relate to something is really what this is all about, right? As an engineer, how do I control it?
02:30:39.760 | But maybe I shouldn't be controlling it. Maybe I should be, you know, can I have a relationship
02:30:44.080 | with it? Should I be listening to its advice? Like all the way from, you know, I need to take it
02:30:48.720 | apart all the way to I better do what it says, 'cause it seems to be pretty smart and everything
02:30:53.520 | in between, right? That's really what we're asking about. - Yeah, we need to understand our relationship
02:30:59.120 | to it. We're searching for that relationship, even in the most trivial senses. You came up with a lot
02:31:04.800 | of interesting terms. We've mentioned some of them. Agential material, that's a really interesting one.
02:31:12.080 | That's a really interesting one for the future of computation and artificial intelligence and
02:31:18.480 | computer science and all of that. There's also, let me go through some of them, if they spark some
02:31:25.120 | interesting thought for you. There's teleophobia, the unwarranted fear of erring on the side of
02:31:30.960 | too much agency when considering a new system. - Yeah, I mean- - That's the opposite. I mean,
02:31:37.120 | being afraid of maybe anthropomorphizing the thing. - This will get some people ticked off, I think.
02:31:42.880 | But I don't think, I think the whole notion of anthropomorphizing is a holdover from a
02:31:50.720 | pre-scientific age where humans were magic and everything else wasn't magic, and you were
02:31:56.880 | anthropomorphizing when you dared suggest that something else has some features of humans.
02:32:02.880 | And I think we need to be way beyond that. And this issue of anthropomorphizing, I think,
02:32:08.800 | is a cheap charge. I don't think it holds any water at all other than when somebody makes a
02:32:16.000 | cognitive claim. I think all cognitive claims are engineering claims, really. So when somebody says,
02:32:21.280 | "This thing knows," or, "This thing hopes," or, "This thing wants," or, "This thing predicts,"
02:32:24.960 | all you can say is, "Fabulous, give me the engineering protocol that you've derived using
02:32:31.120 | that hypothesis, and we will see if this thing helps us or not, and then we can make a rational
02:32:36.560 | decision." - I also like anatomical compiler, a future system representing the long-term endgame
02:32:43.360 | of the science of morphogenesis that reminds us how far away from true understanding we are.
02:32:49.200 | Someday, you will be able to sit in front of an anatomical compiler, specify the shape of the
02:32:54.880 | animal or plant that you want, and it will convert that shape specification to a set of stimuli
02:33:00.640 | that will have to be given to cells to build exactly that shape. No matter how weird
02:33:06.800 | it ends up being, you have total control. Just imagine the possibility for memes
02:33:12.960 | in the physical space. One of the glorious accomplishments of human civilizations is memes
02:33:19.360 | in digital space. Now, this could create memes in physical space. I am both excited and terrified
02:33:26.400 | by that possibility. Cognitive light cone, I think we also talked about. The outer boundary in space
02:33:33.120 | and time of the largest goal a given system can work towards. Is this kind of like shaping the set
02:33:40.960 | of options? - It's a little different than options. It's really focused on... So, I first
02:33:49.120 | came up with this back in 2018, I want to say. There was a conference, a Templeton conference,
02:33:55.040 | where they challenged us to come up with frameworks. I think, actually, it's the here,
02:33:59.520 | it's the diverse intelligence community that... - Summer Institute. - Yeah, they had a summer
02:34:03.680 | institute, but... - That's the logo, it's the bee with some circuits. - Yeah, it's got different
02:34:07.840 | life forms. So, the whole program is called diverse intelligence, and they challenged us to
02:34:14.000 | come up with a framework that was suitable for analyzing different kinds of intelligence
02:34:19.280 | together, right? Because the kinds of things you do to a human are not good with an octopus,
02:34:24.000 | not good with a plant, and so on. So, I started thinking about this, and I asked myself,
02:34:30.480 | what do all cognitive agents, no matter what their provenance, no matter what their
02:34:35.920 | architecture is, what do cognitive agents have in common? And it seems to me that what they
02:34:41.680 | have in common is some degree of competency to pursue a goal. And so, what you can do then is,
02:34:46.960 | you can draw. And so, what I ended up drawing was this thing that it's kind of like a backwards
02:34:51.840 | Minkowski cone diagram, where all of space is collapsed into one axis, and then here,
02:34:58.320 | and then time is this axis. And then what you can do is, you can draw for any creature,
02:35:02.320 | you can semi-quantitatively estimate what are the spatial and temporal
02:35:09.040 | goals that it's capable of pursuing. So, for example, if you are a tick or a bacterium, and all you really
02:35:18.000 | are able to pursue is maximizing the level of some chemical in your
02:35:23.520 | vicinity, right, that's all you've got, it's a tiny little cone, then you're a simple system
02:35:27.600 | like a tick or a bacterium. If you are something like a dog, well, you've got some ability to
02:35:34.560 | care about some spatial region, some temporal extent, you can remember a little bit backwards, you can
02:35:41.040 | predict a little bit forwards, but you're never, ever going to care about what happens in the next
02:35:45.920 | town over four weeks from now. It's just, as far as we know, it's just impossible for that
02:35:50.320 | kind of architecture. If you're a human, you might be working towards world peace long after you're
02:35:55.680 | dead, right? So, you might have a planetary scale goal that's enormous, right? And then there may be
02:36:02.880 | other greater intelligences somewhere that can care in the linear range about numbers of creatures
02:36:07.680 | that, you know, some sort of Buddha-like character that can care about everybody's welfare, like,
02:36:11.600 | really care the way that we can't. And so, that, it's not a mapping of what you can sense,
02:36:18.640 | how far you can sense, right? It's not a mapping of how far you can act, it's a mapping of how big
02:36:23.600 | are the goals you are capable of envisioning and working towards. And I think that enables you to
02:36:28.960 | put synthetic kinds of constructs, AIs, aliens, swarms, whatever, on the same diagram, because
02:36:39.040 | we're not talking about what you're made of or how you got here, we're talking about what are the
02:36:42.240 | size and complexity of the goals towards which you can work.
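As a rough illustration of how this lets very different agents sit on one diagram, here is a small sketch; the numbers are invented placeholders, not measurements, and the `CognitiveLightCone` class with its single-number summary is only one possible way to caricature the idea.

```python
# Sketch: summarize each agent by the spatial and temporal extent of the
# largest goal it can work towards, then compare them on a common log scale.
import math
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    name: str
    spatial_extent_m: float   # how far away its largest goals can reach
    temporal_extent_s: float  # how far into the past/future its goals span

    def log_scale(self) -> float:
        # Crude scalar: log10 of the space-time extent of its biggest goals.
        return math.log10(self.spatial_extent_m) + math.log10(self.temporal_extent_s)

agents = [
    CognitiveLightCone("bacterium", 1e-5, 60),    # local chemical gradients, seconds
    CognitiveLightCone("dog", 1e3, 86_400),       # nearby places, roughly a day
    CognitiveLightCone("human", 1.3e7, 3.2e9),    # planet-scale goals, a century or more
]

for a in sorted(agents, key=lambda a: a.log_scale()):
    print(f"{a.name:10s} goal scale (log10 m*s): {a.log_scale():5.1f}")
```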
02:36:46.160 | - Is there any other terms that pop into mind that are interesting?
02:36:50.000 | - I'm trying to remember. I have a list of them somewhere on my website.
02:36:53.760 | - Target morphology, yeah, people should definitely check it out. Morphoceutical,
02:36:58.880 | I like that one. Ionoceutical.
02:37:01.200 | - Yeah, yeah. I mean, those refer to different types of interventions in the regenerative
02:37:06.560 | medicine space. So a morphoceutical is something that, it's a kind of intervention that really
02:37:12.480 | targets the cell's decision-making process about what they're going to build. And ionoceuticals
02:37:18.800 | are like that, but more focused specifically on the bioelectrics. I mean, there's also, of course,
02:37:22.720 | biochemical, biomechanical, who knows what else, maybe optical kinds of signaling systems there
02:37:28.160 | as well. Target morphology is interesting. It really, it's designed to capture this idea that
02:37:36.160 | it's not just feed-forward emergence, and oftentimes in biology, I mean, of course,
02:37:40.240 | that happens too, but in many cases in biology, the system is specifically working towards a
02:37:46.000 | target in anatomical morphospace, right? It's a navigation task, really. These kinds of problem
02:37:50.800 | solving can be formalized as navigation tasks, and they're really going towards a particular
02:37:59.600 | region. How do you know? Because you deviate them and then they go back.
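A toy way to picture "deviate them and they go back" is a simple setpoint controller in a made-up two-dimensional morphospace: the system stores a target state and keeps reducing its error toward it, so a perturbation decays away. This is only an illustrative sketch of the navigation framing, not a model of any real developmental system; the gain, step count, and coordinates are arbitrary.

```python
# Sketch: error-correcting navigation toward a stored target in "morphospace".
import numpy as np

def develop(state, target, gain=0.2, steps=60, perturb_at=30, perturbation=None):
    """Each step closes part of the gap to the target; a mid-course perturbation
    is corrected away, which is the 'deviate them and they go back' signature."""
    trajectory = [state.copy()]
    for t in range(steps):
        if perturbation is not None and t == perturb_at:
            state = state + perturbation         # experimenter deviates the system
        state = state + gain * (target - state)  # corrective step toward the target
        trajectory.append(state.copy())
    return np.array(trajectory)

target = np.array([1.0, 2.0])                     # desired region of morphospace
traj = develop(np.zeros(2), target, perturbation=np.array([0.8, -1.5]))
print("shortly after the perturbation:", traj[31])
print("final state, back near the target:", traj[-1])
```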
02:38:02.240 | - Let me ask you, because you've really challenged a lot of ideas in biology in the work you do,
02:38:12.640 | probably because some of your rebelliousness comes from the fact that you came from a different field
02:38:18.960 | of computer engineering. But could you give advice to young people today in high school
02:38:24.400 | or college that are trying to pave their life story, whether it's in science or elsewhere,
02:38:32.400 | how they can have a career they can be proud of or a life they can be proud of? Advice.
02:38:37.600 | - Boy, it's dangerous to give advice because things change so fast. But one central thing I can say,
02:38:42.640 | moving up and through academia and whatnot, you will be surrounded by really smart people.
02:38:49.200 | And what you need to do is be very careful at distinguishing specific critique versus kind of
02:38:57.120 | meta advice. And what I mean by that is, if somebody really smart and successful and obviously
02:39:05.280 | competent is giving you specific critiques on what you've done, that's gold. That's an opportunity to
02:39:12.320 | hone your craft to get better at what you're doing, to learn, to find your mistakes, like,
02:39:16.000 | that's great. If they are telling you what you ought to be studying, how you ought to approach
02:39:22.400 | things, what is the right way to think about things, you should probably ignore most of that.
02:39:28.400 | And the reason I make that distinction is that a lot of really successful people are very well
02:39:36.160 | calibrated on their own ideas and their own field and their own sort of area. And they know exactly
02:39:42.960 | what works and what doesn't and what's good and what's bad, but they're not calibrated on your
02:39:46.720 | ideas. And so the things they will say, "Oh, this is a dumb idea. Don't do this and you shouldn't
02:39:53.280 | do that." That stuff is generally worse than useless. It can be very demoralizing and really
02:40:03.200 | limiting. And so what I say to people is read very broadly, work really hard, know what you're
02:40:09.200 | talking about, take all specific criticism as an opportunity to improve what you're doing,
02:40:14.800 | and then completely ignore everything else. Because I just tell you from my own experience,
02:40:20.320 | most of what I consider to be interesting and useful things that we've done,
02:40:24.960 | very smart people have said, "This is a terrible idea. Don't do that."
02:40:29.200 | Yeah, I think we just don't know. We have no idea beyond our own. At best, we know what we
02:40:36.160 | ought to be doing. We very rarely know what anybody else should be doing.
02:40:39.360 | Yeah, and their ideas, their perspective has been also calibrated, not just on their field
02:40:45.120 | and specific situation, but also on a state of that field at a particular time in the past.
02:40:51.840 | So there's not many people in this world that are able to achieve revolutionary success multiple
02:40:58.400 | times in their life. So whenever you say somebody very smart, usually what that means is somebody
02:41:04.400 | who's smart, who achieved a success at a certain point in their life, and people often get stuck
02:41:10.800 | in that place where they found success. To be constantly challenging your world view is a very
02:41:15.920 | difficult thing. So yeah, also at the same time, probably if a lot of people tell...
02:41:23.760 | That's the weird thing about life. If a lot of people tell you that something is stupid or is
02:41:30.880 | not going to work, that either means it's stupid, it's not going to work, or it's actually a great
02:41:38.400 | opportunity to do something new. And you don't know which one it is. And it's probably equally
02:41:44.560 | likely to be either. Well, I don't know the probabilities. Depends how lucky you are,
02:41:50.480 | depends how brilliant you are. But you don't know. And so you can't take that advice as actual data.
02:41:55.440 | Yeah. You have to, and this is kind of hard to describe and fuzzy, but I'm a firm believer that
02:42:04.000 | you have to build up your own intuition. So over time, you have to take your own risks that seem
02:42:09.440 | like they make sense to you, and then learn from that, and build up so that you can trust your own
02:42:14.960 | gut about what's a good idea, even when... And then sometimes you'll make mistakes and it'll
02:42:18.800 | turn out to be a dead end, and that's fine. That's science. But what I tell my students is
02:42:24.480 | life is hard, and science is hard, and you're going to sweat and bleed and everything. And you
02:42:30.320 | should be doing that for ideas that really fire you up inside. And really don't let the common
02:42:41.600 | denominator of standardized approaches to things slow you down.
02:42:46.240 | - So you mentioned planaria being in some sense immortal. What's the role of death in life?
02:42:53.120 | What's the role of death in this whole process we have? Is it, when you look at biological systems,
02:42:58.880 | is death an important feature, especially as you climb up the hierarchy of competency?
02:43:08.240 | - Boy, that's an interesting question. I think that it's certainly a factor that promotes
02:43:17.360 | change and turnover and an opportunity to do something different the next time
02:43:22.880 | for a larger scale system. So apoptosis, it's really interesting. I mean, death is really
02:43:29.040 | interesting in a number of ways. One is you could think about what was the first thing to die?
02:43:33.440 | That's an interesting question. What was the first creature that you could say actually died?
02:43:38.000 | It's a tough thing because we don't have a great definition for it. So if you bring a
02:43:44.080 | cabbage home and you put it in your fridge, at what point are you going to say it's died?
02:43:49.600 | So it's kind of hard to know. There's one paper in which I talk about this idea that, I mean,
02:43:59.840 | think about this and imagine that you have a creature that's aquatic, let's say it's a frog
02:44:07.680 | or something or a tadpole, and the animal dies in the pond, it dies for whatever reason. Most of the
02:44:15.280 | cells are still alive. So you could imagine that if when it died, there was some sort of breakdown
02:44:20.640 | of the connectivity between the cells, a bunch of cells crawled off, they could have a life as
02:44:27.040 | amoebas. Some of them could join together and become a xenobot and toodle around, right? So
02:44:32.640 | we know from planaria that there are cells that don't obey the Hayflick limit and just sort of
02:44:36.240 | live forever. So you could imagine an organism that when the organism dies, it doesn't disappear,
02:44:41.360 | rather the individual cells that are still alive crawl off and have a completely different kind of
02:44:45.840 | lifestyle and maybe come back together as something else or maybe they don't. So all of this, I'm sure
02:44:50.240 | is happening somewhere on some planet. So death in any case, I mean, we already kind of knew this
02:44:57.360 | because the molecules, we know that when something dies, the molecules go through the ecosystem, but
02:45:02.160 | even the cells don't necessarily die at that point. They might have another life in a different way.
02:45:07.920 | And you can think about something like HeLa, right? The HeLa cell line, that's had this incredible
02:45:13.280 | life. There are way more HeLa cells now than there were when she was alive.
02:45:18.400 | - It seems like as the organisms become more and more complex, like if you look at the mammals,
02:45:22.560 | their relationship with death becomes more and more complex. So the survival imperative
02:45:29.760 | starts becoming interesting and humans are arguably the first species that have invented
02:45:37.360 | the fear of death, the understanding that you're going to die, let's put it this way.
02:45:42.400 | So not like instinctual, like I need to run away from the thing that's gonna eat me,
02:45:50.000 | but starting to contemplate the finiteness of life.
02:45:54.480 | I mean, so one thing about the human cognitive light cone is that, as far as we
02:46:01.200 | know, for the first time, you might have goals that are longer than your lifespan,
02:46:05.280 | that are not achievable, right? So if you were, let's say, and I don't know if this is true,
02:46:09.280 | but if you're a goldfish and you have a 10 minute attention span, I'm not sure if that's true,
02:46:13.520 | but let's say there's some organism with a short kind of cognitive light cone that way,
02:46:18.000 | all of your goals are potentially achievable because you're probably going to live the next
02:46:22.720 | 10 minutes. So whatever goals you have, they are totally achievable. If you're a human,
02:46:26.560 | you could have all kinds of goals that are guaranteed not achievable because they just
02:46:29.920 | take too long, like guaranteed you're not going to achieve them. So I wonder if, you know,
02:46:33.440 | is that a perennial, you know, sort of thorn in our psychology that drives some psychoses or
02:46:40.480 | whatever? I have no idea. Another interesting thing about that, actually, and I've been
02:46:44.640 | thinking about this a lot in the last couple of weeks, this notion of giving up. So you would
02:46:50.320 | think that evolutionarily the most adaptive way of being is that you go, you fight as long as you
02:47:01.120 | physically can. And then when you can't, you can't. And there's this photograph, there's videos you
02:47:05.600 | can find of insects crawling around where like, you know, like most of it is already gone and
02:47:09.760 | it's still sort of crawling, you know, like Terminator style, right? Like as far as, as
02:47:14.800 | long as you physically can, you keep going. Mammals don't do that. So a lot of mammals,
02:47:19.440 | including rats, have this thing where when they think it's a hopeless situation,
02:47:25.120 | they literally give up and die when physically they could have kept going. I mean, humans certainly do
02:47:29.200 | this. And there's some like really unpleasant experiments that this guy, I forget his name,
02:47:33.600 | did with drowning rats where rats normally drown after a couple of minutes. But if you teach them
02:47:39.440 | that, if you just tread water for a couple of minutes, you'll get rescued. They can tread
02:47:42.800 | water for like an hour. And so, right, and so they literally just give up and die. And so
02:47:46.720 | evolutionarily, that doesn't seem like a good strategy at all. Evolutionarily, it seems like
02:47:51.680 | what's the benefit ever of giving up? You just do what you can and one time out of a thousand,
02:47:55.360 | you'll actually get rescued, right? But this issue of actually giving up suggests some very
02:48:00.800 | interesting metacognitive controls where you've now gotten to the point where survival actually
02:48:06.320 | isn't the top drive. And that for whatever, you know, there are other considerations that have
02:48:10.640 | like taken over. And I think that's uniquely a mammalian thing, but I don't know.
02:48:14.640 | - Yeah, the Camus, the existentialist question of why live, just the fact that humans commit
02:48:23.120 | suicide is a really fascinating question from an evolutionary perspective.
02:48:27.600 | - And what was the first, and that's the other thing, like what is the simplest
02:48:31.280 | system, whether evolved or natural or whatever, that is able to do that, right? Like you can
02:48:38.800 | think, you know, what other animals are actually able to do that? I'm not sure.
02:48:41.680 | - Maybe you could see animals over time, for some reason, lowering the value of
02:48:49.280 | survive at all costs gradually until other objectives might become more important.
02:48:55.440 | - Maybe, I don't know how evolutionarily how that gets off the ground. That just seems like that
02:48:59.760 | would have such a strong pressure against it, you know? Just imagine a population with a lower,
02:49:09.680 | you know, if you were a mutant in a population that had less of a survival imperative,
02:49:16.240 | would your genes outperform the others? It seems not.
02:49:19.520 | - Is there such a thing as population selection? Because maybe suicide is a way
02:49:23.680 | for organisms to decide for themselves that they're not fit for the environment somehow.
02:49:31.600 | - Yeah, that's a really, you know, population level selection is a kind of a deep controversial
02:49:38.080 | area, but it's tough because on the face of it, if that was your genome, it wouldn't get
02:49:44.160 | propagated because you would die and then your neighbor who didn't have that would have all the
02:49:48.160 | kids. - It feels like there could be some deep truth there that we're not understanding.
02:49:52.720 | What about you yourself as one biological system? Are you afraid of death?
02:49:58.000 | - To be honest, I'm more concerned with, especially now getting older and having helped a couple of
02:50:05.920 | people pass, I think about what's a good way to go, basically. Like nowadays, I don't know what
02:50:15.280 | that is. You know, sitting in a facility that sort of tries to stretch you out as long as you can,
02:50:22.560 | that doesn't seem good. And there's not a lot of opportunities to sort of, I don't know,
02:50:27.600 | sacrifice yourself for something useful, right? There's not terribly many opportunities for that
02:50:31.520 | in modern society. So I don't know. I'm not particularly worried about death itself, but
02:50:37.920 | I've seen it happen and it's not pretty. And I don't know what a better alternative is.
02:50:47.840 | - So the existential aspect of it does not worry you deeply, the fact that this ride ends?
02:50:55.520 | - No, it began, I mean, the ride began, right? So there was, I don't know how many
02:51:00.800 | billions of years before that I wasn't around, so that's okay.
02:51:04.160 | - But isn't the experience of life, it almost feels like you're immortal?
02:51:09.920 | Because the way you make plans, the way you think about the future, I mean, if you look at your own
02:51:16.160 | personal rich experience, yes, you can understand, okay, eventually I died. There's people I love
02:51:23.120 | that have died, so surely I will die and it hurts and so on. But like, it's so easy to get lost in
02:51:32.000 | feeling like this is gonna go on forever. - Yeah, it's a little bit like the people who
02:51:35.680 | say they don't believe in free will, right? I mean, you can say that, but when you go to
02:51:40.080 | a restaurant, you still have to pick a soup and stuff. So I don't know, I've actually
02:51:45.120 | seen that happen at lunch with a well-known philosopher and he didn't believe in free will
02:51:50.160 | and the waitress came around and he was like, "Well, let me see." I was like, "What are you
02:51:53.680 | doing here? You're gonna choose a sandwich?" So I think it's one of those things. I think you can
02:51:59.520 | know that you're not gonna live forever, but it's not practical to live that way unless, so you buy
02:52:07.120 | insurance and then you do some stuff like that. But mostly, I think you just live as if you can
02:52:14.400 | make plans. - We talked about all kinds of life, we talked about all kinds of embodied minds.
02:52:22.240 | What do you think is the meaning of it all? What's the meaning of all the biological lives
02:52:27.840 | we've been talking about here on Earth? Why are we here? - I don't know that that's a well-posed
02:52:36.000 | question other than the existential question you posed before. - Is that question hanging out with
02:52:42.320 | the question of what is consciousness and they're at a retreat somewhere? - Not sure because- -
02:52:49.280 | Sipping pina coladas because they're ambiguously defined. - Maybe. I'm not sure that any of these
02:52:57.120 | things really ride on the correctness of our scientific understanding, but I mean, just for
02:53:02.960 | an example, I've always found it weird that people get really worked up to find out realities about
02:53:16.160 | their bodies. For example, have you seen Ex Machina? You've seen that? And so there's this
02:53:22.480 | great scene where he's cutting his hand to find out if he's full of cogs. Now, to me, if I open
02:53:28.720 | up and I find out if I'm a bunch of cogs, my conclusion is not, "Oh crap, I must not have true
02:53:33.920 | cognition. That sucks." My conclusion is, "Wow, cogs can have true cognition. Great." So it seems
02:53:41.520 | to me, I guess I'm with Descartes on this one, that whatever the truth ends up being of what is
02:53:48.320 | consciousness, how it can be conscious, none of that is going to alter my primary experience,
02:53:53.280 | which is this is what it is. And if a bunch of molecular networks can do it, fantastic. If it
02:53:58.080 | turns out that there's a non-corporeal soul, great, we'll study that, whatever. But the fundamental
02:54:06.240 | existential aspect of it is, if somebody told me today that, "Yeah, you were created yesterday,
02:54:13.280 | and all your memories are fake, kind of like Boltzmann brains, right? And Hume's skepticism,
02:54:20.400 | all that." Yeah, okay. But here I am now. So let's... - The experience is primal. So
02:54:28.720 | that's the thing that matters. So the backstory doesn't matter. The explanation...
02:54:35.200 | - I think so. From a first-person perspective. Now, scientifically, it's all very interesting.
02:54:39.360 | From a third-person perspective, I could say, "Wow, that's amazing that this happens, and how
02:54:44.320 | does it happen?" And whatever. But from a first-person perspective, I could care less.
02:54:49.840 | What I've learned from any of these scientific facts is, "Okay, well, I guess then that's what
02:54:56.400 | is sufficient to give me my amazing first-person perspective."
02:55:00.400 | - Well, I think if you dig deeper and deeper and get surprising answers to
02:55:06.480 | why the hell we're here, it might give you some guidance on how to live.
02:55:12.960 | - Maybe, maybe. I don't know. That would be nice. On the one hand, you might be right,
02:55:20.160 | because I don't know what else could possibly give you that guidance, right? So
02:55:24.640 | you would think that it would have to be that, or it would have to be science because there isn't
02:55:28.000 | anything else. So maybe. On the other hand, I am really not sure how you go from any,
02:55:35.920 | you know, what they call from an is to an ought, right? From any factual description of what's
02:55:39.920 | going on. This goes back to the natural, right? Just because somebody says, "Oh, man, that's
02:55:44.240 | completely not natural. That's never happened on Earth before." I'm not impressed by that whatsoever.
02:55:49.520 | I think whatever has or hasn't happened, we are now in a position to do better if we can, right?
02:55:56.400 | - Well, that's also good because you said there's science and there's nothing else.
02:56:03.600 | It's really tricky to know how to intellectually deal with a thing that science doesn't currently
02:56:12.000 | understand, right? So like, the thing is, if you believe that science solves everything,
02:56:20.800 | you can too easily in your mind think our current understanding, like we've solved everything.
02:56:30.640 | - Right, right.
02:56:31.360 | - Like it jumps really quickly to not science as a mechanism, as a process,
02:56:38.320 | but more like the science of today. Like you could just look at human history and throughout
02:56:43.920 | human history, just physicists and everybody would claim we've solved everything.
02:56:49.440 | - Sure, sure, sure, sure.
02:56:50.480 | - Like there's a few small things to figure out and we basically solved everything.
02:56:55.440 | Where in reality, I think asking like, "What is the meaning of life?" is resetting the palette
02:57:01.920 | of like, we might be tiny and confused and don't have anything figured out. It's almost
02:57:10.240 | going to be hilarious a few centuries from now when they look back at how dumb we were.
02:57:15.360 | - Yeah, I 100% agree. So when I say science and nothing else, I certainly don't mean the
02:57:22.960 | science of today because I think overall, I think we know very little. I think most of the things
02:57:29.520 | that we're sure of now are going to be, as you said, are going to look hilarious down the line.
02:57:33.920 | So I think we're just at the beginning of a lot of really important things.
02:57:37.680 | When I say nothing but science, I also include the kind of first person, what I call science,
02:57:45.280 | that you do. So the interesting thing about, I think, about consciousness and studying consciousness
02:57:50.240 | and things like that in the first person is unlike doing science in the third person, where you as
02:57:55.840 | the scientist are minimally changed by it, maybe not at all. So when I do an experiment, I'm still
02:58:00.080 | me. There's the experiment, whatever I've done, I've learned something. So that's a small change,
02:58:03.120 | but overall, that's it. In order to really study consciousness, you are part of the experiment.
02:58:10.240 | You will be altered by that experiment, right? Whatever it is that you're doing, whether it's
02:58:14.400 | some sort of contemplative practice or some sort of psychoactive, whatever, you are now your own
02:58:23.280 | experiment and you are right in it. So I fold that in. I think that's part of it. I think that exploring
02:58:29.120 | our own mind and our own consciousness is very important. I think much of it is not captured by
02:58:33.760 | what currently is third person science, for sure. But ultimately, I include all of that in science
02:58:40.640 | with a capital S in terms of a rational investigation of both first and third person
02:58:47.520 | aspects of our world. - We are our own experiment,
02:58:52.720 | as beautifully put. And when two systems get to interact with each other, that's the kind
02:58:59.360 | of experiment. So I'm deeply honored that you would do this experiment with me today.
02:59:04.800 | - Oh, thanks so much. Thanks for having me. - Michael, I'm a huge fan of your work.
02:59:06.960 | - Likewise. - Thank you for doing everything
02:59:08.560 | you're doing. I can't wait to see the kind of incredible things you build. So thank you for
02:59:14.960 | talking today. - Really appreciate being here. Thank you.
02:59:16.880 | - Thank you for listening to this conversation with Michael Levin. To support this podcast,
02:59:21.840 | please check out our sponsors in the description. And now let me leave you with some words from
02:59:26.720 | Charles Darwin in "The Origin of Species." "From the war of nature, from famine and death,
02:59:35.520 | the most exalted object which we're capable of conceiving, namely the production of the higher
02:59:41.360 | animals, directly follows. There's grandeur in this view of life, with its several powers,
02:59:48.560 | having been originally breathed into a few forms or into one, and that whilst this planet has gone
02:59:56.160 | cycling on according to the fixed law of gravity, from so simple a beginning, endless forms,
03:00:02.640 | most beautiful and most wonderful have been and are being evolved."
03:00:07.840 | Thank you for listening. I hope to see you next time.
03:00:12.400 | (upbeat music)