Lee Cronin: Controversial Nature Paper on Evolution of Life and Universe | Lex Fridman Podcast #404
Chapters
0:00 Introduction
1:15 Assembly theory paper
21:45 Assembly equation
34:57 Discovering alien life
53:16 Evolution of life on Earth
61:12 Response to criticism
78:50 Kolmogorov complexity
90:40 Nature review process
111:34 Time and free will
117:59 Communication with aliens
139:57 Cellular automata
144:26 AGI
161:15 Nuclear weapons
167:00 Chem Machina
179:54 GPT for electron density
189:24 God
00:00:00.000 |
Every star in the sky probably has planets, and life is probably emerging on these planets. 00:00:07.760 |
But I think the combinatorial space associated with these planets is so different. Our causal cones 00:00:13.600 |
are never going to overlap, or not easily. And this is the thing that makes me sad about alien 00:00:18.400 |
life, which is why we have to create alien life in the lab as quickly as possible, because I don't 00:00:24.640 |
know if we are going to be able to build architectures that will intersect with 00:00:32.800 |
alien intelligence architectures. And intersect, you don't mean in time or space? Time and the 00:00:38.800 |
ability to communicate. The ability to communicate. Yeah, my biggest fear in a way is that life is 00:00:44.080 |
everywhere, but we become infinitely more lonely because of our scaffolding in that combinatorial 00:00:49.120 |
space. - The following is a conversation with Lee Cronin, his third time on this podcast. He is a 00:00:56.960 |
chemist from the University of Glasgow, who is one of the most fascinating, brilliant, and fun to talk 00:01:02.640 |
to scientists I've ever had the pleasure of getting to know. This is the Lex Fridman Podcast. To support 00:01:08.960 |
it, please check out our sponsors in the description. And now, dear friends, here's Lee Cronin. 00:01:15.840 |
- So your big assembly theory paper was published in Nature. Congratulations. 00:01:23.520 |
It caused, it's fair to say, a lot of controversy, but also a lot of interesting discussion. So maybe I can try to 00:01:29.920 |
summarize assembly theory, and you tell me if I'm wrong. - Go for it. 00:01:32.880 |
- So assembly theory says that if we look at any object in the universe, any object, 00:01:39.440 |
that we can quantify how complex it is by trying to find the number of steps it took to create it, 00:01:46.160 |
and also we can determine if it was built by a process akin to evolution by looking at how many 00:01:52.880 |
copies of the object there are. - Yeah, that's spot on. 00:01:56.080 |
- Spot on. I was not expecting that. Okay, so let's go through definitions. 00:02:01.440 |
So there's a central equation I'd love to talk about, but definition-wise, what is an object? 00:02:09.280 |
- Yeah, an object, so if I'm going to try to be as meticulous as possible, 00:02:16.640 |
objects need to be finite and they need to be decomposable into subunits. All human-made 00:02:26.960 |
artifacts are objects. Is a planet an object? Probably yes, if you scale out. So an object is 00:02:34.480 |
finite and countable and decomposable, I suppose mathematically. But yeah, I still wake up some 00:02:41.760 |
days and think to myself, what is an object? Because it's a non-trivial question. 00:02:50.400 |
- Persists over time. I'm quoting from the paper here. An object that's finite is distinguishable. 00:02:57.760 |
So that's a weird adjective, distinguishable. - We've had so many people helpfully offering to 00:03:05.760 |
rewrite the paper after it came out, you wouldn't believe it. It's so funny. 00:03:08.800 |
- Persists over time and is breakable such that the set of constraints to construct it 00:03:14.880 |
from elementary building blocks is quantifiable. Such that the set of constraints to construct it 00:03:21.280 |
from elementary building blocks is quantifiable. - The history is in the objects. It's kind of 00:03:27.680 |
cool, right? - So what defines the object is its history 00:03:32.800 |
or memory, whichever is the sexier word. - I'm happy with both depending on the day. 00:03:38.160 |
- Okay. So the set of steps it took to create the object. So there's a sense 00:03:43.360 |
in which every object in the universe has a history. And that is part of the thing that 00:03:52.160 |
is used to describe its complexity. How complicated it is. Okay. What is an assembly index? 00:03:58.960 |
- So the assembly index, if you were to take the object apart and be super lazy about it or minimal, 00:04:07.920 |
it's like you've got a really short-term memory. So what you do is you lay all the parts on the 00:04:13.440 |
path and you find the minimum number of steps you take on the path to add the parts together 00:04:21.680 |
to reproduce the object. And that minimum number is the assembly index. It's a minimum bound. 00:04:27.760 |
And it was always my intuition the minimum bound in assembly theory was really important. I only 00:04:31.520 |
worked out why a few weeks ago, which was kind of funny because I was just like, no, this is 00:04:34.880 |
sacrosanct. I don't know why. It will come to me one day. And then when I was pushed by a bunch of 00:04:38.960 |
mathematicians, we came up with the correct physical explanation, which I can get to, 00:04:47.280 |
but it's the minimum and it's really important. It's the minimum. And the reason I knew the 00:04:51.840 |
minimum was right is because we could measure it. So almost before this paper came out, 00:04:55.760 |
we'd published papers explaining how you can measure the assembly index of molecules. 00:05:00.480 |
- Okay. So that's not so trivial to figure out. So when you look at an object, we can say a 00:05:06.560 |
molecule, we can say object more generally to figure out the minimum number of steps it takes 00:05:12.240 |
to create that object. That doesn't seem like a trivial thing to do. 00:05:16.720 |
- So with molecules, it's not trivial, but it is possible because what you can do, 00:05:23.360 |
and because I'm a chemist, I'm kind of like, I see the world through the lens of chemistry. 00:05:27.920 |
I break the molecule apart and break bonds. And if you break up, if you take a molecule and you 00:05:35.280 |
break it all apart, you have a bunch of atoms. And then you can say, okay, I'm going to then 00:05:40.320 |
form bond, take the atoms and form bonds and go up the chain of events to make the molecule. 00:05:46.240 |
And that's what made me realize, take a toy example, literally toy example, 00:05:49.840 |
take a Lego object, which is made up of Lego blocks. So you could do exactly the same thing. 00:05:55.120 |
In this case, the Lego blocks are naturally the smallest. They're the atoms in the actual composite 00:06:02.320 |
Lego architecture. But then if you maybe take a couple of blocks and put them together in a 00:06:09.360 |
certain way, maybe they're offset in some way, that offset is in the memory. You can use that 00:06:14.800 |
offset again with only a penalty of one, and you can then make a square triangle and keep going. 00:06:19.520 |
And you remember those motifs on the chain. So you can then leap from the start with all the 00:06:26.640 |
Lego blocks or atoms just laid out in front of you and say, right, I'll take you, you, you, 00:06:30.880 |
connect and do the least amount of work. So it's really like the smallest steps you can take on 00:06:38.480 |
the graph to make the object. And so for molecules, it came relatively intuitively. 00:06:43.040 |
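To make the search Lee is describing concrete, here is a minimal sketch that computes a toy assembly index for a short string by brute force: single characters are the free building blocks, joining is plain concatenation, and any fragment made on the path can be reused. The function name and the pruning rule (only keeping fragments that appear in the target) are illustrative assumptions, not the published algorithm for molecules.

```python
def assembly_index(target: str) -> int:
    """Toy assembly index of a string: minimum number of joining steps needed
    to build `target`, starting from single characters and reusing any
    fragment already constructed on the path (exhaustive search)."""
    best = [max(len(target) - 1, 0)]  # upper bound: append one character at a time

    def search(built: frozenset, steps: int) -> None:
        if steps >= best[0]:
            return                    # prune: no better than the best path found so far
        if target in built:
            best[0] = steps           # found a shorter assembly pathway
            return
        for a in built:               # try joining any two fragments already on the path
            for b in built:
                new = a + b
                if new not in built and new in target:  # keep only fragments the target can reuse
                    search(built | {new}, steps + 1)

    search(frozenset(target), 0)      # the single characters come for free
    return best[0]

# "abcabc": build "ab", then "abc", then reuse "abc" against itself -> 3 steps, not 5
print(assembly_index("abcabc"))
```

The reuse of the already-built "abc" motif at a cost of only one extra step is the same point as the Lego offset above: once a motif is on the path, it is remembered and cheap to use again.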
And then we started to apply it to language. We've even started to apply it to mathematical 00:06:47.280 |
theorems. I'm so well out of my depth, but it looks like you can take minimum set of axioms 00:06:51.920 |
and then start to build up mathematical architectures in the same way. And then the 00:06:57.920 |
shortest path to get there is something interesting that I don't yet understand. 00:07:01.280 |
- So what's the computational complexity of figuring out the shortest path 00:07:05.840 |
with molecules, with language, with mathematical theorems? It seems like once you have the fully 00:07:14.080 |
constructed Lego castle, or whatever your favorite Lego world is, figuring out how to get there from 00:07:22.640 |
the basic building blocks, is that an NP hard problem? It's a hard problem. 00:07:28.480 |
- It's a hard problem, but actually, if you look at it, so the best way to look at it, 00:07:32.320 |
let's take a molecule. So if the molecule has 13 bonds, first of all, take 13 copies of the 00:07:37.360 |
molecule and just cut all the bonds. So you take, cut 12 bonds, and then you just put them in order. 00:07:41.360 |
And then that's how it works. So, and you keep looking for symmetry or copies, so you can then 00:07:49.920 |
shorten it as you go down. And that becomes combinatorially quite hard. 00:07:55.040 |
For some natural product molecules, it becomes very hard. It's not impossible, 00:08:00.080 |
but we're looking at the bounds on that at the moment. But as the object gets bigger, 00:08:04.880 |
it becomes really hard. But that's the bad news, but the good news is there are shortcuts. 00:08:10.160 |
And we might even be able to physically measure the complexity without computationally calculating 00:08:16.560 |
it, which is kind of insane. - Wait, how would you do that? 00:08:19.680 |
- Well, in the case of molecule, so if you shine light on a molecule, let's take an infrared, 00:08:25.520 |
the molecule has each of the bonds absorbs the infrared differently in what we call the 00:08:31.680 |
fingerprint region. And so it's a bit like, and because it's quantized as well, you have all these 00:08:38.080 |
discrete kind of absorbances. And my intuition after we realized we could cut molecules up in 00:08:44.080 |
mass spec, that was the first go at this. We did it using infrared, and the infrared gave us an 00:08:49.440 |
even better correlation with assembly index. And we used another technique as well, in addition to 00:08:54.720 |
infrared called NMR, nuclear magnetic resonance, which tells you about the number of different 00:08:58.960 |
magnetic environments in a molecule. And that also worked out. So we have three techniques, 00:09:04.000 |
which each of them independently gives us the same or tending towards the same assembly index 00:09:10.000 |
for molecule that we can calculate mathematically. - Okay, so these are all methods of mass 00:09:15.120 |
spectrometry, mass spec, you scan a molecule, it gives you data in the form of a mass spectrum, 00:09:21.840 |
and you're saying that the data correlates to the assembly index. 00:09:27.760 |
So is this a shortcut, first of all, to chemistry, and second of all, beyond that? 'Cause that seems like a 00:09:32.800 |
nice hack and you're extremely knowledgeable about various aspects of chemistry, so you can say, 00:09:39.840 |
"Okay, it kind of correlates." But the whole idea behind assembly theory paper, and perhaps why it's 00:09:47.280 |
so controversial, is that it reaches bigger. It reaches for the bigger general theory of objects 00:09:57.040 |
in the universe. - Yeah, I'd say so, I'd agree. 00:09:59.360 |
So I've started assembly theory of emoticons with my lab, believe it or not. So we take emojis, 00:10:06.240 |
pixelate them, and work out the assembly index of the emoji, and then work out how many emojis 00:10:11.600 |
you can make on the path of emojis. So there's the uber emoji from which all other emojis emerge, 00:10:18.080 |
and then you can, so you can then take a photograph, and by looking at the shortest path, 00:10:22.480 |
by reproducing the pixels to make the image you want, you can measure that. So then you start to 00:10:29.600 |
be able to take spatial data. Now there's some problems there. What is then the definition of 00:10:35.120 |
the object? How many pixels? How do you break it down? And so we're just learning all this right 00:10:41.360 |
now. - So how do you compute the, how would you begin to compute the assembly index of a graphical, 00:10:50.320 |
like, a set of pixels on a 2D plane that form a thing? - So you would, first of all, determine the 00:10:57.200 |
resolution. So then what is your x, y, and what number on the x and y plane? And then look at the 00:11:03.440 |
surface area, and then you take all your emojis and make sure they're all looked at the same 00:11:07.280 |
resolution. - Yes. - And then we would basically then do the exactly the same thing we would do for 00:11:13.840 |
cutting the bonds. You'd cut bits out of the emoji, and look at the, you'd have a bag of pixels, 00:11:20.320 |
so, and you would then add those pixels together to make the overall emoji. - Wait a minute, 00:11:26.800 |
but, like, first of all, not every pixel's, I mean, this is at the core, sort of, machine learning, 00:11:33.760 |
computer vision, not every pixel is that important, and there's, like, macro features, there's micro 00:11:38.800 |
features, and all that kind of stuff. - Exactly. - Like, you know, the eyes appear in a lot of them, 00:11:43.920 |
the smile appears in a lot of them. - So in the same way in chemistry we assume the bond is 00:11:49.680 |
fundamental, what we do here is we assume the resolution at the scale at which we do it is 00:11:54.800 |
fundamental, and we're just working that out, and that, you're right, that will change, right? Because 00:11:58.400 |
as you take your lens out a bit, you, it will change dramatically, but it, but it's just a new way of 00:12:04.480 |
looking at not just compression, what we do right now in computer science and data, one big, kind of, 00:12:10.400 |
kind of, misunderstanding is assembly theory is telling you about how compressed the object is, 00:12:18.560 |
that's not right, it's about how much information is required on a chain of events, because the nice 00:12:26.240 |
thing is if, when you do compression in computer science, we're wandering a bit here, but it's, 00:12:30.640 |
kind of, worth wandering, I think, and you, you assume you have instantaneous access to all the 00:12:36.000 |
information in the memory. - Yeah. - In assembly theory, you say, no, you don't get access to that 00:12:40.000 |
memory until you've done the work, and then when you've done the work, you can have access to that 00:12:43.360 |
memory, but not to the next one, and this is how, in assembly theory, we talk about the four 00:12:48.000 |
universes, the assembly universe, the assembly possible, and the assembly contingent, and then 00:12:53.360 |
the assembly observed, and they're all, all scales in this combinatorial universe. - Yeah, can you 00:12:59.440 |
explain each one of them? - Yep, so the assembly universe is like anything goes, just, it's just 00:13:04.640 |
combinatorial, kind of, explosion in everything. - So that's the biggest one? - That's the biggest 00:13:08.640 |
one, it's massive. - Assembly universe, assembly possible, assembly contingent, assembly observed, 00:13:16.160 |
and on the y-axis is assembly steps in time, and, you know, in the x-axis, as the thing 00:13:24.400 |
expands through time, more and more unique objects appear. - So, yeah, so assembly universe, everything 00:13:31.600 |
goes. - Yep. - Assembly possible, laws of physics come in, in this case, in chemistry bonds. In 00:13:37.360 |
assembly, so that means... - Those are actually constraints, I guess? - Yes, and they're the only 00:13:41.680 |
constraints, they're the constraints of the base, so the way to look at it, you've got all your atoms, 00:13:45.440 |
they're quantized, you can just bung them together, so then you can become a, kind of, so, in the way, 00:13:50.160 |
in computer science speak, I suppose, the assembly universe is just, like, no laws of physics, 00:13:54.240 |
things can fly through mountains, beyond the speed of light. In the assembly possible, you have to 00:14:00.880 |
apply the laws of physics, but you can get access to all the motifs instantaneously, with no effort, 00:14:07.680 |
so that means you could make anything. Then the assembly contingent says, no, you can't have access 00:14:13.680 |
to the highly assembled object in the future, until you've done the work in the past, on the 00:14:17.360 |
causal chain, and that's really, the really interesting shift, where you go from assembly 00:14:23.520 |
possible, to assembly contingent. That is really the key thing in assembly theory, that says you 00:14:30.320 |
cannot just have instantaneous access to all those memories, you have to have done the work somehow, 00:14:36.160 |
the universe has to have somehow built a system that allows you to select that path, 00:14:43.200 |
rather than other paths, and then the final thing, the assembly observed, is basically us saying, oh, 00:14:50.080 |
these are the things we actually see, we can go backwards now, and understand that they have been 00:14:56.640 |
created by this causal process. - Wait a minute, so when you say the universe has to construct the 00:15:01.840 |
system, that does the work, is that like the environment that allows for like selection? 00:15:08.240 |
- Yeah, yeah, yeah. - That's the thing that does the selection? 00:15:10.560 |
- You could think about it in terms of a von Neumann constructor, first it's a selection, a ribosome, 00:15:14.960 |
Tesla plant, assembling Teslas, you know, the difference between the assembly universe, 00:15:22.080 |
in Tesla land, and the Tesla factory is, everyone says, no, Teslas are just easy, they just spring 00:15:27.680 |
out, you know how to make them all, in a Tesla factory, you have to put things in sequence, 00:15:31.360 |
and out comes a Tesla. - So you're talking about the factory? 00:15:33.600 |
- Yes, this is really nice, super important point, is that when I talk about the universe having a 00:15:38.880 |
memory, or there's some magic, it's not that, it's that tells you that there must be a process 00:15:45.520 |
encoded somewhere in physical reality, be it a cell, a Tesla factory, or something else that 00:15:52.400 |
is making that object, I'm not saying there's some kind of woo-woo memory in the universe, 00:15:57.440 |
you know, morphic resonance or something, I'm saying that there is an actual causal process 00:16:04.320 |
that is being directed, constrained in some way, so it's not kind of just making everything. 00:16:09.840 |
- Yeah, but Lee, what's the factory that made the factory? 00:16:13.840 |
So what is the, so first of all, you assume the laws of physics have just sprung into existence at 00:16:23.520 |
the beginning, those are constraints, but what makes the factory the environment that does the 00:16:28.480 |
selection? - This is the question of, well, it's the first interesting question that I want to 00:16:33.760 |
answer, out of four, I think the factory emerges in the interplay between the environment and the 00:16:41.440 |
objects that are being built, and here, let me, I'll have a go at explaining to you the shortest 00:16:47.440 |
path. So why is the shortest path important? Imagine you've got, I'm gonna have to go chemistry 00:16:53.360 |
for a moment and then abstract it, so imagine you've got a given environment that you have a 00:17:01.040 |
budget of atoms, you're just flinging together, and the objective of those atoms that are being 00:17:07.280 |
flung together in, say, molecule A, have to make, they decompose, so molecules decompose over time, 00:17:14.800 |
so the molecules in this environment, in this magic environment, have to not die, 00:17:20.240 |
but they do die, there's a, they have a half-life, so the only way the molecules can get through 00:17:25.520 |
that environment out the other side, let's pretend the environment is a box, you can go in and out 00:17:29.040 |
without dying, and there's a, there's just an infinite supply of atoms coming, or a, well, 00:17:34.400 |
a large supply, the molecule gets built, but the molecule that is able to template itself being 00:17:42.560 |
built, and survives in the environment, will basically reign supreme. Now, let's say that 00:17:50.960 |
molecule takes 10 steps, now, and it's using a finite set of atoms, right, or now let's say 00:17:58.320 |
another molecule, smart-ass molecule we'll call it, comes in, and can survive in that environment, 00:18:03.680 |
and can copy itself, but it only needs five steps. The molecule that only needs five steps, 00:18:11.040 |
because it's continue, both molecules are being destroyed, but they're creating themselves 00:18:14.640 |
faster than they can be destroyed, you can see that the shortest path reigns supreme. So, 00:18:20.800 |
the shortest path tells us something super interesting about the minimal amount of 00:18:24.640 |
information required to propagate that motif in time and space, and it's just like a kind of, 00:18:32.480 |
it seems to be like some kind of conservation law. - So, one of the intuitions you have 00:18:38.080 |
is the propagation of motifs in time will be done by the things that can construct themselves 00:18:43.760 |
in the shortest path. - Yeah. - So, like, you can assume that most of the objects in the universe 00:18:50.960 |
are built in the shortest, in the most efficient way. - The, so... - Big loop I just took there. 00:18:58.480 |
- Yeah, no, yes and no, because there are other things. So, in the limit, yes, because you want 00:19:04.080 |
to tell the difference between things that have required a factory to build them and just random 00:19:09.200 |
processes, but you can find instances where the shortest path isn't taken for an individual object, 00:19:18.000 |
individual function, and people go, "Ah, that means the shortest path isn't right." And then I say, 00:19:24.720 |
"Well, I don't know, I think it's right still." Because, so, of course, because there are other 00:19:30.960 |
driving forces, it's not just one molecule. Now, when you start to, now you start to consider two 00:19:35.680 |
objects, you have a joint assembly space, and it's not, now it's a compromise between not just making 00:19:41.440 |
A and B in the shortest path, you want to make A and B in the shortest path, which might mean that 00:19:46.800 |
A is slightly longer, you have a compromise. So, when you see slightly more nesting in the 00:19:52.880 |
construction, when you take a given object, that can look longer, but that's because the overall 00:19:58.320 |
function is the object is still trying to be efficient. And this is still very hand-wavy, 00:20:04.560 |
and maybe having no legs to stand on, but we think we're getting somewhere with that. 00:20:09.520 |
- And there's probably some parallelization, right? So, this is all, this is not sequential, 00:20:14.480 |
the building is, I guess, when you're talking about complex objects, you don't have to 00:20:21.120 |
work sequentially, you can work in parallel, you can get your friends together, and they can... 00:20:24.960 |
- Yeah, and the thing we're working on right now is how to understand these parallel processes. 00:20:31.440 |
Now there's a new thing we've introduced called assembly depth, and assembly depth can be 00:20:38.160 |
lower than the assembly index for a molecule when they're cooperating together, because exactly this 00:20:44.800 |
parallel processing is going on. And my team have been working this out in the last few weeks, 00:20:50.000 |
because we're looking at what compromises does nature need to make when it's making molecules 00:20:54.720 |
in the cell. And I wonder if, you know, I'm maybe like, well, I'm always leaping out of 00:21:01.360 |
my competence, but in economics, I'm just wondering if you could apply this in economic 00:21:06.960 |
processes. It seems like capitalism is very good at finding shortest path, you know, every time, 00:21:11.600 |
but there are ludicrous things that happen because actually the cost function has been minimized. 00:21:15.920 |
And so I keep seeing parallels everywhere where there are complex nested systems, where if you 00:21:21.280 |
give it enough time, and you introduce a bit of heterogeneity, the system readjusts and finds a 00:21:26.640 |
new shortest path. But the shortest path isn't fixed on just one molecule now, it's in the actual 00:21:31.920 |
existence of the object over time. And that object could be a city, it could be a cell, 00:21:37.040 |
it could be a factory, but I think we're going way beyond molecules, and my competence probably 00:21:42.400 |
should go back to molecules, but hey. - All right, before we get too far, 00:21:46.320 |
let's talk about the assembly equation. Okay, how should we do this? Now, let me just even read that 00:21:52.800 |
part of the paper. We define assembly as the total amount of selection necessary to produce an 00:21:59.920 |
ensemble of observed objects, quantified using equation one. The equation basically has A on 00:22:08.080 |
one side, which is the assembly of the ensemble, and then a sum from one to N, where N is the total 00:22:17.920 |
number of unique objects, and then there is a few variables in there that include the assembly index, 00:22:24.640 |
the copy number, which we'll talk about. That's an interesting, I don't remember you talking about 00:22:30.160 |
that. That's an interesting addition, and I think a powerful one. It has to do with what? That you 00:22:36.800 |
can create pretty complex objects randomly, and in order to know that they're not random, 00:22:42.400 |
that there's a factory involved, you need to see a bunch of them. That's the intuition there. It's 00:22:47.760 |
an interesting intuition, and then some normalization. What else is it? - N minus one, 00:22:55.120 |
just to make sure that there's more than one object; one object could be a one-off and random, 00:22:59.360 |
and then you have more than one identical object. That's interesting. - When there's two of a thing. 00:23:05.040 |
- Two of a thing is super important, especially if the assembly index is high. 00:23:09.280 |
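For readers following along with the paper, the equation being paraphrased here can be written out. This is a reconstruction from the description in the conversation, so treat the exact notation as an assumption rather than a quotation:

$$
A \;=\; \sum_{i=1}^{N} e^{a_i}\left(\frac{n_i - 1}{N_T}\right)
$$

where $A$ is the assembly of the ensemble, $N$ is the number of unique objects, $a_i$ is the assembly index of object $i$, $n_i$ is its copy number, and $N_T$ is the total number of objects observed. The $n_i - 1$ term is the point just made: a single one-off object contributes nothing, because one copy of a complex thing could still be a random fluke.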
- So there are several questions here. One, let's talk about selection. 00:23:14.000 |
What is this term selection? What is this term evolution that we're referring to? Which aspect 00:23:19.840 |
of Darwinian evolution are we referring to that's interesting here? - This is probably what the 00:23:28.400 |
paper, we should talk about the paper for a second. What it did is it kind of annoyed, 00:23:32.720 |
we didn't know it. I mean, it got attention, and obviously the angry people were annoyed. 00:23:39.200 |
- There's angry people in the world. That's good. - So what happened is the evolutionary 00:23:42.640 |
biologists got angry. We were not expecting that, because we thought evolutionary biologists would 00:23:46.560 |
be cool. I knew that some, not many, computational complexity people would get angry, because I'd 00:23:52.880 |
kind of been poking them, and maybe I deserved it, but I was trying to poke them in a productive way, 00:23:59.040 |
and then the physicists kind of got grumpy, because the initial conditions tell everything. 00:24:03.680 |
The prebiotic chemists got slightly grumpy, because there's not enough chemistry in there, 00:24:08.000 |
and then finally, when the creationists said it wasn't creationist enough, I was like, 00:24:10.960 |
I've done my job. - Well, you say in the physics, 00:24:14.000 |
they say, because you're basically saying that physics is not enough to tell the story of how life emerged. 00:24:22.560 |
- And then they say that physics is the beginning and the end of the story. 00:24:27.920 |
- Yeah. So what happened is the reason why people put the phone down on the call, 00:24:33.040 |
the paper, if you're reading the paper like a phone call, they got to the abstract, 00:24:37.920 |
and in the abstract-- - The first sentence is pretty-- 00:24:40.560 |
- The first two sentences caused everybody-- - Scientists have grappled with reconciling 00:24:45.200 |
biological evolution with the immutable laws of the universe defined by physics. 00:24:50.720 |
- True, right? There's nothing wrong with that statement, totally true. 00:24:54.080 |
- Yeah. "These laws underpin life's origin, evolution and the development of human culture 00:25:01.040 |
and technology, yet they do not predict the emergence of these phenomena." Wow. 00:25:06.880 |
First of all, we should say the title of the paper, this paper was accepted and published in Nature. 00:25:12.320 |
The title is "Assembly Theory Explains and Quantifies Selection and Evolution." Very 00:25:16.880 |
humble title. And the entirety of the paper, I think, presents interesting ideas but reaches high. 00:25:25.120 |
- I am not, I would do it all again. This paper was actually on the pre-print server 00:25:34.400 |
- Yeah, I think, yeah, I don't regret anything. - You and Frank Sinatra did it your way. 00:25:38.800 |
- What I love about being a scientist is kind of sometimes, because I'm a bit dim, I'm like, 00:25:44.400 |
and I don't understand what people are telling me, I want to get to the point. 00:25:46.880 |
This paper says, "Hey, laws of physics are really cool, the universe is great, 00:25:52.160 |
but they don't really, it's not intuitive that you just run the standard model and get life out." 00:26:00.160 |
I think most physicists might go, "Yeah, there's, you know, it's not just, we can't just go back and 00:26:05.600 |
say that's what happened because physics can't explain the origin of life yet." It doesn't mean 00:26:10.560 |
it won't or can't, okay, just to be clear. Sorry, intelligent designers, we are going to get there. 00:26:15.920 |
Second point, we say that evolution works but we don't know how evolution got going, 00:26:22.720 |
so biological evolution and biological selection. So for me, this seems like a simple continuum. 00:26:28.320 |
So when I mentioned selection and evolution in the title, I think, and in the abstract, 00:26:33.120 |
we should have maybe prefaced that and said non-biological selection and non-biological 00:26:37.840 |
evolution. And then that might have made it even more crystal clear, but I didn't think that biology, 00:26:43.280 |
evolutionary biology, should be so bold to claim ownership of selection and evolution. 00:26:49.040 |
And secondly, a lot of evolutionary biologists seem to dismiss the origin of life question, 00:26:53.200 |
just say it's obvious. And that causes a real problem scientifically because when two different, 00:26:58.800 |
when the physicists are like, "We own the universe, the universe is good, we explain all of 00:27:03.280 |
it, look at us." And even biologists say, "We can explain biology." And the poor chemist in the 00:27:08.560 |
middle going, "But hang on." And this paper kind of says, "Hey, there is an interesting 00:27:16.000 |
disconnect between physics and biology, and that's at the point at which memories get made 00:27:24.000 |
in chemistry through bonds. And hey, let's look at this closely and see if we can quantify it." 00:27:28.960 |
So yeah, I mean, I never expected the paper to kind of get that much interest. And still, 00:27:34.960 |
I mean, it's only been published just over a month ago now. 00:27:37.280 |
- So just to linger on the selection, what is the broader sense of what selection means? 00:27:46.240 |
- Yeah, that's a really good question. So I think for selection, you need, 00:27:53.280 |
so this is where for me, the concept of an object is something that can persist in time and 00:27:58.720 |
not die, but basically can be broken up. So if I was going to kind of bolster the 00:28:04.800 |
definition of an object, so if something can form and persist for a long period of time 00:28:14.720 |
under an existing environment that could destroy other, and I'm going to use anthropomorphic terms, 00:28:22.960 |
I apologise, but weaker objects or less robust, then the environment could have selected that. 00:28:30.560 |
So good chemistry examples, if you took some carbon and you made a chain of carbon atoms, 00:28:36.160 |
whereas if you took some, I don't know, some carbon, nitrogen and oxygen and made chains from 00:28:42.400 |
those, you'd start to get different reactions and rearrangements. So a chain of carbon atoms 00:28:48.560 |
might be more resistant to falling apart under acidic or basic conditions 00:28:55.040 |
versus another set of molecules. So it survives in that environment, so the acid pond, 00:28:59.840 |
the resistant molecule can get through, and then that molecule goes into another environment, 00:29:08.400 |
so that environment now, instead of being an acid pond, is maybe a basic pond, or maybe it's an oxidising pond, 00:29:14.640 |
and so if you've got carbon and it goes in an oxidising pond, maybe the carbon starts to 00:29:18.720 |
oxidise and break apart. So you go through all these kind of obstacle courses, if you like, 00:29:24.080 |
given by reality. So selection is the ability that happens when an object survives in an 00:29:30.240 |
environment for some time, but, and this is the thing that's super subtle, the object has to be 00:29:39.520 |
continually being destroyed and made by process. So it's not just about the object now, it's about 00:29:44.720 |
the process and time that makes it, because a rock could just stand on the mountainside for 00:29:50.160 |
four billion years and nothing happened to it, and that's not necessarily really advanced selection. 00:29:55.600 |
So for selection to get really interesting, you need to have a turnover in time, 00:29:59.520 |
you need to be continually creating objects, producing them, what we call discovery time. 00:30:05.040 |
So there's a discovery time for an object, when that object is discovered, if it's say a molecule 00:30:10.480 |
that can then act on itself, or the chain of events that caused itself to bolster its formation, 00:30:16.080 |
then you go from discovery time to production time, and suddenly you have more of it in the 00:30:21.280 |
universe. So it could be a self-replicating molecule, and the interaction of the molecule 00:30:26.080 |
in the environment, in the warm little pond, or in the sea, or wherever, in the bubble, 00:30:30.640 |
could then start to build a proto-factory, the environment. So really, to answer your question, 00:30:36.160 |
what the factory is, the factory is the environment, but it's not very autonomous, 00:30:41.600 |
it's not very redundant, there's lots of things that could go wrong. 00:30:44.640 |
So once you get high enough up the hierarchy of networks of interactions, 00:30:51.040 |
something needs to happen, that needs to be compressed into a smaller volume and made 00:30:55.200 |
resistant and robust. Because in biology, selection and evolution is robust. You have 00:31:01.200 |
error correction built in, you have really, you know, there's good ways of basically making sure 00:31:06.240 |
propagation goes on. So really the difference between inorganic, abiotic selection and evolution, 00:31:12.880 |
and selection and evolution in biology, is robustness. The ability to kind of propagate over, 00:31:22.400 |
the ability to survive in lots of different environments. Whereas our poor little inorganic 00:31:28.480 |
soul molecule, whatever, just dies in lots of different environments. So there's something 00:31:34.400 |
super special that happens from the inorganic molecule in the environment that kills it, 00:31:40.560 |
to where you've got evolution and cells can survive everywhere. 00:31:43.280 |
- How special is that? How do you know those kinds of evolution factors are everywhere in 00:31:49.440 |
the universe? - I don't, and I'm excited because I think 00:31:54.800 |
selection isn't special at all. I think what is special is the history of the environments on 00:32:02.480 |
Earth that gave rise to the first cell, that now has, you know, has taken all those environments 00:32:09.360 |
and is now more autonomous. And I would like to think that, you know, this paper 00:32:17.120 |
could be very wrong, but I don't think it's very wrong. It's certainly wrong, but it's less wrong 00:32:24.400 |
than some other ideas, I know, right? And if this allows, inspires us to go and look for selection 00:32:29.520 |
in the universe, because we now have an equation where we can say, we can look for selection going 00:32:33.920 |
on and say, "Oh, that's interesting. We seem to have a process that's giving us high copy number 00:32:41.920 |
objects that also are highly complex, but that doesn't look like life as we know it." And we 00:32:47.200 |
use that and say, "Oh, there's a hydrothermal vent. Oh, there's a process going on. There's 00:32:50.640 |
molecular networks," because the assembly equation is not only meant to identify at the higher end 00:32:57.200 |
advanced selection, what you get in biology, I would call super advanced selection. 00:33:03.120 |
And even, I mean, you could use the assembly equation to look for technology and God forbid, 00:33:08.880 |
we could talk about consciousness and abstraction, but let's keep it primitive, 00:33:12.720 |
molecules and biology. So I think the real power of the assembly equation is to say how much 00:33:17.920 |
selection is going on in this space. And there's a really simple thought experiment I could do. 00:33:23.520 |
If you have a little Petri dish, and on that Petri dish, you put some simple food. So the 00:33:28.320 |
assembly index of all the sugars and everything is quite low. And you put in a single E. coli 00:35:35.120 |
cell. And then you say, "I'm going to measure the amount of assembly in the box." 00:35:40.960 |
So it's quite low, but the rate of change of assembly dA/dt will go voom sigmoidal as it 00:33:48.080 |
eats all the food. And the number of E. coli cells will replicate because they take all the food, 00:33:54.320 |
they can copy themselves. The assembly index of all the molecules goes up, up, up and up 00:33:58.160 |
until the food is exhausted in the box. So now the E. coli stop... I mean, 00:34:04.640 |
dying is probably a strong word. They stop respiring because all the food is gone. But 00:34:09.120 |
suddenly the amount of assembly in the box has gone up gigantically because that one E. coli 00:36:14.240 |
factory has just eaten through, milled lots of other E. coli factories, run out of food and 00:34:19.280 |
stopped. And so in the initial box, although the amount of assembly was really small, 00:34:26.400 |
it was able to replicate and use all the food and go up. And that's what we're trying to do 00:34:30.960 |
in the lab actually, is kind of make those kinds of experiments and see if we can spot 00:34:36.080 |
the emergence of molecular networks that are producing complexity as we feed in raw materials 00:34:42.400 |
and we feed a challenge, an environment. We try and kill the molecules. And really, 00:34:48.480 |
that's the main kind of idea for the entire paper. - Yeah, and see if you can measure the changes in 00:34:54.960 |
the assembly index throughout the whole system. Okay, what about if I show up to a new planet, 00:35:00.000 |
we'll go to Mars or some other planet from a different solar system, 00:35:03.200 |
and how do we use assembly index there to discover alien life? - Very simply, actually. Let's say 00:35:13.120 |
we'll go to Mars with a mass spectrometer with a sufficiently high resolution. So what you have to 00:35:17.760 |
be able to do... So a good thing about mass spec is that you can select a molecule from the mass, 00:35:26.640 |
and then if it's high enough resolution, you can be more and more sure that you're just 00:35:30.560 |
seeing identical copies. You can count them. And then you fragment them and you count the number 00:35:35.760 |
of fragments and look at the molecular weight. And the higher the molecular weight and the higher 00:35:40.640 |
the number of the fragments, the higher the assembly index. So if you go to Mars and you 00:35:44.400 |
take a mass spec of high enough resolution, and you can find molecules... I'll give a guide on 00:35:50.240 |
Earth. If you could find molecules, say, greater than 350 molecular weight with more than 15 00:35:56.160 |
fragments, you have found artefacts that can only be produced, at least on Earth, by life. 00:36:03.760 |
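A minimal sketch of that decision rule in Python. The molecular-weight and fragment-count cutoffs are the figures Lee quotes for Earth, the copy-number threshold matches the roughly 10,000 identical molecules he later says a mass-spec peak implies, and the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Peak:
    """One molecular species detected in the mass spectrum."""
    molecular_weight: float  # in Daltons
    n_fragments: int         # distinct fragments on fragmentation (proxy for assembly index)
    copy_number: int         # how many identical copies were detected

def biosignature_candidates(peaks, mw_cutoff=350.0, fragment_cutoff=15, min_copies=10_000):
    """Return peaks that, by the rule quoted above, point to selection: heavy enough,
    complex enough (many fragments), and present in many identical copies."""
    return [p for p in peaks
            if p.molecular_weight > mw_cutoff
            and p.n_fragments > fragment_cutoff
            and p.copy_number >= min_copies]

# Hypothetical readings from a Martian soil sample
sample = [Peak(180.2, 6, 2_000_000),   # small sugar-like molecule: below both cutoffs
          Peak(853.9, 42, 500_000)]    # heavy, complex, abundant: candidate biosignature
print(biosignature_candidates(sample))
```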
Now you would say, "Oh, maybe the geological process." I would argue very vehemently that 00:36:09.360 |
that is not the case. But we can say, "Look, if you don't like the cutoff on Earth, 00:36:14.000 |
go up higher, 30, 100," right? Because there's going to be a point where you'll find a molecule 00:36:19.440 |
with so many different parts, the chances of you getting a molecule that has 100 different parts 00:36:25.600 |
and finding a million identical copies, you know, that's just impossible. That could never happen 00:36:33.440 |
in an infinite set of universes. - Can you just linger on this copy 00:36:38.480 |
number thing? A million different copies. What do you mean by copies and why is the 00:36:46.560 |
number of copies important? - Yeah, that was so interesting. And I 00:36:52.320 |
always understood the copy number is really important, but I never explained it properly 00:36:57.440 |
for ages. And I kept having this... It goes back to this, if I give you a, I don't know, 00:37:07.040 |
a really complicated molecule and I say, "It's complicated," you could say, "Hey, 00:37:09.920 |
that's really complicated, but is it just really random?" And so I realized that ultimate randomness 00:37:16.080 |
and ultimate complexity are indistinguishable until you can see a structure in the randomness. 00:37:24.640 |
So you can see copies. - So copies implies structure. 00:37:30.320 |
- Yeah, the factory. - I mean, there's a deep, 00:37:34.240 |
profound thing in there. 'Cause if you just have a random process, you're going to get a lot of 00:37:43.600 |
complex, beautiful, sophisticated things. What makes them complex in the way we think life is 00:37:51.760 |
complex or, yeah, something like a factory that's operating under a selection process is there should 00:37:59.600 |
be copies. Is there some looseness about copies? What does it mean for two objects to be equal? 00:38:06.080 |
- It's all to do with the telescope or the microscope you're using. And so at the maximum 00:38:12.960 |
resolution... So the nice thing about chemists is they have this concept of the molecule and 00:38:18.400 |
they're all familiar with the molecule. And molecules you can hold in your hand, lots of them, 00:38:24.240 |
identical copies. A molecule is actually a super important thing in chemistry to say, "Look, 00:38:28.800 |
you can have a mole of a molecule, an Avogadro's number of molecules, 00:38:32.480 |
and they're identical." What does that mean? That means that the molecular composition, 00:38:36.400 |
the bonding and so on, the configuration is indistinguishable. You can hold them together, 00:38:41.840 |
you can overlay them. So the way I do it is if I say, "Here's a bag of 10 identical molecules, 00:38:48.080 |
let's prove they're identical." You pick one out of the bag and you basically observe it using some 00:38:53.760 |
technique and then you take it away and then you take another one out. If you observe it using 00:38:58.000 |
the same technique and you can see no differences, they're identical. It's really interesting to get right, 00:39:01.760 |
because if you take, say, two molecules, molecules can be in different vibrational and rotational 00:39:07.680 |
states, they're moving all the time. So with this respect, identical molecules have identical 00:39:11.920 |
bonding. In this case, we don't even talk about chirality because we don't have a chirality 00:39:17.360 |
detector. So two identical molecules in one conception, assembly theory, basically considers 00:39:24.000 |
both hands as being the same. But of course, they're not, they're different. As soon as you 00:39:29.760 |
have a chiral distinguisher to detect the left and the right hand, they become different. 00:39:34.800 |
And so it's to do with the detection system that you have and the resolution. 00:39:39.360 |
- So I wonder if there's an art and science to which detection system is used when you 00:39:49.120 |
- So you're talking about chemistry a lot today. We have kind of standardized detection systems, 00:39:56.000 |
right, of how to compare molecules. So when you start to talk about emojis and language and 00:40:03.520 |
mathematical theorems and, I don't know, more sophisticated things at a different scale, 00:40:10.880 |
at a smaller scale than molecules, at a larger scale than molecules, 00:40:15.360 |
like what detection, like if we look at the difference between you and me, Lex and Lee, 00:40:21.120 |
are we the same? Are we different? - Sure. I mean, of course we're different 00:40:25.680 |
close up, but if you zoom out a little bit, we'll morphologically look the same. 00:40:29.760 |
You know, height and characteristics, hair length, stuff like that. 00:40:35.040 |
- Well, also like the species. - Yeah, yeah, yeah. 00:40:37.840 |
- And also there's a sense why we're both from Earth. 00:40:42.480 |
- Yeah, I agree. I mean, this is the power of assembly theory in that regard. 00:40:45.840 |
So the way to look at it, if you have a box of objects, if they're all indistinguishable 00:40:55.440 |
then using your technique, what you then do is you then look at the assembly index. 00:41:03.040 |
Now, if the assembly index of them is really low, right, and they're all indistinguishable, 00:41:07.920 |
then it's telling you that you have to go to another resolution. 00:41:11.040 |
So that would be, you know, it's kind of a sliding scale. It's kind of nice. 00:41:14.560 |
- Got it. So those two kind of are in tension with each other. 00:41:20.960 |
- That's really, really interesting. So, okay, so you show up to a new planet, 00:41:26.880 |
you'll be doing what? - I would do mass spec. 00:41:29.680 |
- On a sample of what? Like, first of all, like how big of a scoop do you take? 00:41:33.920 |
Do you just take the scoop? Like what? So we're looking for primitive life. 00:41:40.800 |
- I would look, yeah, so if you're just going to Mars or Titan or Enceladus or somewhere, 00:41:48.720 |
so a number of ways of doing it. So you could take a large scoop or you go for the atmosphere 00:41:52.560 |
and detect stuff. So you could make a life meter, right? So one of Sara's colleagues at ASU, 00:42:03.200 |
Paul Davies, keeps calling it a life meter, which is quite a nice idea because you think about it, 00:42:08.800 |
if you've got a living system that's producing these highly complex molecules and they drift 00:42:14.960 |
away and they're in a highly kind of demanding environment, they could be burnt, right? So they 00:42:21.600 |
could just be falling apart. So you want to sniff a little bit of complexity and say warmer, warmer, 00:42:25.920 |
warmer. Oh, we found life. We found the alien. We found the alien Elon Musk smoking a joint 00:42:31.280 |
in the bottom of the cave on Mars or Elon himself, whatever, right? And you say, okay, found it. 00:42:35.840 |
So what you can do is a mass spectrometer, you could just look for things in the gas phase, 00:42:41.120 |
or you go on the surface, drill down because you want to find molecules that are, 00:42:45.680 |
you've either got to find the source living system because the problem with just looking 00:42:52.000 |
for complexity is it gets burnt away. So in a harsh environment, say on the surface of Mars, 00:42:59.440 |
there's a very low probability that you're going to find really complex molecules because 00:43:03.760 |
of all the radiation and so on. If you drill down a little bit, you could drill down a bit 00:43:08.160 |
into soil that's billions of years old. Then I would put in some solvent, water, alcohol or 00:43:15.200 |
something, or take a scoop, make it volatile, put it into the mass spectrometer and just try and 00:43:22.480 |
detect high complexity, high abundant molecules. And if you get them, hey, presto, you can have 00:43:27.920 |
evidence of life. Wouldn't that then be great if you could say, okay, we've found evidence of life. 00:43:33.520 |
Now we want to keep the life meter, keep searching for more and more complexity 00:43:39.200 |
until you actually find living cells. You can get those new living cells and then you could bring 00:43:44.400 |
them back to earth, or you could try and sequence them. You could see that they have different DNA 00:43:49.760 |
- How would you build a life meter? Let's say we're together starting a new company 00:43:55.840 |
launching a life meter. - Mass spectrometer would be the 00:43:58.000 |
first way of doing it. - No, no, no. That's one of the major 00:44:01.760 |
components of it. I'm talking about if it's a device, we've got a branding logo, we're going 00:44:07.200 |
to talk about that later. What's the input? How do you get a metered output? 00:44:15.120 |
- I would take my life meter, our life meter, there you go. 00:44:24.960 |
It would have both infrared and mass spec. It would have two ports so it could shine a light. 00:44:30.320 |
What it would do is you would have a vacuum chamber and you would have an electrostatic 00:44:37.040 |
analyzer and you'd have a monochromator producing infrared. You'd take a scoop of 00:44:44.000 |
the sample, put it in the life meter. It would then add a solvent or heat up the sample so some 00:44:49.840 |
volatiles come off. The volatiles would then be put into the mass spectrometer, into electrostatic 00:44:55.600 |
trap and you'd weigh the molecules and fragment them. Alternatively, you'd shine infrared light 00:45:00.320 |
on them, you'd count the number of bands, but you'd have to, in that case, do some separation 00:45:04.480 |
because you want to separate. In mass spec, it's really nice and convenient because you can 00:45:08.400 |
separate electrostatically, but you need to have that. 00:45:11.760 |
- Can you do it in real time? - Yeah, pretty much. Let's go all the way 00:45:15.520 |
back. Okay, we're really going to get this. Lex's life meter, Lex and Lee's life meter. 00:45:21.520 |
- Excellent. It's a good ring to it. - All right. You have a vacuum chamber, 00:45:28.800 |
you have a little nose. The nose would have a packing material. You would take your sample, 00:45:37.760 |
add it onto the nose, add a solvent or a gas. It would then be sucked up the nose and that would 00:45:42.960 |
be separated using what we call chromatography. Then as each band comes off the nose, we would 00:45:48.080 |
then do mass spec and infrared. In the case of the infrared, count the number of bands. In the case 00:45:54.240 |
of mass spec, count the number of fragments and weigh it. Then the further up in molecular weight 00:45:58.960 |
range for the mass spec and the number of bands, you go up and up and up from the dead, 00:46:02.960 |
interesting, interesting, over the threshold, oh my gosh, earth life. Then right up to the 00:46:09.280 |
batshit crazy, this is definitely alien intelligence that's made this life. You 00:46:14.400 |
could almost go all the way there, same in the infrared. It's pretty simple. The thing that 00:46:19.360 |
is really problematical is that for many years, decades, what people have done, and I can't blame 00:46:27.440 |
them, is they've been obsessing about small biomarkers that we find on earth, amino acids, 00:46:34.000 |
like single amino acids or evidence of small molecules and these things. Looking for those, 00:46:38.880 |
looking for complexity. The beautiful thing about this is you can look for 00:46:43.680 |
complexity without earth chemistry bias or earth biology bias. Assembly theory is just a way of 00:46:52.720 |
saying, hey, complexity and abundance is evidence of selection. That's how our universal life meter 00:46:57.920 |
will work. - Complexity in abundance is evidence of selection. Okay, so let's apply our life meter 00:47:08.720 |
to earth. So if we were just to apply assembly index measurements to earth, what kind of stuff 00:47:21.360 |
are we going to get? What's impressive about some of the complexity on earth? - So we did this a few 00:47:29.120 |
years ago when I was trying to convince NASA and colleagues that this technique could work. Honestly, 00:47:34.480 |
it's so funny because everyone's like, no, it ain't going to work. And it was just like, 00:47:38.960 |
because the chemists were saying, of course there are complicated molecules out there you can detect 00:47:43.120 |
that just form randomly. I was like, really? That was a bit like, I don't know, someone saying, 00:47:52.720 |
of course Darwin's textbook was just written randomly by some monkeys and a typewriter. 00:47:57.600 |
Just for me, it was like, really? And I've pushed a lot on the chemists now, and I think 00:48:03.760 |
most of them are on board, but not totally. I really had some big arguments, but the copy 00:48:08.560 |
number caught there because I think I confused the chemists by saying one off. And then when 00:48:12.560 |
I made clear about the copy number, I think that made it a little bit easier. - Just to clarify, 00:48:16.800 |
chemists might say that, of course, out there outside of earth, there's complex molecules. 00:48:26.880 |
wait a minute, that's like saying, of course, there's aliens out there. - Yeah, exactly that. 00:48:32.160 |
- Okay, but you're saying, you clarify that that's actually a very interesting question, 00:48:39.440 |
and we should be looking for complex molecules of which the copy number is two or greater. 00:48:44.720 |
- Yeah, exactly. So on earth, so coming back to earth, what we did is we took a whole bunch of 00:48:50.000 |
samples, and we were running prebiotic chemistry experiments in the lab. We took various inorganic 00:48:58.000 |
minerals and extracted them, look at the volatile, because there's a special way of treating minerals 00:49:04.000 |
and polymers in assembly theory. In our life machine, we're looking at molecules. We don't 00:49:11.840 |
care about polymers because they're not volatile, you can't hold them. If you can't ascertain that 00:49:18.800 |
they're identical, then it's very difficult for you to work out if they've undergone selection 00:49:24.640 |
or they're just a random mess. Same with some minerals, but we can come back to that. So 00:49:28.960 |
basically what you do, we got a whole load of samples, inorganic ones, we got a load of, 00:49:33.920 |
we got Scotch whiskey, and also took an Ardbeg, which is one of my favorite whiskies, which is 00:49:39.520 |
very peaty. - What does peaty mean? - So the way that in Scotland, in Islay, which is a little 00:49:47.280 |
island, the Scotch, the whiskey is left to mature in barrels, and it's said that the complex molecules 00:50:01.040 |
in the peat might find their way through into the whiskey, and that's what gives it this intense 00:50:06.160 |
brown color and really complex flavor. It's literally molecular complexity that does that. 00:50:13.120 |
So vodka is the complete opposite, it's just pure. - So the better the whiskey, 00:50:17.280 |
the higher the assembly index, or the higher the assembly index, the better the whiskey. 00:50:20.400 |
- That's what I mean, I really love deep, peaty Scottish whiskeys. Near my house there is one of 00:50:27.520 |
the lowland distilleries called Glengoyne, still beautiful whiskey, but not as complex. So for fun, 00:50:33.600 |
I took some Glengoyne whiskey and Ardbeg and put them into the mass spec and measured the assembly 00:50:38.560 |
index. I also got E. coli, so the way we do it, take the E. coli, break the cell apart, 00:50:43.520 |
take it all apart, and also got some beer. And people were ridiculing us, saying, "Oh, 00:50:50.960 |
beer is evidence of complexity." One of the computational complexity people, 00:50:56.880 |
who is very vigorous in his disagreement with assembly theory, was just 00:51:03.760 |
saying, "You don't know what you're doing, even beer is more complicated than a human." 00:51:07.920 |
What we didn't realize is that it's not beer per se, it's taking the yeast extract, 00:51:13.440 |
taking the extract, breaking the cells, extracting the molecules, and just looking at the profile of 00:51:18.800 |
the molecules to see if there's anything over the threshold. And we also put in a really complex 00:51:22.800 |
molecule taxol. So we took all of these, but also NASA gave us, I think, five samples, 00:51:28.400 |
and they wouldn't tell us what they are. They said, "No, we don't believe you can get this to work." 00:51:32.640 |
And they gave us some super complex samples. And they gave us two fossils, one that was a million 00:51:39.680 |
years old, and one was at 10,000 years old, something from Antarctica, seabed. They gave 00:51:45.520 |
us the Murchison meteorite and a few others, put them through the system. So we took all the samples, 00:51:52.000 |
treated them all identically, put them into mass spec, fragmented them, counted. And in this case, 00:51:58.160 |
implicit in the measurement was that, in mass spec, you only detect peaks when you've got more than, 00:52:05.440 |
say, let's say 10,000 identical molecules. So the copy number's already baked in. It wasn't 00:52:10.560 |
quantified, which is super important there. That was the case in the first paper, because I guess it's 00:52:14.560 |
abundant, of course. And when you then took it all out, we found that the biological samples 00:52:23.360 |
gave you molecules that had an assembly index greater than 15, and all the abiotic samples 00:52:30.160 |
were less than 15. And then we took the NASA samples, and we looked at the ones that were more 00:52:33.600 |
than 15 and less than 15, and we gave them back to NASA. They're like, "Oh, gosh. Yep, dead, living, 00:52:39.120 |
dead, living. You got it." And that's what we found on Earth. - That's a success. - Yeah. Oh, 00:52:46.080 |
yeah. Resounding success. - Can you just go back to the beer and the E. coli? So what's 00:52:52.400 |
the assembly index on those? - So what you were able to do is, like, the assembly index of... We found 00:53:01.760 |
high assembly index molecules originating from the beer sample and the E. coli sample. - The yeast 00:53:08.640 |
and the beer. - I mean, I didn't know which one was higher. We didn't really do any detail there, 00:53:13.520 |
because now we are doing that, because one of the things we've done... It's a secret, but I can tell 00:53:20.480 |
you. - Nobody's listening. - Well, is that we've just mapped the tree of life using assembly theory, 00:53:29.280 |
because everyone said, "Oh, you can't do it from biology." And what we're able to do is... So I 00:53:33.280 |
think there's three ways... Well, two ways of doing tree of life... Well, three ways, actually. - What's 00:53:38.080 |
the tree of life? - So the tree of life is basically tracing back the history of life on Earth for all 00:53:44.880 |
the different species, going back who evolved from what. And it all goes all the way back to the first 00:53:49.920 |
kind of life forms, and they branch off. And you have plant kingdom, the animal kingdom, 00:53:54.960 |
the fungi kingdom, and different branches all the way up. And the way this was classically done... 00:54:02.720 |
And I'm no evolutionary biologist. Evolutionary biologists are very keen to tell me. Every day, 00:54:07.760 |
at least 10 times. I want to be one, though. I kind of like biology. It's kind of cool. - Yeah, 00:54:12.400 |
it's very cool. Evolutionary. - But basically, what Darwin and Mendeleev and all these people 00:54:18.240 |
do is just they draw pictures, right? And they build taxa. They were able to draw pictures and say, 00:54:23.520 |
"Oh, these look like common classes." - Yeah. - Then... - They're artists, really. They're just, 00:54:31.040 |
you know... - But they were able to find out a lot, right, in looking at vertebrates and invertebrates, 00:54:35.520 |
the Cambrian explosion, all this stuff. And then came the genomic revolution, and suddenly everyone 00:54:42.320 |
used gene sequencing. And Craig Venter's a good example. I think he's gone around the world in 00:54:46.400 |
his yacht just picking up samples, looking for new species, where he's just found new species of life 00:54:51.440 |
just from sequencing. It's amazing. So you have taxonomy, you have sequencing, and then you can 00:54:57.840 |
also do a little bit of molecular archaeology, like measure the samples and kind of form some 00:55:07.200 |
inference. What we did is we were able to fingerprint... So we took a load of random 00:55:13.520 |
samples from all of biology, and we used mass spectrometry. And what we did now is not just 00:55:19.680 |
look for individual molecules, but we looked for coexisting molecules where they had to look at 00:55:25.120 |
their joint assembly space, and where we were able to cut them apart and undergo recursion in the 00:55:30.560 |
mass spec and infer some relationships. And we were able to recapitulate the tree of life using 00:55:36.480 |
mass spectroscopy, no sequencing and no drawing. - All right, can you try to say that again, 00:55:44.240 |
but a little more detail? So recreating, what does it take to recreate the tree of life? 00:55:49.360 |
What does the reverse engineering process look like here? - So what you do is you take an unknown 00:55:53.680 |
sample, you bung it into the mass spec, you get a... 'Cause this comes from what you're asking, 00:55:58.720 |
like what do you see in E. coli? And so in E. coli, you don't just see... It's not that the most 00:56:04.240 |
sophisticated cells on Earth make the most sophisticated molecules. It is the coexistence 00:56:10.400 |
of lots of complex molecules above a threshold. And so what we realized is you could fingerprint 00:56:16.400 |
different life forms. So fungi make really complicated molecules. Why? Because they can't 00:56:21.520 |
move. They have to make everything on site. Whereas some animals are lazy. They can just go 00:56:27.600 |
eat the fungi. They don't need to make very much. And so what you do is you look at the... 00:56:34.160 |
So you take, I don't know, the fingerprint, maybe the top number of high molecular weight molecules 00:56:40.160 |
you find in the sample. You fragment them to get their assembly indices. And then what you can do 00:56:45.120 |
is you can infer common origins of molecules. You can do a kind of molecular... 00:56:54.800 |
With the reverse engineering of the assembly space, you can infer common roots 00:56:59.040 |
and look at what's called the joint assembly space. But let's translate that into the experiment. 00:57:04.720 |
Take a sample, bung it in the mass spec, take the top, say, 10 molecules, fragment them, 00:57:09.360 |
and that gives you one fingerprint. Then you do it for another sample, you get another fingerprint. 00:57:15.840 |
Now the question is you say, "Hey, are these samples the same or different?" And that's what 00:57:21.200 |
we've been able to do. And by basically looking at the assembly space that these molecules create. 00:57:27.040 |
Without any knowledge of assembly theory, you are unable to do it. With a knowledge of assembly 00:57:32.320 |
theory, you can reconstruct the tree. - How does knowing if they're the 00:57:36.800 |
same or different give you the tree? - Let's go to two leaves on different 00:57:40.800 |
branches on the tree, right? What you can do by counting the number of differences, you can 00:57:46.080 |
estimate how far away their origin was. - Got it. 00:57:48.960 |
- And that's all we do. And it just works. But when we realized you could even use assembly 00:57:53.520 |
theory to recapitulate the tree of life with no gene sequencing, we were like, "Huh." 00:57:57.680 |
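To make the fingerprint comparison concrete, here is a minimal sketch under strong simplifying assumptions: each sample is reduced to a set of fragment signatures, the "number of differences" is a crude set distance, and the grouping is naive single-linkage clustering. The fingerprints are invented placeholders, and this is not the actual pipeline used in the lab.

```python
# Minimal sketch: treat each sample as a set of mass-spec fragment signatures,
# count differences between fingerprints, and group samples by similarity.
# Fingerprints and the clustering method are illustrative, not the real pipeline.

fingerprints = {
    "fungus":  {"f1", "f2", "f3", "f7", "f9"},
    "plant":   {"f1", "f2", "f4", "f8"},
    "animal":  {"f1", "f2", "f4", "f5"},
    "microbe": {"f1", "f6"},
}

def distance(a: set, b: set) -> float:
    """Fraction of fragments not shared (a crude proxy for 'number of differences')."""
    return 1.0 - len(a & b) / len(a | b)

# Naive agglomerative clustering: repeatedly merge the two closest groups.
clusters = {name: {name} for name in fingerprints}
while len(clusters) > 1:
    x, y = min(
        ((p, q) for p in clusters for q in clusters if p < q),
        key=lambda pq: min(distance(fingerprints[i], fingerprints[j])
                           for i in clusters[pq[0]] for j in clusters[pq[1]]),
    )
    print(f"merge {sorted(clusters[x])} + {sorted(clusters[y])}")
    clusters[x] |= clusters.pop(y)
```

The printed merge order is a toy stand-in for the branching structure: samples whose fingerprints differ least are grouped first, which is the "count the differences to estimate how far back the common origin was" idea in miniature.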
- So this is looking at samples that exist today in the world. What about things that are no longer 00:58:02.880 |
exist... I mean, the tree contains information about the past. Some of it is gone. 00:58:10.320 |
- Yeah, absolutely. I would love to get old fossil samples and apply assembly theory, 00:58:15.840 |
mass spec, and see if we can find new forms of life that are no longer amenable to gene sequencing 00:58:22.240 |
because the DNA is all gone. Because DNA and RNA is quite unstable, but some of the more complex 00:58:28.880 |
molecules might be there and might give you a hint of something new. Or wouldn't it be great 00:58:33.200 |
if you find a sample that's worth really persevering and doing the proper extraction 00:58:41.120 |
to PCR and so on, and then sequence it, and then put it together? 00:58:46.000 |
- So when a thing dies, you can still get some information about its complexity. 00:58:50.560 |
- Yeah. And it appears that you can do some dating. Now, there are really good techniques. 00:58:59.280 |
There's radiocarbon dating. There is longer-range dating, looking at radioactive minerals, 00:59:04.160 |
and so on. And you can also, in bone, you can look at what happens after something dies. 00:59:11.360 |
You get what's called racemization, where the chirality in the polymers basically changes, 00:59:19.760 |
and you get decomposition. And the deviation from the pure enantiomer to the mixture, 00:59:30.400 |
gives you a time scale on it, a half-life. So you can date when it died. I want to use 00:59:37.520 |
assembly theory to see if I can use it, date death and things, and trace the tree of life, 00:59:43.280 |
and also decomposition of molecules. - You think it's possible? 00:59:46.240 |
- Oh yeah, without a doubt. It may not be better than what, because I was just at a conference 00:59:51.600 |
where some brilliant people were looking at isotope enrichment, and looking at how life 00:59:56.240 |
enriches isotopes, and they're really sophisticated stuff that they're doing. But I think there's some 01:00:01.280 |
fun to be had there, because it gives you another dimension of dating. How old is this molecule? 01:00:06.480 |
In terms of, or more importantly, how long ago was this molecule produced by life? 01:00:13.200 |
The more complex the molecule, the more prospect for decomposition, oxidation, reorganization, 01:00:18.320 |
loss of chirality, and all that jazz. But what life also does is it enriches. As you get older, 01:00:25.600 |
the amount of carbon-13 in you goes up, because of the way the bonding is in carbon-13. So it 01:00:35.440 |
has a slightly different bond strength than carbon-12. It's called a kinetic isotope effect. So you can 01:00:39.840 |
literally date how old you are, or when you stop metabolizing. So you could date someone's death, 01:00:45.920 |
how old they are, I think. I'm making this up, this might be right. But I think it's roughly 01:00:50.960 |
right. The amount of carbon-13 you have in you, you can kind of estimate how old you are. 01:00:55.360 |
- How old living organisms, humans, are? - Yeah, yeah. You could say, oh, 01:01:00.880 |
this person is 10 years old, and this person's 30 years old, because they've been metabolizing 01:01:04.720 |
more carbon, and they've accumulated it. That's the basic idea. It's probably completely wrong 01:01:09.440 |
timescale. - Signatures of chemistry are fascinating. 01:01:12.560 |
So you've been saying a lot of chemistry examples for assembly theory. What if we zoom out and look 01:01:20.720 |
at a bigger scale of an object? Like really complex objects, like humans, or living organisms 01:01:29.440 |
that are made up of millions or billions of other organisms. How do you try to apply assembly theory 01:01:37.360 |
to that? - At the moment, we should be able to do this to morphology in cells. So we're looking at 01:01:44.960 |
cell surfaces, and really I'm trying to extend further. It's just that we work so hard to get 01:01:52.160 |
this paper out, and people to start discussing the ideas. But it's kind of funny, because I 01:02:00.000 |
think the penny is falling on this. - What does it mean for a penny to be- 01:02:05.920 |
- I mean, the penny's dropped, right? Because a lot of people are like, "It's rubbish. It's rubbish. 01:02:10.320 |
You've insulted me. It's wrong." The paper got published on the 4th of October. 01:02:16.320 |
It had 2.3 million engagements on Twitter, and it's been downloaded over a few hundred thousand 01:02:22.320 |
times. Someone actually wrote to me and said, "This is an example of really bad writing, 01:02:27.040 |
and what not to do." I was like, "If all of my papers got read this much, because that's the 01:02:32.160 |
objective, if I have a publishing paper and people want to read it, I want to write that badly again." 01:02:36.320 |
- Yeah. I don't know what's the deep insight here about the negativity in the space. I think 01:02:41.120 |
it's probably the immune system of the scientific community making sure that there's no bullshit 01:02:46.080 |
that gets published. And then it can overfire, it can do a lot of damage, it can shut down 01:02:51.120 |
conversations in a way that's not productive. - I'll answer your question about the hierarchy 01:02:56.160 |
and assembly, but let's go back to the perception. People saying the paper was badly written, 01:03:01.040 |
I mean, of course we could improve it. We could always improve the clarity. 01:03:04.000 |
- Let's go there before we go to the hierarchy. It has been criticized quite a bit, the paper. 01:03:11.120 |
What has been some criticism that you've found most powerful, that you can understand, 01:03:19.200 |
and can you explain it? - Yes. The most exciting criticism 01:03:26.640 |
came from the evolutionary biologists telling me that they thought that 01:03:29.680 |
origin of life was a solved problem. And I was like, "Whoa, we're really onto something, 01:03:35.520 |
because it's clearly not." And when you poked them on that, they just said, "No, 01:03:39.280 |
you don't understand evolution." And I said, "No, no, I don't think you understand that 01:03:42.560 |
evolution had to occur before biology, and there's a gap." That was really, for me, 01:03:47.360 |
that misunderstanding, and that did cause an immune response, which was really interesting. 01:03:56.160 |
The second thing was the fact that physicists, well, the physicists were actually really polite, 01:04:00.800 |
right? Really nice about it. But they just said, "We're not really sure about the initial 01:04:04.960 |
conditions thing, but this is a really big debate that we should certainly get into, because 01:04:10.160 |
the emergence of life was not encoded in the initial conditions of the universe." 01:04:15.520 |
And I think assembly theory shows why it can't be. 01:04:26.400 |
The emergence of life was not and cannot, in principle, be encoded in the initial conditions of the universe. 01:04:35.360 |
Just to clarify what I mean by life is like what, high assembly index objects? 01:04:39.680 |
Yeah. And this goes back to your favorite subject. 01:04:43.920 |
Right, so why? What does time have to do with it? 01:04:51.360 |
Probably we can come back to it later, if we have time. But I think 01:04:57.920 |
I now understand how to explain how lots of people got angry with the assembly paper, but also the 01:05:08.800 |
ramifications of this is how time is fundamental in the universe, and this notion of combinatorial 01:05:15.360 |
spaces. And there are so many layers on this, but I think you have to become an intuitionist 01:05:23.680 |
mathematician, and you have to abandon platonic mathematics, and also platonic mathematics has 01:05:29.680 |
led physics astray, but there's a lot back there. So we can go to the… 01:05:34.640 |
Platonic mathematics, okay, it's okay. Evolutionary biologists criticized it because 01:05:41.440 |
the origin of life is understood, and it doesn't require an explanation that involves 01:05:52.880 |
Well, I mean, they said lots of confusing statements. Basically, I realized the evolutionary 01:05:59.200 |
biology community that were vocal, and some of them were really rude, really spiteful, 01:06:04.160 |
and needlessly so, right? Because look, people misunderstand publication as well. Some of the 01:06:14.480 |
people have said, "How dare this be published in Nature? What a terrible journal." And it really, 01:06:22.640 |
and I've said to people, "Look, this is a brand new idea that's not only potentially going to 01:06:30.160 |
change the way we look at biology, it's going to change the way we look at the universe." 01:06:35.920 |
And everyone's saying, "How dare you? How dare you be so grandiose?" I'm like, "No, no, no, 01:06:40.880 |
this is not hype. We're not saying we've invented some, I don't know, we've discovered an alien in 01:06:48.560 |
a closet somewhere just for hype. We genuinely mean this to genuinely have the impact or ask 01:06:54.720 |
the question." And the way people jumped on that was a really bad precedent for young people who 01:06:59.600 |
want to actually do something new because this makes a bold claim. And the chances are that it's 01:07:08.160 |
not correct. But what I wanted to do is a couple of things. I wanted to make a bold claim that was 01:07:14.080 |
precise and testable and correctable. Not another woolly information in biology argument, information 01:07:21.920 |
Turing machine, blah, blah, blah, blah, blah. A concrete series of statements that can be falsified 01:07:27.760 |
and explored and either the theory could be destroyed or built upon. 01:07:32.000 |
- Well, what about the criticism of you're just putting a bunch of sexy 01:07:37.600 |
names on something that's already obvious? - Yeah, that's really good. So, the assembly 01:07:47.200 |
index of a molecule is not obvious, no one had measured it before. And no one has thought to 01:07:52.080 |
quantify selection, complexity and copy number before in such a primitive, quantifiable way. 01:08:02.000 |
I think the nice thing about this paper, this paper is a tribute to all the people that understand 01:08:09.600 |
that biology does something very interesting. Some people call it negentropy, some people call it, 01:08:15.280 |
think about organizational principles, that lots of people were not shocked by the paper because 01:08:22.480 |
they'd done it before. A lot of the arguments we got, some people said, "Oh, it's rubbish. Oh, 01:08:26.960 |
by the way, I had this idea 20 years before." I was like, which one? Is it the rubbish part 01:08:33.600 |
or the really revolutionary part? So, this kind of plucked two strings at once. It plucked the, 01:08:38.640 |
there is something interesting that biology is as we can see around this, but we haven't quantified 01:08:43.520 |
yet. And this is a first stab at quantifying that. So, the fact that people said, "This is 01:08:52.320 |
obvious," but it's also... so, if it's obvious, why have you not done it? - Sure, but there's a few 01:09:00.880 |
things to say there. One is, this is in part of a philosophical framework because it's not like 01:09:10.880 |
you can apply this generally to any object in the universe. It's very chemistry focused. 01:09:15.680 |
- Yeah, well, I think you will be able to, we just haven't got there robustly. So, 01:09:19.440 |
we can say, how can we, let's go up a level. So, if we go up from level, let's go up from 01:09:24.400 |
molecules to cells because you would jump to people and I jump to emoticons and both are 01:09:28.880 |
good and they will be assembled. - Let's stick with cells, yeah. 01:09:31.440 |
- If we go from, so if we go from molecules to assemblies and let's take a cellular assembly, 01:09:39.760 |
a nice thing about a cell is you can tell the difference between a eukaryote and a prokaryote, 01:09:44.480 |
right? The organelles are specialized differently. We then look at the cell surface 01:09:48.640 |
and the cell surface has different glycosylation patterns and these cells will stick together. 01:09:53.440 |
Now, let's go up a level with multicellular creatures. You have cellular differentiation. 01:09:57.520 |
Now, if you think about how embryos develop, you go all the way back, those cells undergo 01:10:02.080 |
differentiation in a causal way that's biomechanically a feedback between the 01:10:06.880 |
genetics and biomechanics. I think we can use assembly theory to apply to tissue types. 01:10:11.920 |
We can even apply it to different cell disease types. So, that's what we're doing next but we're 01:10:16.560 |
trying to walk, you know, the thing is I'm trying to leap ahead. I want to leap ahead to go, 01:10:21.280 |
well, we apply it to culture. Clearly, you can apply it to memes and culture 01:10:25.920 |
and we've also applied assembly theory to CAs and not as you think. 01:10:33.520 |
- Cellular automata, better. - Yeah, yeah, to cellular automata, 01:10:35.920 |
not just as you think. Different CA rules were invented by different people at different times 01:10:41.120 |
and one of my co-workers, very talented chap, basically was like, oh, I realize that 01:10:48.320 |
different people had different ideas for different rules and they copied each other and made slightly 01:10:52.880 |
different bits, but different cellular automata rules, and looked at them online, and so he was 01:10:58.640 |
able to infer an assembly index and copy number of rule whatever doing this thing but I digress. 01:11:04.240 |
But it does show you can apply it at a higher scale. So, what do we need to do to apply assembly 01:11:09.920 |
theory to things? We need to agree there's a common set of building blocks. So, in a cell, 01:11:15.120 |
well, in a multicellular creature, you need to look back in time. So, there is the initial cell 01:11:22.160 |
which the creature is fertilized and then starts to grow and then there is cell differentiation 01:11:27.600 |
and you have to then make that causal chain both on those. That requires 01:11:32.800 |
development of the organism in time or if you look at the cell surfaces and the cell types, 01:11:39.040 |
they've got different features on the cell, the walls and inside the cell. So, we're building up 01:11:47.280 |
but obviously, I want a leap to things like emoticons, language, mathematical theorem. 01:11:53.200 |
- But that's a very large number of steps to get from a molecule to the human brain. 01:12:01.040 |
- Yeah, and I think they are related but in hierarchies of emergence, right? So, 01:12:06.400 |
you shouldn't compare them. I mean, the assembly index of a human brain, what does that even mean? 01:12:11.200 |
Well, maybe we can look at the morphology of the human brain. Say, all human brains have these 01:12:15.840 |
number of features in common. If they have those number of features, and then let's look at a brain in a whale 01:12:21.760 |
or a dolphin or a chimpanzee or a bird and say, "Okay, let's look at the assembly indices number 01:12:27.840 |
of features in these," and now the copy number is just a number of how many birds are there, 01:12:32.400 |
how many chimpanzees are there, how many humans are there. 01:12:34.640 |
- But then you have to discover for that the features that you would be looking for. 01:12:39.360 |
- Yeah, and that means you need to have some idea of the anatomy. 01:12:42.800 |
- But is there an automated way to discover features? 01:12:45.280 |
- I guess so. I mean, and I think this is a good way to apply machine learning and image 01:12:53.040 |
recognition just to basically characterize things. 01:12:55.040 |
- So apply compression to it to see what emerges and then use the thing, 01:12:58.640 |
the features used as part of the compression as the measurement of, 01:13:04.320 |
as the thing that is searched for when you're measuring assembly index and copy number. 01:13:09.280 |
- And the compression has to be, remember, the assembly universe, which is you have to go from 01:13:13.760 |
assembly possible to assembly contingent and that jump from, because assembly possible, 01:13:19.120 |
all possible brains, all possible features all the time. But we know that on the tree of life 01:13:25.600 |
and also on the lineage of life, going back to LUCA, the human brain just didn't spring into 01:13:29.360 |
existence yesterday. It is a long lineage of brains going all the way back. And so if we could 01:13:34.800 |
do assembly theory to understand the development, not just in evolutionary history, but in biological 01:13:41.440 |
development as you grow, we are going to learn something more. 01:13:44.720 |
- What would be amazing is if you can use assembly theory, this framework to show the 01:13:51.680 |
increase in the assembly index associated with, I don't know, cultures or pieces of text like 01:14:01.680 |
language or images and so on, and illustrate without knowing the data ahead of time, just 01:14:08.000 |
kind of like you did with NASA, that you were able to demonstrate that it applies in those other 01:14:12.080 |
contexts. I mean, and that, you know, probably wouldn't at first, and you have to evolve the 01:14:17.120 |
theory somehow, you have to change it, you have to expand it, you know. 01:14:22.240 |
- But like that, I guess this is as a paper, a first step in saying, okay, can we create a general 01:14:30.800 |
framework for measuring complexity of objects, for measuring life, the complexity of living organisms? 01:14:40.960 |
- That is the first step, and also to say, look, we have a way of quantifying selection 01:14:46.640 |
and evolution in a fairly, not mundane, but a fairly mechanical way. Because before now, 01:14:54.480 |
it wasn't very, the ground truth for it was very subjective, whereas here we're talking 01:15:00.880 |
about clean observables, and there's gonna be layers on that. I mean, with collaborators right 01:15:05.920 |
now, we already think we can do assembly theory on language, and not only that, wouldn't it be 01:15:10.560 |
great if we can, so the, if we can figure out how under pressure language is gonna evolve and be 01:15:18.080 |
more efficient, 'cause you're gonna wanna transmit things, and again, it's not just about compression, 01:15:23.120 |
it is about understanding how you can make the most of the architecture you've already built. 01:15:29.760 |
And I think this is something beautiful that evolution does, we're reusing those architectures, 01:15:35.040 |
we can't just abandon our evolutionary history, and if you don't wanna abandon your evolutionary 01:15:39.440 |
history, and you know that evolution has been happening, then assembly theory works. And I 01:15:44.400 |
think that's a key comment I wanna make, is that assembly theory is great for understanding 01:15:49.600 |
where evolution has been used. The next jump is when we go to technology, 'cause of course, 01:15:55.680 |
if you take the M3 processor, I wanna buy, I haven't bought one yet, I can't justify it, 01:16:00.880 |
but I want it at some point. The M3 processor arguably is, there's quite a lot of features, 01:16:05.520 |
a quite large number, the M2 came before it, then the M1, all the way back, you can apply 01:16:10.160 |
assembly theory to microprocessor architecture. It doesn't take a huge leap to see that. 01:16:15.360 |
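One way to see how the reuse idea could carry over to sequences like text or versioned designs is this toy sketch: it builds a string by joining pieces, letting anything already built be reused in one step, and counts the joins. It is a greedy upper bound on a true shortest construction path, purely illustrative and not the measure used in the paper.

```python
# Toy sketch of an assembly-style measure on a sequence: build the string by joining
# pieces, where anything already constructed can be reused in a single step. Greedy
# left-to-right, so it only gives an upper bound on the true shortest construction.

def toy_assembly_steps(s: str) -> int:
    pool = set(s)        # basic building blocks: the individual characters
    prefix = s[0]
    steps = 0
    pos = 1
    while pos < len(s):
        # append the longest chunk that is already in the pool (falls back to one char)
        step = next(k for k in range(len(s) - pos, 0, -1) if s[pos:pos + k] in pool)
        prefix += s[pos:pos + step]
        pool.add(prefix)  # the newly built object can itself be reused later
        steps += 1        # one joining operation
        pos += step
    return steps

print(toy_assembly_steps("abracadabra"))  # 7: the repeated 'abra' is reused, not rebuilt
print(toy_assembly_steps("abcdefghijk"))  # 10: no reuse possible, every join is new
```

The same bookkeeping applies, in principle, to any lineage of artifacts: the more a new version can be reached by reusing what already exists, the fewer joining steps it needs, which is the signature of a lineage rather than of chance.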
I'm a Linux guy, by the way, so your examples go way over my head. 01:16:19.120 |
Is that, is that like a, is that a fruit company of some sort? I don't even know. 01:16:22.640 |
Yeah, there's a lot of interesting stuff to ask about language, like you could look at, 01:16:27.040 |
how would that work? You could look at GPT-1, GPT-2, GPT-3, 3.5, 4, 01:16:35.280 |
and try to analyze the kind of language it produces. I mean, that's almost trying to look 01:16:44.400 |
Yeah, I mean, I think the thing about large language models, and this is a whole hobby 01:16:51.760 |
horse I have at the moment, is that, obviously, the evidence of evolution 01:17:03.760 |
in the large language model comes from all the people that produced all the language. 01:17:08.480 |
And that's really interesting, and all the corrections in the mechanical Turk, right? 01:17:17.680 |
That's the part of the history, part of the memory of the system. 01:17:20.000 |
Exactly. So, can you, so, so it would be really interesting to basically use 01:17:25.520 |
an assembly-based approach to making language in a hierarchy, right? I think, my guess is that 01:17:33.520 |
you could, we might be able to build a new type of large language model that uses assembly theory, 01:17:39.360 |
that it has more understanding of the past and how things were created. 01:17:45.600 |
Basically, the thing with LLMs is they're like everything, everywhere, all at once, 01:17:50.560 |
splat, and make the user happy. So, there's not much intelligence in the model. The model is how 01:17:56.560 |
the human interacts with the model, but wouldn't it be great if we could understand how to embed intelligence? 01:18:03.280 |
What do you mean by intelligence there? Like, you seem to associate intelligence with history. 01:18:10.880 |
Yeah, well, I think selection produces intelligence. 01:18:14.240 |
You're almost implying that selection is intelligence. No. 01:18:20.640 |
Yeah, kind of. I would go out on a limb and say that, but I think it's 01:18:23.840 |
a little bit more: human beings have the ability to abstract, and they can break beyond selection. 01:18:28.480 |
And this is unlike Darwinian selection, because the human being doesn't have to basically 01:18:33.680 |
do trial and error, but they can think about it and say, "Oh, that's a bad idea. I won't do that," 01:18:39.440 |
So, we escaped Darwinian evolution, and now we're onto some other kind of evolution, I guess, 01:18:46.400 |
And assembly theory will measure that as well, right? Because it's all a lineage. 01:18:50.480 |
Okay, another piece of criticism, or by way of question, is how is assembly theory, 01:18:57.920 |
or maybe assembly index, different from Kolmogorov complexity? So, for people who don't know, 01:19:02.320 |
Kolmogorov complexity of an object is the length of the shortest computer program that produces it as output. 01:19:09.360 |
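For contrast with the toy assembly sketch above, here is an equally rough illustration of the compression view: approximating description length with an off-the-shelf compressor (true Kolmogorov complexity is uncomputable). It only shows that compression rewards repetition in a data set; it carries no notion of a causal construction history, which is the distinction discussed next. All strings and numbers are invented.

```python
# Rough contrast with the compression-based view. Compressed length is a crude
# stand-in for Kolmogorov complexity; it measures how short a description is,
# not whether the object had to be built through a chain of reused parts.

import random
import zlib

random.seed(0)
structured = "abracadabra" * 30                      # built by reusing one motif
random_str = "".join(random.choice("abrcd") for _ in range(len(structured)))

for name, s in [("structured", structured), ("random", random_str)]:
    compressed = len(zlib.compress(s.encode()))
    print(f"{name:10s} length={len(s)} compressed={compressed} bytes")
# The repetitive string compresses far better than the random one, but the
# compressor treats both as data to squeeze; it has no model of the factory
# or the causal chain that produced either object.
```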
Yeah, I seem to, there seems to be a disconnect between the computational approach. So, yeah, 01:19:18.000 |
so Kolmogorov measure requires a Turing machine, requires a computer, and that's one thing. 01:19:28.480 |
And the other thing is, assembly theory is supposed to trace the process 01:19:36.160 |
by which life evolution emerged, right? That's the main thing there. 01:19:41.600 |
There are lots of other layers. So, Kolmogorov complexity, you can approximate Kolmogorov 01:19:49.040 |
complexity, but it's not really telling you very much about the actual, it's really telling you 01:19:56.960 |
about like your data set, compression of your data set. And so, that doesn't really help you 01:20:03.200 |
identify the turtle, which in this case is the computer. And so, what assembly theory does is, 01:20:09.840 |
I'm going to say, this is a trigger warning for anyone listening who loves complexity theory, 01:20:17.600 |
I think that we're going to show that AIT is a very important subset of assembly theory, 01:20:23.120 |
because here's what happens. I think that assembly theory allows us to build, 01:20:30.960 |
understand when selection was occurring; selection produces factories and things, 01:20:36.480 |
factories in the end produce computers, and algorithmic information theory comes 01:20:41.600 |
out of that. The frustration I've had with looking at life through this kind of information theory is 01:20:47.920 |
it doesn't take into account causation. So, the main difference between assembly theory and all 01:20:54.560 |
these complexity measures is there's no causal chain. And I think that's the main- 01:21:00.240 |
That's the causal chain at the core of assembly theory. 01:21:06.160 |
Exactly. And if you've got all your data in a computer memory, all the data is the same, 01:21:10.400 |
you can access it in the same way, you don't care, you just compress it, and you either look at a 01:21:16.240 |
program runtime or the shortest program. And that, for me, is absolutely not capturing what it is, 01:21:28.160 |
But assembly theory looks at objects, it doesn't have information about 01:21:32.720 |
the object history, it's going to try to infer that history by looking for the shortest history, 01:21:42.160 |
right? The object doesn't have a Wikipedia page that goes with it, about its history. 01:21:49.760 |
I would say it does in a way, and it is fascinating to look at. So, you've just got the object, 01:21:55.680 |
and you have no other information about the object. What assembly theory allows you to do 01:21:59.040 |
just with the object is to- and the word "infer" is correct, I agree, "infer". You say, 01:22:05.920 |
"Well, that's not the history." But something really interesting comes from this. 01:22:09.760 |
The shortest path is inferred from the object. That is the worst-case scenario if you have no 01:22:16.640 |
machine to make it. So, that tells you about the depth of that object in time. 01:22:23.280 |
And so, what assembly theory allows you to do is, without considering any other circumstances, 01:22:28.560 |
to say from this object, "How deep is this object in time?" 01:22:31.200 |
If we just treat the object as itself, without any other constraints. And that's super powerful, 01:22:38.960 |
because the shortest path then says, allows you to say, "Oh, this object wasn't just created 01:22:43.760 |
randomly, there was a process." And so, assembly theory is not meant to, you know, one-up AIT, 01:22:51.520 |
or to ignore the factory. It's just to say, "Hey, there was a factory. How big was that factory, 01:22:59.920 |
and how deep in time is it?" - But it's still computationally very difficult to compute 01:23:06.080 |
that history, right, for complex objects. - It is, it becomes harder. But one of the 01:23:13.360 |
things that's super nice is that it constrains your initial conditions, right? It constrains 01:23:19.120 |
where you're going to be. So, if you take, say, imagine, so one of the things we're doing right 01:23:23.680 |
now is applying assembly theory to drug discovery. Now, what everyone's doing right now is taking 01:23:29.120 |
all the proteins, and looking at the proteins, and looking at molecules docked with proteins. 01:23:33.600 |
Why not instead look at the molecules that are involved in interacting with the receptors over 01:23:40.320 |
time, rather than thinking about, and use the molecules that evolve over time as a proxy for 01:23:45.280 |
how the proteins evolved over time, and then use that to constrain your drug discovery process. You 01:23:51.280 |
flip the problem 180, and focus on the molecule evolution, rather than the protein. And so, 01:23:58.000 |
you can guess in the future what might happen. So, rather than having to consider all possible 01:24:04.080 |
molecules, you know where to focus. And that's the same thing if you're looking at an assembly 01:24:08.960 |
spaces for an object where you don't know the entire history, but you know that in the history 01:24:15.680 |
of this object, it's not going to have some other motif there that it doesn't appear in the past. 01:24:21.920 |
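A minimal sketch of the pruning idea described here, under the assumption that you can enumerate the motifs seen in an object's lineage: candidates that would require a motif never seen in that history are discarded before any expensive modelling. The motif strings and candidates are hypothetical placeholders, not real chemistry or the group's actual workflow.

```python
# Minimal sketch of constraining a search with history: keep only candidates whose
# building blocks all appear among motifs observed in the lineage so far.
# Motifs and candidates are hypothetical placeholder strings, not real chemistry.

observed_motifs = {"C=O", "C-N", "ring6", "OH"}     # motifs seen in the lineage so far

candidates = {
    "cand_1": {"C=O", "OH"},                        # uses only known motifs
    "cand_2": {"C-N", "ring6", "OH"},               # uses only known motifs
    "cand_3": {"C=O", "P-F"},                       # needs a motif never seen before
}

viable = {name for name, motifs in candidates.items()
          if motifs <= observed_motifs}             # subset test: no unseen motifs

print(sorted(viable))   # ['cand_1', 'cand_2'] -- the rest are pruned before any simulation
```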
But just even for the drug discovery point you made, don't you have to simulate all of chemistry 01:24:27.120 |
to figure out how to come up with constraints? 01:24:37.280 |
This is another thing that I think causes, because this paper goes across so many boundaries. So, 01:24:41.840 |
chemists have looked at this and said, "This is not a correct reaction." I was like, "No, it's a graph." 01:24:48.960 |
Sure, there's assembly index and shortest path examples here on chemistry. 01:24:58.320 |
Yeah. And so, what you do is you look at the minimal constraints on that graph. 01:25:03.840 |
Of course, it has some mapping to the synthesis, but actually, you don't have to know all of 01:25:09.280 |
chemistry. You can build up the constraint space rather nicely. But this is just at the beginning, 01:25:16.800 |
right? There are so many directions this could go in. And as I said, it could all be wrong, 01:25:22.160 |
What about the little criticism I saw of, by way of question, do you consider the different 01:25:30.000 |
probabilities of each reaction in the chain? So, that there could be different... When you look at 01:25:37.600 |
a chain of events that led up to the creation of an object, doesn't it matter that some parts in the chain are less likely than others? 01:25:48.320 |
No, no. Well, let's go back. So, no, not less likely, but react... So, no. So, let's go back 01:25:54.960 |
to what we're looking at here. So, the assembly index is the minimal path that could have created 01:26:00.000 |
that object probabilistically. So, imagine you have all your atoms in a plasma, you've got enough 01:26:05.280 |
energy, you've got enough... There's collisions. What is the quickest way you could zip out that object? 01:26:14.480 |
It's just basically a walk on a random graph. So, we make an assumption that basically, 01:26:19.680 |
the timescale for forming the bonds... So, no, I don't want to say that, 01:26:23.040 |
because then it's going to have people obsessing about this point, and your criticism 01:26:26.080 |
is a really good one. What we're trying to say is like, this puts a lower bound on something. 01:26:30.960 |
Of course, some reactions are less possible than others, but actually, chemical reactions don't exist. 01:26:38.400 |
Oh, boy. What does that mean? Why don't chemical reactions exist? 01:26:43.520 |
I'm writing a paper right now that I keep being told I have to finish, and it's called 01:26:48.640 |
"The Origin of Chemical Reactions", and it merely says that reactivity exists as controlled by the 01:26:54.880 |
laws of quantum mechanics, and reactions... Chemists put names on reactions. So, you could 01:27:00.560 |
have like, I don't know, the Wittig reaction, which is by Wittig. You could have the Suzuki 01:27:06.720 |
reaction, which is by Suzuki. Now, what are these reactions? So, these reactions are constrained by 01:27:12.480 |
the following. They're constrained by the fact they're on planet Earth, 1G, 298 Kelvin, 1 bar. 01:27:19.280 |
So, these are constraints. They're also constrained by the chemical composition of Earth, 01:27:24.800 |
oxygen, availability, all this stuff, and that then allows us to focus in our chemistry. 01:27:30.800 |
So, when a chemist does a reaction, that's a really nice compressed shorthand for constraint 01:27:36.720 |
application. Glass flask, pure reagent, temperature, pressure, bom-bom-bom-bom-bom, 01:27:42.240 |
control-control-control-control-control. So, of course, we have bond energies. 01:27:48.320 |
So, the bond energies are kind of intrinsic in a vacuum, if you say that. So, the bond energy, 01:27:54.800 |
you have to have a bond. And so, for assembly theory to work, you have to have a bond, which 01:28:00.480 |
means that bond has to give the molecule a certain half-life. So, you're probably going to find later 01:28:05.920 |
on that some bonds are weaker and that you are going to miss in mass spectra. When you count, 01:28:11.760 |
look at the assembly of some molecules, you're going to miscount the assembly of the molecule 01:28:16.400 |
because it falls apart too quickly because the bonds just form. But you can solve that 01:28:20.000 |
by looking at infrared. So, when people think about the probability, they're kind of 01:28:25.120 |
misunderstanding. Assembly theory says nothing about the chemistry because chemistry is chemistry 01:28:31.600 |
and the constraints are put in by biology. There was no chemist on the origin of life, unless you 01:28:37.200 |
believe in the chemist in the sky and they were, you know, it's like Santa Claus, they had a lot 01:28:41.680 |
of work to do. But chemical reactions do not exist; the constraints that allow chemical 01:28:49.680 |
transformations to occur do exist. - Okay, okay. So, it's constraint 01:28:54.880 |
application, so there's no chemical reactions, it's all constraint application, which enables the 01:29:03.040 |
emergence of, what's a different word for chemical reaction? - Transformation. 01:29:12.560 |
it's a function. But no, I love chemical reactions as a shorthand and so the chemists don't all go 01:29:18.160 |
mad. I mean, of course chemical reactions exist on Earth. - It's a shorthand. 01:29:21.200 |
- It's a shorthand for these constraints. - Right. So, assuming all these constraints 01:29:25.680 |
that we've been using for so long, we just assume that that's always the case 01:29:29.040 |
in natural language conversation. - Exactly. The grammar of chemistry, 01:29:33.200 |
of course, emerges in reactions and we can use them reliably, but I do not think the 01:29:38.000 |
Wittig reaction is accessible on Venus. - Right, and this is useful to remember, 01:29:42.640 |
you know, to frame it as constraint application is useful for when you zoom out to the bigger 01:29:49.280 |
picture of the universe and looking at the chemistry of the universe and then starting 01:29:52.400 |
to apply assembly theory. That's interesting, that's really interesting. 01:29:56.080 |
But we've also pissed off the chemists now. - Oh, they're pretty happy, but, well, most of them. 01:30:04.160 |
- No, everybody, everybody deep down is happy, I think. They're just sometimes feisty, 01:30:11.440 |
that's how they show, that's how they have fun. - Everyone is grumpy on some days when you 01:30:15.440 |
challenge. The problem with this paper is you, what it's like, it's almost like I went to a party, 01:30:19.440 |
it's like you, I used to do this occasionally when I was young, is go to a meeting and just 01:30:24.320 |
find a way to offend everyone at the meeting simultaneously. Even the factions that don't 01:30:29.440 |
like each other, they're all unified in their hatred of you just offending them. This paper, 01:30:33.600 |
it feels like the person that went to the party and offended everyone simultaneously, 01:30:37.440 |
so they stopped fighting with themselves and just focused on this paper. 01:30:40.000 |
- Maybe just a little insider interesting information, what were the editors of Nature, 01:30:46.480 |
the reviews and so on, how difficult was that process? Because this is a pretty big paper. 01:30:52.960 |
- Yeah, I mean, so when we originally sent the paper, we sent the paper and the editor said, 01:31:02.320 |
like, this is quite a long process. We sent the paper and the editor gave us some feedback. 01:31:10.240 |
And said, you know, I don't think it's that interesting, or it's hard, it's a hard concept. 01:31:17.920 |
And we asked, and the editor gave us some feedback. And Sarah and I took a year to rewrite the paper. 01:31:26.400 |
- Was the nature of the feedback very specific on, like, this part, this part? Or was it like, 01:31:31.200 |
what are you guys smoking, what kind of? - Yeah, it was kind of the latter, 01:31:35.520 |
- And, you know. - But polite and there's promise. 01:31:41.040 |
- Yeah, well, the thing is, the editor was really critical, but in a really professional way. And I 01:31:48.400 |
mean, for me, this was the way science should happen. So when it came back, you know, we had 01:31:52.400 |
too many equations in the paper. If you look at the preprint, they're just equations everywhere, 01:31:55.680 |
like 23 equations. And when I said to Abhishek, who was the first author, we got to remove all 01:31:59.840 |
the equations, bar my assembly equation thing. Abhishek was like, you know, no, we can't. I said, 01:32:05.040 |
well, look, if we want to explain this to people, it's a real challenge. And so Sarah and I went 01:32:10.400 |
through the, I think it was actually 160 versions of the paper, but we basically, we got to version 01:32:15.200 |
40 or something. We said, right, zero, let's start again. So we wrote the whole paper again. 01:32:22.000 |
- And we just went bit by bit by bit and said, what is it we want to say? And then we sent the 01:32:26.800 |
paper in and to our, we expected it to be rejected and not even go to review. And then we got 01:32:35.040 |
notification back, it had gone to review and we were like, oh my God, it's so going to get rejected. 01:32:39.440 |
How's it going to get rejected? Because the first assembly paper, the one on the mass spec we sent to 01:32:43.680 |
Nature, went through six rounds of review and was rejected, right? And this biochemist just 01:32:50.560 |
said, I don't believe you, you must be committing fraud. And a long story, probably a boring story, 01:32:55.920 |
but in this case, it went out to review, the comments came back and the comments were incredibly, 01:33:02.960 |
they were very deep comments from all the reviewers. But the nice thing was the reviewers 01:33:18.400 |
were kind of very critical, but not dismissive. They were like, oh really? Explain this, explain 01:33:24.960 |
this, explain this, explain this. Are you sure it's not Kolmogorov? Are you sure it's not this? 01:33:28.880 |
And we went through, I think three rounds of review, pretty quick. And the editor went, 01:33:36.560 |
yeah, it's in. - Maybe you could just comment on the whole process. You've published some 01:33:42.320 |
pretty huge papers on all kinds of topics within chemistry and beyond. Some of them have some 01:33:47.040 |
little spice in them, a little spice of crazy. Like Tom Waits says, I like my town with a little 01:33:53.840 |
drop of poison. It's not a mundane paper. So what's it like psychologically to go through all 01:34:03.200 |
this process, to keep getting rejected, to get reviews from people that don't get the paper or 01:34:08.640 |
all that kind of stuff? Just from a question of a scientist, what is that like? - I think it's, 01:34:19.520 |
I mean, this paper for me kind of, because this wasn't the first time we tried to publish 01:34:24.400 |
assembly theory at the highest level. The nature communications paper on the mass spec, 01:34:29.280 |
on the idea, went through, went to nature and got rejected, went through six rounds of review and 01:34:34.960 |
got rejected. And I just was so confused when the chemist said, this can't be possible. I do not 01:34:46.400 |
believe you can measure complexity using mass spec. And also, by the way, molecules, complex 01:34:54.720 |
molecules can randomly form. And we're like, but look at the data, the data says. And they said, 01:34:59.360 |
no, no, we don't believe you. And we went, and I just wouldn't give up. And the editor in the end 01:35:07.120 |
was just like, the different editors actually, right? - What's behind that never giving up? 01:35:11.840 |
Is it like, when you're sitting there, 10 o'clock in the evening, there's a melancholy feeling that 01:35:17.920 |
comes over you. And you're like, okay, this is rejection number five. Or it's not rejection, 01:35:24.240 |
but maybe it feels like a rejection because the comments are, you totally don't get it. 01:35:29.600 |
Like, what gives you strength to keep going there? - I don't know. 01:35:40.160 |
I don't normally get emotional about papers, but 01:35:42.160 |
it's not about giving up because we want to get it published, because we want the glory 01:35:59.840 |
or anything. It's just like, why don't you understand? And so, what I would just try to be 01:36:11.200 |
as rational as possible and say, yeah, you didn't like it. Tell me why. And then, 01:36:26.480 |
sorry. Silly. Never get emotional about papers normally, but I think what we do, 01:36:33.840 |
you just compress like five years of angst from this. - So, it's been rough. 01:36:39.200 |
- It's not just rough. It's like, it happened, I came up with the assembly equation, 01:36:43.680 |
remote from Sarah in Arizona and the people at SFI. I felt like I was a mad person. Like, 01:36:53.120 |
the guy depicted in A Beautiful Mind who was just like, not the actual genius part, 01:36:59.040 |
but just the gibberish. Because I kept writing expansions and I have no mathematical ability at 01:37:06.400 |
all. And I was making these mathematical expansions where I kept seeing the same 01:37:10.560 |
motif again. I was like, I think this is a copy number. The same string is coming again and again 01:37:15.760 |
and again. I couldn't do the math. And then I realized the copy number fell out of the equation 01:37:20.240 |
and everything collapsed down. I was like, oh, that works, kind of. So, we submitted the paper. 01:37:25.040 |
And then when it was almost accepted, the mass spec one, and it was astrobiologists said great, 01:37:31.040 |
mass spectroscopists said great, and the chemists went, nonsense. Like, biggest pile of nonsense 01:37:36.720 |
ever, fraud. And I was like, but why fraud? And they just said, just because. And I was like, 01:37:42.880 |
well, and so, and I could not convince the editor in this case. The editor was just so pissed off, 01:37:50.080 |
because they see it as like a kind of, you know, you're wasting my time. And I would not give up. 01:37:55.760 |
I wrote, I went and dissected, you know, all the parts. And I think, although, I mean, 01:38:01.360 |
I got upset about it, you know, it was kind of embarrassing actually, but I guess. 01:38:08.240 |
But it was just trying to understand why they didn't like it. So, they were part of me was like, 01:38:14.160 |
really devastated. And a part of me was super excited, because I'm like, huh, they can't tell 01:38:19.440 |
me why I'm wrong. And this kind of goes back to, you know, when I was at school, I was in a kind 01:38:25.680 |
of learning difficulties class. And I kept going to the teacher and say, you know, how, what do I 01:38:30.800 |
do today to prove I'm smart? And they were like, nothing, you can't. I was like, give me a job, 01:38:35.200 |
you know, give me something to do. Give me a job, something to do. And I kind of felt 01:38:41.680 |
like that a bit when I was arguing with the, and not arguing, there was no ad hominem, I wasn't 01:38:45.920 |
telling the editor, they were idiots or anything like this, or the reviewers, I kept it strictly 01:38:50.160 |
like factual. And all I did is I just kept knocking it down bit by bit by bit by bit by bit. 01:38:56.640 |
It was ultimately rejected, and it got published elsewhere. And then the actual experiment or data. 01:39:04.400 |
So this is kind of the, in this paper, the experiment or justification was already published. 01:39:08.960 |
So when we did this one, and we went through the versions, and then we sent it in, and in the end, 01:39:14.720 |
it just got accepted. We were like, well, that's kind of cool, right? This is kind of like, you 01:39:19.040 |
know, some days you had, you know, the student, sorry, the first author was like, I can't believe 01:39:25.600 |
it got accepted. Like, nor am I, but it's great. It's like, it's good. And then when the paper was 01:39:30.880 |
published, I was not expecting the backlash. I was expecting computational, well, no, actually, 01:39:38.000 |
I was just expecting one person had been trolling me for a while about it, just to carry on trolling, 01:39:42.160 |
but I didn't expect the backlash. And then I wrote to the editor and apologized. And the editor was 01:39:48.400 |
like, what are you apologizing for? It was a great paper. Of course it's going to get backlash. You 01:39:52.800 |
said some controversial stuff, but it's awesome. - Well, I think it's a beautiful story of 01:39:58.720 |
perseverance, and the backlash is just a negative word for discourse, which I think is beautiful. 01:40:06.720 |
- I think you, as I said to, you know, when it got accepted, and people were saying, 01:40:13.360 |
were kind of like hacking on it, and I was like, papers are not gold medals. The reason I wanted 01:40:20.080 |
to publish that paper in Nature is because it says, hey, there's something before biological 01:40:27.920 |
evolution. You have to have that if you're not a creationist, by the way. This is an approach. 01:40:33.920 |
First time someone has put a concrete mechanism, or sorry, a concrete quantification, and what comes 01:40:40.080 |
next, you're pushing on, is a mechanism. And that's what we need to get to, is an autocatalytic 01:40:44.960 |
set, self-replicating molecules, some other features that come in. And the fact that this 01:40:51.440 |
paper has been so discussed, for me, is a dream come true. Like, it doesn't get better than that. 01:40:57.920 |
If you can't accept a few people hating it, and the nice thing is, the thing that really makes 01:41:03.040 |
me happy, is that no one has attacked the actual physical content. Like, you can measure the 01:41:10.720 |
assembly index. You can measure selection now. So either that's right, or it's, well, either that's 01:41:16.960 |
helpful or unhelpful. If it's unhelpful, this paper will sink down, and no one will use it again. 01:41:22.880 |
If it's helpful, it'll help people build scaffold on it, and we'll start to converge to a new 01:41:27.360 |
paradigm. So I think that that's the thing that I wanted to see, you know, my colleagues, authors, 01:41:35.040 |
collaborators, and people who were like, you've just published this paper, you're a chemist. Why 01:41:40.000 |
have you done this? Like, who are you to be doing evolutionary theory? Like, well, I don't know. I 01:41:46.400 |
mean, sorry, did I need to-- - Who is anyone to do anything? Well, I'm glad you did. Let me just, 01:41:51.520 |
before coming back to origin of life and these kinds of questions, you mentioned learning 01:41:57.040 |
difficulties. I didn't know about this. So what was it like? - I wasn't very good at school, right? 01:42:03.440 |
- This is when you were very young? - Yeah, yeah. Well, but in primary school, 01:42:07.680 |
my handwriting was really poor, and apparently I couldn't read, and my mathematics was very poor. 01:42:15.920 |
So they just said, this is a problem. They identified it. My parents kind of at the time 01:42:20.080 |
were confused because I was busy taking things apart, buying electronic junk from the shop, 01:42:24.320 |
trying to build computers and things. And then once I got out of, when I was, I think about 01:42:31.120 |
the major transition in my stupidity, like, you know, everyone thought I wasn't that stupid when 01:42:36.640 |
I was, basically everyone thought I was faking. I liked stuff, and I was faking wanting to be it. 01:42:41.200 |
So I always wanted to be a scientist. So five, six, seven years, I'll be a scientist, take things 01:42:45.360 |
apart. And everyone's like, yeah, this guy wants to be a scientist, but he's an idiot. 01:42:49.520 |
And so everyone was really confused, I think, at first, that I wasn't smarter than I, you know, 01:42:56.640 |
was claiming to be. And then I just basically didn't do well in any of the tests, and I went 01:43:00.320 |
down and down and down and down. And then, and I was kind of like, huh, this is really embarrassing. 01:43:07.040 |
I really like maths, and everyone says I can't do it. I really like kind of, you know, physics and 01:43:13.120 |
chemistry and all that in science. And people say, you know, you can't, you can't read and write. 01:43:17.680 |
And so I found myself in a learning difficulties class at the end of primary school and the 01:43:22.720 |
beginning of secondary school. In the UK, secondary school is like 11, 12 years old. 01:43:27.440 |
And I remember being put in the, in the remedial class. And the remedial class was basically full 01:43:34.720 |
of, well, two types, three types of people. There were people that were quite violent, right? And 01:43:44.800 |
there were people who couldn't speak English, and there were people that really had learning 01:43:50.320 |
difficulties. So the one thing I can objectively remember was, I mean, I could read. I like 01:44:10.880 |
reading. I read a lot. But something in me, I was, I'm a bit of a rebel. I refused to read 01:44:18.480 |
what I was told to read. And I found it difficult to read individual words in the way they were told. 01:44:24.400 |
But anyway, I got caught one day teaching someone else to read. And they said, okay, 01:44:32.480 |
we don't understand this. I always knew I wanted to be a scientist, but I didn't really know what 01:44:39.680 |
that meant. And I realized you have to go to university. And I thought, I can just go to 01:44:42.720 |
university. It's like curious people, like, no, no, no, you need to have these, you have to be 01:44:46.240 |
able to enter these exams to get this grade point average. And the fact is, the exams you've been 01:44:50.880 |
entered into, you're just going to get C, D, or E. You can't even get A, B, or C, right? These 01:44:57.600 |
are the UK GCSEs. I was like, oh, shit. And I said, can you just put me into the higher exams? 01:45:03.840 |
They said, no, no, you're going to fail. There's no chance. So my father kind of intervened and 01:45:09.760 |
said, you know, just let him go in the exams. And they said, he's definitely going to fail. It's a 01:45:14.720 |
waste of time, waste of money. And he said, well, what if we paid? So they said, well, okay. So you 01:45:20.560 |
didn't actually have to pay. You didn't have to pay if I failed. So I took the exams and passed 01:45:24.720 |
them, fortunately. I didn't get the top grades, but I, you know, I got into A-levels. But then 01:45:30.240 |
that also kind of limited what I could do at A-levels. I wasn't allowed to do A-level maths. 01:45:34.320 |
- What do you mean you weren't allowed to? - Because I had such a bad math grade from 01:45:39.040 |
my GCSE, I only had a C. But they wouldn't let me go into the ABC for maths because of some kind of 01:45:44.320 |
coursework requirement back then. So the top grade I could have got was a C. So C, D, or E. So I got 01:45:48.960 |
a C. And they let me do kind of AS-level maths, which is this half intermediate, but go to 01:45:55.200 |
university. But in the end, I liked chemistry. I had a good chemistry teacher. So in the end, 01:45:59.200 |
I got to university to do chemistry. - So through that kind of process, I think 01:46:03.280 |
for kids in that situation, it's easy to start believing that you're not, 01:46:10.720 |
well, how do I put it? That you're stupid. And basically give up that you're just not 01:46:16.240 |
good at math, you're not good at school. So this is by way of advice for people, 01:46:21.120 |
for interesting people, for interesting young kids right now experiencing the same thing. 01:46:27.520 |
Where was the place, what was the source of you not giving up there? 01:46:31.920 |
- I have no idea other than I was really, I really like not understanding stuff. For me, 01:46:42.640 |
when I didn't understand something... I mean, I feel like I don't understand 01:46:47.680 |
anything now. But back then I was so, I remember when I was like, I don't know, 01:46:53.040 |
I tried to build a laser when I was like eight. And I thought, how hard could it be? 01:47:00.960 |
And I basically, I was gonna build a CO2 laser. I was like, right, I think I need some partially 01:47:08.160 |
coated mirrors, I need some carbon dioxide, and I need a high voltage. So I kind of, 01:47:17.040 |
and I was like, I didn't have, and I was so stupid, right? I was kind of so embarrassed. 01:47:21.920 |
I had to make enough CO2, I actually set a fire and tried to filter the flame. 01:47:27.760 |
- To trap enough CO2. And I was like, completely failed, and I burnt half the garage down. 01:47:35.120 |
So my parents were not very happy about that. So that was one thing, I was like, 01:47:39.120 |
I really like first principle thinking. And so, I remember being super curious 01:47:46.560 |
and being determined to find answers. And so the kind of, when people do give advice about this, 01:47:52.640 |
well, I ask for advice about this, I don't really have that much advice other than don't 01:47:56.640 |
give up. And one of the things I try to do as a chemistry professor in my group, is I hire people 01:48:05.760 |
that I think that, you know, I'm kind of, who am I, if they're persistent enough, 01:48:10.800 |
who am I to deny them the chance? Because, you know, people gave me a chance and I was able to 01:48:20.080 |
- I like, so I love being around smart people, and I love confusing smart people. 01:48:25.040 |
And when I'm confusing smart people, you know, not by stealing their wallets and hiding it 01:48:29.600 |
somewhere, but if I can confuse smart people, that is the one piece of hope that I might be onto something. 01:48:35.760 |
- Wow, that's quite brilliant. Like as a gradient to optimize. 01:48:41.520 |
- Hang out with smart people and confuse them. 01:48:43.760 |
- And the more confusing it is, the more there's something there. 01:48:46.960 |
And as long as they're not telling you you're just a complete idiot, and they give you different reasons. 01:48:52.160 |
- And I mean, I'm, you know, if everyone, it's like with assembly theory and people said, 01:48:56.240 |
"Oh, it's wrong." And I was like, "Why?" And they're like, "And no one could give me a 01:48:59.520 |
consistent reason." They said, "Oh, because it's been done before, or it's just Kolmogorov, 01:49:03.520 |
or it's just that and the other." So I think the thing that I like to do is, and in academia, 01:49:09.440 |
it's hard, right? Because people are critical, but I mean, you know, the criticism, I mean, 01:49:16.480 |
although I got kind of upset about it earlier, which is kind of silly, but not silly, because 01:49:20.800 |
obviously it's hard work being on your own or with a team spatially separated, like during lockdown, 01:49:26.480 |
and trying to keep everyone on board and have some faith. I've always wanted to have a new idea. 01:49:34.800 |
And so, you know, I like a new idea and I want to nurture it as long as possible. 01:49:40.800 |
And if someone can give me actionable criticism, that's why I think I was trying to say earlier, 01:49:46.320 |
when I was kind of like stuck for words, give me actionable criticism. You know, it's wrong. Okay, 01:49:52.560 |
why is it wrong? You say, "Oh, your equation's incorrect for this," or "Your method is wrong." 01:49:57.920 |
And so what I try and do is get enough criticism from people to then triangulate and go back. And 01:50:04.880 |
I've been very fortunate in my life that I've got great colleagues, great collaborators, funders, 01:50:11.520 |
mentors, and people that will take the time to say, "You're wrong because..." And then what I 01:50:17.360 |
have to do is integrate the wrongness and go, "Oh, cool. Maybe I can fix that." And I think 01:50:22.640 |
criticism is really good. People have a go at me because I'm really critical. But I'm not criticizing 01:50:27.600 |
you as a person. I'm just criticizing the idea and trying to make it better and say, "Well, 01:50:33.600 |
what about this?" And sometimes I'm kind of, you know, my filters are kind of truncated in some 01:50:40.960 |
ways. I'm just like, "That's wrong. That's wrong. That's wrong. I want to do this." And people are 01:50:43.520 |
like, "Oh my God, you just told me you destroyed my life's work." I'm like, "Relax. No, I'm just 01:50:48.960 |
like, let's make it better." And I think that we don't do that enough because we're either personally 01:50:57.600 |
critical, which isn't helpful, or we don't give any criticism at all because we're too scared. 01:51:01.920 |
Yeah, I've seen you be pretty aggressively critical, but every time I've seen it, it's 01:51:14.000 |
I'm sure I make mistakes on that. I mean, I argue lots with Sarah, and she's kind of shocked. I've 01:51:24.320 |
argued with Joscha in the past, and he's like, "You're just making it up." I'm like, "No, not 01:51:30.320 |
quite, but kind of." I had a big argument with Sarah about time. She's like, "No, time doesn't 01:51:38.640 |
exist." I'm like, "No, no, time does exist." And as she realized that her conception of assembly 01:51:44.320 |
theory and my conception of assembly theory were the same thing, necessitated us to abandon the 01:51:50.960 |
fact that time is eternal, to actually really fundamentally question how the universe produces 01:51:56.880 |
combinatorial novelty. - So time is fundamental for assembly theory? I'm just trying to figure 01:52:03.520 |
out where you and Sarah converge. - I think assembly theory is fine in this time right now, 01:52:08.560 |
but I think it helps us understand that something interesting is going on. I've been really inspired 01:52:14.800 |
by a guy called Nicolas Gisin. I'm going to butcher his argument, but I love his argument a lot, so 01:52:20.000 |
I hope he forgives me if he hears about it. Basically, if you want free will, time has to 01:52:28.000 |
be fundamental, and if you want time to be fundamental, you have to give up on platonic 01:52:41.120 |
mathematics, and you have to use intuitionist mathematics, by the way. Again, I'm going to 01:52:48.080 |
butcher this, but basically, Hilbert said that infinite numbers are allowed, and I think it was 01:52:56.640 |
Brouwer who said, "No, all numbers are finite." So let's go back a step, because people are going 01:53:04.160 |
to say, "Assembly theory seems to explain that large combinatorial space allows you to produce 01:53:13.280 |
things like life and technology, and that large combinatorial space is so big, it's not even 01:53:19.360 |
accessible to a Sean Carroll, David Deutsch multiverse, that physicists saying that all of 01:53:29.200 |
the universe already exists in time is probably, provably - that's a strong word - not correct. 01:53:43.120 |
That we are going to know that the universe as it stands, the present, the way the present 01:53:48.000 |
builds the future, so big, the universe can't ever contain the future. And this is a really 01:53:55.840 |
interesting thing. I think Max Tegmark has this mathematical universe, which says, you know, 01:53:59.840 |
the universe is kind of like a block universe, and I apologize to Max if I'm getting it wrong, 01:54:04.400 |
but people think you can just move - you have the initial conditions, and you can run the universe, 01:54:11.840 |
and right to the end, and go backwards and forwards in that universe. That is not correct. 01:54:16.400 |
- Yeah, let me load that in. The universe is not big enough to contain the future. 01:54:22.160 |
- That's another, that's a beautiful way of saying that time is fundamental. 01:54:26.560 |
- Yes, and that you can have, and that's what, this is why the law of the excluded middle, 01:54:33.840 |
something is true or false, only works in the past. 01:54:40.000 |
Is it going to snow in New York next week, or in Austin? You might in Austin say probably not, 01:54:44.960 |
in New York you might say yeah. If you go forward to next week and say, did it snow in New York 01:54:50.240 |
last week, true or false, you can answer that question. The fact that the law of the excluded 01:54:55.200 |
middle cannot apply to the future explains why time is fundamental. 01:54:58.800 |
- Well, I mean, that's a good example, intuitive example, but it's possible that we might be able 01:55:05.040 |
to predict, you know, whether it's gonna snow if we had perfect information. 01:55:12.720 |
- Impossible. Impossible. So here's why. I'll make a really quick argument, 01:55:19.360 |
and this argument isn't mine, it's Nick's and a few other people. 01:55:22.960 |
- Can you explain his view on fundamental, on time being fundamental? 01:55:27.600 |
- Yeah, so I'll give my view, which kind of resonates with his, but basically, 01:55:33.520 |
it's very simple, actually. He would say that free will, your ability to design and do an experiment 01:55:38.640 |
is exercising free will. So he used that thought process. I never really thought about it that way, 01:55:44.160 |
and that you actively make decisions. I do think that, I used to think that free will was a kind of 01:55:51.680 |
consequence of just selection, but I'm kind of understanding that human free will is something 01:55:57.200 |
really interesting, and he very much inspired me. But I think that what Sarah Walker said that 01:56:03.440 |
inspired me as well, that these will converge, is that I think that the universe, and the universe 01:56:10.800 |
is very big, huge, but actually, the place that is largest in the universe right now is Earth. 01:56:20.720 |
- Yeah, I've seen you say that, and boy does that, that's an interesting one to process. 01:56:28.000 |
What do you mean by that, Earth is the biggest place in the universe? 01:56:30.800 |
- Because we have this combinatorial scaffolding going all the way back from Luca, 01:56:35.440 |
so you've got cells that can self-replicate, and then you go all the way to terraforming the Earth, 01:56:41.120 |
you've got all these architectures, the amount of selection that's going on, biological selection, 01:56:45.920 |
just to be clear, biological evolution, and then you have multicellularity, 01:56:50.560 |
then animals, and abstraction, and with abstraction, there was another kick, 01:56:55.520 |
because you can then build architectures, and computers, and cultures, and language, 01:57:01.120 |
and these things are the biggest things that exist in the universe, because we can just build 01:57:05.280 |
architectures that couldn't naturally arise anywhere, and the further that distance goes in 01:57:09.680 |
time, and it's gigantic, and-- - From a complexity perspective. 01:57:18.880 |
I mean, I know you're being poetic, but how do you know there's not other Earth-like, 01:57:22.880 |
like, how do you know, you're basically saying Earth is really special, it's awesome stuff as 01:57:30.480 |
far as we look out, there's nothing like it going on, but how do you know there's not nearly infinite 01:57:37.040 |
number of places where cool stuff like this is going on? 01:57:39.600 |
- I agree, and I would say, I'll say again that Earth is the most gigantic thing 01:57:46.080 |
we know in the universe, combinatorially, we know. - We know. 01:57:50.000 |
- Now, now, I guess, this is just purely a guess, I have no data, but other than hope, 01:57:56.080 |
well, maybe not hope, maybe, no, I have some data, that every star in the sky probably has planets, 01:58:04.640 |
and life is probably emerging on these planets, but the amount of contingency that is 01:58:11.120 |
associated with life is, I think, the combinatorial space associated with these planets is so 01:58:17.040 |
different, we're never gonna, our causal cones are never gonna overlap, or not easily, 01:58:22.560 |
and this is the thing that makes me sad about alien life, and it's why we have to create alien 01:58:26.720 |
life in the lab as quickly as possible, because I don't know if we are gonna be able to, 01:58:33.680 |
be able to build architectures that will intersect with alien intelligence and architectures. 01:58:42.640 |
- Intersect, you don't mean in time or space? - Time and the ability to communicate. 01:58:47.600 |
- The ability to communicate. - Yeah, my biggest fear, in a way, 01:58:51.200 |
is that life is everywhere, but we become infinitely more lonely because of our 01:58:55.280 |
scaffolding in that combinatorial space, because it's so big, and-- 01:59:00.080 |
- So you're saying the constraints created by the environment that led to the factory of 01:59:07.040 |
Darwinian evolution are just like this little tiny cone in a nearly infinite combinatorial space. 01:59:14.640 |
other cones like it? And why can't we communicate with them? Just because we can't create it 01:59:23.040 |
doesn't mean we can't appreciate the creation, right? Sorry, detect the creation. 01:59:29.440 |
- I truly don't know, but it's an excuse for me to ask for people to give me money to make 01:59:38.880 |
- Like another shameless plug. It's like, give me money, I need money. 01:59:42.240 |
- This was all a long plug for a planet simulator. - It's like, you know-- 01:59:46.400 |
- Hey, I'll be the first in line to donate. - My Rick garage has run out of room, you know? 01:59:54.640 |
- And this is a planet simulator, you mean like different kinds of planets, 02:00:00.240 |
environments and pressures? - Exactly, if we could basically 02:00:03.200 |
recreate the selection before biology, as we know it, that gives rise to a different biology, 02:00:10.080 |
we should be able to put the constraints on where I look in the universe. So here's the thing, 02:00:14.400 |
here's my dream. My dream is that by creating life in the lab, and based upon constraints we 02:00:21.120 |
understand, so let's go for Venus-type life or Earth-type life or something, again, do Earth 2.0, 02:00:26.000 |
screw it, let's do Earth 2.0. And Earth 2.0 has a different genetic alphabet, fine, that's fine, 02:00:32.080 |
different protein alphabet, fine, have cells and evolution, all that stuff. We will then be able to 02:00:38.880 |
say, okay, life is a more general phenomena, selection is more general than what we think is 02:00:45.360 |
the chemical constraints on life, and we can point the James Webb and other telescopes at other 02:00:50.000 |
planets in that zone that we are most likely to concomitantly overlap with, right? 02:00:57.920 |
So there are chemistry-- - You're looking for some overlap. 02:01:02.480 |
- And then we can then basically shine light on them literally, and look at light coming back, 02:01:08.240 |
and apply advanced assembly theory to general theory of language that we will get, and say, 02:01:14.800 |
huh, in that signal, it looks random, but there's a copy number. Oh, 02:01:20.320 |
this random set of things that shouldn't be, that looks like a true random number generator 02:01:28.000 |
has structure as a, not Kolmogorov, AIT-type structure, but evolutionary structure, 02:01:35.600 |
given by assembly theory, and we start to, but I would say that, 'cause I'm a shameless 02:01:40.000 |
assembly theorist. - Yeah, it just feels like the cone, 02:01:45.040 |
and I might be misusing the word cone here, but the width of the cone is growing faster, 02:01:49.680 |
is growing really fast to where eventually all the cones overlap, 02:01:55.440 |
even in a very, very, very large combinatorial space. 02:02:03.520 |
It just, but then again, if you're saying the universe is also growing very quickly 02:02:10.960 |
in terms of possibilities. - I hope that as we build abstractions, 02:02:19.840 |
the main, I mean, one idea is that as we go to intelligence, intelligence allows us to look 02:02:27.200 |
at the regularities around us in the universe, and that gives us some common grounding to discuss 02:02:34.080 |
with aliens, and you might be right, that we will overlap there, even though we have completely 02:02:41.680 |
different chemistry, literally completely different chemistry, that we will be able to 02:02:46.960 |
pass information to one another, but it's not a given, and I have to kind of try and divorce 02:02:56.960 |
hope and emotion away from what I can logically justify. 02:03:02.080 |
- But it's just hard to intuit a world, a universe, where there's 02:03:05.760 |
nearly infinitely complex objects, and they somehow can't detect each other. 02:03:12.480 |
- But the universe is expanding, but the nice thing is, I would say, I would look, you see, 02:03:17.200 |
I think Carl Sagan did the wrong thing, well, not the wrong thing, he flipped the Voyager probe 02:03:21.440 |
around for the Pale Blue Dot photo and said, "Look how big the universe is." I would have done it the other way 02:03:25.200 |
around and said, "Look at the Voyager probe that came from the planet Earth, that came from Luca, 02:03:28.720 |
look at how big Earth is." - Then it produced that. 02:03:34.080 |
- And that, I think, is completely amazing, and then that should allow people on Earth to think 02:03:38.960 |
about, well, probably we should try and get causal chains off Earth onto Mars, onto the Moon, 02:03:46.320 |
wherever, whether it's human life or Martian life that we create, it doesn't matter. But I think 02:03:54.880 |
this commentarial space tells us something very important about the universe, 02:03:58.080 |
and that I realized in assembly theory that the universe is too big to contain itself. 02:04:04.160 |
And I think this is, now coming back, and I want to kind of change your mind about time, 02:04:10.640 |
'cause I'm guessing that your time is just a coordinate. 02:04:16.880 |
- I'm guessing you're one of those, yeah. - I'm gonna change your, one of those, 02:04:20.080 |
I'm gonna change your mind in real time, or at least attempt. 02:04:22.000 |
- Oh, in real time, there you go. I already got the tattoo, 02:04:25.520 |
so this is gonna be embarrassing if you change my mind. 02:04:27.440 |
- But you can just add an arrow of time onto it, right? 02:04:30.880 |
- Yeah, true, just modify it. - Or erase it a bit. 02:04:36.160 |
So the thing that is really most interesting is, people say the initial conditions specify the future of the 02:04:42.880 |
universe. Okay, fine, let's say that's the case for a moment. Now let's go back to Newtonian 02:04:47.920 |
mechanics. Now, the uncertainty principle in Newtonian mechanics is this. If I give you the 02:04:56.640 |
coordinates of an object moving in space, and the coordinates of another object and they collide 02:05:02.720 |
in space, and you know those initial conditions, you should know exactly what's gonna happen. 02:05:07.840 |
However, you cannot specify these coordinates to infinite precision. 02:05:14.560 |
Now everyone said, "Oh, this is kind of like the chaos theory argument." No, no, it's deeper than 02:05:19.520 |
that. Here's a problem with numbers. This is where Hilbert and Brouwer fell out. 02:05:24.000 |
To have the coordinates of this object, a given object as it's colliding, 02:05:30.640 |
you have to have them to infinite precision. That's what Hilbert says. He says, "No problem, 02:05:34.560 |
infinite precision is fine. Let's just take that for granted." But when the object is finite, 02:05:42.080 |
and it can't store its own coordinates, what do you do? 02:05:45.040 |
So in principle, if a finite object cannot be specified to infinite precision, 02:05:53.040 |
in principle, the initial conditions don't apply. - Well, how do you know it can't store its... 02:05:59.920 |
- Well, how do you store an infinitely long number in a finite size? - Well, 02:06:09.440 |
we're using infinity very loosely here. - No, no, we're using... 02:06:12.480 |
- Infinite precision, I mean, not loosely, but... - Very precisely. 02:06:15.280 |
- So you think infinite precision is required? - Well, let's take the object. Let's say the 02:06:19.600 |
object is a golf ball. Golf ball is a few centimeters in diameter. We can work out 02:06:25.680 |
how many atoms are on the golf ball. And let's say we can store numbers down to atomic dislocations. 02:06:31.200 |
So we can work out how many atoms there are in the golf ball, and we can store the coordinates 02:06:36.960 |
that's in that golf ball down to that number. But beyond that, we can't. Let's make the golf ball 02:06:41.040 |
smaller. And this is where I think that we think that we get randomness in quantum mechanics. 02:06:48.080 |
And some people say, "You can't get randomness in quantum mechanics, it's deterministic." 02:06:50.800 |
But aha, this is where we realize that classical mechanics and quantum mechanics suffer from the 02:06:56.160 |
same uncertainty principle. And that is the inability to specify the initial conditions 02:07:04.400 |
to a precise enough degree to give you determinism. The universe is intrinsically 02:07:11.120 |
too big, and that's why time exists. It's non-deterministic. Looking back into the past, 02:07:18.080 |
you can use logical arguments because you can say, "Was it true or false?" You already know. 02:07:24.240 |
But the fact we are unable to predict the future with the precision 02:07:29.760 |
is not evidence of lack of knowledge, it's evidence the universe is generating new things. 02:07:36.480 |
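[A rough back-of-the-envelope sketch of the golf-ball argument in Python. The specific numbers here (ball diameter, atomic spacing, one digit of storage per atom) are illustrative assumptions, not figures from the conversation; the only point is that a finite object can hold a finite number of digits, while an exact real-valued coordinate needs infinitely many.]

```python
import math

# Illustrative assumptions, not measured values.
ball_diameter_m = 0.043      # a golf ball is roughly 4.3 cm across
atom_spacing_m = 2.5e-10     # rough interatomic spacing

# Crude atom count: volume of the ball divided by volume per atom.
ball_volume = (4 / 3) * math.pi * (ball_diameter_m / 2) ** 3
n_atoms = ball_volume / atom_spacing_m ** 3

print(f"Atoms in the ball:            ~{n_atoms:.1e}")
print(f"Digits storable (1 per atom): ~{n_atoms:.1e}")
# Writing down a coordinate to infinite precision needs infinitely many digits,
# so the ball cannot store its own exact position -- the mismatch Cronin points to.
```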
- Okay, so to you, first of all, quantum mechanics, you can just say statistically what's going to 02:07:42.720 |
happen when two golf balls hit each other. - Statistically, but sure, I can say statistically 02:07:47.760 |
what's going to happen, but then when they do happen, and you keep nesting it together, 02:07:53.440 |
you can't... I mean, it goes almost back to look at... Let's think about entropy in the universe. 02:07:59.200 |
So how do we understand entropy change? Well, we could do the... Look at our process. We can 02:08:06.880 |
use the Ergodic hypothesis. We can also have the counterfactuals, where we have all the different 02:08:18.000 |
states, and we can even put that in the multiverse, right? But both those are kind of... 02:08:23.360 |
They're non-physical. The multiverse kind of collapses back to the same problem about the 02:08:31.520 |
precision. So if you accept you don't have to have true and false going forward into the future, 02:08:42.240 |
the real numbers are real. They're observables. - We're trying to see exactly where time being 02:08:50.000 |
fundamental sneaks in, in this difference between the golf ball can't contain its own position 02:08:57.840 |
perfectly, precisely, how that leads to time needing to be fundamental. 02:09:05.040 |
- Let me... Do you believe or do you accept you have free will? 02:09:10.640 |
- Yeah, I think at this moment in time, I believe that I have free will. 02:09:17.520 |
- So then you have to believe that time is fundamental. 02:09:21.200 |
- I understand that's a statement you've made. - Well, no, that we can logically follow, 02:09:26.640 |
because if you don't have free will... So if you're in a universe that has no time, 02:09:32.160 |
the universe is deterministic. If it's deterministic, then you have no free will. 02:09:36.160 |
- I think the space of how much we don't know is so vast that saying the universe 02:09:43.120 |
is deterministic and from that jumping, there's no free will is just too difficult of a leap. 02:09:48.000 |
- No, I logically follow. No, no, I don't disagree. I'm not saying any... I mean, it's deep 02:09:53.440 |
and it's important. All I'm saying, and it's actually different to what I've said before, 02:10:00.800 |
is that if you don't require platonistic mathematics and accept that non-determinism 02:10:08.320 |
is how the universe looks, and that gives us our creativity and the way the universe is getting 02:10:14.000 |
novelty, it's kind of really deeply important in assembly theory, because assembly theory starts 02:10:18.640 |
to actually give you a mechanism why you go from boring time, which is basically initial conditions 02:10:24.400 |
specify everything, to a mismatch in creative time. And I hope we'll do experiments. I think 02:10:29.600 |
it's really important to... I would love to do an experiment that proves that time is fundamental 02:10:35.360 |
and the universe is generating novelty. I don't know all the features of that experiment yet, 02:10:41.840 |
but by having these conversations openly and getting people to think about the problems in 02:10:48.640 |
a new way, better people, more intelligent people with good mathematical backgrounds can say, "Oh, 02:10:54.800 |
hey, I've got an idea." I would love to do an experiment that shows that the universe... I mean, 02:11:01.360 |
universe is too big for itself going forward in time. And this is why I really hate the idea of 02:11:09.040 |
the Boltzmann brain. The Boltzmann brain makes me super... Everyone's having a free lunch. It's like 02:11:14.880 |
let's break all the laws of physics. So, a Boltzmann brain is this idea that in a long 02:11:20.480 |
enough universe, a brain will just emerge in the universe as conscious. And that neglects the causal 02:11:25.840 |
chain of evolution that's required to produce that brain. And this is where the computational 02:11:30.640 |
argument really falls down because the computational move is to say, "I can calculate the probability of a 02:13:34.480 |
Boltzmann brain." And they'll give you a probability, but I can calculate the probability of a Boltzmann 02:11:38.800 |
brain, zero. - Just because the space of possibility is so large? - Yeah. It's like when we start 02:11:44.880 |
fooling ourselves with numbers that we can't actually measure and we can't ever conceive of, 02:11:50.000 |
I think it doesn't give us a good explanation. And I've become... I want to explain why life 02:11:59.360 |
is in the universe. I think life is actually a novelty miner. I mean, life basically mines novelty 02:12:05.920 |
almost from the future and actualizes it in the present. - Okay. Life is a novelty miner 02:12:16.080 |
from the future that is actualized in the present. - Yeah. 02:12:20.240 |
I think so. - Novelty miner. First of all, novelty. What's the origin of novelty 02:12:30.560 |
when you go from boring time to creative time? Where is that? Is it as simple as randomness, 02:12:36.800 |
like you're referring to? - I'm really struggling with randomness because I had a really good 02:12:42.480 |
argument with Joscha Bach about randomness. And he said, "Randomness doesn't give you free will. 02:12:47.360 |
That's insane because you'd just be random." And I think he's right at that level, but I don't 02:12:53.120 |
think he is right on another level. And it's not about randomness. It's about constrained... 02:13:00.720 |
I'm going to sound like... Constrained opportunity. I'm making this up as I go along, 02:13:05.680 |
so I'm making this up. Constrained opportunity. So, what I mean is like... So, you have to have... 02:13:11.600 |
So, the novelty... What is novelty? This is why I think it's a funny thing. If you ever want to 02:13:18.960 |
discuss AI, why I think everyone's kind of gone AI mad is that they're misunderstanding 02:13:24.160 |
novelty. But let's think about novelty. Yes, what is novelty? So, I think novelty is a genuinely 02:13:30.720 |
new configuration that is not predicted by the past, right? And that you discover in the present, 02:13:38.800 |
right? And that is truly different, right? Now, everyone says that... Some people say 02:13:44.320 |
that novelty doesn't exist. It's always with precedent. I want to do experiments that show 02:13:49.680 |
that that is not the case. And it goes back to a question you asked me a few moments ago, which is, 02:13:55.440 |
where is the factory? Right? Because I think the same mechanism that gives us a factory 02:14:01.440 |
gives us novelty. And I think that that is why I'm so deeply hung up on time. I mean, 02:14:07.200 |
of course I'm wrong, but how wrong? And I think that life opens up that commentarial space in a 02:14:16.000 |
way that our current laws of physics, although as contrived in a deterministic initial condition 02:14:25.280 |
universe, even with the get out of the multiverse, David Deutsch style, which I love by the way, 02:14:30.400 |
but I don't think is correct. But it's really beautiful. David Deutsch's conception of the 02:14:40.080 |
multiverse is kind of like given. But I think that the problem with wave particle duality and 02:14:47.280 |
quantum mechanics is not about the multiverse. It's about understanding how determined the past 02:14:54.640 |
is. Well, I don't just think that actually this is a discussion I was having with Sarah about that, 02:15:00.400 |
right? She was like, "Oh, I think we'd have been debating this for a long time now about how do we 02:15:07.280 |
reconcile novelty, determinism, indeterminism." - So just to clarify, both you and Sarah think 02:15:15.920 |
the universe is not deterministic. - I won't speak for Sarah, but I think that the universe 02:15:23.760 |
is deterministic looking back in the past, but undetermined going forward in the future. So I'm 02:15:34.480 |
kind of having my cake and eating it here. This is because I fundamentally don't understand 02:15:38.400 |
randomness, right? As Joscha told me or other people told me. But if I adopt a new view now, 02:15:43.840 |
which the new view is the universe is just non-deterministic, but I'd like to refine that 02:15:49.280 |
and say the universe appears deterministic going back in the past, but it's undetermined going 02:15:56.000 |
forward in the future. So how can we have a universe that has deterministically looking 02:16:02.160 |
rules that's non-determined going into the future? It's this breakdown in precision in 02:16:06.720 |
the initial conditions. And we have to just stop using initial conditions and start looking at 02:16:11.520 |
trajectories and how the combinatorial space behaves in an expanding universe in time and space. 02:16:21.600 |
And assembly theory helps us quantify the transition to biology, and biology appears 02:16:27.120 |
to be novelty mining because it's making crazy stuff that is unique to Earth, right? 02:16:34.240 |
There are objects on earth that are unique to earth that will not be found anywhere else 02:16:38.800 |
because you can do the combinatorial math. - What was that statement you made about life 02:16:43.840 |
is novelty mining from the future? What's the little element of time that you're introducing? 02:16:51.280 |
- So what I'm kind of meaning is because the future is bigger than the present, 02:16:54.960 |
in a deterministic universe, how do the states go from one to another? I mean, 02:17:01.360 |
there's a mismatch, right? So that must mean that you have a little bit of indeterminism, 02:17:06.320 |
whether that's randomness or something else. I don't understand. I want to do experiments to 02:17:10.880 |
formulate a theory to refine that as we go forward that might help us explain that. And I think that's 02:17:16.560 |
why I'm so determined to try and crack the non-life-to-life transition, looking at networks 02:17:24.960 |
and molecules, and that might help us think about it, the mechanism. But certainly the future is 02:17:29.760 |
bigger than the past, in my conception of the universe and some conception of the universe. 02:17:34.560 |
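[A sketch of the kind of "combinatorial math" Cronin alludes to a few lines up, using a standard illustration rather than anything from the conversation: the space of possible biopolymers dwarfs what the universe could ever exhaustively sample, so a specific complex molecule found in high copy number points to selection rather than chance.]

```python
AMINO_ACIDS = 20                      # standard protein alphabet
LENGTH = 100                          # a short protein

possible_sequences = AMINO_ACIDS ** LENGTH     # 20**100, about 1.3e130
atoms_in_observable_universe = 10 ** 80        # common order-of-magnitude estimate

print(f"Possible 100-residue sequences: ~{possible_sequences:.1e}")
print(f"Atoms in observable universe:   ~{atoms_in_observable_universe:.1e}")
# Even one molecule per atom for the age of the universe barely scratches this
# space, which is why a high-assembly-index object in many copies is informative.
```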
- By the way, that's not obvious, right? The future being bigger than the past. 02:17:42.320 |
Well, that's one statement, and the statement that the universe is not big enough to contain 02:17:46.560 |
the future is another statement. - Yeah. Yeah, yeah, yeah. 02:17:49.840 |
- That one is a big one. That one's a really big one. 02:17:52.640 |
- I think so. But I think it's entirely... Because look, we have the second law. And right now, 02:17:59.600 |
I mean, we don't need the second law if the future's bigger than the past. It follows naturally. 02:18:05.680 |
So why are we retrofitting all these sticking plasters onto our reality to hold onto a 02:18:12.400 |
timeless universe? - Yeah, but that's because 02:18:15.040 |
it's kind of difficult to imagine the universe that can't contain the future. 02:18:21.120 |
- But isn't that really exciting? - It's very exciting, but it's hard. 02:18:26.960 |
I mean, we're humans on Earth, and we have a very kind of four-dimensional conception of the world. 02:18:35.600 |
Of 3D plus time. It's just hard to intuit a world where... What does that even mean? 02:18:41.760 |
A universe that can't contain the future. - Yeah, it's kind of crazy, but obvious. 02:18:50.240 |
- It's weird. I mean, I suppose it sounds obvious, yeah, if it's true. 02:18:53.920 |
- But the nice thing is you can... So the reason why assembly theory turned me onto that 02:19:00.160 |
was that... Let's just start in the present and look at all the complex molecules and go backwards 02:19:05.680 |
in time and understand how evolutionary processes gave rise to them. It's not at all obvious that 02:19:15.280 |
taxol, which is one of the most complex natural products produced by biology, was going to be 02:19:21.360 |
invented by biology. It's an accident. You know, taxol is unique to Earth. There's no taxol 02:19:26.960 |
elsewhere in the universe. And taxol was not decided by the initial conditions. It was decided 02:19:34.320 |
by this kind of interplay between the... So the past simply is embedded in the present. It gives 02:19:42.720 |
some features, but why the past doesn't map to the future one-to-one is because the universe is too 02:19:49.120 |
big to contain itself. That gives space for creativity, novelty, and some things which are 02:19:56.160 |
unpredictable. - Okay, so given that you're disrespecting the power of the initial conditions, 02:20:02.720 |
let me ask you about... So how do you explain that cellular automata are able to produce such 02:20:07.680 |
incredible complexity given just basic rules and basic initial conditions? - I think that this 02:20:14.320 |
falls into the Brouwer-Hilbert trap. So how do you get a cellular automata producing a complexity? 02:20:23.040 |
You have a computer, you generate a display, and you map the change of that in time. There are some 02:20:29.680 |
CAs that repeat, like functions. It's fascinating to me that for pi, there is a formula where you 02:20:36.080 |
can go to the millionth decimal place of pi and read out the number without having to go there. 02:20:42.480 |
But there are some numbers where you can't do that. You have to just crank through. Whether it's 02:20:48.400 |
Wolframian computational irreducibility or some other thing, that doesn't matter. But these CAs, 02:20:54.800 |
that complexity, is that just complexity or a number that is basically you're mining that 02:21:02.960 |
number in time? Is that just a display screen for that number, that function? - Well, can't you say 02:21:09.600 |
the same thing about the complexity on Earth then? - No, because the complexity on Earth 02:21:13.520 |
has a copy number and an assembly index associated with it. That CA is just a number running. 02:21:18.960 |
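[The digit-extraction formula mentioned above is presumably the Bailey-Borwein-Plouffe (BBP) formula, which reads out the n-th hexadecimal (rather than decimal) digit of pi without computing the earlier ones; a minimal sketch using the standard modular-exponentiation trick:]

```python
def _series(j: int, n: int) -> float:
    """Fractional part of sum_k 16**(n-k) / (8k + j), the BBP building block."""
    s = 0.0
    for k in range(n + 1):                      # head terms via modular exponentiation
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(n + 1, n + 7):               # a few geometrically shrinking tail terms
        s += 16.0 ** (n - k) / (8 * k + j)
    return s % 1.0

def pi_hex_digit(n: int) -> str:
    """The (n+1)-th hexadecimal digit of pi after the point, computed directly."""
    x = (4 * _series(1, n) - 2 * _series(4, n) - _series(5, n) - _series(6, n)) % 1.0
    return "0123456789abcdef"[int(x * 16)]

print("".join(pi_hex_digit(i) for i in range(8)))  # pi = 3.243f6a88... in hex
```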
- You don't think it has a copy number? Wait a minute. - Well, it does in the human, where we're 02:21:25.360 |
looking at humans producing different rules, but then it's nested on selection. So those CAs are 02:21:29.680 |
produced by selection. I mean, the CA is such a fascinating pseudo-complexity generator. What I 02:21:37.760 |
would love to do is understand, quantify the degree of surprise in a CA and run it long enough. 02:21:43.680 |
But what I guess that means is we have to instantiate, we have to have a number 02:21:47.840 |
of experiments where we're generating different rules and running them in time steps. But, 02:21:51.840 |
ah, got it. CAs are mining novelty in the future by iteration, right? And you're like, oh, that's 02:22:00.160 |
great, that's great. You didn't predict it. Some rules you can predict what's going to happen, 02:22:04.080 |
other rules you can't. So for me, if anything, CAs are evidence that the universe is too big 02:22:09.440 |
to contain itself, because otherwise you'd know what the rules are going to do forever more. 02:22:13.440 |
- Right. I guess you were saying that the physicist saying that all you need is the 02:22:19.600 |
initial conditions and the rules of physics is somehow missing the bigger picture. 02:22:25.120 |
And if you look at CAs, all you need is the initial condition and the rules and then run the thing. 02:22:32.960 |
- You need three things. You need the initial conditions, you need the rules, 02:22:40.400 |
and you need time, iteration to mine it out. Without the coordinate, you can't get it out. 02:22:44.640 |
- Sure. And that, to you, is time being fundamental. 02:22:47.280 |
- And you can't predict it from initial conditions. If you could, then it'd be fine. 02:22:51.440 |
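[A minimal sketch of the "three things" point with an elementary cellular automaton. Rule 30 is chosen here as the canonical hard-to-predict rule; the conversation doesn't name a specific one. The pattern only appears by actually iterating: initial condition, rule, and time.]

```python
RULE, WIDTH, STEPS = 30, 63, 30

def step(cells: list[int], rule: int) -> list[int]:
    """One time step of an elementary CA with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # value 0..7
        out.append((rule >> neighbourhood) & 1)               # look up the rule-table bit
    return out

cells = [0] * WIDTH          # (1) initial condition: a single live cell
cells[WIDTH // 2] = 1
for _ in range(STEPS):       # (3) time: iterate to "mine out" the pattern
    print("".join("#" if c else " " for c in cells))
    cells = step(cells, RULE)  # (2) the rule
```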
- And that time is the foundation of, this is the history, the memory of each of the things 02:22:58.560 |
it created. It has to have that memory of all the things that led up to it. 02:23:04.400 |
- I think it's, yeah, you have to have the resource. Because time is a fundamental 02:23:09.280 |
resource. And yeah, I'm becoming, I think I had a major 02:23:16.000 |
epiphany about randomness, but I keep doing that every two days and then it goes away again, 02:23:25.760 |
- You should be as well. If you believe in free will, the only conclusion is time is fundamental, 02:23:33.040 |
otherwise you cannot have free will, it logically follows. 02:23:35.600 |
- Well, the foundation of my belief in free will is observation driven. I think if you use logic, 02:23:52.480 |
logically it seems like the universe is deterministic. 02:23:55.120 |
- Looking backwards in time, and that's correct, the universe is. 02:23:58.160 |
- And then everything else is a kind of leap, it requires a leap. 02:24:03.520 |
- I mean, I think that, it's kind of, this is, I think machine learning is going to provide a big, 02:24:13.600 |
a chunk of that, right? Because it helped us explain this. So the way I'd say, if you take-- 02:24:18.480 |
- That's interesting, why? - Well, let's just, my favorite one is, 02:24:25.440 |
because I'm, the AI doomers are driving me mad, and the fact that we don't have any intelligence 02:24:30.880 |
yet, I call AI autonomous informatics, just to make people grumpy. 02:24:34.640 |
- Yeah, you're saying we're quite far away from AGI. 02:24:38.240 |
- I think that we have no conception of intelligence, and I think that we don't 02:24:45.120 |
understand how the human brain does what it does. I think that we are, neuroscience is making great 02:24:49.840 |
advances, but I think that we have no idea about AGI. So I am a technological, I guess, optimist. 02:24:57.680 |
I believe we should do everything, the whole regulation of AI is nonsensical. I mean, 02:25:02.240 |
why would you regulate Excel, other than the fact that Clippy should come back, and I love Excel 97, 02:25:06.800 |
'cause we can play, we can do the flight simulator. - I'm sorry, in Excel? 02:25:12.480 |
- Yeah, have you not played the flight simulator in 99? - In Excel 97? 02:25:16.000 |
- Yeah, yeah, yeah. - What does that look like? 02:25:18.640 |
- It's like wireframe, very basic, but basically I think it's X zero, Y zero, 02:25:25.360 |
shift, and it opens up, and you can play the flight simulator. 02:25:28.320 |
- Oh, wow. Wait, wait, wait, is it using Excel? - Excel, Excel 97. 02:25:35.360 |
and saw Clippy again for the first time in a long time. 02:25:37.680 |
- Well, Clippy is definitely coming back, but you're saying we don't have a great understanding 02:25:44.400 |
of what is intelligence, what is the intelligence-- - I am very frustrated-- 02:25:48.400 |
- Underpinning the human mind. - I'm very frustrated by the way that 02:25:52.320 |
we're AI-dooming right now, and people are bestowing some kind of magic. Now, 02:25:58.000 |
let's go back a bit. So you said about AGI, are we far away from AGI? Yes, I do not think we're 02:26:03.760 |
going to get to AGI anytime soon. I've seen no evidence of it, and the AI-doom scenario is 02:26:09.440 |
nonsensical in the extreme, and the reason why I think it's nonsensical, but it's not-- 02:26:14.560 |
I don't think there isn't things we should do and be very worried about, right? I mean, 02:26:20.560 |
there are things we need to worry about right now, what AI are doing, whether it's fake data, 02:26:25.120 |
fake users, right? I want authentic people or authentic data. I don't want everything to be 02:26:31.040 |
faked, and I think it's a really big problem, and I absolutely want to go on the record to say I 02:26:34.960 |
really worry about that. What I'm not worried about is that some fictitious entity is going 02:26:39.840 |
to turn us all to paperclips, or detonate nuclear bombs, I don't know, maybe, I don't know, 02:26:47.840 |
anything you can't think of. Why is this-- and I'll take a very simple series of logical arguments, 02:26:55.040 |
and the AI-doomers have not had the correct epistemology. They do not understand what 02:27:09.120 |
knowledge is, and until we understand what knowledge is, they're not going to get anywhere, 02:27:13.760 |
because they're applying things falsely. So let me give you a very simple argument. People talk 02:27:18.480 |
about the probability, P(doom) for AI. We can work out the probability of an asteroid hitting the planet, 02:27:26.000 |
why? Because it's happened before. We know the mechanism, we know that there's a gravity well, 02:27:30.320 |
or that space-time is bent and stuff falls in. We don't know the probability of AGI because we have 02:27:36.080 |
no mechanism. So let me give you another one, which is like, I'm really worried about AG. 02:27:40.720 |
What's AG? AG is anti-gravity. One day we could wake up and anti-gravity is discovered, we're all 02:27:47.680 |
going to die, the atmosphere is going to float away, we're going to float away, we're all doomed. 02:27:52.400 |
What is the probability of AG? We don't know because there's no mechanism for AG. Do we 02:27:57.600 |
worry about it? No. And I don't understand the current reason for certain people in certain 02:28:09.520 |
areas to be generating this nonsense. I think they're not doing it maliciously. I think we're 02:28:14.720 |
observing the emergence of new religions, how religions come because religions are about some 02:28:19.680 |
controls. You've got the optimist saying AI is going to cure us all and AI is going to kill us 02:28:24.160 |
all. What's the reality? Well, we don't have AI, we have really powerful machine learning tools 02:28:29.520 |
and they will allow us to do interesting things and we need to be careful about 02:28:32.880 |
how we use those tools in terms of manipulating human beings and faking stuff, right? 02:28:38.640 |
Right. Well, let me try to sort of steel man the AI Doomer's argument. Actually, I don't know, 02:28:45.200 |
are AI Doomers in the Yudkowsky camp saying it's definitely going to kill us? Because there's a 02:28:51.360 |
spectrum. 95% I think is the limit. 95% plus? No, not plus. I don't know, I was seeing on Twitter 02:28:57.840 |
today various things but I think Yudkowsky is at 95%. But to belong to the AI Doomer club, 02:29:04.160 |
is there a threshold? I don't know what the membership is. Maybe. And what are the fees? 02:29:08.160 |
I think Scott Aaronson, I was quite surprised, I saw this online so it could be wrong, 02:29:15.120 |
so sorry if it's wrong, says 2%. But the thing is, if someone said there's a 2% chance you're 02:29:21.760 |
going to die going into the lift, would you go into the lift? In the elevator for the American 02:29:26.400 |
English speaking audience. Well, no, not for the elevator. So I would say anyone higher than 2%, 02:29:33.920 |
I mean, I think there's a 0% chance of AGI Doom. Zero. 02:29:37.520 |
Just to push back on the argument: even though we have an N of zero on AGI, we can see on Earth that there are 02:29:45.280 |
increasing levels of intelligence of organisms. We can see what humans with extra intelligence 02:29:51.200 |
were able to do to the other species. So that is a lot of samples of data on what a delta in 02:30:02.240 |
intelligence gives you. When you have an increase in intelligence, how you're able to dominate 02:30:06.880 |
a species on Earth. And so the idea there is that if you have a being that's 10x smarter than humans, 02:30:15.920 |
we're not gonna be able to predict what that's going to, what that being is gonna be able to do, 02:30:24.160 |
especially if it has the power to hurt humans. Which you can imagine a lot of trajectories in 02:30:30.160 |
which the more benefit AI systems give, the more control we give to those AI systems over 02:30:36.640 |
our power grid, over our nuclear weapons or weapons of any sort. And then it's hard to know 02:30:44.560 |
what an ultra-intelligent system would be able to do in that case. You don't find that convincing. 02:30:49.440 |
I think this is, I would fail that argument 100%. Here's a number of reasons to fail it on. 02:30:54.000 |
First of all, we don't know where the intention comes from. The problem is that people think 02:31:00.080 |
they keep, I've been watching all the hucksters online with the prompt engineering and all this 02:31:04.560 |
stuff. When I talk to a typical AI computer scientist, they keep talking about the AI 02:31:13.120 |
as having some kind of decision making ability. That is a category error. The decision making 02:31:18.720 |
ability comes from human beings. We have no understanding of how humans make decisions. 02:31:23.120 |
We've just been discussing free will for the last half an hour, right? We don't even know what that 02:31:27.440 |
is. So the intention, I totally agree with you. People who intend to do bad things can do bad 02:31:34.160 |
things and we should not let that risk go. That's totally here and now. I do not want that to happen 02:31:40.880 |
and I'm happy to be regulated to make sure that systems I generate, whether they're like 02:31:45.520 |
computer systems or, you know, I'm working on a new project called Chem Machina. 02:31:56.320 |
- For people who don't understand the point, Ex Machina is a great film about, 02:32:03.360 |
I guess, AGI embodied and Chem is the chemistry version of that. 02:32:07.200 |
- And I only know one way to embody intelligence, that's in chemistry and human brains. 02:32:11.280 |
So category error number one is agents, they have agency. Category error number two is saying that, 02:32:17.120 |
assuming that anything we make is going to be more intelligent. Now you didn't say super 02:32:22.080 |
intelligent. I'll put the words into our mouths here, super intelligent. That, I think that there 02:32:28.640 |
is no reason to expect that we are going to make systems that are more intelligent, more capable. 02:32:37.360 |
You know, when people play chess computers, they don't expect to win now, right? They just, the 02:32:42.400 |
chess computer is very good at chess. That doesn't mean it's super intelligent. So I think that 02:32:48.240 |
super intelligence, I mean, I think even Nick Bostrom is pulling back on this now, 02:32:52.560 |
because he invented this. So I see this a lot. When did this first happen? Eric Drexler, 02:32:58.000 |
nanotechnology, atomically precise machines. He came up with a world where we had these 02:33:02.400 |
atom cogs everywhere, they were going to make self-replicating nanobots. Not possible, why? 02:33:07.760 |
Because there's no resources to build these self-replicating nanobots. You can't get the 02:33:11.760 |
precision, it doesn't work. It was a major category error in taking engineering principles down to the 02:33:17.600 |
molecular level. The only functioning molecular technology we know, sorry, the only functioning 02:33:23.120 |
nanomolecular technology we know is produced by evolution. There. So now let's go forward to 02:33:28.080 |
AGI. What is AGI? We don't know. It's super, it can do things humans can't think of. I would argue 02:33:34.080 |
that the only AGIs that exist in the universe are produced by evolution. 02:33:39.920 |
And sure, we may be able to make our working memory better, we might be able to do more things. 02:33:46.080 |
The human brain is the most compact computing unit in the universe. It uses 20 watts, 02:33:51.280 |
it uses a really limited volume, it's not like a ChatGPT cluster which has to have thousands of 02:33:57.680 |
watts, a model that's generated and has to be corrected by human beings. You are autonomous 02:34:02.400 |
and embodied intelligence. So I think that there are so many levels that we're missing out. We've 02:34:08.400 |
just kind of went, oh, we've discovered fire, oh gosh, the planet's just going to burn one day, 02:34:14.000 |
randomly. I mean, I just don't understand that leap. There are bigger problems we need to worry 02:34:18.800 |
about. So what is the motivation? Why are these people, let's assume they're earnest, 02:34:24.400 |
have this conviction? Well, I think it's kind of, they're making leaps that, they're trapped in a 02:34:32.160 |
virtual reality that isn't reality. - Well, I mean, I could continue a set of arguments here, 02:34:37.440 |
but also it is true that ideologies that fear monger are dangerous because you can then use it 02:34:47.360 |
to control, to regulate in a way that halts progress, to control people, to cancel people, 02:34:57.280 |
all that kind of stuff. So you have to be careful because reason ultimately wins, right? 02:35:03.200 |
But there is a lot of concerns with superintelligent systems, very capable systems. 02:35:08.080 |
I think when you hear the word superintelligent, you're hearing it's smarter than humans in every 02:35:15.760 |
way that humans are smart. But the paperclip manufacturing system doesn't need to be smart 02:35:25.600 |
in every way. It just needs to be smart in a set of specific ways. And the more 02:35:31.200 |
capable the AI systems become, the more you could see us giving them control over, like I said, 02:35:36.400 |
our power grid, a lot of aspects of human life. And that means they will be able to do more and 02:35:41.600 |
more damage when there's unintended consequences that come to life. - I think that that's right, 02:35:48.000 |
that the unintended consequences we have to think about, and that I fully agree with. But let's go 02:35:53.920 |
back a bit. Sentient, I mean, again, I'm far away from my comfort zone and all this stuff, 02:35:59.120 |
but hey, let's talk about it because I'll give myself a qualification. - Yeah, we're both 02:36:03.600 |
qualified and sentient, I think, as much as anyone else. - I think the paperclip scenario is just such 02:36:09.200 |
a poor one because let's think about how that would happen. And also let's think about, we are 02:36:13.840 |
being so unrealistic about how much of the Earth's surface we have commandeered. For paperclip 02:36:23.360 |
manufacturing to really happen, I mean, do the math. It's like, it's not going to happen. There's 02:36:28.560 |
not enough energy, there's not enough resource, where is it all going to come from? I think that 02:36:33.040 |
what happens in evolution is really, why has a killer virus not killed all life on Earth? Well, 02:36:41.440 |
what happens is, sure, super killer viruses that kill the ribosome have emerged, but you know what 02:36:46.240 |
happens? They nuke a small space, and because they can't propagate, they die. So there's this 02:36:52.000 |
interplay between evolution and propagation, right, and death. And so- - In evolution. You don't think 02:36:58.000 |
it's possible to engineer, for example, sorry to interrupt, but like a perfect virus that's deadly 02:37:03.360 |
enough? - No. Nonsensical. I think that just wouldn't, again, it wouldn't work because it's 02:37:08.160 |
too deadly. It would just kill the radius and not replicate it. - Yeah. I mean, you don't think it's 02:37:12.880 |
possible to get a- - I mean, if you were super, I mean, if you were- - Not kill all of life on Earth, 02:37:21.280 |
but kill all humans. There's not many of us. There's only like eight billion. There are so many 02:37:27.360 |
more ants. - I mean, I don't- - So many more ants. And they're pretty smart. - I think the nice thing 02:37:34.560 |
about where we are, I would love for the AI crowd to take a leaf out of the book of the bio warfare, 02:37:42.480 |
chemical warfare crowd. I mean, not love, 'cause actually people have been killed with chemical 02:37:48.800 |
weapons in the First and Second World War, and bioweapons have been made, and we can argue about 02:37:53.760 |
COVID-19 and all this stuff. Let's not go there just now. But I think there is a consensus that 02:37:58.320 |
some certain things are bad and we shouldn't do them, right? And sure, it would be possible for 02:38:04.560 |
a bad actor to engineer something bad, but the damage would be, we would see it coming, 02:38:11.760 |
and we would be able to do something about it. Now, I guess what I'm trying to say is 02:38:21.360 |
when people talk about doom and they just, when you ask them for the mechanism, they just say, 02:38:26.000 |
they just make something up. I mean, in this case, I'm with Yann LeCun. I think he put out a very 02:38:32.880 |
good point about trying to regulate jet engines before we've even invented them. And I think 02:38:38.160 |
that's what I'm saying. I'm not saying we should, I just don't understand why these guys are going 02:38:43.200 |
around literally making stuff up about us all dying, when basically we need to actually really 02:38:49.760 |
focus on. Now, let's say some actors are earnest, right? Let's say Yudkowsky is being 02:38:56.240 |
earnest, right? And he really cares, but he loves it. He goes, and then you're all going to die. 02:39:01.600 |
It's like, why don't we try and do the same thing and say, you could do this, and then you're going 02:39:04.960 |
to be happy forever after. - Well, I think there's several things to say there. One, I think there is 02:39:11.600 |
a role in society for people that say we're all going to die, because I think it filters through 02:39:17.760 |
as a message, as a viral message, that gives us the proper amount of concern. Meaning not the, 02:39:24.800 |
it's not 95%, but when you say 95% and it filters through society, it'll give an average of like a 02:39:32.800 |
0.03%, an average. So it's nice to have people that are like, we're all going to die, then we'll 02:39:39.600 |
have a proper concern. Like, for example, I do believe we're not properly concerned about the 02:39:44.880 |
threat of nuclear weapons currently. It just seems like people have forgotten that that's a thing, 02:39:51.600 |
and there's a war in Ukraine with nuclear power involved, there's nuclear power throughout the 02:39:57.600 |
world, and it just feels like we're on the brink of a potential world war to a percentage that I 02:40:03.680 |
don't think people are properly calibrating in their head. We're all thinking it's a Twitter 02:40:08.960 |
battle as opposed to actual threat. So it's nice to have that kind of level of concern. But to me, 02:40:16.080 |
when I hear AI doomers, what I'm imagining is, with unintended consequences, a potential situation 02:40:23.120 |
where, let's say, 5% of the world suffers deeply because of a mistake made of unintended consequences. 02:40:33.120 |
I don't imagine the entirety of human civilization dying, but there could be a lot of suffering if 02:40:37.920 |
this is done poorly. - I understand that, and I guess I'm involved in the whole hype cycle. 02:40:44.160 |
I would like us to... I don't want us to... So what's happening right now is there seems to be... 02:40:51.280 |
So let's say, having some people saying AI doom is a worry, fine, let's give them that. 02:40:58.800 |
But what seems to be happening is there seems to be people who don't think AI is doing that, 02:41:03.680 |
trying to use that to control regulation and to push people to regulate, which stops humans 02:41:10.480 |
generating knowledge. And I am an advocate for generating as much knowledge as possible. 02:41:14.800 |
When it comes to nuclear weapons, I grew up in the '70s and '80s where the nuclear doom, 02:41:20.960 |
a lot of adults really had existential threat, almost as bad as now with AI doom. They were 02:41:27.120 |
really worried, right? There were some great... Well, not great. There were some horrific 02:41:30.800 |
documentaries. I think there's one called Threads that was made in the UK, which was like... 02:41:36.480 |
It was terrible. It was like so scary. And I think that the correct thing to do is obviously get rid 02:41:44.960 |
of nuclear weapons, but let's think about unintended consequences. We've got rid of... 02:41:48.800 |
We got rid of all the sulfur particles in the atmosphere, right? All the soot. And what's 02:41:55.520 |
happened in the last couple of years is global warming has accelerated because we've cleaned 02:41:58.560 |
up the atmosphere too much. - Sure. I mean, the same thing if you get rid of nuclear weapons. 02:42:04.960 |
- Exactly. That's my point. So what we could do is actually start to put the AI in charge, 02:42:11.440 |
which is, I would really like an AI to be in charge of all world politics. And this sounds ridiculous, 02:42:16.720 |
but hang on for a second. But if we could all agree on the... - The AI doomers just woke up. 02:42:22.160 |
- But I really don't like politicians who are basically just looking at local sampling. But 02:42:26.480 |
if you could say globally, look, here's some game theory here. What is the minimum number 02:42:30.960 |
of nuclear weapons we need to distribute around the world to everybody to basically reduce war to 02:42:40.800 |
zero? Thought experiment: the United States and China and Russia, the major nuclear powers, get together and say, 02:42:47.360 |
"All right, we're going to distribute nuclear weapons to every single nation on earth." 02:42:54.960 |
- Oh boy. I mean, that has a probably greater than 50% chance of eliminating major military conflict. 02:43:05.200 |
- Yeah, but it's not 100%. - But I don't think anyone will use them 02:43:09.120 |
because I think... And look, what you've got to try and do is to qualify for these nuclear weapons, 02:43:15.280 |
this is a great idea. The game theorists should do this, right? I think the question is this. I 02:43:22.000 |
really buy your question, we have too many nukes, just from a feeling point of view that we've got 02:43:26.960 |
too many of them. So let's reduce the number, but not get rid of them because we'll have too much 02:43:30.400 |
conventional warfare. So then, what is the minimum number of nuclear weapons we can distribute around 02:43:35.600 |
to remove... Humans hurting each other is something we should stop doing. It's not outwith 02:43:43.040 |
our conceptual capability. But right now, what about certain nations that are being 02:43:50.080 |
exploited for their natural resources in the future for a short-term gain because we don't 02:43:54.800 |
want to generate knowledge. And so if everybody had an equal doomsday switch, I predict the quality 02:44:02.080 |
of life of the average human will go up faster. I am an optimist and I believe that humanity is 02:44:07.600 |
going to get better and better and better, that we're going to eliminate more problems. But I 02:44:12.720 |
think, yeah, let's... - But the probability of a bad actor 02:44:17.280 |
of one of the nations setting off a nuclear weapon, 02:44:20.560 |
I mean, you have to integrate that into the... - But we distribute the nukes by population, 02:44:28.720 |
right? We give... What we do is we... But anyway, let's just go there. So if a small 02:44:36.400 |
nation with a couple of nukes uses one because they're a bit bored or annoyed, 02:44:39.520 |
the likelihood that they are going to be pummeled out of existence immediately is 100%. And yet, 02:44:46.640 |
they've only nuked one other city. I know this is crazy and I apologize for... 02:44:50.960 |
- Well, no, no. I think it's, just to be clear, we're just having a thought experiment that's 02:44:55.120 |
interesting, but there are terrorist organizations that would take that trade. 02:45:02.400 |
- Yeah, I mean, look, I'm... - And we have to ask ourselves 02:45:05.600 |
a question of how many... Which percentage of humans would be suicide bombers, essentially, 02:45:11.680 |
where they would sacrifice their own life because they hate another group of people? And that, 02:45:18.880 |
I believe it's a very small fraction, but is it large enough if you give out nuclear weapons? 02:45:24.560 |
- I can predict a future where we take all nuclear material and we burn it for energy, 02:45:28.720 |
right? Because we're getting there. And the other thing you could do is say, look, 02:45:31.360 |
there's a gap. So if we got all the countries to sign up to the virtual nuclear agreement where 02:45:36.400 |
we all exist, we have a simulation where we can nuke each other in the simulation. And the 02:45:40.720 |
economic consequences are catastrophic. - Sure. In the simulation. I love it. It's 02:45:45.600 |
not going to kill all humans. It's just going to have economic consequences. 02:45:48.480 |
- Yeah, yeah. I don't know. I just made it up. It seems like it's all I do. 02:45:51.680 |
- No, it's interesting. But it's interesting whether that would have as much power in human 02:45:56.080 |
psychology as actual physical nuclear explosions. - I think so. 02:45:59.600 |
- It's possible, but people don't take economic consequences as seriously, I think, as 02:46:05.120 |
actual nuclear weapons. - I think they do in Argentina, 02:46:08.800 |
and they do in Somalia, and they do in a lot of these places where... No, I think this is a great 02:46:14.720 |
idea. I'm a strong advocate now for... So what have we come up with? Burning all the nuclear 02:46:18.880 |
material to have energy. And before we do that, because MAD is good. Mutually Assured Destruction 02:46:24.560 |
is very powerful. Let's take it into the metaverse and then get people to kind of 02:46:29.440 |
subscribe to that. And if they actually nuke each other, even for fun in the metaverse, 02:46:34.720 |
there are dire consequences. - Yeah, yeah. So it's like a video game. 02:46:38.720 |
We all have to join this metaverse video game. - Yeah. I can't believe it. 02:46:43.040 |
- And there's dire economic consequences. I don't know how... And it's all run by AI, 02:46:47.840 |
as you mentioned. So the AI doomers are really terrified at this point. 02:46:51.920 |
- No, they're happy to have a job for another 20 years, right? 02:46:54.720 |
- Oh, fearmongering. - Yeah, yeah, yeah. I'm a believer 02:46:58.960 |
in equal employment. - You've mentioned that... What do you call it? Chem machina? 02:47:08.480 |
that a chemical brain is something you're interested in creating. And that's a way to 02:47:16.080 |
get conscious AI soon. Can you explain what a chemical brain is? - I want to understand the 02:47:22.960 |
mechanism of intelligence that's gone through evolution, right? Because the way that intelligence 02:47:28.640 |
was produced by evolution appears to be the following. Origin of life, multicellularity, 02:47:35.600 |
locomotion, senses. Once you can start to see things coming towards you, and you can remember 02:47:44.720 |
the past and interrogate the present and imagine the future, you can do something amazing, right? 02:47:49.920 |
And I think only in recent years did humans become Turing complete, right? 02:47:55.760 |
- Yeah, yeah, yeah. - And so that Turing completeness 02:48:00.720 |
kind of gave us another kick up. But our ability to process that information 02:48:07.120 |
is produced in a wet brain. And I think that we do not have the correct hardware architectures 02:48:18.400 |
to have the domain flexibility and the ability to integrate information. And I think intelligence 02:48:24.960 |
also comes at a massive compromise of data. Right now, we're obsessing about getting more and more 02:48:32.320 |
data, more and more processing, more and more tricks to get dopamine hits. So when we look back 02:48:38.480 |
on this, going, "Oh, yeah, that was really cool." Because when I used ChatGPT, it made me feel really 02:48:44.960 |
happy. I got a hit from it, but actually, it just exposed how little intelligence I use in every 02:48:54.400 |
moment because I'm easily fooled. So what I would like to do is to say, "Well, hey, hang on. What is 02:49:02.080 |
it about the brain?" So the brain has this incredible connectivity, and it has the ability to, 02:49:09.280 |
you know, as I said earlier about my nephew, I went from Bill to Billy, and he went, "All right, 02:49:14.880 |
Leroy." Like, how did he make that leap? That he was able to basically, without any training, 02:49:20.720 |
I extended his name. He went, "Okay." He doesn't like, he wants to be called Bill. 02:49:24.480 |
He went back and said, "You'd like to be called Lee? I'm going to call you Leroy." 02:49:27.360 |
So human beings have a brilliant ability, or intelligent beings appear to have a brilliant 02:49:34.960 |
ability to intercreate across all domains all at once, and to synthesize something which allows us 02:49:41.200 |
to generate knowledge. And becoming Turing-complete on our own, although AIs are built in Turing-complete 02:49:52.080 |
things, their thinking is not Turing-complete in that they are not able to build universal 02:49:57.440 |
explanations. And that lack of universal explanation means that they're just inductivists. 02:50:04.720 |
Inductivism doesn't get you anywhere. It's just basically a party trick. 02:50:09.600 |
It's like, you know, I like the example, I think it's in The Fabric of Reality by David Deutsch, 02:50:15.680 |
where basically, you know, the farmer is feeding the chicken every day, and the chicken's getting 02:50:20.960 |
fat and happy, and the chicken's like, "I'm really happy. Every time the farmer comes in 02:50:24.560 |
and feeds me." And then one day the farmer comes in and doesn't, instead of feeding the chicken, 02:50:28.880 |
just wrings its neck. You know, and that's kind of, and had the chicken had an alternative 02:50:34.560 |
understanding of why the farmer was feeding it. - It's interesting though, because we don't know 02:50:40.160 |
what's special about the human mind that's able to come up with these kind of generalities, 02:50:43.600 |
this universal theories of things, and will come up with novelty. I can imagine, 02:50:49.440 |
'cause you gave an example, you know, about William and Leroy. I feel like 02:50:57.120 |
an example like that, we'll be able to see in future versions of large language models. We'll be 02:51:06.000 |
really, really, really impressed by the humor, the insights, all of it. Because it's fundamentally 02:51:15.200 |
trained on all the incredible humor and insights that's available out there on the internet, right? 02:51:19.840 |
So we'll be impressed. I think we'll be impressed. - Oh, I'm impressed. I'm impressed. 02:51:25.120 |
- Increasingly so. - But we're mining the past. 02:51:29.600 |
What the human mind appears to be able to do is mine the future. - Yes. So novelty, it is interesting whether 02:51:35.440 |
these large language models will ever be able to come up with something truly novel. 02:51:40.880 |
- I can show on the back of a piece of paper why that's impossible. And it's like, the problem 02:51:44.960 |
is that, and again, there's domain experts kind of bullshitting each other. The term generative, 02:51:52.240 |
right? The average person thinks, "Oh, it's generative." No, no, no. Look, if I take the numbers between 02:51:59.600 |
zero and 1,000, and I train a model to pick out the prime numbers by giving it all the prime 02:52:04.800 |
numbers between zero and 1,000, it doesn't know what a prime number is. Occasionally, if I cheat 02:52:10.480 |
a bit, it will start to guess. It never will produce anything outwith the dataset because 02:52:15.280 |
you mine the past. The thing that I'm getting to is I think that actually, current machine learning 02:52:20.640 |
technologies might actually help reveal why time is fundamental. It's like kind of insane, because 02:52:25.600 |
they tell you about what's happened in the past, but they can never help you understand what's 02:52:30.240 |
happening in the future without training examples. Sure, if that thing happens again, it's like... 02:52:38.720 |
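A minimal sketch of the experiment described above, assuming scikit-learn and NumPy are available; the classifier, binary encoding, and number ranges are illustrative stand-ins, not anything from the conversation. Train a small network to label primes in 0-999, then test it on 1000-1999; in-range accuracy can be high while out-of-range performance tends to collapse toward the base rate, because the model has fit its training slice rather than learned what a prime number is.

```python
# Illustrative sketch only: does a classifier trained on primes in 0-999
# learn primality, or just memorize the shape of its training range?
import numpy as np
from sklearn.neural_network import MLPClassifier

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def encode(n: int, bits: int = 12) -> list:
    # Binary encoding of the integer as the input features (12 bits covers 0-2047).
    return [(n >> i) & 1 for i in range(bits)]

train_x = np.array([encode(n) for n in range(0, 1000)])
train_y = np.array([is_prime(n) for n in range(0, 1000)])
test_x = np.array([encode(n) for n in range(1000, 2000)])
test_y = np.array([is_prime(n) for n in range(1000, 2000)])

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(train_x, train_y)

print("accuracy on 0-999 (seen range):  ", model.score(train_x, train_y))
print("accuracy on 1000-1999 (unseen):  ", model.score(test_x, test_y))
# The in-range score is typically high (the model can fit or memorize the data),
# while the out-of-range score usually drifts toward the "not prime" base rate:
# the model never acquired an explanation of primality, only past examples.
```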
So, let's think about what large language models are doing. We have all the internet as we know it, 02:52:45.600 |
you know, language, but also they're doing something else. We're having human beings 02:52:49.760 |
correcting it all the time. Those models are being corrected. Steered. Corrected. 02:52:56.880 |
Modified. Tweaked. Cheating. Well, you could say that training on human data in the first place is 02:53:07.600 |
cheating. Human is in the loop. Sorry to interrupt. Yes, so human is definitely in the loop, 02:53:11.680 |
but it's not just human that's in the loop. A very large collection of humans is in the loop. 02:53:19.760 |
I mean, to me, it's not intuitive that you said prime numbers, that the system can't generate an 02:53:28.240 |
algorithm, right? That the algorithm that can generate prime numbers, or the algorithm that 02:53:35.760 |
could tell you if a number is prime and so on, and generate algorithms that generate algorithms 02:53:40.160 |
that generate algorithms that start to look a lot like human reasoning, you know? 02:53:46.320 |
I think, again, we can show that on a piece of paper. Sure, I think you have to have... 02:53:53.440 |
So, this is the failure in epistemology. I'm glad I even can say that word, 02:53:58.160 |
let alone what it means. You've said it multiple times. 02:54:00.320 |
I know, it's like three times now. Without failure. 02:54:03.920 |
Quit while you're ahead. Just don't say it again, because you did really well. 02:54:07.360 |
Thanks. So, what is reasoning? So, coming back to the chemical brain, if I could basically, 02:54:15.360 |
if I could show that in a... Because, I mean, I'm never going to make an intelligence thing 02:54:20.560 |
in chem machina, because we don't have brain cells. They don't have glial cells, 02:54:24.640 |
they don't have neurons. But if I can take a gel and engineer the gel to have it be a hybrid 02:54:32.160 |
hardware for reprogramming, which I think I know how to do, I will be able to process a lot more 02:54:37.840 |
information and train models billions of times cheaper, and use cross-domain knowledge. And 02:54:44.720 |
there's certain techniques I think we can do. But they're still missing the abilities human 02:54:51.920 |
beings have had to become Turing complete. And so, I guess the question to throw back at you 02:54:58.880 |
is like, how do you tell the difference between trial and error, and the generation of new 02:55:05.680 |
knowledge? I think the way you can do it is this, is that you come up with a theory and explanations, 02:55:11.520 |
inspiration comes from out there, and then you then test that, and then you see that's going 02:55:17.840 |
towards a truth. And human beings are very good at doing that, in the transition between philosophy, 02:55:22.400 |
mathematics, physics, and natural sciences. And I think that we can see that. Where I get confused 02:55:29.520 |
is why people misappropriate the term artificial intelligence to say, "Hey, there's something else 02:55:37.680 |
going on here." Because I think you and I both agree, machine learning is really good. It's only 02:55:41.680 |
going to get better, we're going to get happier with the outcome. But why would you ever think 02:55:46.640 |
the model was thinking? Or reasoning? Reasoning requires intention. And if the 02:55:56.080 |
model isn't reasoning, the intention comes from the prompter. And the intention has come from 02:56:02.000 |
the person who programmed it to do it. So I-- - But don't you think you can prompt it to have 02:56:10.960 |
intention? Basically start with the initial conditions and get it going. Where the, you know, 02:56:19.120 |
currently, large language models, ChatGPT, only talk to you when you talk to them. There's no 02:56:28.320 |
reason why you can't just start it talking. - But those initial conditions came from someone 02:56:35.120 |
- And that causal chain in there, so that intention comes from the outside. 02:56:38.960 |
I think that there is something in that causal chain of intention that's super important. 02:56:42.640 |
I don't disagree we're going to get to AGI. It's a matter of when and what hardware. I think we're 02:56:48.000 |
not going to do it in this hardware. And I think we're unnecessarily fetishizing really cool outputs 02:56:53.520 |
and dopamine hits. Because obviously that's what people want to sell us. - Well, but there could be, 02:56:59.440 |
I mean, AGI is a loaded term, but there could be incredibly super impressive intelligence systems 02:57:08.880 |
on the way to AGI. So these large language models, I mean, if it appears conscious, 02:57:15.040 |
if it appears super intelligent, who are we to say it's not? - I agree. But the super intelligence 02:57:23.600 |
I want, I want to be able to have a discussion with it about coming up with fundamental new 02:57:32.240 |
ideas that generate knowledge. And if the superintelligence we generate can mine novelty from 02:57:36.960 |
the future that I didn't see in its training set in the past, I would agree that something really 02:57:41.360 |
interesting is going on. I'll say that again. If the intelligence system, be it a human being, 02:57:46.160 |
a chatbot, something else, is able to produce something truly novel that I could not predict, 02:57:53.120 |
even having full audit trail from the past, then I'd be sold. - Well, so we should be clear that 02:57:59.840 |
it can currently produce things that are, in a shallow sense, novel, that are not in the training 02:58:08.960 |
set. But you're saying truly novel. - I think they are in the training set. I think everything it 02:58:15.280 |
produces comes from a training set. There's a difference between novelty and interpolation. 02:58:20.160 |
We do not understand where these leaps come from yet. That is what intelligence is, I would argue. 02:58:25.440 |
Those leaps, and some people say, no, it's actually just what will happen if you just do cross-domain 02:58:30.320 |
training and all that stuff. And that may be true, and I may be completely wrong. But right now, 02:58:35.840 |
the human mind is able to mine novelty in a way that artificial intelligence systems cannot. And 02:58:42.000 |
this is why we still have a job, and we're still doing stuff. And I used ChatGPT for a few weeks. 02:58:46.400 |
Well, this is cool. And then it took me too... Well, what happened is it took me too much time 02:58:51.120 |
to correct it. Then it got really good. And now they've done something to it. It's not actually 02:58:54.960 |
that good. - Yeah, right. - I don't know what's going on. - Censorship, yeah. I mean, that's 02:59:00.320 |
interesting, but it will push us humans to characterize novelty better, like characterize 02:59:05.280 |
what is novel, what is truly novel, what's the difference between novelty and interpolation. 02:59:10.400 |
- I think that this is the thing that makes me most excited about these technologies, 02:59:14.960 |
is they're gonna help me demonstrate to you that time is fundamental, and the future is bigger than 02:59:21.920 |
the present, which is why human beings are quite good at generating novelty, because we have to 02:59:28.160 |
expand our data set, and to cope with unexpected things in our environment. Our environment throws 02:59:33.920 |
them all at us. Again, we have to survive in that environment. And I mean, I never say never, 02:59:39.520 |
I would be very interested in how we can get cross-domain training cheaply in chemical systems, 02:59:46.080 |
'cause I'm a chemist, and the only sentient thing I know of is the human brain. But maybe that's 02:59:50.640 |
just me being boring and predictable, and not novel. - Yeah, you mentioned GPT for electron 02:59:56.240 |
density. So a GPT-like system for generating molecules that can bind to hosts automatically. 03:00:04.240 |
I mean, that's interesting. That's really interesting, applying this same kind of 03:00:08.960 |
transformer mechanism. - Yeah, I mean, this is one where, with my team, I try and do things that 03:00:16.800 |
are non-obvious, but non-obvious in certain areas. And one of the things I was always asking about, 03:00:22.000 |
in chemistry, people like to represent molecules as graphs, and it's quite difficult. It's really 03:00:28.480 |
hard. If you're doing AI in chemistry, you really want to basically have good representations, 03:00:33.120 |
so you can generate new molecules that are interesting. And I was thinking, well, 03:00:36.800 |
molecules aren't really graphs, and they're not continuously differentiable. Could I do something 03:00:42.320 |
that was continuously differentiable? I was like, well, molecules are actually made up of electron 03:00:45.440 |
density. So then I got thinking, say, well, okay, could there be a way where we could just basically 03:00:50.320 |
take a database of readily solved electron densities for millions of molecules? So we 03:00:58.720 |
took the electron density for millions of molecules and just trained the model 03:01:01.680 |
to learn what electron density is. And so what we built was a system that you literally could 03:01:09.200 |
give it a, let's say you could take a protein that has a particular active site, or a cup with a 03:01:14.800 |
certain hole in it, you pour noise into it, and with a GPT, you turn the noise into electron 03:01:19.280 |
density. And then, in this case, it hallucinates like all of them do, but the hallucinations are 03:01:25.200 |
good because it means I don't have to train on such a large number, such a huge dataset, 03:01:30.720 |
because these datasets are very expensive. Because how do you produce it? So go back a step. So 03:01:36.160 |
you've got all these molecules in this dataset, but what you've literally done is a quantum 03:01:42.080 |
mechanical calculation where you produce electron densities for each molecule. 03:01:45.360 |
So you say, oh, this representation of this molecule has these electron densities associated 03:01:48.960 |
with it. So you know what the representation is, and you train the neural network to know what 03:01:53.200 |
electron density is. So then you give it an unknown pocket. You pour in noise, and you say, 03:01:58.000 |
right, produce me electron density. It produces electron density that doesn't look ridiculous. 03:02:02.880 |
And what we did in this case is we produced electron density that maximizes the electrostatic 03:02:09.440 |
potential, so the stickiness, but minimizes what we call the steric hindrance, so the overlap, 03:02:14.000 |
so it's repulsive. So make the perfect fit. And then we then used a kind of like a chat GPT type 03:02:22.320 |
thing to turn that electron density into what's called a SMILES. A SMILES string is a way of 03:02:28.480 |
representing a molecule in letters. And then we can then-- 03:02:32.160 |
- So it just generates them then? - Just generates them. And then the 03:02:34.720 |
other thing is then we bung that into the computer, and then it just makes it. 03:02:37.520 |
- Yeah, the computer being the thing that, right, to generate-- 03:02:41.760 |
- The robot that we've got that can basically just do chemistry. 03:02:44.000 |
- Create any-- - Yeah. So we've kind of got this 03:02:46.640 |
end-to-end drug discovery machine where you can say, oh, you want to bind to this active site? 03:02:51.360 |
Here you go. I mean, it's a bit leaky, and things kind of break, but it's a proof of principle. 03:02:56.160 |
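A toy sketch of the scoring idea described above, not the actual model from Cronin's lab: represent the host pocket and a candidate guest as electron densities on a grid, reward electrostatic complementarity, penalize steric overlap, and stand in for the generative "pour noise into the pocket" step with random candidates. Every name and number here is an assumption for illustration; the real system is a trained generative network over quantum-chemical densities, whose output is decoded into a SMILES string and handed to an automated synthesis platform.

```python
# Hedged toy illustration: score guest densities against a host pocket by
# electrostatic attraction minus a steric-overlap penalty, keeping the best
# of many random candidates as a crude stand-in for the generative step.
import numpy as np

rng = np.random.default_rng(0)
GRID = (16, 16, 16)

host_density = rng.random(GRID)       # stand-in for the pocket's own density
host_potential = 1.0 - host_density   # stand-in electrostatic potential map

def score(guest_density, host_density, host_potential, steric_weight=2.0):
    # Reward guest density placed where the host potential is attractive,
    # penalize guest density that overlaps the host density (steric clash).
    electrostatic = np.sum(guest_density * host_potential)
    steric = np.sum(guest_density * host_density)
    return electrostatic - steric_weight * steric

best_density, best_score = None, -np.inf
for _ in range(200):
    candidate = rng.random(GRID)      # "pour noise into the pocket"
    s = score(candidate, host_density, host_potential)
    if s > best_score:
        best_density, best_score = candidate, s

print("best toy score:", round(float(best_score), 2))
# In the real pipeline the winning density would be decoded into a SMILES
# string and passed on to the chemistry robot for synthesis.
```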
- But were the hallucinations, are those still accurate? 03:03:00.880 |
- Well, the hallucinations are really great in this case, because in the case of a large 03:03:04.880 |
language model, the hallucinations just make everything up. Well, it doesn't just make 03:03:09.360 |
everything up, but it gives you an output that you're plausibly comfortable with, 03:03:12.400 |
one that it thinks is right probabilistically. The problem with these electron density models is 03:03:17.840 |
it's very expensive to solve a Schrödinger equation going up to many heavy atoms 03:03:22.400 |
and large molecules. And so we wondered if we trained the system on up to nine heavy atoms, 03:03:32.720 |
whether it would go beyond nine. And it did. It started to generate molecules of 12. No problem, 03:03:38.160 |
they look pretty good. And I was like, well, this hallucination I will take for free. Thank you very 03:03:41.920 |
much. Because it just basically, this is a case where interpolation, extrapolation worked relatively 03:03:46.880 |
well, and we were able to generate the really good molecules. And then what we were able to 03:03:53.040 |
do here is, and this is a really good point, what I was trying to say earlier, that we were able to 03:03:58.720 |
generate new molecules from the known data set that would bind to the host. So a new guest would 03:04:07.840 |
bind. Were these truly novel? Not really, because they were constrained by the host. 03:04:14.400 |
Were they new to us? Yes. So I do understand, I can concede that machine learning systems, 03:04:23.360 |
artificial intelligence systems can generate new entities, but how novel are they? It remains to 03:04:29.600 |
be seen. - Yeah. And how novel the things that humans generate is also difficult to quantify. 03:04:36.640 |
They seem novel. - That's what a lot of people say. So the way to really get to genuine novelty, 03:04:46.640 |
and the assembly theory shows you the way, is to have different causal chains overlap. 03:04:51.680 |
And this really resonates with the time is fundamental argument. And if you're bringing 03:05:01.600 |
together a couple of objects with different initial conditions coming together, when they 03:05:07.840 |
interact, the more different their histories, the more novelty they generate in time going forward. 03:05:15.760 |
And so it could be that genuine novelty is basically about mix it up a little. And the 03:05:22.000 |
human brain is able to mix it up a little, and all that stimulus comes from the environment. 03:05:27.120 |
But all I think I'm saying is the universe is deterministic going back in time, 03:05:31.520 |
non-deterministic going forward in time, because the universe is too big in the future to contain 03:05:37.280 |
in the present. Therefore, these collisions of known things generate unknown things that then 03:05:44.640 |
become part of your data set and don't appear weird. That's how we give ourselves comfort. 03:05:49.520 |
The past looks consistent with this initial condition hypothesis, but actually we're 03:05:53.440 |
generating more and more novelty. And that's how it works. Simple. - So it's hard to quantify novelty 03:06:00.560 |
looking backwards. I mean, the present and the future are the novelty generators. - But I like 03:06:06.240 |
this whole idea of mining novelty. I think it is going to reveal the limitations of current AI. 03:06:14.240 |
It's a bit like the printing press, right? Everyone thought that when the printing press came, 03:06:20.400 |
writing books was going to be terrible, that you had evil spirits and all this. They were just books. 03:06:24.320 |
- And same would be with AI. But I think just the scale you can achieve in terms of impact 03:06:31.760 |
with AI systems is pretty nerve-wracking. - But that's what the big companies want you to think. 03:06:39.120 |
But not like in terms of destroy all humans, but you can have major consequences in the way social 03:06:46.160 |
media has had major consequences, both positive and negative. And so you have to kind of think 03:06:51.680 |
about it and worry about it. But yeah, people that fear monger, you know. - My pet theory for this, 03:06:57.280 |
you want to know? - Yeah. - Is I think that a lot of, and maybe I'm being, and I think, 03:07:02.560 |
I really do respect, you know, a lot of the people out there who are trying to have discourse about 03:07:08.640 |
the positive future. So OpenAI guys, Meta guys, and all this. What I wonder is if they're trying to 03:07:14.080 |
cover up for the fact that social media has had a pretty disastrous effect on some level, and they're 03:07:18.560 |
just trying to say, "Oh yeah, we should do this." And covering up for the fact that we have got some 03:07:24.400 |
problems with, you know, teenagers and Instagram and Snapchat and, you know, all this stuff. And 03:07:30.400 |
maybe they're just overreacting now. It's like, "Oh yeah, sorry, we made the bubonic plague and 03:07:35.680 |
gave it to you all and you were all dying. And oh yeah, but look at this over here, it's even worse." 03:07:40.160 |
- Yeah, there's a little bit of that. But there's also not enough celebration of the positive impact 03:07:45.360 |
that all these technologies have had. We tend to focus on the negative and tend to forget 03:07:49.760 |
that, in part because it's hard to measure. Like, it's very hard to measure the positive 03:07:55.920 |
impact social media had on the world. - Yeah, I agree. But if what I worry about right now 03:08:01.200 |
is like, I'm really, I do care about the ethics of what we're doing. And one of the reasons why I'm 03:08:06.320 |
so open about the things we're trying to do in the lab, make life, look at intelligence, all this, 03:08:10.400 |
is so people say, "What are the consequences of this?" And you say, "Well, the consequences of 03:08:14.960 |
not doing it." And I think that what worries me right now in the present is lack of authenticated 03:08:26.080 |
- Yeah, human users. - I still think that there will be 03:08:29.840 |
AI agents that appear to be conscious, but they would have to be also authenticated and labeled 03:08:35.920 |
as such. There's too much value in that, like friendships with AI systems. There's too much 03:08:43.280 |
meaningful human experiences to have with AI systems that I just... 03:08:47.280 |
- But that's like a tool, right? It's a bit like a meditation tool, right? Some people have a 03:08:51.040 |
meditation tool, it makes them feel better. But I'm not sure you can ascribe sentience and legal 03:08:55.920 |
rights to a chatbot that makes you feel less lonely. - Sentience, yes. I think legal rights, 03:09:03.520 |
no. I think it's the same. You can have a really deep, meaningful relationship with a dog. 03:09:10.800 |
- The chatbot's not, right now, using the technology we use, it's not gonna be sentient. 03:09:15.840 |
- Ah, this is gonna be a fun, continued conversation on Twitter that I look forward to. 03:09:21.600 |
Since you've had, also, from another place, some debates that were inspired by the assembly theory 03:09:30.480 |
paper, let me ask you about God. Is there any room for notions of God in assembly theory? Who's God? 03:09:41.840 |
- Yeah, I don't know what God is a... I mean, so, God exists in our mind, created by selection. 03:09:49.760 |
So, human beings have created the concept of God in the same way that human beings have created 03:09:55.520 |
the concept of superintelligence. - Sure, but does it mean, does it not... 03:10:00.480 |
It still could mean that that's a projection from the real world, where we're just assigning words 03:10:09.760 |
and concepts to a thing that is fundamental to the real world. That there's something out there 03:10:16.640 |
that is a creative force underlying the universe. - I think the universe, there is a creative force 03:10:25.120 |
in the universe, but I don't think it's sentient. I mean, I think the... So, I do not understand the 03:10:32.080 |
universe. So, who am I to say that God doesn't exist? I am an atheist, but I'm not an angry 03:10:42.800 |
atheist, right? I have lots of... There's some people I know that are angry atheists and say 03:10:48.480 |
that religious people are stupid. I don't think that's the case. I have faith in some things, 03:10:55.760 |
because I don't... I mean, when I was a kid, I was like, "I need to know what the charge of 03:11:00.160 |
electron is." I was like, "I can't measure the charge of an electron." I just gave up and had 03:11:04.400 |
faith, okay? You know, resistors work. So, when it comes to... I want to know why the universe is 03:11:12.480 |
growing in the future and what humanity is going to become. I've seen that the acquisition of 03:11:19.200 |
knowledge via the generation of novelty to produce technology has uniformly made humans' lives 03:11:25.840 |
better. I would love to continue that tradition. - You said that there's that creative force. 03:11:33.680 |
Do you think, just to think on that point, do you think there's a creative force? Is there like a 03:11:39.360 |
thing, like a driver that's creating stuff? - Yeah. I think that... So, I think that... 03:11:47.920 |
- And where? Can you describe it mathematically? - Well, I think selection. I think selection... 03:11:53.360 |
- Selection is the force. - Selection is the force in the universe 03:11:56.480 |
that creates novelty. - So, is selection somehow fundamental? 03:12:00.960 |
- Yeah. I think persistence of objects that could decay into nothing through operations that 03:12:08.800 |
maintain that structure. I mean, think about it. It's amazing that things exist at all, that we're here. 03:12:20.960 |
A thing that exists persists in time. - Yeah. I mean, let's think maybe the universe is 03:12:26.400 |
actually, in the present, the things, everything that can exist in the present does exist. 03:12:39.600 |
Well, that would mean it's deterministic, right? - No, I think the universe is... So, 03:12:44.400 |
the universe started super small. The past was deterministic. There wasn't much going on. 03:12:48.560 |
And it was able to mine, mine, mine, mine, mine. And so, the process is somehow generating... 03:12:56.080 |
Universe is basically... I can't put... I'm trying to put this into words. 03:13:01.760 |
- Did you just say there's no free will, though? - No, I didn't say that. 03:13:07.840 |
there is free will. I think... I'm saying that free will occurs at the boundary between the... 03:13:16.160 |
- Past and the future? - The past and the future. 03:13:19.680 |
- Yeah. I got you. But everything that can exist does exist. 03:13:23.520 |
- Everything that is... So, everything that's possible to exist at this... So, no, 03:13:28.160 |
I'm really... - There's a lot of loaded words there. There's a deterministic 03:13:34.080 |
element loaded into that statement. - I think that the universe is able 03:13:37.360 |
to do what it can in the present, right? - Yeah. 03:13:40.080 |
- And then I think in the future, there are other things that could be possible. We can 03:13:43.040 |
imagine lots of things, but they don't all happen. - Sure. That's where you sneak in free will, right there. So what you're 03:13:51.520 |
saying is what exists is a convolution of the past with the present and the free will going 03:13:59.520 |
into the future. - But we can still imagine stuff, 03:14:02.240 |
right? We can imagine stuff that will never happen. - And it's amazing force, because you're 03:14:06.320 |
imagining... This is the most important thing that we don't understand is our imaginations can actually 03:14:13.200 |
change the future in a tangible way, which is what the initial conditions in physics cannot predict. 03:14:19.920 |
Your imagination has a causal consequence in the future. 03:14:34.640 |
as we know them right now. - Yeah. So, you think the imagination 03:14:41.760 |
- But it does exist in there in the head. - It does. 03:14:44.560 |
- And there must be a lot of power, there could be a lot of power, in whatever's 03:14:48.880 |
going on in there. - If we then go back to the initial 03:14:52.080 |
conditions, and that is simply not possible, that can happen. But if we go into a universe where we 03:14:59.040 |
accept that there is a finite ability to represent numbers, and you have rounding... Well, not rounding 03:15:04.400 |
errors. You have some... What happens, your ability to make decisions, imagine, and do stuff 03:15:11.120 |
is at that interface between the certain and the uncertain. It's not, as Joscha was saying to me, 03:15:17.760 |
randomness goes and you just randomly do random stuff. It is that you are set free a little on 03:15:23.840 |
your trajectory. Free will is about being able to explore on this narrow trajectory that allows you 03:15:30.720 |
to build... You have a choice about what you build, or that choice is you interacting with a future 03:15:36.240 |
in the present. - What to you is most beautiful 03:15:40.800 |
about this whole thing? The universe. - The fact it seems to be very undecided, 03:15:50.320 |
very open. The fact that every time I think I'm getting towards an answer to a question, 03:15:58.000 |
there are so many more questions that make the chase. - Do you hate that it's going to be over 03:16:04.800 |
at some point? - Well, for me, I think if you think about it, is it over for Newton now? 03:16:13.120 |
Newton has had causal consequences in the future. We discuss him all the time. 03:16:18.480 |
- His ideas, but not the person. - The person just had a lot of causal 03:16:22.640 |
power when he was alive, but oh my God. One of the things I want to do is leave as many 03:16:26.160 |
Easter eggs in the future when I'm gone to go, "Oh, that's cool." 03:16:28.800 |
- Would you be very upset if somebody made a good, large language model that's fine-tuned on you? 03:16:41.840 |
- I mean, if it's a faithful representation of what I've done in my life, that's great. That's 03:16:46.640 |
an interesting artifact, but I think the most interesting thing about knowing each other is we 03:16:51.760 |
don't know what we're going to do next. - Sure. Sure. 03:16:56.800 |
- I mean, within some constraints, I can predict some things about you, you can predict some 03:17:01.920 |
things about me, but we can't predict everything. - Everything. 03:17:04.720 |
- And it's because we can't predict everything is why we're excited to come back and discuss 03:17:09.680 |
and see. So yeah, I'm happy that it'll be interesting that some things that I've done 03:17:16.640 |
can be captured, but I'm pretty sure that my angle on mining novelty for the future 03:17:30.000 |
That's what life is, is just some novelty generation and then you're done. 03:17:38.880 |
Each one of us just generally a little bit, or have the capacity to at least. 03:17:42.880 |
- I think that selection produces life, and life affects the universe. 03:17:50.640 |
Universes with life in them are materially and physically fundamentally different than 03:17:55.760 |
universes without life. And that's super interesting. And I have no beginnings of 03:18:01.840 |
understanding. I think maybe this is like in a thousand years, there'll be a new discipline 03:18:05.200 |
and humans will go, "Yeah, of course, this is how it all works." - In retrospect, it will all be 03:18:11.440 |
obvious, I think. - I think assembly theory is obvious. That's why a lot of people got 03:18:15.600 |
angry. They were like, "Oh my God, this is such nonsense." And like, "Oh, actually it's 03:18:21.360 |
not quite." But the writing's really bad. - Well, I can't wait to see where it evolves, 03:18:27.440 |
Lee. And I'm glad to get to exist in this universe with you. You're a fascinating human. 03:18:33.520 |
This is always a pleasure. I hope to talk to you many more times. And I'm a huge fan of just 03:18:39.840 |
watching you create stuff in this world. And thank you for talking today. - It's a pleasure 03:18:45.120 |
as always, Lex. Thanks for having me on. - Thanks for listening to this conversation 03:18:49.200 |
with Lee Cronin. To support this podcast, please check out our sponsors in the description. 03:18:53.760 |
And now let me leave you with some words from Carl Sagan. "We can judge our progress by the 03:18:59.520 |
courage of our questions and the depth of our answers, our willingness to embrace what is true 03:19:06.080 |
rather than what feels good." Thank you for listening and hope to see you next time.