Karl Friston: Neuroscience and the Free Energy Principle | Lex Fridman Podcast #99
Chapters
0:00 Introduction
1:50 How much of the human brain do we understand?
5:53 Most beautiful characteristic of the human brain
10:43 Brain imaging
20:38 Deep structure
21:23 History of brain imaging
32:31 Neuralink and brain-computer interfaces
43:05 Free energy principle
84:29 Meaning of life
00:00:00.000 |
The following is a conversation with Karl Friston, 00:00:03.080 |
one of the greatest neuroscientists in history. 00:00:10.320 |
known for many influential ideas in brain imaging, 00:00:19.840 |
of the free energy principle for action and perception. 00:00:23.280 |
Karl's mix of humor, brilliance, and kindness, 00:01:06.300 |
Cash App lets you send money to friends, buy Bitcoin, 00:01:09.420 |
and invest in the stock market with as little as $1. 00:01:16.220 |
let me mention a surprising fact related to physical money. 00:01:25.700 |
The other 92% of money only exists digitally. 00:01:29.980 |
So again, if you get Cash App from the App Store, 00:01:35.740 |
you get $10, and Cash App will also donate $10 to FIRST, 00:01:39.940 |
an organization that is helping to advance robotics 00:01:42.700 |
and STEM education for young people around the world. 00:01:45.820 |
And now, here's my conversation with Karl Friston. 00:01:56.580 |
to the functional level to the highest level, 00:02:34.140 |
and what it does right down to the microcircuitry 00:02:58.100 |
if we have that perfect cartography of the brain? 00:03:04.820 |
And it would determine the sort of scientific career 00:03:11.620 |
every dendritic connection, every sort of microscopic 00:03:16.020 |
synaptic structure right down to the molecular level 00:03:18.740 |
was gonna give you the right kind of information 00:03:27.180 |
and you would study little cubic millimeters of brain 00:03:35.300 |
in holistic functions and a sort of functional anatomy 00:03:40.300 |
of the sort that a neuropsychologist would understand, 00:03:52.980 |
I think there are principled reasons not to go too far. 00:04:01.660 |
as a machine that's performing a form of inference 00:04:09.020 |
there are, that understanding, that level of understanding 00:04:15.980 |
is necessarily cast in terms of probability densities 00:04:24.180 |
And what that tells you is that you don't really want 00:04:27.100 |
to look at the atoms to understand the thermodynamics 00:04:30.500 |
of probabilistic descriptions of how the brain works. 00:04:34.420 |
So I personally wouldn't look at the molecules 00:04:44.060 |
of some non-equilibrium steady state of a gas 00:04:48.820 |
I wouldn't spend my life looking at the individual molecules 00:04:57.500 |
On the other hand, if you go too coarse grain, 00:05:00.180 |
you're gonna miss some basic canonical principles 00:05:11.740 |
about high-field magnetic resonance imaging at 7 Tesla. 00:05:17.460 |
Well, it gives us for the first time the opportunity 00:05:23.940 |
that distinguish between different layers of the cortex 00:05:29.380 |
evincing generic principles of canonical microcircuitry 00:05:41.180 |
and these density dynamics or neuronal ensemble 00:05:44.940 |
population dynamics that underwrite our brain function. 00:05:49.420 |
So somewhere between a millimeter and a meter. 00:05:56.500 |
if you allow me, what to you is the most beautiful 00:06:00.180 |
or surprising characteristic of the human brain? 00:06:03.460 |
- I think it's its hierarchical and recursive aspect. 00:06:08.700 |
- Of the structure or of the actual representation 00:06:15.380 |
I was actually answering in a dull-minded way 00:06:22.980 |
I mean, there are many marvelous organs in the body. 00:06:28.500 |
Without it, you wouldn't be around for very long. 00:06:32.620 |
And it does some beautiful and delicate biochemistry 00:06:35.980 |
and homeostasis, and it's evolved with a finesse 00:06:48.940 |
that crafted structure of sparse connectivity 00:06:56.340 |
- So you said a lot of interesting terms here. 00:07:03.500 |
So I've never thought of our brain as hierarchical. 00:07:08.500 |
Sort of I always thought it was just like a giant mess, 00:07:14.300 |
interconnected mess where it's very difficult 00:07:18.180 |
But in what sense do you see the brain as hierarchical? 00:07:37.860 |
So hierarchies, if you just think about the nature 00:07:42.060 |
of a hierarchy, how would you actually build one? 00:07:46.020 |
And what you would have to do is basically carefully remove 00:07:52.180 |
the completely connected soups that you might have in mind. 00:08:00.140 |
by a sparse and particular connectivity structure. 00:08:03.200 |
I'm not committing to any particular form of hierarchy. 00:08:11.420 |
In virtue of the fact that there is a sparsity 00:08:14.220 |
of connectivity, not necessarily of a qualitative sort, 00:08:32.260 |
to possess axonal processes, neuronal processes 00:08:44.780 |
And furthermore, on the basis of anatomical connectivity 00:08:50.300 |
and tracer studies, we know that that has a direct effect 00:09:05.020 |
that might be best understood a little bit like an onion. 00:09:08.500 |
There is a concentric, sometimes referred to as centripetal 00:09:19.340 |
So you can think of the brain as in a rough sense, 00:09:23.860 |
like an onion and all the sensory information 00:09:33.460 |
or to your secretory organs come from the surface. 00:09:45.140 |
that sits and looks at the exchange on the surface. 00:09:55.700 |
That's what I mean by a hierarchical organization. 00:10:04.820 |
that lends the architecture a hierarchical structure 00:10:08.580 |
that tells one a lot about the kinds of representations 00:10:29.100 |
a process that is in the service of doing something, 00:10:37.500 |
that shape that message-passing also dictate its function. 00:10:53.620 |
What can we learn about the brain by imaging it, 00:10:57.980 |
which is one way to sort of look at the anatomy of it, 00:11:26.300 |
that we're effectively looking at fluctuations 00:11:43.940 |
in terms of resting state, endogenous or intrinsic activity. 00:11:52.700 |
either induced or intrinsic in the neural activity, 00:12:07.700 |
One, functional specialization or segregation. 00:12:12.140 |
It simply means that there are certain parts of the brain 00:12:16.580 |
that may be specialized for certain kinds of processing. 00:12:22.180 |
our ability to recognize or to perceive movement 00:12:37.340 |
Which means that if I were to compare your brain activity 00:12:41.180 |
during a period of static, viewing a static image, 00:12:56.580 |
restricted, segregated differences in activity. 00:13:02.180 |
And those are basically the hotspots that you see 00:13:06.940 |
that test for the significance of the responses 00:13:15.020 |
what some people have perhaps unkindly called 00:13:19.300 |
This is a phrenology augmented by modern-day neuroimaging. 00:13:24.300 |
Basically finding blobs or bumps on the brain 00:13:39.060 |
since that's such a beautiful sort of ideal to strive for, 00:13:49.220 |
whereas like you said, there are segregated regions 00:13:51.620 |
that are responsible for the different function. 00:13:57.300 |
in terms of looking at the progress of studying the brain? 00:14:00.620 |
- Oh, I think enormous progress has been made 00:14:35.860 |
or losing particular parts of the visual cortex 00:14:59.980 |
was located in this functionally segregated area. 00:15:03.000 |
And you could then go and put invasive electrodes 00:15:16.500 |
But at no point could you exclude the possibility 00:15:28.460 |
Just out of curiosity, from your perspective, 00:15:34.520 |
versus the other animals in terms of our ability 00:15:42.660 |
from a human brain, the greater the difference is, 00:15:57.440 |
So if you're talking about sort of canonical principles 00:16:00.680 |
of microcircuitry, it might be perfectly okay 00:16:09.700 |
at the finer details of organization of visual cortex 00:16:13.140 |
and V1, V2, these are designated sort of patches of cortex 00:16:44.580 |
right from, well, yes, worms, right the way through 00:16:53.300 |
- So now returning to, so that was the early ideas 00:16:56.940 |
of studying the functional regions of the brain 00:17:06.180 |
might be somewhat responsible for this type of function. 00:17:12.980 |
- Right, well, just actually to reverse a bit, 00:17:19.580 |
That was actually a very prominent idea at one point, 00:17:36.700 |
it didn't matter where you took these spoonfuls out, 00:17:38.860 |
they always showed the same kinds of deficits. 00:17:41.020 |
So it was very difficult to infer functional specialization 00:17:55.300 |
neuronal excitement when looking at this versus that, 00:18:05.500 |
are very restricted and they're here or they're over there. 00:18:09.260 |
If I do this, then this part of the brain lights up. 00:18:18.700 |
with the advent of positron emission tomography 00:18:21.540 |
and then functional magnetic resonance imaging came along. 00:18:34.780 |
There are people who believe that it's all in the anatomy. 00:18:40.260 |
then you understand the function at some level. 00:18:45.780 |
on a deep understanding of the anatomy and the connectivity, 00:18:49.960 |
but they were all confirmed and taken much further 00:18:55.140 |
So that's what I meant by we've made an enormous 00:19:03.940 |
by looking at these functionally selective responses. 00:19:22.140 |
How all of these regionally specific responses 00:19:26.960 |
were orchestrated, how they were distributed, 00:19:29.840 |
how did they relate to distributed processing 00:19:35.560 |
So then you turn to the more challenging issue 00:19:53.580 |
- But nevertheless, we come back to this challenge 00:19:58.260 |
of trying to figure out how everything is integrated. 00:20:01.100 |
But what's your feeling, what's the general consensus? 00:20:04.220 |
Have we moved away from the magic soup view of the brain? 00:20:14.460 |
you said some people believe that the structure 00:20:19.180 |
at the core of the function by just deeply understanding 00:20:30.280 |
of studying through imaging and all the different methods 00:20:41.140 |
of using lots of long words and then you introduced one 00:20:46.300 |
Because deep is the sort of millennial equivalent 00:20:53.760 |
not only are you very millennial and very trending, 00:20:57.600 |
but you're also implying a hierarchical architecture. 00:21:01.500 |
So it is a depth, which is for me, the beautiful thing. 00:21:05.260 |
- That's right, the word deep kind of, yeah, exactly. 00:21:11.500 |
That indeed the implicit meaning of the word deep 00:21:18.520 |
- So deep inside the onion is the center of your soul. 00:21:35.180 |
I mean, just what's out there that's interesting 00:21:37.940 |
for people maybe outside the field to understand 00:21:52.580 |
And they can range from, and let's limit ourselves 00:21:56.860 |
to some imaging-based non-invasive techniques. 00:22:05.340 |
the structural attributes, the amount of water, 00:22:10.420 |
You can make lots of inferences about the structure 00:22:16.660 |
abduced from an X-ray, but a very nuanced X-ray 00:22:32.160 |
Then you move on to the kinds of measurements 00:22:38.120 |
And the most prevalent of those fall into two camps. 00:22:42.000 |
You've got these metabolic, sometimes hemodynamic, 00:22:48.920 |
So these metabolic and/or hemodynamic signals 00:23:05.340 |
Characteristically, though, the time constants 00:23:15.840 |
- And this is referring, forgive me for the dumb questions, 00:23:25.120 |
- So there's a ton of, it seems like there's a ton 00:23:30.240 |
- So, but what's the interaction between the flow of blood 00:23:38.740 |
And that interplay accounts for several careers 00:23:52.260 |
the neuronal infrastructure, the actual message passing 00:23:54.580 |
that we think underlies our capacity to perceive and act, 00:23:59.580 |
how is that coupled to the vascular responses 00:24:05.980 |
that supply the energy for that neural processing? 00:24:13.360 |
arteries and veins, that gets progressively finer 00:24:23.760 |
So coming back to this sort of onion perspective, 00:24:26.480 |
we were talking before using the onion as a metaphor 00:24:43.940 |
and then the interior of the brain is constituted 00:24:47.380 |
by fatty wires, essentially, axonal processes 00:24:58.280 |
they look fatty and white, and so it's called white matter, 00:25:04.100 |
which does the computation, constituted largely by neurons, 00:25:17.780 |
but it's a big ball of connections, like spaghetti, 00:25:20.860 |
very carefully structured with sparse connectivity 00:25:23.100 |
that preserves this deep hierarchical structure, 00:25:25.760 |
but all the action takes place on the surface, 00:25:41.120 |
which is rapidly absorbed and used by neural cells 00:25:55.000 |
So one peculiar thing about cerebral metabolism, 00:25:58.440 |
brain metabolism, is it really needs to be driven 00:26:14.100 |
you really do have to water that piece of the garden 00:26:21.540 |
- So that contains a lot of, hence the imaging 00:26:29.600 |
But it is slightly compromised in terms of the resolution. 00:26:33.440 |
So the deployment of these little micro vessels 00:26:37.360 |
that water the garden to enable the neural activity 00:27:04.080 |
of electromagnetic signals as they're generated 00:27:07.700 |
So here, the temporal bandwidth, if you like, 00:27:18.040 |
And then you can get into the phasic fast responses 00:27:37.120 |
But the problem is you're looking at electromagnetic signals 00:28:05.760 |
Or you've got these electromagnetic EEG, MEG setups 00:28:15.800 |
when something has responded, but you don't know where. 00:28:19.200 |
So you've got these two complementary measures, 00:28:33.360 |
And then the second level of responding to your question, 00:28:39.440 |
what are the big ways of using this technology? 00:28:43.400 |
So once you've chosen the kind of neuroimaging 00:28:47.160 |
that you want to use to answer your set questions, 00:28:59.640 |
that you can bring to bear in order to answer your questions 00:29:07.000 |
And interestingly, they both fall into the same two camps 00:29:11.440 |
this dialectic between specialization and integration, 00:29:17.120 |
So it's the cartography, the blobology analyses. 00:29:20.840 |
- I apologize, I probably shouldn't interrupt so much, 00:29:29.200 |
- It's a neologism, which means the study of blobs. 00:29:36.640 |
or is there an actual, does the word blobology 00:29:43.320 |
It would not appear in a worthy specialist journal. 00:29:48.320 |
But it's the fond word for the study of literally 00:29:52.720 |
little blobs on brain maps showing activations. 00:29:56.160 |
So the kind of thing that you'd see in the newspapers 00:30:10.040 |
in that stream of analysis does actually call upon 00:30:17.600 |
So seriously, they're actually called Euler characteristics, 00:30:21.760 |
and they have a lot of fancy names in mathematics. 00:30:30.600 |
there's echoes of blobs there when you consider 00:30:43.080 |
you entities of, well, from the free energy point of view, 00:30:48.080 |
entities of anything, but from the point of view 00:30:50.280 |
of the analysis, the cartography of the brain, 00:30:55.280 |
these are the entities that constitute the evidence 00:31:01.640 |
You have segregated this function in this blob, 00:31:06.760 |
And that's basically, if you were a mapmaker of America 00:31:17.720 |
would be to identify the cities, for example, 00:31:22.000 |
All of these uniquely spatially localizable features, 00:31:26.920 |
possibly topological features, have to be placed somewhere, 00:31:30.680 |
because that requires a mathematics of identifying 00:31:33.520 |
what does a city look like on a satellite image, 00:31:39.120 |
What data features would evidence that particular thing 00:31:58.640 |
statistical measure of the degree of activation 00:32:02.960 |
crosses a threshold, and in crossing that threshold 00:32:06.520 |
in the spatially restricted part of the brain, 00:32:11.080 |
And that's basically what statistical parametric mapping 00:32:13.680 |
does, it's basically mathematically finessed blobology. 00:32:19.840 |
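As a rough illustration of "mathematically finessed blobology", the sketch below thresholds a synthetic statistical map and labels the clusters of voxels that survive. It is a toy, not the SPM software itself; the array shape, the threshold, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch of "blobology": threshold a statistical map and label the
# surviving clusters of voxels. This is a toy illustration, not the SPM
# software; the array shape, threshold and synthetic data are assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
stat_map = rng.normal(size=(64, 64, 40))        # voxel-wise test statistics (noise)
stat_map[20:25, 30:36, 10:14] += 6.0            # one synthetic "activation"

threshold = 3.1                                 # height threshold on the statistic
suprathreshold = stat_map > threshold

# Each connected component of suprathreshold voxels is one blob/cluster.
labels, n_blobs = ndimage.label(suprathreshold)
sizes = ndimage.sum(suprathreshold, labels, index=range(1, n_blobs + 1))

# With an uncorrected height threshold, stray noise voxels also survive,
# which is why SPM adds random-field (Euler characteristic) corrections.
print(f"{n_blobs} clusters above {threshold}; largest has {int(max(sizes))} voxels")
```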
- Okay, so you kind of described these two methodologies 00:32:23.160 |
for one is temporally noisy, one is spatially noisy, 00:32:39.400 |
and their dream is to, well, there's a bunch of dreams, 00:32:59.800 |
of this kind of technology of brain computer interfaces, 00:33:02.360 |
to be able to now have a window or direct contact 00:33:07.360 |
within the brain to be able to measure some of the signals, 00:33:21.400 |
So the good bits, if you just look at the legacy 00:33:34.680 |
we understand the brain prior to neuroimaging. 00:33:45.240 |
was done by stimulating the brain of, say, dogs, 00:33:51.880 |
either with their muscles or with their salivation, 00:33:56.400 |
and imputing what that part of the brain must be doing. 00:34:00.880 |
That if I stimulate it, and I evoke this kind of response, 00:34:09.080 |
So there's a long history of brain stimulation, 00:34:12.280 |
which continues to enjoy a lot of attention nowadays. 00:34:18.880 |
Deep brain stimulation for Parkinson's disease 00:34:23.580 |
and also a wonderful vehicle to try and understand 00:34:27.760 |
the neuronal dynamics underlying movement disorders 00:34:39.200 |
and will it work in people who are depressed, for example. 00:34:43.280 |
Quite a crude level of understanding what you're doing, 00:34:49.040 |
that these kinds of brute force interventions 00:35:14.400 |
and there've been enormous advances within limits. 00:35:20.720 |
Advances in terms of our ability to understand 00:35:42.220 |
ranging from sort of trying to replace lost visual signals 00:35:48.320 |
through to giving people completely new signals. 00:35:51.200 |
one of the, I think, most engaging examples of this 00:35:57.080 |
is equipping people with a sense of magnetic fields. 00:36:00.640 |
So you can actually give them magnetic sensors 00:36:05.440 |
should we say, tactile pressure around their tummy, 00:36:13.880 |
- And after a few weeks, they take it for granted. 00:36:17.640 |
They integrate it, they imbibe it, they assimilate it. 00:36:22.320 |
into the way that they literally feel their world, 00:36:25.440 |
but now equipped with this sense of magnetic direction. 00:36:31.020 |
about the brain's plastic potential to remodel, 00:36:34.880 |
and its plastic capacity to suddenly try to explain 00:36:54.680 |
and understanding the nature and the power of our brains. 00:37:08.360 |
such as locked-in syndrome, such as paraplegia, 00:37:18.920 |
So then we come to the more negative part of my ambivalence. 00:37:33.360 |
is probably largely out of ignorance more than anything else. 00:37:37.240 |
Generally speaking, the bandwidth and the bit rates 00:37:51.440 |
So that would be like me only being able to communicate 00:38:05.580 |
And it is not even within an order of magnitude 00:38:13.440 |
near what we actually need for an enactive realization 00:38:18.440 |
of what people aspire to when they think about 00:38:21.320 |
curing people with paraplegia or replacing sight, 00:38:33.760 |
on the kinds of recurrent information exchange 00:38:41.040 |
between a brain and some augmented or artificial interface? 00:39:01.840 |
but at the moment, there may be fundamental reasons 00:39:13.520 |
let's paint the challenge facing brain-computer interfacing 00:39:34.360 |
steady states and dynamics that the brain does, the weather. 00:39:41.120 |
- Imagine you had some very aggressive satellites 00:39:47.600 |
that could perturb some little parts of the weather system. 00:39:59.720 |
and make the weather respond in a way that I want it to? 00:40:03.400 |
You're talking about chaos control on a scale 00:40:13.760 |
as you might read about it in a science fiction novel, 00:40:31.960 |
that requires you to have evolved with that system, 00:40:35.280 |
that you have to be part of a very delicately structured, 00:40:40.280 |
deeply structured, dynamic, ensemble activity 00:40:51.640 |
or plugging in a peripheral interface adapter. 00:40:54.680 |
It is much more like getting into the weather patterns 00:41:03.560 |
and meaningfully relate that to the outside world. 00:41:07.160 |
So I think there are enormous challenges there. 00:41:09.920 |
- So I think the example of the weather is a brilliant one. 00:41:13.280 |
And I think you paint a really interesting picture 00:41:21.000 |
including the low bound of the bandwidth and so on. 00:41:32.760 |
is the engineering challenge of controlling the weather, 00:41:34.840 |
of getting those satellites up and running and so on. 00:41:37.560 |
And once they are, then the rest is fundamentally 00:41:42.240 |
the same approaches that allow you to win in a game of Go 00:41:46.880 |
will allow you to potentially play in this soup, 00:41:51.000 |
So I have a hope that sort of machine learning methods 00:41:58.840 |
But perhaps you're right that it is a biology 00:42:24.620 |
You can't make any mistakes, you can't damage things. 00:42:31.360 |
One of the things I was really impressed by at Neuralink 00:42:45.880 |
if anyone can do it, it's some of these world-class 00:42:50.800 |
So I think the conclusion of our discussion here 00:42:55.240 |
is of this part is basically that the problem 00:42:59.920 |
is really hard, but hopefully not impossible. 00:43:07.240 |
So you've also formulated a fascinating principle, 00:43:35.960 |
So I think it's interesting to acknowledge that. 00:43:46.040 |
On the one hand, but I should also acknowledge 00:43:51.880 |
it inherits an awful lot from machine learning as well. 00:43:55.340 |
So the free energy principle is just a formal statement 00:44:00.340 |
that the existential imperatives for any system 00:44:21.220 |
the probability of existing as the evidence that you exist. 00:44:25.720 |
And if you can write down that problem of existence 00:44:30.900 |
then you can use all the maths that has been developed 00:44:48.260 |
you can always interpret anything that exists 00:44:51.180 |
in virtue of being separate from the environment 00:45:03.620 |
And if you're from the machine learning community, 00:45:05.600 |
you will know that as a negative evidence lower bound 00:45:09.220 |
or a negative ELBO, which is the same as saying 00:45:17.860 |
are trying to maximize the compliment of that, 00:45:26.480 |
So that's basically the free energy principle. 00:45:30.140 |
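For readers who want the formal object behind "negative evidence lower bound", here is the standard textbook statement of variational free energy as a bound on surprisal (negative log evidence). The notation below (o for observations, s for hidden states, q for the approximate posterior) is an editorial assumption for illustration, not Karl's own symbols from the conversation.

```latex
% Variational free energy F as a bound on surprisal (negative log evidence).
% o = observations, s = hidden states, q = approximate posterior.
F[q] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
      \;=\; \underbrace{D_{\mathrm{KL}}\!\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge\, 0} \;-\; \ln p(o)
% Hence F >= -ln p(o); the evidence lower bound (ELBO) is simply -F, so
% minimising free energy is maximising (a bound on) model evidence.
```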
- But to even take a sort of a small step backwards, 00:45:37.080 |
There's a lot of beautiful poetic words here, 00:45:48.100 |
of trying to describe, if you're looking at a blob, 00:46:07.220 |
That's just a fascinating sort of philosophically 00:46:25.420 |
So maybe can you talk about that optimization view of it? 00:46:30.220 |
So what's trying to be minimized and maximized? 00:46:33.500 |
A system that's alive, what is it trying to minimize? 00:46:56.760 |
what licenses you to say that something exists? 00:47:08.060 |
of things that exist, then they have certain properties. 00:47:29.340 |
So it's good you introduced the word optimization. 00:47:32.200 |
So what the free energy principle in its sort of 00:47:42.480 |
and simplest says, is that if something exists, 00:48:08.560 |
as the evidence lower bound in machine learning 00:48:11.380 |
or Bayesian model evidence in Bayesian statistics, 00:48:23.500 |
which is a bound on surprisal, or self-information, 00:48:49.300 |
to be a physicist who was trying to understand 00:48:52.420 |
the fundaments of non-equilibrium steady state. 00:49:08.660 |
as a sort of more specific kind of case study? 00:49:15.700 |
but at its simplest, a single-celled organism 00:49:36.700 |
how on earth can you even elaborate questions 00:49:41.700 |
about the existence of a single drop of oil, for example? 00:50:00.640 |
which is the solvent in which it is immersed, 00:50:07.420 |
Why doesn't the oil just dissolve into solvent? 00:50:18.580 |
and the external states in which it's immersed, 00:50:22.300 |
if you're a physicist, say it would be the heat bath. 00:50:31.540 |
an ensemble of atoms or molecules immersed in the heat bath. 00:50:36.420 |
But the question is, how did the heat bath get there 00:50:44.500 |
I mean, it's such a fascinating idea of a drop of oil 00:50:58.660 |
and also the idea of like, where does the thing, 00:51:02.140 |
where does the drop of oil end and where does it begin? 00:51:07.140 |
- Right, so I mean, you're asking deep questions, 00:51:24.940 |
I answer from the point of view of a psychologist 00:51:28.260 |
and predictive coding and the brain as an inference machine. 00:51:31.820 |
But you haven't asked me from that perspective, 00:51:34.100 |
I'm answering from the point of view of a physicist. 00:51:41.220 |
but if it exists, what properties must it display? 00:51:44.660 |
So that's the deflationary part of the free energy principle. 00:51:47.100 |
The free energy principle does not supply an answer 00:51:57.900 |
That's the sort of the thing that's on offer. 00:52:01.740 |
And it so happens that these properties it must display 00:52:05.420 |
are actually intriguing and have this inferential gloss, 00:52:13.860 |
that inherits on the fact that the very preservation 00:52:18.140 |
of the boundary between the oil drop and the not oil drop 00:52:22.860 |
requires an optimization of a particular function 00:52:33.260 |
which is why I started with existential imperatives. 00:52:55.900 |
in computational chemistry with self-assembly. 00:53:12.180 |
from the states or the soup in which they are immersed. 00:53:16.840 |
So from the point of view of computational chemistry, 00:53:25.580 |
to minimize its free energy, its thermodynamic free energy. 00:53:31.660 |
that thermodynamic free energy is just the negative ELBO. 00:53:38.380 |
So the very emergence of existence of structure of form 00:53:42.700 |
that can be distinguished from the environment 00:53:49.420 |
necessitates the existence of an objective function 00:54:00.500 |
- And so just to clarify, I'm trying to wrap my head around. 00:54:05.100 |
So the free energy principle says that if something exists, 00:54:17.660 |
we can't just go into a soup and there's no mechanism. 00:54:21.580 |
Free energy principle doesn't give us a mechanism 00:54:25.940 |
Is that what's being implied that you can kind of use it 00:54:32.060 |
to reason, to think about, study a particular system 00:54:58.940 |
but you kind of drew a line between living and existing. 00:55:08.740 |
So things do exist, grains of sand, rocks on the moon, 00:55:23.860 |
from the environment in which they are immersed, 00:55:31.300 |
Taking this sort of model evidence interpretation 00:55:37.340 |
that basically means there's self-evidencing. 00:55:45.500 |
Statistically speaking, which I don't think I said that. 00:55:58.100 |
- I'm gonna have to think about that for a few days. 00:56:06.100 |
- So the step through to answer your question 00:56:15.780 |
First of all, you have to define what it means to exist, 00:56:22.060 |
you have to define what probabilistic properties 00:56:36.020 |
Again, it's not what's connected or what's correlated 00:56:56.900 |
And in this instance, basically being able to identify 00:57:09.100 |
well, there are actually four kinds of states 00:57:12.740 |
in any given universe that contains anything. 00:57:43.700 |
about trying to explore what it means to exist, 00:57:56.980 |
So anyway, so what, you were just talking about the surface, 00:58:01.180 |
- Yeah, so this surface, or these blanket states 00:58:21.580 |
or external states, which ones can influence each other 00:58:30.940 |
that you would find in non-equilibrium physics 00:58:33.580 |
or steady state or thermodynamics or hydrodynamics 00:58:43.220 |
And what it looks like is if all the normal gradient flows 00:58:48.140 |
that you would associate with any non-equilibrium system 00:58:56.180 |
part of the Markov blanket and the internal states 00:58:59.060 |
seem to be hill climbing or doing a gradient descent 00:59:13.180 |
You can write down the existence of this oil drop 00:59:16.020 |
in terms of flows, dynamics, equations of motion, 00:59:29.580 |
and must be, trying to look as if they're minimising 00:59:34.340 |
the same function, which is a log probability 00:59:39.300 |
Interesting thing is that, what would they be called 00:59:45.700 |
So what we're talking about are internal states, 01:00:16.620 |
Well, it means the active states, the internal states, 01:00:19.300 |
are now jointly not influenced by external states. 01:00:44.540 |
even a little oil drop with autonomous states 01:00:51.380 |
their variational free energy or their negative ELBO, 01:00:59.380 |
And that would be an interesting intellectual exercise. 01:01:12.100 |
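A compact way to write down the partition being described here, as a hedged sketch in standard notation (the symbols are assumptions, not quotes from the conversation): internal, external, sensory and active states, with the blanket formed by the sensory and active states.

```latex
% Markov blanket partition: internal \mu, external \eta, sensory s, active a,
% blanket b = (s, a). Internal and external states are conditionally
% independent given the blanket:
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b)
% The autonomous states (internal and active) then look as if they perform
% a gradient descent on variational free energy F:
\dot{\mu},\ \dot{a} \;\propto\; -\,\nabla_{\mu, a}\, F(s, a, \mu)
```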
Now we make the next move, but what about living things? 01:01:21.580 |
and a little tadpole or a little larva or a plankton? 01:01:44.700 |
So it has sensor capabilities and acting capabilities 01:02:02.220 |
I mean, yeah, mortality, I'm not exactly sure. 01:02:19.780 |
but does it in and of itself move autonomously? 01:02:40.620 |
- What I'm trying to say is you're absolutely right. 01:02:49.420 |
then you've got, I think, something that's living. 01:03:02.620 |
but they're not influenced by the external states, 01:03:07.220 |
So there are two types of oil drops, if you like. 01:03:10.500 |
There are oil drops where the internal states are so random 01:03:20.380 |
And the thing cannot, on balance, on average, 01:03:31.220 |
There's lots of intrinsic autonomous activity going on. 01:03:35.900 |
because it doesn't have the deep, in the millennial sense, 01:03:35.900 |
hierarchical structure that the brain does, 01:03:38.180 |
there is no overall mode or pattern or organisation 01:03:54.140 |
but on mass, at the scale of the actual surface of the sun, 01:03:58.340 |
the average position of that surface cannot in itself move 01:04:02.980 |
because the internal dynamics are more like a hot gas. 01:04:08.540 |
Whereas your internal dynamics are much more structured 01:04:19.780 |
your autonomic nervous system and its effectors, 01:04:28.340 |
if you haven't thought of it like this before, 01:04:32.500 |
there is no other way that you can change the universe 01:04:39.340 |
Whether that moving is articulating with my voice box 01:04:48.780 |
there's only one way you can change the universe, 01:04:52.900 |
- And the fact that you do so non-randomly makes you alive. 01:05:23.020 |
its active states are actually changing the external states. 01:05:34.100 |
that depends upon this deeply structured autonomous behaviour 01:05:44.420 |
that are not only modelling the data impressed 01:05:53.860 |
but they are actively resampling those data by moving. 01:05:58.860 |
They're moving towards chemical gradients and chemotaxis. 01:06:02.620 |
So they've gone beyond just being good little models 01:06:14.380 |
in a panpsychic sense, be construed as a little being 01:06:18.500 |
that has now perfectly inferred it's a passive, 01:06:22.620 |
non-living oil drop living in a bowl of water. 01:06:29.940 |
with the ability to go out and test that hypothesis 01:06:34.100 |
so it can actually push its surface over there, over there, 01:06:38.740 |
or then you start to move to much more lifelike form. 01:06:45.020 |
but it actually is quite important in terms of reflecting 01:06:48.940 |
what I have seen since the turn of the millennium, 01:07:05.860 |
this sort of the central importance of movement, 01:07:09.620 |
I think has yet to really hit machine learning. 01:07:14.100 |
It certainly has now diffused itself throughout robotics, 01:07:19.100 |
and perhaps you could say certain problems in active vision 01:07:27.340 |
But machine learning of the data mining, deep learning sort, 01:07:35.980 |
with the movement problem and the active sampling of data, 01:07:39.220 |
it's just said, "We don't need to worry about it. 01:07:40.700 |
"We can see all the data 'cause we've got big data." 01:07:52.300 |
- So current machine learning is much more like the oil drop. 01:07:59.580 |
to nearly all the data that we need to be exposed to, 01:08:12.180 |
Let's go and move and ingest food, for example, 01:08:15.700 |
and see what that, you know, is that evidence 01:08:17.700 |
that I'm the kind of thing that likes this kind of food. 01:08:20.380 |
- So the next natural question, and forgive this question, 01:08:23.980 |
but if we think of sort of even artificial intelligence 01:08:27.100 |
systems, which has just painted a beautiful picture 01:08:34.420 |
do you find within this framework a possibility 01:08:47.460 |
Self-awareness and expanded to consciousness, 01:09:00.860 |
- Well, yeah, I think it's possible to think about it, 01:09:12.700 |
I think you'd have to speak to a qualified philosopher 01:09:25.860 |
to try and tie down the maths and the calculus 01:09:34.060 |
either in terms of sort of a minimal consciousness, 01:09:42.380 |
And what I'm talking about is the ability effectively 01:09:52.380 |
So you could argue that a virus does have a form of agency 01:09:57.380 |
in virtue of the way that it selectively finds hosts 01:10:09.260 |
to think about planning and moving in a purposeful way 01:10:18.580 |
you might think an ant is not quite as unconscious 01:10:21.980 |
as a virus, it certainly seems to have a purpose. 01:10:26.140 |
It talks to its friends en route during its foraging. 01:10:43.140 |
with the complexity of planning that may contain an answer. 01:10:47.940 |
I mean, it'd be beautiful if we can find a line 01:10:51.460 |
beyond which we can say a being is conscious. 01:11:12.380 |
So you're saying it would be wonderful to draw a line. 01:11:33.100 |
with at what point does a pile of sand become a pile? 01:11:37.060 |
Is it one grain, two grains, three grains, or four grains? 01:12:17.220 |
literally the thermodynamics and gradient flows 01:12:34.320 |
That self-evidencing must be evidence for a model 01:12:51.140 |
it must include a model of the future consequences 01:12:53.820 |
of your active states or your action, just planning. 01:12:56.780 |
So we're now in the game of planning as inference. 01:13:05.220 |
because again, it's the consequences of moving. 01:13:08.500 |
It's the consequences of selecting those data 01:13:14.220 |
And that tells you immediately that even to be a contender 01:13:26.780 |
Then you've got to have movement in the game. 01:13:29.260 |
And furthermore, you've got to have a generative model 01:13:32.580 |
of the sort you might find in say a variational autoencoder 01:13:35.900 |
that is thinking about the future conditioned 01:13:41.900 |
Now that brings a number of things to the table, 01:13:45.340 |
well, those who've got all the right ingredients 01:13:50.700 |
of different courses of action into the future 01:14:27.300 |
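The "planning as inference" move described here is usually formalised with an expected free energy scored over policies, i.e. courses of action. One standard decomposition, with notation assumed for illustration rather than quoted from the conversation:

```latex
% Expected free energy of a policy \pi over future time steps \tau:
% the first term is the negative expected information gain (epistemic value),
% the second the expected log preference over outcomes (pragmatic value).
G(\pi) \;=\; \sum_{\tau}
  \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}\!\big[\ln q(s_\tau \mid \pi) - \ln q(s_\tau \mid o_\tau, \pi)\big]
  \;-\; \mathbb{E}_{q(o_\tau \mid \pi)}\!\big[\ln p(o_\tau)\big]
% Policies are then selected with probability proportional to exp(-G(\pi)),
% so action both resolves uncertainty and realises preferred outcomes.
```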
I don't think it gets you quite as far as self-aware though. 01:14:58.860 |
would probably say that a goldfish, a pet fish, 01:15:06.940 |
They would probably argue about their favorite cat, 01:15:53.260 |
I don't think that, well, perhaps we should just, 01:16:01.820 |
you'd see the truisms that you've just exposed for us. 01:16:07.620 |
I'm mindful that I didn't answer your question before. 01:16:11.380 |
Well, what's the free energy principle good for? 01:16:21.340 |
It can be regarded, it's gonna sound very arrogant, 01:16:24.060 |
but it is of the sort of theory of natural selection 01:16:42.060 |
It tells you nothing about the actual phenotype 01:16:44.740 |
and it wouldn't allow you to build something. 01:17:04.740 |
In a sense, the free energy principle has that same 01:17:22.220 |
And you just keep on going round and round and round. 01:17:35.660 |
like differential evolution or genetic algorithms 01:17:57.260 |
a probabilistic description of causes and consequences, 01:18:01.700 |
causes out there, consequences in the sensorium, 01:18:04.540 |
on the sensory parts of the Markov blanket, 01:18:04.540 |
and then cause it to autonomously self-evidence. 01:18:15.860 |
So you should be able to write down oil droplets. 01:18:20.100 |
where you have supplied the objective function 01:18:34.140 |
when you can write down your required evidence 01:18:46.780 |
or this data, given that model is effectively 01:18:54.220 |
of the variational free energy bounds or approximates. 01:18:58.260 |
That means that you can actually write down the model 01:19:00.900 |
and the kind of thing that you want to engineer, 01:19:04.660 |
the kind of AGI, artificial general intelligence, 01:19:16.740 |
but you would engineer a robot and a computer 01:19:19.820 |
to perform a gradient descent on that objective function. 01:19:38.060 |
is write down your perfect artifact probabilistically 01:19:43.060 |
in the form of a probabilistic generative model, 01:19:46.540 |
probability distribution over the causes and consequences 01:19:49.860 |
of the world in which this thing is immersed. 01:19:54.700 |
And then you just engineer a computer and a robot 01:19:58.060 |
to perform a gradient descent on that objective function. 01:20:08.060 |
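As a hedged illustration of "write down a generative model and perform a gradient descent on that objective function": the toy Python sketch below defines a one-dimensional Gaussian generative model and lets a point estimate of the hidden state descend the gradient of its variational free energy, which here reduces to precision-weighted prediction errors. Everything in it (the model, the parameters, the learning rate) is an assumption for illustration, not code from the conversation.

```python
# Toy sketch: perception as gradient descent on variational free energy for a
# linear Gaussian generative model p(o, s) = N(o; s, sigma_o^2) N(s; m, sigma_s^2),
# with a point-mass approximate posterior q(s) = delta(s - mu).
# Every name and number here is an illustrative assumption, not code from the talk.

def grad_free_energy(mu, o, m=0.0, sigma_o=0.5, sigma_s=1.0):
    # dF/dmu = -(precision-weighted sensory error) + (precision-weighted prior error)
    return -(o - mu) / sigma_o**2 + (mu - m) / sigma_s**2

o, mu, lr = 2.0, 0.0, 0.05        # observation, initial belief, step size
for _ in range(500):              # descend the free energy gradient
    mu -= lr * grad_free_energy(mu, o)

# For this linear model the exact posterior mean is the precision-weighted
# average of prior and data, which the descent should recover (about 1.6 here).
exact = (2.0 / 0.5**2 + 0.0 / 1.0**2) / (1 / 0.5**2 + 1 / 1.0**2)
print(f"gradient-descent estimate: {mu:.4f}, exact posterior mean: {exact:.4f}")
```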
So it's the form and the structure of that generative model, 01:20:12.180 |
which basically defines the artifact that you will create, 01:20:15.660 |
or indeed, the kind of artifact that has self-awareness. 01:20:22.060 |
very much like natural selection doesn't tell you 01:20:26.980 |
So you have to drill down on the actual phenotype, 01:20:36.380 |
that tells me immediately the kinds of generative models 01:20:40.700 |
I would have to write down in order to have self-awareness? 01:20:43.500 |
- What you said to me was, I have to have a model 01:20:48.220 |
that is effectively fit for purpose for this kind of world 01:20:53.700 |
And if I now make the observation that this kind of world 01:20:57.140 |
is effectively largely populated by other things like me, 01:21:16.340 |
then it becomes, again, mandated to have a sense of self. 01:21:25.260 |
by things like me, basically a social world, a community, 01:21:29.500 |
then it becomes necessary now for me to infer 01:21:34.420 |
I wouldn't need that if I was on Mars by myself, 01:21:46.500 |
a hypothesis, ah, yes, it is me that is experiencing 01:21:54.700 |
induced by the fact that there are others in that world. 01:21:58.260 |
So I think that the special thing about self-aware artifacts 01:22:03.260 |
is that they have learned to, or they have acquired, 01:22:08.300 |
or at least are equipped with, possibly by evolution, 01:22:14.580 |
there are lots of copies of things like them around, 01:22:17.380 |
and therefore they have to work out it's you and not me. 01:22:24.580 |
I never thought of that, that the purpose of, 01:22:28.460 |
the really usefulness of consciousness or self-awareness 01:22:32.980 |
in the context of planning existing in the world 01:22:35.940 |
is so you can operate with other things like you. 01:22:38.380 |
And like you could, it doesn't have to necessarily be human. 01:22:43.460 |
- Absolutely, well, we imbue a lot of our attributes 01:22:56.220 |
that basically you're me, and it's just your turn to talk. 01:23:04.180 |
the highest, if you like, manifestation or realization 01:23:09.620 |
I mean, the human condition doesn't get any higher 01:23:12.540 |
than this talking about the philosophy of existence 01:23:17.900 |
But in that conversation, there is a beautiful art 01:23:22.900 |
of turn-taking and mutual inference, theory of mind. 01:23:32.500 |
I have to have a model in my head of your model in your head. 01:23:35.780 |
That's the highest, the most sophisticated form 01:23:38.300 |
of generative model, where the generative model 01:23:51.220 |
Because without that, we'd both be talking over each other, 01:23:54.620 |
or we'd be singing together in a choir, you know? 01:23:58.260 |
That was just probably not, that's not a brilliant analogy 01:24:13.220 |
I'll re-listen to this conversation many times. 01:24:15.780 |
There's so much poetry in this, and mathematics. 01:24:21.600 |
Let me ask the silliest, or perhaps the biggest question 01:24:57.620 |
- I'm tempted to answer that, again, as a physicist. 01:25:01.740 |
Free energy I expect, consequent upon my behavior. 01:25:06.300 |
and we could get a really interesting conversation 01:25:11.840 |
searching for information, resolving uncertainty 01:25:16.580 |
But I suspect that you want a slightly more personal 01:25:20.220 |
and fun answer, but which can be consistent with that. 01:25:29.660 |
and harps back to what you were taught as a child, 01:25:35.380 |
that you have certain beliefs about the kind of creature 01:25:50.380 |
is fulfilling the beliefs about what kind of thing 01:25:55.740 |
And of course, we're all given those scripts, 01:26:01.300 |
usually in the form of bedtime stories or fairy stories, 01:26:04.340 |
that I'm a princess and I'm gonna meet a beast 01:26:07.220 |
who's gonna transform and it's gonna be a prince. 01:26:17.700 |
And then your objective function is to fulfill-- 01:26:27.180 |
also the sort of the culture in which you grew up 01:26:30.900 |
I mean, again, because of this active inference, 01:26:36.100 |
not only am I modeling my environment, my econiche, 01:26:36.100 |
So the question now is for me being very selfish, 01:27:02.260 |
It basically was a mixture between Einstein and Sherlock Holmes. 01:27:15.260 |
enjoy the fantasy that you're a popular scientist 01:27:20.260 |
who's gonna make a difference in a slightly quirky way. 01:27:28.300 |
and he loved sort of things like Sir Arthur Eddington's 01:27:41.780 |
So all the fairy stories I was told as I was growing up 01:27:53.140 |
But it's a journey of exploration, I suppose, of sorts. 01:27:58.180 |
what I imagine a mild-mannered Sherlock Holmes/Albert Einstein 01:28:10.100 |
Karl, it was a huge honor talking to you today. 01:28:15.580 |
- Thank you for listening to this conversation 01:28:18.500 |
and thank you to our presenting sponsor, Cash App. 01:28:23.060 |
by downloading Cash App and using code LEXPODCAST. 01:28:27.060 |
If you enjoy this podcast, subscribe on YouTube, 01:28:33.460 |
or simply connect with me on Twitter @lexfridman. 01:28:44.620 |
and your motor system seeks to minimize prediction error. 01:28:48.060 |
Thank you for listening, and hope to see you next time.