
Karl Friston: Neuroscience and the Free Energy Principle | Lex Fridman Podcast #99


Chapters

0:00 Introduction
1:50 How much of the human brain do we understand?
5:53 Most beautiful characteristic of the human brain
10:43 Brain imaging
20:38 Deep structure
21:23 History of brain imaging
32:31 Neuralink and brain-computer interfaces
43:05 Free energy principle
84:29 Meaning of life


00:00:00.000 | The following is a conversation with Karl Friston,
00:00:03.080 | one of the greatest neuroscientists in history.
00:00:06.960 | Cited over 245,000 times,
00:00:10.320 | known for many influential ideas in brain imaging,
00:00:13.580 | neuroscience, and theoretical neurobiology,
00:00:16.520 | including especially the fascinating idea
00:00:19.840 | of the free energy principle for action and perception.
00:00:23.280 | Karl's mix of humor, brilliance, and kindness,
00:00:28.040 | to me, are inspiring and captivating.
00:00:31.160 | This was a huge honor and a pleasure.
00:00:34.120 | This is the Artificial Intelligence Podcast.
00:00:36.800 | If you enjoy it, subscribe on YouTube,
00:00:38.920 | review it with five stars on Apple Podcasts,
00:00:41.280 | support it on Patreon,
00:00:42.560 | or simply connect with me on Twitter,
00:00:44.560 | Lex Fridman, spelled F-R-I-D-M-A-N.
00:00:48.080 | As usual, I'll do a few minutes of ads now,
00:00:50.480 | and never any ads in the middle
00:00:52.040 | that can break the flow of the conversation.
00:00:54.240 | I hope that works for you,
00:00:55.540 | and doesn't hurt the listening experience.
00:00:58.740 | This show is presented by Cash App,
00:01:00.900 | the number one finance app in the App Store.
00:01:03.220 | When you get it, use code LEXPODCAST.
00:01:06.300 | Cash App lets you send money to friends, buy Bitcoin,
00:01:09.420 | and invest in the stock market with as little as $1.
00:01:12.880 | Since Cash App allows you to send
00:01:14.500 | and receive money digitally,
00:01:16.220 | let me mention a surprising fact related to physical money.
00:01:20.240 | Of all the currency in the world,
00:01:22.020 | roughly 8% of it is actual physical money.
00:01:25.700 | The other 92% of money only exists digitally.
00:01:29.980 | So again, if you get Cash App from the App Store,
00:01:32.300 | Google Play, and use the code LEXPODCAST,
00:01:35.740 | you get $10, and Cash App will also donate $10 to FIRST,
00:01:39.940 | an organization that is helping to advance robotics
00:01:42.700 | and STEM education for young people around the world.
00:01:45.820 | And now, here's my conversation with Karl Friston.
00:01:50.040 | How much of the human brain do we understand
00:01:53.380 | from the low level of neuronal communication
00:01:56.580 | to the functional level to the highest level,
00:02:01.580 | maybe the psychiatric disorder level?
00:02:04.940 | - Well, we're certainly in a better position
00:02:06.420 | than we were last century. (laughs)
00:02:08.980 | How far we've got to go, I think,
00:02:10.440 | is almost an unanswerable question.
00:02:12.660 | So you'd have to set the parameters,
00:02:16.100 | what constitutes understanding,
00:02:18.640 | what level of understanding do you want?
00:02:21.740 | I think we've made enormous progress
00:02:25.380 | in terms of broad brush principles.
00:02:27.820 | Whether that affords a detailed cartography
00:02:32.460 | of the functional anatomy of the brain
00:02:34.140 | and what it does right down to the microcircuitry
00:02:36.980 | in the neurons, that's probably out of reach
00:02:40.620 | at the present time.
00:02:42.220 | - So the cartography, so mapping the brain,
00:02:44.900 | do you think mapping of the brain,
00:02:47.440 | the detailed, perfect imaging of it,
00:02:50.380 | does that get us closer to understanding
00:02:54.100 | of the mind, of the brain?
00:02:56.060 | So how far does it get us
00:02:58.100 | if we have that perfect cartography of the brain?
00:03:01.580 | - I think there are lower bounds on that.
00:03:03.060 | It's a really interesting question.
00:03:04.820 | And it would determine the sort of scientific career
00:03:09.220 | you'd pursue if you believe that knowing
00:03:11.620 | every dendritic connection, every sort of microscopic
00:03:16.020 | synaptic structure right down to the molecular level
00:03:18.740 | was gonna give you the right kind of information
00:03:22.260 | to understand the computational anatomy,
00:03:25.340 | then you'd choose to be a microscopist
00:03:27.180 | and you would study little cubic millimeters of brain
00:03:32.180 | for the rest of your life.
00:03:33.500 | If on the other hand, you were interested
00:03:35.300 | in holistic functions and a sort of functional anatomy
00:03:40.300 | of the sort that a neuropsychologist would understand,
00:03:44.100 | you'd study brain lesions and strokes,
00:03:46.460 | just looking at the whole person.
00:03:48.460 | So again, it comes back to
00:03:50.260 | at what level do you want understanding?
00:03:52.980 | I think there are principled reasons not to go too far.
00:03:56.140 | If you commit to a view of the brain
00:04:01.660 | as a machine that's performing a form of inference
00:04:06.660 | and representing things,
00:04:09.020 | then that understanding, that level of understanding
00:04:15.980 | is necessarily cast in terms of probability densities
00:04:20.540 | and ensemble densities, distributions.
00:04:24.180 | And what that tells you is that you don't really want
00:04:27.100 | to look at the atoms to understand the thermodynamics
00:04:30.500 | of probabilistic descriptions of how the brain works.
00:04:34.420 | So I personally wouldn't look at the molecules
00:04:38.620 | or indeed the single neurons in the same way
00:04:41.700 | if I wanted to understand the thermodynamics
00:04:44.060 | of some non-equilibrium steady state of a gas
00:04:47.220 | or an active material.
00:04:48.820 | I wouldn't spend my life looking at the individual molecules
00:04:53.820 | that constitute that ensemble.
00:04:55.460 | I'd look at their collective behavior.
00:04:57.500 | On the other hand, if you go too coarse grain,
00:05:00.180 | you're gonna miss some basic canonical principles
00:05:03.900 | of connectivity and architectures.
00:05:06.300 | I'm thinking here, and this is a bit colloquial,
00:05:10.020 | but there's current excitement
00:05:11.740 | about high-field magnetic resonance imaging at 7 Tesla.
00:05:17.460 | Well, it gives us for the first time the opportunity
00:05:19.500 | to look at the brain in action
00:05:21.780 | at the level of a few millimeters
00:05:23.940 | that distinguish between different layers of the cortex
00:05:27.260 | that may be very important in terms of
00:05:29.380 | evincing generic principles of canonical microcircuitry
00:05:35.300 | that are replicated throughout the brain
00:05:37.660 | that may tell us something fundamental
00:05:39.140 | about message passing in the brain
00:05:41.180 | and these density dynamics or neuronal ensemble
00:05:44.940 | population dynamics that underwrite our brain function.
00:05:49.420 | So somewhere between a millimeter and a meter.
00:05:52.980 | - Lingering for a bit on the big questions,
00:05:56.500 | if you allow me, what to you is the most beautiful
00:06:00.180 | or surprising characteristic of the human brain?
00:06:03.460 | - I think it's its hierarchical and recursive aspect,
00:06:06.780 | its recurrent aspect.
00:06:08.700 | - Of the structure or of the actual representational
00:06:11.620 | power of the brain?
00:06:12.900 | - Well, I think one speaks to the other.
00:06:15.380 | I was actually answering in a dull-minded way
00:06:18.460 | from the point of view of purely its anatomy
00:06:20.380 | and its structural aspects.
00:06:22.980 | I mean, there are many marvelous organs in the body.
00:06:26.740 | Let's take your liver, for example.
00:06:28.500 | Without it, you wouldn't be around for very long.
00:06:32.620 | And it does some beautiful and delicate biochemistry
00:06:35.980 | and homeostasis, and it's evolved with a finesse
00:06:40.980 | that would easily parallel the brain.
00:06:43.020 | But it doesn't have a beautiful anatomy.
00:06:45.140 | It has a simple anatomy, which is attractive
00:06:47.020 | in a minimalist sense, but it doesn't have
00:06:48.940 | that crafted structure of sparse connectivity
00:06:52.820 | and that recurrence and that specialization
00:06:55.380 | that the brain has.
00:06:56.340 | - So you said a lot of interesting terms here.
00:06:58.300 | So the recurrence, the sparsity,
00:07:00.680 | but you also started by saying hierarchical.
00:07:03.500 | So I've never thought of our brain as hierarchical.
00:07:08.500 | Sort of I always thought it was just like a giant mess,
00:07:14.300 | interconnected mess where it's very difficult
00:07:16.700 | to figure anything out.
00:07:18.180 | But in what sense do you see the brain as hierarchical?
00:07:21.820 | - Well, I see it's not a magic soup.
00:07:24.700 | (both laughing)
00:07:26.660 | Of course, it's what I used to think
00:07:29.420 | before I studied medicine and the like.
00:07:33.380 | So a lot of those terms imply each other.
00:07:37.860 | So hierarchies, if you just think about the nature
00:07:42.060 | of a hierarchy, how would you actually build one?
00:07:46.020 | And what you would have to do is basically carefully remove
00:07:49.140 | the right connections that destroy
00:07:52.180 | the completely connected soups that you might have in mind.
00:07:56.100 | So a hierarchy is in and of itself defined
00:08:00.140 | by a sparse and particular connectivity structure.
00:08:03.200 | I'm not committing to any particular form of hierarchy.
00:08:07.540 | - But your sense is there is some.
00:08:10.140 | - Oh, absolutely, yeah.
00:08:11.420 | In virtue of the fact that there is a sparsity
00:08:14.220 | of connectivity, not necessarily of a qualitative sort,
00:08:19.220 | but certainly of a quantitative sort.
00:08:20.460 | So it is demonstrably so that the further apart
00:08:26.380 | two parts of the brain are,
00:08:29.220 | the less likely they are to be wired,
00:08:32.260 | to possess axonal processes, neuronal processes
00:08:35.820 | that directly communicate one message
00:08:39.100 | or messages from one part of that brain
00:08:41.140 | to the other part of the brain.
00:08:42.660 | So we know there's a sparse connectivity.
00:08:44.780 | And furthermore, on the basis of anatomical connectivity
00:08:50.300 | and tracer studies, we know that that has a direct effect
00:08:55.460 | that sparsity underwrites a hierarchical
00:09:00.100 | and very structured sort of connectivity
00:09:05.020 | that might be best understood a little bit like an onion.
00:09:08.500 | There is a concentric, sometimes referred to as centripetal
00:09:14.380 | by people like Marcel Mesulam,
00:09:17.500 | hierarchical organization to the brain.
00:09:19.340 | So you can think of the brain as in a rough sense,
00:09:23.860 | like an onion and all the sensory information
00:09:28.060 | and all the efferent outgoing messages
00:09:31.260 | that supply commands to your muscles
00:09:33.460 | or to your secretory organs come from the surface.
00:09:37.100 | So there's a massive exchange interface
00:09:40.140 | with the world out there on the surface.
00:09:43.060 | And then underneath there's a little layer
00:09:45.140 | that sits and looks at the exchange on the surface.
00:09:49.380 | And then underneath that, there's a layer
00:09:51.380 | right the way down to the very center,
00:09:53.380 | to the deepest part of the onion.
00:09:55.700 | That's what I mean by a hierarchical organization.
00:09:58.660 | There's a discernible structure
00:10:01.260 | defined by the sparsity of connections
00:10:04.820 | that lends the architecture a hierarchical structure
00:10:08.580 | that tells one a lot about the kinds of representations
00:10:12.740 | and messages.
00:10:13.580 | So going back to your earlier question,
00:10:15.620 | is this about the representational capacity
00:10:18.220 | or is it about the anatomy?
00:10:20.420 | Well, one underwrites the other.
00:10:23.460 | If one simply thinks of the brain
00:10:26.380 | as a message-passing machine,
00:10:29.100 | a process that is in the service of doing something,
00:10:32.580 | then the circuitry and the connectivity
00:10:37.500 | that shape that message-passing also dictate its function.
00:10:42.500 | - So you've done a lot of amazing work
00:10:45.300 | in a lot of directions.
00:10:46.980 | So let's look at one aspect of that,
00:10:49.580 | of looking into the brain
00:10:51.380 | and trying to study this onion structure.
00:10:53.620 | What can we learn about the brain by imaging it,
00:10:57.980 | which is one way to sort of look at the anatomy of it,
00:11:01.220 | broadly speaking?
00:11:02.620 | What are the methods of imaging,
00:11:05.020 | but even bigger, what can we learn about it?
00:11:07.940 | - Right, so, well, most imaging,
00:11:11.260 | human neuroimaging that you might see
00:11:15.220 | in science journals,
00:11:19.260 | that speaks to the way the brain works,
00:11:22.820 | measures brain activity over time.
00:11:24.620 | So, you know, that's the first thing to say,
00:11:26.300 | that we're effectively looking at fluctuations
00:11:30.260 | in neuronal responses,
00:11:32.900 | usually in response to some sensory input
00:11:36.420 | or some instruction, some task.
00:11:40.860 | Not necessarily, there's a lot of interest
00:11:42.500 | in just looking at the brain
00:11:43.940 | in terms of resting state, endogenous or intrinsic activity.
00:11:49.220 | But crucially, at every point,
00:11:51.020 | looking at these fluctuations,
00:11:52.700 | either induced or intrinsic in the neural activity,
00:11:56.660 | and understanding them at two levels.
00:11:59.300 | So normally people would have recourse
00:12:03.820 | to two principles of brain organization
00:12:06.500 | that are complementary.
00:12:07.700 | One, functional specialization or segregation.
00:12:10.740 | So what does that mean?
00:12:12.140 | It simply means that there are certain parts of the brain
00:12:16.580 | that may be specialized for certain kinds of processing.
00:12:19.580 | You know, for example, visual motion,
00:12:22.180 | our ability to recognize or to perceive movement
00:12:26.340 | in the visual world.
00:12:28.060 | And furthermore, that specialized processing
00:12:31.380 | may be spatially or anatomically segregated,
00:12:34.820 | leading to functional segregation.
00:12:37.340 | Which means that if I were to compare your brain activity
00:12:41.180 | during a period of viewing a static image,
00:12:45.500 | and then compare that to the responses
00:12:48.740 | of fluctuations in the brain
00:12:50.500 | when you were exposed to a moving image,
00:12:52.980 | say a flying bird,
00:12:54.780 | I would expect to see
00:12:56.580 | restricted, segregated differences in activity.
00:13:02.180 | And those are basically the hotspots that you see
00:13:04.460 | in the statistical parametric maps
00:13:06.940 | that test for the significance of the responses
00:13:09.420 | that are circumscribed.
00:13:11.660 | So now basically we're talking about
00:13:15.020 | what some people have perhaps unkindly called
00:13:18.060 | a neo-phrenology.
00:13:19.300 | This is a phrenology augmented by modern-day neuroimaging.
00:13:24.300 | Basically finding blobs or bumps on the brain
00:13:29.220 | that do this or do that.
00:13:31.060 | And trying to understand the cartography
00:13:33.300 | of that functional specialization.
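
A minimal sketch of the comparison just described, on invented data: one response array per condition, a voxelwise t-test, and a threshold that leaves the "hotspot" voxels. This is only the bare logic, not SPM itself, which adds haemodynamic modelling, spatial smoothing, and multiple-comparison control.

```python
# Voxelwise functional segregation, sketched: compare responses under a
# "static" and a "moving" condition, keep voxels where they differ.
# All shapes, names, and data are invented for illustration.
import numpy as np
from scipy import stats

n_trials, n_voxels = 40, 1000
rng = np.random.default_rng(0)

static = rng.normal(0.0, 1.0, (n_trials, n_voxels))  # viewing a static image
moving = rng.normal(0.0, 1.0, (n_trials, n_voxels))  # viewing a flying bird
moving[:, 100:110] += 1.5                             # a small motion-sensitive patch

t, p = stats.ttest_ind(moving, static, axis=0)        # one t-value per voxel
hotspots = p < 0.001                                  # uncorrected threshold, illustrative
print(f"{hotspots.sum()} voxels survive the threshold")
```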
00:13:35.900 | - So how much is there such,
00:13:39.060 | since that's such a beautiful sort of ideal to strive for,
00:13:44.300 | we human scientists would like to hope
00:13:47.540 | that there's a beautiful structure to this,
00:13:49.220 | whereas like you said, there are segregated regions
00:13:51.620 | that are responsible for the different function.
00:13:54.300 | How much hope is there to find such regions
00:13:57.300 | in terms of looking at the progress of studying the brain?
00:14:00.620 | - Oh, I think enormous progress has been made
00:14:02.980 | in the past 20 or 30 years.
00:14:05.080 | So this is beyond incremental.
00:14:08.900 | At the advent of brain imaging,
00:14:12.160 | the very notion of functional segregation
00:14:15.060 | was just a hypothesis.
00:14:16.860 | Based upon a century, if not more,
00:14:20.100 | of careful neuropsychology,
00:14:21.660 | looking at people who had lost via insult
00:14:24.900 | or traumatic brain injury,
00:14:28.020 | particular parts of the brain,
00:14:29.540 | and then say, well, they can't do this
00:14:31.020 | or they can't do that.
00:14:32.320 | For example, losing the visual cortex
00:14:34.380 | and not being able to see,
00:14:35.860 | or losing particular parts of the visual cortex
00:14:39.860 | or regions known as V5
00:14:44.320 | or the middle temporal region, MT,
00:14:48.220 | and noticing that they selectively
00:14:49.840 | could not see moving things.
00:14:51.560 | And so that created the hypothesis
00:14:55.400 | that perhaps movement processing,
00:14:58.680 | visual movement processing,
00:14:59.980 | was located in this functionally segregated area.
00:15:03.000 | And you could then go and put invasive electrodes
00:15:05.960 | in animal models and say, yes, indeed,
00:15:08.560 | we can excite activity here,
00:15:10.780 | we can form receptive fields
00:15:13.320 | that are sensitive to or defined
00:15:14.960 | in terms of visual motion.
00:15:16.500 | But at no point could you exclude the possibility
00:15:18.600 | that everywhere else in the brain
00:15:20.080 | was also very interested in visual motion.
00:15:23.200 | - By the way, I apologize to interrupt,
00:15:24.960 | but it's a tiny little tangent.
00:15:27.040 | You said animal models.
00:15:28.460 | Just out of curiosity, from your perspective,
00:15:32.400 | how different is the human brain
00:15:34.520 | versus the other animals in terms of our ability
00:15:37.580 | to study the brain?
00:15:39.040 | - Well, clearly, the further away you go
00:15:42.660 | from a human brain, the greater the difference is,
00:15:45.800 | but not as remarkable as you might think.
00:15:49.000 | So people will choose their level
00:15:51.480 | of approximation to the human brain
00:15:54.360 | depending upon the kinds of questions
00:15:56.600 | that they want to answer.
00:15:57.440 | So if you're talking about sort of canonical principles
00:16:00.680 | of microcircuitry, it might be perfectly okay
00:16:02.840 | to look at a mouse, indeed.
00:16:04.600 | You could even look at flies, worms.
00:16:08.120 | If, on the other hand, you wanted to look
00:16:09.700 | at the finer details of organization of visual cortex
00:16:13.140 | and V1, V2, these are designated sort of patches of cortex
00:16:17.460 | that may do different things, indeed do,
00:16:20.680 | you'd probably want to use a primate
00:16:23.360 | that looked a little bit more like a human
00:16:25.940 | because there are lots of ethical issues
00:16:28.300 | in terms of the use of non-human primates
00:16:32.820 | to answer questions about human anatomy.
00:16:37.500 | But I think most people assume
00:16:39.060 | that most of the important principles
00:16:42.140 | are conserved in a continuous way,
00:16:44.580 | right from, well, yes, worms, right the way through
00:16:50.020 | to you and me.
00:16:53.300 | - So now returning to, so that was the early ideas
00:16:56.940 | of studying the functional regions of the brain
00:17:00.860 | and if there's some damage to it,
00:17:02.860 | to try to infer that that part of the brain
00:17:06.180 | might be somewhat responsible for this type of function.
00:17:09.100 | So where does that lead us?
00:17:11.140 | What are the next steps beyond that?
00:17:12.980 | - Right, well, just actually to reverse a bit,
00:17:16.140 | come back to your sort of notion
00:17:17.860 | that the brain is a magic soup.
00:17:19.580 | That was actually a very prominent idea at one point,
00:17:22.600 | notions such as Lashley's law of mass action
00:17:27.140 | inherited from the observation
00:17:30.020 | that for certain animals,
00:17:34.020 | if you just took out spoonfuls of the brain,
00:17:36.700 | it didn't matter where you took these spoonfuls out,
00:17:38.860 | they always showed the same kinds of deficits.
00:17:41.020 | So it was very difficult to infer functional specialization
00:17:45.140 | purely on the basis of lesion-deficit studies.
00:17:48.880 | But once we had the opportunity
00:17:51.400 | to look at the brain lighting up
00:17:52.980 | and, literally, its sort of excitement,
00:17:55.300 | neuronal excitement when looking at this versus that,
00:18:00.300 | one was able to say, yes, indeed,
00:18:03.780 | these functionally specialized responses
00:18:05.500 | are very restricted and they're here or they're over there.
00:18:09.260 | If I do this, then this part of the brain lights up.
00:18:11.820 | And that became doable in the early '90s.
00:18:16.820 | In fact, shortly before
00:18:18.700 | with the advent of positron emission tomography
00:18:21.540 | and then functional magnetic resonance imaging came along.
00:18:25.460 | In the early '90s and since that time,
00:18:28.140 | there has been an explosion of discovery,
00:18:31.820 | refinement, confirmation.
00:18:34.780 | There are people who believe that it's all in the anatomy.
00:18:38.940 | If you understand the anatomy,
00:18:40.260 | then you understand the function at some level.
00:18:43.220 | And many, many hypotheses were predicated
00:18:45.780 | on a deep understanding of the anatomy and the connectivity,
00:18:49.960 | but they were all confirmed and taken much further
00:18:53.880 | with neuroimaging.
00:18:55.140 | So that's what I meant by we've made an enormous
00:18:57.300 | amount of progress in this century, indeed,
00:19:01.180 | and in relation to the previous century,
00:19:03.940 | by looking at these functionally selective responses.
00:19:08.940 | But that wasn't the whole story.
00:19:11.140 | So there's this sort of neophrenology,
00:19:13.580 | but finding bumps and hot spots in the brain
00:19:15.580 | that did this or that.
00:19:16.700 | The bigger question was, of course,
00:19:20.020 | the functional integration.
00:19:22.140 | How all of these regionally specific responses
00:19:26.960 | were orchestrated, how they were distributed,
00:19:29.840 | how did they relate to distributed processing
00:19:32.640 | and indeed representations in the brain?
00:19:35.560 | So then you turn to the more challenging issue
00:19:39.520 | of the integration, the connectivity,
00:19:42.640 | and then we come back to this beautiful,
00:19:44.920 | sparse, recurrent, hierarchical connectivity
00:19:49.040 | that seems characteristic of the brain
00:19:51.080 | and probably not many other organs.
00:19:53.580 | - But nevertheless, we come back to this challenge
00:19:58.260 | of trying to figure out how everything is integrated.
00:20:01.100 | But what's your feeling, what's the general consensus?
00:20:04.220 | Have we moved away from the magic soup view of the brain?
00:20:07.340 | - Yes.
00:20:08.180 | - So there is a deep structure to it.
00:20:10.540 | And then maybe a further question,
00:20:14.460 | you said some people believe that the structure
00:20:16.820 | is most of it, that you could really get
00:20:19.180 | at the core of the function by just deeply understanding
00:20:21.580 | the structure.
00:20:22.520 | Where do you sit on that?
00:20:25.160 | - I think it's got some mileage to it, yes.
00:20:27.340 | Yeah, yeah.
00:20:28.180 | - So it's a worthy pursuit of going,
00:20:30.280 | of studying through imaging and all the different methods
00:20:34.760 | to actually study the structure.
00:20:35.780 | - No, absolutely, yeah.
00:20:38.300 | Sorry, I'm just noting, you were accusing me
00:20:41.140 | of using lots of long words and then you introduced one
00:20:43.460 | there which is deep, which is interesting.
00:20:46.300 | Because deep is the sort of millennial equivalent
00:20:50.380 | of hierarchical.
00:20:51.620 | So if you put deep in front of anything,
00:20:53.760 | not only are you very millennial and very trending,
00:20:57.600 | but you're also implying a hierarchical architecture.
00:21:01.500 | So it is a depth which is, for me, the beautiful thing.
00:21:05.260 | - That's right, the word deep kind of, yeah, exactly.
00:21:08.260 | It implies hierarchy.
00:21:10.260 | I didn't even think about that.
00:21:11.500 | That indeed the implicit meaning of the word deep
00:21:15.460 | is hierarchy.
00:21:16.860 | - Yep.
00:21:17.680 | - Yeah.
00:21:18.520 | - So deep inside the onion is the center of your soul.
00:21:20.860 | (Lex laughing)
00:21:22.380 | - Beautifully put.
00:21:23.500 | Maybe briefly, if you could paint a picture
00:21:26.800 | of the kind of methods of neuroimaging,
00:21:30.980 | maybe the history which you were a part of,
00:21:33.420 | from statistical parametric mapping.
00:21:35.180 | I mean, just what's out there that's interesting
00:21:37.940 | for people maybe outside the field to understand
00:21:41.660 | of what are the actual methodologies
00:21:43.460 | of looking inside the human brain.
00:21:45.320 | - Right, well, you can answer that question
00:21:47.420 | from two perspectives.
00:21:48.320 | Basically, it's the modality.
00:21:49.900 | What kind of signal are you measuring?
00:21:52.580 | And they can range from, and let's limit ourselves
00:21:56.860 | to some imaging-based non-invasive techniques.
00:22:01.060 | So you've essentially got brain scanners,
00:22:03.020 | and brain scanners can either measure
00:22:05.340 | the structural attributes, the amount of water,
00:22:07.660 | the amount of fat, or the amount of iron
00:22:09.300 | in different parts of the brain.
00:22:10.420 | You can make lots of inferences about the structure
00:22:13.780 | of the organ of the sort that you might have
00:22:16.660 | abduced from an X-ray, but a very nuanced X-ray
00:22:20.480 | that is looking at this kind of property
00:22:22.460 | or that kind of property.
00:22:24.400 | So looking at the anatomy non-invasively
00:22:27.860 | would be the first sort of neuroimaging
00:22:30.120 | that people might want to employ.
00:22:32.160 | Then you move on to the kinds of measurements
00:22:34.960 | that reflect dynamic function.
00:22:38.120 | And the most prevalent of those fall into two camps.
00:22:42.000 | You've got these metabolic, sometimes hemodynamic,
00:22:46.440 | blood-related signals.
00:22:48.920 | So these metabolic and/or hemodynamic signals
00:22:53.440 | are basically proxies for elevated activity
00:22:58.400 | and message passing and neuronal dynamics,
00:23:03.400 | in particular parts of the brain.
00:23:05.340 | Characteristically, though, the time constants
00:23:07.660 | of these hemodynamic or metabolic responses
00:23:11.420 | to neural activity are much longer
00:23:14.320 | than the neural activity itself.
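
To make that mismatch of time constants concrete, here is a small sketch assuming the double-gamma shape commonly used as a canonical haemodynamic response function (peak around five to six seconds, undershoot around sixteen): brief neural events, once convolved with it, become a signal that evolves over seconds.

```python
# Why haemodynamic signals are slow: convolve brief neural events with a
# canonical double-gamma haemodynamic response function (HRF). The
# parameters follow the commonly used SPM-style shape; treat them as
# illustrative rather than definitive.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                         # seconds per sample
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # double-gamma HRF shape
hrf /= hrf.sum()

neural = np.zeros(600)                           # 60 s of "neural activity"
neural[[50, 300, 310]] = 1.0                     # three brief (~100 ms) events

bold = np.convolve(neural, hrf)[: neural.size]   # the measured, blurred proxy
print(f"events last {dt:.1f} s; the HRF peaks ~{t[hrf.argmax()]:.1f} s after each")
```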
00:23:15.840 | - And this is referring, forgive me for the dumb questions,
00:23:20.600 | but this would be referring to blood,
00:23:22.960 | like the flow of blood?
00:23:24.280 | - Absolutely, absolutely.
00:23:25.120 | - So there's a ton of, it seems like there's a ton
00:23:27.360 | of blood vessels in the brain.
00:23:29.400 | - Yeah.
00:23:30.240 | - So, but what's the interaction between the flow of blood
00:23:33.720 | and the function of the neurons?
00:23:36.000 | Is there an interplay there, or?
00:23:37.400 | - Yep, yep.
00:23:38.740 | And that interplay accounts for several careers
00:23:42.740 | of world-renowned scientists.
00:23:45.100 | Yes, absolutely.
00:23:46.160 | So this is known as neurovascular coupling,
00:23:49.100 | is exactly what you said.
00:23:50.140 | It's how does the neural activity,
00:23:52.260 | the neuronal infrastructure, the actual message passing
00:23:54.580 | that we think underlies our capacity to perceive and act,
00:23:59.580 | how is that coupled to the vascular responses
00:24:05.980 | that supply the energy for that neural processing?
00:24:09.880 | So there's a delicate web of large vessels,
00:24:13.360 | arteries and veins, that gets progressively finer
00:24:16.360 | and finer in detail until it perfuses
00:24:18.800 | at a microscopic level the machinery
00:24:21.840 | where little neurons lie.
00:24:23.760 | So coming back to this sort of onion perspective,
00:24:26.480 | we were talking before using the onion as a metaphor
00:24:30.480 | for a deep hierarchical structure,
00:24:32.440 | but also I think it's just anatomically
00:24:36.020 | quite a useful metaphor.
00:24:37.780 | All the action, all the heavy lifting
00:24:40.100 | in terms of neural computation is done
00:24:41.660 | on the surface of the brain,
00:24:43.940 | and then the interior of the brain is constituted
00:24:47.380 | by fatty wires, essentially, axonal processes
00:24:52.380 | that are enshrouded by myelin sheaths.
00:24:55.940 | And these, when you dissect them,
00:24:58.280 | they look fatty and white, and so it's called white matter,
00:25:01.240 | as opposed to the actual neuropil,
00:25:04.100 | which does the computation, constituted largely by neurons,
00:25:07.220 | and that's known as gray matter.
00:25:08.700 | So the gray matter is a surface or a skin
00:25:13.260 | that sits on top of this big ball,
00:25:16.260 | now we are talking magic soup,
00:25:17.780 | but it's a big ball of connections, like spaghetti,
00:25:20.860 | very carefully structured with sparse connectivity
00:25:23.100 | that preserves this deep hierarchical structure,
00:25:25.760 | but all the action takes place on the surface,
00:25:28.300 | on the cortex of the onion.
00:25:30.960 | And that means that you have to supply
00:25:35.960 | the right amount of blood flow,
00:25:38.560 | the right amount of nutrient,
00:25:41.120 | which is rapidly absorbed and used by neural cells
00:25:45.240 | that don't have the same capacity
00:25:46.800 | that your leg muscles would have
00:25:48.760 | to basically spend their energy budget
00:25:52.480 | and then claim it back later.
00:25:55.000 | So one peculiar thing about cerebral metabolism,
00:25:58.440 | brain metabolism, is it really needs to be driven
00:26:01.460 | in the moment, which means you basically
00:26:03.060 | have to turn on the taps.
00:26:04.860 | So if there's lots of neural activity
00:26:08.700 | in one part of the brain, a little patch
00:26:10.780 | of a few millimeters, even less possibly,
00:26:14.100 | you really do have to water that piece of the garden
00:26:16.580 | now and quickly, and by quickly,
00:26:19.740 | I mean within a couple of seconds.
00:26:21.540 | - So that contains a lot of information, hence the imaging
00:26:26.100 | could tell you a story of what's happening.
00:26:28.260 | - Absolutely.
00:26:29.600 | But it is slightly compromised in terms of the resolution.
00:26:33.440 | So the deployment of these little micro vessels
00:26:37.360 | that water the garden to enable the neural activity
00:26:42.360 | to play out, the spatial resolution
00:26:45.320 | is on the order of a few millimeters.
00:26:48.000 | And crucially, the temporal resolution
00:26:50.360 | is on the order of a few seconds.
00:26:52.180 | So you can't get right down and dirty
00:26:54.400 | into the actual spatial and temporal scale
00:26:57.360 | of neural activity in and of itself.
00:26:59.900 | To do that, you'd have to turn to the other
00:27:01.360 | big imaging modality, which is the recording
00:27:04.080 | of electromagnetic signals as they're generated
00:27:06.480 | in real time.
00:27:07.700 | So here, the temporal bandwidth, if you like,
00:27:10.280 | or the low limit on the temporal resolution
00:27:12.920 | is incredibly small.
00:27:15.120 | You're talking about, you know,
00:27:16.200 | nanoseconds, milliseconds.
00:27:18.040 | And then you can get into the phasic fast responses
00:27:23.520 | that is in and of itself the neural activity
00:27:27.440 | and start to see the succession or cascade
00:27:32.440 | of hierarchical recurrent message passing
00:27:35.200 | evoked by a particular stimulus.
00:27:37.120 | But the problem is you're looking at electromagnetic signals
00:27:41.480 | that have passed through an enormous amount
00:27:44.560 | of magic soup or spaghetti of connectivity
00:27:47.840 | and through the scalp and the skull.
00:27:50.680 | And it's become spatially very diffuse.
00:27:52.920 | It's very difficult to know where you are.
00:27:54.920 | So you've got this sort of catch 22.
00:27:58.560 | You can either use an imaging modality
00:28:00.240 | that tells you within millimeters
00:28:02.800 | which part of the brain is activated,
00:28:04.400 | but you don't know when.
00:28:05.760 | Or you've got these electromagnetic EEG, MEG setups
00:28:10.760 | that tell you to within a few milliseconds
00:28:15.800 | when something has responded, but you don't know where.
00:28:19.200 | So you've got these two complementary measures,
00:28:22.480 | either indirect via the blood flow
00:28:25.840 | or direct via the electromagnetic signals
00:28:28.760 | caused by neural activity.
00:28:31.080 | These are the two big imaging devices.
00:28:33.360 | And then the second level of responding to your question,
00:28:36.960 | what are the, from the outside,
00:28:39.440 | what are the big ways of using this technology?
00:28:43.400 | So once you've chosen the kind of neuroimaging
00:28:47.160 | that you want to use to answer your set questions,
00:28:50.280 | and sometimes it would have to be both,
00:28:52.280 | then you've got a whole raft of analyses,
00:28:57.560 | time series analyses usually,
00:28:59.640 | that you can bring to bear in order to answer your questions
00:29:04.360 | or address your hypotheses about those data.
00:29:07.000 | And interestingly, they both fall into the same two camps
00:29:09.880 | we were talking about before,
00:29:11.440 | this dialectic between specialization and integration,
00:29:14.800 | differentiation and integration.
00:29:17.120 | So it's the cartography, the blobology analyses.
00:29:20.840 | - I apologize, I probably shouldn't interrupt so much,
00:29:23.200 | but just heard a fun word, the blob--
00:29:27.160 | - Blobology. - Blobology.
00:29:29.200 | - It's a neologism, which means the study of blobs.
00:29:32.320 | (laughs)
00:29:33.160 | So nothing--
00:29:34.400 | - Are you being witty and humorous,
00:29:36.640 | or is there an actual, does the word blobology
00:29:39.080 | ever appear in a textbook somewhere?
00:29:40.760 | - It would appear in a popular book.
00:29:43.320 | It would not appear in a worthy specialist journal.
00:29:48.320 | But it's the fond word for the study of literally
00:29:52.720 | little blobs on brain maps showing activations.
00:29:56.160 | So the kind of thing that you'd see in the newspapers
00:29:59.520 | on ABC or BBC reporting the latest finding
00:30:04.080 | from brain imaging.
00:30:05.320 | Interestingly though, the maths involved
00:30:10.040 | in that stream of analysis does actually call upon
00:30:15.040 | the mathematics of blobs.
00:30:17.600 | So seriously, they're actually called Euler characteristics,
00:30:21.760 | and they have a lot of fancy names in mathematics.
00:30:26.760 | - We'll talk about it, but your ideas
00:30:28.880 | in free energy principle, I mean,
00:30:30.600 | there's echoes of blobs there when you consider
00:30:34.000 | sort of entities, mathematically speaking.
00:30:38.000 | - Yes, absolutely, yeah, yeah.
00:30:40.200 | - So anyway--
00:30:41.040 | - Well, circumscribed, well-defined,
00:30:43.080 | you know, entities of, well, from the free energy point of view,
00:30:48.080 | entities of anything, but from the point of view
00:30:50.280 | of the analysis, the cartography of the brain,
00:30:55.280 | these are the entities that constitute the evidence
00:30:59.200 | for this functional segregation.
00:31:01.640 | You have segregated this function in this blob,
00:31:04.480 | and it is not outside of the blob.
00:31:06.760 | And that's basically, if you were a mapmaker of America
00:31:11.560 | and you did not know its structure,
00:31:14.080 | the first thing you were doing,
00:31:15.200 | constituting or creating a map,
00:31:17.720 | would be to identify the cities, for example,
00:31:19.800 | or the mountains or the rivers.
00:31:22.000 | All of these uniquely spatially localizable features,
00:31:26.920 | possibly topological features, have to be placed somewhere,
00:31:30.680 | because that requires a mathematics of identifying
00:31:33.520 | what does a city look like on a satellite image,
00:31:36.080 | or what does a river look like,
00:31:37.400 | or what does a mountain look like?
00:31:39.120 | What data features would evidence that particular thing
00:31:44.120 | that you wanted to put on the map?
00:31:50.400 | And they normally are characterized in terms
00:31:53.160 | of literally these blobs, or these sort of,
00:31:55.800 | another way of looking at this is a certain
00:31:58.640 | statistical measure of the degree of activation
00:32:02.960 | crosses a threshold, and in crossing that threshold
00:32:06.520 | in the spatially restricted part of the brain,
00:32:09.600 | it creates a blob.
00:32:11.080 | And that's basically what statistical parametric mapping
00:32:13.680 | does, it's basically mathematically finessed blobology.
00:32:18.680 | (laughs)
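
A toy version of that threshold-and-blob step, on a fabricated two-dimensional statistical map: cross a threshold, then label the connected suprathreshold regions. Random field theory, as used in statistical parametric mapping, goes further and assigns Euler-characteristic-based p-values to such blobs; this sketch shows only the labelling.

```python
# Toy blobology: threshold a fake 2-D map of test statistics and label
# the connected suprathreshold regions (the blobs).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
stat_map = rng.normal(size=(64, 64))   # fabricated map of test statistics
stat_map[20:26, 30:36] += 4.0          # one genuinely "active" region

mask = stat_map > 3.0                  # crossing the threshold...
blobs, n_blobs = ndimage.label(mask)   # ...creates labelled blobs
sizes = ndimage.sum(mask, blobs, range(1, n_blobs + 1))
print(f"{n_blobs} blob(s) found, sizes: {sizes}")  # stray noise blobs motivate corrections
```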
00:32:19.840 | - Okay, so you kind of described these two methodologies
00:32:23.160 | for one is temporally noisy, one is spatially noisy,
00:32:26.920 | and you kind of have to play and figure out
00:32:28.360 | what can be useful.
00:32:31.400 | It'd be great if you can sort of comment,
00:32:33.000 | I got a chance recently to spend a day
00:32:34.920 | at a company called Neuralink,
00:32:37.200 | that uses brain computer interfaces,
00:32:39.400 | and their dream is to, well, there's a bunch of dreams,
00:32:44.400 | but one of them is to understand the brain
00:32:47.380 | by sort of getting in there,
00:32:51.500 | past the so-called factory wall,
00:32:53.820 | getting in there and be able to listen,
00:32:55.200 | communicate both directions.
00:32:57.160 | What are your thoughts about the future
00:32:59.800 | of this kind of technology of brain computer interfaces,
00:33:02.360 | to be able to now have a window or direct contact
00:33:07.360 | within the brain to be able to measure some of the signals,
00:33:10.020 | to be able to send signals, to understand
00:33:12.460 | some of the functionality of the brain?
00:33:15.040 | - Ambivalent, my sense is ambivalent.
00:33:17.760 | So it's a mixture of good and bad,
00:33:19.920 | and I acknowledge that freely.
00:33:21.400 | So the good bits, if you just look at the legacy
00:33:24.740 | of that kind of reciprocal but invasive,
00:33:29.240 | your brain stimulation,
00:33:31.160 | I didn't paint a complete picture
00:33:33.120 | when I was talking about some of the ways
00:33:34.680 | we understand the brain prior to neuroimaging.
00:33:37.080 | It wasn't just lesion deficit studies.
00:33:39.680 | Some of the early work, in fact, literally,
00:33:42.160 | a hundred years from where we're sitting
00:33:43.500 | at the Institute of Neurology,
00:33:45.240 | was done by stimulating the brain of, say, dogs,
00:33:50.000 | and looking at how they responded,
00:33:51.880 | either with their muscles or with their salivation,
00:33:56.400 | and imputing what that part of the brain must be doing.
00:34:00.880 | That if I stimulate it, and I evoke this kind of response,
00:34:05.880 | then that tells me quite a lot
00:34:07.240 | about the functional specialization.
00:34:09.080 | So there's a long history of brain stimulation,
00:34:12.280 | which continues to enjoy a lot of attention nowadays.
00:34:16.800 | - Positive attention?
00:34:17.720 | - Oh, yes, absolutely.
00:34:18.880 | Deep brain stimulation for Parkinson's disease
00:34:22.120 | is now a standard treatment,
00:34:23.580 | and also a wonderful vehicle to try and understand
00:34:27.760 | the neuronal dynamics underlying movement disorders
00:34:30.520 | like Parkinson's disease.
00:34:31.920 | Even interest in magnetic stimulation,
00:34:37.880 | stimulating the magnetic fields,
00:34:39.200 | and will it work in people who are depressed, for example.
00:34:43.280 | Quite a crude level of understanding what you're doing,
00:34:45.680 | but there is historical evidence
00:34:49.040 | that these kinds of brute force interventions
00:34:51.760 | do change things.
00:34:54.240 | A little bit like banging the TV,
00:34:56.000 | when the valves aren't working properly,
00:34:58.200 | but it still, it works.
00:35:00.700 | So there is a long history.
00:35:04.440 | Brain-computer interfacing, or BCI,
00:35:06.780 | I think is a beautiful example of that.
00:35:10.960 | It's sort of carved out its own niche,
00:35:12.720 | and its own aspirations,
00:35:14.400 | and there've been enormous advances within limits.
00:35:20.720 | Advances in terms of our ability to understand
00:35:23.960 | how the brain, the embodied brain,
00:35:28.720 | engages with the world.
00:35:31.500 | I'm thinking here of sensory substitution,
00:35:35.280 | augmenting our sensory capacities
00:35:37.220 | by giving ourselves extra ways of sensing
00:35:40.800 | and sampling the world,
00:35:42.220 | ranging from sort of trying to replace lost visual signals
00:35:48.320 | through to giving people completely new signals.
00:35:51.200 | One of the, I think, most engaging examples of this
00:35:57.080 | is equipping people with a sense of magnetic fields.
00:36:00.640 | So you can actually give them magnetic sensors
00:36:03.600 | that enable them to feel,
00:36:05.440 | should we say, tactile pressure around their tummy,
00:36:08.980 | where they are in relation
00:36:10.640 | to the magnetic field of the Earth.
00:36:13.040 | - It's incredible.
00:36:13.880 | - And after a few weeks, they take it for granted.
00:36:17.640 | They integrate it, they imbibe it, they assimilate it.
00:36:20.040 | - That is incredible.
00:36:20.880 | - This new sensory information
00:36:22.320 | into the way that they literally feel their world,
00:36:25.440 | but now equipped with this sense of magnetic direction.
00:36:29.300 | So that tells you something
00:36:31.020 | about the brain's plastic potential to remodel,
00:36:34.880 | and its plastic capacity to suddenly try to explain
00:36:41.080 | the sensory data at hand by augmenting
00:36:45.280 | the sensory sphere and the kinds of things
00:36:49.480 | that you can measure.
00:36:50.680 | Clearly, that's purely for entertainment
00:36:54.680 | and understanding the nature and the power of our brains.
00:36:58.760 | I would imagine that most BCI is pitched
00:37:03.360 | at solving clinical and human problems,
00:37:08.360 | such as locked-in syndrome, such as paraplegia,
00:37:12.200 | or replacing lost sensory capacities
00:37:16.080 | like blindness and deafness.
00:37:18.920 | So then we come to the more negative part of my ambivalence.
00:37:23.920 | The other side of it.
00:37:25.440 | So I don't want to be deflationary
00:37:30.920 | because much of my deflationary commentary
00:37:33.360 | probably arises more out of ignorance than anything else.
00:37:37.240 | Generally speaking, the bandwidth and the bit rates
00:37:43.600 | that you get from brain computer interfaces
00:37:47.240 | as we currently know them,
00:37:49.120 | we're talking about bits per second.
00:37:51.440 | So that would be like me only being able to communicate
00:37:55.560 | with any world or with you using very, very,
00:37:59.560 | very slow Morse code.
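
A back-of-the-envelope feel for that gap, using rates that are assumptions for the sake of arithmetic rather than measured figures:

```python
# Illustrative bandwidth arithmetic; both rates below are assumptions.
import math

bci_rate = 1.0                   # bits/s, assumed for a current interface
speech_rate = 40.0               # bits/s, a rough assumed figure for speech
word_bits = 5 * math.log2(26)    # a five-letter word at ~4.7 bits per letter

print(f"BCI:    ~{word_bits / bci_rate:.0f} s to convey one word")
print(f"Speech: ~{word_bits / speech_rate:.1f} s to convey one word")
```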
00:38:05.580 | And it is not even within an order of magnitude
00:38:13.440 | near what we actually need for an enactive realization
00:38:18.440 | of what people aspire to when they think about
00:38:21.320 | curing people with paraplegia or replacing sight,
00:38:26.360 | despite heroic efforts.
00:38:30.280 | So one has to ask, is there a lower bound
00:38:33.760 | on the kinds of recurrent information exchange
00:38:41.040 | between a brain and some augmented or artificial interface?
00:38:46.040 | And then we come back to, interestingly,
00:38:51.600 | what I was talking about before,
00:38:52.800 | which is if you're talking about function
00:38:56.440 | in terms of inference,
00:38:58.520 | and I presume we'll get to that later on
00:39:00.520 | in terms of the free energy principle,
00:39:01.840 | but at the moment, there may be fundamental reasons
00:39:05.120 | to assume that is the case.
00:39:06.360 | We're talking about ensemble activity.
00:39:08.520 | We're talking about basically, for example,
00:39:13.520 | let's paint the challenge facing brain-computer interfacing
00:39:18.800 | in terms of controlling another system
00:39:24.500 | that is highly and deeply structured,
00:39:27.080 | very relevant to our lives, very nonlinear,
00:39:30.560 | that rests upon the kind of non-equilibrium,
00:39:34.360 | steady states and dynamics that the brain does: the weather.
00:39:38.560 | All right?
00:39:39.560 | So- - Good example, yeah.
00:39:41.120 | - Imagine you had some very aggressive satellites
00:39:45.840 | that could produce signals
00:39:47.600 | that could perturb some little parts of the weather system.
00:39:52.600 | And then what you're asking now is,
00:39:55.160 | can I meaningfully get into the weather
00:39:58.320 | and change it meaningfully
00:39:59.720 | and make the weather respond in a way that I want it to?
00:40:03.400 | You're talking about chaos control on a scale
00:40:06.640 | which is almost unimaginable.
00:40:08.880 | So there may be fundamental reasons why BCI,
00:40:13.760 | as you might read about it in a science fiction novel,
00:40:17.540 | aspirational BCI may never actually work
00:40:22.960 | in the sense that to really be integrated
00:40:26.960 | and be part of the system is a requirement
00:40:31.960 | that requires you to have evolved with that system,
00:40:35.280 | that you have to be part of a very delicately structured,
00:40:40.280 | deeply structured, dynamic, ensemble activity
00:40:48.040 | that is not like rewiring a broken computer
00:40:51.640 | or plugging in a peripheral interface adapter.
00:40:54.680 | It is much more like getting into the weather patterns
00:40:58.160 | or, come back to your magic soup,
00:41:00.720 | getting into the active matter
00:41:03.560 | and meaningfully relate that to the outside world.
00:41:07.160 | So I think there are enormous challenges there.
00:41:09.920 | - So I think the example of the weather is a brilliant one.
00:41:13.280 | And I think you paint a really interesting picture
00:41:15.320 | and it wasn't as negative as I thought.
00:41:17.440 | It's essentially saying
00:41:18.720 | that it might be incredibly challenging,
00:41:21.000 | including the low bound of the bandwidth and so on.
00:41:23.640 | I kind of, so just to full disclosure,
00:41:26.920 | I come from the machine learning world.
00:41:28.680 | So my natural thought is the hardest part
00:41:32.760 | is the engineering challenge of controlling the weather,
00:41:34.840 | of getting those satellites up and running and so on.
00:41:37.560 | And once they are, then the rest is fundamentally
00:41:42.240 | the same approaches that allow you to win in a game of Go
00:41:46.880 | will allow you to potentially play in this soup,
00:41:49.600 | in this chaos.
00:41:51.000 | So I have a hope that sort of machine learning methods
00:41:54.480 | will help us play in this soup.
00:41:58.840 | But perhaps you're right that it is a biology
00:42:03.840 | and the brain is just an incredible system
00:42:08.680 | that may be almost impossible to get in.
00:42:12.240 | But for me, what seems impossible
00:42:15.800 | is the incredible mess of blood vessels
00:42:19.800 | that you also described, given that
00:42:22.320 | we also value the brain.
00:42:24.620 | You can't make any mistakes, you can't damage things.
00:42:27.080 | So to me, that engineering challenge
00:42:29.760 | seems nearly impossible.
00:42:31.360 | One of the things I was really impressed by at Neuralink
00:42:35.920 | is just talking to brilliant neurosurgeons
00:42:39.680 | and the roboticists that made me realize
00:42:43.320 | that even though it seems impossible,
00:42:45.880 | if anyone can do it, it's some of these world-class
00:42:48.600 | engineers that are trying to take it on.
00:42:50.800 | So I think the conclusion of our discussion here
00:42:55.240 | is of this part is basically that the problem
00:42:59.920 | is really hard, but hopefully not impossible.
00:43:02.560 | - Absolutely.
00:43:03.400 | - If it's okay, let's start with the basics.
00:43:07.240 | So you've also formulated a fascinating principle,
00:43:12.160 | the free energy principle.
00:43:13.520 | Can we maybe start at the basics
00:43:15.320 | and what is the free energy principle?
00:43:19.640 | - Well, in fact, the free energy principle
00:43:23.700 | inherits a lot from the building
00:43:28.700 | of these data analytic approaches
00:43:31.240 | to these very high dimensional time series
00:43:34.160 | you get from the brain.
00:43:35.960 | So I think it's interesting to acknowledge that.
00:43:37.960 | And in particular, the analysis tools
00:43:39.960 | that try to address the other side,
00:43:43.100 | which is a functional integration.
00:43:44.340 | So the connectivity analysis.
00:43:46.040 | On the one hand, but I should also acknowledge
00:43:51.880 | it inherits an awful lot from machine learning as well.
00:43:55.340 | So the free energy principle is just a formal statement
00:44:00.340 | that the existential imperatives for any system
00:44:07.580 | that manages to survive in a changing world
00:44:11.380 | can be cast as an inference problem
00:44:18.860 | in the sense that you could interpret
00:44:21.220 | the probability of existing as the evidence that you exist.
00:44:25.720 | And if you can write down that problem of existence
00:44:29.460 | as a statistical problem,
00:44:30.900 | then you can use all the maths that has been developed
00:44:33.940 | for inference to understand and characterize
00:44:38.940 | the ensemble dynamics that must be in play
00:44:43.000 | in the service of that inference.
00:44:45.620 | So technically what that means is
00:44:48.260 | you can always interpret anything that exists
00:44:51.180 | in virtue of being separate from the environment
00:44:55.700 | in which it exists as trying to minimize
00:45:00.700 | variational free energy.
00:45:03.620 | And if you're from the machine learning community,
00:45:05.600 | you will know that as a negative evidence lower bound
00:45:09.220 | or a negative ELBO, which is the same as saying
00:45:13.180 | you're trying to maximize,
00:45:15.340 | or it will look as if all your dynamics
00:45:17.860 | are trying to maximize the complement of that,
00:45:21.860 | which is the marginal likelihood
00:45:24.020 | or the evidence for your own existence.
00:45:26.480 | So that's basically the free energy principle.
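
In the machine-learning notation Friston is alluding to, with observations x, hidden states z, and a recognition density q(z), the standard identity reads:

```latex
% Variational free energy F and its relation to log model evidence.
\begin{aligned}
F[q] &= \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(x, z)\big] \\
     &= -\ln p(x) + \underbrace{D_{\mathrm{KL}}\big[q(z)\,\|\,p(z \mid x)\big]}_{\ge\, 0}
     \;\ge\; -\ln p(x).
\end{aligned}
```

So minimizing F, equivalently maximizing the ELBO (which is -F), pushes up a lower bound on the log evidence ln p(x): the "evidence for your own existence" in the phrasing above.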
00:45:30.140 | - But to even take a sort of a small step backwards,
00:45:34.100 | you said the existential imperative.
00:45:37.080 | There's a lot of beautiful poetic words here,
00:45:40.120 | but to put it crudely,
00:45:45.780 | it's a fascinating idea of basically
00:45:48.100 | of trying to describe, if you're looking at a blob,
00:45:51.780 | how do you know this thing is alive?
00:45:54.180 | What does it mean to be alive?
00:45:55.660 | What does it mean to exist?
00:45:57.500 | And so you can look at the brain,
00:45:59.440 | you can look at parts of the brain,
00:46:00.700 | or this is just a general principle
00:46:02.820 | that applies to almost any system.
00:46:07.220 | That's just a fascinating sort of philosophically
00:46:10.140 | at every level question and a methodology
00:46:13.100 | to try to answer that question.
00:46:14.320 | What does it mean to be alive?
00:46:16.020 | - Yes.
00:46:16.860 | - So that's a huge endeavor,
00:46:20.900 | and it's nice that there's at least some,
00:46:23.160 | from some perspective, a clean answer.
00:46:25.420 | So maybe can you talk about that optimization view of it?
00:46:30.220 | So what's trying to be minimized and maximized?
00:46:33.500 | A system that's alive, what is it trying to minimize?
00:46:36.820 | - Right, you've made a big move there.
00:46:39.360 | - Apologize.
00:46:41.460 | - No, no, no, it's good to make big moves.
00:46:44.000 | - Yeah, but you've assumed that things,
00:46:49.000 | the thing exists in a state
00:46:52.660 | that could be living or non-living.
00:46:54.720 | So I may ask you,
00:46:56.760 | what licenses you to say that something exists?
00:47:00.140 | That's why I use the word existential.
00:47:02.320 | It's beyond living, it's just existence.
00:47:05.480 | So if you drill down onto the definition
00:47:08.060 | of things that exist, then they have certain properties.
00:47:13.560 | If you borrow the maths
00:47:16.420 | from non-equilibrium steady state physics,
00:47:19.420 | that enable you to interpret their existence
00:47:24.420 | in terms of this optimization procedure.
00:47:29.340 | So it's good you introduced the word optimization.
00:47:32.200 | So what the free energy principle in its sort of
00:47:37.120 | most ambitious, but also most deflationary
00:47:42.480 | and simplest says, is that if something exists,
00:47:47.240 | then it must, by the mathematics
00:47:51.460 | of non-equilibrium steady state,
00:47:55.160 | exhibit properties that make it look as if
00:48:00.740 | it is optimizing a particular quantity.
00:48:03.720 | And it turns out that particular quantity
00:48:06.160 | happens to be exactly the same
00:48:08.560 | as the evidence lower bound in machine learning
00:48:11.380 | or Bayesian model evidence in Bayesian statistics,
00:48:15.300 | or, and then I can list a whole other list
00:48:18.820 | of ways of understanding this key quantity,
00:48:23.500 | which is a bound on surprisal, or self-information,
00:48:28.500 | if you're in information theory.
00:48:31.000 | There are a number of different perspectives
00:48:34.080 | on this quantity.
00:48:34.920 | It's just basically the log probability
00:48:36.940 | of being in a particular state.
00:48:40.160 | I'm telling this story as an honest attempt
00:48:43.340 | to answer your question.
00:48:44.540 | And I'm answering it as if I was pretending
00:48:49.300 | to be a physicist who was trying to understand
00:48:52.420 | the fundaments of non-equilibrium steady state.
00:48:57.140 | And I shouldn't really be doing that
00:48:59.660 | because the last time I was taught physics,
00:49:02.220 | I was in my 20s.
00:49:03.740 | - What kind of systems, when you think about
00:49:05.420 | the free energy principle,
00:49:06.420 | what kind of systems are you imagining
00:49:08.660 | as a sort of more specific kind of case study?
00:49:11.620 | - Yeah, I'm imagining a range of systems,
00:49:15.700 | but at its simplest, a single-celled organism
00:49:20.700 | that can be identified from its econiche,
00:49:26.020 | from its environment.
00:49:27.700 | So at its simplest, that's basically
00:49:31.500 | what I always imagined in my head.
00:49:33.900 | And you may ask, well, is there any,
00:49:36.700 | how on earth can you even elaborate questions
00:49:41.700 | about the existence of a single drop of oil, for example?
00:49:47.020 | But there are deep questions there.
00:49:49.780 | Why doesn't the oil, why doesn't the thing,
00:49:52.900 | the interface between the drop of oil
00:49:55.500 | that contains an interior and the thing
00:49:59.000 | that is not the drop of oil,
00:50:00.640 | which is the solvent in which it is immersed,
00:50:04.220 | how does that interface persist over time?
00:50:07.420 | Why doesn't the oil just dissolve into solvent?
00:50:10.760 | So what special properties of the exchange
00:50:16.400 | between the surface of the oil drop
00:50:18.580 | and the external states in which it's immersed,
00:50:22.300 | if you're a physicist, say, it would be the heat bath.
00:50:24.420 | You know, you've got a physical system,
00:50:27.220 | an ensemble again, we're talking about
00:50:28.740 | density dynamics, ensemble dynamics,
00:50:31.540 | an ensemble of atoms or molecules immersed in the heat bath.
00:50:36.420 | But the question is, how did the heat bath get there
00:50:39.460 | and why does it not dissolve?
00:50:41.340 | - How is it maintaining itself?
00:50:42.820 | - Exactly.
00:50:43.660 | - What actions is it?
00:50:44.500 | I mean, it's such a fascinating idea of a drop of oil
00:50:47.500 | and I guess it would dissolve in water,
00:50:49.980 | it wouldn't dissolve in water.
00:50:51.660 | So what-- - Precisely.
00:50:52.900 | So why not?
00:50:54.100 | - Why not? - Why not?
00:50:55.180 | - And how do you mathematically describe,
00:50:57.020 | I mean, it's such a beautiful idea
00:50:58.660 | and also the idea of like, where does the thing,
00:51:02.140 | where does the drop of oil end and where does it begin?
00:51:07.140 | - Right, so I mean, you're asking deep questions,
00:51:10.580 | deep in a non-millennial sense here.
00:51:12.700 | (both laughing)
00:51:13.540 | - Not in a hierarchical sense.
00:51:14.860 | - But what you can do, you say,
00:51:18.980 | so this is the deflationary part of it.
00:51:21.020 | Can I just qualify my answer by saying
00:51:23.580 | that normally when I'm asked this question,
00:51:24.940 | I answer from the point of view of a psychologist
00:51:26.780 | when we talk about predictive processing
00:51:28.260 | and predictive coding and the brain as an inference machine.
00:51:31.820 | But you haven't asked me from that perspective,
00:51:34.100 | I'm answering from the point of view of a physicist.
00:51:36.460 | So the question is not so much why,
00:51:41.220 | but if it exists, what properties must it display?
00:51:44.660 | So that's the deflationary part of the free energy principle.
00:51:47.100 | The free energy principle does not supply an answer
00:51:51.020 | as to why, it's saying, if something exists,
00:51:54.740 | then it must display these properties.
00:51:57.900 | That's the sort of the thing that's on offer.
00:52:01.740 | And it so happens that these properties it must display
00:52:05.420 | are actually intriguing and have this inferential gloss,
00:52:10.420 | this sort of self-evidencing gloss
00:52:13.860 | that inherits from the fact that the very preservation
00:52:18.140 | of the boundary between the oil drop and the not oil drop
00:52:22.860 | requires an optimization of a particular function
00:52:26.380 | or a functional that defines the presence
00:52:30.820 | of the existence of this oil drop,
00:52:33.260 | which is why I started with existential imperatives.
00:52:36.340 | It is a necessary condition for existence
00:52:39.700 | that this must occur because the boundary
00:52:43.060 | basically defines the thing that's existing.
00:52:46.220 | So it is that self-assembly aspect
00:52:48.140 | that you were hinting at, known in biology
00:52:53.260 | sometimes as autopoiesis,
00:52:55.900 | and in computational chemistry as self-assembly.
00:53:00.340 | It's the, what does it look like?
00:53:03.740 | Sorry, how would you describe things
00:53:06.180 | that configure themselves out of nothing?
00:53:08.780 | The way they clearly demarcate themselves
00:53:12.180 | from the states or the soup in which they are immersed.
00:53:16.840 | So from the point of view of computational chemistry,
00:53:21.100 | for example, you would just understand that
00:53:23.620 | as a configuration of a macromolecule
00:53:25.580 | to minimize its free energy, its thermodynamic free energy.
00:53:29.100 | It's exactly the same principle
00:53:30.700 | that we've been talking about
00:53:31.660 | that thermodynamic free energy is just the negative ELBO.
00:53:35.220 | It's the same mathematical construct.
00:53:38.380 | So the very emergence of existence of structure of form
00:53:42.700 | that can be distinguished from the environment
00:53:45.220 | or the thing that is not the thing
00:53:49.420 | necessitates the existence of an objective function
00:53:54.420 | that it looks as if it is minimizing.
00:53:58.340 | It's finding a free energy minima.
00:54:00.500 | - And so just to clarify, I'm trying to wrap my head around.
00:54:05.100 | So the free energy principle says that if something exists,
00:54:09.800 | these are the properties it should display.
00:54:12.740 | So what that means is we can't just look,
00:54:17.660 | we can't just go into a soup and there's no mechanism.
00:54:21.580 | Free energy principle doesn't give us a mechanism
00:54:23.740 | to find the things that exist.
00:54:25.940 | Is that what's being implied that you can kind of use it
00:54:32.060 | to reason, to think about, study a particular system
00:54:37.460 | and say, does this exhibit these qualities?
00:54:40.600 | - That's an excellent question.
00:54:43.780 | But to answer that, I have to return
00:54:46.180 | to your previous question
00:54:47.020 | about what's the difference
00:54:47.860 | between living and non-living things.
00:54:49.960 | - Yes, well, exactly, actually, sorry.
00:54:53.180 | So yeah, maybe we can go there.
00:54:55.420 | You kind of drew a line,
00:54:57.140 | and forgive me for the stupid questions,
00:54:58.940 | but you kind of drew a line between living and existing.
00:55:02.540 | Is there an interesting sort of--
00:55:05.780 | - Distinction?
00:55:06.620 | - Distinction between the two?
00:55:07.440 | - Yeah, I think there is.
00:55:08.740 | So things do exist, grains of sand, rocks on the moon,
00:55:16.540 | trees, you.
00:55:19.460 | So all of these things can be separated
00:55:23.860 | from the environment in which they are immersed,
00:55:26.300 | and therefore, they must, at some level,
00:55:28.180 | be optimizing their free energy.
00:55:31.300 | Taking this sort of model evidence interpretation
00:55:36.180 | of this quantity,
00:55:37.340 | that basically means they're self-evidencing.
00:55:39.620 | Another nice little twist of phrase here
00:55:42.820 | is that you are your own existence proof.
00:55:45.500 | Statistically speaking. I don't think I said that first.
00:55:50.180 | Somebody did, but I love that phrase.
00:55:52.020 | - You are your own existence proof.
00:55:55.620 | - Yeah, so it's so existential, isn't it?
00:55:58.100 | - I'm gonna have to think about that for a few days.
00:56:01.460 | (Lex laughing)
00:56:03.220 | Yeah, that's a beautiful line.
00:56:06.100 | - So the step through to answer your question
00:56:09.780 | about what's it good for,
00:56:13.860 | we go along the following lines.
00:56:15.780 | First of all, you have to define what it means to exist,
00:56:19.780 | which now, as you've rightly pointed out,
00:56:22.060 | you have to define what probabilistic properties
00:56:25.020 | must the states of something possess
00:56:27.500 | so it knows where it finishes.
00:56:30.620 | And then you write that down
00:56:32.220 | in terms of statistical independences.
00:56:34.460 | Again, sparsity.
00:56:36.020 | Again, it's not what's connected or what's correlated
00:56:39.740 | or what depends upon what.
00:56:40.780 | It's what's not correlated.
00:56:43.740 | And what doesn't depend upon something.
00:56:45.980 | Again, it comes down to the deep structures,
00:56:49.700 | not in this instance hierarchical,
00:56:50.900 | but the structures that emerge
00:56:54.020 | from removing connectivity and dependency.
00:56:56.900 | And in this instance, basically being able to identify
00:57:00.700 | the surface of the oil drop
00:57:02.700 | from the water in which it is immersed.
00:57:06.500 | And when you do that, you start to realize,
00:57:09.100 | well, there are actually three kinds of states
00:57:12.740 | in any given universe that contains anything.
00:57:15.660 | The things that are internal to the surface,
00:57:18.860 | the things that are external to the surface,
00:57:20.660 | and the surface in and of itself,
00:57:22.540 | which is why I use a metaphor,
00:57:24.020 | a little single-celled organism
00:57:25.460 | that has an interior and exterior,
00:57:27.100 | and then the surface of the cell.
00:57:29.540 | And that's mathematically a Markov blanket.
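
A Markov blanket is, formally, a set of states that renders internal and external states conditionally independent. A minimal numerical sketch in Python (toy numbers, purely illustrative): in a Gaussian graphical model, a zero entry in the precision matrix between two variables means they are conditionally independent given everything else.

```python
import numpy as np

# States ordered as [external, blanket, internal]. The zero precision
# entries couple external and internal states only through the blanket.
P = np.array([
    [2.0, 0.8, 0.0],   # external couples to the blanket only
    [0.8, 3.0, 0.9],   # blanket couples to both sides
    [0.0, 0.9, 2.5],   # internal couples to the blanket only
])
Sigma = np.linalg.inv(P)
print("marginal cov(external, internal):", Sigma[0, 2])  # nonzero

# Conditioning on the blanket severs the dependence (Schur complement):
idx, b = [0, 2], [1]
cond = (Sigma[np.ix_(idx, idx)]
        - Sigma[np.ix_(idx, b)] @ np.linalg.inv(Sigma[np.ix_(b, b)])
        @ Sigma[np.ix_(idx, b)].T)
print("cov(external, internal | blanket):", cond[0, 1])  # ~0
```

Marginally, everything is correlated, but given the blanket states, inside and outside no longer talk to each other directly, which is exactly the partition being described.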
00:57:32.740 | - Just to pause, I'm in awe of this concept
00:57:34.940 | that there's the stuff outside the surface,
00:57:36.580 | stuff inside the surface,
00:57:37.420 | and the surface itself, the Markov blanket.
00:57:40.260 | It's just the most beautiful kind of notion
00:57:43.700 | about trying to explore what it means to exist,
00:57:47.220 | mathematically.
00:57:48.060 | I apologize, it's just a beautiful idea.
00:57:50.700 | - But it came out of California, so that's--
00:57:53.060 | - I changed my mind, I take it all back.
00:57:54.860 | (both laughing)
00:57:56.980 | So anyway, so what, you were just talking about the surface,
00:58:00.340 | about the Markov blanket.
00:58:01.180 | - Yeah, so this surface, or these blanket states,
00:58:08.500 | because they are now defined in relation to
00:58:12.220 | these independences,
00:58:17.540 | in terms of which states, internal, blanket,
00:58:21.580 | or external, can influence each other
00:58:25.300 | and which cannot influence each other,
00:58:27.740 | you can now apply standard results
00:58:30.940 | that you would find in non-equilibrium physics
00:58:33.580 | or steady state or thermodynamics or hydrodynamics
00:58:38.780 | usually out of equilibrium solutions
00:58:41.660 | and apply them to this partition.
00:58:43.220 | And what it looks like is that all the normal gradient flows
00:58:48.140 | that you would associate with any non-equilibrium system
00:58:52.100 | apply in such a way that two of the partitions,
00:58:56.180 | part of the Markov blanket and the internal states,
00:58:59.060 | seem to be hill climbing or doing a gradient descent
00:59:03.740 | on the same quantity.
00:59:05.900 | And that means that you can now describe
00:59:09.420 | the very existence of this oil drop.
00:59:13.180 | You can write down the existence of this oil drop
00:59:16.020 | in terms of flows, dynamics, equations of motion,
00:59:20.660 | where the blanket states or part of them,
00:59:24.180 | we call them active states,
00:59:25.900 | and the internal states now seem to be,
00:59:29.580 | and must be, trying to look as if they're minimising
00:59:34.340 | the same function, which is a log probability
00:59:36.700 | of occupying these states.
00:59:39.300 | Interesting thing is that, what would they be called
00:59:44.060 | if you were trying to describe these things?
00:59:45.700 | So what we're talking about are internal states,
00:59:50.100 | external states, and blanket states.
00:59:52.060 | Now let's carve the blanket states
00:59:54.100 | into two sensory states and active states.
00:59:57.260 | Operationally, it has to be the case
00:59:59.580 | that in order for this carving up
01:00:01.740 | into different sets of states to exist,
01:00:04.460 | the active states of the Markov blanket
01:00:06.820 | cannot be influenced by the external states.
01:00:09.820 | And we already know that the internal states
01:00:11.620 | can't be influenced by the external states
01:00:13.660 | 'cause the blanket separates them.
01:00:15.780 | So what does that mean?
01:00:16.620 | Well, it means the active states and the internal states
01:00:19.300 | are now jointly not influenced by external states.
01:00:23.460 | They only have autonomous dynamics.
01:00:26.180 | So now you've got a picture of an oil drop
01:00:30.020 | that has autonomy.
01:00:31.860 | It has autonomous states.
01:00:34.060 | It has autonomous states in the sense
01:00:35.420 | that there must be some parts of the surface
01:00:37.580 | of the oil drop that are not influenced
01:00:39.460 | by the external states and all the interior.
01:00:41.860 | And together, those two states endow
01:00:44.540 | even a little oil drop with autonomous states
01:00:47.660 | that look as if they are optimising
01:00:51.380 | their variational free energy or their negative ELBO,
01:00:55.180 | their model evidence.
01:00:59.380 | And that would be an interesting intellectual exercise.
01:01:03.220 | And you could say, you could even go
01:01:04.980 | into the realms of panpsychism,
01:01:06.580 | that everything that exists is implicitly
01:01:09.420 | making inferences on self-evidencing.
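
The "gradient descent on surprisal" picture can be caricatured in a few lines of Python (a sketch under simple assumptions: a Gaussian density over the drop's typical states, with random fluctuations standing in for the heat bath; none of the numbers come from the conversation):

```python
import numpy as np

mu, prec = np.array([0.0, 0.0]), 4.0  # the drop's "typical" states

def surprisal_grad(x):
    # gradient of -log p(x) for an isotropic Gaussian: prec * (x - mu)
    return prec * (x - mu)

rng = np.random.default_rng(0)
x = np.array([2.0, -1.5])  # start far from the typical states
for _ in range(200):
    # flow down the surprisal gradient, buffeted by fluctuations
    x = x - 0.05 * surprisal_grad(x) + 0.05 * rng.normal(size=2)

print(x)  # hovers near mu: the system keeps revisiting its typical states
```

The point of the caricature: a thing that persists is a thing whose dynamics keep returning it to a small set of characteristic states, which looks exactly like descending the gradient of a negative log probability.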
01:01:12.100 | Now we make the next move, but what about living things?
01:01:17.020 | I mean, so let me ask you,
01:01:19.180 | what's the difference between an oil drop
01:01:21.580 | and a little tadpole or a little larva or a plankton?
01:01:27.180 | - The picture you just painted of an oil drop,
01:01:29.700 | Just immediately, in a matter of minutes,
01:01:32.860 | took me into the world of panpsychism,
01:01:35.180 | where you just convinced me,
01:01:38.020 | made me feel like an oil drop is a living,
01:01:41.340 | certainly an autonomous system,
01:01:43.380 | but almost a living system.
01:01:44.700 | So it has sensor capabilities and acting capabilities
01:01:48.940 | and maintains something.
01:01:50.620 | So what is the difference between that
01:01:53.940 | and something that we traditionally think
01:01:56.340 | of as a living system?
01:01:57.700 | That it could die or it can't,
01:02:02.220 | I mean, yeah, mortality, I'm not exactly sure.
01:02:05.260 | I'm not sure what the right answer there is.
01:02:07.500 | Because it can move, like movement seems
01:02:10.740 | like an essential element to being able
01:02:12.420 | to act in the environment,
01:02:13.540 | but the oil drop is doing that.
01:02:15.820 | So I don't know.
01:02:16.660 | - Is it?
01:02:18.140 | The oil drop will be moved,
01:02:19.780 | but does it in and of itself move autonomously?
01:02:22.700 | Is the surface performing actions
01:02:26.900 | that maintain its structure?
01:02:29.700 | - You're being too clever.
01:02:30.980 | (laughs)
01:02:32.380 | I had in mind a passive little oil drop
01:02:34.580 | that's sitting there at the bottom
01:02:37.700 | or the top of a glass of water.
01:02:39.260 | - Sure, I guess.
01:02:40.620 | - What I'm trying to say is you're absolutely right.
01:02:42.220 | You've nailed it.
01:02:44.420 | It's movement.
01:02:45.900 | So where does that movement come from?
01:02:47.220 | If it comes from the inside,
01:02:49.420 | then you've got, I think, something that's living.
01:02:53.020 | - What do you mean from the inside?
01:02:54.660 | - What I mean is that the internal states
01:02:58.820 | that can influence the active states,
01:03:01.100 | where the active states can influence,
01:03:02.620 | but they're not influenced by the external states,
01:03:05.180 | can cause movement.
01:03:07.220 | So there are two types of oil drops, if you like.
01:03:10.500 | There are oil drops where the internal states are so random
01:03:14.700 | that they average themselves away.
01:03:20.380 | And the thing cannot, on balance, on average,
01:03:23.900 | when you do the averaging, move.
01:03:26.060 | So a nice example of that would be the sun.
01:03:28.260 | The sun certainly has internal states.
01:03:31.220 | There's lots of intrinsic autonomous activity going on.
01:03:34.460 | But because it's not coordinated,
01:03:35.900 | because it doesn't have the deep, in the millennial sense,
01:03:38.180 | hierarchical structure that the brain does,
01:03:41.060 | there is no overall mode or pattern or organisation
01:03:45.900 | that expresses itself on the surface
01:03:48.260 | that allows it to actually swim.
01:03:50.140 | It can certainly have a very active surface,
01:03:54.140 | but, en masse, at the scale of the actual surface of the sun,
01:03:58.340 | the average position of that surface cannot in itself move
01:04:02.980 | because the internal dynamics are more like a hot gas.
01:04:06.740 | They are literally like a hot gas.
01:04:08.540 | Whereas your internal dynamics are much more structured
01:04:11.500 | and deeply structured.
01:04:12.980 | And now you can express, on your Markov blanket
01:04:15.500 | and your active states, with your muscles
01:04:17.420 | and your secretory organs,
01:04:19.780 | your autonomic nervous system and its effectors,
01:04:22.980 | you can actually move.
01:04:24.580 | And that's all you can do.
01:04:26.860 | And that's something which,
01:04:28.340 | if you haven't thought of it like this before,
01:04:30.500 | I think it's nice to just realise
01:04:32.500 | there is no other way that you can change the universe
01:04:37.140 | other than simply moving.
01:04:39.340 | Whether that moving is articulating with my voice box
01:04:43.900 | or walking around or squeezing juices
01:04:46.660 | out of my secretory organs,
01:04:48.780 | there's only one way you can change the universe,
01:04:52.060 | it's moving.
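
The averaging argument about the sun versus the swimmer is easy to make numerically (a toy illustration, not a model of either system):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # internal degrees of freedom pushing on the surface

# "Sun-like": each element pushes in an uncoordinated, random direction.
random_pushes = rng.normal(size=(n, 2))
print("net push, uncoordinated:", random_pushes.mean(axis=0))  # ~[0, 0]

# "Organism-like": the same randomness, plus a shared structured bias.
coordinated = random_pushes + np.array([0.3, 0.0])
print("net push, coordinated:", coordinated.mean(axis=0))      # ~[0.3, 0]
```

Uncoordinated internal activity averages away at the surface; deeply structured internal activity survives the averaging and shows up as net movement.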
01:04:52.900 | - And the fact that you do so non-randomly makes you alive.
01:04:57.900 | - Yeah.
01:04:59.580 | So it's that non-randomness.
01:05:00.860 | And that would be manifested,
01:05:04.980 | we realise in terms of essentially swimming,
01:05:07.900 | essentially moving, changing one shape,
01:05:10.580 | a morphogenesis that is dynamic
01:05:13.340 | and possibly adaptive.
01:05:14.660 | So that's what I was trying to get at
01:05:17.980 | with the difference between the oil drop
01:05:19.260 | and the little tadpole.
01:05:20.620 | The tadpole is moving around,
01:05:23.020 | its active states are actually changing the external states.
01:05:26.980 | And there's now a cycle,
01:05:28.540 | an action perception cycle, if you like,
01:05:30.460 | a recurrent dynamic that's going on
01:05:34.100 | that depends upon this deeply structured autonomous behaviour
01:05:39.420 | that rests upon internal dynamics
01:05:44.420 | that are not only modelling the data impressed
01:05:49.660 | upon their surface or the blanket states,
01:05:53.860 | but they are actively resampling those data by moving.
01:05:58.860 | They're moving towards chemical gradients and chemotaxis.
01:06:02.620 | So they've gone beyond just being good little models
01:06:08.380 | of the kind of world they live in.
01:06:11.180 | For example, an oil droplet could,
01:06:14.380 | in a panpsychic sense, be construed as a little being
01:06:18.500 | that has now perfectly inferred it's a passive,
01:06:22.620 | non-living oil drop living in a bowl of water.
01:06:25.700 | No problem.
01:06:26.540 | But to now equip that oil drop
01:06:29.940 | with the ability to go out and test that hypothesis
01:06:32.500 | about different states of being,
01:06:34.100 | so it can actually push its surface over there, over there,
01:06:37.020 | and test for chemical gradients,
01:06:38.740 | then you start to move to a much more lifelike form.
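
A hedged sketch of that action-perception loop in Python (assumed setup: the agent can only sense the chemical concentration at its own position, so it must move to resample the world; the concentration field and step sizes are invented for illustration):

```python
import numpy as np

def concentration(pos, source=np.array([5.0, 5.0])):
    # a smooth chemical gradient peaking at the source
    return np.exp(-np.linalg.norm(pos - source) ** 2 / 10.0)

pos, eps = np.array([0.0, 0.0]), 0.1
for _ in range(100):
    # active sampling: probe nearby points to estimate the local gradient
    grad = np.array([
        concentration(pos + [eps, 0]) - concentration(pos - [eps, 0]),
        concentration(pos + [0, eps]) - concentration(pos - [0, eps]),
    ]) / (2 * eps)
    pos = pos + 0.1 * grad / (np.linalg.norm(grad) + 1e-12)  # chemotaxis step

print(pos)  # ends up near the source at (5, 5)
```

The oil drop has no analogue of this loop: it never acts on the world to test a hypothesis about where the gradient leads.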
01:06:42.980 | This is all fun, theoretically interesting,
01:06:45.020 | but it actually is quite important in terms of reflecting
01:06:48.940 | what I have seen since the turn of the millennium,
01:06:53.220 | which is this move towards an enactive
01:06:56.700 | and embodied understanding of intelligence.
01:06:59.820 | And you say you're from machine learning.
01:07:02.660 | - Yes.
01:07:03.860 | - So what that means,
01:07:05.860 | this sort of the central importance of movement,
01:07:09.620 | I think has yet to really hit machine learning.
01:07:14.100 | It certainly has now diffused itself throughout robotics,
01:07:19.100 | and perhaps you could say certain problems in active vision
01:07:23.300 | where you actually have to move the camera
01:07:25.460 | to sample this and that.
01:07:27.340 | But machine learning of the data mining, deep learning sort,
01:07:31.700 | simply hasn't contended with this issue.
01:07:34.060 | What it's done, instead of dealing
01:07:35.980 | with the movement problem and the active sampling of data,
01:07:39.220 | it's just said, "We don't need to worry about it.
01:07:40.700 | "We can see all the data 'cause we've got big data."
01:07:43.200 | So we can ignore movement.
01:07:45.220 | So that, for me, is an important omission
01:07:50.220 | in current machine learning.
01:07:52.300 | - So current machine learning is much more like the oil drop.
01:07:54.900 | - Yes.
01:07:56.060 | But an oil drop that enjoys exposure
01:07:59.580 | to nearly all the data that it needs to be exposed to,
01:08:03.700 | as opposed to the tadpoles swimming out
01:08:05.860 | to find the right data.
01:08:07.460 | For example, it likes food.
01:08:10.380 | That's a good hypothesis.
01:08:11.340 | Let's test it out.
01:08:12.180 | Let's go and move and ingest food, for example,
01:08:15.700 | and see what that, you know, is that evidence
01:08:17.700 | that I'm the kind of thing that likes this kind of food.
01:08:20.380 | - So the next natural question, and forgive this question,
01:08:23.980 | but if we think of sort of even artificial intelligence
01:08:27.100 | systems, and you've just painted a beautiful picture
01:08:29.420 | of existence and life, so do you ascribe,
01:08:34.420 | do you find within this framework a possibility
01:08:41.100 | of defining consciousness or exploring
01:08:46.100 | the idea of consciousness?
01:08:47.460 | Self-awareness and expanded to consciousness,
01:08:54.740 | like, yeah, how can we start to think
01:08:58.020 | about consciousness within this framework?
01:08:59.700 | Is it possible?
01:09:00.860 | - Well, yeah, I think it's possible to think about it,
01:09:03.180 | whether you'll get--
01:09:04.020 | - Get anywhere is another question.
01:09:06.380 | - And again, I'm not sure that I'm licensed
01:09:10.340 | to answer that question.
01:09:12.700 | I think you'd have to speak to a qualified philosopher
01:09:15.100 | to get a definitive answer there.
01:09:17.300 | But certainly there's a lot of interest
01:09:19.620 | in using not just these ideas,
01:09:21.740 | but related ideas from information theory
01:09:25.860 | to try and tie down the maths and the calculus
01:09:30.500 | and the geometry of consciousness,
01:09:34.060 | either in terms of sort of a minimal consciousness,
01:09:39.060 | even less than a minimal selfhood.
01:09:42.380 | And what I'm talking about is the ability effectively
01:09:47.060 | to plan, to have agency.
01:09:52.380 | So you could argue that a virus does have a form of agency
01:09:57.380 | in virtue of the way that it selectively finds hosts
01:10:02.140 | and cells to live in and moves around,
01:10:04.940 | but you wouldn't endow it with the capacity
01:10:09.260 | to think about planning and moving in a purposeful way
01:10:14.260 | where it countenances the future.
01:10:17.220 | Whereas an ant,
01:10:18.580 | you might think an ant's not quite as unconscious
01:10:21.980 | as a virus, it certainly seems to have a purpose.
01:10:26.140 | It talks to its friends en route during its foraging.
01:10:29.620 | It has a different kind of autonomy,
01:10:33.540 | which is biotic, but beyond a virus.
01:10:38.700 | - So there's something about,
01:10:40.460 | so there's some line that has to do
01:10:43.140 | with the complexity of planning that may contain an answer.
01:10:47.940 | I mean, it'd be beautiful if we can find a line
01:10:51.460 | beyond which we can say a being is conscious.
01:10:55.460 | - Yes, it would be.
01:10:56.500 | - These are wonderful lines that we've drawn
01:10:59.140 | with existence, life, and consciousness.
01:11:02.660 | - Yes, it will be very nice.
01:11:05.300 | One little wrinkle there,
01:11:07.140 | and this is something I've only learned
01:11:08.460 | in the past few months,
01:11:09.380 | is the philosophical notion of vagueness.
01:11:12.380 | So you're saying it would be wonderful to draw a line.
01:11:14.820 | I had always assumed that that line
01:11:17.500 | at some point would be drawn
01:11:20.460 | until about four months ago,
01:11:22.700 | and a philosopher taught me about vagueness.
01:11:24.860 | So I don't know if you've come across this,
01:11:26.220 | but it's a technical concept,
01:11:28.380 | and I think most revealingly illustrated
01:11:33.100 | with: at what point do grains of sand become a pile?
01:11:37.060 | Is it one grain, two grains, three grains, or four grains?
01:11:41.580 | So at what point would you draw the line
01:11:44.180 | between being a pile of sand
01:11:46.220 | and a collection of grains of sand?
01:11:51.220 | In the same way, is it right to ask,
01:11:53.500 | where would I draw the line
01:11:54.780 | between conscious and unconscious?
01:11:56.700 | And it might be a vague concept.
01:11:59.620 | Having said that, I agree with you entirely.
01:12:01.620 | (both laughing)
01:12:02.900 | Systems that have the ability to plan.
01:12:06.420 | So just technically what that means
01:12:08.380 | is your inferential self-evidencing,
01:12:13.860 | by which I simply mean the dynamics,
01:12:17.220 | literally the thermodynamics and gradient flows
01:12:20.300 | that underwrite the preservation
01:12:22.260 | of your oil droplet-like form
01:12:28.580 | can be described as an optimization
01:12:30.300 | of log Bayesian model evidence, your ELBO.
01:12:34.320 | That self-evidencing must be evidence for a model
01:12:41.380 | of what's causing the sensory impressions
01:12:44.020 | on the sensory part of your surface
01:12:46.380 | or your Markov blanket.
01:12:48.460 | If that model is capable of planning,
01:12:51.140 | it must include a model of the future consequences
01:12:53.820 | of your active states or your action. That's just planning.
01:12:56.780 | So we're now in the game of planning as inference.
01:12:59.380 | Now notice what we've made though.
01:13:00.620 | We've made quite a big move away
01:13:02.660 | from big data and machine learning,
01:13:05.220 | because again, it's the consequences of moving.
01:13:08.500 | It's the consequences of selecting those data
01:13:11.100 | or those data or looking over there.
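
Planning as inference can be sketched very compactly (toy numbers and a deliberately stripped-down scoring rule; in the active inference literature the score is an expected free energy with both risk and ambiguity terms, but the risk term alone shows the shape of the idea):

```python
import numpy as np

preferred = np.array([0.8, 0.15, 0.05])  # prior preferences over outcomes

predicted = {                             # forecast outcomes per action
    "stay":       np.array([0.2, 0.4, 0.4]),
    "move_left":  np.array([0.7, 0.2, 0.1]),
    "move_right": np.array([0.3, 0.5, 0.2]),
}

def kl(p, q):
    # risk: divergence of forecast outcomes from preferred outcomes
    return float(np.sum(p * np.log(p / q)))

G = {action: kl(p, preferred) for action, p in predicted.items()}
print(G)                  # expected free energy of each candidate plan
print(min(G, key=G.get))  # "move_left": its forecast best matches preferences
```

Selecting the action whose predicted consequences carry the lowest expected free energy is the formal sense in which the artifact "looks as if" it is choosing among futures.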
01:13:14.220 | And that tells you immediately that even to be a contender
01:13:18.380 | for a conscious artifact or a,
01:13:20.120 | is it strong AI or generalized AI?
01:13:24.300 | - Generalized AI, yeah.
01:13:25.140 | - As it's called now.
01:13:26.780 | Then you've got to have movement in the game.
01:13:29.260 | And furthermore, you've got to have a generative model
01:13:32.580 | of the sort you might find in say a variational autoencoder
01:13:35.900 | that is thinking about the future conditioned
01:13:39.860 | upon different courses of action.
01:13:41.900 | Now that brings a number of things to the table,
01:13:43.820 | which now you start to think,
01:13:45.340 | well, those who've got all the right ingredients
01:13:47.420 | talk about consciousness.
01:13:48.500 | I've now got to select among a number
01:13:50.700 | of different courses of action into the future
01:13:53.340 | as part of planning.
01:13:54.900 | I've now got free will.
01:13:56.460 | The act of selecting this course of action
01:13:58.700 | or this policy or that policy or that action
01:14:01.300 | suddenly makes me into an inference machine,
01:14:04.740 | a self-evidencing artifact
01:14:08.820 | that now looks as if it's selecting
01:14:11.220 | amongst different alternative ways forward
01:14:13.460 | as I actively swim here or swim there
01:14:15.460 | or look over here, look over there.
01:14:17.900 | So I think you've now got to a situation
01:14:19.900 | if there is planning in the mix,
01:14:22.180 | you're now getting much closer to that line,
01:14:25.180 | if that line were ever to exist.
01:14:27.300 | I don't think it gets you quite as far as self-aware though.
01:14:30.560 | I think, and then you have to, I think,
01:14:36.380 | grapple with the question
01:14:39.060 | how would formally you write down a calculus
01:14:42.580 | or a maths of self-awareness?
01:14:44.580 | I don't think it's impossible to do,
01:14:46.380 | but I think there'll be pressure on you
01:14:51.580 | to actually commit to a formal definition
01:14:53.220 | of what you mean by self-awareness.
01:14:55.860 | I think most people that I know
01:14:58.860 | would probably say that a goldfish, a pet fish,
01:15:03.780 | was not self-aware.
01:15:06.940 | They would probably argue about their favorite cat,
01:15:09.540 | but would be quite happy to say
01:15:12.180 | that their mom was self-aware.
01:15:14.820 | - But that might very well connect
01:15:17.420 | to some level of complexity with planning.
01:15:20.860 | It seems like self-awareness is essential
01:15:23.620 | for complex planning.
01:15:26.500 | - Yeah.
01:15:27.340 | Do you want to take that further?
01:15:28.180 | 'Cause I think you're absolutely right.
01:15:29.300 | - Again, the line is unclear,
01:15:31.100 | but it seems like integrating yourself
01:15:34.660 | into the world, into your planning,
01:15:39.140 | is essential for constructing complex plans.
01:15:42.260 | - Yes, yeah.
01:15:43.660 | - So mathematically describing that
01:15:45.900 | in the same elegant way
01:15:47.380 | as you have with the free energy principle
01:15:49.900 | might be difficult.
01:15:51.060 | - Well, yes and no.
01:15:53.260 | I don't think that, well, perhaps we should just,
01:15:55.300 | can we just go back?
01:15:56.940 | That's a very important answer you gave.
01:15:58.660 | And I think if I just unpacked it,
01:16:01.820 | you'd see the truisms that you've just exposed for us.
01:16:05.700 | But let me, sorry.
01:16:07.620 | I'm mindful that I didn't answer your question before.
01:16:11.380 | Well, what's the free energy principle good for?
01:16:13.900 | Is it just a pretty theoretical exercise
01:16:15.700 | to explain non-equilibrium steady states?
01:16:17.900 | Yes, it is.
01:16:19.340 | It does nothing more for you than that.
01:16:21.340 | It can be regarded, it's gonna sound very arrogant,
01:16:24.060 | but it is of the same sort as the theory of natural selection
01:16:27.900 | or a hypothesis of natural selection.
01:16:32.780 | Beautiful, undeniably true,
01:16:36.540 | but tells you absolutely nothing about
01:16:39.620 | why you have legs and eyes.
01:16:42.060 | It tells you nothing about the actual phenotype
01:16:44.740 | and it wouldn't allow you to build something.
01:16:48.140 | So the free energy principle by itself
01:16:51.140 | is as vacuous as most tautological theories.
01:16:54.900 | And by tautological, of course,
01:16:56.220 | I'm talking about the theory of natural selection,
01:16:58.780 | the survival of the fittest.
01:17:00.060 | What's the fittest? That which survives.
01:17:01.740 | Why does it survive? Because it's the fittest.
01:17:03.020 | It just goes round in circles.
01:17:04.740 | In a sense, the free energy principle has that same
01:17:08.540 | deflationary tautology under the hood.
01:17:11.460 | It's a characteristic of things that exist.
01:17:17.700 | Why do they exist?
01:17:18.540 | Because they minimize their free energy.
01:17:19.740 | Why do they minimize their free energy?
01:17:21.380 | Because they exist.
01:17:22.220 | And you just keep on going round and round and round.
01:17:24.700 | But the practical thing,
01:17:28.060 | which you don't get from natural selection,
01:17:32.660 | but you could say has now manifest in things
01:17:35.660 | like differential evolution or genetic algorithms
01:17:38.140 | or MCMC, for example, in machine learning.
01:17:41.300 | The practical thing you can get is,
01:17:43.180 | if it looks as if things that exist
01:17:45.420 | are trying to have density dynamics
01:17:49.380 | that look as though they're optimizing
01:17:51.540 | a variational free energy,
01:17:53.300 | and a variational free energy has to be
01:17:55.180 | a functional of a generative model,
01:17:57.260 | a probabilistic description of causes and consequences,
01:18:01.700 | causes out there, consequences in the sensorium,
01:18:04.540 | on the sensory parts of the Markov blanket,
01:18:07.020 | then it should, in theory, be possible
01:18:08.660 | to write down the generative model,
01:18:10.380 | work out the gradients,
01:18:11.780 | and then cause it to autonomously self-evidence.
01:18:15.860 | So you should be able to write down oil droplets.
01:18:18.140 | You should be able to create artifacts
01:18:20.100 | where you have supplied the objective function
01:18:24.260 | that supplies the gradients,
01:18:25.620 | that supplies the self-organizing dynamics
01:18:28.380 | to non-equilibrium steady state.
01:18:30.300 | So there is actually a practical application
01:18:32.700 | of the free energy principle
01:18:34.140 | when you can write down your required evidence
01:18:37.820 | in terms of, well, when you can write down
01:18:40.420 | the generative model,
01:18:41.660 | that is the thing that has the evidence,
01:18:44.820 | the probability of these sensory data
01:18:46.780 | or this data, given that model is effectively
01:18:51.780 | the thing that the ELBO,
01:18:54.220 | or the variational free energy, bounds or approximates.
01:18:58.260 | That means that you can actually write down the model
01:19:00.900 | and the kind of thing that you want to engineer,
01:19:04.660 | the kind of AGI, artificial general intelligence,
01:19:07.940 | that you want to manifest probabilistically,
01:19:14.500 | and then you engineer, a lot of hard work,
01:19:16.740 | but you would engineer a robot and a computer
01:19:19.820 | to perform a gradient descent on that objective function.
01:19:23.460 | So it does have a practical implication.
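
Here is the recipe in miniature, as a hedged Python sketch (a one-dimensional Gaussian generative model with invented numbers; real systems just scale this up):

```python
import numpy as np

# Generative model:  s ~ N(prior_mu, 1/prior_prec),  o ~ N(2*s, 1/obs_prec)
prior_mu, prior_prec, obs_prec = 0.0, 1.0, 4.0
o = 3.0  # an observation impressed on the (sensory part of the) blanket

def free_energy(mu):
    # with a point-mass posterior, F reduces to precision-weighted
    # squared prediction errors (up to additive constants)
    return 0.5 * (obs_prec * (o - 2 * mu) ** 2
                  + prior_prec * (mu - prior_mu) ** 2)

def dF_dmu(mu):
    return -2 * obs_prec * (o - 2 * mu) + prior_prec * (mu - prior_mu)

mu = 0.0                      # internal state: current estimate of the cause
for _ in range(100):
    mu -= 0.01 * dF_dmu(mu)   # self-evidencing as gradient descent

print(mu, free_energy(mu))    # mu -> 24/17 ~ 1.41, the exact posterior mean
```

All the character of the artifact lives in the two lines defining the model; the descent itself is generic. That is precisely the division of labor being described here: the gradient flow is easy, and writing down the generative model is the hard part.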
01:19:26.220 | Now, why am I wittering on about that?
01:19:27.500 | It did seem relevant to, yes.
01:19:28.940 | So what kinds of, so the answer to,
01:19:32.980 | would it be easy or would it be hard?
01:19:34.300 | Well, mathematically, it's easy.
01:19:36.220 | I've just told you, all you need to do
01:19:38.060 | is write down your perfect artifact probabilistically
01:19:43.060 | in the form of a probabilistic generative model,
01:19:46.540 | probability distribution over the causes and consequences
01:19:49.860 | of the world in which this thing is immersed.
01:19:54.700 | And then you just engineer a computer and a robot
01:19:58.060 | to perform a gradient descent on that objective function.
01:20:00.820 | No problem.
01:20:01.660 | But of course, the big problem
01:20:04.180 | is writing down the generative model.
01:20:05.940 | So that's where the heavy lifting comes in.
01:20:08.060 | So it's the form and the structure of that generative model,
01:20:12.180 | which basically defines the artifact that you will create,
01:20:15.660 | or indeed, the kind of artifact that has self-awareness.
01:20:19.620 | So that's where all the hard work comes,
01:20:22.060 | very much like natural selection doesn't tell you
01:20:24.620 | in the slightest why you have eyes.
01:20:26.980 | So you have to drill down on the actual phenotype,
01:20:29.500 | the actual generative model.
01:20:31.500 | So with that in mind, what did you tell me
01:20:36.380 | that tells me immediately the kinds of generative models
01:20:40.700 | I would have to write down in order to have self-awareness?
01:20:43.500 | - What you said to me was, I have to have a model
01:20:48.220 | that is effectively fit for purpose for this kind of world
01:20:51.860 | in which I operate.
01:20:53.700 | And if I now make the observation that this kind of world
01:20:57.140 | is effectively largely populated by other things like me,
01:21:00.580 | i.e. you, then it makes enormous sense
01:21:04.220 | that if I can develop a hypothesis
01:21:07.300 | that we are similar kinds of creatures,
01:21:11.540 | in fact, the same kind of creature,
01:21:13.620 | but I am me and you are you,
01:21:16.340 | then it becomes, again, mandated to have a sense of self.
01:21:21.340 | So if I live in a world that is constituted
01:21:25.260 | by things like me, basically a social world, a community,
01:21:29.500 | then it becomes necessary now for me to infer
01:21:32.300 | that it's me talking and not you talking.
01:21:34.420 | I wouldn't need that if I was on Mars by myself,
01:21:37.300 | or if I was in the jungle as a feral child.
01:21:40.060 | If there was nothing like me around,
01:21:43.140 | there would be no need to have an inference,
01:21:46.500 | a hypothesis, ah, yes, it is me that is experiencing
01:21:50.140 | or causing these sounds, and it is not you.
01:21:52.380 | It's only when there's ambiguity in play
01:21:54.700 | induced by the fact that there are others in that world.
01:21:58.260 | So I think that the special thing about self-aware artifacts
01:22:03.260 | is that they have learned to, or they have acquired,
01:22:08.300 | or at least are equipped with, possibly by evolution,
01:22:11.660 | generative models that allow for the fact
01:22:14.580 | there are lots of copies of things like them around,
01:22:17.380 | and therefore they have to work out it's you and not me.
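
The "is it me or you?" inference is ordinary Bayes once you posit two competing hypotheses. A toy version (invented probabilities; the motor cue plays the role of a corollary-discharge signal, an assumption made purely for illustration):

```python
# Two hypotheses about the cause of a heard voice.
p_self, p_other = 0.5, 0.5  # prior: either of us could be speaking

# Likelihood of registering my own motor/efference signal under each.
p_motor_given_self, p_motor_given_other = 0.95, 0.05

motor_signal_present = True
like_self = p_motor_given_self if motor_signal_present else 1 - p_motor_given_self
like_other = p_motor_given_other if motor_signal_present else 1 - p_motor_given_other

post_self = like_self * p_self
post_other = like_other * p_other
print("P(it's me talking | evidence):", post_self / (post_self + post_other))  # ~0.95
```

In a world with no other speakers, the "other" hypothesis never earns any posterior mass, which is the sense in which a feral child or a solitary Martian would have no use for a self model.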
01:22:20.580 | - That's brilliant.
01:22:23.260 | I've never thought of that.
01:22:24.580 | I never thought of that, that the purpose of,
01:22:28.460 | the real usefulness of consciousness or self-awareness
01:22:32.980 | in the context of planning existing in the world
01:22:35.940 | is so you can operate with other things like you.
01:22:38.380 | And like you could, it doesn't have to necessarily be human.
01:22:40.780 | It could be other kind of similar creatures.
01:22:43.460 | - Absolutely, well, we imbue a lot of our attributes
01:22:46.100 | into our pets, don't we?
01:22:47.860 | Or we try to make our robots humanoid.
01:22:49.900 | And I think there's a deep reason for that,
01:22:51.860 | that it's just much easier to read the world
01:22:54.660 | if you can make the simplifying assumption
01:22:56.220 | that basically you're me, and it's just your turn to talk.
01:22:59.300 | I mean, when we talk about planning,
01:23:01.500 | when you talk specifically about planning,
01:23:04.180 | the highest, if you like, manifestation or realization
01:23:07.780 | of that planning is what we're doing now.
01:23:09.620 | I mean, the human condition doesn't get any higher
01:23:12.540 | than this talking about the philosophy of existence
01:23:16.820 | and the conversation.
01:23:17.900 | But in that conversation, there is a beautiful art
01:23:22.900 | of turn-taking and mutual inference, theory of mind.
01:23:28.100 | I have to know when you want to listen.
01:23:29.620 | I have to know when you want to interrupt.
01:23:31.100 | I have to make sure that you're online.
01:23:32.500 | I have to have a model in my head of your model in your head.
01:23:35.780 | That's the highest, the most sophisticated form
01:23:38.300 | of generative model, where the generative model
01:23:40.140 | actually has a generative model
01:23:41.300 | of somebody else's generative model.
01:23:42.780 | And I think that, and what we are doing now,
01:23:45.780 | evinces the kinds of generative models
01:23:49.140 | that would support self-awareness.
01:23:51.220 | Because without that, we'd both be talking over each other,
01:23:54.620 | or we'd be singing together in a choir, you know?
01:23:58.260 | That was just probably not, that's not a brilliant analogy
01:24:00.500 | for what I'm trying to say, but,
01:24:02.460 | yeah, we wouldn't have this discourse.
01:24:05.140 | - Yeah, the dance of it, yeah, that's right.
01:24:07.020 | You'd have to have, as I interrupt. (laughs)
01:24:10.460 | I mean, that's beautifully put.
01:24:13.220 | I'll re-listen to this conversation many times.
01:24:15.780 | There's so much poetry in this, and mathematics.
01:24:21.600 | Let me ask the silliest, or perhaps the biggest question
01:24:26.260 | as a last kind of question.
01:24:29.860 | We've talked about life and existence
01:24:33.340 | and the objective function
01:24:34.620 | under which these objects would operate.
01:24:37.500 | What do you think is the objective function
01:24:39.700 | of our existence?
01:24:41.540 | What's the meaning of life?
01:24:43.320 | What do you think is the, for you, perhaps,
01:24:47.140 | the purpose, the source of fulfillment,
01:24:50.200 | the source of meaning for your existence,
01:24:53.100 | as one blob in this soup?
01:24:57.620 | - I'm tempted to answer that, again, as a physicist.
01:25:00.540 | (laughs)
01:25:01.740 | The free energy I expect, consequent upon my behavior.
01:25:05.460 | So technically, that, you know,
01:25:06.300 | and we could get a really interesting conversation
01:25:09.060 | about what that comprises in terms of
01:25:11.840 | searching for information, resolving uncertainty
01:25:14.340 | about the kind of thing that I am.
01:25:16.580 | But I suspect that you want a slightly more personal
01:25:20.220 | and fun answer, but which can be consistent with that.
01:25:24.660 | And I think it's reassuringly simple
01:25:29.660 | and harps back to what you were taught as a child,
01:25:35.380 | that you have certain beliefs about the kind of creature
01:25:39.540 | and the kind of person you are.
01:25:41.860 | And all that self-evidencing means,
01:25:44.860 | all that minimizing variational free energy
01:25:46.860 | in an enactive and embodied way means
01:25:50.380 | is fulfilling the beliefs about what kind of thing
01:25:54.620 | you are.
01:25:55.740 | And of course, we're all given those scripts,
01:25:58.060 | those narratives at a very early age,
01:26:01.300 | usually in the form of bedtime stories or fairy stories,
01:26:04.340 | that I'm a princess and I'm gonna meet a beast
01:26:07.220 | who's gonna transform and it's gonna be a prince.
01:26:09.860 | - So the narratives are all around you,
01:26:11.900 | from your parents to the friends,
01:26:14.580 | to the society feeds these stories.
01:26:17.700 | And then your objective function is to fulfill--
01:26:21.020 | - Exactly.
01:26:21.860 | - That narrative that has been encultured
01:26:24.260 | by your immediate family, but as you say,
01:26:27.180 | also the sort of the culture in which you grew up
01:26:29.620 | and you create for yourself.
01:26:30.900 | I mean, again, because of this active inference,
01:26:33.500 | this enactive aspect of self-evidencing,
01:26:36.100 | not only am I modeling my environment, my econiche,
01:26:41.700 | my external states out there,
01:26:44.100 | but I'm actively changing them all the time.
01:26:46.540 | And external states are doing the same back,
01:26:49.020 | we're doing it together.
01:26:49.860 | So there's a synchrony that means
01:26:52.820 | that I'm creating my own culture
01:26:54.820 | over different timescales.
01:26:56.820 | So the question now is for me being very selfish,
01:27:00.780 | what scripts was I given?
01:27:02.260 | It basically was a mixture between Einstein and Sherlock Holmes.
01:27:06.140 | So I smoke as heavily as possible,
01:27:08.820 | try to avoid too much interpersonal contact,
01:27:15.260 | enjoy the fantasy that you're a popular scientist
01:27:20.260 | who's gonna make a difference in a slightly quirky way.
01:27:23.340 | So that's where I grew up.
01:27:25.140 | My father was an engineer and loved science
01:27:28.300 | and he loved sort of things like Sir Arthur Eddington's
01:27:33.300 | "Space, Time and Gravitation,"
01:27:34.500 | which was the first understandable version
01:27:38.820 | of general relativity.
01:27:41.780 | So all the fairy stories I was told as I was growing up
01:27:45.820 | were all about these characters.
01:27:47.660 | I'm keeping "The Hobbit" out of this
01:27:50.580 | because that doesn't quite fit my narrative.
01:27:53.140 | But it's a journey of exploration, I suppose, of sorts.
01:27:56.220 | So yeah, I've just grown up to be
01:27:58.180 | what I imagine a mild-mannered Sherlock Holmes/Albert Einstein
01:28:04.540 | would do in my shoes.
01:28:07.980 | - And you did it elegantly and beautifully,
01:28:10.100 | Carl, it was a huge honor talking to you today.
01:28:11.860 | It was fun.
01:28:12.700 | Thank you so much for your time.
01:28:13.620 | - Oh, thank you, Shane.
01:28:15.580 | - Thank you for listening to this conversation
01:28:17.260 | with Carl Friston,
01:28:18.500 | and thank you to our presenting sponsor, Cash App.
01:28:21.300 | Please consider supporting the podcast
01:28:23.060 | by downloading Cash App and using code LEXPODCAST.
01:28:27.060 | If you enjoy this podcast, subscribe on YouTube,
01:28:29.740 | review it with five stars on Apple Podcasts,
01:28:32.060 | support it on Patreon,
01:28:33.460 | or simply connect with me on Twitter @LexFridman.
01:28:37.500 | And now let me leave you with some words
01:28:39.380 | from Carl Friston.
01:28:40.660 | Your arm moves because you predict it will,
01:28:44.620 | and your motor system seeks to minimize prediction error.
01:28:48.060 | Thank you for listening, and hope to see you next time.
01:28:52.360 | (upbeat music)