
You Are Your Own Existence Proof (Karl Friston) | AI Podcast Clips with Lex Fridman


Chapters

0:10 What Is the Free Energy Principle
0:51 The Free Energy Principle
2:29 The Existential Imperative
4:30 The Free-Energy Principle
11:42 The Difference between Living and Nonliving Things
28:05 Philosophical Notion of Vagueness

Whisper Transcript

00:00:00.000 | - Let's start with the basics.
00:00:03.240 | So you've also formulated a fascinating principle,
00:00:08.160 | the free energy principle.
00:00:09.520 | Can we maybe start at the basics
00:00:11.320 | and what is the free energy principle?
00:00:15.640 | - Well, in fact, the free energy principle
00:00:18.240 | inherits a lot from the building
00:00:24.680 | of these data analytic approaches
00:00:27.240 | to these very high dimensional time series
00:00:30.160 | you get from the brain.
00:00:31.960 | So I think it's interesting to acknowledge that.
00:00:33.960 | And in particular, the analysis tools
00:00:35.960 | that try to address the other side,
00:00:39.120 | which is the functional integration.
00:00:40.360 | So the connectivity analysis.
00:00:42.060 | On the one hand, but I should also acknowledge
00:00:47.880 | it inherits an awful lot from machine learning as well.
00:00:51.320 | So the free energy principle
00:00:56.080 | is just a formal statement
00:00:58.600 | that the existential imperatives
00:01:03.240 | for any system that manages to survive
00:01:06.040 | in a changing world
00:01:07.400 | can be cast as an inference problem
00:01:13.920 | in the sense that you could interpret
00:01:17.240 | the probability of existing as the evidence that you exist.
00:01:21.720 | And if you can write down that problem of existence
00:01:25.480 | as a statistical problem,
00:01:26.880 | then you can use all the maths
00:01:28.280 | that has been developed for inference
00:01:32.200 | to understand and characterize the ensemble dynamics
00:01:37.200 | that must be in play in the service of that inference.
00:01:41.600 | So technically what that means is
00:01:44.280 | you can always interpret anything that exists
00:01:47.200 | in virtue of being separate from the environment
00:01:51.680 | in which it exists
00:01:53.760 | as trying to minimize variational free energy.
00:01:58.760 | And if you're from the machine learning community,
00:02:01.600 | you will know that as a negative evidence lower bound
00:02:05.240 | or a negative elbow,
00:02:06.440 | which is the same as saying you're trying to maximize
00:02:11.360 | or it will look as if all your dynamics
00:02:13.880 | are trying to maximize the complement of that,
00:02:17.920 | which is the marginal likelihood
00:02:20.040 | or the evidence for your own existence.
00:02:22.520 | So that's basically the free energy principle.
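Spelled out in the machine-learning notation alluded to here (a standard identity, not something stated verbatim in the conversation), the variational free energy $F$ of an approximate posterior $q(s)$ over the causes $s$ of observations $o$ is:

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
      \;=\; \underbrace{D_{\mathrm{KL}}\big[q(s)\,\big\|\,p(s \mid o)\big]}_{\ge\, 0}
      \;-\; \ln p(o)
      \;=\; -\,\mathrm{ELBO}.
```

Because the KL term is non-negative, minimizing $F$ pushes the ELBO up toward the log marginal likelihood $\ln p(o)$, the "evidence for your own existence."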
00:02:26.160 | - But to even take a sort of a small step backwards,
00:02:30.120 | you said the existential imperative.
00:02:33.120 | There's a lot of beautiful poetic words here,
00:02:36.120 | but to put it crudely,
00:02:38.360 | it's a fascinating idea of basically
00:02:44.080 | of trying to describe if you're looking at a blob,
00:02:47.800 | how do you know this thing is alive?
00:02:50.200 | What does it mean to be alive?
00:02:51.680 | What does it mean to exist?
00:02:53.480 | And so you can look at the brain,
00:02:55.440 | you can look at parts of the brain,
00:02:56.680 | or this is just a general principle
00:02:58.840 | that applies to almost any system.
00:03:03.200 | That's just a fascinating sort of philosophically
00:03:06.120 | at every level question and a methodology
00:03:09.080 | to try to answer that question.
00:03:10.320 | What does it mean to be alive?
00:03:12.000 | - Yes.
00:03:12.840 | - So that's a huge endeavor,
00:03:16.920 | and it's nice that there's at least some,
00:03:19.160 | from some perspective, a clean answer.
00:03:21.440 | So maybe can you talk about that optimization view of it?
00:03:26.240 | So what's trying to be minimized and maximized?
00:03:29.520 | A system that's alive, what is it trying to minimize?
00:03:32.840 | - Right, you've made a big move there.
00:03:35.360 | - Apologize.
00:03:37.440 | - No, no, it's good to make big moves.
00:03:40.000 | But you've assumed that the thing exists in a state
00:03:47.440 | that could be living or non-living.
00:03:50.680 | So I may ask you, well,
00:03:52.720 | what licenses you to say that something exists?
00:03:56.120 | That's why I use the word existential.
00:03:58.280 | It's beyond living, it's just existence.
00:04:01.440 | So if you drill down onto the definition
00:04:04.040 | of things that exist, then they have certain properties.
00:04:09.040 | If you borrow the maths
00:04:12.400 | from non-equilibrium steady state physics,
00:04:15.400 | that enables you to interpret their existence
00:04:21.400 | in terms of this optimization procedure.
00:04:25.320 | So it's good you introduced the word optimization.
00:04:28.160 | So what the free energy principle in its sort of
00:04:33.080 | most ambitious but also most deflationary
00:04:38.080 | and simplest form says is that if something exists,
00:04:43.200 | then it must, by the mathematics
00:04:47.440 | of non-equilibrium steady state,
00:04:51.160 | exhibit properties that make it look as if
00:04:56.760 | it is optimizing a particular quantity.
00:04:59.720 | And it turns out that particular quantity
00:05:02.160 | happens to be exactly the same
00:05:04.560 | as the evidence lower bound in machine learning
00:05:07.360 | or Bayesian model evidence in Bayesian statistics,
00:05:11.320 | or, and then I can list a whole other
00:05:14.360 | list of ways of understanding this key quantity,
00:05:19.360 | which is a bound on surprisal, self-information,
00:05:24.480 | if you're in information theory.
00:05:27.000 | There are a whole, there are a number
00:05:29.280 | of different perspectives on this quantity.
00:05:30.720 | It's just basically the log probability
00:05:32.960 | of being in a particular state.
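That bound can be checked numerically. Here is a minimal sketch with a made-up two-state discrete model (all numbers invented for illustration): the free energy of any approximate posterior upper-bounds the surprisal, the negative log probability of the observed outcome, and the bound becomes tight at the exact posterior.

```python
import math

# Toy discrete generative model (numbers invented for illustration):
# hidden state s in {0, 1}, observation o in {0, 1}.
p_s = [0.7, 0.3]                      # prior p(s)
p_o_given_s = [[0.9, 0.1],            # likelihood p(o|s), rows indexed by s
               [0.2, 0.8]]

o = 1  # the observed outcome

# Model evidence p(o) and its surprisal (negative log evidence)
p_o = sum(p_s[s] * p_o_given_s[s][o] for s in range(2))
surprisal = -math.log(p_o)

def free_energy(q):
    """Variational free energy F[q] = E_q[ln q(s) - ln p(o, s)]."""
    return sum(q[s] * (math.log(q[s]) - math.log(p_s[s] * p_o_given_s[s][o]))
               for s in range(2))

# Any approximate posterior q(s) gives F >= surprisal ...
assert free_energy([0.5, 0.5]) >= surprisal

# ... and the exact posterior q(s) = p(s|o) makes the bound tight.
q_exact = [p_s[s] * p_o_given_s[s][o] / p_o for s in range(2)]
assert abs(free_energy(q_exact) - surprisal) < 1e-12
```

The same quantity can be read as a negative ELBO, a bound on self-information, or a bound on the log probability of being in the observed state, which is the point being made above.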
00:05:36.160 | I'm telling this story as an honest
00:05:38.840 | attempt to answer your question.
00:05:41.640 | And I'm answering it as if I was pretending
00:05:45.280 | to be a physicist who was trying to understand
00:05:48.400 | the fundaments of non-equilibrium steady state.
00:05:53.120 | And I shouldn't really be doing that
00:05:55.640 | because the last time I was taught physics,
00:05:58.240 | I was in my 20s.
00:05:59.760 | - What kind of systems, when you think about
00:06:01.440 | the free energy principle,
00:06:02.400 | what kind of systems are you imagining?
00:06:04.640 | As a sort of more specific kind of case study.
00:06:07.640 | - Yeah, I'm imagining a range of systems,
00:06:11.720 | but at its simplest, a single-celled organism
00:06:16.720 | that can be distinguished
00:06:22.040 | from its environment.
00:06:23.720 | So at its simplest, that's basically
00:06:27.520 | what I always imagined in my head.
00:06:29.880 | And you may ask, well, is there any,
00:06:32.680 | how on earth can you even elaborate questions
00:06:37.680 | about the existence of a single drop of oil, for example?
00:06:43.040 | But there are deep questions there.
00:06:45.800 | Why doesn't the oil, why doesn't the thing,
00:06:48.880 | the interface between the drop of oil
00:06:51.480 | that contains an interior
00:06:53.840 | and the thing that is not the drop of oil,
00:06:56.640 | which is the solvent in which it is immersed,
00:07:00.200 | how does that interface persist over time?
00:07:03.440 | Why doesn't the oil just dissolve into solvent?
00:07:06.760 | So what special properties of the exchange
00:07:12.400 | between the surface of the oil drop
00:07:14.600 | and the external states in which it's immersed,
00:07:18.280 | if you're a physicist, say, it would be the heat bath.
00:07:20.440 | You know, you've got a physical system,
00:07:23.240 | an ensemble again, we're talking about
00:07:24.760 | density dynamics, ensemble dynamics,
00:07:27.520 | an ensemble of atoms or molecules immersed in the heat bath.
00:07:32.400 | But the question is, how did the oil drop get there
00:07:35.440 | and why does it not dissolve?
00:07:37.360 | - How is it maintaining itself?
00:07:38.800 | - Exactly.
00:07:39.640 | - What actions is it?
00:07:40.480 | I mean, it's such a fascinating idea of a drop of oil
00:07:43.520 | and I guess it would dissolve in water,
00:07:46.000 | no, it wouldn't dissolve in water.
00:07:47.680 | So what--
00:07:48.520 | - Precisely, so why not?
00:07:50.080 | - Why not? - Why not?
00:07:51.200 | - And how do you mathematically describe,
00:07:53.000 | I mean, it's such a beautiful idea
00:07:54.640 | and also the idea of like, where does the thing,
00:07:58.120 | where does the drop of oil end and where does it begin?
00:08:03.120 | - Right, so I mean, you're asking deep questions,
00:08:06.560 | deep in a non-millennial sense here.
00:08:08.680 | (laughing)
00:08:09.520 | - Not in a hierarchical sense.
00:08:10.840 | (laughing)
00:08:12.440 | - But what you can do, so this is the deflationary part of it.
00:08:17.440 | Can I just qualify my answer by saying
00:08:19.560 | that normally when I'm asked this question,
00:08:20.920 | I answer from the point of view of a psychologist
00:08:22.760 | and we talk about predictive processing and predictive coding
00:08:25.240 | and the brain is an inference machine,
00:08:27.800 | but you haven't asked me from that perspective,
00:08:30.080 | I'm answering from the point of view of a physicist.
00:08:32.400 | So the question is not so much why,
00:08:37.200 | but if it exists, what properties must it display?
00:08:40.640 | So that's the deflationary part of the free energy principle.
00:08:43.080 | The free energy principle does not supply an answer
00:08:47.000 | as to why, it's saying if something exists,
00:08:50.720 | then it must display these properties.
00:08:53.880 | That's the sort of the thing that's on offer.
00:08:57.720 | And it so happens that these properties it must display
00:09:01.400 | are actually intriguing and have this inferential gloss,
00:09:06.400 | this sort of self-evidencing gloss
00:09:09.840 | that inherits from the fact that the very preservation
00:09:14.120 | of the boundary between the oil drop and the not oil drop
00:09:18.880 | requires an optimization of a particular function
00:09:22.360 | or a functional that defines the presence
00:09:26.800 | of the existence of this oil drop,
00:09:29.240 | which is why I started with existential imperatives.
00:09:32.320 | It is a necessary condition for existence
00:09:35.720 | that this must occur because the boundary
00:09:39.040 | basically defines the thing that's existing.
00:09:42.200 | So it is that self-assembly aspect
00:09:44.160 | that you were hinting at in biology,
00:09:49.160 | sometimes known as autopoiesis
00:09:51.920 | in computational chemistry with self-assembly.
00:09:56.320 | It's the, what does it look like?
00:09:59.760 | Sorry, how would you describe things
00:10:02.200 | that configure themselves out of nothing?
00:10:04.800 | The way they clearly demarcate themselves
00:10:08.160 | from the states or the soup in which they are immersed.
00:10:13.680 | So from the point of view of computational chemistry,
00:10:17.120 | for example, you would just understand that
00:10:19.640 | as a configuration of a macromolecule
00:10:21.640 | to minimize its free energy, its thermodynamic free energy.
00:10:25.120 | It's exactly the same principle
00:10:26.720 | that we've been talking about,
00:10:27.720 | that thermodynamic free energy is just the negative ELBO.
00:10:31.260 | It's the same mathematical construct.
00:10:34.420 | So the very emergence of existence of structure or form
00:10:38.720 | that can be distinguished from the environment
00:10:41.280 | or the thing that is not the thing
00:10:43.480 | necessitates the existence of an objective function
00:10:50.440 | that it looks as if it is minimizing.
00:10:54.360 | It's finding a free energy minima.
00:10:56.520 | - And so just to clarify, I'm trying to wrap my head around.
00:11:01.120 | So the free energy principle says that if something exists,
00:11:05.840 | these are the properties it should display.
00:11:08.760 | So what that means is we can't just look,
00:11:13.640 | we can't just go into a soup and there's no mechanism.
00:11:17.280 | The free energy principle doesn't give us a mechanism
00:11:19.680 | to find the things that exist.
00:11:21.900 | Is that what's being implied,
00:11:26.000 | that you can kind of use it to reason,
00:11:30.580 | to think about, like study a particular system
00:11:33.440 | and say, does this exhibit these qualities?
00:11:38.000 | - That's an excellent question.
00:11:39.760 | But to answer that, I'd have to return
00:11:42.160 | to your previous question about what's the difference
00:11:43.760 | between living and non-living things.
00:11:45.940 | - Yes, well, actually, sorry.
00:11:49.160 | So yeah, maybe we can go there.
00:11:51.400 | You kind of drew a line,
00:11:53.120 | and forgive me for the stupid questions,
00:11:54.920 | but you kind of drew a line between living and existing.
00:11:58.520 | Is there an interesting sort of--
00:12:01.760 | - Distinction?
00:12:02.600 | - Distinction between the two?
00:12:03.420 | - Yeah, I think there is.
00:12:05.560 | So things do exist, grains of sand,
00:12:10.560 | rocks on the moon, trees, you.
00:12:15.440 | So all of these things can be separated
00:12:19.840 | from the environment in which they are immersed,
00:12:22.280 | and therefore they must at some level
00:12:24.160 | be optimizing their free energy.
00:12:27.280 | Taking this sort of model evidence interpretation
00:12:32.160 | of this quantity, that basically means
00:12:34.120 | that self-evidencing, another nice little twist
00:12:36.960 | of phrase here is that you are your own existence proof,
00:12:41.480 | statistically speaking. I don't think I said that first.
00:12:46.160 | Somebody did, but I love that phrase.
00:12:48.020 | - You are your own existence proof.
00:12:51.600 | - Yeah, so it's so existential, isn't it?
00:12:54.040 | - I'm gonna have to think about that for a few days.
00:12:57.480 | (Lex laughing)
00:12:58.800 | So yeah, that's a beautiful line.
00:13:02.080 | So the step through to answer your question
00:13:05.760 | about what's it good for,
00:13:07.560 | we go along the following lines.
00:13:11.760 | First of all, you have to define what it means to exist,
00:13:15.740 | which now, as you've rightly pointed out,
00:13:18.040 | you have to define what probabilistic properties
00:13:21.000 | must the states of something possess
00:13:23.440 | so it knows where it finishes.
00:13:26.600 | And then you write that down
00:13:28.200 | in terms of statistical independences,
00:13:30.440 | again, sparsity.
00:13:32.000 | Again, it's not what's connected or what's correlated
00:13:35.700 | or what depends upon what, it's what's not correlated
00:13:39.680 | and what doesn't depend upon something.
00:13:41.960 | Again, it comes down to the deep structures,
00:13:45.680 | not in this instance hierarchical,
00:13:46.920 | but the structures that emerge
00:13:50.040 | from removing connectivity and dependency.
00:13:52.920 | And in this instance, basically being able to identify
00:13:56.720 | the surface of the oil drop from the water
00:14:00.440 | in which it is immersed.
00:14:02.520 | And when you do that, you start to realize,
00:14:05.160 | well, there are actually four kinds of states
00:14:08.760 | in any given universe that contains anything.
00:14:11.720 | The things that are internal to the surface,
00:14:14.880 | the things that are external to the surface
00:14:16.720 | and the surface in and of itself,
00:14:18.600 | which is why I use a metaphor,
00:14:20.080 | a little single-celled organism
00:14:21.480 | that has an interior and exterior
00:14:23.120 | and then the surface of the cell.
00:14:25.560 | And that's mathematically a Markov blanket.
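In the standard formulation (paraphrasing the usual definition, not a quotation from the conversation), the blanket states $b$ render the internal states $\mu$ and the external states $\eta$ conditionally independent:

```latex
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b),
```

so everything the inside "knows" about the outside is carried by the blanket.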
00:14:28.720 | - Just to pause, I'm in awe of this concept
00:14:30.960 | that there's the stuff outside the surface,
00:14:32.600 | stuff inside the surface, and the surface itself,
00:14:34.960 | the Markov blanket.
00:14:36.240 | It's just the most beautiful kind of notion
00:14:39.720 | about trying to explore what it means to exist
00:14:43.040 | mathematically.
00:14:43.920 | I apologize, it's just a beautiful idea.
00:14:46.720 | - But it came out of California, so that's--
00:14:49.080 | - I changed my mind, I take it all back.
00:14:50.880 | (both laughing)
00:14:53.560 | - Anyway, so what you were just talking about,
00:14:55.680 | the surface, about the Markov blanket.
00:14:57.320 | - So the surface, or these blanket states,
00:15:02.320 | are now defined in relation to
00:15:08.240 | these independences, in terms of which states,
00:15:15.720 | internal or blanket or external states,
00:15:18.800 | can influence each other
00:15:21.320 | and which cannot influence each other.
00:15:23.800 | You can now apply standard results
00:15:27.000 | that you would find in non-equilibrium physics
00:15:29.640 | or steady state or thermodynamics or hydrodynamics,
00:15:33.920 | usually out of equilibrium solutions
00:15:37.680 | and apply them to this partition.
00:15:39.240 | And what it looks like is as if all the normal gradient flows
00:15:44.160 | that you would associate with any non-equilibrium system
00:15:48.120 | apply in such a way that two sets of states,
00:15:52.200 | part of the Markov blanket and the internal states,
00:15:55.080 | seem to be hill climbing or doing a gradient descent
00:15:59.760 | on the same quantity.
00:16:01.880 | And that means that you can now describe
00:16:05.400 | the very existence of this oil drop.
00:16:09.160 | You can write down the existence of this oil drop
00:16:12.040 | in terms of flows, dynamics, equations of motion,
00:16:16.640 | where the blanket states or part of them,
00:16:20.200 | we call them active states and the internal states
00:16:24.080 | now seem to be, and must be,
00:16:27.880 | trying to look as if they're minimizing the same function,
00:16:31.520 | which is a log probability of occupying these states.
00:16:35.320 | The interesting thing is,
00:16:36.620 | what would they be called
00:16:40.060 | if you were trying to describe these things?
00:16:41.720 | So what we're talking about are internal states,
00:16:46.120 | external states and blanket states.
00:16:48.080 | Now let's carve the blanket states
00:16:50.120 | into two sensory states and active states.
00:16:53.240 | Operationally, it has to be the case
00:16:55.560 | that in order for this carving up
00:16:57.720 | into different sets of states to exist,
00:17:00.480 | the active states of the Markov blanket
00:17:02.840 | cannot be influenced by the external states.
00:17:05.840 | And we already know that the internal states
00:17:07.600 | can't be influenced by the external states
00:17:09.680 | 'cause the blanket separates them.
00:17:11.760 | So what does that mean?
00:17:12.600 | Well, it means the active states and the internal states
00:17:15.320 | are now jointly not influenced by external states.
00:17:19.440 | They only have autonomous dynamics.
00:17:22.160 | So now you've got a picture of an oil drop
00:17:26.040 | that has autonomy.
00:17:27.880 | It has autonomous states.
00:17:30.080 | It has autonomous states in the sense
00:17:31.460 | that there must be some parts of the surface
00:17:33.600 | of the oil drop that are not influenced
00:17:35.480 | by the external states and all the interior.
00:17:37.920 | And together those two states endow
00:17:40.560 | even a little oil drop with autonomous states
00:17:43.680 | that look as if they are optimizing
00:17:47.400 | their variational free energy or their negative ELBO,
00:17:51.220 | their model evidence.
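The picture of active and internal states descending the same quantity can be caricatured in a few lines. This is my own toy illustration, not Friston's actual equations: a scalar external state eta drives a sensory sample s, while the internal state mu and the action a both do gradient descent on one shared prediction error F = ½(s − mu)², and the external state is influenced only through the action.

```python
# Toy gradient flow on a shared free-energy-like quantity F = 0.5 * (s - mu)**2.
# (An illustrative caricature, not the actual free-energy-principle equations.)
eta = 5.0   # external state (hidden cause in the environment)
mu = 0.0    # internal state (the system's "expectation")
dt = 0.1    # integration step

for _ in range(500):
    s = eta                # sensory state: driven only by the external state
    a = -(s - mu)          # active state: -dF/da, assuming action moves s one-to-one
    mu += dt * (s - mu)    # internal state: -dF/dmu (perception)
    eta += dt * a          # action changes the world; external states never
                           # see mu directly, only its effects via a

# Perception and action have jointly minimized the same quantity.
F = 0.5 * (eta - mu) ** 2
assert F < 1e-3
```

Perception (updating mu toward s) and action (pushing eta toward mu) pull the same error toward zero, which is the sense in which both parts "look as if" they are optimizing one function.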
00:17:53.880 | And that would be an interesting intellectual exercise.
00:17:59.240 | And you could say, you could even go
00:18:01.000 | into the realms of panpsychism
00:18:02.600 | that everything that exists is implicitly
00:18:05.440 | making inferences, or self-evidencing.
00:18:08.120 | Now we make the next move, but what about living things?
00:18:13.040 | I mean, so let me ask you,
00:18:15.200 | what's the difference between an oil drop
00:18:17.600 | and a little tadpole or a little larva or a plankton?
00:18:22.600 | - The picture was just painted of an oil drop.
00:18:25.720 | Just immediately in a matter of minutes
00:18:28.840 | took me into the world of panpsychism
00:18:31.200 | where you just convinced me,
00:18:34.040 | made me feel like an oil drop is a living,
00:18:37.240 | certainly an autonomous system,
00:18:39.400 | but almost a living system.
00:18:40.720 | So it has a capability, sensory capabilities
00:18:43.760 | and acting capabilities and it maintains something.
00:18:46.640 | So what is the difference between that
00:18:49.960 | and something that we traditionally think
00:18:52.360 | of as a living system?
00:18:53.700 | That it could die or it can't, I mean, yeah, mortality.
00:19:00.000 | I'm not exactly sure.
00:19:01.280 | I'm not sure what the right answer there is
00:19:03.440 | because it can move, like movement seems
00:19:06.720 | like an essential element to being able to act
00:19:08.880 | in the environment, but the oil drop is doing that.
00:19:11.840 | So I don't know.
00:19:12.680 | - Is it?
00:19:14.160 | The oil drop will be moved,
00:19:15.760 | but does it in and of itself move autonomously?
00:19:18.680 | - Well, the surface is performing actions
00:19:22.880 | that maintain its structure.
00:19:25.720 | - Yeah, you're being too clever.
00:19:26.840 | I was defining a passive little oil drop
00:19:30.560 | that's sitting there at the bottom,
00:19:33.720 | or the top, of a glass of water.
00:19:35.280 | - Sure, I guess.
00:19:36.600 | - What I'm trying to say is you're absolutely right.
00:19:38.240 | You've nailed it.
00:19:40.400 | It's movement.
00:19:41.880 | So where does that movement come from?
00:19:43.160 | If it comes from the inside,
00:19:45.400 | then you've got, I think, something that's living.
00:19:48.000 | - What do you mean from the inside?
00:19:50.640 | - What I mean is that the internal states
00:19:54.840 | that can influence the active states,
00:19:57.080 | where the active states can influence,
00:19:58.640 | but they're not influenced by the external states,
00:20:01.200 | can cause movement.
00:20:03.200 | So there are two types of oil drops, if you like.
00:20:06.480 | There are oil drops where the internal states
00:20:08.840 | are so random that they average themselves away.
00:20:13.840 | And the thing cannot, on average, move
00:20:19.840 | when you do the averaging.
00:20:22.000 | So a nice example of that would be the sun.
00:20:24.200 | The sun certainly has internal states.
00:20:27.200 | There's lots of intrinsic autonomous activity going on,
00:20:30.400 | but because it's not coordinated,
00:20:31.880 | because it doesn't have the deep in the millennial sense,
00:20:34.120 | the hierarchical structure that the brain does,
00:20:37.040 | there is no overall mode or pattern or organization
00:20:41.880 | that expresses itself on the surface
00:20:44.240 | that allows it to actually swim.
00:20:46.160 | It can certainly have a very active surface,
00:20:50.160 | but en masse, at the scale of the actual surface of the sun,
00:20:54.320 | the average position of that surface cannot in itself move
00:20:59.000 | because the internal dynamics are more like a hot gas.
00:21:02.760 | They are literally like a hot gas.
00:21:04.520 | Whereas your internal dynamics are much more structured
00:21:07.480 | and deeply structured.
00:21:08.960 | And now you can express, on your Markov blanket,
00:21:11.520 | on your active states, with your muscles
00:21:13.440 | and your secretory organs,
00:21:15.800 | your autonomic nervous system and its effectors,
00:21:19.000 | you can actually move.
00:21:20.560 | And that's all you can do.
00:21:22.840 | And that's something which,
00:21:24.320 | if you haven't thought of it like this before,
00:21:26.480 | I think it's nice to just realize there is no other way
00:21:31.000 | that you can change the universe other than simply moving.
00:21:35.320 | Whether that movement is articulating with my voice box
00:21:39.920 | or walking around or squeezing juices
00:21:42.680 | out of my secretory organs,
00:21:44.800 | there's only one way you can change the universe,
00:21:48.040 | it's moving.
00:21:48.920 | - And the fact that you do so non-randomly makes you alive.
00:21:53.920 | - Yeah.
00:21:55.560 | So it's that non-randomness.
00:21:59.040 | And that would be manifest,
00:22:00.840 | it would be realized in terms of essentially swimming,
00:22:03.880 | essentially moving, changing one shape,
00:22:06.560 | a morphogenesis that is dynamic and possibly adaptive.
00:22:10.680 | So that's what I was trying to get at
00:22:14.000 | between the difference from the oil drop
00:22:15.280 | and the little tadpole.
00:22:16.640 | The tadpole is moving around,
00:22:19.080 | its active states are actually changing the external states.
00:22:23.040 | And there's now a cycle,
00:22:24.560 | an action perception cycle, if you like,
00:22:26.480 | a recurrent dynamic that's going on
00:22:30.080 | that depends upon this deeply structured autonomous behavior
00:22:35.080 | that rests upon internal dynamics
00:22:40.440 | that are not only modeling the data
00:22:45.680 | impressed upon their surface or the blanket states,
00:22:49.880 | but they are actively resampling those data by moving.
00:22:54.880 | They're moving towards, say, chemical gradients in chemotaxis.
00:22:58.600 | So they've gone beyond just being good little models
00:23:04.360 | of the kind of world they live in.
00:23:07.160 | For example, an oil droplet could in a panpsychic sense
00:23:12.040 | be construed as a little being
00:23:14.520 | that has now perfectly inferred
00:23:16.800 | it's a passive non-living oil drop
00:23:19.880 | living in a bowl of water.
00:23:21.720 | No problem.
00:23:24.080 | But to now equip that oil drop
00:23:25.960 | with the ability to go out and test that hypothesis
00:23:28.520 | about different states of being,
00:23:30.120 | so it can actually push its surface over there or over there,
00:23:33.000 | and test for chemical gradients,
00:23:34.760 | then you start to move to a much more lifelike form.
00:23:38.760 | Now this is all fun, theoretically interesting,
00:23:41.040 | but it actually is quite important
00:23:43.080 | in terms of reflecting what I have seen
00:23:45.920 | since the turn of the millennium,
00:23:49.240 | which is this move towards an enactive
00:23:52.720 | and embodied understanding of intelligence.
00:23:55.840 | And you say you're from machine learning.
00:23:58.840 | So what that means,
00:24:01.840 | this sort of the central importance of movement,
00:24:05.600 | I think has yet to really hit machine learning.
00:24:10.040 | It certainly has now diffused itself throughout robotics.
00:24:15.040 | And perhaps you could say certain problems in active vision
00:24:19.260 | where you actually have to move the camera
00:24:21.440 | to sample this and that.
00:24:23.320 | But machine learning of the data mining,
00:24:26.280 | deep learning sort,
00:24:27.680 | simply hasn't contended with this issue.
00:24:30.080 | What it's done, instead of dealing
00:24:32.000 | with the movement problem and the active sampling of data,
00:24:35.240 | it's just said, we don't need to worry about it,
00:24:36.760 | we can see all the data 'cause we've got big data.
00:24:39.200 | So we can ignore movement.
00:24:41.200 | So that for me is an important omission
00:24:46.200 | in current machine learning.
00:24:48.280 | - So current machine learning
00:24:49.200 | is much more like the oil drop.
00:24:50.880 | - Yes.
00:24:52.040 | But an oil drop that enjoys exposure
00:24:55.560 | to nearly all the data that we need to be exposed to,
00:24:59.680 | as opposed to the tadpoles swimming out
00:25:01.840 | to find the right data.
00:25:03.440 | For example, it likes food.
00:25:06.360 | That's a good hypothesis.
00:25:07.320 | Let's test it out.
00:25:08.160 | Let's go and move and ingest food, for example,
00:25:11.680 | and see what that, you know,
00:25:12.840 | is that evidence that I'm the kind of thing
00:25:14.640 | that likes this kind of food.
00:25:16.360 | - So the next natural question,
00:25:18.680 | and forgive this question,
00:25:19.960 | but if we think of sort of even
00:25:22.160 | artificial intelligence systems,
00:25:23.600 | for which you've just painted a beautiful picture
00:25:25.400 | of existence and life.
00:25:28.760 | So do you ascribe,
00:25:33.760 | do you find within this framework a possibility
00:25:37.040 | of defining consciousness
00:25:41.200 | or exploring the idea of consciousness?
00:25:43.320 | Like what, you know, self-awareness
00:25:48.720 | and expanded to consciousness?
00:25:51.080 | Yeah.
00:25:52.160 | How can we start to think about consciousness
00:25:54.840 | within this framework?
00:25:55.720 | Is it possible?
00:25:56.840 | - Well, yeah, I think it's possible to think about it,
00:25:59.160 | whether you'll get it.
00:26:00.000 | - Whether you'll get it, yeah, that's another question.
00:26:02.360 | - And again, I'm not sure that I'm licensed
00:26:06.320 | to answer that question.
00:26:08.720 | I think you'd have to speak to a qualified philosopher
00:26:11.120 | to get a definitive answer there.
00:26:13.320 | But certainly there's a lot of interest
00:26:15.640 | in using not just these ideas,
00:26:17.720 | but related ideas from information theory
00:26:21.840 | to try and tie down the maths and the calculus
00:26:26.480 | and the geometry of consciousness,
00:26:30.040 | either in terms of sort of a minimal consciousness,
00:26:35.040 | even less than a minimal selfhood.
00:26:38.360 | And what I'm talking about is the ability effectively
00:26:44.280 | to plan, to have agency.
00:26:47.320 | So you could argue that a virus does have a form of agency
00:26:53.360 | in virtue of the way that it selectively finds hosts
00:26:58.160 | and cells to live in and moves around.
00:27:00.920 | But you wouldn't endow it with the capacity
00:27:05.240 | to think about planning and moving in a purposeful way
00:27:10.240 | where it countenances the future.
00:27:13.240 | Whereas you might think an ant's not quite
00:27:16.680 | as unconscious as a virus.
00:27:20.240 | It certainly seems to have a purpose.
00:27:22.160 | It talks to its friends en route during its foraging.
00:27:25.640 | It has a different kind of autonomy,
00:27:30.640 | which is biotic, but beyond a virus.
00:27:34.720 | - So there's something about,
00:27:36.480 | so there's some line that has to do
00:27:39.160 | with the complexity of planning
00:27:41.640 | that may contain an answer.
00:27:43.960 | I mean, it would be beautiful if we can find a line
00:27:47.520 | beyond which we can say a being is conscious.
00:27:51.480 | - Yes, it would be.
00:27:52.320 | - These are wonderful lines that we've drawn
00:27:55.160 | with existence, life, and consciousness.
00:27:58.640 | - Yes, it would be very nice.
00:28:01.280 | One little wrinkle there,
00:28:03.160 | and this is something I've only learned
00:28:04.480 | in the past few months,
00:28:05.360 | is the philosophical notion of vagueness.
00:28:08.360 | So you're saying it would be wonderful to draw a line.
00:28:10.800 | I had always assumed that that line
00:28:13.520 | at some point would be drawn,
00:28:15.000 | until about four months ago,
00:28:18.680 | and a philosopher taught me about vagueness.
00:28:20.840 | So I don't know if you've come across this,
00:28:22.200 | but it's a technical concept,
00:28:24.400 | and I think most revealingly illustrated
00:28:29.080 | with the question: at what point do grains of sand become a pile?
00:28:33.040 | Is it one grain, two grains, three grains, or four grains?
00:28:37.560 | So at what point would you draw the line
00:28:40.200 | between being a pile of sand
00:28:42.240 | and a collection of grains of sand?
00:28:47.240 | In the same way, is it right to ask,
00:28:49.520 | where would I draw the line
00:28:50.760 | between conscious and unconscious?
00:28:52.680 | And it might be a vague concept.
00:28:55.640 | Having said that, I agree with you entirely.
00:28:57.440 | I think it's systems that have the ability to plan.
00:29:02.440 | So just technically what that means
00:29:04.400 | is your inferential self-evidencing
00:29:09.880 | by which I simply mean the dynamics,
00:29:13.240 | literally the thermodynamics and gradient flows
00:29:16.320 | that underwrite the preservation
00:29:18.280 | of your oil droplet-like form
00:29:24.600 | can be described as an optimization
00:29:26.320 | of log Bayesian model evidence, the ELBO.
00:29:30.360 | That self-evidencing must be evidence for a model
00:29:37.400 | of what's causing the sensory impressions
00:29:40.000 | on the sensory part of your surface
00:29:42.360 | or your Markov blanket.
00:29:44.440 | If that model is capable of planning,
00:29:47.120 | it must include a model of the future consequences
00:29:49.800 | of your active states or your action, just planning.
00:29:52.760 | So we're now in the game of planning as inference.
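The move to "planning as inference" can be sketched with a toy example. The sketch below is purely illustrative, not Friston's formalism: the action names, outcome probabilities, and preferences are all invented, and the score is a crude stand-in for expected free energy, here just the expected surprise of predicted outcomes relative to preferred ones.

```python
import math

# Toy "planning as inference" (illustrative assumptions throughout):
# the agent's generative model predicts outcome probabilities for each
# candidate action, and it holds prior preferences over outcomes.
# It selects the action whose predicted outcomes are least surprising
# under those preferences, a minimal stand-in for minimizing
# expected free energy.

preferences = {"warm": 0.9, "cold": 0.1}             # preferred P(outcome)

predicted = {                                        # P(outcome | action)
    "stay":       {"warm": 0.5, "cold": 0.5},
    "move_sun":   {"warm": 0.8, "cold": 0.2},
    "move_shade": {"warm": 0.2, "cold": 0.8},
}

def expected_surprise(p_outcomes, prefs):
    """Expected -log preference of the outcomes an action predicts."""
    return sum(p * -math.log(prefs[o]) for o, p in p_outcomes.items())

scores = {a: expected_surprise(p, preferences) for a, p in predicted.items()}
best = min(scores, key=scores.get)
print(best)  # -> move_sun: the action best expected to realize preferences
```

A fuller treatment would add an information-gain term, so the agent also acts to resolve uncertainty about its world rather than only to realize its preferences.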
00:29:55.360 | Now notice what we've made though.
00:29:56.600 | We've made quite a big move away
00:29:58.640 | from big data and machine learning,
00:30:01.200 | because again, it's the consequences of moving.
00:30:04.440 | It's the consequences of selecting those data
00:30:07.080 | or those data or looking over there.
00:30:10.200 | And that tells you immediately that even to be a contender
00:30:14.320 | for a conscious artifact or a,
00:30:16.080 | is it strong AI or generalized?
00:30:20.120 | - Generalized, yeah.
00:30:22.720 | - As it's called now.
00:30:22.720 | Then you've got to have movement in the game.
00:30:25.240 | And furthermore, you've got to have a generative model
00:30:28.520 | of the sort you might find in say a variational autoencoder
00:30:31.880 | that is thinking about the future conditioned
00:30:35.840 | upon different courses of action.
00:30:37.880 | Now that brings a number of things to the table,
00:30:39.800 | which now you start to think,
00:30:41.320 | well, those are all the right ingredients
00:30:43.400 | to talk about consciousness.
00:30:44.480 | I've now got to select among a number
00:30:46.680 | of different courses of action into the future
00:30:49.280 | as part of planning.
00:30:50.880 | I've now got free will.
00:30:52.400 | The act of selecting this course of action
00:30:54.640 | or that policy or that action
00:30:57.280 | suddenly makes me into an inference machine,
00:31:00.680 | a self-evidencing artifact that now looks
00:31:05.800 | as if it's selecting amongst different alternative ways
00:31:08.800 | forward as I actively swim here or swim there
00:31:11.480 | or look over here, look over there.
00:31:13.920 | So I think you've now got to a situation
00:31:15.920 | if there is planning in the mix,
00:31:18.200 | you're now getting much closer to that line,
00:31:21.200 | if that line were ever to exist.
00:31:23.320 | I don't think it gets you quite as far as self-aware though.
00:31:26.600 | I think, and then you have to, I think,
00:31:32.400 | grapple with the question,
00:31:35.640 | how would you formally write down a calculus
00:31:38.560 | or a maths of self-awareness?
00:31:40.560 | I don't think it's impossible to do,
00:31:42.400 | but I think there'll be pressure on you
00:31:47.560 | to actually commit to a formal definition
00:31:49.200 | of what you mean by self-awareness.
00:31:51.840 | I think most people that I know would probably say
00:31:56.680 | that a goldfish, a pet fish was not self-aware.
00:32:02.960 | They would probably argue about their favorite cat,
00:32:05.560 | but would be quite happy to say
00:32:08.200 | that their mom was self-aware.
00:32:10.000 | So- - I mean, but that might
00:32:12.320 | very well connect to some level of complexity with planning.
00:32:16.880 | It seems like self-awareness is essential
00:32:19.600 | for complex planning. - Yeah.
00:32:23.240 | Do you want to take that further?
00:32:24.080 | 'Cause I think you're absolutely right.
00:32:25.320 | - Again, the line is unclear,
00:32:27.120 | but it seems like integrating yourself
00:32:30.680 | into the world, into your planning
00:32:35.160 | is essential for constructing complex plans.
00:32:38.280 | - Yes, yeah.
00:32:39.720 | - So mathematically describing that
00:32:41.920 | in the same elegant way as you have
00:32:44.640 | with the free energy principle might be difficult.
00:32:47.120 | - Well, yes and no.
00:32:49.280 | I don't think that, well, perhaps we should just,
00:32:51.320 | can we just go back?
00:32:53.000 | That's a very important answer you gave.
00:32:54.680 | And I think if I just unpacked it,
00:32:57.840 | you'd see the truisms that you've just exposed for us.
00:33:01.680 | But let me, sorry, I'm mindful
00:33:05.720 | that I didn't answer your question before.
00:33:07.360 | Well, what's the free energy principle good for?
00:33:09.840 | Is it just a pretty theoretical exercise
00:33:11.640 | to explain non-equilibrium steady states?
00:33:13.880 | Yes, it is.
00:33:15.320 | It does nothing more for you than that.
00:33:17.320 | It can be regarded, it's gonna sound very arrogant,
00:33:20.040 | but it is of the same sort as the theory of natural selection
00:33:23.840 | or the hypothesis of natural selection.
00:33:28.720 | Beautiful, undeniably true,
00:33:32.480 | but tells you absolutely nothing about
00:33:35.560 | why you have legs and eyes.
00:33:38.040 | It tells you nothing about the actual phenotype
00:33:40.720 | and it wouldn't allow you to build something.
00:33:44.080 | So the free energy principle by itself
00:33:47.080 | is as vacuous as most tautological theories.
00:33:50.880 | And by tautological, of course,
00:33:52.200 | I'm talking about the theory of natural selection,
00:33:54.800 | the survival of the fittest.
00:33:56.080 | Who are the fittest? Those that survive.
00:33:57.720 | Why do they survive? Because they're the fitter.
00:33:59.040 | It just goes round in circles.
00:34:00.720 | In a sense, the free energy principle
00:34:03.000 | has that same deflationary tautology under the hood.
00:34:07.480 | It's a characteristic of things that exist.
00:34:13.680 | Why do they exist?
00:34:14.520 | Because they minimize their free energy.
00:34:15.720 | Why do they minimize their free energy?
00:34:17.400 | Because they exist.
00:34:18.440 | And you just keep on going round and round and round.
00:34:20.720 | But the practical thing,
00:34:24.080 | which you don't get from natural selection,
00:34:28.680 | but you could say has now manifest in things
00:34:31.680 | like differential evolution or genetic algorithms
00:34:34.160 | or MCMC, for example, in machine learning.
00:34:37.320 | The practical thing you can get is,
00:34:39.200 | if it looks as if things that exist
00:34:41.440 | are trying to have density dynamics
00:34:45.400 | that look as though they're optimizing
00:34:47.560 | a variational free energy,
00:34:49.320 | and a variational free energy has to be
00:34:51.200 | a functional of a generative model,
00:34:53.280 | a probabilistic description of causes and consequences,
00:34:57.720 | causes out there, consequences in the sensorium,
00:35:00.560 | on the sensory parts of the Markov blanket,
00:35:03.040 | then it should, in theory, be possible
00:35:04.680 | to write down the generative model,
00:35:06.400 | work out the gradients,
00:35:07.800 | and then cause it to autonomously self-evidence.
00:35:11.920 | So you should be able to write down oil droplets.
00:35:14.160 | You should be able to create artifacts
00:35:16.080 | where you have supplied the objective function
00:35:20.240 | that supplies the gradients,
00:35:21.600 | that supplies the self-organizing dynamics
00:35:24.360 | to non-equilibrium steady state.
00:35:26.280 | So there is actually a practical application
00:35:28.680 | of the free energy principle
00:35:30.120 | when you can write down your required evidence
00:35:33.800 | in terms of, well, when you can write down
00:35:36.400 | the generative model,
00:35:37.640 | that is the thing that has the evidence,
00:35:40.800 | the probability of these sensory data or this data,
00:35:44.520 | given that model is effectively the thing
00:35:48.960 | that the ELBO, or the variational free energy,
00:35:51.640 | bounds or approximates.
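The bounding relationship stated here can be checked numerically on the smallest possible model. A minimal sketch with invented numbers: one binary hidden cause and one observed datum; the ELBO under any approximate posterior never exceeds the exact log evidence, and matches it when the approximate posterior is exact.

```python
import math

# One binary hidden cause z, one fixed observation o = 1.
# All probabilities are invented for illustration.
prior = {0: 0.5, 1: 0.5}          # P(z)
lik   = {0: 0.2, 1: 0.9}          # P(o = 1 | z)

# Exact log evidence: log sum_z P(o|z) P(z)
evidence = sum(lik[z] * prior[z] for z in prior)
log_evidence = math.log(evidence)

def elbo(q1):
    """ELBO = E_q[log P(o, z)] - E_q[log q(z)], with q(z=1) = q1."""
    q = {0: 1.0 - q1, 1: q1}
    return sum(q[z] * (math.log(lik[z] * prior[z]) - math.log(q[z]))
               for z in (0, 1) if q[z] > 0)

posterior_z1 = lik[1] * prior[1] / evidence    # exact P(z = 1 | o)

for q1 in (0.1, 0.5, posterior_z1, 0.99):
    assert elbo(q1) <= log_evidence + 1e-12    # the bound holds everywhere

# The bound is tight exactly at the true posterior:
assert abs(elbo(posterior_z1) - log_evidence) < 1e-9
```

Because the gap between log evidence and ELBO is the KL divergence from the approximate to the true posterior, raising the ELBO is a proxy for raising model evidence, which is the sense in which self-evidencing can be cast as optimization.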
00:35:54.240 | That means that you can actually write down the model,
00:35:57.720 | and the kind of thing that you want to engineer,
00:36:00.640 | the kind of AGI, or Artificial General Intelligence,
00:36:03.920 | that you want to manifest probabilistically,
00:36:10.480 | and then you engineer, a lot of hard work,
00:36:12.720 | but you would engineer a robot and a computer
00:36:15.760 | to perform a gradient descent on that objective function.
00:36:19.400 | So it does have a practical implication.
00:36:22.160 | Now, why am I wittering on about that?
00:36:23.440 | It did seem relevant to, yes.
00:36:24.880 | So, what kinds of, so the answer to,
00:36:28.920 | would it be easy or would it be hard?
00:36:30.240 | Well, mathematically, it's easy.
00:36:32.160 | I've just told you, all you need to do is write down
00:36:34.880 | your perfect artifact probabilistically
00:36:39.880 | in the form of a probabilistic generative model,
00:36:42.480 | probability distribution over the causes and consequences
00:36:45.840 | of the world in which this thing is immersed.
00:36:50.680 | And then you just engineer a computer and a robot
00:36:54.040 | to perform a gradient descent on that objective function.
00:36:56.800 | No problem.
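The recipe just described, write down the generative model and perform a gradient descent on the resulting objective, can be sketched in one dimension. This is a hypothetical toy with invented numbers, nothing like a full agent: a Gaussian generative model of a single sensory sample, where descending the free-energy-style objective (here just the negative log joint, up to constants) drives an internal state to the most plausible hidden cause.

```python
# Generative model (illustrative): hidden cause mu ~ N(prior_mean, s_prior),
# observation o ~ N(mu, s_obs). The objective is the negative log joint,
# F(mu) = (mu - prior_mean)^2 / (2 s_prior) + (o - mu)^2 / (2 s_obs),
# and gradient descent on F is a minimal form of self-evidencing.

prior_mean, s_prior = 0.0, 1.0     # prior belief about the hidden cause
s_obs = 0.5                        # sensory noise variance
o = 2.0                            # the sensory sample to be explained

def dF_dmu(mu):
    return (mu - prior_mean) / s_prior - (o - mu) / s_obs

mu, lr = 0.0, 0.1
for _ in range(2000):
    mu -= lr * dF_dmu(mu)          # descend the free-energy gradient

# The fixed point is the precision-weighted average of prior and datum.
expected = (prior_mean / s_prior + o / s_obs) / (1 / s_prior + 1 / s_obs)
print(round(mu, 4), round(expected, 4))   # -> 1.3333 1.3333
```

The descent itself is mechanical; as the discussion goes on to stress, all the hard work is in the form and structure of the generative model.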
00:36:57.640 | But of course, the big problem
00:37:00.160 | is writing down the generative model.
00:37:01.960 | So that's where the heavy lifting comes in.
00:37:04.040 | So it's the form and the structure of that generative model,
00:37:08.160 | which basically defines the artifact that you will create,
00:37:11.640 | or indeed, the kind of artifact that has self-awareness.
00:37:15.600 | So that's where all the hard work comes,
00:37:17.960 | very much like natural selection doesn't tell you
00:37:20.640 | in the slightest why you have eyes.
00:37:23.000 | So you have to drill down on the actual phenotype,
00:37:25.480 | the actual generative model.
00:37:27.480 | So with that in mind, what did you tell me
00:37:32.360 | that tells me immediately the kinds of generative models
00:37:36.720 | I would have to write down in order to have self-awareness?
00:37:39.480 | - What you said to me was, I have to have a model
00:37:44.200 | that is effectively fit for purpose for this kind of world
00:37:47.840 | in which I operate.
00:37:49.680 | And if I now make the observation that this kind of world
00:37:53.160 | is effectively largely populated by other things like me,
00:37:56.560 | i.e. you, then it makes enormous sense
00:38:00.200 | that if I can develop a hypothesis
00:38:03.280 | that we are similar kinds of creatures,
00:38:07.520 | in fact, the same kind of creature,
00:38:09.640 | but I am me and you are you,
00:38:12.320 | then it becomes, again, mandated to have a sense of self.
00:38:17.320 | So if I live in a world that is constituted
00:38:21.280 | by things like me, basically a social world, a community,
00:38:25.520 | then it becomes necessary now for me to infer
00:38:28.320 | that it's me talking and not you talking.
00:38:30.400 | I wouldn't need that if I was on Mars by myself,
00:38:33.280 | or if I was in the jungle as a feral child.
00:38:36.040 | If there was nothing like me around,
00:38:39.120 | there would be no need to have an inference,
00:38:42.480 | a hypothesis, ah, yes, it is me that is experiencing
00:38:46.120 | or causing these sounds, and it is not you.
00:38:48.360 | It's only when there's ambiguity in play
00:38:50.680 | induced by the fact that there are others in that world.
00:38:54.240 | So I think that the special thing about self-aware artifacts
00:38:59.240 | is that they have learned to, or they have acquired,
00:39:04.280 | or at least are equipped with, possibly by evolution,
00:39:07.680 | generative models that allow for the fact
00:39:10.600 | there are lots of copies of things like them around,
00:39:13.360 | and therefore they have to work out it's you and not me.
00:39:16.160 | - That's brilliant.
00:39:19.280 | I've never thought of that.
00:39:20.560 | I never thought of that, that the purpose of,
00:39:24.440 | the really usefulness of consciousness or self-awareness
00:39:29.000 | in the context of planning existing in the world
00:39:31.920 | is so you can operate with other things like you.
00:39:34.360 | And like you could, it doesn't have to necessarily be human.
00:39:36.800 | It could be other kind of similar creatures.
00:39:39.480 | - Absolutely, well, we imbue a lot of our attributes
00:39:42.080 | into our pets, don't we?
00:39:43.840 | Or we try to make our robots humanoid.
00:39:45.880 | And I think there's a deep reason for that,
00:39:47.840 | that it's just much easier to read the world
00:39:50.680 | if you can make the simplifying assumption
00:39:52.220 | that basically you're me, and it's just your turn to talk.
00:39:55.280 | I mean, when we talk about planning,
00:39:57.520 | when you talk specifically about planning,
00:40:00.200 | the highest, if you like, manifestation or realization
00:40:03.800 | of that planning is what we're doing now.
00:40:05.600 | I mean, the human condition doesn't get any higher
00:40:08.560 | than this talking about the philosophy of existence
00:40:12.800 | and the conversation.
00:40:13.900 | But in that conversation, there is a beautiful art
00:40:17.740 | of turn-taking and mutual inference, theory of mind.
00:40:24.080 | I have to know when you want to listen.
00:40:25.640 | I have to know when you want to interrupt.
00:40:27.080 | I have to make sure that you're online.
00:40:28.480 | I have to have a model in my head
00:40:30.360 | of your model in your head.
00:40:31.800 | That's the highest, the most sophisticated form
00:40:34.320 | of generative model, where the generative model
00:40:36.160 | actually has a generative model
00:40:37.360 | of somebody else's generative model.
00:40:38.800 | And I think that, and what we are doing now,
00:40:41.800 | evinces the kinds of generative models
00:40:45.160 | that would support self-awareness.
00:40:47.260 | 'Cause without that, we'd both be talking over each other,
00:40:50.640 | or we'd be singing together in a choir, you know?
00:40:54.320 | That's probably not a brilliant analogy
00:40:56.560 | for what I'm trying to say, but yeah,
00:40:58.720 | we wouldn't have this discourse.
00:41:01.160 | - Yeah, the dance of it, yeah, that's right.
00:41:03.680 | As I interrupt. (laughs)
00:41:06.500 | I mean, that's beautifully put.
00:41:09.240 | I'll re-listen to this conversation many times.
00:41:11.800 | There's so much poetry in this, and mathematics.
00:41:17.640 | Let me ask the silliest, or perhaps the biggest question
00:41:22.280 | as a last kind of question.
00:41:25.880 | We've talked about living in existence
00:41:29.360 | and the objective function under which
00:41:31.160 | these objects would operate.
00:41:33.520 | What do you think is the objective function
00:41:35.760 | of our existence?
00:41:37.560 | What's the meaning of life?
00:41:39.360 | What do you think is the, for you perhaps,
00:41:43.180 | the purpose, the source of fulfillment,
00:41:46.240 | the source of meaning for your existence?
00:41:49.120 | As one blob. - As one blob.
00:41:52.360 | - In this soup.
00:41:53.680 | - I'm tempted to answer that again as a physicist.
00:41:56.520 | (laughs)
00:41:57.720 | The free energy expected consequent upon my behavior.
00:42:01.480 | So technically that, and we could get
00:42:03.160 | a really interesting conversation about
00:42:05.680 | what that comprises in terms of searching for information,
00:42:09.160 | resolving uncertainty about the kind of thing that I am.
00:42:12.560 | But I suspect that you want a slightly more personal
00:42:16.240 | and fun answer.
00:42:17.220 | But which can be consistent with that.
00:42:21.120 | And I think it's reassuringly simple
00:42:26.120 | and harps back to what you were taught as a child,
00:42:31.440 | that you have certain beliefs about the kind of creature
00:42:35.520 | and the kind of person you are.
00:42:37.840 | And all that self-evidencing means,
00:42:40.840 | all that minimizing variational free energy
00:42:46.020 | in an enactive and embodied way,
00:42:46.020 | means is fulfilling the beliefs
00:42:48.720 | about what kind of thing you are.
00:42:51.720 | And of course we're all given those scripts,
00:42:54.040 | those narratives at a very early age,
00:42:57.240 | usually in the form of bedtime stories or fairy stories.
00:43:00.420 | I'm a princess and I'm gonna meet a beast
00:43:03.160 | who's gonna transform and it's gonna be a prince.
00:43:05.800 | - So the narratives are all around you,
00:43:07.860 | from your parents to the friends,
00:43:10.540 | to the society feeds these stories.
00:43:13.680 | And then your objective function is to fulfill--
00:43:17.000 | - Exactly, that narrative that has been encultured
00:43:20.240 | by your immediate family,
00:43:22.240 | but as you say also the sort of the culture
00:43:24.440 | in which you grew up and you create for yourself.
00:43:26.880 | I mean, again, because of this active inference,
00:43:29.440 | this enactive aspect of self-evidencing,
00:43:32.040 | not only am I modeling my environment,
00:43:36.940 | my eco-niche, my external states out there,
00:43:40.040 | but I'm actively changing them all the time.
00:43:42.520 | And external states are doing the same back,
00:43:44.960 | we're doing it together.
00:43:45.800 | So there's a synchrony that means
00:43:48.800 | that I'm creating my own culture
00:43:50.800 | over different timescales.
00:43:52.800 | So the question now is for me being very selfish,
00:43:56.780 | what scripts were I given?
00:43:58.240 | It basically was a mixture between Einstein and Sherlock Holmes.
00:44:02.120 | So I smoke as heavily as possible,
00:44:04.800 | try to avoid too much interpersonal contact,
00:44:11.280 | enjoy the fantasy that you're a popular scientist
00:44:16.280 | who's gonna make a difference in a slightly quirky way.
00:44:19.360 | So that's where I grew up.
00:44:21.160 | My father was an engineer and loved science
00:44:24.280 | and he loved sort of things like Sir Arthur Eddington's
00:44:29.280 | Space, Time and Gravitation,
00:44:30.480 | which was the first understandable version
00:44:34.800 | of general relativity.
00:44:36.760 | So all the fairy stories I was told
00:44:40.840 | as I was growing up were all about these characters.
00:44:43.680 | I'm keeping the Hobbit out of this
00:44:46.600 | because that doesn't quite fit my narrative.
00:44:49.160 | But it's a journey of exploration, I suppose, of sorts.
00:44:52.240 | So yeah, I've just grown up to be
00:44:54.200 | what I imagine a mild-mannered Sherlock Holmes/Albert Einstein
00:45:00.560 | would do in my shoes.
00:45:04.000 | - And you did it elegantly and beautifully, Carl.
00:45:06.520 | It was a huge honor talking to you today.
00:45:07.920 | It was fun.
00:45:08.760 | Thank you so much for your time.
00:45:09.600 | - Oh, thank you. - Appreciate it.
00:45:11.640 | (upbeat music)