
You Are Your Own Existence Proof (Karl Friston) | AI Podcast Clips with Lex Fridman


Chapters

0:10 What Is the Free Energy Principle
0:51 The Free Energy Principle
2:29 The Existential Imperative
4:30 The Free-Energy Principle
11:42 The Difference between Living and Nonliving Things
28:05 Philosophical Notion of Vagueness

Transcript

- Let's start with the basics. So you've also formulated a fascinating principle, the free energy principle. Can we maybe start at the basics and what is the free energy principle? - Well, in fact, the free energy principle inherits a lot from the building of these data analytic approaches to these very high dimensional time series you get from the brain.

So I think it's interesting to acknowledge that. And in particular, the analysis tools that try to address the other side, which is functional integration, so the connectivity analysis, on the one hand. But I should also acknowledge it inherits an awful lot from machine learning as well. So the free energy principle is just a formal statement that the existential imperatives for any system that manages to survive in a changing world can be cast as an inference problem, in the sense that you could interpret the probability of existing as the evidence that you exist.

And if you can write down that problem of existence as a statistical problem, then you can use all the maths that has been developed for inference to understand and characterize the ensemble dynamics that must be in play in the service of that inference. So technically what that means is you can always interpret anything that exists in virtue of being separate from the environment in which it exists as trying to minimize variational free energy.

And if you're from the machine learning community, you will know that as a negative evidence lower bound or a negative ELBO, which is the same as saying you're trying to maximize, or it will look as if all your dynamics are trying to maximize, the complement of that, which is the marginal likelihood or the evidence for your own existence.
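As an aside for readers from machine learning, the relationship just described, that the variational free energy is the negative ELBO and bounds the negative log evidence, can be checked numerically. The little conjugate-Gaussian model below is purely an illustrative assumption, chosen because its exact log evidence is available in closed form:

```python
import math

# Illustrative toy model (an assumption, not from the conversation):
# prior z ~ N(0, 1), likelihood x|z ~ N(z, 1), so the exact marginal
# likelihood is p(x) = N(x; 0, 2).

def log_evidence(x):
    return -0.5 * math.log(2 * math.pi * 2.0) - x * x / 4.0

def elbo(x, mu, s2):
    """Evidence lower bound for a Gaussian q(z) = N(mu, s2)."""
    exp_log_lik   = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - mu) ** 2 + s2)
    exp_log_prior = -0.5 * math.log(2 * math.pi) - 0.5 * (mu ** 2 + s2)
    entropy_q     =  0.5 * math.log(2 * math.pi * math.e * s2)
    return exp_log_lik + exp_log_prior + entropy_q

x = 1.3
# Variational free energy F = -ELBO; minimising F tightens the bound,
# and at the exact posterior q = N(x/2, 1/2) the bound becomes exact.
for mu, s2 in [(0.0, 1.0), (0.5, 0.7), (x / 2, 0.5)]:
    print(f"q=N({mu:.2f},{s2:.2f})  F={-elbo(x, mu, s2):.4f}  "
          f"-log p(x)={-log_evidence(x):.4f}")
```

For every choice of q, F is at least -log p(x); dynamics that descend F thereby look as if they are maximizing a bound on the evidence for the system's own existence.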

So that's basically the free energy principle. - But to even take a sort of a small step backwards, you said the existential imperative. There's a lot of beautiful poetic words here, but to put it crudely, it's a fascinating idea of basically of trying to describe if you're looking at a blob, how do you know this thing is alive?

What does it mean to be alive? What does it mean to exist? And so you can look at the brain, you can look at parts of the brain, or this is just a general principle that applies to almost any system. That's just a fascinating sort of philosophically at every level question and a methodology to try to answer that question.

What does it mean to be alive? - Yes. - So that's a huge endeavor, and it's nice that there's at least some, from some perspective, a clean answer. So maybe can you talk about that optimization view of it? So what's trying to be minimized and maximized? A system that's alive, what is it trying to minimize?

- Right, you've made a big move there. - Apologize. - No, no, it's good to make big moves. But you've assumed that the thing exists in a state that could be living or non-living. So I may ask you, well, what licenses you to say that something exists? That's why I use the word existential.

It's beyond living; it's just existence. So if you drill down on the definition of things that exist, then they have certain properties, if you borrow the maths from non-equilibrium steady-state physics, that enable you to interpret their existence in terms of this optimization procedure. So it's good you introduced the word optimization.

So what the free energy principle, in its sort of most ambitious but also most deflationary and simplest form, says is that if something exists, then it must, by the mathematics of non-equilibrium steady state, exhibit properties that make it look as if it is optimizing a particular quantity. And it turns out that particular quantity happens to be exactly the same as the evidence lower bound in machine learning, or Bayesian model evidence in Bayesian statistics, and I could list a whole number of other ways of understanding this key quantity, which is a bound on surprisal, or self-information if you're in information theory.

There are a number of different perspectives on this quantity. It's just basically the log probability of being in a particular state. I'm telling this story as an honest attempt to answer your question. And I'm answering it as if I were pretending to be a physicist trying to understand the fundaments of non-equilibrium steady state.

And I shouldn't really be doing that because the last time I was taught physics, I was in my 20s. - What kind of systems, when you think about the free energy principle, what kind of systems are you imagining? As a sort of more specific kind of case study. - Yeah, I'm imagining a range of systems, but at its simplest, a single-celled organism that can be distinguished from its environment.

So at its simplest, that's basically what I always imagined in my head. And you may ask, well, how on earth can you even elaborate questions about the existence of a single drop of oil, for example? But there are deep questions there. The interface between the drop of oil, which contains an interior, and the thing that is not the drop of oil, the solvent in which it is immersed: how does that interface persist over time?

Why doesn't the oil just dissolve into the solvent? So what are the special properties of the exchange between the surface of the oil drop and the external states in which it's immersed? If you're a physicist, you'd say it would be the heat bath. You know, you've got a physical system, an ensemble again, we're talking about density dynamics, ensemble dynamics, an ensemble of atoms or molecules immersed in the heat bath.

But the question is, how did the oil drop get there and why does it not dissolve? - How is it maintaining itself? - Exactly. - What actions is it taking? I mean, it's such a fascinating idea of a drop of oil, and I guess it would dissolve in, no, it wouldn't dissolve in water.

So what-- - Precisely, so why not? - Why not? - Why not? - And how do you mathematically describe, I mean, it's such a beautiful idea and also the idea of like, where does the thing, where does the drop of oil end and where does it begin? - Right, so I mean, you're asking deep questions, deep in a non-millennial sense here.

(laughing) - Not in a hierarchical sense. (laughing) - But what you can do, so this is the deflationary part of it. Can I just qualify my answer by saying that normally when I'm asked this question, I answer from the point of view of a psychologist and we talk about predictive processing and predictive coding and the brain is an inference machine, but you haven't asked me from that perspective, I'm answering from the point of view of a physicist.

So the question is not so much why, but if it exists, what properties must it display? So that's the deflationary part of the free energy principle. The free energy principle does not supply an answer as to why, it's saying if something exists, then it must display these properties. That's the sort of the thing that's on offer.

And it so happens that these properties it must display are actually intriguing and have this inferential gloss, this sort of self-evidencing gloss that inherits from the fact that the very preservation of the boundary between the oil drop and the not-oil drop requires an optimization of a particular function, or a functional, that defines the presence, the existence, of this oil drop, which is why I started with existential imperatives.

It is a necessary condition for existence that this must occur because the boundary basically defines the thing that's existing. So it is that self-assembly aspect that you were hinting at, sometimes known as autopoiesis in biology and as self-assembly in computational chemistry. It's the, what does it look like? Sorry, how would you describe things that configure themselves out of nothing?

The way they clearly demarcate themselves from the states or the soup in which they are immersed. So from the point of view of computational chemistry, for example, you would just understand that as a configuration of a macromolecule to minimize its free energy, its thermodynamic free energy. It's exactly the same principle that we've been talking about, in that thermodynamic free energy is just the negative ELBO.

It's the same mathematical construct. So the very emergence of existence of structure or form that can be distinguished from the environment or the thing that is not the thing necessitates the existence of an objective function that it looks as if it is minimizing. It's finding a free energy minima.

- And so just to clarify, I'm trying to wrap my head around this. So the free energy principle says that if something exists, these are the properties it should display. So what that means is we can't just go into a soup and find the things that exist; the free energy principle doesn't give us a mechanism to find the things that exist.

Is what's being implied that you can kind of use it to reason, to think about, like, study a particular system and say, does this exhibit these qualities? - That's an excellent question. But to answer that, I'd have to return to your previous question about what's the difference between living and non-living things.

- Yes, well, actually, sorry. So yeah, maybe we can go there. You kind of drew a line, and forgive me for the stupid questions, but you kind of drew a line between living and existing. Is there an interesting sort of-- - Distinction? - Distinction between the two? - Yeah, I think there is.

So things do exist, grains of sand, rocks on the moon, trees, you. So all of these things can be separated from the environment in which they are immersed, and therefore they must at some level be optimizing their free energy. Taking this sort of model evidence interpretation of this quantity, that basically means that self-evidencing, another nice little twist of phrase here is that you are your own existence proof, statistically speaking, which I don't think I said that.

Somebody did, but I love that phrase. - You are your own existence proof. - Yeah, so it's so existential, isn't it? - I'm gonna have to think about that for a few days. (Lex laughing) So yeah, that's a beautiful line. So the step through to answer your question about what's it good for, we go along the following lines.

First of all, you have to define what it means to exist, which now, as you've rightly pointed out, means you have to define what probabilistic properties the states of something must possess so that it knows where it finishes. And then you write that down in terms of statistical independences, again, sparsity.

Again, it's not what's connected or what's correlated or what depends upon what, it's what's not correlated and what doesn't depend upon something. Again, it comes down to the deep structures, not in this instance hierarchical, but the structures that emerge from removing connectivity and dependency. And in this instance, basically being able to identify the surface of the oil drop from the water in which it is immersed.

And when you do that, you start to realize, well, there are actually four kinds of states in any given universe that contains anything. The things that are internal to the surface, the things that are external to the surface and the surface in and of itself, which is why I use a metaphor, a little single-celled organism that has an interior and exterior and then the surface of the cell.

And that's mathematically a Markov blanket. - Just to pause, I'm in awe of this concept that there's the stuff outside the surface, stuff inside the surface, and the surface itself, the Markov blanket. It's just the most beautiful kind of notion about trying to explore what it means to exist mathematically.

I apologize, it's just a beautiful idea. - But it came out of California, so that's-- - I changed my mind, I take it all back. (both laughing) - Anyway, so what you were just talking about, the surface, about the Markov blanket. - So the surface, or these blanket states, are now defined in relation to these independences, in terms of which of the different states, internal, blanket or external, can influence each other and which cannot.

You can now apply standard results that you would find in non-equilibrium physics or steady state or thermodynamics or hydrodynamics, usually out-of-equilibrium solutions, and apply them to this partition. And what it looks like is as if all the normal gradient flows that you would associate with any non-equilibrium system apply in such a way that two of these sets, part of the Markov blanket and the internal states, seem to be hill climbing or doing a gradient descent on the same quantity.

And that means that you can now describe the very existence of this oil drop. You can write down the existence of this oil drop in terms of flows, dynamics, equations of motion, where the blanket states, or part of them (we call them active states), and the internal states now seem to be, and must be, trying to look as if they're minimizing the same function, which is the log probability of occupying these states.
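Those gradient flows can be caricatured in one dimension. In the sketch below, everything (the stationary density, step size, horizon) is an illustrative assumption; the point is only that a state following a noisy descent on surprisal, -log p(x), drifts back toward its high-probability, low-free-energy region:

```python
import math
import random

# Assume (illustratively) a stationary density p(x) = N(0, 1), so the
# gradient of the surprisal -log p(x) is simply x.
def surprisal_grad(x):
    return x

random.seed(0)                      # deterministic run
x, dt = 4.0, 0.01                   # start far from the typical region
for _ in range(2000):
    # Langevin dynamics: gradient flow on surprisal plus random fluctuations.
    x += -surprisal_grad(x) * dt + math.sqrt(2 * dt) * random.gauss(0.0, 1.0)
print(f"final state x = {x:.2f} (started at 4.0; stationary spread ~ 1)")
```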

The interesting thing is what these would be called if you were trying to describe them. So what we're talking about are internal states, external states and blanket states. Now let's carve the blanket states into two: sensory states and active states. Operationally, it has to be the case that, in order for this carving up into different sets of states to exist, the active states of the Markov blanket cannot be influenced by the external states.

And we already know that the internal states can't be influenced by the external states 'cause the blanket separates them. So what does that mean? Well, it means the active states and the internal states are now jointly not influenced by external states. They only have autonomous dynamics. So now you've got a picture of an oil drop that has autonomy.

It has autonomous states, in the sense that there must be some parts of the surface of the oil drop that are not influenced by the external states, and likewise all of the interior. And together those two sets of states endow even a little oil drop with autonomous states that look as if they are optimizing their variational free energy or their negative ELBO, their model evidence.
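The independence structure just described can be made concrete. The dictionary encoding below is a hypothetical sketch; only the influence rules themselves (external to sensory, sensory to internal, internal to active, active to external) are taken from the conversation:

```python
# Who directly influences whom, per the Markov-blanket partition.
INFLUENCES = {
    "external": {"sensory"},    # the world impinges on the sensory states
    "sensory":  {"internal"},   # sensory states inform internal states
    "internal": {"active"},     # internal states drive the active states
    "active":   {"external"},   # active states act back on the world
}

def influenced_by(target):
    """Return the set of state types that directly influence `target`."""
    return {src for src, dsts in INFLUENCES.items() if target in dsts}

# The autonomy claim: neither internal nor active states are directly
# influenced by external states, so jointly they have autonomous dynamics.
for state in ("internal", "active"):
    assert "external" not in influenced_by(state)
print("autonomous states: internal + active")
```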

And that would be an interesting intellectual exercise. And you could say, you could even go into the realms of panpsychism that everything that exists is implicitly making inferences on self-evidencing. Now we make the next move, but what about living things? I mean, so let me ask you, what's the difference between an oil drop and a little tadpole or a little larva or a plankton?

- The picture was just painted of an oil drop. Just immediately in a matter of minutes took me into the world of panpsychism where you just convinced me, made me feel like an oil drop is a living, certainly an autonomous system, but almost a living system. So it has a capability, sensory capabilities and acting capabilities and it maintains something.

So what is the difference between that and something that we traditionally think of as a living system? That it could die or it can't, I mean, yeah, mortality. I'm not exactly sure. I'm not sure what the right answer there is because it can move, like movement seems like an essential element to being able to act in the environment, but the oil drop is doing that.

So I don't know. - Is it? The oil drop will be moved, but does it in and of itself move autonomously? - Well, the surface is performing actions that maintain its structure. - Yeah, you're being too clever. I was describing a passive little oil drop that's sitting there at the top of a glass of water.

- Sure, I guess. - What I'm trying to say is you're absolutely right. You've nailed it. It's movement. So where does that movement come from? If it comes from the inside, then you've got, I think, something that's living. - What do you mean, from the inside? - What I mean is that the internal states, which can influence the active states (where the active states can influence but are not influenced by the external states), can cause movement.

So there are two types of oil drops, if you like. There are oil drops where the internal states are so random that they average themselves away, and the thing cannot, on average, move; when you do the averaging, the movement disappears. So a nice example of that would be the sun. The sun certainly has internal states.

There's lots of intrinsic autonomous activity going on, but because it's not coordinated, because it doesn't have the deep, in the non-millennial sense, hierarchical structure that the brain does, there is no overall mode or pattern or organization that expresses itself on the surface that allows it to actually swim.

It can certainly have a very active surface, but on mass, at the scale of the actual surface of the sun, the average position of that surface cannot in itself move because the internal dynamics are more like a hot gas. They are literally like a hot gas. Whereas your internal dynamics are much more structured and deeply structured.

And now you can express, on your Markov blanket, on your active states, with your muscles and your secretory organs, your autonomic nervous system and its effectors, you can actually move. And that's all you can do. And that's something which, if you haven't thought of it like this before, I think it's nice to just realize: there is no other way that you can change the universe other than simply moving.

Whether that movement is articulating with my voice box or walking around or squeezing juices out of my secretory organs, there's only one way you can change the universe: it's moving. - And the fact that you do so non-randomly makes you alive. - Yeah. So it's that non-randomness. And that would be manifest, it would be realized, in terms of essentially swimming, essentially moving, changing one's shape, a morphogenesis that is dynamic and possibly adaptive.

So that's what I was trying to get at between the difference from the oil drop and the little tadpole. The tadpole is moving around, its active states are actually changing the external states. And there's now a cycle, an action perception cycle, if you like, a recurrent dynamic that's going on that depends upon this deeply structured autonomous behavior that rests upon internal dynamics that are not only modeling the data impressed upon their surface or the blanket states, but they are actively resampling those data by moving.

They're moving towards say chemical gradients and chemotaxis. So they've gone beyond just being good little models of the kind of world they live in. For example, an oil droplet could in a panpsychic sense be construed as a little being that has now perfectly inferred it's a passive non-living oil drop living in a bowl of water.

No problem. But now equip that oil drop with the ability to go out and test that hypothesis about different states of being, so it can actually push its surface over there, over there, and test for chemical gradients, and then you start to move toward a much more lifelike form.

Now this is all fun, theoretically interesting, but it actually is quite important in terms of reflecting what I have seen since the turn of the millennium, which is this move towards an enactive and embodied understanding of intelligence. And you say you're from machine learning. So what that means, this sort of central importance of movement, I think has yet to really hit machine learning.

It certainly has now diffused itself throughout robotics. And perhaps you could say certain problems in active vision where you actually have to move the camera to sample this and that. But machine learning of the data mining, deep learning sort, simply hasn't contended with this issue. What it's done, instead of dealing with the movement problem and the active sampling of data, it's just said, we don't need to worry about it, we can see all the data 'cause we've got big data.

So we can ignore movement. So that for me is an important omission in current machine learning. - So current machine learning is much more like the oil drop. - Yes. But an oil drop that enjoys exposure to nearly all the data that we need to be exposed to, as opposed to the tadpoles swimming out to find the right data.

For example, it likes food. That's a good hypothesis. Let's test it out. Let's go and move and ingest food, for example, and see, you know, is that evidence that I'm the kind of thing that likes this kind of food. - So the next natural question, and forgive this question, but if we think of sort of even artificial intelligence systems. You've just painted a beautiful picture of existence and life.

So do you ascribe, do you find within this framework a possibility of defining consciousness or exploring the idea of consciousness? Like what, you know, self-awareness and expanded to consciousness? Yeah. How can we start to think about consciousness within this framework? Is it possible? - Well, yeah, I think it's possible to think about it, whether you'll get it.

- Whether you'll get it, that's another question. - And again, I'm not sure that I'm licensed to answer that question. I think you'd have to speak to a qualified philosopher to get a definitive answer there. But certainly there's a lot of interest in using not just these ideas, but related ideas from information theory to try and tie down the maths and the calculus and the geometry of consciousness, either in terms of sort of a minimal consciousness, even less than a minimal selfhood.

And what I'm talking about is the ability effectively to plan, to have agency. So you could argue that a virus does have a form of agency in virtue of the way that it selectively finds hosts and cells to live in and moves around. But you wouldn't endow it with the capacity to think about planning and moving in a purposeful way where it countenances the future.

Whereas you might think an ant's not quite as unconscious as a virus. It certainly seems to have a purpose. It talks to its friends en route during its foraging. It has a different kind of autonomy, which is biotic, but beyond a virus. - So there's something about, so there's some line that has to do with the complexity of planning that may contain an answer.

I mean, it would be beautiful if we can find a line beyond which we can say a being is conscious. - Yes, it would be. - These are wonderful lines that we've drawn with existence, life, and consciousness. - Yes, it would be very nice. One little wrinkle there, and this is something I've only learned in the past few months, is the philosophical notion of vagueness.

So you're saying it would be wonderful to draw a line. I had always assumed that that line at some point would be drawn, until about four months ago, when a philosopher taught me about vagueness. So I don't know if you've come across this, but it's a technical concept, and I think most revealingly illustrated with: at what point does a collection of grains of sand become a pile?

Is it one grain, two grains, three grains, or four grains? So at what point would you draw the line between being a pile of sand and a collection of grains of sand? In the same way, is it right to ask, where would I draw the line between conscious and unconscious?

And it might be a vague concept. Having said that, I agree with you entirely. I think it's systems that have the ability to plan. So just technically what that means is that your inferential self-evidencing, by which I simply mean the dynamics, literally the thermodynamics and gradient flows, that underwrite the preservation of your oil droplet-like form, can be described as an optimization of log Bayesian model evidence, your ELBO.

That self-evidencing must be evidence for a model of what's causing the sensory impressions on the sensory part of your surface or your Markov blanket. If that model is capable of planning, it must include a model of the future consequences of your active states or your action, just planning. So we're now in the game of planning as inference.

Now notice what we've made though. We've made quite a big move away from big data and machine learning, because again, it's the consequences of moving. It's the consequences of selecting those data or those data or looking over there. And that tells you immediately that even to be a contender for a conscious artifact or a, is it strong AI or generalized?

- Generalized, yeah. - As it's called now. Then you've got to have movement in the game. And furthermore, you've got to have a generative model of the sort you might find in, say, a variational autoencoder that is thinking about the future conditioned upon different courses of action. Now that brings a number of things to the table, which now you start to think, well, those are all the right ingredients to talk about consciousness.

I've now got to select among a number of different courses of action into the future as part of planning. I've now got free will. The act of selecting this course of action or that one, this policy or that policy, suddenly makes me into an inference machine, a self-evidencing artifact that now looks as if it's selecting amongst different alternative ways forward as I actively swim here or swim there or look over here, look over there.
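Selecting among courses of action in this way can be sketched as scoring each candidate policy by how surprising its predicted outcomes would be under prior preferences. The policy names, the outcome distributions, and the use of a bare KL divergence (only the "risk" part of expected free energy) are all illustrative assumptions:

```python
import math

PREFERRED = {"food": 0.8, "nothing": 0.2}       # prior preferences over outcomes

POLICIES = {                                    # predicted outcomes per policy
    "swim_left":  {"food": 0.7, "nothing": 0.3},
    "swim_right": {"food": 0.1, "nothing": 0.9},
    "stay":       {"food": 0.2, "nothing": 0.8},
}

def expected_surprise(q):
    """KL divergence from predicted to preferred outcome distributions."""
    return sum(p * math.log(p / PREFERRED[o]) for o, p in q.items())

# Planning as inference (in caricature): pick the policy whose predicted
# consequences are least surprising under the preferred outcomes.
best = min(POLICIES, key=lambda name: expected_surprise(POLICIES[name]))
print("selected policy:", best)
```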

So I think you've now got to a situation if there is planning in the mix, you're now getting much closer to that line, if that line were ever to exist. I don't think it gets you quite as far as self-aware though. I think, and then you have to, I think, grapple with the question, how would formally you write down a calculus or a maths of self-awareness?

I don't think it's impossible to do, but I think there'll be pressure on you to actually commit to a formal definition of what you mean by self-awareness. I think most people that I know would probably say that a goldfish, a pet fish was not self-aware. They would probably argue about their favorite cat, but would be quite happy to say that their mom was self-aware.

So- - I mean, but that might very well connect to some level of complexity with planning. It seems like self-awareness is essential for complex planning. - Yeah. Do you want to take that further? 'Cause I think you're absolutely right. - Again, the line is unclear, but it seems like integrating yourself into the world, into your planning is essential for constructing complex plans.

- Yes, yeah. - So mathematically describing that in the same elegant way as you have with the free energy principle might be difficult. - Well, yes and no. I don't think that, well, perhaps we should just, can we just go back? That's a very important answer you gave. And I think if I just unpacked it, you'd see the truisms that you've just exposed for us.

But let me, sorry, I'm mindful that I didn't answer your question before. Well, what's the free energy principle good for? Is it just a pretty theoretical exercise to explain non-equilibrium steady states? Yes, it is. It does nothing more for you than that. It's gonna sound very arrogant, but it is of the same sort as the theory of natural selection, or the hypothesis of natural selection.

Beautiful, undeniably true, but tells you absolutely nothing about why you have legs and eyes. It tells you nothing about the actual phenotype and it wouldn't allow you to build something. So the free energy principle by itself is as vacuous as most tautological theories. And by tautological, of course, I'm referring to the theory of natural selection, the survival of the fittest.

Who are the fittest? Those that survive. Why do they survive? Because they're fitter. It just goes round in circles. In a sense, the free energy principle has that same deflationary tautology under the hood. It's a characteristic of things that exist. Why do they exist? Because they minimize their free energy. Why do they minimize their free energy?

Because they exist. And you just keep on going round and round and round. But the practical thing, which you don't get from natural selection, but you could say has now manifested in things like differential evolution or genetic algorithms or MCMC, for example, in machine learning. The practical thing you can get is: if it looks as if things that exist have density dynamics that look as though they're optimizing a variational free energy, and a variational free energy has to be a functional of a generative model, a probabilistic description of causes and consequences, causes out there, consequences in the sensorium, on the sensory parts of the Markov blanket, then it should, in theory, be possible to write down the generative model, work out the gradients, and then cause it to autonomously self-evidence.

So you should be able to write down oil droplets. You should be able to create artifacts where you have supplied the objective function that supplies the gradients, that supplies the self-organizing dynamics to non-equilibrium steady state. So there is actually a practical application of the free energy principle when you can write down your required evidence in terms of, well, when you can write down the generative model, that is the thing that has the evidence; the probability of these sensory data, given that model, is effectively the thing that the ELBO, or the variational free energy, bounds or approximates.

That means that you can actually write down the model, and the kind of thing that you want to engineer, the kind of AGI, or Artificial General Intelligence, that you want to manifest probabilistically, and then you engineer, a lot of hard work, but you would engineer a robot and a computer to perform a gradient descent on that objective function.

So it does have a practical implication. Now, why am I wittering on about that? It did seem relevant, yes. So the answer to "would it be easy or would it be hard?": well, mathematically, it's easy. I've just told you, all you need to do is write down your perfect artifact probabilistically, in the form of a probabilistic generative model, a probability distribution over the causes and consequences of the world in which this thing is immersed.

And then you just engineer a computer and a robot to perform a gradient descent on that objective function. No problem. But of course, the big problem is writing down the generative model. So that's where the heavy lifting comes in. It's the form and the structure of that generative model which basically defines the artifact that you will create, or indeed, the kind of artifact that has self-awareness.
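The recipe, write down a generative model, work out the gradients, and perform a gradient descent on the objective, can be shown end to end on the smallest possible example. The model, the unit precisions, and the learning rate below are illustrative assumptions; the point is only that the internal state settles at the free-energy minimum:

```python
# Generative model (illustrative): hidden cause v ~ N(prior_mean, 1)
# generates a sensory datum u = v + unit-variance noise.
prior_mean, u = 0.0, 2.0     # prior belief and the observed datum

def free_energy(phi):
    # With unit precisions, F is (up to constants) a sum of squared
    # prediction errors under the generative model.
    return 0.5 * ((u - phi) ** 2 + (phi - prior_mean) ** 2)

phi, lr = 0.0, 0.1           # internal state estimate and step size
for _ in range(100):
    grad = (phi - u) + (phi - prior_mean)   # dF/dphi
    phi -= lr * grad                        # gradient descent on F
print(f"settled estimate: {phi:.3f} (analytic optimum: 1.000)")
```

The descent step itself is the easy part; the hard work, as the conversation goes on to say, is that real artifacts need generative models vastly richer than this one-parameter toy.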

So that's where all the hard work comes in; very much like natural selection, it doesn't tell you in the slightest why you have eyes. So you have to drill down on the actual phenotype, the actual generative model. So with that in mind, what did you tell me that tells me immediately the kinds of generative models I would have to write down in order to have self-awareness?

- What you said to me was, I have to have a model that is effectively fit for purpose for this kind of world in which I operate. And if I now make the observation that this kind of world is effectively largely populated by other things like me, i.e. you, then it makes enormous sense that if I can develop a hypothesis that we are similar kinds of creatures, in fact, the same kind of creature, but I am me and you are you, then it becomes, again, mandated to have a sense of self.

So if I live in a world that is constituted by things like me, basically a social world, a community, then it becomes necessary now for me to infer that it's me talking and not you talking. I wouldn't need that if I was on Mars by myself, or if I was in the jungle as a feral child.

If there was nothing like me around, there would be no need to have an inference, a hypothesis, ah, yes, it is me that is experiencing or causing these sounds, and it is not you. It's only when there's ambiguity in play induced by the fact that there are others in that world.

So I think that the special thing about self-aware artifacts is that they have learned to, or they have acquired, or at least are equipped with, possibly by evolution, generative models that allow for the fact there are lots of copies of things like them around, and therefore they have to work out it's you and not me.

- That's brilliant. I've never thought of that, that the purpose, the real usefulness of consciousness or self-awareness in the context of planning, of existing in the world, is so you can operate with other things like you. And it doesn't necessarily have to be human.

It could be other kinds of similar creatures. - Absolutely, well, we imbue a lot of our attributes into our pets, don't we? Or we try to make our robots humanoid. And I think there's a deep reason for that, that it's just much easier to read the world if you can make the simplifying assumption that basically you're me, and it's just your turn to talk.

I mean, when we talk about planning, when you talk specifically about planning, the highest, if you like, manifestation or realization of that planning is what we're doing now. I mean, the human condition doesn't get any higher than this talking about the philosophy of existence and the conversation. But in that conversation, there is a beautiful art of turn-taking and mutual inference, theory of mind.

I have to know when you want to listen. I have to know when you want to interrupt. I have to make sure that you're online. I have to have a model in my head of your model in your head. That's the highest, the most sophisticated form of generative model, where the generative model actually has a generative model of somebody else's generative model.

And I think that what we are doing now evinces the kinds of generative models that would support self-awareness. 'Cause without that, we'd both be talking over each other, or we'd be singing together in a choir, you know? That's probably not a brilliant analogy for what I'm trying to say, but yeah, we wouldn't have this discourse.

- Yeah, the dance of it, yeah, that's right. As I interrupt. (laughs) I mean, that's beautifully put. I'll re-listen to this conversation many times. There's so much poetry in this, and mathematics. Let me ask the silliest, or perhaps the biggest question as a last kind of question. We've talked about life and existence and the objective function under which these objects would operate.

What do you think is the objective function of our existence? What's the meaning of life? What do you think is the, for you perhaps, the purpose, the source of fulfillment, the source of meaning for your existence? As one blob. - As one blob. - In this soup. - I'm tempted to answer that again as a physicist.

(laughs) The free energy I expect consequent upon my behavior. So technically that, and we could have a really interesting conversation about what that comprises in terms of searching for information, resolving uncertainty about the kind of thing that I am. But I suspect that you want a slightly more personal and fun answer.

But one which can be consistent with that. And I think it's reassuringly simple and harks back to what you were taught as a child, that you have certain beliefs about the kind of creature and the kind of person you are. And all that self-evidencing, all that minimizing variational free energy in an enactive and embodied way, means is fulfilling the beliefs about what kind of thing you are.

And of course we're all given those scripts, those narratives at a very early age, usually in the form of bedtime stories or fairy stories. I'm a princess and I'm gonna meet a beast who's gonna transform and it's gonna be a prince. - So the narratives are all around you, from your parents to the friends, to the society feeds these stories.

And then your objective function is to fulfill-- - Exactly, that narrative that has been encultured by your immediate family, but as you say, also the sort of culture in which you grew up, and that you create for yourself. I mean, again, because of this active inference, this enactive aspect of self-evidencing, not only am I modeling my environment, my econiche, my external states out there, but I'm actively changing them all the time.

And external states are doing the same back, we're doing it together. So there's a synchrony that means that I'm creating my own culture over different timescales. So the question now is, for me being very selfish, what scripts was I given? It was basically a mixture between Einstein and Sherlock Holmes.

So I smoke as heavily as possible, try to avoid too much interpersonal contact, enjoy the fantasy that you're a popular scientist who's gonna make a difference in a slightly quirky way. So that's how I grew up. My father was an engineer and loved science, and he loved things like Sir Arthur Eddington's Space, Time and Gravitation, which was the first understandable version of general relativity.

So all the fairy stories I was told as I was growing up were all about these characters. I'm keeping the Hobbit out of this because that doesn't quite fit my narrative. But it's a journey of exploration, I suppose, of sorts. So yeah, I've just grown up to be what I imagine a mild-mannered Sherlock Holmes/Albert Einstein would do in my shoes.

- And you did it elegantly and beautifully, Karl. It was a huge honor talking to you today. It was fun. Thank you so much for your time. - Oh, thank you. - Appreciate it. (upbeat music)