
Occam's Razor (Marcus Hutter) | AI Podcast Clips


Chapters

0:00 Occam's Razor
0:48 The most important principle in science
2:20 Why is Einstein so beautiful
3:43 Induction
4:29 Theory
6:31 Weighting
8:02 Compression
9:38 Kolmogorov Complexity
11:18 The Whole Universe
11:58 Noise and Chaos
13:38 Library of All Books
15:03 Game of Life
17:48 Finding Simple Programs

Transcript

- What is Occam's Razor? - So Occam's Razor says that you should not multiply entities beyond necessity, which, translated into plain English, and in a scientific context, means: if you have two theories or hypotheses or models which equally well describe the phenomenon you are studying or the data, you should choose the simpler one.

- So that's just a principle? - Yes. - So it's not a provable law, perhaps? Perhaps we'll discuss it and think about it, but what's the intuition for why the simpler answer is likelier to be the correct description of whatever we're talking about?

- I believe that Occam's Razor is probably the most important principle in science. I mean, of course we need logical deduction and we do experimental design, but science is about understanding the world, finding models of the world, and we can come up with crazy complex models which explain everything but predict nothing. The simple models seem to have predictive power, and it's a valid question why.

And there are two answers to that. You can just accept it, that is the principle of science, and we use this principle and it seems to be successful. We don't know why, but it just happens to be. Or you can try, you know, find another principle which explains Occam's Razor.

And if we start with the assumption that the world is governed by simple rules, then there's a bias towards simplicity, and applying Occam's Razor is the mechanism for finding these rules. And actually, in a more quantitative sense, and we come back to that later, in the case of Solomonoff induction you can rigorously prove that.

If you assume that the world is simple, then Occam's Razor is the best you can do in a certain sense. - So I apologize for the romanticized question, but outside of its effectiveness, why do you think we find simplicity so appealing as human beings? Why does E equals mc squared seem so beautiful to us humans?

- I guess mostly in general, many things can be explained by an evolutionary argument. And, you know, there's some artifacts in humans which are just artifacts and not evolutionary necessary. But with this beauty and simplicity, it's, I believe, at least the core is about, like science, finding regularities in the world, understanding the world, which is necessary for survival, right?

If I look at a bush and I just see noise, and there is a tiger and it eats me, then I'm dead. But if I try to find a pattern, and we know that humans are prone to finding more patterns in data than there are, like the Mars face and all these things, this bias towards finding patterns, even when they aren't real, though it's best, of course, if they are, helps us survive.

- Yeah, that's fascinating. I haven't really thought about it, I thought I just loved science, but indeed, in terms of survival, there is an evolutionary argument for why we find the work of Einstein so beautiful. Maybe a quick, small tangent. Could you describe what Solomonoff induction is?

- Yeah, so that's a theory which I claim, and Ray Solomonoff sort of claimed a long time ago, solves the big philosophical problem of induction. And I believe the claim is essentially true. And what it does is the following. So, okay, for the picky listener, induction can be interpreted narrowly and widely.

Narrow means inferring models from data. And widely means also then using these models for doing predictions, so prediction is also part of the induction. So I'm a little sloppy sort of with the terminology, and maybe that comes from Ray Solomonoff being sloppy. Maybe I shouldn't say that. (both laughing) He can't complain anymore.

So let me explain a little bit this theory in simple terms. So assume you have a data sequence, make it very simple, the simplest one, say 1, 1, 1, 1, 1, and you see 100 1s. What do you think comes next? The natural answer, I'm gonna speed up a little bit.

The natural answer is, of course, 1. And the question is why? Well, we see a pattern there. Okay, there's a 1, and we repeat it. And why should it suddenly after 100 1s be different? So what we're looking for is simple explanations or models for the data we have.

And now the question is, a model has to be presented in a certain language. Which language do we use? In science, we want formal languages, and we can use mathematics, or we can use programs on a computer. So abstractly, on a Turing machine, for instance, or it can be on a general-purpose computer.

So, and there are, of course, lots of models. You can say maybe it's 100 1s, and then 100 0s, and 100 1s, that's a model, right? But there are simpler models. There's the model "print 1 in a loop." That also explains the data. And if you push that to the extreme, you are looking for the shortest program which, if you run it, reproduces the data you have.

It will not stop, it will continue naturally. And this you take for your prediction. And on the sequence of ones, it's very plausible, right, that "print 1 in a loop" is the shortest program. We can give some more complex examples, like 1, 2, 3, 4, 5. What comes next? The shortest program is again, you know, a counter.
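The "shortest program" idea can be sketched in a few lines of Python. This is only a toy comparison, not real Kolmogorov complexity: a program that hard-codes the hundred 1s grows with the data, while the looping rule stays constant-size and, run forever, also predicts the next symbol.

```python
# Toy comparison of two "programs" for a sequence of one hundred 1s.
data = "1" * 100

literal = f"print({data!r})"              # hard-codes the whole sequence
loop = "while True: print(1, end='')"     # "print 1 in a loop"

# The loop is far shorter, so by the shortest-program criterion it wins,
# and it keeps producing 1s beyond the observed data, i.e. it predicts.
print(len(literal), len(loop))
```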

And so that is roughly speaking how Solomonoff induction works. The extra twist is that it can also deal with noisy data. So if you have, for instance, a coin flip, say a biased coin which comes up heads with 60% probability, then it will learn and figure this out.

And after a while it will predict, oh, the next coin flip will be heads with probability 60%. So it's the stochastic version of that. - But the goal is, the dream is always the search for the short program. - Yes, yeah. Well, in Solomonoff induction, precisely what you do is, you combine, so looking for the shortest program is like applying Occam's Razor, like looking for the simplest theory.

There's also Epicurus principle, which says, if you have multiple hypotheses, which equally well describe your data, don't discard any of them, keep all of them around, you never know. And you can put it together and say, okay, I have a bias towards simplicity, but I don't rule out the larger models.

And technically what we do is, we weigh the shorter models higher and the longer models lower. And you use a Bayesian technique, so you have a prior, which is precisely two to the minus the complexity of the program. And you weigh all these hypotheses and take this mixture, and then you also get the stochasticity in.
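A hedged sketch of this mixture in Python: each hypothesis gets prior weight 2 to the minus its program length, hypotheses inconsistent with the data are dropped (Epicurus keeps all the survivors), and the survivors vote on the next bit. The rules and "program lengths" below are invented purely for illustration, not Hutter's actual construction.

```python
hypotheses = [
    # (rule predicting the next bit from the sequence so far, length in bits)
    (lambda seq: 1,            4),   # "always 1": very short program
    (lambda seq: seq[-1],      8),   # "repeat the last bit"
    (lambda seq: 1 - seq[-1], 12),   # "alternate bits": longer program
]

def prob_next_is_one(seq):
    survivors = []
    for rule, length in hypotheses:
        # a hypothesis survives only if it reproduces every observed bit
        if all(rule(seq[:i]) == seq[i] for i in range(1, len(seq))):
            survivors.append((rule(seq), 2.0 ** -length))
    total = sum(w for _, w in survivors)
    # weighted vote of the surviving hypotheses on the next bit
    return sum(w for bit, w in survivors if bit == 1) / total

print(prob_next_is_one([1, 1, 1, 1, 1]))  # 1.0: every survivor predicts 1
```

With more data, inconsistent hypotheses drop out and the shortest surviving one dominates the mixture, which is the Occam bias in action.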

- Yeah, like many of your ideas, that's just a beautiful idea of weighing based on the simplicity of the program. I love that. That seems to me maybe a very human-centric concept, seems to be a very appealing way of discovering good programs in this world. You've used the term compression quite a bit.

I think it's a beautiful idea. Sort of, we just talked about simplicity and maybe science or just all of our intellectual pursuits is basically the attempt to compress the complexity all around us into something simple. So what does this word mean to you, compression? - I essentially have already explained it.

So compression means for me, finding short programs for the data or the phenomenon at hand. You could interpret it more widely as finding simple theories, which can be mathematical theories, or maybe even informal, like just in words. Compression means finding short descriptions, explanations, programs for the data. - Do you see science as a kind of, our human attempt at compression?

So we're speaking more generally, 'cause when you say programs, we're kind of zooming in on a particular sort of, almost like a computer science, artificial intelligence focus. But do you see all of human endeavor as a kind of compression? - Well, at least all of science I see as an endeavor of compression, not all of humanity, maybe.

And well, there are also some other aspects of science, like experimental design, right? I mean, we create experiments specifically to get extra knowledge. And that is then part of the decision-making process. But once we have the data, to understand the data is essentially compression. So I don't see any difference between compression, understanding, and prediction.

- So we're jumping around topics a little bit, but returning back to simplicity, a fascinating concept of Kolmogorov complexity. So in your sense, do most objects in our mathematical universe have high Kolmogorov complexity? And maybe what is, first of all, what is Kolmogorov complexity? - Okay, Kolmogorov complexity is a notion of simplicity or complexity.

And it takes the compression view to the extreme. So I explained before that if you have some data sequence, just think about a file on a computer, which is basically just a string of bits. And we have data compressors, like we compress big files into, say, zip files with certain compressors.

And you can also produce self-extracting archives. That means an executable which, if you run it, reproduces your original file without needing an extra decompressor. It's just the decompressor plus the archive together in one. And now there are better and worse compressors, and you can ask, what is the ultimate compressor?

So what is the shortest possible self-extracting archive you could produce for a certain data set which reproduces the data set? And the length of this is called the Kolmogorov complexity. And arguably, that is the information content in the data set. I mean, if the data set is very redundant or very boring, you can compress it very well, so the information content should be low.

And, you know, it is low according to this definition. - So it's the length of the shortest program that summarizes the data? - Yes, yeah. - And what's your sense of our sort of universe when we think about the different objects in our universe, that we try concepts or whatever at every level, do they have high or low Kolmogorov complexity?
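Kolmogorov complexity itself is uncomputable, but any real compressor gives an upper bound on it, which makes the "redundant data has low information content" point easy to demonstrate:

```python
import os
import zlib

# Any real compressor upper-bounds Kolmogorov complexity: redundant data
# compresses far below its raw length, patternless data barely at all.
boring = b"1" * 10_000        # highly redundant
noisy = os.urandom(10_000)    # no usable pattern

print(len(zlib.compress(boring)))   # a few dozen bytes: low information
print(len(zlib.compress(noisy)))    # roughly 10,000 bytes: incompressible
```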

So what's the hope? Do we have a lot of hope in being able to summarize much of our world? - That's a tricky and difficult question. So as I said before, I believe that the whole universe, based on the evidence we have, is very simple. So it has a very short description.

- Sorry, to linger on that, the whole universe, what does that mean? Do you mean at the very basic fundamental level in order to create the universe? - Yes, yeah. So you need a very short program, and you run it-- - To get the thing going. - To get the thing going, and then it will reproduce our universe.

There's a problem with noise. We can come back to that later, possibly. - Is noise a problem, or is it a bug or a feature? - I would say it makes our life as a scientist really, really much harder. I mean, think about without noise, we wouldn't need all of the statistics.

- But that may be, we wouldn't feel like there's a free will. Maybe we need that for the-- - Yeah, this is an illusion that noise can give you free will. - At least in that way, it's a feature. But also, if you don't have noise, you have chaotic phenomena, which are effectively like noise.

So we can't get away without statistics even then. I mean, think about rolling a die, and forget about quantum mechanics, and you know exactly how you throw it. But it's still so hard to compute the trajectory that effectively it is best to model it as coming out with a number with probability one over six.

But from this sort of philosophical Kolmogorov complexity perspective, if we didn't have noise, then arguably you could describe the whole universe with the Standard Model plus general relativity. I mean, we don't have a theory of everything yet, but sort of assuming we are close to it or have it, yeah.

Plus the initial conditions, which may hopefully be simple. And then you just run it, and then you would reproduce the universe. But that's spoiled by noise or by chaotic systems or by initial conditions, which may be complex. So now if we don't take the whole universe, we're just a subset, just take planet Earth.

Planet Earth cannot be compressed into a couple of equations. This is a hugely complex system. - So interesting. So when you look at the window, like the whole thing might be simple, but when you just take a small window, then-- - It may become complex, and that may be counterintuitive, but there's a very nice analogy.

The library of all books. So imagine you have a normal library with interesting books, and you go there, great, lots of information, and quite complex, yeah? So now I create a library which contains all possible books, say, of 500 pages. So the first book just has AAAA over all the pages.

The next book, AAAA, and ends with B, and so on. I create this library of all books. I can write a super short program which creates this library. So this library which has all books has zero information content. And you take a subset of this library and suddenly you have a lot of information in there.
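The library argument fits in a few lines of code: a tiny program enumerates every possible "book" over an alphabet, so the complete collection carries almost no information; it is the choice of a particular subset that does. A miniature version, assuming toy books of three letters:

```python
from itertools import product

# The "library of all books" in miniature: this short program generates
# every possible book of `pages` letters over `alphabet`, so the whole
# collection has almost no information content. Singling out a subset
# of the library is what carries information.
def library(alphabet="AB", pages=3):
    for book in product(alphabet, repeat=pages):
        yield "".join(book)

print(list(library()))  # all 8 three-letter "books" over {A, B}
```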

- So that's fascinating. I think one of the most beautiful object, mathematical objects that, at least today, seems to be understudied or under-talked about is cellular automata. What lessons do you draw from sort of the game of life for cellular automata, where you start with the simple rules, just like you're describing with the universe, and somehow complexity emerges?

Do you feel like you have an intuitive grasp on the fascinating behavior of such systems, where, like you said, some chaotic behavior could happen, some complexity could emerge, it could die out in some very rigid structures? Do you have a sense about cellular automata that somehow transfers maybe to the bigger questions of our universe?

- Yeah, cellular automata, and especially Conway's Game of Life, are really great because the rules are so simple, you can explain them to every child, and even by hand you can simulate it a little bit, and you see these beautiful patterns emerge, and people have proven that it's even Turing-complete.

Not only can you use a computer to simulate the Game of Life, but you can also use the Game of Life to simulate any computer. That is truly amazing, and it's probably the prime example to demonstrate that very simple rules can lead to very rich phenomena. And people sometimes ask, how can chemistry and biology be so rich?
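The "rules you can explain to every child" claim is easy to check: a complete Game of Life update fits in a few lines. This sketch represents the board as a set of live cell coordinates.

```python
from collections import Counter

# A minimal Conway's Game of Life step over a set of live cells: the
# complete rules really do fit in a few lines, yet the resulting system
# is Turing-complete.
def step(live):
    # count, for every cell, how many live neighbours it has
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}        # a period-2 oscillator
print(step(step(blinker)) == blinker)     # True
```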

I mean, this can't be based on simple rules, but no, we know quantum electrodynamics describes all of chemistry, and we come back to that later. I claim intelligence can be explained or described in one single equation, this very rich phenomenon. You also asked whether I understand this phenomenon, and probably not, and there's this saying, you never really understand things, you just get used to them.

And I think I'm pretty used to cellular automata, so you believe that you understand now why this phenomenon happens, but I'll give you a different example. I didn't play too much with Conway's Game of Life, but a little bit more with fractals and with the Mandelbrot set, and these beautiful patterns, just look at the Mandelbrot set.

And well, when the computers were really slow, and I just had a black-and-white monitor, I programmed my own programs in assembler, too. - Assembler, wow. Wow, you're legit. (both laughing) - To get these fractals on the screen, and it was mesmerizing. And much later, so I returned to this every couple of years, and then I tried to understand what is going on, and you can understand a little bit. So I tried to derive the locations, there are these circles and the apple shape, and then you have smaller Mandelbrot sets recursively in this set.

And there's a way to mathematically, by solving high order polynomials, to figure out where these centers are and what size they are approximately. And by sort of mathematically approaching this problem, you slowly get a feeling of why things are like they are, and that sort of is a first step to understanding why this rich phenomenon.

- Do you think it's possible, what's your intuition, do you think it's possible to reverse engineer and find the short program that generated these fractals by looking at the fractals? - Well, in principle, yes. So, I mean, in principle, what you can do is, you take any data set, you take these fractals, or whatever your data set is, say a picture of Conway's Game of Life, and you run through all programs. You take programs of size one, two, three, four, and so on, and run them all in parallel in so-called dovetailing fashion: you give them computational resources, the first one half of the resources, the second one a quarter, and so on. You let them run, wait until they halt and give an output, compare it to your data, and if some of these programs produce the correct data, then you stop, and then you already have some program.

It may be a long program that just happens to run fast, and then you continue, and you get shorter and shorter programs until you eventually find the shortest program. The interesting thing is, you can never know whether it's the shortest program, because there could be an even shorter program which is just slower, and you just have to wait, yeah?
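The enumeration just described can be sketched in Python for a toy program language (a "program" here is a string whose pattern is repeated forever; this is an invented language, not real Turing machines). Because every candidate in this language halts, the dovetailing trick for non-halting programs can be skipped.

```python
from itertools import count, product

def run(prog, n):
    # "execute" a program: repeat its pattern out to n symbols
    return (prog * (n // len(prog) + 1))[:n]

def shortest_program(data, alphabet="01"):
    # enumerate candidates in order of length, shortest first, so the
    # first program that reproduces the data is the shortest one
    for length in count(1):
        for prog in map("".join, product(alphabet, repeat=length)):
            if run(prog, len(data)) == data:
                return prog

print(shortest_program("111111"))   # '1'
print(shortest_program("101010"))   # '10'
```

Unlike the general case Hutter describes, this toy search terminates and its answer is provably shortest, precisely because every candidate halts quickly.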

But asymptotically, and actually after a finite time, you have the shortest program. So this is a theoretical but completely impractical way of finding the underlying structure in every data set, and that is what Solomonoff induction does and Kolmogorov complexity. In practice, of course, we have to approach the problem more intelligently. And if you take resource limitations into account, there's, for instance, the field of pseudorandom numbers. These look like random numbers, but they are deterministic sequences, and no algorithm which is fast, fast means runs in polynomial time, can detect that it's actually deterministic.

So we can produce interesting, I mean, random numbers are maybe not that interesting, but just as an example, we can produce complex-looking data, and we can then prove that no fast algorithm can detect the underlying pattern. - Which, unfortunately, is a big challenge for our search for simple programs in the space of artificial intelligence, perhaps.
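As a concrete, if weak, example of a deterministic but random-looking sequence, a linear congruential generator fits in a few lines. Note this is only an illustration: an LCG is easily distinguished from true randomness by careful tests, unlike the provably hard pseudorandom generators alluded to above.

```python
# A linear congruential generator: completely deterministic, yet its
# output passes casual inspection as random. (Constants are the classic
# glibc-style LCG parameters; not cryptographically strong.)
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)         # scale into [0, 1)
    return out

vals = lcg(seed=42, n=1000)
print(sum(vals) / len(vals))      # close to 0.5, like a fair random source
```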

- Yes, it definitely is for artificial intelligence, and it's quite surprising that it was, I can't say easy, I mean, physicists worked really hard to find these theories, but apparently it was possible for human minds to find these simple rules in the universe. It could have been different, right? - It could have been different.

It's awe-inspiring.