
Cellular Automata and Rule 30 (Stephen Wolfram) | AI Podcast Clips


00:00:00.000 | - So in 2002, you published A New Kind of Science,
00:00:05.000 | to which, sort of on a personal level,
00:00:09.280 | I can credit my love for cellular automata
00:00:11.400 | and computation in general.
00:00:12.800 | I think a lot of others can as well.
00:00:15.960 | Can you briefly describe the vision, the hope,
00:00:20.960 | the main idea presented in this 1,200 page book?
00:00:27.240 | - Sure, although it took 1,200 pages to say in the book.
00:00:30.840 | So no, the real idea, it's kind of,
00:00:35.400 | a good way to get into it is to look at sort of
00:00:37.360 | the arc of history and to look at what's happened
00:00:39.400 | in kind of the development of science.
00:00:41.360 | I mean, there was this sort of big idea in science
00:00:44.400 | about 300 years ago that was,
00:00:46.800 | let's use mathematical equations
00:00:49.240 | to try and describe things in the world.
00:00:51.520 | Let's use sort of the formal idea of mathematical equations
00:00:55.280 | to describe what might be happening in the world,
00:00:57.320 | rather than, for example,
00:00:58.560 | just using sort of logical argumentation and so on.
00:01:01.240 | Let's have a formal theory about that.
00:01:04.160 | And so there'd been this 300-year run
00:01:06.440 | of using mathematical equations
00:01:07.840 | to describe the natural world, which had worked pretty well.
00:01:10.520 | But I got interested in how one could generalize
00:01:14.080 | that notion.
00:01:15.280 | There is a formal theory, there are definite rules,
00:01:17.920 | but what structure could those rules have?
00:01:20.520 | And so what I got interested in was,
00:01:22.820 | let's generalize beyond the sort of purely
00:01:24.920 | mathematical rules.
00:01:26.460 | And we now have this sort of notion of programming
00:01:29.460 | and computing and so on.
00:01:31.100 | Let's use the kinds of rules that can be embodied
00:01:34.380 | in programs, as a sort of generalization
00:01:37.820 | of the ones that can exist in mathematics
00:01:40.180 | as a way to describe the world.
00:01:42.140 | And so my kind of favorite version of these kinds
00:01:45.900 | of simple rules are these things called cellular automata.
00:01:49.540 | And so typical case,
00:01:51.060 | - So wait, what are cellular automata?
00:01:54.420 | - Fair enough.
00:01:55.260 | So typical case of a cellular automaton,
00:01:57.420 | it's an array of cells.
00:01:59.820 | It's just a line of discrete cells.
00:02:03.560 | Each cell is either black or white.
00:02:05.840 | And in a series of steps that you can represent
00:02:08.860 | as lines going down a page,
00:02:11.060 | you're updating the color of each cell
00:02:13.060 | according to a rule that depends on the color
00:02:15.380 | of the cell above it and to its left and right.
00:02:17.620 | So it's really simple.
00:02:18.460 | So a thing might be, you know,
00:02:21.020 | if the cell and its right neighbor are not the same
00:02:26.020 | and/or the cell on the left is black or something,
00:02:34.140 | then make it black on the next step.
00:02:36.540 | And if not make it white, typical rule.
00:02:39.280 | That rule, I'm not sure I said it exactly right,
00:02:42.780 | but a rule very much like what I just said
00:02:45.160 | has the feature that if you started off
00:02:46.820 | from just one black cell at the top,
00:02:48.860 | it makes this extremely complicated pattern.
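A minimal sketch of this kind of rule, for readers who want to try it. The spoken version isn't exact (as noted just above), but in the standard formulation of Rule 30, a cell's new color is its left neighbor XOR (its own color OR its right neighbor). In Python, assuming wraparound edges:

```python
# One step of Rule 30: new cell = left XOR (center OR right).
# Cells are 0 (white) or 1 (black); the row wraps around at the edges.
def rule30_step(row):
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

# Start from a single black cell and print the first few steps.
width = 31
row = [0] * width
row[width // 2] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```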
00:02:51.180 | So some rules, you get a very simple pattern.
00:02:55.020 | Some rules you have, the rule is simple.
00:02:58.860 | You start them off from a sort of simple seed.
00:03:01.260 | You just get this very simple pattern.
00:03:03.460 | But other rules, and this was the big surprise
00:03:06.380 | when I started actually just doing
00:03:07.900 | the simple computer experiments to find out what happens,
00:03:10.800 | is that they produce very complicated patterns of behavior.
00:03:14.220 | So for example, this rule 30 rule has the feature
00:03:18.500 | that if you started off from just one black cell at the top,
00:03:21.000 | it makes this very random pattern.
00:03:23.740 | If you look, like, at the center column of cells,
00:03:26.920 | you get a series of values, you know,
00:03:29.340 | it goes black, white, black, black, whatever it is.
00:03:32.020 | That sequence seems for all practical purposes random.
00:03:36.060 | So it's kind of like in math, you know,
00:03:39.380 | you compute the digits of pi, 3.1415926, whatever.
00:03:43.940 | Those digits once computed, I mean,
00:03:46.780 | the scheme for computing pi, you know,
00:03:48.420 | it's the ratio of the circumference to the diameter of a circle,
00:03:51.080 | very well-defined, but yet when you are,
00:03:54.540 | once you've generated those digits,
00:03:56.440 | they seem for all practical purposes completely random.
00:03:59.480 | And so it is with rule 30,
00:04:01.700 | that even though the rule is very simple, much simpler,
00:04:04.740 | much more sort of computationally obvious
00:04:07.540 | than the rule for generating digits of pi,
00:04:09.820 | even with a rule that simple,
00:04:11.700 | you're still generating immensely complicated behavior.
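To make that concrete, here is a small sketch that extracts the center column, with the row padded wide enough that the wraparound edges can never influence the middle within the steps taken:

```python
def rule30_center_column(steps):
    """Center-column values of Rule 30 started from a single black cell."""
    width = 2 * steps + 3   # wide enough that edge effects never reach the center
    row = [0] * width
    mid = width // 2
    row[mid] = 1
    column = []
    for _ in range(steps):
        column.append(row[mid])
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width]) for i in range(width)]
    return column

print(rule30_center_column(16))
# Starts 1, 1, 0, 1, 1, 1, 0, 0, ... and continues with no practically
# detectable regularity, just as described.
```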
00:04:14.660 | - Yeah, so if we could just pause on that,
00:04:16.340 | I think you probably have said it and looked at it so long,
00:04:19.660 | you forgot the magic of it, or perhaps you don't,
00:04:21.980 | you still feel the magic.
00:04:23.020 | But to me, if you've never seen sort of,
00:04:27.500 | I would say, what is it?
00:04:28.620 | A one-dimensional, essentially, cellular automata, right?
00:04:33.020 | And you were to guess what you would see
00:04:36.540 | if you have some sort of cells
00:04:41.020 | that only respond to its neighbors.
00:04:43.740 | If you were to guess what kind of things you would see,
00:04:47.820 | like my initial guess,
00:04:50.460 | like even when I first opened your book,
00:04:52.540 | A New Kind of Science, right?
00:04:54.340 | My initial guess is you would see,
00:04:56.700 | I mean, it would be very simple stuff.
00:04:59.220 | And I think it's a magical experience
00:05:03.180 | to realize the kind of complexity.
00:05:05.100 | You mentioned rule 30,
00:05:06.860 | still your favorite cellular automaton?
00:05:09.260 | - My favorite rule, yes.
00:05:11.580 | - You get complexity, immense complexity.
00:05:14.940 | You get arbitrary complexity.
00:05:17.300 | - Yes. - And when you say
00:05:18.300 | randomness down the middle column,
00:05:20.420 | that's just one cool way to say
00:05:25.340 | that there's incredible complexity.
00:05:27.260 | And that's just, I mean, that's a magical idea.
00:05:31.300 | However you start to interpret it,
00:05:32.740 | all the irreducibility discussions, all that,
00:05:35.220 | but it's just, I think that has profound philosophical
00:05:39.780 | kind of notions around it too.
00:05:42.180 | It's not just, I mean, it's transformational
00:05:44.980 | about how you see the world.
00:05:46.020 | I think for me it was transformational.
00:05:48.020 | I don't know, we can have all kinds of discussion
00:05:50.620 | about computation and so on,
00:05:52.420 | but just, you know, I sometimes think
00:05:55.740 | if I were on a desert island and was,
00:06:00.740 | I don't know, maybe it was some psychedelics or something,
00:06:03.140 | but if I had to take one book,
00:06:05.180 | I mean, "New Kind of Science" would be it
00:06:06.580 | 'cause you could just enjoy that notion.
00:06:09.540 | For some reason it's a deeply profound notion,
00:06:11.940 | at least to me. - I find it that way.
00:06:13.500 | Yeah, I mean, look, it's been,
00:06:15.160 | it was a very intuition-breaking thing to discover.
00:06:21.180 | I mean, it's kind of like, you know,
00:06:23.100 | you point the computational telescope out there
00:06:26.100 | and suddenly you see, I don't know, you know,
00:06:29.300 | in the past it's kind of like, you know,
00:06:30.900 | moons of Jupiter or something,
00:06:32.060 | but suddenly you see something that's kind of
00:06:33.380 | very unexpected and Rule 30 was very unexpected for me.
00:06:37.060 | And the big challenge at a personal level
00:06:39.220 | was to not ignore it.
00:06:41.480 | I mean, people, you know, in other words,
00:06:43.660 | you might say, you know- - It's a bug.
00:06:46.020 | What would you say?
00:06:47.300 | Yeah, what would you say? - Yeah, I mean, I-
00:06:48.700 | - What are we looking at, by the way?
00:06:50.020 | - Oh, well, I was just generating here,
00:06:51.420 | I'll actually generate a Rule 30 pattern.
00:06:53.580 | So that's the rule for Rule 30.
00:06:56.940 | And it says, for example, it says here,
00:06:59.020 | if you have a black cell in the middle
00:07:01.180 | and black cell to the left and white cell to the right,
00:07:03.260 | then the cell on the next step will be white.
00:07:05.820 | And so here's the actual pattern that you get
00:07:08.340 | starting off from a single black cell at the top there.
00:07:11.960 | And then- - That's the initial state,
00:07:14.420 | initial condition. - That's the initial thing.
00:07:15.980 | You just start off from that
00:07:17.260 | and then you're going down the page.
00:07:19.380 | And at every step, you're just applying this rule
00:07:23.500 | to find out the new value that you get.
00:07:25.860 | And so you might think, a rule that simple,
00:07:28.740 | you gotta get the, there's gotta be some trace
00:07:30.900 | of that simplicity here.
00:07:32.580 | Okay, we'll run it, let's say, for 400 steps.
00:07:35.780 | That's what it does.
00:07:36.980 | It's kind of aliasing a bit on the screen there,
00:07:38.700 | but you can see there's a little bit of regularity
00:07:41.420 | over on the left.
00:07:42.820 | But there's a lot of stuff here
00:07:46.000 | that just looks very complicated, very random.
00:07:48.960 | And that's a big sort of shock to,
00:07:52.580 | was a big shock to my intuition, at least,
00:07:54.940 | that that's possible.
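The rule icon on screen is, in effect, just the number 30 written in binary: each of the eight possible left/center/right neighborhoods gets one output bit. A sketch of that decoding:

```python
def rule_table(number):
    """Lookup table for an elementary cellular automaton rule number (0-255):
    maps each (left, center, right) neighborhood to its output bit."""
    return {((n >> 2) & 1, (n >> 1) & 1, n & 1): (number >> n) & 1
            for n in range(8)}

for (l, c, r), out in sorted(rule_table(30).items(), reverse=True):
    print(f"{l}{c}{r} -> {out}")
# Prints 111 -> 0, 110 -> 0, 101 -> 0, 100 -> 1, 011 -> 1, 010 -> 1,
# 001 -> 1, 000 -> 0. Note the 110 case: a black cell with a black left
# neighbor and a white right neighbor becomes white, as read off above.
```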
00:07:56.100 | - Your mind immediately starts, is there a pattern?
00:07:58.620 | There must be a repetitive pattern.
00:08:00.380 | - Yeah. - There must be.
00:08:01.220 | That's where the mind goes. - Well, right, so I spent,
00:08:02.580 | so indeed, that's what I thought at first.
00:08:04.740 | And I thought, well, this is kind of interesting,
00:08:07.540 | but if we run it long enough, we'll see,
00:08:10.420 | something will resolve into something simple.
00:08:13.020 | And I did all kinds of analysis
00:08:15.980 | of using mathematics, statistics, cryptography,
00:08:19.300 | whatever, to try and crack it.
00:08:22.940 | And I never succeeded.
00:08:24.020 | And after I hadn't succeeded for a while,
00:08:25.740 | I started thinking maybe there's a real phenomenon here
00:08:29.380 | that is the reason I'm not succeeding.
00:08:31.060 | Maybe, I mean, the thing that, for me,
00:08:33.060 | was sort of a motivating factor
00:08:34.980 | was looking at the natural world
00:08:36.700 | and seeing all this complexity
00:08:37.940 | that exists in the natural world.
00:08:39.540 | The question is, where does it come from?
00:08:41.420 | What secret does nature have
00:08:43.460 | that lets it make all this complexity
00:08:45.560 | that we humans, when we engineer things,
00:08:47.780 | typically are not making?
00:08:49.540 | We're typically making things
00:08:50.740 | that at least look quite simple to us.
00:08:53.300 | And so the shock here was,
00:08:55.820 | even from something very simple,
00:08:57.140 | you're making something that complex.
00:08:59.820 | Maybe this is getting at sort of the secret
00:09:02.140 | that nature has that allows it
00:09:04.180 | to make really complex things,
00:09:06.300 | even though its underlying rules may not be that complex.
00:09:09.860 | - How did it make you feel?
00:09:11.100 | If we look at the Newton apple,
00:09:13.940 | was there, you know, you took a walk
00:09:16.940 | and something, it profoundly hit you,
00:09:20.700 | or was this a gradual thing?
00:09:22.540 | A lobster being boiled?
00:09:24.500 | - The truth of every sort of science discovery
00:09:27.700 | is it's not that sudden.
00:09:29.740 | I mean, I've spent, I happen to be interested
00:09:31.980 | in scientific biography kinds of things,
00:09:33.580 | and so I've tried to track down, you know,
00:09:35.060 | how did people come to figure out this or that thing?
00:09:38.140 | And there's always a long kind of sort of preparatory,
00:09:43.140 | you know, there's a need to be prepared
00:09:46.220 | and a mindset in which it's possible to see something.
00:09:49.140 | I mean, in the case of Rule 30,
00:09:50.940 | it was around June 1st, 1984,
00:09:53.380 | kind of a silly story in some ways.
00:09:56.420 | I finally had a high-resolution laser printer.
00:09:59.100 | So I was able, so I thought,
00:10:00.340 | I'm gonna generate a bunch of pictures
00:10:01.780 | of the cellular automata, and I generate this one,
00:10:04.860 | and I put it, I was on some plane flight to Europe,
00:10:08.940 | and I have this with me, and it's like,
00:10:10.940 | you know, I really should try to understand this.
00:10:13.980 | And this is really, you know,
00:10:15.580 | this is, I really don't understand what's going on.
00:10:17.980 | And that was kind of the, you know,
00:10:20.260 | slowly trying to see what was happening.
00:10:24.300 | It was not, it was depressingly unsudden, so to speak,
00:10:28.580 | in the sense that a lot of these ideas,
00:10:31.840 | like principle of computational equivalence, for example,
00:10:35.180 | you know, I thought, well, that's a possible thing.
00:10:37.660 | I didn't know if it's correct.
00:10:39.180 | Still don't know for sure that it's correct,
00:10:41.900 | but it's sort of a gradual thing,
00:10:43.300 | that these things gradually kind of become,
00:10:46.420 | seem more important than one thought.
00:10:48.580 | I mean, I think the whole idea
00:10:50.660 | of studying the computational universe of simple programs,
00:10:53.860 | it took me probably a decade, decade and a half
00:10:57.700 | to kind of internalize
00:10:59.260 | that that was really an important idea.
00:11:01.220 | And I think, you know, if it turns out,
00:11:03.700 | we find the whole universe lurking out there
00:11:06.180 | in the computational universe, that's a good, you know,
00:11:09.260 | it's a good brownie point or something for the whole idea.
00:11:12.760 | But I think that the thing that's strange
00:11:16.060 | in this whole question about, you know,
00:11:18.260 | finding this different raw material
00:11:19.900 | for making models of things.
00:11:21.480 | What's been interesting sort of in the arc of history
00:11:25.580 | is, you know, for 300 years,
00:11:26.980 | it's kind of like the mathematical equations approach.
00:11:29.980 | It was the winner.
00:11:31.020 | It was the thing, you know,
00:11:31.940 | you want to have a really good model
00:11:33.620 | for something that's what you use.
00:11:35.740 | The thing that's been remarkable
00:11:37.220 | is just in the last decade or so,
00:11:39.540 | I think one can see a transition
00:11:41.100 | to using not mathematical equations,
00:11:44.020 | but programs as sort of the raw material
00:11:46.660 | for making models of stuff.
00:11:48.660 | And that's pretty neat.
00:11:50.420 | And it's kind of, you know,
00:11:52.140 | as somebody who's kind of lived inside this paradigm shift,
00:11:54.940 | so to speak, it is bizarre.
00:11:57.780 | I mean, no doubt in sort of the history of science
00:12:00.240 | that will be seen as an instantaneous paradigm shift,
00:12:03.260 | but it sure isn't instantaneous when it's played out
00:12:05.580 | in one's actual life, so to speak.
00:12:07.340 | It seems glacial.
00:12:08.720 | And it's the kind of thing where it's sort of interesting
00:12:13.740 | because in the dynamics of sort of the adoption
00:12:17.420 | of ideas like that into different fields,
00:12:20.900 | the younger the field, the faster the adoption typically,
00:12:24.100 | because people are not kind of locked in
00:12:27.220 | with the fifth generation of people
00:12:28.740 | who've studied this field.
00:12:30.460 | And it is the way it is,
00:12:32.620 | and it can never be any different.
00:12:34.280 | And I think that's been, you know,
00:12:36.180 | watching that process has been interesting.
00:12:38.400 | I mean, I think I'm fortunate that I've,
00:12:43.120 | I do stuff mainly 'cause I like doing it.
00:12:45.980 | And if I was, that makes me kind of thick-skinned
00:12:50.280 | about the world's response to what I do.
00:12:52.480 | But that's definitely, you know,
00:12:56.000 | and any time you write a book called something
00:12:58.500 | like "A New Kind of Science,"
00:13:00.580 | it's kind of the pitchforks will come out
00:13:03.760 | for the old kind of science.
00:13:05.400 | And it was interesting dynamics.
00:13:07.080 | I think that the,
00:13:10.400 | I have to say that I was fully aware of the fact that
00:13:14.040 | when you see sort of incipient paradigm shifts in science,
00:13:18.880 | the vigor of the negative response upon early introduction
00:13:23.560 | is a fantastic positive indicator of good long-term results.
00:13:28.560 | So in other words, if people just don't care,
00:13:32.280 | it's, you know, that's not such a good sign.
00:13:35.600 | If they're like, "Oh, this is great."
00:13:37.180 | That means you didn't really discover
00:13:38.480 | anything interesting.
00:13:40.540 | - What fascinating properties of Rule 30
00:13:43.700 | have you discovered over the years?
00:13:45.460 | You've recently announced the Rule 30 prizes
00:13:47.860 | for solving three key problems.
00:13:50.140 | Can you maybe talk about interesting properties
00:13:52.860 | that have been kind of revealed,
00:13:55.780 | Rule 30 or other cellular automata,
00:13:57.700 | and what problems are still before us,
00:14:00.260 | like the three problems you've announced?
00:14:01.800 | - Yeah, yeah, right.
00:14:02.640 | So I mean, the most interesting thing about cellular automata
00:14:07.460 | is that it's hard to figure stuff out about them.
00:14:09.860 | And that's some, in a sense,
00:14:12.960 | every time you try and sort of,
00:14:14.700 | you try and bash them with some other technique,
00:14:17.980 | you say, "Can I crack them?"
00:14:20.460 | The answer is they seem to be uncrackable.
00:14:22.780 | They seem to have the feature that they are,
00:14:26.580 | that they're sort of showing irreducible computation.
00:14:29.620 | They're not, you're not able to say,
00:14:31.760 | "Oh, I know exactly what this is going to do.
00:14:33.940 | "It's going to do this or that."
00:14:36.020 | - But there's specific formulations of that fact.
00:14:38.960 | - Yes, right.
00:14:39.800 | So I mean, for example, in Rule 30,
00:14:42.180 | in the pattern you get just starting
00:14:43.740 | from a single black cell,
00:14:45.320 | you get this sort of very,
00:14:47.540 | very sort of random looking pattern.
00:14:50.500 | And so one feature of that,
00:14:51.620 | just look at the center column.
00:14:53.260 | And for example, we use that for a long time
00:14:55.940 | to generate randomness in Wolfram Language,
00:14:58.420 | just what Rule 30 produces.
00:15:00.980 | Now the question is, can you prove how random it is?
00:15:04.340 | So for example, one very simple question,
00:15:06.780 | can you prove that it'll never repeat?
00:15:09.220 | We haven't been able to show that it will never repeat.
00:15:11.980 | We know that if there are two adjacent columns,
00:15:16.220 | we know they can't both repeat,
00:15:18.140 | but just knowing whether that center column can ever repeat,
00:15:21.020 | we still don't even know that.
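No proof is known either way, but the finite check is easy to sketch: generate a long prefix of the center column and look for a period that persists to the end of it. A brute-force version, storing each row as one Python big integer so long runs stay cheap:

```python
def center_column(steps):
    """Center column of Rule 30; each row is a big integer, bit i = cell i."""
    row = 1 << steps   # single black cell, with room to grow toward the low bits
    col = []
    for _ in range(steps):
        col.append((row >> steps) & 1)
        # Rule 30 on every cell at once: left XOR (center OR right),
        # with the left neighbor taken as the higher bit.
        row = (row >> 1) ^ (row | (row << 1))
    return col

def eventual_period(seq):
    """Return (start, p) if seq repeats with period p over at least its
    second half; None if no such period is visible in this prefix."""
    n = len(seq)
    for p in range(1, n // 4 + 1):
        start = n - p
        while start > 0 and seq[start - 1] == seq[start - 1 + p]:
            start -= 1
        if start <= n // 2:
            return start, p
    return None

print(eventual_period(center_column(2000)))   # None: no period in this prefix
```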
00:15:22.520 | Another problem that I've sort of put in my collection of,
00:15:27.140 | you know, it's like $30,000 for three,
00:15:29.580 | you know, for these three prizes about Rule 30,
00:15:33.460 | I would say that this is not one of those,
00:15:35.700 | this is one of those cases where
00:15:37.460 | the money is not the main point,
00:15:39.300 | but it's just, you know, helps motivate
00:15:43.860 | somehow the investigation.
00:15:46.100 | - So there's three problems you propose,
00:15:47.700 | you get $30,000 if you solve all three,
00:15:50.220 | or maybe, I don't know.
00:15:51.060 | - No, it's 10,000 for each.
00:15:52.540 | - For each, right.
00:15:53.380 | The problems, that's right, money's not the thing,
00:15:56.500 | the problems themselves are just clean
00:15:58.740 | - Yeah, right. - formulations of the challenge.
00:16:00.260 | - It's just, you know, will it ever become periodic?
00:16:03.260 | Second problem is, are there an equal number
00:16:05.460 | of black and white cells?
00:16:06.740 | - Down the middle column.
00:16:07.580 | - Down the middle column.
00:16:08.860 | And the third problem is a little bit harder to state,
00:16:10.580 | which is essentially, is there a way of figuring out
00:16:13.660 | what the color of a cell at position T
00:16:17.020 | down the center column is,
00:16:18.540 | with less computational effort than about T steps?
00:16:23.760 | So in other words, is there a way to jump ahead and say,
00:16:26.740 | I know what this is gonna do,
00:16:28.580 | it's just some mathematical function of T.
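The second problem is just as easy to probe, and the obvious cost of the third shows up in the same experiment: direct simulation of t steps touches on the order of t-squared cells in total. A quick tally of the black/white balance, reusing the big-integer update from the sketch above:

```python
def center_column(steps):
    row = 1 << steps
    col = []
    for _ in range(steps):
        col.append((row >> steps) & 1)
        row = (row >> 1) ^ (row | (row << 1))   # Rule 30, one row per step
    return col

col = center_column(10_000)
for t in (100, 1_000, 10_000):
    print(t, sum(col[:t]) / t)
# The fraction of black cells hovers near 1/2; problem 2 asks for a proof
# that it converges to exactly 1/2. And since getting col[t] this way costs
# ~t^2 cell updates, problem 3 asks whether anything can beat ~t effort.
```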
00:16:33.580 | - Or proving that there is no way.
00:16:35.420 | - Or proving there is no way, yes.
00:16:37.140 | But both, I mean, for any one of these,
00:16:39.500 | one could prove that, one could discover,
00:16:42.380 | we know what rule 30 does for a billion steps,
00:16:45.100 | but, and maybe we'll know for a trillion steps
00:16:47.460 | before too very long,
00:16:49.180 | but maybe at a quadrillion steps,
00:16:50.940 | it suddenly becomes repetitive.
00:16:52.700 | You might say, how could that possibly happen?
00:16:55.280 | But so when I was writing up these prizes,
00:16:57.660 | I thought, and this is typical of what happens
00:16:59.880 | in the computational universe,
00:17:00.940 | I thought, let me find an example
00:17:03.260 | where it looks like it's just gonna be random forever,
00:17:05.920 | but actually it becomes repetitive.
00:17:07.940 | And I found one.
00:17:09.060 | And it's just, I did a search, I searched, I don't know,
00:17:11.980 | maybe a million different rules with some criterion.
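That kind of search is easy to sketch in miniature. A toy version over just the 256 elementary rules, with a crude "looks mixed at first, settles into a cycle later" filter; the search described here ranged over a far larger rule space, so this small run may well turn up nothing:

```python
def evolve_center(rule, steps):
    """Center column of elementary CA `rule` from a single black cell."""
    table = [(rule >> n) & 1 for n in range(8)]
    width = 2 * steps + 3
    row = [0] * width
    mid = width // 2
    row[mid] = 1
    col = []
    for _ in range(steps):
        col.append(row[mid])
        row = [table[(row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % width]]
               for i in range(width)]
    return col

def eventual_period(seq):
    # Same brute-force period test as in the earlier sketch.
    n = len(seq)
    for p in range(1, n // 4 + 1):
        start = n - p
        while start > 0 and seq[start - 1] == seq[start - 1 + p]:
            start -= 1
        if start <= n // 2:
            return start, p
    return None

for rule in range(256):
    col = evolve_center(rule, 150)
    hit = eventual_period(col)
    # Flag rules with a mixed-looking start that nonetheless settle into a cycle.
    if hit and hit[1] > 4 and 10 < sum(col[:40]) < 30:
        print(rule, "period", hit[1], "from step", hit[0])
```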
00:17:15.340 | And this is, what's sort of interesting about that is,
00:17:19.060 | I kind of have this thing that I say
00:17:21.620 | in a kind of silly way about the computational universe,
00:17:23.860 | which is, the animals are always smarter than you are.
00:17:27.060 | That is, there's always some way
00:17:28.220 | one of these computational systems
00:17:29.620 | is gonna figure out how to do something,
00:17:31.480 | even though I can't imagine how it's gonna do it.
00:17:34.020 | And I didn't think I would find one that,
00:17:36.620 | you would think after all these years
00:17:37.820 | that when I found sort of all possible things,
00:17:40.120 | funky things, that I would have gotten my intuition wrapped
00:17:47.020 | around the idea that these creatures are always,
00:17:52.060 | in the computational universe,
00:17:53.140 | are always smarter than I'm gonna be.
00:17:55.820 | - Well, they're equivalently smart, right?
00:17:57.660 | - That's correct.
00:17:58.500 | And that makes it, that makes one feel very sort of,
00:18:02.300 | it's humbling every time.
00:18:04.340 | Because every time the thing is,
00:18:06.980 | you know, you think it's gonna do this,
00:18:08.420 | or it's not gonna be possible to do this.
00:18:10.380 | And it turns out it finds a way.
00:18:12.060 | - Of course, the promising thing is,
00:18:13.300 | there's a lot of other rules like Rule 30.
00:18:16.180 | It's just Rule 30 is--
00:18:18.220 | - Oh, it's my favorite 'cause I found it first.
00:18:20.180 | - Yeah, that's right.
00:18:21.060 | But the problems are focusing on Rule 30.
00:18:23.260 | It's possible that Rule 30 is repetitive
00:18:27.020 | after a trillion steps.
00:18:28.380 | - It is possible.
00:18:29.220 | - And that doesn't prove anything about the other rules.
00:18:31.100 | - It does not.
00:18:31.940 | - But this is a good sort of experiment
00:18:34.140 | of how you go about trying to prove something
00:18:36.700 | about a particular rule.
00:18:37.740 | - Yes, and it also, all these things help build intuition.
00:18:41.060 | That is, if it turned out that this was repetitive
00:18:43.500 | after a trillion steps, that's not what I would expect.
00:18:47.900 | And so we learn something from that.
00:18:49.820 | - The method to do that, though,
00:18:51.860 | would reveal something interesting about the--
00:18:54.340 | - No doubt, no doubt.
00:18:55.620 | I mean, although it's sometimes challenging,
00:18:58.740 | like I put out a prize in 2007
00:19:01.340 | for a particular Turing machine
00:19:05.020 | that was the simplest candidate
00:19:07.620 | for being a universal Turing machine.
00:19:09.860 | And a young chap in England named Alex Smith,
00:19:12.620 | after a smallish number of months said,
00:19:15.940 | "I've got a proof," and he did.
00:19:18.020 | It took a little while to iterate, but he had a proof.
00:19:20.900 | Unfortunately, the proof is very,
00:19:23.140 | it's a lot of micro details.
00:19:26.380 | It's not like you look at it and you say,
00:19:29.220 | "Aha, there's a big new principle."
00:19:32.220 | The big new principle is the simplest Turing machine
00:19:35.420 | that might have been universal actually is universal,
00:19:38.380 | and it's incredibly much simpler than the Turing machines
00:19:40.980 | that people already knew were universal before that.
00:19:43.460 | And so that, intuitionally, is important,
00:19:45.940 | 'cause it says computation universality
00:19:48.380 | is closer at hand than you might've thought.
00:19:51.100 | But the actual methods are not,
00:19:53.540 | in that particular case, were not terribly illuminating.
00:19:55.540 | - It would be nice if the methods would also be elegant.
00:19:58.500 | - That's true.
00:19:59.340 | Yeah, no, I mean, I think it's one of these things where,
00:20:02.220 | I mean, it's like a lot of, we've talked about earlier,
00:20:04.700 | kind of opening up AIs and machine learning and things
00:20:08.900 | of what's going on inside.
00:20:10.460 | And is it just step-by-step,
00:20:12.500 | or can you sort of see the bigger picture more abstractly?
00:20:15.700 | - It's unfortunate, I mean, with Fermat's last theorem proof,
00:20:18.540 | it's unfortunate that the proof
00:20:20.220 | to such an elegant theorem is not,
00:20:25.060 | I mean, it doesn't fit into the margins of a page.
00:20:29.500 | - That's true.
00:20:30.340 | But, you know, one of the things is,
00:20:31.620 | that's another consequence of computational irreducibility,
00:20:34.860 | this fact that there are even quite short results
00:20:39.020 | in mathematics whose proofs are arbitrarily long.
00:20:42.260 | That's a consequence of all this stuff.
00:20:44.220 | And it makes one wonder,
00:20:46.540 | how come mathematics is possible at all?
00:20:49.980 | Why is it the case?
00:20:51.940 | How have people managed to navigate doing mathematics
00:20:55.540 | through looking at things
00:20:56.740 | where they're not just thrown into, it's all undecidable?
00:20:59.860 | That's its own separate story.
00:21:03.980 | - And that would be, that would have a poetic beauty to it
00:21:08.980 | if people were to find something interesting about rule 30,
00:21:12.820 | because, I mean, there's an emphasis
00:21:15.980 | on this particular rule.
00:21:17.060 | It wouldn't say anything about the broad irreducibility
00:21:20.060 | of all computations, but it would nevertheless
00:21:22.780 | put a few smiles on people's faces of, yeah.
00:21:25.700 | - Well, yeah, but to me, it's like, in a sense,
00:21:30.540 | establishing principle of computational equivalence,
00:21:33.420 | it's a little bit like doing inductive science anywhere.
00:21:36.820 | That is, the more examples you find,
00:21:39.220 | the more convinced you are that it's generally true.
00:21:41.860 | I mean, we don't get to, whenever we do natural science,
00:21:45.380 | we say, well, it's true here that this or that happens.
00:21:49.300 | Can we prove that it's true everywhere in the universe?
00:21:51.980 | No, we can't.
00:21:53.380 | So, it's the same thing here.
00:21:56.220 | We're exploring the computational universe.
00:21:57.860 | We're establishing facts in the computational universe.
00:22:00.860 | And that's sort of a way
00:22:02.940 | of inductively concluding general things.