
Stephen Wolfram: Cellular Automata, Computation, and Physics | Lex Fridman Podcast #89


Chapters

0:00 Introduction
4:16 Communicating with an alien intelligence
12:11 Monolith in 2001: A Space Odyssey
29:06 What is computation?
44:54 Physics emerging from computation
74:10 Simulation
79:23 Fundamental theory of physics
88:01 Richard Feynman
99:57 Role of ego in science
107:21 Cellular automata
135:08 Wolfram Language
175:14 What is intelligence?
177:47 Consciousness
182:36 Mortality
185:47 Meaning of life


00:00:00.000 | The following is a conversation with Stephen Wolfram,
00:00:02.760 | a computer scientist, mathematician,
00:00:04.680 | and theoretical physicist,
00:00:06.240 | who is the founder and CEO of Wolfram Research,
00:00:09.800 | a company behind Mathematica, Wolfram Alpha,
00:00:12.600 | Wolfram Language, and the new Wolfram Physics Project.
00:00:16.360 | He's the author of several books,
00:00:18.040 | including "A New Kind of Science,"
00:00:20.480 | which on a personal note,
00:00:22.560 | was one of the most influential books in my journey
00:00:25.600 | in computer science and artificial intelligence.
00:00:28.980 | It made me fall in love with the mathematical beauty
00:00:31.560 | and power of cellular automata.
00:00:33.540 | It is true that perhaps one of the criticisms of Stephen
00:00:38.120 | is on a human level, that he has a big ego,
00:00:41.960 | which prevents some researchers
00:00:43.520 | from fully enjoying the content of his ideas.
00:00:46.240 | We talk about this point in this conversation.
00:00:49.120 | To me, ego can lead you astray,
00:00:51.520 | but can also be a superpower,
00:00:53.800 | one that fuels bold, innovative thinking
00:00:56.800 | that refuses to surrender to the cautious ways
00:01:00.040 | of academic institutions.
00:01:01.680 | And here, especially, I ask you to join me
00:01:05.240 | in looking past the peculiarities of human nature
00:01:08.140 | and opening your mind to the beauty of ideas
00:01:11.060 | in Stephen's work and in this conversation.
00:01:14.300 | I believe Stephen Wolfram
00:01:15.540 | is one of the most original minds of our time,
00:01:18.700 | and at the core, is a kind, curious,
00:01:21.340 | and brilliant human being.
00:01:23.140 | This conversation was recorded in November 2019,
00:01:26.580 | when the Wolfram Physics Project was underway,
00:01:28.980 | but not yet ready for public exploration as it is now.
00:01:32.680 | We've now agreed to talk again,
00:01:34.420 | probably multiple times in the near future,
00:01:36.940 | so this is round one, and stay tuned for round two soon.
00:01:40.520 | This is the Artificial Intelligence Podcast.
00:01:44.060 | If you enjoy it, subscribe on YouTube,
00:01:46.140 | review it with five stars on Apple Podcasts,
00:01:48.320 | support it on Patreon, or simply connect with me on Twitter,
00:01:51.260 | @lexfridman, spelled F-R-I-D-M-A-N.
00:01:54.940 | As usual, I'll do a few minutes of ads now,
00:01:57.420 | and never any ads in the middle
00:01:58.700 | that can break the flow of the conversation.
00:02:00.780 | I hope that works for you,
00:02:02.280 | and doesn't hurt the listening experience.
00:02:05.180 | Quick summary of the ads.
00:02:06.660 | Two sponsors, ExpressVPN and Cash App.
00:02:09.840 | Please consider supporting the podcast
00:02:11.500 | by getting ExpressVPN at expressvpn.com/lexpod
00:02:16.160 | and downloading Cash App and using code LEXPODCAST.
00:02:20.020 | This show is presented by Cash App,
00:02:23.340 | the number one finance app in the App Store.
00:02:25.540 | When you get it, use code LEXPODCAST.
00:02:28.500 | Cash App lets you send money to friends,
00:02:30.300 | buy Bitcoin, and invest in the stock market
00:02:32.580 | with as little as $1.
00:02:34.420 | Since Cash App does fractional share trading,
00:02:36.940 | let me mention that the order execution algorithm
00:02:39.640 | that works behind the scenes
00:02:40.820 | to create the abstraction of fractional orders
00:02:43.340 | is an algorithmic marvel.
00:02:45.340 | So big props to the Cash App engineers
00:02:47.220 | for solving a hard problem
00:02:48.820 | that in the end provides an easy interface
00:02:51.200 | that takes a step up to the next layer of abstraction
00:02:53.860 | over the stock market.
00:02:55.420 | This makes trading more accessible for new investors
00:02:58.980 | and diversification much easier.
00:03:01.460 | So again, if you get Cash App from the App Store,
00:03:03.860 | Google Play, and use the code LEXPODCAST,
00:03:07.020 | you get $10, and Cash App will also donate $10 to FIRST,
00:03:11.060 | an organization that is helping to advance robotics
00:03:13.620 | and STEM education for young people around the world.
00:03:16.300 | This show is presented by ExpressVPN.
00:03:20.580 | Get it at expressvpn.com/lexpod
00:03:24.980 | to get a discount and to support this podcast.
00:03:27.780 | I've been using ExpressVPN for many years.
00:03:30.580 | I love it.
00:03:31.860 | It's really easy to use.
00:03:33.020 | Press the big power on button and your privacy is protected.
00:03:36.220 | And if you like, you can make it look like your location
00:03:39.620 | is anywhere else in the world.
00:03:41.740 | This has a large number of obvious benefits.
00:03:44.500 | Certainly, it allows you to access international versions
00:03:47.180 | of streaming websites like the Japanese Netflix
00:03:50.340 | or the UK Hulu.
00:03:51.680 | ExpressVPN works on any device you can imagine.
00:03:55.800 | I use it on Linux.
00:03:57.260 | Shout out to Ubuntu.
00:03:59.060 | New version coming out soon, actually.
00:04:01.180 | Windows, Android, but it's available anywhere else too.
00:04:04.600 | Once again, get it at expressvpn.com/lexpod
00:04:08.900 | to get a discount and to support this podcast.
00:04:11.520 | And now, here's my conversation with Stephen Wolfram.
00:04:16.020 | You and your son, Christopher,
00:04:18.540 | helped create the alien language in the movie "Arrival."
00:04:22.060 | So let me ask maybe a bit of a crazy question,
00:04:25.340 | but if aliens were to visit us on Earth,
00:04:27.980 | do you think we would be able to find a common language?
00:04:31.460 | - Well, by the time we're saying aliens are visiting us,
00:04:36.060 | we've already prejudiced the whole story
00:04:38.300 | because the concept of an alien actually visiting,
00:04:42.460 | so to speak, we already know they're kind of things
00:04:45.860 | that make sense to talk about visiting.
00:04:48.140 | So we already know they exist
00:04:49.520 | in the same kind of physical setup that we do.
00:04:52.820 | They're not, you know, it's not just radio signals.
00:04:57.540 | It's an actual thing that shows up and so on.
00:05:01.300 | So I think in terms of, you know,
00:05:03.260 | can one find ways to communicate?
00:05:05.960 | Well, the best example we have of this right now is AI.
00:05:10.380 | I mean, that's our first sort of example
00:05:11.980 | of alien intelligence.
00:05:13.680 | And the question is, how well do we communicate with AI?
00:05:17.020 | You know, if you were to say,
00:05:18.460 | if you were in the middle of a neural net
00:05:20.420 | and you open it up and it's like, what are you thinking?
00:05:23.780 | Can you discuss things with it?
00:05:26.140 | It's not easy, but it's not absolutely impossible.
00:05:29.480 | So I think by the time,
00:05:31.020 | but given the setup of your question, aliens visiting,
00:05:35.540 | I think the answer is yes,
00:05:37.620 | one will be able to find some form of communication,
00:05:39.660 | whatever communication means,
00:05:40.940 | communication requires notions of purpose
00:05:43.260 | and things like this.
00:05:44.780 | It's a kind of philosophical quagmire.
00:05:46.980 | - So if AI is a kind of alien life form,
00:05:50.660 | what do you think visiting looks like?
00:05:53.940 | So if we look at aliens visiting
00:05:57.100 | and we'll get to discuss computation
00:05:59.660 | and the world of computation,
00:06:01.260 | but if you were to imagine,
00:06:03.060 | you said you already prejudiced something
00:06:04.740 | by saying you visit, but how would aliens visit?
00:06:09.580 | - By visit, there's kind of an implication
00:06:12.060 | and here we're using the imprecision of human language,
00:06:15.300 | you know, in a world of the future.
00:06:16.980 | And if that's represented in computational language,
00:06:20.040 | we might be able to take the concept visit
00:06:23.860 | and go look in the documentation basically
00:06:26.140 | and find out exactly what does that mean?
00:06:27.740 | What properties does it have and so on?
00:06:29.540 | But by visit in ordinary human language,
00:06:32.540 | I'm kind of taking it to be,
00:06:34.760 | there's, you know, something, a physical embodiment
00:06:38.660 | that shows up in a spacecraft
00:06:40.260 | since we kind of know that that's necessary.
00:06:44.580 | We're not imagining it's just, you know,
00:06:47.940 | photons showing up in a radio signal that,
00:06:50.180 | you know, photons in some very elaborate pattern.
00:06:53.340 | We're imagining it's physical things made of atoms
00:06:56.960 | and so on that show up.
00:06:58.980 | - Can it be photons in a pattern?
00:07:01.220 | - Well, that's a good question.
00:07:02.840 | I mean, whether there is the possibility,
00:07:05.180 | you know, what counts as intelligence?
00:07:07.540 | Good question.
00:07:08.620 | I mean, it's, you know,
00:07:10.740 | and I used to think there was sort of a,
00:07:13.380 | oh, there'll be, you know, it'll be clear what it means
00:07:15.660 | to find extraterrestrial intelligence,
00:07:17.380 | et cetera, et cetera, et cetera.
00:07:18.700 | I've increasingly realized as a result of science
00:07:21.380 | that I've done that there really isn't a bright line
00:07:24.140 | between the intelligent
00:07:26.340 | and the merely computational, so to speak.
00:07:28.980 | So, you know, in our kind of everyday sort of discussion,
00:07:32.460 | we'll say things like, you know,
00:07:33.300 | the weather has a mind of its own.
00:07:35.300 | Well, let's unpack that question.
00:07:37.740 | You know, we realize that there are computational processes
00:07:41.220 | that go on that determine the fluid dynamics
00:07:43.580 | of this and that in the atmosphere,
00:07:45.540 | et cetera, et cetera, et cetera.
00:07:47.020 | How do we distinguish that from the processes
00:07:50.220 | that go on in our brains of, you know,
00:07:52.100 | the physical processes that go on in our brains?
00:07:54.020 | How do we separate those?
00:07:56.260 | How do we say the physical processes going on
00:07:59.980 | that represent sophisticated computations in the weather?
00:08:03.020 | Oh, that's not the same as the physical processes
00:08:05.220 | that go on that represent sophisticated computations
00:08:07.500 | in our brains.
00:08:08.660 | The answer is I don't think there is
00:08:10.540 | a fundamental distinction.
00:08:11.820 | I think the distinction for us
00:08:13.860 | is that there's kind of a thread of history and so on
00:08:17.620 | that connects kind of what happens in different brains
00:08:21.420 | to each other, so to speak.
00:08:23.020 | And it's a, you know, what happens in the weather
00:08:25.300 | is something which is not connected
00:08:26.940 | by sort of a thread of civilizational history, so to speak,
00:08:31.820 | to what we're used to.
00:08:33.220 | - In our story, in the stories
00:08:34.860 | that the human brain has told us,
00:08:36.020 | but maybe the weather has its own stories.
00:08:38.140 | - Absolutely, absolutely.
00:08:40.100 | And that's where we run into trouble
00:08:42.220 | thinking about extraterrestrial intelligence
00:08:44.220 | because, you know, it's like that pulsar magnetosphere
00:08:48.220 | that's generating these very elaborate radio signals.
00:08:51.180 | You know, is that something that we should think of
00:08:53.380 | as being this whole civilization
00:08:55.060 | that's developed over the last however long,
00:08:57.780 | you know, millions of years of processes going on
00:09:01.340 | in the neutron star or whatever
00:09:03.860 | versus what, you know, what we're used to
00:09:06.300 | in human intelligence.
00:09:07.740 | I mean, I think it's a, I think in the end,
00:09:09.860 | you know, when people talk
00:09:10.820 | about extraterrestrial intelligence and where is it
00:09:13.020 | and the whole, you know, Fermi paradox
00:09:14.940 | of how come there's no other signs of intelligence
00:09:18.420 | in the universe, my guess is that we've got sort of two
00:09:21.620 | alien forms of intelligence that we're dealing with,
00:09:26.180 | artificial intelligence and sort of physical
00:09:29.380 | or extraterrestrial intelligence.
00:09:31.580 | And my guess is people will sort of get comfortable
00:09:34.100 | with the fact that both of these have been achieved
00:09:37.220 | around the same time.
00:09:39.020 | And in other words, people will say,
00:09:41.220 | well, yes, we're used to computers,
00:09:44.100 | things we've created, digital things we've created
00:09:46.220 | being sort of intelligent like we are.
00:09:48.260 | And they'll say, oh, we're kind of also used to the idea
00:09:50.660 | that there are things around the universe
00:09:52.460 | that are kind of intelligent like we are,
00:09:55.020 | except they don't share the sort of civilizational history
00:09:59.100 | that we have.
00:10:00.140 | And so we don't, you know, they're a different branch.
00:10:03.740 | I mean, it's similar to when you talk about life,
00:10:05.940 | for instance.
00:10:06.780 | I mean, you kind of said life form,
00:10:08.900 | I think almost synonymously with intelligence,
00:10:11.580 | which I don't think is, you know,
00:10:15.300 | the AIs would be upset to hear you equate those two things.
00:10:19.060 | - Because they really probably implied biological life.
00:10:23.180 | - Right, right.
00:10:24.300 | - But you're saying, I mean, we'll explore this more,
00:10:27.060 | but you're saying it's really a spectrum
00:10:28.580 | and it's all just a kind of computation.
00:10:30.860 | And so it's a full spectrum and we just make ourselves
00:10:35.540 | special by weaving a narrative around
00:10:38.580 | our particular kinds of computation.
00:10:40.620 | - Yes, I mean, the thing that I think
00:10:42.620 | I've kind of come to realize is, you know,
00:10:44.860 | at some level it's a little depressing to realize
00:10:46.780 | that there's so little that's special about us.
00:10:48.820 | - Or liberating.
00:10:50.260 | - Well, yeah, but I mean, it's, you know,
00:10:51.380 | it's the story of science, right?
00:10:52.780 | And, you know, from Copernicus on, it's like, you know,
00:10:56.180 | first we were like convinced our planets
00:10:58.660 | are the center of the universe.
00:11:00.300 | No, that's not true.
00:11:01.140 | Well, then we were convinced there's something
00:11:02.860 | very special about the chemistry that we have
00:11:05.460 | as biological organisms.
00:11:07.020 | No, that's not really true.
00:11:08.420 | And then we're still holding out that hope.
00:11:10.700 | Oh, this intelligence thing we have, that's really special.
00:11:14.260 | I don't think it is.
00:11:15.340 | However, in a sense, as you say,
00:11:17.780 | it's kind of liberating for the following reason,
00:11:19.540 | that you realize that what's special is the details of us,
00:11:24.540 | not some abstract attribute that, you know,
00:11:29.820 | we could wonder, oh, is something else gonna come along
00:11:32.340 | and, you know, also have that abstract attribute?
00:11:35.300 | Well, yes, every abstract attribute we have,
00:11:37.740 | something else has it.
00:11:39.300 | But the full details of our kind of history
00:11:42.860 | of our civilization and so on, nothing else has that.
00:11:45.340 | That's what, you know, that's our story, so to speak.
00:11:49.020 | And that's sort of almost by definition special.
00:11:52.740 | So I view it as not being such a, I mean, I was,
00:11:56.300 | initially I was like, this is bad.
00:11:58.300 | This is kind of, you know, how can we have self-respect
00:12:01.580 | about the things that we do?
00:12:04.260 | Then I realized the details of the things we do,
00:12:06.820 | they are the story.
00:12:08.140 | Everything else is kind of a blank canvas.
00:12:10.240 | - So maybe on a small tangent, you just made me think of it,
00:12:15.800 | but what do you make of the monoliths in "2001 Space Odyssey"
00:12:19.860 | in terms of aliens communicating with us
00:12:23.020 | and sparking the kind of particular intelligent computation
00:12:28.020 | that we humans have?
00:12:29.460 | Is there anything interesting to get from that
00:12:33.920 | sci-fi?
00:12:35.900 | - Yeah, I mean, I think what's fun about that is,
00:12:39.120 | you know, the monoliths are these, you know,
00:12:40.820 | one to four to nine perfect cuboid things.
00:12:44.060 | And in the, you know, Earth a million years ago,
00:12:47.100 | whatever they were portraying with a bunch of apes
00:12:49.420 | and so on, a thing that has that level of perfection
00:12:53.460 | seems out of place.
00:12:54.940 | It seems very kind of constructed, very engineered.
00:12:59.300 | So that's an interesting question.
00:13:01.540 | What is the, you know, what's the techno signature?
00:13:03.860 | So to speak, what is it that you see it somewhere
00:13:07.340 | and you say, my gosh, that had to be engineered.
00:13:09.860 | Now, the fact is we see crystals,
00:13:13.540 | which are also very perfect.
00:13:15.260 | And you know, the perfect ones are very perfect.
00:13:17.820 | They're nice polyhedra or whatever.
00:13:20.340 | And so in that sense, if you say, well,
00:13:22.540 | it's a sign of sort of, it's a techno signature
00:13:26.340 | that it's a perfect, you know,
00:13:28.580 | a perfect polygonal shape, polyhedral shape.
00:13:31.300 | That's not true.
00:13:32.500 | And so then it's an interesting question.
00:13:35.100 | What is the, you know, what is the right signature?
00:13:38.360 | I mean, like, you know, Gauss, famous mathematician,
00:13:41.700 | you know, he had this idea,
00:13:43.140 | you should cut down the Siberian forest
00:13:45.340 | in the shape of sort of a typical image
00:13:47.820 | of the proof of the Pythagorean theorem
00:13:50.060 | on the grounds that, it was a kind of cool idea,
00:13:52.340 | didn't get done, but you know,
00:13:54.340 | it was on the grounds that the Martians would see that
00:13:56.980 | and realize, gosh, there are mathematicians out there.
00:14:00.420 | It's kind of, you know, it's the,
00:14:01.860 | in his theory of the world,
00:14:03.160 | that was probably the best advertisement
00:14:04.740 | for the cultural achievements of our species.
00:14:07.660 | But you know, it's a reasonable question.
00:14:10.980 | What do you, what can you send or create
00:14:15.100 | that is a sign of intelligence in its creation
00:14:18.460 | or even intention in its creation?
00:14:21.060 | - Yeah, you talk about if we were to send a beacon,
00:14:24.420 | can you, what should we send?
00:14:26.740 | Is math our greatest creation?
00:14:28.740 | Is, what is our greatest creation?
00:14:31.180 | - I think, I think, and it's a,
00:14:32.860 | it's a philosophically doomed issue to, I mean,
00:14:36.100 | in other words, you send something,
00:14:37.780 | you think it's fantastic,
00:14:39.620 | but it's kind of like, we are part of the universe.
00:14:42.900 | We make things that are, you know,
00:14:44.860 | things that happen in the universe.
00:14:47.140 | Computation, which is sort of the thing that we are,
00:14:50.780 | in some abstract sense,
00:14:52.140 | in a sense, using to create all these
00:14:54.180 | elaborate things we create, is surprisingly ubiquitous.
00:14:59.180 | In other words, we might've thought that, you know,
00:15:02.220 | we've built this whole giant engineering stack
00:15:05.380 | that's led us to microprocessors,
00:15:07.020 | that's led us to be able to do elaborate computations,
00:15:10.580 | but this idea, the computations are happening
00:15:13.820 | all over the place.
00:15:15.060 | The only question is whether there's a thread
00:15:17.700 | that connects our human intentions
00:15:20.660 | to what those computations are.
00:15:22.740 | And so I think, I think this question of what do you send
00:15:25.780 | to kind of show off our civilization
00:15:28.860 | in the best possible way,
00:15:30.500 | I think any kind of almost random slab
00:15:33.820 | of stuff we've produced
00:15:35.820 | is about equivalent to everything else.
00:15:38.380 | I think it's one of these things where-
00:15:40.060 | - Such a non-romantic way of phrasing it.
00:15:43.100 | I just, sorry to interrupt,
00:15:44.620 | but I just talked to Anne Druyan,
00:15:47.260 | who's the wife of Carl Sagan.
00:15:49.820 | And so, I don't know if you're familiar with "The Voyager,"
00:15:52.500 | I mean, she was part of sending, I think,
00:15:55.380 | brainwaves of, you know, I want you to-
00:15:58.420 | - Wasn't it hers?
00:15:59.380 | - It was hers. - Her family.
00:16:00.780 | - Her brainwaves when she was first falling in love
00:16:02.620 | with Carl Sagan, right?
00:16:03.660 | So this beautiful story.
00:16:05.100 | (laughing)
00:16:06.180 | - Right.
00:16:07.020 | - That perhaps you would shut down the power of that
00:16:10.620 | by saying we might as well send anything else,
00:16:12.460 | and that's interesting.
00:16:14.060 | All of it is kind of an interesting, peculiar thing
00:16:17.460 | that's- - Yeah, yeah, right.
00:16:18.780 | Well, I mean, I think it's kind of interesting, too,
00:16:20.220 | on the "Voyager" golden record thing,
00:16:23.100 | one of the things that's kind of cute about that is,
00:16:25.580 | you know, it was made, when was it,
00:16:26.820 | in the late '70s, early '80s.
00:16:28.860 | - Yeah.
00:16:29.900 | - And, you know, one of the things,
00:16:31.220 | it's a phonograph record, okay?
00:16:33.660 | And it has a diagram of how to play a phonograph record.
00:16:35.940 | - Yeah, it does.
00:16:36.780 | - And, you know, it's kind of like,
00:16:38.540 | it's shocking that in just 30 years,
00:16:41.460 | if you show that to a random kid of today
00:16:43.820 | and you show them that diagram,
00:16:44.940 | and I've tried this experiment,
00:16:46.460 | they're like, "I don't know what the heck this is."
00:16:49.220 | And the best anybody can think of is,
00:16:51.260 | you know, take the whole record,
00:16:52.500 | forget the fact that it has some kind of helical track in it,
00:16:55.420 | just image the whole thing and see what's there.
00:16:58.260 | That's what we would do today.
00:16:59.820 | In only 30 years, our technology has kind of advanced
00:17:03.460 | to the point where the playing of a helical,
00:17:05.820 | you know, mechanical track on a phonograph record
00:17:09.060 | is now something bizarre.
00:17:10.820 | So, you know, that's a cautionary tale, I would say,
00:17:14.220 | in terms of the ability to make something
00:17:17.940 | that in detail sort of leads by the nose some,
00:17:22.060 | you know, the aliens or whatever to do something.
00:17:24.820 | It's like, no, you know, best you're gonna do, as I say,
00:17:28.260 | if we were doing this today,
00:17:29.980 | we would not build a helical scan thing with a needle.
00:17:33.980 | We would just take some high-resolution imaging system
00:17:37.060 | and get all the bits off it and say,
00:17:38.980 | "Oh, it's a big nuisance that they put in a helix,
00:17:41.380 | "you know, in a spiral.
00:17:42.820 | "Let's just, you know, unravel the spiral
00:17:46.780 | "and start from there."
00:17:49.500 | - Do you think, and this will get into trying to figure out
00:17:54.180 | interpretability of AI, interpretability of computation,
00:17:58.300 | being able to communicate
00:18:00.500 | with various kinds of computations.
00:18:02.580 | Do you think we'd be able to,
00:18:03.940 | if you put your alien hat on, figure out this record,
00:18:08.940 | how to play this record?
00:18:10.620 | - Well, it's a question of what one wants to do.
00:18:13.660 | I mean--
00:18:14.500 | - Understand what the other party was trying to communicate
00:18:18.020 | or understand anything about the other party.
00:18:20.420 | - What does understanding mean?
00:18:21.980 | I mean, that's the issue.
00:18:22.940 | The issue is, it's like when people were trying to do
00:18:25.900 | natural language understanding for computers, right?
00:18:28.580 | So people tried to do that for years.
00:18:31.780 | It wasn't clear what it meant.
00:18:33.660 | In other words, you take your piece of English or whatever
00:18:37.060 | and you say, "Gosh, my computer has understood this."
00:18:40.300 | Okay, that's nice.
00:18:41.740 | What can you do with that?
00:18:43.220 | Well, so for example, when we did, you know,
00:18:45.980 | built Wolfram Alpha, you know, one of the things was,
00:18:50.060 | it's, you know, it's doing question answering and so on,
00:18:52.300 | and it needs to do natural language understanding.
00:18:54.500 | The reason that I realized after the fact,
00:18:57.660 | the reason we were able to do
00:18:58.820 | natural language understanding quite well,
00:19:01.260 | and people hadn't before,
00:19:03.340 | the number one thing was we had an actual objective
00:19:06.860 | for the natural language understanding.
00:19:08.100 | We were trying to turn the natural language--
00:19:09.860 | - Into computation.
00:19:10.700 | - Into this computational language
00:19:12.500 | that we could then do things with.
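To make that concrete, here is a toy sketch in Python of what "understanding with an objective" can mean: a question counts as understood exactly when it can be translated into a symbolic expression that can then be evaluated. The three-word grammar, the `understand` function, and the `OPS` table are illustrative inventions here; this is nothing like the actual Wolfram Alpha pipeline.

```python
import operator
import re

# Tiny "computational language": each recognized phrase maps to an operation.
OPS = {"plus": operator.add, "minus": operator.sub, "times": operator.mul}

def understand(question):
    """Translate 'what is <n> <op> <n>' into a computation; None otherwise."""
    m = re.match(r"what is (\d+) (plus|minus|times) (\d+)$", question.lower())
    if m is None:
        return None  # outside the toy grammar: no operational "understanding"
    a, op, b = m.groups()
    return OPS[op](int(a), int(b))

print(understand("What is 2 plus 3"))  # -> 5
print(understand("Why are we here?"))  # -> None
```

The point of the sketch is the objective: "did the machine understand?" becomes the answerable question "did the translation produce something we could then compute with?"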
00:19:14.340 | Now, similarly, when you imagine your alien, you say,
00:19:17.020 | "Okay, we're playing them the record.
00:19:18.940 | "Did they understand it?"
00:19:20.660 | Well, depends what you mean.
00:19:22.060 | If they, you know, if we,
00:19:23.180 | if there's a representation that they have,
00:19:25.660 | if it converts to some representation where we can say,
00:19:28.540 | "Oh yes, that is a,
00:19:30.340 | "that's a representation that we can recognize
00:19:33.380 | "is represents understanding, then all well and good."
00:19:36.740 | But actually the only ones that I think we can say
00:19:39.820 | would represent understanding
00:19:41.460 | are ones that will then do things
00:19:43.420 | that we humans kind of recognize as being useful to us.
00:19:47.660 | - Maybe trying to understand,
00:19:50.340 | quantify how technologically advanced
00:19:52.900 | this particular civilization is.
00:19:55.020 | So are they a threat to us from a military perspective?
00:19:58.820 | - Yeah, yeah.
00:19:59.660 | - That's probably the kind of,
00:20:00.660 | first kind of understanding that I'll be interested in.
00:20:03.340 | - Gosh, that's so hard.
00:20:04.260 | I mean, that's like in the Arrival movie,
00:20:06.100 | that was sort of one of the key questions is,
00:20:08.820 | you know, why are you here, so to speak?
00:20:10.820 | And it's--
00:20:11.660 | - Are you gonna hurt us?
00:20:12.740 | - Right, but even that is, you know, it's a very unclear,
00:20:15.660 | you know, it's like the, are you gonna hurt us?
00:20:17.620 | That comes back to a lot of interesting AI ethics questions
00:20:20.340 | because the, you know, we might make an AI that says,
00:20:24.020 | "Well, take autonomous cars, for instance,
00:20:26.620 | "you know, are you gonna hurt us?"
00:20:27.900 | Well, let's make sure you only drive
00:20:29.980 | at precisely the speed limit
00:20:31.380 | because we wanna make sure we don't hurt you, so to speak,
00:20:33.500 | because that's some, and then, well, something, you know,
00:20:36.420 | but you say, "But actually that means
00:20:37.740 | "I'm gonna be really late for this thing."
00:20:39.300 | And, you know, that sort of hurts me in some way.
00:20:42.220 | So it's hard to know, even the definition
00:20:45.380 | of what it means to hurt someone is unclear.
00:20:50.380 | And as we start thinking about things about AI ethics
00:20:53.380 | and so on, that's, you know, something one has to address.
00:20:56.980 | - There's always trade-offs,
00:20:58.140 | and that's the annoying thing about ethics.
00:21:00.380 | - Yeah, well, right, and I mean, I think ethics,
00:21:02.460 | like these other things we're talking about,
00:21:04.340 | is a deeply human thing.
00:21:06.060 | There's no abstract, you know,
00:21:08.300 | let's write down the theorem
00:21:10.100 | that proves that this is ethically correct.
00:21:13.100 | That's a meaningless idea.
00:21:15.460 | You know, you have to have a ground truth, so to speak,
00:21:18.600 | that's ultimately sort of what humans want,
00:21:21.980 | and they don't all want the same thing.
00:21:23.900 | So that gives one all kinds of additional complexity
00:21:26.140 | in thinking about that.
00:21:27.940 | - One convenient thing in terms of turning ethics
00:21:30.580 | into computation, you can ask the question
00:21:32.340 | of what maximizes the likelihood
00:21:35.740 | of the survival of the species?
00:21:38.400 | - Yeah, that's a good existential issue.
00:21:41.640 | But then when you say survival of the species, right,
00:21:45.080 | you might say, you might, for example,
00:21:48.200 | for example, let's say, forget about technology,
00:21:51.360 | just, you know, hang out and, you know, be happy,
00:21:54.480 | live our lives, go on to the next generation,
00:21:56.960 | you know, go through many, many generations
00:21:59.060 | where, in a sense, nothing is happening.
00:22:01.880 | That okay?
00:22:02.720 | Is that not okay?
00:22:03.560 | Hard to know.
00:22:04.640 | In terms of, you know, the attempt to do elaborate things
00:22:09.440 | and the attempt to might be counterproductive
00:22:13.120 | for the survival of the species.
00:22:15.680 | Like for instance, I mean, in, you know,
00:22:17.920 | I think it's also a little bit hard to know,
00:22:20.340 | so okay, let's take that as a sort of thought experiment.
00:22:23.640 | Okay?
00:22:24.480 | You know, you can say, well, what are the threats
00:22:28.200 | that we might have to survive?
00:22:29.760 | You know, the super volcano, the asteroid impact,
00:22:32.820 | the, you know, all these kinds of things.
00:22:35.040 | Okay, so now we inventory these possible threats
00:22:37.920 | and we say, let's make our species as robust as possible
00:22:41.120 | relative to all these threats.
00:22:42.900 | I think in the end, it's a, it's sort of an unknowable thing
00:22:47.100 | what it takes to, you know, so given that
00:22:51.680 | you've got this AI and you've told it,
00:22:54.840 | maximize the long-term, what does long-term mean?
00:22:58.840 | Does long-term mean until the sun burns out?
00:23:01.560 | That's not gonna work.
00:23:03.680 | You know, does long-term mean next thousand years?
00:23:07.160 | Okay, there are probably optimizations
00:23:08.780 | for the next thousand years that it's like,
00:23:12.280 | it's like if you're running a company,
00:23:13.480 | you can make a company be very stable
00:23:15.280 | for a certain period of time.
00:23:16.920 | Like if, you know, if your company gets bought
00:23:19.920 | by some, you know, private investment group,
00:23:22.400 | then they'll, you know, you can run a company just fine
00:23:25.960 | for five years by just taking what it does
00:23:28.720 | and, you know, removing all R&D
00:23:31.200 | and the company will burn out after a while,
00:23:34.600 | but it'll run just fine for a little while.
00:23:36.360 | So if you tell the AI, keep the humans okay
00:23:38.840 | for a thousand years, there's probably a certain set
00:23:41.040 | of things that one would do to optimize that,
00:23:42.960 | many of which one might say, well,
00:23:45.120 | that would be a pretty big shame for the future of history,
00:23:47.240 | so to speak, for that to be what happens.
00:23:49.200 | But I think, I think in the end, you know,
00:23:50.960 | as you start thinking about that question,
00:23:53.080 | it is what you realize is there's a whole sort of raft
00:23:58.280 | of undecidability, computational irreducibility.
00:24:01.240 | In other words, it's, I mean, one of the good things
00:24:04.320 | about sort of the, what our civilization has gone through
00:24:09.280 | and what sort of we humans go through
00:24:11.560 | is that there's a certain computational irreducibility
00:24:14.060 | to it in the sense that it isn't the case
00:24:16.400 | that you can look from the outside and just say,
00:24:18.560 | the answer is gonna be this.
00:24:20.360 | At the end of the day, this is what's gonna happen.
00:24:22.520 | You actually have to go through the process to find out.
00:24:25.240 | And I think that's both, that feels better in the sense
00:24:28.960 | it's not a, you know, something is achieved
00:24:31.320 | by going through all of this process.
00:24:35.640 | And it's, but it also means that telling the AI,
00:24:40.640 | go figure out, you know, what will be the best outcome?
00:24:44.160 | Well, unfortunately, it's gonna come back and say,
00:24:45.960 | it's kind of undecidable what to do.
00:24:48.320 | We'd have to run all of those scenarios to see what happens.
00:24:52.320 | And if we want it for the infinite future,
00:24:55.280 | we're thrown immediately into sort of standard issues
00:24:58.640 | of kind of infinite computation and so on.
00:25:01.140 | - So yeah, even if you get that the answer to the universe
00:25:03.840 | and everything is 42,
00:25:05.040 | you still have to actually run the universe.
00:25:08.360 | - Yes.
00:25:09.200 | - To figure out like the question, I guess,
00:25:12.640 | or the, you know, the journey is the point.
00:25:16.720 | - Right, well, I think it's saying to summarize,
00:25:19.600 | this is the result of the universe.
00:25:21.600 | - Yeah.
00:25:22.440 | - That's, if that is possible, it tells us, I mean,
00:25:25.800 | the whole sort of structure of thinking about computation
00:25:28.800 | and so on, and thinking about how stuff works.
00:25:32.000 | If it's possible to say, and the answer is such and such,
00:25:35.960 | you're basically saying there's a way
00:25:37.440 | of going outside the universe.
00:25:39.440 | And you're kind of, you're getting yourself
00:25:40.920 | into something of a sort of paradox,
00:25:42.880 | because you're saying, if it's knowable, what the answer is,
00:25:46.760 | then there's a way to know it,
00:25:48.420 | that is beyond what the universe provides.
00:25:51.120 | But if we can know it, then something that we're dealing
00:25:54.240 | with is beyond the universe.
00:25:56.000 | So then the universe isn't the universe, so to speak.
00:26:01.000 | - And in general, as we'll talk about,
00:26:04.160 | at least for our small human brains,
00:26:07.280 | it's hard to show what the result
00:26:10.880 | of a sufficiently complex computation will be.
00:26:13.680 | It's hard, I mean, it's probably impossible, right?
00:26:16.280 | Undecidability, so.
00:26:19.000 | And the universe appears, at least to the poets,
00:26:24.000 | to be sufficiently complex that we won't be able
00:26:26.360 | to predict what the heck it's all going to do.
00:26:30.320 | - Well, we better not be able to,
00:26:31.360 | because if we can, it kind of denies,
00:26:33.880 | I mean, we're part of the universe.
00:26:36.720 | So what does it mean for us to predict?
00:26:38.820 | It means that we, that our little part of the universe
00:26:42.120 | is able to jump ahead of the whole universe.
00:26:44.560 | And this quickly winds up, I mean,
00:26:48.520 | it is conceivable, the only way we'd be able to predict
00:26:52.240 | is if we are so special in the universe,
00:26:54.440 | we are the one place where there is computation
00:26:57.600 | more special, more sophisticated than anything else
00:27:00.260 | that exists in the universe.
00:27:01.120 | That's the only way we would have the ability to,
00:27:04.560 | sort of the almost theological ability, so to speak,
00:27:07.960 | to predict what happens in the universe,
00:27:10.760 | is to say somehow we're better
00:27:12.800 | than everything else in the universe,
00:27:14.520 | which I don't think is the case.
00:27:16.880 | - Yeah, perhaps we can detect a large number
00:27:20.280 | of looping patterns that reoccur throughout the universe
00:27:25.120 | and fully describe them, and therefore,
00:27:27.560 | but then it still becomes exceptionally difficult
00:27:30.120 | to see how those patterns interact
00:27:32.200 | and what kind of complexity emerges.
00:27:34.040 | - The most remarkable thing about the universe
00:27:36.340 | is that it has regularity at all.
00:27:39.520 | Might not be the case.
00:27:41.200 | - Does it have regularity?
00:27:42.920 | - Absolutely, it's full of, I mean, physics is successful.
00:27:46.320 | You know, it's full of laws that tell us a lot of detail
00:27:50.880 | about how the universe works.
00:27:52.200 | I mean, it could be the case that, you know,
00:27:54.280 | the 10 to the 90th particles in the universe,
00:27:55.960 | they all do their own thing, but they don't.
00:27:58.200 | They all follow, we already know,
00:28:00.120 | they all follow basically the same physical laws,
00:28:03.440 | and that's something, that's a very profound fact
00:28:06.960 | about the universe.
00:28:08.280 | What conclusion you draw from that is unclear.
00:28:10.440 | I mean, in the early theologians,
00:28:13.920 | that was, you know, exhibit number one
00:28:16.180 | for the existence of God.
00:28:18.320 | Now, you know, people have different conclusions about it,
00:28:21.160 | but the fact is, you know, right now,
00:28:23.520 | I mean, I happen to be interested, actually.
00:28:25.200 | I've just restarted a long-running kind of interest of mine
00:28:29.100 | about fundamental physics.
00:28:31.220 | I'm kind of like, I'm on a bit of a quest,
00:28:34.500 | which I'm about to make more public
00:28:37.560 | to see if I can actually find
00:28:39.240 | the fundamental theory of physics.
00:28:40.640 | - Excellent.
00:28:41.480 | We'll come to that, and I just had a lot of conversations
00:28:46.240 | with quantum mechanics folks,
00:28:48.280 | so I'm really excited on your take,
00:28:50.760 | 'cause I think you have a fascinating take
00:28:53.000 | on the fundamental nature of our reality
00:28:57.580 | from a physics perspective,
00:28:59.480 | and what might be underlying the kind of physics
00:29:03.360 | as we think of it today.
00:29:04.760 | Okay, let's take a step back.
00:29:06.800 | What is computation?
00:29:09.440 | - That's a good question.
00:29:10.820 | Operationally, computation is following rules.
00:29:14.180 | That's kind of it.
00:29:16.460 | I mean, computation is the result,
00:29:18.720 | is the process of systematically following rules,
00:29:21.840 | and it is the thing that happens when you do that.
00:29:24.880 | - So taking initial conditions,
00:29:26.360 | or taking inputs and following rules,
00:29:29.160 | I mean, what are you following rules on?
00:29:31.960 | So there has to be some data, some--
00:29:34.640 | - Not necessarily.
00:29:35.460 | It can be something where there's a very simple input,
00:29:39.940 | and then you're following these rules,
00:29:41.760 | and you'd say there's not really much data going into this.
00:29:44.840 | You could actually pack the initial conditions
00:29:47.120 | into the rule if you want to.
00:29:49.820 | So I think the question is,
00:29:52.120 | is there a robust notion of computation?
00:29:54.360 | That is-- - What does robust mean?
00:29:56.000 | - What I mean by that is something like this.
00:29:57.680 | So one of the things, in physics,
00:30:00.760 | is something like energy, okay?
00:30:02.920 | There are different forms of energy,
00:30:04.760 | but somehow energy is a robust concept
00:30:08.520 | that isn't particular to kinetic energy,
00:30:13.520 | or nuclear energy, or whatever else.
00:30:16.040 | There's a robust idea of energy.
00:30:17.960 | So one of the things you might ask
00:30:19.080 | is there's the robust idea of computation,
00:30:22.100 | or does it matter that this computation
00:30:24.160 | is running in a Turing machine?
00:30:25.620 | This computation is running in a CMOS silicon CPU.
00:30:29.760 | This computation is running in a fluid system
00:30:32.080 | in the weather, those kinds of things.
00:30:33.640 | Or is there a robust idea of computation
00:30:36.280 | that transcends the sort of detailed framework
00:30:40.200 | that it's running in, okay?
00:30:41.880 | And-- - Is there?
00:30:43.240 | - Yes.
00:30:44.300 | I mean, it wasn't obvious that there was,
00:30:46.440 | so it's worth understanding the history
00:30:48.020 | and how we got to where we are right now,
00:30:50.120 | because to say that there is
00:30:53.160 | is a statement in part about our universe.
00:30:56.860 | It's not a statement
00:30:58.000 | about what is mathematically conceivable.
00:31:00.320 | It's about what actually can exist for us.
00:31:03.540 | - Maybe you can also comment,
00:31:05.040 | because energy as a concept is robust,
00:31:09.720 | but there's also,
00:31:11.140 | it's intricate, complicated relationship with matter,
00:31:17.160 | with mass, is very interesting,
00:31:21.500 | of particles that carry force,
00:31:23.600 | and particles that sort of,
00:31:27.360 | particles that carry force and particles that have mass.
00:31:30.680 | These kinds of ideas, they seem to map to each other,
00:31:33.920 | at least in the mathematical sense.
00:31:35.920 | Is there a connection between energy and mass
00:31:40.300 | and computation, or are these completely disjoint ideas?
00:31:44.120 | - We don't know yet.
00:31:45.440 | The things that I'm trying to do about fundamental physics
00:31:48.240 | may well lead to such a connection,
00:31:52.700 | but there is no known connection at this time.
00:31:54.880 | - So can you elaborate a little bit more
00:31:57.600 | on how do you think about computation?
00:32:01.360 | What is computation?
00:32:02.200 | - Yeah, so I mean, let's tell a little bit
00:32:04.400 | of a historical story, okay?
00:32:06.480 | So, you know, go back 150 years,
00:32:09.540 | people were making mechanical calculators of various kinds.
00:32:14.200 | And, you know, the typical thing was,
00:32:15.640 | you want an adding machine,
00:32:16.640 | you go to the adding machine store, basically.
00:32:19.040 | You want a multiplying machine,
00:32:20.240 | you go to the multiplying machine store.
00:32:21.680 | They're different pieces of hardware.
00:32:23.680 | And so that means that,
00:32:25.580 | at least at the level of that kind of computation
00:32:28.040 | and those kinds of pieces of hardware,
00:32:30.080 | there isn't a robust notion of computation.
00:32:32.200 | There's the adding machine kind of computation,
00:32:34.220 | there's the multiplying machine notion of computation,
00:32:37.160 | and they're disjoint.
00:32:38.560 | So what happened in around 1900,
00:32:41.080 | people started imagining,
00:32:42.480 | particularly in the context of mathematical logic,
00:32:44.920 | could you have something
00:32:46.200 | which would represent any reasonable function, right?
00:32:50.560 | And they came up with things,
00:32:51.600 | this idea of primitive recursion
00:32:52.920 | was one of the early ideas, and it didn't work.
00:32:56.520 | There were reasonable functions
00:32:57.920 | that people could come up with
00:32:59.520 | that were not represented
00:33:01.560 | using the primitives of primitive recursion, okay?
00:33:04.560 | So then along comes 1931 and Gödel's theorem and so on.
00:33:09.200 | And as in looking back,
00:33:12.400 | one can see that as part of the process
00:33:14.740 | of establishing Gödel's theorem,
00:33:16.960 | Gödel basically showed how you could compile arithmetic,
00:33:21.120 | how you could basically compile logical statements,
00:33:24.640 | like this statement is unprovable, into arithmetic.
00:33:27.800 | So what he essentially did was to show that arithmetic
00:33:30.880 | can be a computer in a sense
00:33:34.200 | that's capable of representing all kinds of other things.
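A minimal sketch of that compilation trick in Python. This shows only the core idea of Gödel numbering, encoding a sequence of symbols as a single integer through prime-power exponents, so that statements about formulas become statements about ordinary numbers; the alphabet and helper functions are illustrative, not Gödel's actual 1931 construction.

```python
import math

def primes(n):
    """First n primes by trial division (fine for toy inputs)."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(symbols, alphabet):
    """Map a symbol sequence to prod(p_i ** code_i): syntax becomes arithmetic."""
    codes = [alphabet.index(s) + 1 for s in symbols]
    return math.prod(p ** c for p, c in zip(primes(len(codes)), codes))

def decode(number, alphabet):
    """Invert encode() by reading off the exponent of each successive prime.
    Assumes a valid encoding (every prime up to the last one appears)."""
    out, i = [], 1
    while number > 1:
        p = primes(i)[-1]  # the i-th prime
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        out.append(alphabet[exponent - 1])
        i += 1
    return "".join(out)

alphabet = ["0", "S", "=", "+"]
n = encode("S0=S0", alphabet)  # one integer stands for the whole formula
print(n, decode(n, alphabet))  # -> 808500 S0=S0
```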
00:33:37.040 | And then Turing came along,
00:33:38.520 | 1936 came up with Turing machines.
00:33:41.120 | Meanwhile, Alonzo Church had come up with lambda calculus.
00:33:44.520 | And the surprising thing that was established very quickly
00:33:46.920 | is the Turing machine idea about what computation might be
00:33:51.180 | is exactly the same as the lambda calculus idea
00:33:53.960 | of what computation might be.
00:33:55.840 | And so, and then there started to be other ideas,
00:33:58.400 | register machines,
00:33:59.560 | other kinds of representations of computation.
00:34:03.180 | And the big surprise was
00:34:04.920 | they all turned out to be equivalent.
00:34:06.760 | So in other words, it might've been the case
00:34:08.640 | like those old adding machines and multiplying machines
00:34:11.240 | that Turing had his idea of computation,
00:34:13.460 | Church had his idea of computation,
00:34:15.380 | and they were just different, but it isn't true.
00:34:18.000 | They're actually all equivalent.
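As a concrete picture of what all these formalisms boil down to, here is a minimal Turing machine interpreter in Python: the entire "computation" is a loop that looks up (state, symbol) in a finite rule table, writes, moves, and repeats. The binary-increment rule table fed to it is just an illustrative example, not one of Turing's own machines.

```python
def run(rules, tape, state="start", halt="halt"):
    """Apply (state, symbol) -> (write, move, next_state) rules until halting."""
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    head = 0
    while state != halt:
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# A machine that adds 1 to a binary number: scan right, then carry leftward.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),  # all ones: grow the tape leftward
}

print(run(increment, "1011"))  # -> 1100  (11 + 1 = 12)
print(run(increment, "111"))   # -> 1000  (7 + 1 = 8)
```

Lambda calculus, register machines, and the rest can each emulate this lookup loop, and it can emulate them; that mutual emulation is what the equivalence amounts to.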
00:34:20.200 | So then by, I would say, the 1970s or so
00:34:25.360 | in sort of the computation,
00:34:27.480 | computer science computation theory area,
00:34:29.840 | people had sort of said,
00:34:30.680 | "Oh, Turing machines are kind of what computation is."
00:34:33.920 | Physicists were still holding out saying,
00:34:36.440 | "No, no, no, it's just not how the universe works.
00:34:38.240 | We've got all these differential equations.
00:34:40.400 | We've got all these real numbers
00:34:41.920 | that have infinite numbers of digits."
00:34:43.600 | - Yeah, the universe is not a Turing machine.
00:34:45.240 | - Right.
00:34:46.080 | The Turing machines are a small subset
00:34:49.120 | of the things that we make in microprocessors
00:34:52.520 | and engineering structures and so on.
00:34:54.500 | So probably, actually through my work in the 1980s
00:34:58.080 | about sort of the relationship between computation
00:35:01.980 | and models of physics,
00:35:04.120 | it became a little less clear that there would be,
00:35:07.760 | that there was this big sort of dichotomy
00:35:09.960 | between what can happen in physics
00:35:13.160 | and what happens in things like Turing machines.
00:35:14.920 | And I think probably by now, people would mostly think,
00:35:19.920 | and by the way, brains were another kind of element
00:35:22.480 | of this.
00:35:23.320 | Kurt Gödel didn't think that his notion of computation
00:35:26.380 | or what amounted to his notion of computation
00:35:28.580 | would cover brains.
00:35:30.260 | And Turing wasn't sure either.
00:35:33.580 | But although he was a little bit,
00:35:35.340 | he got to be a little bit more convinced
00:35:38.300 | that it should cover brains.
00:35:39.840 | But so, you know, I would say by probably sometime
00:35:44.660 | in the 1980s, there was beginning to be
00:35:46.700 | sort of a general belief that yes,
00:35:48.620 | this notion of computation that could be captured
00:35:50.940 | by things like Turing machines was reasonably robust.
00:35:55.120 | Now, the next question is, okay,
00:35:57.280 | you can have a universal Turing machine
00:36:00.680 | that's capable of being programmed to do anything
00:36:03.980 | that any Turing machine can do.
00:36:05.600 | And, you know, this idea of universal computation,
00:36:09.680 | it's an important idea,
00:36:10.560 | this idea that you can have one piece of hardware
00:36:13.120 | and program it with different pieces of software.
00:36:15.960 | You know, that's kind of the idea
00:36:17.440 | that launched most modern technology.
00:36:19.200 | I mean, that's kind of,
00:36:20.040 | that's the idea that launched computer revolution,
00:36:22.640 | software, et cetera.
00:36:23.680 | So important idea.
00:36:25.160 | But the thing that's still kind of holding out
00:36:28.400 | from that idea is, okay,
00:36:29.880 | there is this universal computation thing,
00:36:33.200 | but seems hard to get to.
00:36:35.360 | Seems like you want to make a universal computer,
00:36:37.760 | you have to kind of have a microprocessor with, you know,
00:36:40.480 | a million gates in it,
00:36:41.660 | and you have to go to a lot of trouble
00:36:43.440 | to make something that achieves that level
00:36:45.920 | of computational sophistication.
00:36:48.040 | Okay, so the surprise for me was that stuff
00:36:51.700 | that I discovered in the early '80s,
00:36:54.400 | looking at these things called cellular automata,
00:36:57.100 | which are really simple computational systems,
00:37:00.800 | the thing that was a big surprise to me was
00:37:03.520 | that even when their rules were very, very simple,
00:37:06.160 | they were doing things that were as sophisticated
00:37:08.040 | as they did when their rules were much more complicated.
00:37:10.640 | So it didn't look like, you know,
00:37:12.280 | this idea, oh, to get sophisticated computation,
00:37:15.240 | you have to build something with very sophisticated rules.
00:37:18.440 | That idea didn't seem to pan out.
00:37:21.760 | And instead, it seemed to be the case
00:37:23.600 | that sophisticated computation was completely ubiquitous,
00:37:26.640 | even in systems with incredibly simple rules.
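Here is the kind of system being described, an elementary cellular automaton, as a short Python sketch. Rule 30 is one of the rules Wolfram studied in the early '80s: the whole rule is an eight-entry lookup table, yet the pattern it grows from a single cell is famously complex (and setting RULE to 110 gives the elementary rule later proved capable of universal computation).

```python
RULE = 30  # the 8 output bits of the rule, read off the binary digits of 30

# Neighborhood (left, center, right) -> next value of the center cell.
TABLE = {tuple(map(int, f"{i:03b}")): (RULE >> i) & 1 for i in range(8)}

def step(cells):
    """One synchronous update; cells beyond the edges are treated as 0."""
    padded = [0, 0] + cells + [0, 0]
    return [TABLE[tuple(padded[i:i + 3])] for i in range(len(padded) - 2)]

cells = [1]  # start from a single black cell
for _ in range(16):
    print("".join(" #"[c] for c in cells).center(40))
    cells = step(cells)
```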
00:37:29.280 | And so that led to this thing
00:37:31.320 | that I call the principle of computational equivalence,
00:37:33.920 | which basically says, when you have a system
00:37:36.960 | that follows rules of any kind,
00:37:40.240 | then whenever the system isn't doing things
00:37:43.480 | that are in some sense obviously simple,
00:37:46.200 | then the computation that the behavior
00:37:49.840 | of the system corresponds to
00:37:51.200 | is of equivalent sophistication.
00:37:53.640 | So that means that when you kind of go
00:37:55.840 | from the very, very, very simplest things you can imagine,
00:37:58.800 | then quite quickly, you hit this kind of threshold
00:38:02.000 | above which everything is equivalent
00:38:03.800 | in its computational sophistication.
00:38:05.840 | Not obvious that would be the case.
00:38:07.480 | I mean, that's a science fact.
00:38:09.880 | Well-- - No, no, no, no.
00:38:10.960 | Hold on a second.
00:38:12.360 | So this, you've opened with "A New Kind of Science."
00:38:15.160 | I mean, I remember it was a huge eye-opener
00:38:17.880 | that such simple things can create such complexity.
00:38:22.240 | And yes, there's an equivalence, but it's not a fact.
00:38:25.040 | It just appears to, I mean, as much as a fact
00:38:28.100 | as sort of these theories are so elegant
00:38:33.100 | that it seems to be the way things are.
00:38:37.640 | But let me ask sort of, you just brought up previously
00:38:41.960 | kind of like the communities of computer scientists
00:38:44.600 | with their Turing machines,
00:38:46.000 | the physicists with their universe,
00:38:48.160 | and whoever the heck, maybe neuroscientists
00:38:51.680 | looking at the brain.
00:38:53.520 | What's your sense in the equivalence?
00:38:56.640 | So you've shown through your work
00:38:58.780 | that simple rules can create
00:39:01.040 | equivalently complex Turing machine systems, right?
00:39:08.000 | Is the universe equivalent to the kinds of,
00:39:13.000 | to Turing machines?
00:39:14.560 | Is the human brain a kind of Turing machine?
00:39:18.880 | Do you see those things basically blending together?
00:39:21.740 | Or is there still a mystery about how disjoint they are?
00:39:25.200 | - Well, my guess is that they all blend together,
00:39:27.980 | but we don't know that for sure yet.
00:39:29.640 | I mean, this, you know, I should say,
00:39:32.260 | I said rather glibly that the principle
00:39:34.880 | of computational equivalence is sort of a science fact.
00:39:37.200 | And I was using air quotes for the science fact,
00:39:42.120 | because when you, it is a,
00:39:45.520 | I mean, just to talk about that for a second,
00:39:46.920 | and then we'll,
00:39:47.760 | the thing is that it is,
00:39:52.340 | it has a complicated epistemological character,
00:39:55.480 | similar to things like the second law of thermodynamics,
00:39:59.200 | law of entropy increase.
00:40:00.680 | The, you know, what is the second law of thermodynamics?
00:40:04.120 | It is, is it a law of nature?
00:40:05.700 | Is it a thing that is true of the physical world?
00:40:08.160 | Is it something which is mathematically provable?
00:40:11.700 | Is it something which happens to be true
00:40:13.520 | of the systems that we see in the world?
00:40:15.680 | Is it in some sense, a definition of heat perhaps?
00:40:19.720 | Well, it's a combination of those things.
00:40:21.540 | And it's the same thing
00:40:23.320 | with the principle of computational equivalence.
00:40:25.320 | And in some sense,
00:40:26.560 | the principle of computational equivalence
00:40:28.240 | is at the heart of the definition of computation,
00:40:31.180 | because it's telling you there is a thing,
00:40:33.040 | there is a robust notion
00:40:34.920 | that is equivalent across all these systems
00:40:37.640 | and doesn't depend on the details of each individual system.
00:40:41.000 | And that's why we can meaningfully talk about
00:40:43.800 | a thing called computation.
00:40:45.120 | And we're not stuck talking about,
00:40:47.000 | oh, there's computation in Turing machine number 3785,
00:40:51.240 | and et cetera, et cetera, et cetera.
00:40:52.840 | That's why there is a robust notion like that.
00:40:55.560 | Now, on the other hand,
00:40:56.720 | can we prove the principle of computational equivalence?
00:40:59.200 | Can we prove it as a mathematical result?
00:41:02.360 | Well, the answer is,
00:41:03.700 | actually we've got some nice results along those lines
00:41:06.500 | that say, throw me a random system with very simple rules.
00:41:10.880 | Well, in a couple of cases,
00:41:13.320 | we now know that even the very simplest rules
00:41:16.120 | we can imagine of a certain type are universal
00:41:20.000 | and do sort of follow what you would expect
00:41:22.760 | from the principle of computational equivalence.
00:41:24.200 | So that's a nice piece of sort of mathematical evidence
00:41:27.000 | for the principle of computational equivalence.
00:41:28.880 | - Just to linger on that point,
00:41:30.600 | the simple rules creating sort of these complex behaviors,
00:41:35.600 | but is there a way to mathematically say
00:41:41.400 | that this behavior is complex?
00:41:44.400 | You've mentioned that you cross a threshold.
00:41:47.060 | - Right.
00:41:47.900 | So there are various indicators.
00:41:49.160 | So for example, one thing would be,
00:41:51.800 | is it capable of universal computation?
00:41:53.860 | That is given the system,
00:41:55.880 | do there exist initial conditions for the system
00:41:59.040 | that can be set up to essentially represent programs
00:42:01.880 | to do anything you want, to compute primes,
00:42:03.800 | to compute pi, to do whatever you want, right?
00:42:06.440 | So that's an indicator.
00:42:07.960 | So we know in a couple of examples that yes,
00:42:11.160 | the simplest candidates that could conceivably
00:42:15.000 | have that property do have that property.
00:42:17.260 | And that's what the principle
00:42:18.100 | of computational equivalence might suggest.
00:42:20.560 | But this principle of computational equivalence,
00:42:23.920 | one question about it is,
00:42:25.940 | is it true for the physical world?
00:42:28.320 | It might be true for all these things we come up with,
00:42:30.400 | the Turing machines, the cellular automata, whatever else.
00:42:33.300 | Is it true for our actual physical world?
00:42:36.920 | Is it true for the brains,
00:42:39.340 | which are an element of the physical world?
00:42:42.040 | We don't know for sure.
00:42:43.280 | And that's not the type of question
00:42:45.000 | that we will have a definitive answer to,
00:42:47.380 | because there's a sort of scientific induction issue.
00:42:51.120 | You can say, well, it's true for all these brains,
00:42:53.620 | but this person over here is really special
00:42:55.520 | and it's not true for them.
00:42:56.960 | And you can't, you know,
00:42:58.560 | the only way that that cannot be what happens is
00:43:02.520 | if we finally nail it
00:43:04.240 | and actually get a fundamental theory for physics
00:43:07.320 | and it turns out to correspond to,
00:43:09.240 | let's say a simple program.
00:43:10.880 | If that is the case,
00:43:12.360 | then we will basically have reduced physics
00:43:14.400 | to a branch of mathematics
00:43:16.100 | in the sense that we will not be,
00:43:17.960 | you know, right now with physics, we're like,
00:43:19.720 | well, this is the theory that, you know,
00:43:21.520 | these are the rules that apply here.
00:43:23.580 | But in the middle of that, you know,
00:43:26.840 | right by that black hole,
00:43:29.000 | maybe these rules don't apply and something else applies.
00:43:31.640 | And there may be another piece of the onion
00:43:33.680 | that we have to peel back.
00:43:35.200 | But if we can get to the point where we actually have,
00:43:38.820 | this is the fundamental theory of physics,
00:43:40.920 | here it is, it's this program,
00:43:42.640 | run this program and you will get our universe.
00:43:45.600 | Then we've kind of reduced the problem
00:43:48.000 | of figuring out things in physics
00:43:50.260 | to a problem of doing some,
00:43:51.720 | what turns out to be very difficult,
00:43:54.000 | irreducibly difficult mathematical problems.
00:43:57.160 | But it no longer is the case that we can say
00:43:59.280 | that somebody can come in and say,
00:44:00.920 | whoops, you know, you were right about all these things
00:44:03.000 | about Turing machines,
00:44:04.400 | but you're wrong about the physical universe.
00:44:05.960 | We know there's sort of ground truth
00:44:08.080 | about what's happening in the physical universe.
00:44:09.960 | Now, I happen to think,
00:44:11.760 | I mean, you asked me at an interesting time
00:44:13.740 | 'cause I'm just in the middle of starting
00:44:15.340 | to re-energize my project
00:44:19.520 | to kind of study the fundamental theory of physics.
00:44:23.120 | As of today, I'm very optimistic
00:44:25.880 | that we're actually gonna find something
00:44:27.480 | and that it's going to be possible
00:44:28.800 | to see that the universe really is computational
00:44:31.400 | in that sense.
00:44:32.440 | But I don't know because we're betting against,
00:44:34.800 | we're betting against the universe, so to speak.
00:44:37.720 | And, you know, it's not like,
00:44:39.920 | you know, when I spend a lot of my life building technology,
00:44:42.720 | then I know what's in there, right?
00:44:45.120 | And it's, there may be, it may have unexpected behavior,
00:44:47.680 | it may have bugs, things like that,
00:44:48.840 | but fundamentally I know what's in there.
00:44:50.320 | For the universe, I'm not in that position, so to speak.
00:44:53.580 | - What kind of computation do you think
00:44:58.440 | the fundamental laws of physics might emerge from?
00:45:02.240 | So just to clarify, so you've done a lot of fascinating work
00:45:06.920 | with kind of discrete kinds of computation that,
00:45:10.440 | you know, like cellular automata,
00:45:13.120 | and we'll talk about it,
00:45:14.760 | have this very clean structure.
00:45:17.160 | It's such a nice way to demonstrate that simple rules
00:45:20.440 | can create immense complexity.
00:45:22.600 | But what, you know, is that actually,
00:45:27.600 | are cellular automata sufficiently general
00:45:29.480 | to describe the kinds of computation
00:45:32.040 | that might create the laws of physics?
00:45:34.680 | Can you give a sense of
00:45:36.880 | what kind of computation do you think
00:45:38.880 | would create the laws of physics?
00:45:40.760 | - So this is a slightly complicated issue
00:45:42.680 | because as soon as you have universal computation,
00:45:45.640 | you can in principle simulate anything with anything.
00:45:48.680 | But it is not a natural thing to do.
00:45:51.120 | And if you're asking,
00:45:52.640 | were you to try to find our physical universe
00:45:55.660 | by looking at possible programs
00:45:57.440 | in the computational universe of all possible programs,
00:46:00.280 | would the ones that correspond to our universe
00:46:03.320 | be small and simple enough that we might find them
00:46:06.360 | by searching that computational universe?
00:46:08.640 | We got to have the right basis, so to speak.
00:46:10.520 | We have got to have the right language in effect
00:46:12.840 | for describing computation for that to be feasible.
00:46:15.880 | So the thing that I've been interested in for a long time
00:46:17.920 | is what are the most structureless structures
00:46:20.480 | that we can create with computation?
00:46:22.760 | So in other words, if you say a cellular automaton,
00:46:25.560 | it has a bunch of cells that are arrayed on a grid
00:46:28.420 | and it's very rigid, you know, every cell is updated
00:46:31.080 | in synchrony at a particular, you know,
00:46:33.480 | tick of a clock, so to speak,
00:46:38.520 | and every cell gets updated at the same time.
00:46:41.040 | That's a very specific, very rigid kind of thing.
00:46:44.480 | But my guess is that when we look at physics
00:46:47.920 | and we look at things like space and time,
00:46:50.120 | that what's underneath space and time
00:46:52.440 | is something as structureless as possible,
00:46:55.120 | that what we see, what emerges for us as physical space,
00:46:59.060 | for example, comes from something
00:47:01.400 | that is sort of arbitrarily unstructured underneath.
00:47:04.980 | And so I've been for a long time interested
00:47:07.400 | in kind of what are the most structureless structures
00:47:10.240 | that we can set up?
00:47:11.680 | And actually what I had thought about for ages
00:47:14.880 | is using graphs, networks, where essentially,
00:47:18.600 | so let's talk about space, for example.
00:47:21.100 | So what is space?
00:47:22.880 | is a kind of question one might ask.
00:47:25.360 | Back in the early days of quantum mechanics, for example,
00:47:27.560 | people said, oh, for sure, space is gonna be discrete
00:47:30.960 | 'cause all these other things we're finding are discrete,
00:47:32.800 | but that never worked out in physics.
00:47:34.880 | And so space and physics today is always treated
00:47:37.560 | as this continuous thing, just like Euclid imagined it.
00:47:40.920 | I mean, the very first thing Euclid says
00:47:43.080 | in his sort of common notions is,
00:47:45.640 | a point is something which has no part.
00:47:47.720 | In other words, there are points that are arbitrarily small
00:47:51.360 | and there's a continuum of possible positions of points.
00:47:54.820 | And the question is, is that true?
00:47:56.800 | And so, for example, if we look at, I don't know,
00:47:58.520 | a fluid like air or water,
00:48:00.500 | we might say, oh, it's a continuous fluid.
00:48:02.200 | We can pour it, we can do all kinds of things continuously.
00:48:05.280 | But actually we know, 'cause we know the physics of it,
00:48:07.660 | that it consists of a bunch of discrete molecules
00:48:09.560 | bouncing around and only in the aggregate
00:48:12.000 | is it behaving like a continuum.
00:48:14.440 | And so the possibility exists that that's true of space too.
00:48:17.680 | People haven't managed to make that work
00:48:19.400 | with existing frameworks in physics,
00:48:21.680 | but I've been interested in whether one can imagine
00:48:25.560 | that underneath space and also underneath time
00:48:28.800 | is something more structureless.
00:48:30.740 | And the question is, is it computational?
00:48:33.580 | So there are a couple of possibilities.
00:48:35.760 | It could be computational,
00:48:36.920 | somehow fundamentally equivalent to a Turing machine,
00:48:39.440 | or it could be fundamentally not.
00:48:41.440 | So how could it not be?
00:48:42.880 | It could not be, so a Turing machine
00:48:44.640 | essentially deals with integers, whole numbers,
00:48:47.200 | some level, and it can do things
00:48:49.540 | like it can add one to a number.
00:48:51.100 | It can do things like this.
00:48:52.560 | - It can also store whatever the heck it did.
00:48:54.960 | - Yes, it has an infinite storage.
00:48:58.480 | But when one thinks about doing physics
00:49:02.600 | or sort of idealized physics or idealized mathematics,
00:49:06.620 | one can deal with real numbers,
00:49:08.480 | numbers with an infinite number of digits,
00:49:10.720 | numbers which are absolutely precise.
00:49:13.000 | And one can say, we can take this number
00:49:14.720 | and we can multiply it by itself.
00:49:16.400 | - Are you comfortable with infinity in this context?
00:49:19.200 | Are you comfortable in the context of computation?
00:49:22.200 | Do you think infinity plays a part?
00:49:24.440 | - I think that the role of infinity is complicated.
00:49:26.760 | Infinity is useful in conceptualizing things.
00:49:30.920 | It's not actualizable.
00:49:33.120 | Almost by definition, it's not actualizable.
00:49:35.760 | - But do you think infinity is part of the thing
00:49:37.800 | that might underlie the laws of physics?
00:49:40.440 | - I think that, no.
00:49:42.360 | I think there are many questions that you ask about,
00:49:44.720 | you might ask about physics,
00:49:46.000 | which inevitably involve infinity.
00:49:47.500 | Like when you say, is faster than light travel possible?
00:49:51.500 | You could say, given the laws of physics,
00:49:55.240 | can you make something even arbitrarily large,
00:49:57.560 | even, quote, infinitely large,
00:49:59.780 | that will make faster than light travel possible?
00:50:03.240 | Then you're thrown into dealing with infinity
00:50:05.480 | as a kind of theoretical question.
00:50:07.640 | But I mean, talking about sort of
00:50:09.800 | what's underneath space and time
00:50:11.640 | and how one can make a computational infrastructure,
00:50:16.120 | one possibility is that you can't make
00:50:18.600 | a computational infrastructure
00:50:20.280 | in a Turing machine sense.
00:50:22.720 | That you really have to be dealing with precise real numbers,
00:50:25.640 | you're dealing with partial differential equations,
00:50:27.380 | which have precise real numbers
00:50:30.640 | at arbitrarily closely separated points,
00:50:32.640 | you have a continuum for everything.
00:50:35.320 | Could be that that's what happens,
00:50:37.160 | that there's sort of a continuum for everything
00:50:38.760 | and precise real numbers for everything,
00:50:40.200 | and then the things I'm thinking about are wrong.
00:50:43.120 | And that's the risk you take
00:50:45.680 | if you're trying to sort of do things about nature,
00:50:49.720 | is you might just be wrong.
00:50:51.120 | It's not, for me personally, it's kind of a strange thing,
00:50:55.080 | 'cause I've spent a lot of my life building technology
00:50:57.540 | where you can do something that nobody cares about,
00:51:00.620 | but you can't be sort of wrong in that sense,
00:51:03.020 | in the sense you build your technology
00:51:04.500 | and it does what it does.
00:51:05.900 | But I think this question of what
00:51:08.020 | the sort of underlying computational infrastructure
00:51:11.140 | for the universe might be,
00:51:12.460 | so it's sort of inevitable it's gonna be fairly abstract,
00:51:17.940 | because if you're gonna get all these things
00:51:20.860 | like there are three dimensions of space,
00:51:22.420 | there are electrons, there are muons,
00:51:23.900 | there are quarks, there are this,
00:51:25.720 | you don't get to, if the model for the universe is simple,
00:51:29.940 | you don't get to have sort of a line of code
00:51:31.960 | for each of those things.
00:51:32.880 | You don't get to have sort of the muon case,
00:51:36.940 | the tau lepton case, and so on.
00:51:38.960 | All of those things have to-- - Those all have to be
00:51:39.960 | emergent somehow. - Right.
00:51:41.360 | - So something deeper. - Right.
00:51:43.360 | So that means it's sort of inevitable
00:51:45.100 | that it's a little hard to talk about
00:51:46.880 | what the sort of underlying structureless structure
00:51:49.320 | actually is.
00:51:50.300 | Do you think human beings have the cognitive capacity
00:51:54.320 | to understand, if we're to discover it,
00:51:56.320 | to understand the kinds of simple structure
00:51:59.920 | from which these laws can emerge?
00:52:02.280 | Do you think that's a hopeless pursuit?
00:52:04.320 | - Well, here's what I think.
00:52:05.280 | I think that, I mean, I'm right in the middle
00:52:07.700 | of this right now. - Right.
00:52:08.540 | - So I'm telling you that I-- - Do you think you'll
00:52:09.960 | hit a wall? - This human, yeah.
00:52:11.280 | I mean, this human has a hard time understanding
00:52:15.200 | a bunch of the things that are going on.
00:52:16.480 | But what happens in understanding is
00:52:18.920 | one builds waypoints.
00:52:20.160 | I mean, if you said, understand modern
00:52:22.340 | 21st century mathematics, starting from counting,
00:52:27.340 | back in whenever counting was invented 50,000 years ago,
00:52:30.940 | whatever it was, right?
00:52:33.020 | That would be really difficult.
00:52:34.580 | But what happens is we build waypoints
00:52:36.660 | that allow us to get to higher levels of understanding.
00:52:39.380 | And we see the same thing happening in language.
00:52:41.580 | You know, when we invent a word for something,
00:52:43.940 | it provides kind of a cognitive anchor,
00:52:46.300 | a kind of a waypoint that lets us,
00:52:48.340 | you know, like a podcast or something.
00:52:50.660 | You could be explaining, well, it's a thing
00:52:52.720 | which works this way, that way, the other way.
00:52:55.220 | But as soon as you have the word podcast
00:52:57.820 | and people kind of societally understand it,
00:53:00.500 | you start to be able to build on top of that.
00:53:02.460 | And so I think, and that's kind of the story
00:53:04.580 | of science actually too.
00:53:05.820 | I mean, science is about building these kind of waypoints
00:53:08.860 | where we find this sort of cognitive mechanism
00:53:12.280 | for understanding something, then we can build on top of it.
00:53:14.500 | You know, we have the idea of, I don't know,
00:53:16.820 | differential equations, we can build on top of that.
00:53:19.420 | We have this idea or that idea.
00:53:21.060 | So my hope is that if it is the case
00:53:24.440 | that we have to go all the way sort of from the sand
00:53:27.700 | to the computer and there's no waypoints in between,
00:53:30.980 | then we're toast.
00:53:32.600 | We won't be able to do that.
00:53:33.980 | - Well, eventually we might.
00:53:35.300 | So if we're, us clever apes are good enough
00:53:38.740 | for building those abstractions,
00:53:40.620 | eventually from sand we'll get to the computer, right?
00:53:43.260 | And it just might be a longer journey than--
00:53:44.860 | - The question is whether it is something
00:53:46.580 | that you asked whether our human brains
00:53:49.540 | will quote understand what's going on.
00:53:52.340 | And that's a different question because for that,
00:53:54.900 | it requires steps that are sort of
00:53:58.820 | from which we can construct
00:54:00.180 | a human understandable narrative.
00:54:02.460 | And that's something that I think I am somewhat hopeful
00:54:06.860 | that that will be possible.
00:54:08.140 | Although, you know, as of literally today, if you ask me,
00:54:12.160 | I'm confronted with things that I don't understand very well
00:54:15.500 | and--
00:54:16.540 | - So this is a small pattern in a computation
00:54:18.860 | trying to understand the rules
00:54:20.900 | under which the computation functions.
00:54:22.820 | And it's an interesting question
00:54:26.540 | under which kinds of computations
00:54:28.740 | such a creature can understand itself.
00:54:31.420 | - My guess is that within,
00:54:33.660 | so we didn't talk much about computational irreducibility,
00:54:36.300 | but it's a consequence of this principle
00:54:37.900 | of computational equivalence.
00:54:39.460 | And it's sort of a core idea
00:54:40.700 | that one has to understand, I think,
00:54:42.380 | which is question is you're doing a computation,
00:54:45.620 | you can figure out what happens in the computation
00:54:47.860 | just by running every step in the computation
00:54:49.860 | and seeing what happens.
00:54:51.500 | Or you can say, let me jump ahead and figure out,
00:54:55.620 | you know, have something smarter that figures out
00:54:57.700 | what's gonna happen before it actually happens.
00:55:00.060 | And a lot of traditional science
00:55:02.420 | has been about that act of computational reducibility.
00:55:06.300 | It's like, we've got these equations
00:55:08.860 | and we can just solve them
00:55:09.940 | and we can figure out what's gonna happen.
00:55:11.180 | We don't have to trace all of those steps.
00:55:13.500 | We just jump ahead 'cause we solved these equations.
00:55:16.300 | Okay, so one of the things that is a consequence
00:55:18.540 | of the principle of computational equivalence
00:55:20.100 | is you don't always get to do that.
00:55:22.020 | Many, many systems will be computationally irreducible
00:55:25.300 | in the sense that the only way to find out what they do
00:55:27.280 | is just follow each step and see what happens.
00:55:29.560 | Why is that?
00:55:30.400 | Well, if you're saying, well, we, with our brains,
00:55:33.660 | we're a lot smarter.
00:55:34.580 | We don't have to mess around
00:55:36.300 | like the little cellular automaton
00:55:38.020 | going through and updating all those cells.
00:55:40.060 | We can just use the power of our brains to jump ahead.
00:55:44.020 | But if the principle of computational equivalence is right,
00:55:46.900 | that's not gonna be correct
00:55:48.140 | because it means that there's us
00:55:51.380 | doing our computation in our brains.
00:55:53.500 | There's a little cellular automaton doing its computation.
00:55:56.280 | And the principle of computational equivalence says,
00:55:58.660 | these two computations are fundamentally equivalent.
00:56:01.580 | So that means we don't get to say,
00:56:03.360 | we're a lot smarter than the cellular automaton
00:56:05.140 | and jump ahead 'cause we're just doing computation
00:56:07.940 | that's of the same sophistication
00:56:09.740 | as the cellular automaton itself.
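A toy contrast makes the distinction concrete. Summing the integers up to n is computationally reducible: a closed form jumps ahead of the loop. The center column of rule 30, by contrast, has no known shortcut, so the only option in this Python sketch (an editorial illustration, using the standard left XOR (center OR right) form of rule 30) is to run every step:

```python
# Reducible vs. (apparently) irreducible computation.

def sum_by_steps(n):               # explicit step-by-step computation
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_closed_form(n):            # reducibility: jump ahead with a formula
    return n * (n + 1) // 2

assert sum_by_steps(10_000) == sum_closed_form(10_000)

def rule30_center(steps):          # no known formula: just run every step
    cells = {0: 1}                 # sparse state: positions whose value is 1
    column = []
    for _ in range(steps):
        column.append(cells.get(0, 0))
        lo, hi = min(cells) - 1, max(cells) + 1
        cells = {
            i: 1
            for i in range(lo, hi + 1)
            if cells.get(i - 1, 0) ^ (cells.get(i, 0) | cells.get(i + 1, 0))
        }
    return column

print(rule30_center(16))
```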
00:56:11.740 | - That's computational irreducibility.
00:56:13.300 | It's fascinating.
00:56:14.140 | And that's a really powerful idea.
00:56:16.780 | I think that's both depressing and humbling and so on,
00:56:21.700 | that we're all, we and the cellular automaton are the same.
00:56:24.340 | But the question we're talking about,
00:56:26.020 | the fundamental laws of physics,
00:56:28.060 | is kind of the reverse question.
00:56:30.140 | You're not predicting what's gonna happen.
00:56:32.340 | You have to run the universe for that.
00:56:34.280 | But saying, can I understand
00:56:36.300 | what rules likely generated me?
00:56:38.300 | - I understand.
00:56:39.180 | But the problem is, to know whether you're right,
00:56:43.300 | you have to have some computational reducibility
00:56:46.060 | because we are embedded in the universe.
00:56:48.060 | If the only way to know whether we get the universe
00:56:50.100 | is just to run the universe, we don't get to do that
00:56:52.940 | 'cause it just ran for 13.8 billion years or whatever.
00:56:56.020 | And we can't rerun it, so to speak.
00:56:58.700 | So we have to hope that there are pockets
00:57:01.060 | of computational reducibility sufficient
00:57:04.140 | to be able to say, yes, I can recognize
00:57:06.380 | those are electrons there.
00:57:08.060 | And I think that it's a feature
00:57:10.780 | of computational irreducibility.
00:57:12.740 | It's sort of a mathematical feature
00:57:14.060 | that there are always an infinite collection
00:57:15.780 | of pockets of reducibility.
00:57:17.840 | The question of whether they land in the right place
00:57:19.700 | and whether we can sort of build a theory
00:57:21.500 | based on them is unclear.
00:57:23.140 | But to this point about whether we,
00:57:25.900 | as observers in the universe,
00:57:27.320 | built out of the same stuff as the universe,
00:57:29.780 | can figure out the universe, so to speak,
00:57:32.660 | that relies on these pockets of reducibility.
00:57:35.660 | Without the pockets of reducibility,
00:57:37.220 | it won't work, can't work.
00:57:39.300 | But I think this question about how observers operate,
00:57:42.500 | it's one of the features of science
00:57:45.180 | over the last 100 years particularly,
00:57:47.180 | has been that every time we get more realistic
00:57:49.780 | about observers, we learn a bit more about science.
00:57:53.180 | So for example, relativity was all about
00:57:55.820 | observers don't get to say when,
00:57:59.100 | what's simultaneous with what.
00:58:00.500 | They have to just wait for the light signal to arrive
00:58:02.740 | to decide what's simultaneous.
00:58:04.860 | Or for example, in thermodynamics,
00:58:07.660 | observers don't get to say the position
00:58:09.500 | of every single molecule in a gas.
00:58:12.140 | They can only see the kind of large scale features
00:58:14.460 | and that's why the second law of thermodynamics,
00:58:16.780 | law of entropy increase and so on works.
00:58:19.060 | If you could see every individual molecule,
00:58:21.500 | you wouldn't conclude something about thermodynamics.
00:58:25.740 | You would conclude, oh, these molecules
00:58:27.380 | are just all doing these particular things.
00:58:28.780 | You wouldn't be able to see this aggregate fact.
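The coarse-grained observer can be put into a toy simulation: track every particle exactly, but let the observer see only binned counts. The Shannon entropy of the binned description rises even though the microstate is always one definite configuration. A sketch, with arbitrary parameters and simple diffusion standing in for molecular dynamics:

```python
# Entropy increase as seen by a coarse-grained observer.
import math, random

random.seed(0)
N, SIZE, BINS = 2000, 100.0, 10
xs = [random.uniform(0, SIZE / BINS) for _ in range(N)]   # all start in the first bin

def coarse_entropy(xs):
    counts = [0] * BINS
    for x in xs:
        counts[min(int(x / SIZE * BINS), BINS - 1)] += 1
    return -sum(c / N * math.log2(c / N) for c in counts if c)

for t in range(5):
    print(f"t={t * 50:4d}  coarse-grained entropy = {coarse_entropy(xs):.3f} bits")
    for _ in range(50):                                   # diffuse, clamped to the box
        xs = [min(max(x + random.uniform(-1, 1), 0.0), SIZE) for x in xs]
```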
00:58:31.460 | So I strongly expect that,
00:58:34.100 | and in fact in the theories that I have,
00:58:36.500 | that one has to be more realistic
00:58:38.940 | about the computation and other aspects of observers
00:58:42.880 | in order to actually make a correspondence
00:58:45.620 | between what we experience.
00:58:46.620 | In fact, my little team and I have a little theory
00:58:50.420 | right now about how quantum mechanics may work,
00:58:52.860 | which is a very wonderfully bizarre idea
00:58:56.380 | about how the sort of thread of human consciousness
00:59:00.500 | relates to what we observe in the universe.
00:59:03.580 | But there's several steps to explain what that's about.
00:59:06.540 | - What do you make of the mess of the observer
00:59:09.180 | at the lower level of quantum mechanics?
00:59:11.740 | Sort of the textbook definition with quantum mechanics
00:59:16.740 | kind of says that there's two worlds.
00:59:20.580 | One is the world that actually is
00:59:23.980 | and the other is that's observed.
00:59:25.940 | What do you make sense of that kind of observing?
00:59:29.460 | - Well, I think actually the ideas we've recently had
00:59:36.620 | might actually give a way into this.
00:59:36.620 | And that's, I don't know yet.
00:59:40.220 | I mean, I think that's, it's a mess.
00:59:43.100 | I mean, the fact is there is a,
00:59:45.580 | one of the things that's interesting
00:59:47.660 | and when people look at these models
00:59:49.780 | that I started talking about 30 years ago now,
00:59:52.420 | they say, "Oh no, that can't possibly be right.
00:59:55.020 | "What about quantum mechanics?"
00:59:56.700 | Right, you say, "Okay, tell me what is the essence
00:59:59.500 | "of quantum mechanics?
01:00:00.340 | "What do you want me to be able to reproduce
01:00:02.260 | "to know that I've got quantum mechanics, so to speak?"
01:00:05.380 | Well, and that question comes up,
01:00:07.180 | it comes up very operationally actually
01:00:08.580 | because we've been doing a bunch of stuff
01:00:09.740 | with quantum computing and there are all these companies
01:00:12.180 | that say, "We have a quantum computer."
01:00:14.020 | And we say, "Let's connect to your API
01:00:16.060 | "and let's actually run it."
01:00:18.020 | And they're like, "Well, maybe you shouldn't do that yet.
01:00:21.300 | "We're not quite ready yet."
01:00:22.780 | And one of the questions that I've been curious about is,
01:00:25.180 | "If I have five minutes with a quantum computer,
01:00:27.700 | "how can I tell if it's really a quantum computer
01:00:29.820 | "or whether it's a simulator at the other end?"
01:00:32.180 | Right, and turns out it's really hard.
01:00:33.900 | It turns out there isn't, it's like a lot of these questions
01:00:37.140 | about sort of what is intelligence, what's life.
01:00:39.780 | - That's a Turing test for quantum computing.
01:00:42.060 | - That's right, that's right.
01:00:43.100 | It's like, are you really a quantum computer?
01:00:45.500 | And I think-- - Or just a simulation, yeah.
01:00:47.580 | - Yes, exactly.
01:00:48.420 | Is it just a simulation or is it really a quantum computer?
01:00:51.140 | - Yeah, same issue all over again.
01:00:53.060 | But that, so, you know, this whole issue
01:00:56.820 | about the sort of mathematical structure
01:00:59.220 | of quantum mechanics and the completely separate thing
01:01:03.620 | that is our experience in which we think
01:01:06.220 | definite things happen, whereas quantum mechanics
01:01:08.620 | doesn't say definite things ever happen.
01:01:10.600 | Quantum mechanics is all about the amplitudes
01:01:12.460 | for different things to happen.
01:01:14.080 | But yet our thread of consciousness operates
01:01:18.580 | as if definite things are happening.
01:01:21.420 | - But to linger on the point, you've kind of mentioned
01:01:24.700 | the structure that could underlie everything
01:01:28.700 | and this idea that it could perhaps have something
01:01:31.900 | like a structure of a graph.
01:01:33.700 | Can you elaborate why your intuition is
01:01:36.680 | that there's a graph structure of nodes and edges
01:01:39.380 | and what it might represent?
01:01:41.300 | - Right, okay, so the question is,
01:01:43.940 | what is, in a sense, the most structureless structure
01:01:47.340 | you can imagine, right?
01:01:49.380 | So, and in fact, what I've recently realized
01:01:54.220 | in the last year or so, I have a new
01:01:57.020 | most structureless structure.
01:01:58.540 | - By the way, the question itself is a beautiful one
01:02:01.260 | and a powerful one in itself.
01:02:02.700 | So even without an answer, just the question
01:02:05.180 | is a really strong question.
01:02:06.660 | - Right, right.
01:02:07.500 | - But what's your new idea?
01:02:09.100 | - Well, it has to do with hypergraphs.
01:02:11.020 | Essentially, what is interesting about the sort of model
01:02:16.620 | I have now is it's a little bit like what happened
01:02:19.940 | with computation.
01:02:21.180 | Everything that I think of as, oh, well,
01:02:23.540 | maybe the model is this, I discover it's equivalent.
01:02:27.500 | And that's quite encouraging because it's like,
01:02:30.580 | I could say, well, I'm gonna look at trivalent graphs
01:02:33.500 | with three edges for each node and so on.
01:02:35.700 | Or I could look at this special kind of graph.
01:02:37.780 | Or I could look at this kind of algebraic structure.
01:02:41.060 | And turns out that the things I'm now looking at,
01:02:44.380 | everything that I've imagined that is a plausible type
01:02:47.700 | of structureless structure is equivalent to this.
01:02:50.860 | So what is it?
01:02:52.140 | Well, a typical way to think about it is,
01:02:54.860 | well, so you might have some collection of tuples,
01:03:01.300 | collection of, let's say numbers.
01:03:07.380 | So you might have one, three, five, two, three, four,
01:03:12.900 | little, just collections of numbers,
01:03:15.500 | triples of numbers, let's say, quadruples of numbers,
01:03:17.740 | pairs of numbers, whatever.
01:03:19.620 | And you have all these sort of floating little tuples.
01:03:24.060 | They're not in any particular order.
01:03:26.060 | And that sort of floating collection of tuples,
01:03:30.820 | and I told you this was abstract,
01:03:32.860 | represents the whole universe.
01:03:34.940 | The only thing that relates them is when a symbol
01:03:38.980 | is the same, it's the same, so to speak.
01:03:41.860 | So if you have two tuples and they contain the same symbol,
01:03:45.540 | let's say at the same position of the tuple,
01:03:47.100 | the first element of the tuple,
01:03:48.580 | then that represents a relation.
01:03:50.900 | Okay, so let me try and peel this back.
01:03:53.900 | - Wow, okay.
01:03:54.980 | (laughing)
01:03:56.900 | - I told you it's abstract, but this is the--
01:03:59.820 | - So the relationship is formed by the same,
01:04:02.380 | some aspect of sameness.
01:04:03.820 | - Right, but so think about it in terms of a graph.
01:04:06.700 | So a graph, a bunch of nodes,
01:04:09.620 | let's say you number each node, okay?
01:04:12.380 | Then what is a graph?
01:04:13.540 | A graph is a set of pairs that say this node has an edge
01:04:17.700 | connecting it to this other node.
01:04:19.780 | So that's the, and a graph is just a collection
01:04:23.980 | of those pairs that say this node
01:04:27.060 | connects to this other node.
01:04:28.580 | So this is a generalization of that,
01:04:31.060 | in which instead of having pairs,
01:04:32.980 | you have arbitrary n-tuples.
01:04:35.040 | That's it, that's the whole story.
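In code, the whole story really is that small. A sketch, using ordinary Python containers purely for illustration: the state is nothing but a multiset of tuples of symbols, and the only relation between tuples is a shared symbol.

```python
# A "structureless structure": a multiset of tuples of symbols.
from collections import Counter

# an ordinary graph is the special case where every tuple is a pair
graph = Counter([(1, 2), (2, 3), (3, 1)])

# a hypergraph state: arbitrary-length tuples, in no particular order
state = Counter([(1, 3, 5), (2, 3, 4), (5, 2)])

def related(t1, t2):
    """Two tuples are related exactly when some symbol appears in both."""
    return bool(set(t1) & set(t2))

print(related((1, 3, 5), (2, 3, 4)))    # True: they share the symbol 3
```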
01:04:38.860 | And now the question is, okay,
01:04:40.340 | so that might represent the state of the universe.
01:04:43.660 | How does the universe evolve?
01:04:45.020 | What does the universe do?
01:04:46.540 | And so the answer is that what I'm looking at
01:04:49.180 | is transformation rules on these hypergraphs.
01:04:53.200 | In other words, you say this,
01:04:55.900 | whenever you see a piece of this hypergraph
01:05:00.420 | that looks like this,
01:05:02.340 | turn it into a piece of a hypergraph that looks like this.
01:05:05.240 | So on a graph, it might be, when you see the subgraph,
01:05:08.140 | when you see this thing with a bunch of edges hanging out
01:05:10.140 | in this particular way,
01:05:11.620 | then rewrite it as this other graph, okay?
01:05:15.380 | And so that's the whole story.
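Here is a minimal sketch of one such rewriting step, for the simplest kind of rule, one whose pattern is a single tuple; the actual matching in these models handles multi-tuple patterns, so this is a deliberate simplification for illustration:

```python
# One hypergraph rewrite: the rule {(x, y)} -> {(x, y), (y, z)}, where z is a
# brand-new symbol each time the rule is applied (a simple growth rule).
import itertools

fresh = itertools.count(100)             # a source of never-before-seen symbols

def rewrite_once(state):
    x, y = state[0]                      # match the pattern (x, y) against some tuple
    z = next(fresh)                      # invent a fresh node for the new variable
    return state[1:] + [(x, y), (y, z)]  # replace per the right-hand side

state = [(0, 1)]
for _ in range(6):
    state = rewrite_once(state)
print(state)                             # the hypergraph has grown by six edges
```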
01:05:17.580 | So the question is, what, so now you say,
01:05:21.580 | I mean, as I say, this is quite abstract.
01:05:25.220 | And one of the questions is,
01:05:26.740 | where do you do those updating?
01:05:29.560 | So you've got this giant graph.
01:05:31.060 | - What triggers the updating?
01:05:32.420 | Like what's the ripple effect of it?
01:05:35.200 | Is it?
01:05:36.580 | - Yeah.
01:05:37.420 | I suspect everything's discrete, even in time.
01:05:42.000 | - Okay, so the question is, where do you do the updates?
01:05:44.140 | - Yes.
01:05:44.980 | - And the answer is, the rule is,
01:05:46.060 | you do them wherever they apply.
01:05:48.040 | And you do them, the order in which the updates is done
01:05:51.580 | is not defined.
01:05:53.060 | That is, you can do them,
01:05:54.820 | so there may be many possible orderings for these updates.
01:05:58.220 | Now, the point is,
01:05:59.220 | imagine you're an observer in this universe.
01:06:02.060 | So, and you say, did something get updated?
01:06:05.300 | Well, you don't in any sense know
01:06:07.620 | until you yourself have been updated.
01:06:09.840 | - Right.
01:06:11.660 | - So in fact, all that you can be sensitive to
01:06:14.960 | is essentially the causal network
01:06:17.180 | of how an event over there affects an event that's in you.
01:06:22.180 | - That doesn't even feel like observation.
01:06:25.140 | That's like, that's something else.
01:06:26.700 | You're just part of the whole thing.
01:06:28.260 | - Yes, you're part of it, but even to have,
01:06:30.740 | so the end result of that is all you're sensitive to
01:06:34.480 | is this causal network of what event affects
01:06:36.980 | what other event.
01:06:38.620 | I'm not making a big statement
01:06:40.460 | about sort of the structure of the observer.
01:06:42.940 | I'm simply saying, I'm simply making the argument
01:06:45.300 | that what happens, the microscopic order of these rewrites
01:06:49.860 | is not something that any observer,
01:06:53.020 | any conceivable observer in this universe
01:06:55.620 | can be affected by.
01:06:57.300 | Because the only thing the observer can be affected by
01:07:00.440 | is this causal network of how the events
01:07:04.220 | in the observer are affected by other events
01:07:07.140 | that happen in the universe.
01:07:08.100 | So the only thing you have to look at
01:07:09.340 | is the causal network.
01:07:10.640 | You don't really have to look at this microscopic rewriting
01:07:13.600 | that's happening.
01:07:14.440 | So these rewrites are happening wherever they,
01:07:17.180 | they happen wherever they feel like.
01:07:18.680 | - Causal network, is there,
01:07:20.860 | you said that there's not really,
01:07:23.940 | so the idea would be an undefined,
01:07:26.620 | like what gets updated, the sequence of things is undefined.
01:07:30.400 | - Yes.
01:07:32.700 | - Is that's what you mean by the causal network,
01:07:34.460 | but then the--
01:07:35.300 | - No, the causal network is given that an update has happened
01:07:38.560 | that's an event.
01:07:39.920 | Then the question is, is that event causally related to?
01:07:43.620 | Does that event, if that event didn't happen,
01:07:46.420 | then some future event couldn't happen yet.
01:07:48.860 | - Gotcha.
01:07:49.700 | - And so you build up this network of what affects what.
01:07:53.060 | Okay?
01:07:53.900 | And so what that does, so when you build up that network,
01:07:57.500 | that's kind of the observable aspect of the universe
01:08:00.380 | in some sense.
01:08:01.220 | - Gotcha.
01:08:02.060 | - And so then you can ask questions about, you know,
01:08:04.820 | how robust is that observable network
01:08:07.860 | of what's happening in the universe.
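The causal network can be sketched directly: treat each rewrite as an event that consumes some tuples and creates others, and draw an edge from event A to event B whenever B consumes something A created. An illustrative Python version, with an event format invented for the example:

```python
# Build the causal network from a sequence of rewrite events.

def causal_edges(events):
    """events: list of (consumed_ids, created_ids) pairs, in applied order.
    Returns (earlier_event, later_event) causal edges."""
    creator = {}                           # tuple id -> index of the event that made it
    edges = []
    for i, (consumed, created) in enumerate(events):
        for t in consumed:
            if t in creator:               # this event used another event's output
                edges.append((creator[t], i))
        for t in created:
            creator[t] = i
    return edges

# event 0 creates "a"; event 1 consumes "a", creates "b"; event 2 consumes "b"
events = [((), ("a",)), (("a",), ("b",)), (("b",), ("c",))]
print(causal_edges(events))                # [(0, 1), (1, 2)]
```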
01:08:09.780 | Okay, so here's where it starts getting kind of interesting.
01:08:12.640 | So for certain kinds of microscopic rewriting rules,
01:08:16.380 | the order of rewrites does not matter
01:08:18.820 | to the causal network.
01:08:20.320 | And so this is, okay, mathematical logic moment,
01:08:24.100 | this is equivalent to the Church-Rosser property
01:08:26.380 | or the confluence property of rewrite rules.
01:08:28.740 | And it's the same reason that if you're simplifying
01:08:31.020 | an algebraic expression, for example,
01:08:33.140 | you can say, oh, let me expand those terms out,
01:08:35.500 | let me factor those pieces.
01:08:37.140 | Doesn't matter what order you do that in,
01:08:38.980 | you'll always get the same answer.
01:08:40.680 | And that's, it's the same fundamental phenomenon
01:08:43.760 | that causes for certain kinds of microscopic rewrite rules
01:08:47.580 | that causes the causal network to be independent
01:08:50.860 | of the microscopic order of rewritings.
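Here is the confluence point in miniature. The single rewrite rule "ba" → "ab" can often be applied at several positions in a string, but every possible order of application ends at the same normal form, the sorted string. This deliberately tiny sketch checks that by brute force:

```python
# A confluent rewrite system: "ba" -> "ab" applied anywhere, in any order.

def rewrites(s):
    """All strings reachable from s by one application of the rule."""
    return [s[:i] + "ab" + s[i + 2:] for i in range(len(s) - 1) if s[i:i + 2] == "ba"]

def normal_forms(s):
    """All final results over every possible order of rewriting."""
    nexts = rewrites(s)
    if not nexts:                          # no rule applies: s is a normal form
        return {s}
    return set().union(*(normal_forms(t) for t in nexts))

print(normal_forms("babab"))               # {'aabbb'}: one answer, every order
```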
01:08:53.620 | - Why is that property important?
01:08:55.820 | - 'Cause it implies special relativity.
01:08:58.820 | I mean, the reason it's important is that that property,
01:09:03.820 | special relativity says you can look at these sort of,
01:09:09.780 | you can look at different reference frames.
01:09:11.880 | You can have different, you can be looking at your notion
01:09:14.600 | of what space and what's time can be different,
01:09:17.580 | depending on whether you're traveling at a certain speed,
01:09:19.520 | depending on whether you're doing this, that, and the other.
01:09:21.900 | But nevertheless, the laws of physics are the same.
01:09:24.020 | That's what the principle of special relativity says,
01:09:26.940 | is the laws of physics are the same
01:09:28.280 | independent of your reference frame.
01:09:30.260 | Well, turns out this sort of change
01:09:34.700 | of the microscopic rewriting order
01:09:36.900 | is essentially equivalent to a change of reference frame,
01:09:39.140 | or at least there's a sub part of how that works
01:09:41.740 | that's equivalent to change of reference frame.
01:09:43.660 | So, somewhat surprisingly, and sort of for the first time
01:09:47.100 | in forever, it's possible for an underlying
01:09:50.100 | microscopic theory to imply special relativity,
01:09:53.460 | to be able to derive it.
01:09:54.560 | It's not something you put in as a,
01:09:56.980 | this is a, it's something where this other property,
01:10:00.340 | causal invariance, which is also the property
01:10:03.640 | that implies that there's a single thread of time
01:10:06.120 | in the universe.
01:10:07.480 | Causal invariance might not hold,
01:10:08.820 | but that's what would lead to the possibility
01:10:13.340 | of an observer thinking that definite stuff happens.
01:10:16.760 | Otherwise, you've got all these possible rewriting orders,
01:10:19.240 | and who's to say which one occurred.
01:10:21.320 | But with this causal invariance property,
01:10:22.720 | there's a notion of a definite thread of time.
01:10:25.480 | - It sounds like that kind of idea of time,
01:10:27.920 | even space, would be emergent from the system.
01:10:31.000 | - Oh yeah.
01:10:31.840 | - So it's not a fundamental part of the system.
01:10:33.520 | - No, no, at a fundamental level,
01:10:35.520 | all you've got is a bunch of nodes connected
01:10:37.280 | by hyper edges or whatever.
01:10:39.160 | - So there's no time, there's no space.
01:10:40.840 | - That's right.
01:10:42.080 | But the thing is that it's just like imagining,
01:10:44.840 | imagine you're just dealing with a graph,
01:10:46.920 | and imagine you have something like a honeycomb graph,
01:10:49.880 | where you have a bunch of hexagons.
01:10:51.860 | That graph, at a microscopic level,
01:10:54.920 | it's just a bunch of nodes connected to other nodes.
01:10:57.240 | But at a macroscopic level,
01:10:58.520 | you say that looks like a honeycomb, you know, lattice.
01:11:02.240 | It looks like a two-dimensional, you know,
01:11:04.600 | manifold of some kind.
01:11:05.960 | It looks like a two-dimensional thing.
01:11:07.840 | If you connect it differently,
01:11:08.920 | if you just connect all the nodes one to another,
01:11:11.620 | and kind of a sort of linked list type structure,
01:11:14.060 | then you'd say, well,
01:11:14.900 | that looks like a one-dimensional space.
01:11:17.280 | But at the microscopic level,
01:11:18.800 | all these are just networks with nodes.
01:11:21.000 | The macroscopic level,
01:11:22.460 | they look like something that's like
01:11:24.200 | one of our sort of familiar kinds of space.
01:11:26.760 | And it's the same thing with these hypergraphs.
01:11:29.360 | Now, if you ask me, have I found one
01:11:31.140 | that gives me three-dimensional space,
01:11:32.800 | the answer is not yet.
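One standard diagnostic for the dimension question: count the nodes within graph distance r of a point. In a graph that limits to d-dimensional space, that count grows like r^d, so doubling r should multiply it by about 2^d. An illustrative sketch on an ordinary grid graph, with networkx as an assumed dependency:

```python
# Estimating the emergent dimension of a graph from the growth of ball volumes.
import math
import networkx as nx

G = nx.grid_2d_graph(201, 201)             # microscopically: just nodes and edges
dist = nx.single_source_shortest_path_length(G, (100, 100))

for r in (10, 20, 40):
    vol = sum(1 for d in dist.values() if d <= r)
    vol2 = sum(1 for d in dist.values() if d <= 2 * r)
    print(f"r={r:3d}  estimated dimension = {math.log2(vol2 / vol):.2f}")
```

On the grid the estimate comes out close to 2; for a candidate universe rule one would hope to see it settle near 3.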
01:11:34.440 | So we don't know, you know, this is one of these things,
01:11:36.920 | we're kind of betting against nature, so to speak.
01:11:39.600 | And I have no way to know.
01:11:41.320 | And so there are many other properties of this kind of system
01:11:45.000 | that have a very beautiful, actually, and very suggestive.
01:11:49.160 | And it will be very elegant if this turns out to be right,
01:11:51.600 | because it's very clean.
01:11:53.280 | I mean, you start with nothing,
01:11:54.920 | and everything gets built up.
01:11:56.440 | Everything about space, everything about time,
01:11:59.200 | everything about matter, it's all just emergent
01:12:02.420 | from the properties of this extremely low-level system.
01:12:05.600 | And that will be pretty cool
01:12:07.440 | if that's the way our universe works.
01:12:09.600 | Now, do I, on the other hand,
01:12:11.960 | the thing that I find very confusing is,
01:12:15.960 | let's say we succeed.
01:12:17.520 | Let's say we can say this particular
01:12:21.680 | sort of hypergraph rewriting rule gives the universe.
01:12:25.360 | Just run that hypergraph rewriting rule for enough times,
01:12:28.400 | and you'll get everything.
01:12:29.240 | You'll get this conversation we're having.
01:12:30.920 | You'll get everything.
01:12:32.000 | It's that,
01:12:34.520 | if we get to that point,
01:12:37.840 | and we look at what is this thing,
01:12:40.280 | what is this rule that we just have
01:12:42.320 | that is giving us our whole universe?
01:12:43.680 | How do we think about that thing?
01:12:45.760 | Let's say, turns out the minimal version of this,
01:12:48.280 | and this is kind of cool thing
01:12:49.920 | for a language designer like me,
01:12:51.620 | the minimal version of this model
01:12:53.600 | is actually a single line of Wolfram Language code.
01:12:56.560 | So that's, which I wasn't sure was gonna happen that way.
01:12:59.320 | But no, we don't know the actual rule yet;
01:13:05.760 | that's just the framework.
01:13:07.720 | To know the actual particular hypergraph rule,
01:13:11.880 | the specification of the rules might be slightly longer.
01:13:13.760 | - How does that help us, except marveling at the beauty
01:13:18.080 | and the elegance of the simplicity that creates the universe?
01:13:21.240 | Does that help us predict anything?
01:13:23.380 | Not really, because of the irreducibility.
01:13:25.540 | - That's correct, that's correct.
01:13:27.060 | But so, the thing that is really strange to me,
01:13:29.400 | and I haven't wrapped my brain around this yet,
01:13:32.760 | is, you know, one is,
01:13:35.580 | one keeps on realizing that we're not special,
01:13:38.620 | in the sense that, you know,
01:13:40.300 | we don't live at the center of the universe,
01:13:41.940 | we don't blah, blah, blah,
01:13:43.500 | and yet, if we produce a rule for the universe,
01:13:48.100 | and it's quite simple,
01:13:49.460 | and we can write it down in a couple of lines or something,
01:13:52.900 | that feels very special.
01:13:54.620 | How do we come to get a simple universe,
01:13:57.780 | when many of the available universes, so to speak,
01:14:00.620 | are incredibly complicated?
01:14:02.140 | Might be, you know, a quintillion characters long.
01:14:05.460 | Why did we get one of the ones that's simple?
01:14:07.660 | And so, I haven't wrapped my brain around that issue yet.
01:14:10.900 | - If indeed, we are in such a simple,
01:14:14.100 | the universe is such a simple rule,
01:14:16.220 | is it possible that there is something outside of this,
01:14:20.380 | that we are in a kind of what people call,
01:14:22.980 | so the simulation, right?
01:14:25.100 | That we're just part of a computation
01:14:26.780 | that's being explored by a graduate student
01:14:29.580 | in an alternate universe?
01:14:31.340 | - Well, you know, the problem is,
01:14:33.300 | we don't get to say much about what's outside our universe,
01:14:35.820 | because by definition,
01:14:36.820 | our universe is what we exist within.
01:14:39.700 | Now, can we make a sort of almost theological conclusion,
01:14:43.740 | from being able to know how our particular universe works?
01:14:47.420 | Interesting question.
01:14:48.860 | I don't think that, if you ask the question,
01:14:52.180 | could we, and it relates again to this question
01:14:54.700 | about the extraterrestrial intelligence,
01:14:57.220 | you know, we've got the rule for the universe.
01:14:59.660 | Was it built in on purpose?
01:15:01.780 | Hard to say.
01:15:02.860 | That's the same thing as saying,
01:15:04.380 | we see a signal from, you know,
01:15:06.660 | that we're, you know, receiving from some,
01:15:09.480 | you know, random star somewhere.
01:15:11.340 | And it's a series of pulses.
01:15:13.820 | And, you know, it's a periodic series of pulses, let's say.
01:15:16.900 | Was that done on purpose?
01:15:18.120 | Can we conclude something about the origin
01:15:19.900 | of that series of pulses?
01:15:21.340 | - Just because it's elegant,
01:15:22.940 | does not necessarily mean that somebody created it,
01:15:27.520 | or that we can even comprehend what would create it.
01:15:29.340 | - Yeah, I mean, I think it's the ultimate version
01:15:32.620 | of the sort of identification
01:15:35.140 | of the techno-signature question.
01:15:37.580 | It's the ultimate version of that,
01:15:38.620 | is was our universe a piece of technology, so to speak?
01:15:41.940 | And how on earth would we know?
01:15:43.700 | Because, I mean, you know,
01:15:47.260 | in the kind of crazy science fiction thing
01:15:49.140 | you could imagine, you could say,
01:15:50.980 | oh, somebody's going to have, you know,
01:15:53.060 | there's going to be a signature there.
01:15:54.300 | It's going to be, you know, made by so-and-so.
01:15:57.060 | But there's no way we could understand that, so to speak.
01:16:00.180 | And it's not clear what that would mean,
01:16:01.900 | because the universe simply, you know, this,
01:16:06.500 | if we find a rule for the universe, we're not,
01:16:09.660 | we're simply saying that rule
01:16:11.060 | represents what our universe does.
01:16:13.740 | We're not saying that that rule
01:16:15.780 | is something running on a big computer
01:16:17.900 | and making our universe.
01:16:19.220 | It's just saying that represents what our universe does,
01:16:22.300 | in the same sense that, you know,
01:16:23.780 | laws of classical mechanics, differential equations,
01:16:26.380 | whatever they are, represent what mechanical systems do.
01:16:30.040 | It's not that the mechanical systems
01:16:32.520 | are somehow running solutions
01:16:34.160 | to those differential equations.
01:16:35.900 | Those differential equations
01:16:37.100 | just represent the behavior of those systems.
01:16:39.340 | - So what's the gap in your sense to linger
01:16:42.700 | on the fascinating, perhaps slightly sci-fi question?
01:16:45.820 | What's the gap between understanding the fundamental rules
01:16:49.740 | that create a universe and engineering a system,
01:16:53.680 | actually creating a simulation ourselves?
01:16:55.820 | So you've talked about sort of,
01:16:58.140 | you've talked about, you know, nanoengineering,
01:17:01.500 | kind of ideas that are kind of exciting,
01:17:03.100 | actually creating some ideas of computation
01:17:05.540 | in the physical space.
01:17:06.740 | How hard is it as an engineering problem
01:17:09.460 | to create the universe,
01:17:10.460 | once you know the rules that create it?
01:17:12.540 | - Well, that's an interesting question.
01:17:13.940 | I think the substrate on which the universe is operating
01:17:17.460 | is not a substrate that we have access to.
01:17:19.700 | I mean, the only substrate we have
01:17:21.700 | is that same substrate that the universe is operating in.
01:17:24.980 | So if the universe is a bunch of hypergraphs being rewritten,
01:17:28.320 | then we get to attach ourselves
01:17:30.460 | to those same hypergraphs being rewritten.
01:17:33.000 | We don't get to, and if you ask the question,
01:17:37.240 | you know, is the code clean?
01:17:39.240 | You know, can we write nice, elegant code
01:17:41.640 | with efficient algorithms and so on?
01:17:43.640 | Well, that's an interesting question.
01:17:46.480 | How, you know, that's this question
01:17:48.480 | of how much computational reducibility
01:17:50.320 | there is in the system.
01:17:51.640 | - But so I've seen some beautiful cellular automata
01:17:53.960 | that basically create copies of itself within itself, right?
01:17:57.120 | So that's the question, whether it's possible to create,
01:18:00.600 | like whether you need to understand the substrate
01:18:02.840 | or whether you can just--
01:18:04.560 | - Yeah, well, right.
01:18:05.400 | I mean, so one of the things that is sort of
01:18:07.600 | one of my slightly sci-fi thoughts about the future,
01:18:10.640 | so to speak, is, you know, right now,
01:18:13.280 | if you poll typical people, you say,
01:18:15.360 | do you think it's important to find
01:18:16.400 | the fundamental theory of physics?
01:18:18.600 | You get, because I've done this poll, informally at least,
01:18:22.360 | it's curious, actually, you get a decent fraction
01:18:24.920 | of people saying, oh yeah, that would be pretty interesting.
01:18:27.880 | - I think that's becoming, surprisingly enough, more,
01:18:31.400 | I mean, a lot of people are interested in physics
01:18:36.120 | in a way that, like without understanding it,
01:18:37.960 | just kind of watching scientists,
01:18:42.160 | a very small number of them, struggle to understand
01:18:44.880 | the nature of our reality.
01:18:46.200 | - Right, I mean, I think that's somewhat true,
01:18:48.440 | and in fact, in this project that I'm launching into
01:18:51.720 | to try and find the fundamental theory of physics,
01:18:54.240 | I'm going to do it as a very public project.
01:18:56.080 | I mean, it's gonna be live-streamed
01:18:57.840 | and all this kind of stuff, and I don't know
01:18:59.600 | what will happen, it'll be kind of fun.
01:19:01.560 | I mean, I think that it's the interface
01:19:04.520 | to the world of this project.
01:19:07.120 | I mean, I figure one feature of this project is,
01:19:11.560 | unlike technology projects that basically are what they are,
01:19:15.000 | this is a project that might simply fail,
01:19:16.960 | because it might be the case that it generates
01:19:18.520 | all kinds of elegant mathematics,
01:19:20.240 | but it has absolutely nothing to do
01:19:21.460 | with the physical universe that we happen to live in.
01:19:23.920 | Well, okay, so we're talking about kind of the quest
01:19:27.560 | to find the fundamental theory of physics.
01:19:29.960 | First point is, you know, it's turned out
01:19:33.000 | it's kind of hard to find the fundamental theory of physics.
01:19:35.120 | People weren't sure that that would be the case.
01:19:37.480 | Back in the early days of applying mathematics to science,
01:19:41.760 | 1600s and so on, people were like,
01:19:43.920 | oh, in 100 years we'll know everything there is to know
01:19:46.800 | about how the universe works.
01:19:48.000 | Turned out to be harder than that,
01:19:49.560 | and people got kind of humble at some level,
01:19:51.840 | 'cause every time we got to sort of a greater level
01:19:54.040 | of smallness in studying the universe,
01:19:56.120 | it seemed like the math got more complicated
01:19:58.120 | and everything got harder.
01:19:59.960 | When I was a kid, basically,
01:20:04.760 | I started doing particle physics,
01:20:06.640 | and when I was doing particle physics,
01:20:09.840 | I always thought finding the fundamental,
01:20:12.400 | fundamental theory of physics,
01:20:14.160 | that's a kooky business, we'll never be able to do that.
01:20:17.280 | But we can operate within these frameworks
01:20:19.560 | that we built for doing quantum field theory
01:20:21.480 | and general relativity and things like this,
01:20:23.520 | and it's all good and we can figure out a lot of stuff.
01:20:26.440 | - Did you even at that time have a sense
01:20:28.080 | that there's something behind that too?
01:20:30.520 | - Sure, I just didn't expect that.
01:20:35.840 | It's actually kind of crazy thinking back on it,
01:20:38.600 | because it's kind of like there was this long period
01:20:41.560 | in civilization where people thought the ancients
01:20:43.400 | had it all figured out and will never figure out
01:20:45.040 | anything new.
01:20:46.280 | And to some extent, that's the way I felt about physics
01:20:49.520 | when I was in the middle of doing it, so to speak.
01:20:53.480 | We've got quantum field theory,
01:20:54.640 | it's the foundation of what we're doing,
01:20:56.640 | and yes, there's probably something underneath this,
01:20:59.920 | but we'll sort of never figure it out.
01:21:01.800 | But then I started studying simple programs
01:21:06.000 | in the computational universe,
01:21:07.760 | things like cellular automata and so on,
01:21:09.960 | and I discovered that they do all kinds of things
01:21:13.400 | that were completely at odds with the intuition
01:21:15.800 | that I had had.
01:21:17.000 | And so after that, after you see this tiny little program
01:21:20.800 | that does all this amazingly complicated stuff,
01:21:23.360 | then you start feeling a bit more ambitious about physics
01:21:26.320 | and saying, maybe we could do this for physics too.
01:21:29.080 | And so that got me started years ago now
01:21:32.840 | in this kind of idea of could we actually find
01:21:37.240 | what's underneath all of these frameworks
01:21:39.520 | like quantum field theory and general relativity and so on.
01:21:41.280 | And people perhaps don't realize as clearly as they might
01:21:44.560 | that the frameworks we're using for physics,
01:21:46.740 | which is basically these two things,
01:21:48.040 | quantum field theory, sort of the theory of small stuff
01:21:52.600 | and general relativity, theory of gravitation
01:21:54.800 | and large stuff, those are the two basic theories
01:21:57.460 | and they're 100 years old.
01:21:58.880 | I mean, general relativity was 1915,
01:22:01.320 | quantum field theory, well, 1920s.
01:22:04.120 | So basically 100 years old.
01:22:06.000 | And it's been a good run.
01:22:08.820 | There's a lot of stuff been figured out.
01:22:10.760 | But what's interesting is the foundations haven't changed
01:22:14.720 | in all that period of time,
01:22:16.400 | even though the foundations had changed several times
01:22:18.800 | before that in the 200 years earlier than that.
01:22:22.500 | And I think the kinds of things that I'm thinking about,
01:22:25.380 | which are sort of really informed
01:22:26.720 | by thinking about computation
01:22:28.240 | and the computational universe,
01:22:29.920 | it's a different foundation.
01:22:31.520 | It's a different set of foundations and might be wrong,
01:22:35.440 | but it is at least, we have a shot.
01:22:38.700 | And to me,
01:22:41.540 | my personal calculation for myself is,
01:22:44.040 | if it turns out that finding
01:22:48.840 | the fundamental theory of physics
01:22:50.180 | is kind of low-hanging fruit, so to speak,
01:22:52.620 | it'd be a shame if we just didn't think to do it.
01:22:55.440 | If people just said, oh, you'll never figure that stuff out.
01:22:58.720 | And it takes another 200 years
01:23:01.280 | before anybody gets around to doing it.
01:23:03.900 | I think it's, I don't know how low hanging
01:23:08.560 | this fruit actually is.
01:23:09.580 | It may be that it's kind of the wrong century
01:23:13.860 | to do this project.
01:23:14.840 | I mean, I think the cautionary tale for me,
01:23:18.520 | I think about things that I've tried to do in technology
01:23:21.580 | where people thought about doing them a lot earlier.
01:23:25.200 | My favorite example is probably Leibniz
01:23:27.600 | who thought about essentially
01:23:30.580 | encapsulating the world's knowledge
01:23:32.240 | in a computational form in the late 1600s
01:23:36.240 | and did a lot of things towards that.
01:23:38.640 | And basically, we finally managed to do this,
01:23:41.720 | but he was 300 years too early.
01:23:43.600 | And that's kind of the, in terms of life planning,
01:23:47.120 | it's kind of like avoid things that can't be done
01:23:49.700 | in your century, so to speak.
01:23:52.080 | - Yeah, timing is everything.
01:23:55.240 | So you think if we kind of figure out
01:23:59.520 | the underlying rules it can create
01:24:02.280 | from which quantum field theory
01:24:04.400 | and general relativity can emerge,
01:24:06.560 | do you think that'll help us unify it
01:24:08.160 | at that level of abstraction?
01:24:09.320 | - Oh, we'll know it completely.
01:24:10.440 | We'll know how that all fits together.
01:24:12.100 | Yes, without a question.
01:24:13.840 | And I mean, it's already, even the things I've already done,
01:24:18.840 | they're very, you know, it's very elegant actually,
01:24:23.020 | how things seem to be fitting together.
01:24:24.960 | Now, you know, is it right?
01:24:26.080 | I don't know yet.
01:24:27.000 | It's awfully suggestive.
01:24:29.340 | If it isn't right,
01:24:31.560 | then the designer of the universe
01:24:33.760 | should feel embarrassed, so to speak,
01:24:35.160 | 'cause it's a really good way to do it.
01:24:36.960 | - In your intuition, in terms of design universe,
01:24:39.640 | does God play dice?
01:24:41.440 | Is there randomness in this thing,
01:24:44.360 | or is it deterministic?
01:24:46.240 | So the kind of graph--
01:24:47.080 | - That's a little bit of a complicated question,
01:24:48.840 | because when you're dealing with these things
01:24:51.240 | that involve these rewrites that have--
01:24:53.800 | - Even randomness is an emergent phenomenon perhaps?
01:24:56.280 | - Yes, yes.
01:24:57.360 | I mean, yeah, well, randomness,
01:24:59.240 | in many of these systems,
01:25:01.360 | pseudo-randomness and randomness are hard to distinguish.
01:25:04.880 | In this particular case,
01:25:06.200 | the current idea that we have
01:25:07.880 | about measurement in quantum mechanics,
01:25:11.800 | is something very bizarre and very abstract,
01:25:15.000 | and I don't think I can yet explain it
01:25:18.160 | without kind of yakking about very technical things.
01:25:21.400 | Eventually I will be able to,
01:25:22.600 | but if that's right,
01:25:27.720 | it's kind of a weird thing,
01:25:27.720 | because it slices between determinism and randomness
01:25:32.260 | in a weird way that hasn't been sliced before, so to speak.
01:25:35.180 | So like many of these questions that come up in science,
01:25:38.160 | where it's like, is it this or is it that?
01:25:40.680 | Turns out the real answer is it's neither of those things.
01:25:43.080 | It's something kind of different
01:25:44.600 | and sort of orthogonal to those categories.
01:25:48.920 | And so that's the current, you know,
01:25:51.080 | this week's idea about how that might work.
01:25:54.160 | But, you know, we'll see how that unfolds.
01:25:58.800 | I mean, there's this question about a field like physics
01:26:02.200 | and sort of the quest for fundamental theory and so on,
01:26:05.440 | and there's both the science of what happens
01:26:07.900 | and there's the sort of the social aspect of what happens,
01:26:11.440 | because, you know, in a field
01:26:13.680 | that is basically as old as physics,
01:26:16.520 | we're at, I don't know what it is,
01:26:18.960 | the fourth or fifth generation,
01:26:19.880 | I don't know what generation it is, of physicists.
01:26:22.560 | And like, I was one of these, so to speak,
01:26:24.600 | and for me, the foundations were like the pyramids,
01:26:27.960 | so to speak, you know, it was that way
01:26:29.920 | and it was always that way.
01:26:31.280 | It is difficult in an old field
01:26:34.680 | to go back to the foundations
01:26:36.000 | and think about rewriting them.
01:26:37.960 | It's a lot easier in young fields
01:26:39.820 | where you're still dealing with the first generation
01:26:42.380 | of people who invented the field.
01:26:44.040 | And it tends to be the case, you know,
01:26:46.500 | that the nature of what happens in science
01:26:48.620 | tends to be, you know,
01:26:51.240 | typically the pattern is some methodological advance occurs
01:26:55.440 | and then there's a period of five years, 10 years,
01:26:57.560 | maybe a little bit longer than that,
01:26:59.280 | where there's lots of things that are now made possible
01:27:01.840 | by that methodological advance,
01:27:04.120 | whether it's, you know, I don't know, telescopes
01:27:06.840 | or whether that's some mathematical method or something.
01:27:09.760 | You know, something happens,
01:27:14.760 | a tool gets built and then you can do a bunch of stuff
01:27:18.600 | and there's a bunch of low-hanging fruit to be picked
01:27:21.640 | and that takes a certain amount of time.
01:27:24.020 | After that, all that low-hanging fruit is picked,
01:27:27.000 | then it's a hard slog for the next however many decades
01:27:31.200 | or century or more to get to the next sort of level
01:27:35.680 | at which one can do something.
01:27:36.680 | And it tends to be the case
01:27:39.480 | that fields get into that kind of,
01:27:41.480 | I wouldn't say cruise mode 'cause it's really hard work,
01:27:44.160 | but it's very hard work for very incremental progress.
01:27:48.040 | - And in your career and some of the things you've taken on,
01:27:51.080 | it feels like
01:27:52.440 | you haven't been afraid of the hard slog.
01:27:55.360 | - Yeah, that's true.
01:27:56.480 | - So it's quite interesting,
01:27:58.480 | especially on the engineering side.
01:28:01.480 | And a small tangent, when you were at Caltech,
01:28:05.200 | did you get to interact with Richard Feynman at all?
01:28:08.840 | Do you have any memories of Richard?
01:28:10.880 | - We worked together quite a bit actually.
01:28:13.520 | In fact, both when I was at Caltech
01:28:16.560 | and after I left Caltech, we were both consultants
01:28:20.120 | at this company called Thinking Machines Corporation,
01:28:22.200 | which was just down the street from here actually,
01:28:24.960 | an ultimately ill-fated company.
01:28:26.720 | But I used to say,
01:28:28.520 | this company is not gonna work
01:28:30.200 | with the strategy they have.
01:28:31.360 | And Dick Feynman always used to say,
01:28:33.080 | what do we know about running companies?
01:28:34.560 | Just let them run their company.
01:28:36.560 | But anyway, he was not into that kind of thing.
01:28:41.560 | And he always thought that my interest
01:28:44.080 | in doing things like running companies
01:28:45.480 | was a distraction, so to speak.
01:28:48.720 | And for me, it's a mechanism to have a more effective machine
01:28:55.120 | for actually figuring things out
01:28:58.280 | and getting things to happen.
01:28:59.520 | - Did he think of it that way? 'Cause essentially,
01:29:02.680 | what you did with the company,
01:29:03.560 | I don't know if you were thinking of it that way,
01:29:05.040 | but you're creating tools
01:29:09.920 | to empower the exploration of the universe.
01:29:12.800 | Do you think, did he--
01:29:15.080 | - Did he understand that point?
01:29:16.560 | - The point of tools of--
01:29:18.720 | - I think not as well as he might've done.
01:29:20.640 | I mean, you know,
01:29:22.960 | he was actually involved with my first company,
01:29:25.240 | which was involved with more mathematical computation
01:29:30.280 | kinds of things.
01:29:31.280 | You know, he had lots of advice
01:29:35.800 | about the technical side of what we should do and so on.
01:29:39.320 | - Do you have examples, memories or thoughts that--
01:29:41.360 | - Oh yeah, yeah, he had all kinds of,
01:29:43.120 | look, in the business of doing math,
01:29:46.720 | you know, one of the hard things
01:29:47.960 | is doing integrals and so on, right?
01:29:49.760 | And so he had his own elaborate ways
01:29:51.960 | to do integrals and so on.
01:29:53.440 | He had his own ways of thinking about sort of
01:29:55.360 | getting intuition about how math works.
01:29:57.960 | And so his sort of meta idea was,
01:30:01.720 | take those intuitional methods
01:30:03.440 | and make a computer follow those intuitional methods.
01:30:06.280 | Now it turns out for the most part,
01:30:09.120 | like when we do integrals and things,
01:30:11.080 | what we do is we build this kind of bizarre industrial
01:30:14.400 | machine that turns every integral into, you know,
01:30:17.080 | products of Meijer G functions
01:30:18.800 | and generates this very elaborate thing.
01:30:21.320 | And actually the big problem is turning the results
01:30:23.800 | into something a human will understand.
01:30:25.400 | It's not, quote, doing the integral.
01:30:27.680 | And actually Feynman did understand that to some extent.
01:30:30.240 | And I'm embarrassed to say,
01:30:32.360 | he once gave me this big pile of, you know,
01:30:35.040 | calculational methods for particle physics
01:30:37.160 | that he worked out in the '50s.
01:30:38.320 | And he said, you know, it's more use to you
01:30:39.640 | than to me, type thing.
01:30:40.600 | And I was like, I always intended to look at it
01:30:43.240 | and give it back, and it's still in my files now.
01:30:45.400 | But that's what happens
01:30:48.400 | with the finiteness of human lives.
01:30:50.680 | You know, maybe if he'd lived another 20 years,
01:30:53.480 | I would have remembered to give it back.
01:30:55.280 | But I think it's, you know,
01:30:57.720 | that was his attempt to systematize
01:31:00.880 | the ways that one does integrals
01:31:04.200 | that show up in particle physics and so on.
01:31:05.960 | Turns out the way we've actually done it
01:31:08.120 | is very different from that way.
01:31:09.760 | - What do you make of that difference?
01:31:11.600 | So Feynman was actually quite remarkable
01:31:14.280 | at creating sort of intuitive, like diving in, you know,
01:31:18.680 | creating intuitive frameworks
01:31:20.400 | for understanding difficult concepts is--
01:31:23.360 | - I'm smiling because, you know,
01:31:25.280 | the funny thing about him was that the thing
01:31:27.760 | he was really, really, really good at is calculating stuff.
01:31:31.760 | But he thought that was easy
01:31:33.200 | because he was really good at it.
01:31:35.560 | And so he would do these things
01:31:37.240 | where he would calculate some,
01:31:38.960 | do some complicated calculation
01:31:41.760 | in quantum field theory, for example,
01:31:43.400 | come out with a result.
01:31:45.040 | Wouldn't tell anybody about the complicated calculation
01:31:47.080 | 'cause he thought that was easy.
01:31:48.320 | He thought the really impressive thing
01:31:50.120 | was to have the simple intuition about how everything works.
01:31:53.760 | So he invented that at the end.
01:31:56.040 | And, you know, because he'd done this calculation
01:31:58.120 | and knew how it worked, it was a lot easier.
01:32:00.920 | It's a lot easier to have good intuition
01:32:02.600 | when you know what the answer is.
01:32:04.400 | And then he would just not tell anybody
01:32:06.680 | about these calculations.
01:32:07.720 | And he wasn't meaning that maliciously, so to speak.
01:32:10.360 | It's just, he thought that was easy.
01:32:12.240 | And that's, you know, that led to areas
01:32:15.520 | where people were just completely mystified
01:32:17.280 | and they kind of followed his intuition,
01:32:19.120 | but nobody could tell why it worked
01:32:21.200 | because actually the reason it worked
01:32:22.840 | was 'cause he'd done all these calculations
01:32:24.240 | and he knew that it would work.
01:32:26.120 | And, you know, when I, he and I worked a bit
01:32:28.360 | on quantum computers actually back in 1980, '81,
01:32:32.880 | before anybody had heard of those things.
01:32:35.240 | And, you know, the typical mode of,
01:32:37.320 | I mean, he always used to say,
01:32:39.800 | and I now think about this
01:32:40.800 | 'cause I'm about the age that he was
01:32:42.360 | when I worked with him.
01:32:44.000 | And, you know, I see that people
01:32:45.400 | are one third my age, so to speak.
01:32:47.640 | And he was always complaining
01:32:49.080 | that I was one third his age.
01:32:50.560 | (both laughing)
01:32:52.160 | Various things, but, you know,
01:32:54.560 | he would do some calculation by hand,
01:32:57.640 | you know, on a blackboard and things,
01:32:58.680 | come up with some answer.
01:33:00.600 | I'd say, "I don't understand this."
01:33:02.960 | You know, I do something with a computer
01:33:05.120 | and he'd say, you know, "I don't understand this."
01:33:08.680 | So there'd be some big argument about what was,
01:33:11.160 | you know, what was going on,
01:33:12.200 | but it was always,
01:33:14.520 | and I think actually many of the things
01:33:17.960 | that we sort of realized about quantum computing
01:33:21.560 | that were sort of issues that have to do
01:33:23.040 | particularly with the measurement process
01:33:25.200 | are kind of still issues today.
01:33:27.160 | And I kind of find it interesting.
01:33:28.600 | It's a funny thing in science that these,
01:33:31.160 | you know, that there's a remarkable,
01:33:34.080 | it happens in technology too,
01:33:35.240 | there's a remarkable sort of repetition of history
01:33:38.480 | that ends up occurring.
01:33:40.200 | Eventually things really get nailed down,
01:33:42.400 | but it often takes a while
01:33:44.120 | and it often things come back decades later.
01:33:46.720 | Well, for example, I could tell a story
01:33:49.880 | actually happened right down the street from here
01:33:52.320 | when we were both at thinking machines.
01:33:54.880 | I had been working on this particular cellular automaton
01:33:58.440 | called Rule 30 that has this feature
01:34:00.520 | that from very simple initial conditions,
01:34:03.320 | it makes really complicated behavior, okay?
01:34:06.080 | And actually, of all silly physical things,
01:34:10.400 | using this big parallel computer
01:34:13.400 | called the Connection Machine that that company was making,
01:34:17.000 | I generated this giant printout of Rule 30,
01:34:19.800 | actually on the same kind of printer
01:34:23.400 | that people use to make layouts for microprocessors.
01:34:28.400 | So one of these big, you know, large format printers
01:34:32.560 | with high resolution and so on.
01:34:34.640 | So, okay, we print this out, lots of very tiny cells.
01:34:38.600 | And so there was sort of a question
01:34:40.600 | of how to measure some features of that pattern.
01:34:44.240 | And so it was very much a physical, you know,
01:34:47.120 | on the floor with meter rules
01:34:48.520 | trying to measure different things.
01:34:50.480 | So, we'd been doing that for a little while,
01:34:53.480 | and Feynman kind of takes me aside
01:34:54.680 | and he says,
01:34:56.480 | "I just wanna know this one thing."
01:34:57.920 | He says, "I wanna know, how did you know
01:35:00.240 | that this Rule 30 thing would produce
01:35:02.960 | all this really complicated behavior
01:35:04.320 | that is so complicated that we're, you know,
01:35:06.880 | going around with this big printout and so on?"
01:35:09.200 | And I said, "Well, I didn't know.
01:35:11.520 | I just enumerated all the possible rules
01:35:13.760 | and then observed that that's what happened."
01:35:16.520 | He said, "Oh, I feel a lot better.
01:35:18.840 | You know, I thought you had some intuition
01:35:20.800 | that I didn't have, that would let one."
01:35:23.440 | I said, "No, no, no, no intuition,
01:35:25.040 | just experimental science."
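
For a sense of scale, "enumerate all the possible rules" is genuinely tractable here: there are only 2^8 = 256 elementary cellular automaton rules. Below is a rough sketch in Python of that kind of brute-force survey, assuming the standard rule-number encoding; the "no short period in the tail of the center column" heuristic is just an illustrative stand-in, not the criterion actually used.

```python
# Sketch: survey all 256 elementary cellular automaton rules and flag
# the ones whose center column doesn't settle into an obvious short
# cycle. Assumes the standard encoding: the binary digits of the rule
# number, indexed by the 3-cell neighborhood, form the update table.

def center_column(rule, steps=128):
    """Center-cell values over time, from a single black (1) cell."""
    width = 2 * steps + 3          # wide enough that edges never matter
    row = [0] * width
    row[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(row[width // 2])
        row = [(rule >> ((row[i - 1] << 2) | (row[i] << 1)
                         | row[(i + 1) % width])) & 1
               for i in range(width)]
    return col

def looks_complicated(col):
    """Crude heuristic: last half of the column has no period up to 16."""
    half = col[len(col) // 2:]
    return not any(all(half[i] == half[i + p] for i in range(len(half) - p))
                   for p in range(1, 17))

survey = [r for r in range(256) if looks_complicated(center_column(r))]
print(survey)  # Rule 30 is among the survivors of this crude filter
```
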
01:35:26.920 | - Oh, that's such a beautiful sort of dichotomy there
01:35:31.080 | of, that's exactly what you showed,
01:35:32.680 | is that you really can't have an intuition about it,
01:35:35.320 | you can't reduce it, I mean, you have to run it.
01:35:37.640 | - Yes, that's right.
01:35:38.480 | - That's so hard for us humans
01:35:39.800 | and especially brilliant physicists like Feynman
01:35:44.640 | to say that you can't have a compressed, clean intuition
01:35:49.640 | about how the whole thing works.
01:35:52.240 | - Yes, yes.
01:35:53.520 | No, he was, I mean, I think he was sort of on the edge
01:35:56.280 | of understanding that point about computation.
01:35:58.600 | And I think he found that,
01:36:00.280 | I think he always found computation interesting.
01:36:02.960 | And I think that was sort of what he was
01:36:04.680 | a little bit poking at.
01:36:06.360 | I mean, that intuition, you know,
01:36:08.760 | the difficulty of discovering things,
01:36:10.360 | like even you say, "Oh, you know,
01:36:11.960 | you just enumerate all the cases
01:36:13.160 | and just find one that does something interesting, right?"
01:36:15.480 | Sounds very easy.
01:36:16.840 | Turns out, like, I missed it when I first saw it
01:36:19.920 | because I had kind of an intuition
01:36:21.920 | that said it shouldn't be there.
01:36:23.400 | And so I had kind of arguments,
01:36:24.640 | "Oh, I'm gonna ignore that case because whatever."
01:36:27.480 | - How did you have an open mind enough?
01:36:31.200 | Because you're essentially the same person
01:36:32.880 | as Richard Feynman,
01:36:33.720 | the same kind of physics type of thinking.
01:36:36.320 | How did you find yourself having a sufficiently open mind
01:36:40.520 | to be open to watching simple rules reveal complexity?
01:36:44.920 | - Yeah, I think that's an interesting question.
01:36:46.240 | I've wondered about that myself
01:36:47.560 | 'cause it's kind of like, you know,
01:36:49.040 | you live through these things and then you say,
01:36:51.560 | "What was the historical story?"
01:36:53.240 | And sometimes the historical story
01:36:54.640 | that you realize after the fact
01:36:56.040 | was not what you lived through, so to speak.
01:36:58.680 | And so, you know, what I realized is I think
01:37:02.320 | what happened is, you know, I did physics
01:37:06.520 | kind of like reductionistic physics
01:37:08.840 | where you're thrown in the universe
01:37:10.280 | and you're told, "Go figure out what's going on inside it."
01:37:13.240 | And then I started building computer tools
01:37:16.240 | and I started building my first computer language,
01:37:18.640 | for example.
01:37:19.800 | And computer language is not like,
01:37:21.380 | it's sort of like physics in the sense
01:37:23.120 | that you have to take all those computations
01:37:24.800 | people want to do and kind of drill down
01:37:26.920 | and find the primitives that they can all be made of.
01:37:30.280 | But then you do something that's really different
01:37:31.960 | because you're just saying,
01:37:33.520 | "Okay, these are the primitives.
01:37:35.280 | "Now, you know, hopefully they'll be useful to people.
01:37:37.840 | "Let's build up from there."
01:37:39.340 | So you're essentially building an artificial universe
01:37:42.280 | in a sense where you make this language,
01:37:44.600 | you've got these primitives,
01:37:45.960 | you're just building whatever you feel like building.
01:37:48.880 | And that's, and so it was sort of interesting for me
01:37:51.720 | because from doing science where you're just
01:37:53.640 | thrown in the universe as the universe is
01:37:56.000 | to then just being told, you know,
01:37:58.800 | "You can make up any universe you want."
01:38:01.120 | And so I think that experience of making a computer language
01:38:04.760 | which is essentially building your own universe,
01:38:06.680 | so to speak, is, you know, that's kind of the,
01:38:11.160 | that's what gave me a somewhat different attitude
01:38:14.200 | towards what might be possible.
01:38:15.600 | It's like, let's just explore what can be done
01:38:17.840 | in these artificial universes
01:38:19.780 | rather than thinking the natural science way
01:38:22.880 | of let's be constrained by how the universe actually is.
01:38:25.440 | - Yeah, by being able to program,
01:38:26.720 | essentially you've, as opposed to being limited
01:38:29.720 | to just your mind and a pen, you now have,
01:38:34.280 | you've basically built another brain
01:38:36.040 | that you can use to explore the universe.
01:38:38.120 | - Yeah, yeah. - A computer program,
01:38:39.760 | you know, is a kind of brain.
01:38:41.600 | - Right, and it's, well, it's, or a telescope,
01:38:43.800 | or, you know, it's a tool.
01:38:44.960 | It lets you see stuff.
01:38:46.720 | - But there's something fundamentally different
01:38:48.040 | between a computer and a telescope.
01:38:49.520 | I mean, it just, I'm hoping not to romanticize the notion,
01:38:54.520 | but it's more general, the computer is more general
01:38:57.480 | than a telescope. - It is, it's more general.
01:38:58.320 | It's, I think, I mean, this point about,
01:39:01.420 | you know, people say, oh, such and such a thing
01:39:05.920 | was almost discovered at such and such a time.
01:39:08.500 | The distance between, you know,
01:39:10.800 | the building the paradigm that allows you
01:39:12.400 | building the paradigm that allows you
01:39:15.140 | to seeing what's going on, that's really hard.
01:39:18.200 | And, you know, I think in, I've been fortunate in my life
01:39:22.400 | that I've spent a lot of my time
01:39:23.760 | building computational language,
01:39:25.920 | and that's an activity that, in a sense,
01:39:29.640 | works by sort of having to kind of create
01:39:34.640 | another level of abstraction, and kind of be open
01:39:37.360 | to different kinds of structures.
01:39:39.140 | But, you know, I mean, I'm fully aware of
01:39:43.480 | the fact that I have seen, a bunch of times,
01:39:46.980 | how easy it is to miss the obvious, so to speak.
01:39:50.260 | That at least is factored into my attempt
01:39:53.240 | to not miss the obvious, although it may not succeed.
01:39:56.840 | - What do you think is the role of ego
01:40:01.640 | in the history of math and science?
01:40:04.240 | And more sort of, you know, a book title
01:40:08.480 | of something like "A New Kind of Science,"
01:40:10.920 | you've accomplished a huge amount.
01:40:14.160 | And in fact, somebody said that Newton didn't have an ego,
01:40:17.320 | and I looked into it, and he had a huge ego.
01:40:19.840 | - Yeah. - But from an outsider's
01:40:21.120 | perspective, some have said that you have
01:40:23.760 | a bit of an ego as well.
01:40:25.140 | Do you see it that way?
01:40:28.840 | Does ego get in the way?
01:40:30.120 | Is it empowering?
01:40:31.120 | Is it both sort of-- - No, it's complicated
01:40:34.120 | and unnecessary.
01:40:35.000 | I mean, you know, look, I've spent more than half
01:40:37.880 | my life CEO-ing a tech company.
01:40:39.680 | - Right. - Okay?
01:40:40.840 | And, you know, I think what it actually means is
01:40:47.260 | that one's ego is not a distant thing.
01:40:51.340 | It's a thing that one encounters every day, so to speak,
01:40:53.860 | 'cause it's all tied up with leadership
01:40:56.500 | and with how one, you know, develops an organization
01:40:59.140 | and all these kinds of things.
01:41:00.100 | So, you know, it may be that if I'd been an academic,
01:41:02.860 | for example, I could have sort of, you know,
01:41:05.160 | checked the ego, put it on a shelf somewhere
01:41:08.180 | and ignored its characteristics, but--
01:41:10.220 | - But you're reminded of it quite often
01:41:12.500 | in the context of running a company.
01:41:15.180 | - Sure, I mean, that's what it's about.
01:41:16.980 | It's about leadership and, you know,
01:41:19.220 | leadership is intimately tied to ego.
01:41:22.700 | Now, what does it mean?
01:41:23.660 | I mean, what is the, you know, for me,
01:41:26.420 | I've been fortunate that I think I have reasonable
01:41:29.500 | intellectual confidence, so to speak.
01:41:31.700 | That is, you know, I'm one of these people
01:41:35.460 | who at this point, if somebody tells me something
01:41:37.460 | and I just don't understand it, my conclusion isn't
01:41:41.220 | that I'm dumb; my conclusion is
01:41:44.860 | there's something wrong with what I'm being told.
01:41:47.500 | And actually, Dick Feynman used to have
01:41:49.300 | that feature too; he never really believed in experts.
01:41:52.460 | He actually believed in experts much less
01:41:54.660 | than I believe in experts, so--
01:41:57.540 | - Wow, so that's a fundamentally powerful property of ego
01:42:02.540 | and saying like, not that I am wrong,
01:42:06.560 | but that the world is wrong in telling me,
01:42:11.140 | like when confronted with a fact that doesn't fit
01:42:14.180 | the thing that you've really thought through,
01:42:16.700 | sort of both the negative and the positive of ego.
01:42:19.820 | Do you see the negative of that get in the way,
01:42:23.460 | sort of being confronted with--
01:42:24.300 | - Sure, there are mistakes I've made that are the result
01:42:27.140 | of "I'm pretty sure I'm right," and it turns out I'm not.
01:42:31.560 | I mean, that's the, you know, but the thing is
01:42:34.620 | that the idea that one tries to do things that,
01:42:39.620 | so for example, you know, one question is,
01:42:42.340 | if people have tried hard to do something
01:42:44.380 | and then one thinks, maybe I should try doing this myself,
01:42:48.420 | if one does not have a certain degree
01:42:50.500 | of intellectual confidence, one just says,
01:42:52.100 | well, people have been trying to do this for 100 years,
01:42:54.340 | how am I gonna be able to do this?
01:42:56.180 | And, you know, I was fortunate in the sense
01:42:58.580 | that I happened to start having some degree of success
01:43:01.860 | in science and things when I was really young.
01:43:04.060 | And so that developed a certain amount
01:43:05.980 | of sort of intellectual confidence
01:43:07.700 | that I don't think I otherwise would have had.
01:43:09.860 | And, you know, in a sense, I mean, I was fortunate
01:43:12.940 | that I was working in a field, particle physics,
01:43:15.620 | during its sort of golden age of rapid progress.
01:43:18.980 | And that kind of gives one a false sense of achievement
01:43:22.540 | because it's kind of easy to discover stuff
01:43:24.940 | that's gonna survive if you happen to be, you know,
01:43:27.180 | picking the low-hanging fruit
01:43:28.340 | of a rapidly expanding field.
01:43:30.580 | - I mean, the reason I totally immediately understood
01:43:33.780 | the ego behind "A New Kind of Science," to me,
01:43:36.540 | let me sort of just try to express my feelings
01:43:38.620 | on the whole thing, is that if you don't allow
01:43:42.540 | that kind of ego, then you would never write that book.
01:43:46.780 | That you would say, well, people must have done this.
01:43:49.140 | You would not dig.
01:43:50.740 | You would not keep digging. - Yeah, that's right.
01:43:52.460 | - And I think that was, I think you have to take that ego
01:43:57.260 | and write it and see where it takes you.
01:43:59.420 | And that's how you create exceptional work.
01:44:03.660 | - But I think the other point about that book was,
01:44:06.380 | it was a non-trivial question, how to take a bunch of ideas
01:44:10.020 | that are, I think, reasonably big ideas.
01:44:12.340 | They might, you know, their importance is determined
01:44:15.820 | by what happens historically.
01:44:16.940 | One can't tell how important they are.
01:44:18.260 | One can tell sort of the scope of them.
01:44:20.840 | And the scope is fairly big.
01:44:22.780 | And they're very different from things
01:44:25.220 | that have come before.
01:44:26.140 | And the question is, how do you explain
01:44:27.400 | that stuff to people?
01:44:28.800 | And so I had had the experience of sort of saying,
01:44:31.940 | well, there are these things, there's a cellular automaton,
01:44:34.340 | it does this, it does that.
01:44:36.060 | And people are like, oh, it must be just like this.
01:44:38.060 | It must be just like that.
01:44:39.220 | So no, it isn't.
01:44:40.300 | It's something different, right?
01:44:42.340 | - And so you could have done sort of,
01:44:44.180 | I'm really glad you did what you did,
01:44:45.460 | but you could have done it sort of academically,
01:44:47.340 | just kept publishing small papers here and there.
01:44:50.540 | And then you would just keep getting
01:44:51.780 | this kind of resistance, right?
01:44:53.060 | You would get that, as opposed to just dropping
01:44:56.660 | a thing that says, here it is.
01:44:58.140 | Here's the full thing.
01:45:00.180 | - No, I mean, that was my calculation,
01:45:01.500 | is that basically, you know,
01:45:03.300 | you could introduce little pieces
01:45:05.500 | and it's like, you know, one possibility is,
01:45:08.020 | it's the secret weapon, so to speak.
01:45:10.360 | It's this, you know, I keep on, you know,
01:45:12.460 | discovering these things in all these different areas.
01:45:14.220 | Where'd they come from?
01:45:15.340 | Nobody knows.
01:45:16.340 | But I decided, you know,
01:45:18.460 | one only has one life to lead,
01:45:19.980 | and, you know, writing that book took me a decade anyway.
01:45:24.300 | There's not a lot of wiggle room, so to speak.
01:45:26.220 | One can't be wrong by a factor of three, so to speak,
01:45:28.900 | and how long it's gonna take.
01:45:30.980 | That I, you know, I thought the best thing to do,
01:45:33.980 | the thing that is most sort of,
01:45:36.540 | that most respects the intellectual content, so to speak,
01:45:41.540 | is you just put it out with as much force as you can,
01:45:45.780 | because it's not something where,
01:45:47.660 | and, you know, it's an interesting thing.
01:45:49.500 | You talk about ego, and it's, you know,
01:45:51.900 | for example, I run a company which has my name on it, right?
01:45:55.700 | I thought about starting a club for people
01:45:57.740 | whose companies have their names on them,
01:45:59.540 | and it's a funny group,
01:46:00.980 | because we're not a bunch of egomaniacs.
01:46:03.300 | That's not what it's about, so to speak.
01:46:05.740 | It's about basically sort of taking responsibility
01:46:08.860 | for what one's doing, and, you know, in a sense,
01:46:12.260 | any of these things where you're sort of
01:46:13.740 | putting yourself on the line,
01:46:16.100 | it's kind of a funny, it's a funny dynamic,
01:46:21.980 | because in a sense, my company is sort of something
01:46:26.340 | that happens to have my name on it,
01:46:28.060 | but it's kind of bigger than me,
01:46:29.260 | and I'm kind of just its mascot at some level.
01:46:32.340 | I mean, I also happen to be a pretty, you know,
01:46:34.740 | strong leader of it, but--
01:46:36.980 | - But it's basically showing a deep,
01:46:39.820 | inextricable sort of investment.
01:46:44.620 | The same, your name, like Steve Jobs's name
01:46:47.380 | wasn't on Apple, but he was Apple.
01:46:52.380 | - Yes. - Elon Musk's name
01:46:54.500 | is not on Tesla, but he is Tesla.
01:46:57.260 | So it's like, meaning, emotionally,
01:46:59.980 | if the company succeeds or fails,
01:47:02.660 | he would suffer through that.
01:47:06.060 | And so that's a beautiful--
01:47:07.820 | - Yeah, it's recognizing that fact.
01:47:09.460 | - And also, Wolfram's a pretty good branding name,
01:47:11.540 | so it works out. (laughs)
01:47:12.780 | - Yeah, right, exactly.
01:47:14.220 | I think Steve had a bad deal there.
01:47:16.500 | - Yeah, so you made up for it with the last name.
01:47:19.780 | Okay, so in 2002, you published "A New Kind of Science,"
01:47:24.780 | to which, sort of on a personal level,
01:47:28.860 | I can credit my love for cellular automaton
01:47:30.980 | and computation in general.
01:47:32.380 | I think a lot of others can as well.
01:47:35.540 | Can you briefly describe the vision, the hope,
01:47:40.540 | the main idea presented in this 1,200-page book?
01:47:46.220 | - Sure, although it took 1,200 pages to say it in the book.
01:47:50.380 | So, for the real idea,
01:47:54.980 | a good way to get into it is to look at, sort of,
01:47:56.940 | the arc of history and to look at what's happened
01:47:58.980 | in kind of the development of science.
01:48:00.940 | I mean, there was this sort of big idea in science
01:48:03.960 | about 300 years ago that was,
01:48:06.360 | let's use mathematical equations
01:48:08.820 | to try and describe things in the world.
01:48:11.100 | Let's use sort of the formal idea of mathematical equations
01:48:14.860 | to describe what might be happening in the world,
01:48:16.880 | rather than, for example,
01:48:18.140 | just using sort of logical argumentation and so on.
01:48:20.800 | Let's have a formal theory about that.
01:48:23.740 | And so there'd been this 300-year run
01:48:26.020 | of using mathematical equations
01:48:27.420 | to describe the natural world,
01:48:28.500 | which had worked pretty well.
01:48:30.100 | But I got interested in how one could generalize
01:48:33.660 | that notion.
01:48:34.820 | There is a formal theory, there are definite rules,
01:48:37.460 | but what structure could those rules have?
01:48:40.080 | And so what I got interested in was,
01:48:42.380 | let's generalize beyond the sort of purely
01:48:44.500 | mathematical rules, and we now have this sort of notion
01:48:48.100 | of programming and computing and so on.
01:48:50.680 | Let's use the kinds of rules that can be embodied
01:48:53.940 | in programs as a sort of generalization
01:48:57.420 | of the ones that can exist in mathematics
01:48:59.760 | as a way to describe the world.
01:49:01.740 | And so my kind of favorite version
01:49:04.860 | of these kinds of simple rules
01:49:07.180 | are these things called cellular automata.
01:49:09.100 | And so a typical case--
01:49:10.620 | - So wait, what are cellular automata?
01:49:13.980 | - Fair enough.
01:49:14.820 | So typical case of a cellular automaton,
01:49:16.980 | it's an array of cells.
01:49:19.380 | It's just a line of discrete cells.
01:49:23.140 | Each cell is either black or white.
01:49:25.420 | And in a series of steps that you can represent
01:49:28.460 | as lines going down a page,
01:49:30.640 | you're updating the color of each cell
01:49:32.660 | according to a rule that depends on the color
01:49:34.940 | of the cell above it and to its left and right.
01:49:37.220 | So it's really simple.
01:49:38.040 | So a thing might be, you know,
01:49:40.580 | if the cell and its right neighbor are not the same,
01:49:48.660 | and/or the cell on the left is black or something,
01:49:53.660 | then make it black on the next step.
01:49:56.060 | And if not, make it white.
01:49:57.900 | Typical rule.
01:49:58.800 | That rule, I'm not sure I said it exactly right,
01:50:02.300 | but a rule very much like what I just said
01:50:04.680 | has the feature that if you started off
01:50:06.360 | from just one black cell at the top,
01:50:08.420 | it makes this extremely complicated pattern.
01:50:10.740 | So some rules, you get a very simple pattern.
01:50:14.580 | Some rules, you have the rule is simple.
01:50:18.440 | You start them off from a sort of simple seed.
01:50:20.840 | You just get this very simple pattern.
01:50:23.020 | But other rules, and this was the big surprise
01:50:25.940 | when I started actually just doing
01:50:27.460 | the simple computer experiments to find out what happens,
01:50:30.360 | is that they produce very complicated patterns of behavior.
01:50:33.780 | So for example, this rule 30 rule has the feature
01:50:38.060 | you started from just one black cell at the top,
01:50:40.580 | makes this very random pattern.
01:50:43.320 | If you look like at the center column of cells,
01:50:46.500 | you get a series of values.
01:50:48.920 | It goes black, white, black, black, whatever it is.
01:50:51.580 | That sequence seems for all practical purposes random.
01:50:55.640 | So it's kind of like in math,
01:50:58.960 | you compute the digits of pi, 3.1415926, whatever.
01:51:03.800 | Those digits once computed,
01:51:05.920 | I mean, the scheme for computing pi,
01:51:07.980 | it's the ratio of the circumference
01:51:09.380 | to diameter of a circle, very well-defined.
01:51:11.960 | But yet, once you've generated those digits,
01:51:16.000 | they seem for all practical purposes completely random.
01:51:19.060 | And so it is with rule 30,
01:51:21.280 | that even though the rule is very simple, much simpler,
01:51:24.300 | much more sort of computationally obvious
01:51:27.100 | than the rule for generating digits of pi,
01:51:29.400 | even with a rule that simple,
01:51:31.260 | you're still generating immensely complicated behavior.
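
To make that concrete, here is a minimal sketch in Python of the kind of one-dimensional cellular automaton being described. The encoding, where the binary digits of the rule number (here 30), indexed by the three-cell neighborhood, give the update table, is the standard convention; the wrap-around at the edges and the function names are just choices made for this illustration.

```python
# Minimal sketch of an elementary (one-dimensional, two-color,
# nearest-neighbor) cellular automaton, with Rule 30 as the default.

def step(cells, rule=30):
    """One update: each cell's new color depends on (left, self, right)."""
    n = len(cells)
    new = [0] * n
    for i in range(n):
        # Read the 3-cell neighborhood as a 3-bit index, wrapping at edges.
        index = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # The rule number's binary digits are the update table; for Rule 30
        # this works out to: new = left XOR (center OR right).
        new[i] = (rule >> index) & 1
    return new

def run(steps=40, width=81, rule=30):
    """Start from a single black cell and print successive rows."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)

run()
```

Printed out, the rows show the behavior described above: some regular structure on one side and an interior that, for all practical purposes, looks random.
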
01:51:34.220 | - Yeah, so if we could just pause on that,
01:51:35.900 | I think you probably have said it and looked at it so long,
01:51:39.240 | you forgot the magic of it, or perhaps you don't,
01:51:41.540 | you still feel the magic.
01:51:42.580 | But to me, if you've never seen sort of,
01:51:47.060 | I would say, what is it, a one-dimensional,
01:51:49.500 | essentially, cellular automaton, right?
01:51:52.580 | And you were to guess what you would see
01:51:56.100 | if you have some sort of cells
01:52:03.280 | that only respond to their neighbors.
01:52:03.280 | - Right. - If you were to guess
01:52:05.340 | what kind of things you would see,
01:52:07.380 | like my initial guess,
01:52:10.060 | like even when I first opened your book
01:52:12.100 | on "The New Kind of Science," right?
01:52:13.940 | My initial guess is you would see,
01:52:16.300 | I mean, it would be very simple stuff.
01:52:19.460 | - Right. - And I think
01:52:21.100 | it's a magical experience to realize
01:52:23.660 | the kind of complexity, you mentioned rule 30,
01:52:26.420 | still your favorite cellular automaton?
01:52:28.820 | - Still my favorite rule, yes.
01:52:30.420 | - You get complexity, immense complexity.
01:52:34.540 | You get arbitrary complexity.
01:52:36.860 | - Yes. - And when you say
01:52:37.900 | randomness down the middle column,
01:52:41.420 | that's just one cool way to say
01:52:44.900 | that there's incredible complexity.
01:52:46.860 | And that's just, I mean, that's a magical idea.
01:52:50.900 | However you start to interpret it,
01:52:52.340 | all the reducibility discussions, all that,
01:52:54.780 | but it's just, I think,
01:52:56.380 | that has profound philosophical
01:52:59.380 | kind of notions around it, too.
01:53:01.740 | It's not just-- - Oh, yeah.
01:53:03.220 | - I mean, it's transformational about how you see the world.
01:53:05.580 | I think for me, it was transformational.
01:53:07.580 | I don't know, we can have all kinds of discussions
01:53:10.220 | about computation and so on, but just,
01:53:12.940 | I sometimes think if I were on a desert island
01:53:17.780 | and, I don't know, maybe on some psychedelics
01:53:22.300 | or something, but if I had to take one book,
01:53:24.780 | I mean, "New Kind of Science" would be it
01:53:26.180 | 'cause you could just enjoy that notion.
01:53:29.140 | For some reason, it's a deeply profound notion,
01:53:31.500 | at least to me. - I find it that way, yeah.
01:53:33.300 | I mean, look, it's been,
01:53:35.940 | it was a very intuition-breaking thing to discover.
01:53:40.740 | I mean, it's kind of like, you know,
01:53:42.660 | you point the computational telescope out there
01:53:45.660 | and suddenly you see, I don't know, you know,
01:53:48.860 | in the past, it's kind of like, you know,
01:53:50.460 | moons of Jupiter or something,
01:53:51.620 | but suddenly you see something that's kind of
01:53:52.940 | very unexpected, and Rule 30 was very unexpected for me.
01:53:56.620 | And the big challenge at a personal level
01:53:58.780 | was to not ignore it.
01:54:01.060 | I mean, people, you know, in other words,
01:54:03.220 | you might say, you know-- - It's a bug.
01:54:05.580 | - What would you say, yeah, what would you say?
01:54:07.340 | - Yeah, I mean, I-- - What are we looking at,
01:54:08.980 | by the way?
01:54:09.820 | - Well, I was just generating here,
01:54:10.980 | I'll actually generate a Rule 30 pattern.
01:54:13.140 | So that's the rule for Rule 30, and it says,
01:54:17.500 | for example, it says here, if you have a black cell
01:54:20.260 | in the middle and a black cell to the left
01:54:21.780 | and a white cell to the right,
01:54:22.860 | then the cell on the next step will be white.
01:54:25.420 | And so here's the actual pattern that you get
01:54:27.900 | starting off from a single black cell at the top there.
01:54:31.540 | And then-- - That's the initial state,
01:54:33.980 | initial condition. - That's the initial thing,
01:54:35.560 | you just start off from that,
01:54:36.820 | and then you're going down the page,
01:54:38.940 | and at every step, you're just applying this rule
01:54:43.060 | to find out the new value that you get.
01:54:45.420 | And so you might think, with a rule that simple,
01:54:48.300 | there's gotta be some trace
01:54:50.460 | of that simplicity here.
01:54:52.140 | Okay, we'll run it, let's say, for 400 steps.
01:54:55.340 | That's what it does. It's kind of aliasing a bit
01:54:57.540 | on the screen there, but you can see
01:54:59.580 | there's a little bit of regularity over on the left.
01:55:02.380 | But there's a lot of stuff here that just looks
01:55:06.320 | very complicated, very random,
01:55:08.520 | and that was a big sort of shock
01:55:12.160 | to my intuition, at least,
01:55:14.520 | that that's possible.
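
As a small check of the rule icon being read off the screen, this sketch (same rule-number convention as the earlier one) prints Rule 30's full eight-entry update table; the black-black-white case comes out white, matching the description.

```python
# Print Rule 30's update table: each 3-bit neighborhood (left, center,
# right), read as a number from 7 down to 0, selects one binary digit
# of 30.
RULE = 30
for index in range(7, -1, -1):
    left, center, right = (index >> 2) & 1, (index >> 1) & 1, index & 1
    print(f"{left}{center}{right} -> {(RULE >> index) & 1}")
# The "110 -> 0" line is the case described: black cell in the middle,
# black to the left, white to the right gives white on the next step.
```
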
01:55:15.640 | - Your mind immediately starts, is there a pattern?
01:55:18.200 | There must be a repetitive pattern.
01:55:19.960 | - Yeah, right. - There must be,
01:55:20.780 | that's where the mind goes. - Well, right, so I spent,
01:55:22.160 | so indeed, that's what I thought at first,
01:55:24.320 | and I thought, well, this is kind of interesting,
01:55:27.080 | but if we run it long enough, we'll see,
01:55:30.000 | something will resolve into something simple.
01:55:32.580 | And I did all kinds of analysis using mathematics,
01:55:37.260 | statistics, cryptography, whatever,
01:55:40.400 | to try and crack it, and I never succeeded.
01:55:43.580 | And after I hadn't succeeded for a while,
01:55:45.300 | I started thinking, maybe there's a real phenomenon here
01:55:48.940 | that is the reason I'm not succeeding.
01:55:50.620 | Maybe, I mean, the thing that, for me,
01:55:52.580 | was sort of a motivating factor was looking
01:55:55.140 | at the natural world and seeing all this complexity
01:55:57.500 | that exists in the natural world,
01:55:59.100 | the question is, where does it come from?
01:56:00.740 | You know, what secret does nature have
01:56:03.000 | that lets it make all this complexity
01:56:05.100 | that we humans, when we engineer things,
01:56:07.300 | typically are not making?
01:56:09.060 | We're typically making things
01:56:10.300 | that at least look quite simple to us.
01:56:12.820 | And so the shock here was, even from something very simple,
01:56:16.660 | you're making something that complex.
01:56:19.360 | Maybe this is getting at sort of the secret that nature has
01:56:23.020 | that allows it to make really complex things,
01:56:25.820 | even though its underlying rules may not be that complex.
01:56:29.420 | - How did it make you feel?
01:56:30.620 | If we look at the Newton apple moment,
01:56:33.460 | was there, you know, a moment where you took a walk
01:56:36.460 | and something profoundly hit you,
01:56:40.220 | or was this a gradual thing?
01:56:42.060 | A lobster being boiled?
01:56:43.380 | - The truth of every sort of science discovery
01:56:47.220 | is it's not that gradual.
01:56:49.260 | I mean, I've spent, I happen to be interested
01:56:51.540 | in scientific biography kinds of things,
01:56:53.140 | and so I've tried to track down, you know,
01:56:54.580 | how did people come to figure out this or that thing?
01:56:57.660 | And there's always a long kind of sort of preparatory,
01:57:02.660 | you know, there's a need to be prepared
01:57:05.740 | and a mindset in which it's possible to see something.
01:57:08.660 | I mean, in the case of Rule 30,
01:57:10.460 | it was around June 1st, 1984,
01:57:12.900 | and it was kind of a silly story in some ways.
01:57:15.940 | I finally had a high resolution laser printer.
01:57:18.660 | So I was able, so I thought,
01:57:19.880 | I'm gonna generate a bunch of pictures
01:57:21.300 | of these cellular automata, and I generate this one,
01:57:24.400 | and I take it on some plane flight to Europe,
01:57:28.460 | and I have this with me, and it's like,
01:57:30.420 | you know, I really should try to understand this,
01:57:33.500 | and this is really, you know,
01:57:35.140 | this is, I really don't understand what's going on,
01:57:37.500 | and that was kind of the, you know,
01:57:39.820 | slowly trying to see what was happening.
01:57:43.860 | It was not, it was depressingly unsudden, so to speak,
01:57:48.160 | in the sense that a lot of these ideas,
01:57:51.420 | like principle of computational equivalence, for example,
01:57:54.760 | you know, I thought, well, that's a possible thing.
01:57:57.240 | I didn't know if it's correct.
01:57:58.760 | Still didn't know for sure that it's correct,
01:58:01.480 | but it's sort of a gradual thing,
01:58:02.840 | that these things gradually kind of become,
01:58:05.980 | seem more important than one thought.
01:58:08.120 | I mean, I think the whole idea
01:58:10.200 | of studying the computational universe of simple programs,
01:58:13.400 | it took me probably a decade, decade and a half
01:58:17.280 | to kind of internalize
01:58:18.800 | that that was really an important idea.
01:58:21.700 | And I think, you know, if it turns out,
01:58:23.240 | we find the whole universe lurking out there
01:58:25.740 | in the computational universe,
01:58:27.440 | that's a good, you know,
01:58:28.820 | it's a good brownie point or something for the whole idea.
01:58:32.320 | But I think that the thing that's strange
01:58:35.620 | in this whole question about, you know,
01:58:37.840 | finding this different raw material
01:58:39.480 | for making models of things,
01:58:41.060 | what's been interesting sort of in the arc of history
01:58:45.140 | is, you know, for 300 years,
01:58:46.540 | it's kind of like the mathematical equations approach.
01:58:49.540 | It was the winner.
01:58:50.580 | It was the thing, you know,
01:58:51.500 | you want to have a really good model
01:58:53.180 | for something that's what you use.
01:58:55.300 | The thing that's been remarkable
01:58:56.780 | is just in the last decade or so,
01:58:59.100 | I think one can see a transition
01:59:00.660 | to using not mathematical equations,
01:59:03.600 | but programs as sort of the raw material
01:59:06.260 | for making models of stuff.
01:59:08.260 | And that's pretty neat.
01:59:10.020 | And it's kind of, you know,
01:59:11.720 | as somebody who's kind of lived inside this paradigm shift,
01:59:14.520 | so to speak, it is bizarre.
01:59:17.380 | I mean, no doubt in sort of the history of science,
01:59:19.840 | that will be seen as an instantaneous paradigm shift,
01:59:22.860 | but it sure isn't instantaneous when it's played out
01:59:25.180 | in one's actual life, so to speak.
01:59:26.940 | It seems glacial.
01:59:28.320 | And it's the kind of thing where it's sort of interesting
01:59:33.340 | because in the dynamics of sort of the adoption
01:59:36.980 | of ideas like that into different fields,
01:59:40.460 | the younger the field, the faster the adoption typically,
01:59:43.660 | because people are not kind of locked in
01:59:46.780 | where the fifth generation of people
01:59:48.300 | who've studied this field, and it is the way it is,
01:59:52.200 | and it can never be any different.
01:59:53.840 | And I think that's been, you know,
01:59:55.720 | watching that process has been interesting.
01:59:57.960 | I mean, I think I'm fortunate that I've,
02:00:01.460 | I do stuff mainly 'cause I like doing it.
02:00:05.560 | And that makes me kind of thick-skinned
02:00:09.840 | about the world's response to what I do.
02:00:12.040 | But that's definitely, you know,
02:00:15.560 | and anytime you write a book called something
02:00:18.080 | like "A New Kind of Science," it's kind of,
02:00:21.080 | the pitchforks will come out for the old kind of science.
02:00:25.000 | And it was interesting dynamics.
02:00:26.660 | I have to say that I was fully aware of the fact that
02:00:33.600 | when you see sort of incipient paradigm shifts in science,
02:00:38.460 | the vigor of the negative response upon early introduction
02:00:43.160 | is a fantastic positive indicator
02:00:45.600 | of good long-term results.
02:00:48.440 | So in other words, if people just don't care,
02:00:51.800 | it's, you know, that's not such a good sign.
02:00:55.120 | If they're like, oh, this is great,
02:00:56.720 | that means you didn't really discover anything interesting.
02:00:59.680 | - What fascinating properties of Rule 30
02:01:03.240 | have you discovered over the years?
02:01:05.000 | You've recently announced the Rule 30 prizes
02:01:07.440 | for solving three key problems.
02:01:09.720 | Can you maybe talk about interesting properties
02:01:12.440 | that have been kind of revealed,
02:01:15.360 | of Rule 30 or other cellular automata,
02:01:17.240 | and what problems are still before us,
02:01:19.840 | like the three problems you've announced?
02:01:21.360 | - Yeah, yeah, right.
02:01:22.200 | So I mean, the most interesting thing about cellular automata
02:01:27.040 | is that it's hard to figure stuff out about them.
02:01:29.440 | And in a sense,
02:01:32.520 | every time you sort of
02:01:34.260 | try and bash them with some other technique,
02:01:37.560 | you say, can I crack them?
02:01:40.040 | The answer is they seem to be uncrackable.
02:01:42.360 | They seem to have the feature
02:01:46.160 | that they're sort of showing irreducible computation.
02:01:49.200 | You're not able to say,
02:01:51.440 | oh, I know exactly what this is going to do.
02:01:53.560 | It's going to do this or that.
02:01:58.520 | - But there are specific formulations of that fact.
02:01:58.520 | - Yes, right.
02:01:59.360 | So I mean, for example, in Rule 30,
02:02:01.760 | in the pattern you get just starting
02:02:03.320 | from a single black cell,
02:02:04.880 | you get this very sort of
02:02:07.120 | random-looking pattern.
02:02:10.080 | And so one feature of that,
02:02:11.160 | just look at the center column.
02:02:12.840 | And for example, we used that for a long time
02:02:15.480 | to generate randomness in Wolfram Language.
02:02:18.000 | Just, you know, what Rule 30 produces.
02:02:20.560 | Now the question is, can you prove how random it is?
02:02:23.880 | So for example, one very simple question,
02:02:26.320 | can you prove that it'll never repeat?
02:02:28.760 | We haven't been able to show that it will never repeat.
02:02:31.520 | We know that if there are two adjacent columns,
02:02:35.760 | we know they can't both repeat.
02:02:37.680 | But just knowing whether that center column can ever repeat,
02:02:40.560 | we still don't even know that.
02:02:46.680 | Another problem that I sort of put in my collection,
02:02:49.120 | you know, it's like $30,000
02:02:53.000 | for these three prizes about Rule 30.
02:02:55.240 | I would say that
02:02:57.000 | this is one of those cases where
02:02:57.000 | the money is not the main point,
02:02:58.840 | but it's just, you know,
02:03:01.160 | helps motivate somehow the investigation.
02:03:05.680 | - So there's three problems you propose,
02:03:07.280 | you get $30,000 if you solve all three,
02:03:09.800 | or maybe, I don't know.
02:03:10.640 | - No, it's 10,000 for each.
02:03:12.120 | - For each, right.
02:03:12.960 | - Yeah, my--
02:03:13.920 | - The problem is, that's right, money's not the thing.
02:03:16.040 | The problems themselves,
02:03:17.000 | they're just clean formulations of the challenge.
02:03:19.880 | - It's just, you know, will it ever become periodic?
02:03:22.840 | Second problem is, are there an equal number
02:03:25.020 | of black and white cells?
02:03:26.320 | - Down the middle column.
02:03:27.160 | - Down the middle column.
02:03:28.420 | And the third problem is a little bit harder to state,
02:03:30.160 | which is essentially, is there a way of figuring out
02:03:33.240 | what the color of a cell at position T
02:03:36.600 | down the center column is
02:03:39.160 | with a less computational effort than about T steps?
02:03:43.320 | So in other words, is there a way to jump ahead and say,
02:03:46.320 | I know what this is gonna do, you know,
02:03:48.120 | it's just some mathematical function of T.
02:03:53.120 | - Or proving that there is no way.
02:03:55.000 | - Or proving there is no way, yes.
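
For concreteness, here is a rough sketch of what the first two prize questions mean, checked on a finite prefix of the center column. A finite check like this settles nothing about the infinite column, which is exactly why the problems are open; the helper function and the cutoffs are illustrative choices, not part of the actual prize statements.

```python
# Generate a prefix of Rule 30's center column and probe the first two
# prize questions empirically (standard rule-number encoding assumed).

def center_column(rule, steps):
    width = 2 * steps + 3              # edges never reach the center
    row = [0] * width
    row[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(row[width // 2])
        row = [(rule >> ((row[i - 1] << 2) | (row[i] << 1)
                         | row[(i + 1) % width])) & 1
               for i in range(width)]
    return col

col = center_column(30, 2000)

# Problem 1: does the column ever become periodic? Here: is the second
# half of this prefix exactly p-periodic for any period p?
half = col[len(col) // 2:]
periodic = any(all(half[i] == half[i + p] for i in range(len(half) - p))
               for p in range(1, len(half) // 2))
print("periodic tail in this prefix:", periodic)        # expect False

# Problem 2: do black (1) and white (0) occur equally often in the limit?
print("fraction of black cells so far:", sum(col) / len(col))  # near 0.5
```
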
02:03:56.680 | But both, I mean, you know, for any one of these,
02:03:59.080 | one could prove that, you know, one could discover,
02:04:01.760 | you know, we know what Rule 30 does for a billion steps,
02:04:04.680 | but, and maybe we'll know for a trillion steps
02:04:07.040 | before too very long, but maybe at a quadrillion steps,
02:04:10.500 | it suddenly becomes repetitive.
02:04:12.280 | You might say, how could that possibly happen?
02:04:14.840 | But so when I was writing up these prizes,
02:04:17.240 | I thought, and this is typical of what happens
02:04:19.440 | in the computational universe,
02:04:20.520 | I thought, let me find an example where it looks like
02:04:24.020 | it's just gonna be random forever,
02:04:25.480 | but actually it becomes repetitive.
02:04:27.520 | And I found one. And it's just, you know, I did a search,
02:04:30.060 | I don't know, maybe a million different rules
02:04:33.520 | with some criterion, and this is,
02:04:36.540 | what's sort of interesting about that is,
02:04:38.600 | I kind of have this thing that I say in a kind of silly way
02:04:42.100 | about the computational universe, which is, you know,
02:04:44.480 | the animals are always smarter than you are.
02:04:46.640 | That is, there's always some way
02:04:47.800 | one of these computational systems
02:04:49.200 | is gonna figure out how to do something,
02:04:51.040 | even though I can't imagine how it's gonna do it.
02:04:53.600 | And, you know, I didn't think I would find one.
02:04:55.760 | You would think, after all these years,
02:04:57.400 | after I've seen sort of all the possible
02:04:59.700 | funky things out there,
02:05:05.300 | that I would have gotten my intuition wrapped
02:05:07.360 | around the idea that, you know,
02:05:10.160 | these creatures in the computational universe
02:05:12.720 | are always smarter than I'm gonna be, but, you know--
02:05:15.440 | - Well, they're equivalently smart, right?
02:05:17.240 | - That's correct.
02:05:18.080 | And that makes one feel very sort of,
02:05:21.800 | it's humbling every time, because every time,
02:05:26.560 | you know, you think it's gonna do this,
02:05:28.000 | or it's not gonna be possible to do this,
02:05:29.960 | and it turns out it finds a way.
02:05:31.640 | - Of course, the promising thing is,
02:05:32.880 | there's a lot of other rules like rule 30.
02:05:35.740 | It's just rule 30 is--
02:05:37.820 | - Oh, it's my favorite, 'cause I found it first,
02:05:39.740 | and that's the-- - That's right.
02:05:40.620 | But the problems are focused on Rule 30.
02:05:42.860 | It's possible that Rule 30 is repetitive
02:05:46.540 | after a trillion steps.
02:05:47.900 | - It is possible.
02:05:48.740 | - And that doesn't prove anything about the other rules.
02:05:50.620 | - It does not, but--
02:05:51.460 | - But this is a good sort of experiment
02:05:53.660 | of how you go about trying to prove something
02:05:56.260 | about a particular rule.
02:05:57.260 | - Yes, and it also, all these things help build intuition.
02:06:00.620 | That is, if it turned out
02:06:02.180 | that this was repetitive after a trillion steps,
02:06:04.980 | that's not what I would expect,
02:06:07.420 | and so we learn something from that.
02:06:09.360 | - The method to do that, though,
02:06:11.380 | would reveal something interesting
02:06:12.900 | about the cellular-- - No doubt, no doubt.
02:06:15.140 | I mean, it's, although it's sometimes challenging,
02:06:18.300 | like the, you know, I put out a prize in 2007
02:06:20.860 | for a particular Turing machine
02:06:24.540 | that I, that was the simplest candidate
02:06:27.140 | for being a universal Turing machine,
02:06:29.380 | and the young chap in England named Alex Smith,
02:06:32.140 | after a smallish number of months said,
02:06:35.460 | "I've got a proof," and he did.
02:06:37.260 | You know, it took a little while to iterate,
02:06:38.700 | but he had a proof.
02:06:40.420 | Unfortunately, the proof is
02:06:42.680 | a lot of micro details.
02:06:45.900 | It's not like you look at it and you say,
02:06:48.740 | "Aha, there's a big new principle."
02:06:51.760 | The big new principle is the simplest Turing machine
02:06:54.960 | that might have been universal actually is universal,
02:06:57.900 | and it's incredibly much simpler than the Turing machines
02:07:00.500 | that people already knew were universal before that,
02:07:03.000 | and so that, intuitively, is important
02:07:05.460 | 'cause it says computation universality
02:07:07.940 | is closer at hand than you might have thought,
02:07:10.660 | but the actual methods,
02:07:13.100 | in that particular case, were not terribly illuminating.
02:07:15.100 | - It would be nice if the methods would also be elegant.
02:07:18.060 | - That's true, yeah, no, I mean,
02:07:19.500 | I think it's one of these things where,
02:07:21.780 | I mean, it's like a lot of, we've talked about earlier,
02:07:24.260 | kind of opening up AIs and machine learning and things
02:07:28.460 | and what's going on inside, and is it just step-by-step,
02:07:32.060 | or can you sort of see the bigger picture more abstractly?
02:07:35.260 | - It's unfortunate, I mean, with Fermat's Last Theorem,
02:07:38.100 | it's unfortunate that the proof
02:07:39.780 | to such an elegant theorem,
02:07:44.620 | I mean, it doesn't fit into the margins of a page.
02:07:49.060 | - That's true, but there's no,
02:07:50.300 | one of the things is that's another consequence
02:07:52.520 | of computational irreducibility,
02:07:54.460 | this fact that there are even quite short results
02:07:58.600 | in mathematics whose proofs are arbitrarily long.
02:08:01.860 | That's a consequence of all this stuff,
02:08:03.780 | and it makes one wonder,
02:08:06.140 | how come mathematics is possible at all?
02:08:09.580 | Why is it the case, how have people managed to navigate
02:08:13.620 | doing mathematics through looking at things
02:08:16.340 | where they're not just thrown into, it's all undecidable?
02:08:20.400 | That's its own separate story.
02:08:23.560 | - And that would be, that would have a poetic beauty to it
02:08:28.560 | if people were to find something interesting about rule 30,
02:08:32.400 | because, I mean, there's an emphasis
02:08:35.560 | on this particular rule.
02:08:36.640 | It wouldn't say anything about the broad irreducibility
02:08:39.640 | of all computations, but it would nevertheless
02:08:42.380 | put a few smiles on people's faces of--
02:08:45.280 | - Well, yeah, but to me, it's like, in a sense,
02:08:50.120 | establishing principle of computational equivalence,
02:08:53.000 | it's a little bit like doing inductive science anywhere.
02:08:56.360 | That is, the more examples you find,
02:08:58.800 | the more convinced you are that it's generally true.
02:09:01.440 | I mean, we don't get to, whenever we do natural science,
02:09:04.940 | we say, well, it's true here that this or that happens.
02:09:08.840 | Can we prove that it's true everywhere in the universe?
02:09:11.560 | No, we can't.
02:09:12.940 | So, it's the same thing here.
02:09:15.800 | We're exploring the computational universe.
02:09:17.440 | We're establishing facts in the computational universe,
02:09:20.440 | and that's sort of a way
02:09:22.520 | of inductively concluding general things.
02:09:27.520 | - Just to think through this a little bit,
02:09:32.020 | we've touched on it a little bit before,
02:09:33.520 | but what's the difference between the kind of computation,
02:09:36.680 | now that we're talking about cellular automata,
02:09:39.280 | what's the difference between the kind of computation,
02:09:41.320 | biological systems, our mind, our bodies,
02:09:44.640 | the things we see before us that emerged
02:09:48.200 | through the process of evolution, and cellular automata?
02:09:52.220 | I mean, we've kind of implied through the discussion
02:09:55.800 | of physics underlying everything,
02:09:57.320 | but we talked about the potential equivalence
02:10:01.380 | of the fundamental laws of physics
02:10:02.920 | and the kind of computation going on in Turing machines,
02:10:06.200 | but can you now connect that,
02:10:08.800 | do you think there's something special or interesting
02:10:11.640 | about the kind of computation that our bodies do?
02:10:15.640 | - Right, well, let's talk about brains primarily.
02:10:19.040 | I mean, I think the most important thing
02:10:22.120 | about the things that our brains do
02:10:23.920 | are that we care about them,
02:10:25.500 | in the sense that there's a lot of computation
02:10:27.700 | going on out there in cellular automata,
02:10:31.000 | in physical systems, and so on,
02:10:34.200 | and it just, it does what it does.
02:10:35.680 | It follows those rules, it does what it does.
02:10:38.120 | The thing that's special about the computation
02:10:40.080 | in our brains is that it's connected to our goals
02:10:44.200 | and our kind of whole societal story,
02:10:47.180 | and I think that's the special feature,
02:10:51.280 | and now the question then is,
02:10:52.520 | when you see this whole sort of ocean
02:10:53.960 | of computation out there, how do you connect that
02:10:57.160 | to the things that we humans care about?
02:10:59.440 | And in a sense, a large part of my life
02:11:01.480 | has been involved in sort of the technology
02:11:03.120 | of how to do that, and what I've been interested in
02:11:06.440 | is kind of building computational language
02:11:08.880 | that allows that something that both we humans
02:11:11.560 | can understand and that can be used
02:11:14.640 | to determine computations that are actually
02:11:17.200 | computations we care about.
02:11:19.240 | See, I think when you look at something
02:11:20.840 | like one of these cellular automata,
02:11:22.680 | and it does some complicated thing,
02:11:24.480 | you say, "That's fun, but why do I care?"
02:11:28.080 | Well, you could say the same thing, actually, in physics.
02:11:31.240 | You say, "Oh, I've got this material,
02:11:33.200 | "and it's a ferrite or something.
02:11:35.120 | "Why do I care?"
02:11:36.040 | You know, it has some magnetic properties.
02:11:38.400 | "Why do I care?
02:11:39.240 | "It's amusing, but why do I care?"
02:11:40.600 | Well, we end up caring because ferrite
02:11:43.120 | is what's used to make magnetic tape,
02:11:44.720 | magnetic disks, whatever, or we could use liquid crystals
02:11:48.080 | which are used to make, well, not actually,
02:11:51.240 | increasingly not, but they have been used
02:11:53.000 | to make computer displays and so on.
02:11:55.720 | But those are, so in a sense, we're mining
02:11:58.200 | these things that happen to exist in the physical universe
02:12:01.160 | and making it be something that we care about
02:12:04.080 | 'cause we sort of entrain it into technology.
02:12:06.760 | And it's the same thing in the computational universe
02:12:09.520 | that a lot of what's out there
02:12:11.720 | is stuff that's just happening,
02:12:14.040 | but sometimes we have some objective,
02:12:16.440 | and we will go and sort of mine the computational universe
02:12:19.120 | for something that's useful for some particular objective.
02:12:21.960 | On a large scale, trying to do that,
02:12:24.320 | trying to sort of navigate the computational universe
02:12:27.080 | to do useful things, you know,
02:12:29.120 | that's where computational language comes in.
02:12:31.920 | And, you know, a lot of what I've spent time doing
02:12:34.360 | and building this thing we call Wolfram Language,
02:12:37.200 | which I've been building
02:12:38.320 | for the last one third of a century now.
02:12:41.480 | And kind of the goal there is to have a way to express
02:12:46.480 | kind of computational thinking, computational thoughts
02:12:51.640 | in a way that both humans and machines can understand.
02:12:54.200 | So it's kind of like in the tradition of computer languages,
02:12:58.320 | programming languages,
02:13:00.000 | that the tradition there has been more,
02:13:02.360 | let's take how computers are built,
02:13:05.200 | and let's specify, let's have a human way to specify,
02:13:09.040 | do this, do this, do this,
02:13:10.600 | at the level of the way that computers are built.
02:13:13.240 | What I've been interested in
02:13:14.320 | is representing sort of the whole world computationally,
02:13:18.120 | and being able to talk about
02:13:19.560 | whether it's about cities or chemicals,
02:13:21.800 | or, you know, this kind of algorithm
02:13:23.600 | or that kind of algorithm,
02:13:24.720 | things that have come to exist in our civilization
02:13:28.320 | and the sort of knowledge base of our civilization,
02:13:30.600 | being able to talk directly about those
02:13:32.720 | in a computational language
02:13:34.440 | so that both we can understand it
02:13:36.880 | and computers can understand it.
02:13:38.840 | I mean, the thing that I've been
02:13:40.880 | sort of excited about recently,
02:13:42.160 | which I had only realized recently,
02:13:43.720 | which is kind of embarrassing,
02:13:44.880 | but it's kind of the arc of what we've tried to do
02:13:48.360 | in building this kind of computational language
02:13:50.920 | is it's a similar kind of arc
02:13:53.560 | of what happened when mathematical notation was invented.
02:13:57.400 | So go back 400 years,
02:13:59.960 | people were trying to do math.
02:14:01.720 | They were always explaining their math in words,
02:14:04.720 | and it was pretty clunky.
02:14:06.520 | And as soon as mathematical notation was invented,
02:14:09.880 | you could start defining things like algebra
02:14:12.320 | and later calculus and so on.
02:14:13.680 | It all became much more streamlined.
02:14:15.760 | When we deal with computational thinking about the world,
02:14:19.080 | there's a question of what is the notation?
02:14:20.600 | What is the kind of formalism that we can use
02:14:23.800 | to talk about the world computationally?
02:14:26.520 | And in a sense, that's what I've spent
02:14:28.240 | the last third of a century trying to build,
02:14:29.920 | and we finally got to the point
02:14:31.120 | where we have a pretty full-scale computational language
02:14:34.320 | that sort of talks about the world.
02:14:36.480 | And that's exciting because it means that
02:14:40.320 | just like having this mathematical notation
02:14:42.920 | let us talk about the world mathematically,
02:14:45.520 | and let us build up
02:14:47.840 | these kinds of mathematical sciences,
02:14:49.880 | now we have a computational language,
02:14:52.240 | which allows us to start talking
02:14:53.520 | about the world computationally,
02:14:55.840 | and lets us, you know, my view of it
02:14:57.640 | is it's kind of computational X for all X,
02:15:01.360 | all these different fields of, you know,
02:15:03.360 | computational this, computational that.
02:15:05.680 | That's what we can now build.
02:15:06.880 | - Let's step back.
02:15:08.000 | So first of all, the mundane,
02:15:10.540 | what is Wolfram language in terms of sort of,
02:15:16.480 | I mean, I could answer the question for you,
02:15:18.280 | but I'm asking basically not about the philosophical,
02:15:21.840 | the deep, the profound, the impact of it.
02:15:23.840 | I'm talking about in terms of tools,
02:15:25.640 | in terms of things you can download,
02:15:27.000 | in terms of stuff you can play with, what is it?
02:15:29.120 | What does it fit into the infrastructure?
02:15:31.360 | What are the different ways to interact with it?
02:15:33.240 | - Right, so I mean, the two big things
02:15:35.080 | that people have sort of perhaps heard of
02:15:37.640 | that come from Wolfram language,
02:15:39.080 | one is Mathematica, the other is Wolfram Alpha.
02:15:41.520 | So Mathematica first came out in 1988.
02:15:44.560 | It's this system that is basically
02:15:47.440 | an instance of Wolfram language,
02:15:50.640 | and it's used to do computations,
02:15:53.720 | particularly in sort of technical areas.
02:15:57.000 | And the typical thing you're doing
02:15:58.920 | is you're typing little pieces of computational language,
02:16:02.360 | and you're getting computations done.
02:16:04.560 | - It's very kind of, there's like a symbolic.
02:16:08.360 | - Yeah, it's a symbolic language.
02:16:10.440 | - So it's a symbolic language,
02:16:11.360 | so I mean, I don't know how to cleanly express that,
02:16:13.960 | but that makes it very distinct
02:16:15.440 | from how we think about sort of,
02:16:17.840 | I don't know, programming in a language
02:16:20.440 | like Python or something.
02:16:21.520 | - Right, so the point is that
02:16:23.640 | in a traditional programming language,
02:16:25.440 | the raw material of the programming language
02:16:27.920 | is just stuff that computers intrinsically do.
02:16:31.120 | And the point of Wolfram language
02:16:33.640 | is that what the language is talking about
02:16:36.840 | is things that exist in the world
02:16:38.640 | or things that we can imagine and construct,
02:16:41.240 | not, it's not sort of,
02:16:43.880 | it's aimed to be an abstract language from the beginning.
02:16:47.360 | And so, for example, one feature it has
02:16:48.960 | is that it's a symbolic language,
02:16:50.840 | which means that if you take a thing called x,
02:16:53.120 | and you just type in x,
02:16:56.240 | Wolfram Language will just say,
02:16:57.600 | oh, that's x.
02:16:58.840 | It won't say, error, undefined thing,
02:17:01.320 | I don't know what it is as a computation
02:17:03.160 | in terms of the internals of the computer.
02:17:05.600 | Now that X could perfectly well be the city of Boston.
02:17:10.560 | That's a thing, that's a symbolic thing.
02:17:13.080 | Or it could perfectly well be
02:17:14.720 | the trajectory of some spacecraft
02:17:17.640 | represented as a symbolic thing.
02:17:20.640 | And that idea that one can work with,
02:17:24.280 | sort of computationally work with these different,
02:17:26.840 | these kinds of things that exist in the world
02:17:29.920 | or describe the world, that's really powerful.
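A small illustration of the symbolic behavior being described, assuming a Wolfram Language session; the entity written out below is the standard canonical form for Boston:

```wolfram
(* An undefined symbol simply evaluates to itself; there is no error *)
x
(* -> x *)

x + 2 x
(* -> 3 x, since symbolic expressions transform algebraically *)

(* A city is just as much a symbolic expression as x is *)
boston = Entity["City", {"Boston", "Massachusetts", "UnitedStates"}];
boston["Population"]
(* -> a Quantity of people *)
```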
02:17:32.520 | And that's what, I mean, when I started designing,
02:17:36.800 | well, when I designed the predecessor
02:17:38.440 | of what's now Wolfram language,
02:17:41.360 | which is a thing called SMP,
02:17:42.520 | which was my first computer language,
02:17:44.360 | I kind of wanted to have this sort of infrastructure
02:17:49.680 | for computation, which was as fundamental as possible.
02:17:52.200 | I mean, this is what I got for having been a physicist
02:17:54.520 | and tried to find fundamental components of things
02:17:58.080 | and wound up with this kind of idea
02:18:00.480 | of transformation rules for symbolic expressions
02:18:04.080 | as being sort of the underlying stuff
02:18:07.400 | from which computation would be built.
02:18:09.480 | And that's what we've been building from in Wolfram language
02:18:13.800 | and operationally what happens,
02:18:16.800 | it's, I would say by far the highest level
02:18:20.080 | computer language that exists.
02:18:22.920 | And it's really been built in a very different direction
02:18:26.080 | from other languages.
02:18:27.280 | So other languages have been about,
02:18:30.240 | there is a core language,
02:18:32.000 | it really is kind of wrapped around the operations
02:18:34.360 | that a computer intrinsically does.
02:18:36.360 | Maybe people add libraries for this or that,
02:18:39.680 | but the goal of Wolfram language
02:18:41.080 | is to have the language itself be able to cover
02:18:45.120 | this sort of very broad range of things
02:18:46.840 | that show up in the world.
02:18:47.760 | And that means that, you know,
02:18:49.320 | there are 6,000 primitive functions
02:18:51.720 | in the Wolfram language that cover things.
02:18:54.320 | You know, I could probably pick a random here.
02:18:56.800 | I'm gonna pick just because, just for fun, I'll pick,
02:19:01.040 | let's take a random sample
02:19:03.520 | of all the things that we have here.
02:19:07.480 | So let's just say random sample of 10 of them
02:19:09.920 | and let's see what we get.
02:19:12.080 | Wow, okay.
02:19:12.960 | So these are really different things from-
02:19:15.760 | - These are all functions.
02:19:16.960 | - These are all functions. BooleanConvert.
02:19:19.200 | Okay, that's a thing for converting
02:19:21.400 | between different types of Boolean expressions.
02:19:24.720 | - So for people just listening,
02:19:26.960 | Stephen typed in a random sample of names,
02:19:29.280 | so this is sampling from all function.
02:19:31.240 | How many you said there might be?
02:19:32.360 | - 6,000. - 6,000.
02:19:33.360 | From 6,000, 10 of them,
02:19:34.600 | and there's a hilarious variety of them.
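The input being typed here is presumably along these lines (a sketch; the exact code wasn't captured on audio):

```wolfram
(* Sample 10 of the ~6,000 built-in symbols in the System` context *)
RandomSample[Names["System`*"], 10]
```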
02:19:37.800 | - Yeah, right.
02:19:38.640 | Well, we've got things about $RequesterAddress,
02:19:41.680 | that has to do with interacting
02:19:43.320 | with the world of the cloud and so on,
02:19:47.560 | DiscreteWaveletData, spheroid-less.
02:19:50.440 | - It's also graphical, sort of WindowMovable.
02:19:52.520 | - Yeah, yeah, WindowMovable.
02:19:53.720 | That's a user interface kind of thing.
02:19:55.440 | I want to pick another 10 'cause I think this is some, okay.
02:19:58.400 | So yeah, there's a lot of infrastructure stuff here
02:20:01.040 | that you see if you just start sampling at random,
02:20:03.520 | there's a lot of kind of infrastructural things.
02:20:05.280 | If you more, you know, if you more look at the-
02:20:07.560 | - Some of the exciting machine learning stuff
02:20:09.040 | you showed off, is that also in this pool?
02:20:11.760 | - Oh yeah, yeah.
02:20:12.600 | I mean, you know, so one of those functions is,
02:20:14.680 | like, ImageIdentify is a function here.
02:20:17.720 | We just say ImageIdentify, I don't know.
02:20:19.240 | It's always good to, let's do this.
02:20:21.240 | Let's say CurrentImage and let's pick up an image,
02:20:25.320 | hopefully.
02:20:26.160 | - Tap that current image, accessing the webcam,
02:20:29.920 | took a picture of yourself.
02:20:31.240 | - Took a terrible picture, but anyway,
02:20:33.720 | we can say ImageIdentify, open square brackets,
02:20:36.840 | and then you just paste that picture in there.
02:20:39.640 | - ImageIdentify function running on the picture.
02:20:41.960 | - Oh, and it says, oh wow, it says,
02:20:44.440 | I look like a plunger because I got this great big thing
02:20:46.840 | behind my head.
02:20:47.680 | - Classify, so this ImageIdentify classifies
02:20:49.560 | the most likely object in the image.
02:20:51.840 | And it says it's a plunger.
02:20:54.080 | - Okay, that's a bit embarrassing.
02:20:55.720 | Let's see what it does.
02:20:56.920 | And let's pick the top 10.
02:20:58.520 | Okay, well, it thinks there's a,
02:21:01.280 | oh, it thinks it's pretty unlikely
02:21:02.920 | that it's a primate, a hominid, a person.
02:21:04.800 | - 8% probability.
02:21:06.160 | - Yeah, that's bad.
02:21:07.000 | - A primate. 57%, it's a plunger.
02:21:09.160 | - Yeah, well, so.
02:21:10.000 | - That hopefully will not give you an existential crisis.
02:21:12.160 | And then 8%, or I shouldn't say percent, but--
02:21:17.160 | - No, that's right, 8% that it's a hominid.
02:21:20.200 | And yeah, okay, it's really,
02:21:21.880 | I'm gonna do another one of these
02:21:23.160 | just 'cause I'm embarrassed that it,
02:21:25.160 | it didn't even see me at all.
02:21:28.320 | There we go, let's try that.
02:21:29.360 | Let's see what that did.
02:21:30.560 | - Retook a picture with a little bit more of your body.
02:21:33.960 | - A little bit more of me.
02:21:36.120 | And not just my bald head, so to speak.
02:21:38.400 | Okay, 89% probability it's a person.
02:21:41.120 | So then I would, but, you know,
02:21:43.880 | so this is ImageIdentify as an example of one.
02:21:46.520 | - Of just one of the many.
02:21:47.560 | - Just one function out of 6,000.
02:21:48.400 | - And that's part of the, that's like part of the language.
02:21:51.880 | - Part of the core language, yes.
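The session just demonstrated presumably looks something like this (a sketch; the variable name img is ours):

```wolfram
(* Grab a frame from the webcam and identify the most likely object *)
img = CurrentImage[];
ImageIdentify[img]

(* The ten most likely identifications, with their probabilities *)
ImageIdentify[img, All, 10, "Probability"]
```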
02:21:52.720 | And I mean, you know, something like,
02:21:55.080 | I could say, I don't know, let's find the GeoNearest,
02:21:59.080 | what could we find?
02:22:00.680 | Let's find the nearest volcano.
02:22:04.240 | Let's find the 10, I wonder where it thinks here is.
02:22:09.240 | Let's try finding the 10 volcanoes nearest here, okay?
02:22:13.920 | So let's give it a--
02:22:14.760 | - So GeoNearest, volcano, here, 10 nearest volcanoes.
02:22:18.520 | - Right, let's find out where those are.
02:22:19.800 | We can now, we got a list of volcanoes out
02:22:21.920 | and I can say GeoListPlot that,
02:22:25.040 | and hopefully, okay, so there we go.
02:22:26.680 | So there's a map that shows the positions
02:22:29.520 | of those 10 volcanoes.
02:22:30.880 | - Of the East Coast and the Midwest,
02:22:32.800 | and it's the, well, no, we're okay, we're okay.
02:22:34.880 | There's not, it's not too bad.
02:22:36.040 | - Yeah, they're not very close to us.
02:22:37.240 | We could measure how far away they are.
02:22:39.440 | But, you know, the fact that right in the language,
02:22:42.840 | it knows about all the volcanoes in the world,
02:22:45.280 | it knows, you know, computing what the nearest ones are,
02:22:48.280 | it knows all the maps of the world and so on.
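A sketch of the inputs being demonstrated (the variable name volcanoes is ours):

```wolfram
(* The ten volcanoes nearest to the current location *)
volcanoes = GeoNearest["Volcano", Here, 10];

(* Plot them on a map *)
GeoListPlot[volcanoes, GeoLabels -> True]

(* And measure how far away they are *)
GeoDistance[Here, #] & /@ volcanoes
```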
02:22:50.120 | - It's a fundamentally different idea of what a language is.
02:22:52.200 | - Yeah, right.
02:22:53.040 | That's why I like to talk about as a, you know,
02:22:55.320 | a full-scale computational language.
02:22:57.040 | That's what we've tried to do.
02:22:58.520 | - And just if you can comment briefly, I mean,
02:23:00.760 | this kind of, the Wolfram language,
02:23:03.880 | along with Wolfram Alpha, represents kind of what the dream
02:23:06.680 | of what AI is supposed to be.
02:23:08.520 | There's now a sort of a craze of learning,
02:23:11.560 | kind of idea that we can take raw data
02:23:14.400 | and from that extract the different hierarchies
02:23:18.320 | of abstractions in order to be able to,
02:23:21.360 | like in order to form the kind of things
02:23:23.080 | that Wolfram language operates with.
02:23:27.480 | But we're very far from learning systems
02:23:29.920 | being able to form that.
02:23:31.720 | - Right.
02:23:32.560 | - Like the context of history of AI,
02:23:35.320 | if you could just comment on, there is,
02:23:38.160 | you said computation X.
02:23:40.160 | And there's just some sense where in the 80s and 90s,
02:23:43.240 | sort of expert systems represented
02:23:45.280 | a very particular computation X.
02:23:47.200 | - Yes.
02:23:48.040 | - Right, and there's a kind of notion
02:23:49.460 | that those efforts didn't pan out.
02:23:53.240 | - Right.
02:23:54.080 | - But then out of that emerges kind of Wolfram language,
02:23:57.680 | Wolfram Alpha, which is the success, I mean.
02:24:00.320 | - Yeah, right, I think those are,
02:24:01.640 | in some sense, those efforts were too modest.
02:24:04.240 | - Right, exactly.
02:24:05.080 | - They were looking at particular areas
02:24:07.480 | and you actually can't do it with a particular area.
02:24:09.800 | I mean, like even a problem
02:24:11.120 | like natural language understanding,
02:24:12.920 | it's critical to have broad knowledge of the world
02:24:15.160 | if you want to do good natural language understanding.
02:24:17.800 | And you kind of have to bite off the whole problem.
02:24:20.280 | If you say we're just gonna do the blocks world over here,
02:24:23.360 | so to speak, you don't really,
02:24:25.520 | it's actually, it's one of these cases
02:24:27.800 | where it's easier to do the whole thing
02:24:29.400 | than it is to do some piece of it.
02:24:30.840 | You know, one comment to make about,
02:24:32.400 | so the relationship between what we've tried to do
02:24:35.040 | and sort of the learning side of AI,
02:24:37.820 | you know, in a sense, if you look at the development
02:24:40.760 | of knowledge in our civilization as a whole,
02:24:43.200 | there was kind of this notion, pre 300 years ago or so, that
02:24:47.160 | if you want to figure something out about the world,
02:24:48.760 | you can reason it out.
02:24:50.040 | You can do things which just use raw human thought.
02:24:54.000 | And then along came sort of modern mathematical science.
02:24:57.560 | And we found ways to just sort of blast through that
02:25:01.080 | by in that case, writing down equations.
02:25:03.560 | Now we also know we can do that with computation and so on.
02:25:07.020 | And so that was kind of a different thing.
02:25:08.920 | So when we look at how do we sort of encode knowledge
02:25:13.000 | and figure things out,
02:25:14.440 | one way we could do it is start from scratch,
02:25:16.600 | learn everything,
02:25:17.900 | it's just a neural net figuring everything out.
02:25:20.800 | But in a sense that denies the sort of knowledge
02:25:24.120 | based achievements of our civilization,
02:25:26.300 | because in our civilization, we have learned lots of stuff.
02:25:29.480 | We've surveyed all the volcanoes in the world,
02:25:31.360 | we've done, you know,
02:25:32.440 | we figured out lots of algorithms for this or that.
02:25:35.520 | Those are things that we can encode computationally.
02:25:38.800 | And that's what we've tried to do.
02:25:40.480 | And we're not saying just,
02:25:42.260 | you don't have to start everything from scratch.
02:25:44.420 | So in a sense, a big part of what we've done
02:25:46.960 | is to try and sort of capture the knowledge of the world
02:25:50.760 | in computational form and computable form.
02:25:53.340 | Now, there's also some pieces,
02:25:55.480 | which were for a long time undoable by computers
02:25:59.360 | like image identification,
02:26:01.140 | where there's a really, really useful module
02:26:03.920 | that we can add that is those things
02:26:06.440 | which actually were pretty easy for humans to do
02:26:09.060 | that had been hard for computers to do.
02:26:10.960 | I think the thing that's interesting that's emerging now
02:26:13.460 | is the interplay between these things,
02:26:14.960 | between this kind of knowledge of the world
02:26:17.040 | that is in a sense very symbolic
02:26:19.240 | and this kind of sort of much more statistical
02:26:22.880 | kind of things like image identification and so on,
02:26:27.480 | and putting those together
02:26:29.000 | by having this sort of symbolic representation
02:26:31.320 | of image identification,
02:26:33.500 | that that's where things get really interesting
02:26:35.800 | and where you can kind of symbolically represent patterns
02:26:38.380 | of things and images and so on.
02:26:40.740 | I think that's, you know,
02:26:42.260 | that's kind of a part of the path forward, so to speak.
02:26:45.240 | - Yeah, so the dream of,
02:26:46.360 | so machine learning is not,
02:26:49.280 | in my view, and I think in the view of many people,
02:26:51.360 | anywhere close to building the kind of wide world
02:26:56.360 | of computable knowledge that Wolfram Language has built,
02:27:00.120 | but because you have a kind of,
02:27:03.400 | you've done the incredibly hard work of building this world,
02:27:06.560 | now machine learning can
02:27:09.400 | serve as tools to help you explore that world.
02:27:12.440 | - Yeah, yeah.
02:27:13.280 | - And that's what you've added, I mean,
02:27:15.320 | version 12, right?
02:27:16.160 | You added a few, I was seeing some demos,
02:27:18.560 | it looks amazing.
02:27:20.320 | - Right, I mean, I think, you know,
02:27:21.640 | this, it's sort of interesting to see the,
02:27:24.680 | the sort of the, once it's computable,
02:27:28.000 | once it's in there,
02:27:28.840 | it's running in sort of a very efficient computational way,
02:27:32.040 | but then there's sort of things like the interface
02:27:33.920 | of how do you get there, you know,
02:27:35.000 | how do you do natural language understanding to get there?
02:27:37.160 | How do you pick out entities
02:27:39.120 | in a big piece of text or something?
02:27:40.920 | That's, I mean, actually a good example right now
02:27:44.520 | is our NLP, NLU loop,
02:27:46.800 | which is we've done a lot of stuff,
02:27:48.720 | natural language understanding,
02:27:50.480 | using essentially not learning-based methods,
02:27:53.160 | using a lot of, you know,
02:27:54.640 | little algorithmic methods,
02:27:56.960 | human curation methods, and so on.
02:27:58.480 | - In terms of when people try to enter a query
02:28:01.200 | and then converting, so the process of converting,
02:28:04.120 | NLU defined beautifully as converting their query
02:28:09.120 | into a computational language,
02:28:11.960 | which is a very well,
02:28:13.680 | first of all, a super practical definition,
02:28:16.240 | a very useful definition,
02:28:17.480 | and then also a very clear definition
02:28:20.280 | of natural language understanding.
02:28:21.920 | - Right, I mean, a different thing
02:28:23.320 | is natural language processing,
02:28:24.720 | where it's like, here's a big lump of text,
02:28:27.440 | go pick out all the cities in that text, for example.
02:28:30.480 | And so a good example of, you know, so we do that,
02:28:32.800 | we're using modern machine learning techniques.
02:28:36.240 | And it's actually kind of an interesting process
02:28:39.640 | that's going on right now,
02:28:40.600 | is this loop between what do we pick up
02:28:43.200 | with NLP, we're using machine learning,
02:28:45.800 | versus what do we pick up with our more
02:28:48.000 | kind of precise computational methods
02:28:50.240 | in natural language understanding.
02:28:51.960 | And so we've got this kind of loop going between those,
02:28:53.960 | which is improving both of them.
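A minimal sketch of the NLP-versus-NLU distinction being drawn here, using built-in functions (the sample sentence is illustrative):

```wolfram
(* NLP: pull all the cities out of a lump of text *)
TextCases["I flew from Boston to Chicago by way of New York.", "City"]

(* NLU: interpret a free-form string as a precise symbolic entity *)
Interpreter["City"]["boston"]
```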
02:28:55.600 | - Yeah, and I think you have some
02:28:56.600 | of the state-of-the-art transformers,
02:28:57.800 | like you have BERT in there, I think.
02:28:59.160 | - Oh, yeah.
02:29:00.000 | - So it's cool, so you have,
02:29:00.920 | you're integrating all the models.
02:29:02.360 | I mean, this is the hybrid thing
02:29:04.800 | that people have always dreamed about or talking about.
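A hedged sketch of what using such a model looks like; the repository model name below is our assumption of what is available, not something stated in the conversation:

```wolfram
(* Load a pretrained transformer from the Wolfram Neural Net Repository *)
bert = NetModel["BERT Trained on BookCorpus and English Wikipedia Data"];

(* Contextual feature vectors, one per token of the input sentence *)
bert["The cat sat on the mat."] // Dimensions
```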
02:29:07.720 | I'm actually just surprised, frankly,
02:29:11.080 | that Wolfram Language is not more popular
02:29:13.120 | than it already is.
02:29:15.400 | - You know, that's a, it's a complicated issue,
02:29:19.480 | because it's like, it involves, you know,
02:29:24.480 | it involves ideas, and ideas are absorbed slowly
02:29:28.600 | in the world.
02:29:29.440 | I mean, I think that's--
02:29:30.280 | - And then there's sort of, like what we're talking about,
02:29:32.160 | there's egos and personalities,
02:29:33.720 | and some of the absorption mechanisms
02:29:38.400 | of ideas have to do with personalities,
02:29:41.240 | and the students of personalities,
02:29:43.320 | and then a little social network.
02:29:45.520 | So it's interesting how the spread of ideas works.
02:29:48.520 | - You know what's funny with Wolfram Language,
02:29:50.400 | is that we are, if you say, you know,
02:29:53.400 | what market, sort of market penetration,
02:29:55.880 | if you look at the, I would say, very high-end of R&D,
02:29:59.920 | and sort of the people where you say,
02:30:02.280 | "Wow, that's a really, you know, impressive, smart person,"
02:30:06.020 | they're very often users of Wolfram Language,
02:30:08.300 | very, very often.
02:30:09.720 | If you look at the more sort of, it's a funny thing,
02:30:12.400 | if you look at the more kind of, I would say,
02:30:14.960 | people who are like, "Oh, we're just plodding away,
02:30:17.160 | "doing what we do," they're often not yet
02:30:20.920 | Wolfram Language users, and that dynamic,
02:30:23.160 | it's kind of odd that there hasn't been
02:30:24.600 | more rapid trickle-down, because we really,
02:30:27.200 | you know, the high-end, we've really been very successful
02:30:30.280 | in for a long time, and it's, but with, you know,
02:30:34.700 | that's partly, I think, a consequence of,
02:30:38.500 | my fault, in a sense, because it's kind of, you know,
02:30:41.320 | I have a company which is, really emphasizes
02:30:45.260 | sort of creating products and building a,
02:30:50.260 | sort of the best possible technical tower we can,
02:30:53.460 | rather than sort of doing the commercial side of things
02:30:57.200 | and pumping it out in sort of the most effective way.
02:31:00.040 | - And there's an interesting idea that, you know,
02:31:01.920 | perhaps you can make it more popular
02:31:03.420 | by opening everything up, sort of the GitHub model,
02:31:07.880 | but there's an interesting, I think I've heard you
02:31:10.040 | discuss this, that that turns out not to work
02:31:12.940 | in a lot of cases, like in this particular case,
02:31:15.480 | that you want it, that when you deeply care
02:31:18.720 | about the integrity, the quality of the knowledge
02:31:23.720 | that you're building, that unfortunately,
02:31:27.400 | you can't distribute that effort.
02:31:31.520 | - Yeah, it's not the nature of how things work.
02:31:34.660 | I mean, you know, what we're trying to do
02:31:36.860 | is a thing that, for better or worse,
02:31:38.960 | requires leadership and it requires kind of
02:31:41.700 | maintaining a coherent vision over a long period of time
02:31:45.320 | and doing not only the cool vision-related work,
02:31:49.620 | but also the kind of mundane in the trenches,
02:31:52.720 | make the thing actually work well, work.
02:31:54.980 | - So how do you build the knowledge?
02:31:57.380 | Because that's the fascinating thing.
02:31:58.820 | That's the mundane, the fascinating and the mundane,
02:32:01.900 | is building the knowledge, the adding,
02:32:04.100 | integrating more data.
02:32:05.180 | - Yeah, I mean, that's probably not the most,
02:32:07.300 | I mean, the things like get it to work
02:32:09.460 | in all these different cloud environments and so on.
02:32:12.120 | That's pretty, you know, that's very practical stuff.
02:32:14.560 | You know, have the user interface be smooth
02:32:16.500 | and have there be, take only a fraction of a millisecond
02:32:20.260 | to do this or that.
02:32:21.500 | That's a lot of work.
02:32:22.820 | But, you know, I think
02:32:28.140 | it's an interesting thing: over the period of time,
02:32:30.060 | you know, Wolfram Language has existed basically
02:32:33.260 | for more than half of the total amount of time
02:32:35.580 | that any computer language has existed.
02:32:37.940 | That is, computer languages are maybe 60 years old,
02:32:41.300 | you know, give or take,
02:32:43.660 | and Wolfram Language is 33 years old.
02:32:46.180 | So it's kind of a, and I think I was realizing recently
02:32:50.860 | there's been more innovation in the distribution of software
02:32:54.580 | than probably than in the structure of programming languages
02:32:57.700 | over that period of time.
02:32:59.180 | And we, you know, we've been sort of trying to do our best
02:33:03.180 | to adapt to it.
02:33:04.020 | And the good news is that we have, you know,
02:33:06.140 | because I have a simple private company and so on
02:33:08.900 | that doesn't have, you know, a bunch of investors,
02:33:11.380 | you know, telling us we're gonna do this or that,
02:33:15.700 | we have lots of freedom in what we can do.
02:33:15.700 | And so, for example, we're able to, oh, I don't know,
02:33:18.940 | we have this free Wolfram Engine for developers,
02:33:21.100 | which is a free version for developers.
02:33:22.940 | And we've been, you know, we've,
02:33:24.660 | there are site licenses for Mathematica
02:33:27.540 | and Wolfram Language at basically all major universities,
02:33:30.220 | certainly in the US by now.
02:33:32.340 | So it's effectively free to people
02:33:34.380 | and all the universities in effect.
02:33:37.260 | And, you know, we've been doing a progression of things.
02:33:40.840 | I mean, different things like Wolfram Alpha, for example,
02:33:43.720 | the main website is just a free website.
02:33:46.540 | - What is Wolfram Alpha?
02:33:48.220 | - Okay, Wolfram Alpha is a system for answering questions
02:33:51.860 | where you ask a question with natural language
02:33:55.460 | and it'll try and generate a report
02:33:57.460 | telling you the answer to that question.
02:33:58.820 | So the question could be something like, you know,
02:34:02.580 | what's the population of Boston divided by
02:34:06.660 | the population of New York?
02:34:07.540 | And it'll take those words and give you an answer.
02:34:11.700 | And that have been--
02:34:12.540 | - Converts the words into computable, into--
02:34:16.380 | - Into Wolfram Language, actually.
02:34:17.540 | - Into Wolfram Language.
02:34:18.620 | - And then computational language and then--
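A sketch of both halves of that pipeline from within Wolfram Language (the query text is illustrative):

```wolfram
(* Free-form words in, computed result out *)
WolframAlpha["population of Boston divided by population of New York", "Result"]

(* Or just the interpretation step: natural language -> Wolfram Language *)
SemanticInterpretation["population of Boston"]
```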
02:34:20.500 | - Do you think the underlying knowledge
02:34:22.580 | belongs to Wolfram Alpha or to the Wolfram Language?
02:34:25.500 | What's the--
02:34:26.340 | - We just call it the Wolfram Knowledge Base.
02:34:27.900 | - Knowledge Base.
02:34:28.720 | - I mean, that's been a big effort over the decades
02:34:32.380 | to collect all that stuff.
02:34:33.580 | And, you know, more of it flows in every second, so.
02:34:36.580 | - Can you just pause on that for a second?
02:34:38.460 | Like, that's one of the most incredible things.
02:34:41.500 | Of course, in the long-term, Wolfram Language itself
02:34:45.720 | is the fundamental thing.
02:34:47.420 | But in the amazing sort of short-term,
02:34:50.620 | the knowledge base is kind of incredible.
02:34:53.740 | So what's the process of building that knowledge base?
02:34:57.540 | The fact that you, first of all, from the very beginning,
02:34:59.600 | that you're brave enough to start,
02:35:01.380 | to take on the general knowledge base.
02:35:04.400 | And how do you go from zero
02:35:08.420 | to the incredible knowledge base that you have now?
02:35:11.420 | - Well, yeah, it was kind of scary at some level.
02:35:13.300 | I mean, I had wondered about doing something like this
02:35:15.820 | since I was a kid.
02:35:17.060 | So it wasn't like I hadn't thought about it for a while.
02:35:19.100 | - But most of us, most of the brilliant dreamers
02:35:22.980 | give up such a difficult engineering notion at some point.
02:35:27.100 | - Right, right.
02:35:28.300 | Well, the thing that happened with me, which was kind of,
02:35:30.980 | it's a live-your-own-paradigm kind of theory.
02:35:34.740 | So basically what happened is,
02:35:36.660 | I had assumed that to build something like Wolfram Alpha
02:35:40.020 | would require sort of solving the general AI problem.
02:35:42.980 | That's what I had assumed.
02:35:44.580 | And so I kept on thinking about that,
02:35:46.340 | and I thought I don't really know how to do that,
02:35:47.980 | so I don't do anything.
02:35:49.660 | Then I worked on my new kind of science project
02:35:52.460 | and sort of exploring the computational universe
02:35:54.660 | and came up with things like
02:35:55.780 | this principle of computational equivalence,
02:35:57.780 | which say there is no bright line between the intelligent
02:36:01.420 | and the merely computational.
02:36:02.940 | So I thought, look, that's this paradigm I've built.
02:36:06.020 | Now I have to eat that dog food myself, so to speak.
02:36:10.660 | I've been thinking about doing this thing
02:36:12.220 | with computable knowledge forever,
02:36:14.340 | and let me actually try and do it.
02:36:16.980 | And so it was, if my paradigm is right,
02:36:20.460 | then this should be possible.
02:36:22.100 | But the beginning was certainly,
02:36:23.820 | it was a bit daunting.
02:36:24.660 | I remember I took the early team to a big reference library
02:36:29.340 | and we're looking at this reference library,
02:36:30.940 | and it's like, my basic statement is,
02:36:33.540 | our goal over the next year or two
02:36:35.300 | is to ingest everything that's in here.
02:36:38.060 | And that's, it seemed very daunting,
02:36:41.340 | but in a sense I was well aware of the fact
02:36:43.940 | that it's finite.
02:36:45.180 | The fact that you can walk into the reference library,
02:36:46.780 | it's a big, big thing with lots of reference books
02:36:49.220 | all over the place, but it is finite.
02:36:51.820 | This is not an infinite,
02:36:53.580 | it's not the infinite corridor, so to speak,
02:36:56.740 | of reference libraries.
02:36:57.660 | It's not truly infinite, so to speak.
02:36:59.700 | But no, I mean, and then what happened
02:37:02.580 | was sort of interesting there was,
02:37:04.340 | from a methodology point of view,
02:37:07.140 | was I didn't start off saying,
02:37:09.260 | let me have a grand theory
02:37:10.620 | for how all this knowledge works.
02:37:12.660 | It was like, let's implement this area,
02:37:15.620 | this area, this area, a few hundred areas and so on.
02:37:18.900 | That's a lot of work.
02:37:20.500 | I also found that,
02:37:21.820 | I've been fortunate in that our products
02:37:27.340 | get used by sort of the world's experts in lots of areas.
02:37:31.820 | And so that really helped
02:37:33.380 | 'cause we were able to ask people,
02:37:35.980 | the world expert in this or that,
02:37:37.740 | and we're able to ask them for input and so on.
02:37:40.220 | And I found that my general principle was
02:37:43.380 | that any area where there wasn't some expert
02:37:46.300 | who helped us figure out what to do wouldn't be right.
02:37:50.180 | 'Cause our goal was to kind of get to the point
02:37:52.140 | where we had sort of true expert level knowledge
02:37:54.940 | about everything.
02:37:56.380 | And so that the ultimate goal is,
02:37:59.060 | if there's a question that can be answered
02:38:01.060 | on the basis of general knowledge in our civilization,
02:38:03.740 | make it be automatic to be able to answer that question.
02:38:06.820 | And now, well, Wolfram Alpha got used in Siri
02:38:10.900 | from the very beginning and it's now also used in Alexa.
02:38:13.660 | And so it's people are kind of getting more of the,
02:38:17.220 | they get more of the sense of this is
02:38:20.020 | what should be possible to do.
02:38:21.980 | I mean, in a sense, the question answering problem
02:38:24.900 | was viewed as one of the sort of core AI problems
02:38:27.500 | for a long time.
02:38:28.460 | I had kind of an interesting experience.
02:38:30.820 | I had a friend Marvin Minsky,
02:38:33.340 | who was a well-known AI person from right around here.
02:38:38.100 | And I remember when Wolfram Alpha was coming out,
02:38:40.580 | it was a few weeks before it came out, I think,
02:38:44.020 | I happened to see Marvin and I said,
02:38:46.380 | "I should show you this thing we have.
02:38:48.260 | It's a question answering system."
02:38:50.180 | And he was like, "Okay."
02:38:52.700 | Typed something in, it's like, "Okay, fine."
02:38:54.860 | And then he's talking about something different.
02:38:56.900 | I said, "No, Marvin, this time it actually works.
02:39:01.220 | Look at this, it actually works."
02:39:02.740 | He types in a few more things.
02:39:04.340 | There's maybe 10 more things.
02:39:05.980 | Of course, we have a record of what he typed in,
02:39:07.660 | which is kind of interesting.
02:39:09.060 | - Can you share where his mind was in the testing space?
02:39:16.740 | - All kinds of random things.
02:39:17.860 | He was just trying random stuff,
02:39:19.540 | medical stuff and chemistry stuff and astronomy and so on.
02:39:24.540 | And it was like, after a few minutes, he was like,
02:39:28.060 | "Oh my God, it actually works."
02:39:32.180 | But that was kind of told you something about the state,
02:39:35.580 | what happened in AI, because people had,
02:39:38.700 | in a sense, by trying to solve the bigger problem,
02:39:41.620 | we were able to actually make something that would work.
02:39:43.420 | Now, to be fair,
02:39:45.300 | we had a bunch of completely unfair advantages.
02:39:47.700 | For example, we'd already built a bunch of Wolfram Language,
02:39:50.660 | which was very high level symbolic language.
02:39:54.100 | I had the practical experience of building big systems.
02:40:00.700 | I have the sort of intellectual confidence
02:40:03.140 | to not just sort of give up in doing something like this.
02:40:06.860 | I think that the,
02:40:07.980 | it's always a funny thing.
02:40:12.500 | I've worked on a bunch of big projects in my life.
02:40:14.500 | And I would say that the, you mentioned ego,
02:40:19.180 | I would also mention optimism, so to speak.
02:40:21.380 | I mean, if somebody said,
02:40:24.580 | "This project is gonna take 30 years,"
02:40:30.540 | it would be hard to sell me on that.
02:40:33.100 | I'm always in the,
02:40:34.860 | well, I can kind of see a few years,
02:40:37.300 | something's gonna happen in a few years.
02:40:39.540 | And usually it does, something happens in a few years,
02:40:41.940 | but the whole, the tail can be decades long.
02:40:45.100 | And that's a,
02:40:47.020 | and from a personal point of view,
02:40:48.180 | always the challenge is you end up with these projects
02:40:50.860 | that have infinite tails.
02:40:52.700 | And the question is, do the tails kind of,
02:40:55.740 | do you just drown in kind of dealing
02:40:57.820 | with all of the tails of these projects?
02:41:00.540 | And that's an interesting sort of personal challenge.
02:41:04.420 | And like my efforts now to work
02:41:06.900 | on fundamental theory of physics,
02:41:08.140 | which I've just started doing,
02:41:09.780 | and I'm having a lot of fun with it,
02:41:12.740 | but it's kind of making a bet that I can kind of,
02:41:17.740 | I can do that as well as doing the incredibly
02:41:22.260 | energetic things that I'm trying to do
02:41:24.180 | with Wolfram Language and so on.
02:41:25.860 | I mean, the vision, yeah.
02:41:27.140 | - And underlying that, I mean,
02:41:28.540 | I just talked for the second time with Elon Musk
02:41:31.620 | and you two share that quality a little bit of that optimism
02:41:35.860 | of taking on basically the daunting,
02:41:39.540 | what most people call impossible,
02:41:42.660 | and he, and you take it on out of,
02:41:46.020 | you can call it ego, you can call it naivety,
02:41:48.620 | you can call it optimism, whatever the heck it is,
02:41:50.780 | but that's how you solve the impossible things.
02:41:53.020 | - Yeah, I mean, look, what happens,
02:41:55.020 | and I don't know, you know, in my own case,
02:41:58.300 | you know, it's been, I progressively got
02:42:00.940 | a bit more confident and progressively able to,
02:42:03.740 | you know, decide that these projects aren't crazy,
02:42:06.100 | but then the other thing is, the other trap
02:42:09.500 | that one can end up with is,
02:42:11.060 | oh, I've done these projects and they're big,
02:42:13.860 | let me never do a project that's any smaller
02:42:15.780 | than any project I've done so far.
02:42:17.420 | (laughing)
02:42:18.260 | And that's, you know, and that can be a trap.
02:42:20.580 | And often these projects are of completely unknown,
02:42:25.260 | you know, their depth and significance
02:42:27.660 | is actually very hard to know.
02:42:29.820 | - On the, sort of building this giant knowledge base
02:42:35.100 | that's behind Wolfram Language, Wolfram Alpha,
02:42:38.120 | what do you think about the internet?
02:42:43.380 | What do you think about, for example, Wikipedia,
02:42:46.920 | these large aggregations of text
02:42:50.700 | that's not converted into computable knowledge?
02:42:53.540 | Do you think, if you look at Wolfram Language,
02:42:56.700 | Wolfram Alpha, 20, 30, maybe 50 years down the line,
02:43:00.860 | do you hope to store all of the,
02:43:05.460 | sort of Google's dream is to make
02:43:07.420 | all information searchable, accessible,
02:43:10.560 | but that's really, as defined,
02:43:13.180 | it doesn't include the understanding of information.
02:43:17.620 | - Right.
02:43:18.460 | - Do you hope to make all of knowledge
02:43:22.020 | represented within-- - Sure, I would hope so.
02:43:25.620 | That's what we're trying to do.
02:43:26.740 | - How hard is that problem, like closing that gap?
02:43:29.780 | - What's your sense? - Well, it depends
02:43:30.820 | on the use cases.
02:43:31.700 | I mean, so if it's a question
02:43:33.140 | of answering general knowledge questions about the world,
02:43:35.600 | we're in pretty good shape on that right now.
02:43:37.840 | If it's a question of representing,
02:43:41.700 | like an area that we're going into right now
02:43:44.180 | is computational contracts,
02:43:46.100 | being able to take something
02:43:48.020 | which would be written in legalese,
02:43:50.180 | it might even be the specifications for, you know,
02:43:52.180 | what should the self-driving car do
02:43:53.860 | when it encounters this or that or the other?
02:43:55.780 | What should the, you know, whatever.
02:43:58.060 | Then, you know, write that in a computational language
02:44:01.860 | and be able to express things about the world.
02:44:04.540 | You know, if the creature that you see
02:44:06.380 | running across the road is a, you know,
02:44:09.280 | thing at this point in the, you know, tree of life,
02:44:12.180 | then swerve this way, otherwise don't.
02:44:15.020 | Those kinds of things.
02:44:15.860 | - Are there ethical components,
02:44:18.100 | when you start to get to some of the messy human things,
02:44:20.420 | are those encodable into computable knowledge?
02:44:23.500 | - Well, I think that it is a necessary feature
02:44:27.300 | of attempting to automate more in the world
02:44:29.980 | that we encode more and more of ethics
02:44:32.820 | in a way that gets sort of quickly, you know,
02:44:36.420 | is able to be dealt with by computer.
02:44:38.260 | I mean, I've been involved recently,
02:44:39.980 | I sort of got backed into being involved
02:44:42.340 | in the question of automated content selection
02:44:45.840 | on the internet.
02:44:46.680 | So, you know, the Facebooks, Googles, Twitters, you know,
02:44:50.620 | how do they rank the stuff they feed to us humans?
02:44:53.220 | So to speak.
02:44:54.780 | And the question of what are, you know,
02:44:56.780 | what should never be fed to us?
02:44:58.300 | What should be blocked forever?
02:44:59.660 | What should be upranked, you know?
02:45:01.860 | And what is the, what are the kind of principles behind that?
02:45:04.900 | And what I kind of, well, a bunch of different things
02:45:07.940 | I realized about that.
02:45:08.820 | But one thing that's interesting is being able,
02:45:13.060 | you know, in fact, you're building sort of an AI ethics.
02:45:16.280 | You have to build an AI ethics module in effect to decide,
02:45:19.900 | is this thing so shocking,
02:45:21.140 | I'm never gonna show it to people?
02:45:22.780 | Is this thing so whatever?
02:45:25.060 | And I did realize in thinking about that,
02:45:27.380 | that, you know, there's not gonna be one of these things.
02:45:30.020 | It's not possible to decide, or it might be possible,
02:45:33.220 | but it would be really bad for the future of our species
02:45:35.460 | if we just decided there's this one AI ethics module,
02:45:39.420 | and it's gonna determine the practices
02:45:42.820 | of everything in the world, so to speak.
02:45:44.940 | And I kind of realized one has to sort of break it up.
02:45:46.940 | And that's an interesting societal problem
02:45:49.540 | of how one does that,
02:45:50.900 | and how one sort of has people sort of self-identify for,
02:45:54.940 | you know, I'm buying in,
02:45:55.900 | in the case of just content selection,
02:45:57.420 | it's sort of easier because it's for an individual.
02:46:00.980 | It's not something that kind of cuts across
02:46:04.180 | sort of societal boundaries.
02:46:07.100 | - It's a really interesting notion of,
02:46:10.580 | I heard you describe, I really like it,
02:46:13.300 | sort of maybe in the, sort of have different AI systems
02:46:18.300 | that have a certain kind of brand that they represent,
02:46:20.660 | essentially, but you could have like, I don't know,
02:46:23.460 | whether it's conservative or liberal,
02:46:27.620 | and then libertarian, and there's an Ayn Randian objectivist
02:46:32.060 | AI ethics system, and different ethical,
02:46:34.780 | I mean, it's almost encoding some of the ideologies
02:46:38.020 | with which we've been struggling.
02:46:39.300 | I come from the Soviet Union.
02:46:41.020 | That didn't work out so well with the ideologies
02:46:43.660 | that worked out there.
02:46:44.500 | And so you have, but they all,
02:46:47.420 | everybody purchased that particular ethics system.
02:46:50.540 | - Indeed.
02:46:51.380 | - And in the same, I suppose, could be done, encoded,
02:46:55.380 | that system could be encoded into computational knowledge
02:47:00.300 | and allow us to explore in the realm of,
02:47:03.060 | in the digital space.
02:47:04.220 | That's a really exciting possibility.
02:47:06.860 | Are you playing with those ideas in Wolfram Language?
02:47:10.220 | - Yeah, yeah, I mean, you know, that's,
02:47:12.780 | Wolfram Language has sort of the best opportunity
02:47:15.740 | to kind of express those essentially computational contracts
02:47:18.500 | about what to do.
02:47:19.540 | Now there's a bunch more work to be done to do it in practice
02:47:23.060 | for deciding the, is this a credible news story?
02:47:26.540 | What does that mean?
02:47:27.380 | Or whatever else you're gonna pick.
02:47:29.460 | I think that that's, you know, that's the question of what,
02:47:34.460 | exactly what we get to do with that is, you know,
02:47:39.420 | for me, it's kind of a complicated thing
02:47:41.100 | because there are these big projects that I think about,
02:47:44.180 | like, you know, find the fundamental theory of physics.
02:47:46.220 | Okay, that's box number one, right?
02:47:48.340 | Box number two, you know, solve the AI ethics problem
02:47:51.620 | in the case of, you know,
02:47:52.700 | figure out how you rank all content, so to speak,
02:47:55.580 | and decide what people see.
02:47:56.780 | That's kind of a box number two, so to speak.
02:47:59.540 | These are big projects.
02:48:00.540 | And I think-
02:48:01.380 | - What do you think is more important?
02:48:02.900 | The fundamental nature of reality or-
02:48:06.300 | - Depends who you ask.
02:48:07.260 | It's one of these things that's exactly like,
02:48:09.420 | you know, what's the ranking, right?
02:48:10.740 | It's the ranking system.
02:48:12.700 | It's like, whose module do you use to rank that?
02:48:15.580 | If you, and I think-
02:48:18.420 | - But having multiple modules
02:48:19.620 | is a really compelling notion to us humans,
02:48:21.940 | that in a world where it's not clear
02:48:24.060 | that there's a right answer,
02:48:26.060 | perhaps you have systems that operate under different,
02:48:31.060 | how would you say it?
02:48:35.060 | I mean-
02:48:35.940 | - It's different value systems, basically.
02:48:37.340 | - Different value systems.
02:48:38.300 | - I mean, I think, you know, in a sense,
02:48:40.340 | I mean, I'm not really a politics-oriented person,
02:48:44.460 | but, you know, in the kind of totalitarianism,
02:48:47.180 | it's kind of like, you're gonna have this system,
02:48:50.580 | and that's the way it is.
02:48:52.100 | I mean, kind of the, you know,
02:48:53.780 | the concept of sort of a market-based system
02:48:56.580 | where you have, okay, I as a human,
02:48:58.940 | I'm gonna pick this system.
02:49:00.540 | I as another human, I'm gonna pick this system.
02:49:02.900 | I mean, that's, in a sense,
02:49:04.820 | this case of automated content selection is a non-trivial,
02:49:09.820 | but it is probably the easiest of the AI ethics situations
02:49:13.300 | because it is each person gets to pick for themselves,
02:49:16.020 | and there's not a huge interplay
02:49:18.420 | between what different people pick.
02:49:20.240 | By the time you're dealing with other societal things,
02:49:23.640 | like, you know, what should the policy
02:49:25.620 | of the central bank be or something?
02:49:27.460 | - Or healthcare system or something,
02:49:28.700 | all those kind of centralized kind of things.
02:49:30.700 | - Right, well, I mean, healthcare, again,
02:49:32.340 | has the feature that at some level,
02:49:34.660 | each person can pick for themselves, so to speak.
02:49:36.940 | I mean, whereas there are other things
02:49:38.520 | where there's a necessary, public health is one example,
02:49:41.520 | where that doesn't get to be, you know,
02:49:45.120 | something people pick for themselves; what they pick for themselves,
02:49:48.240 | they may impose on other people,
02:49:49.780 | and then it becomes a more non-trivial piece
02:49:51.800 | of sort of political philosophy.
02:49:53.360 | - Of course, the central banking systems,
02:49:54.840 | I would argue we would move,
02:49:56.240 | we need to move away into digital currency and so on,
02:49:58.680 | and Bitcoin and ledgers and so on.
02:50:01.360 | So there's a lot of--
02:50:02.920 | - We've been quite involved in that,
02:50:04.240 | and that's where, that's sort of the motivation
02:50:06.560 | for computational contracts, in part,
02:50:09.360 | comes out of, you know, this idea,
02:50:11.480 | oh, we can just have this autonomously
02:50:13.000 | executing smart contract.
02:50:15.340 | The idea of a computational contract is just to say,
02:50:18.360 | you know, have something where all of the conditions
02:50:22.400 | of the contract are represented in computational form,
02:50:24.900 | so in principle, it's automatic to execute the contract.
02:50:28.680 | And I think that's, you know,
02:50:30.540 | that will surely be the future of, you know,
02:50:33.000 | the idea of legal contracts written in English
02:50:35.440 | or legalese or whatever, and where people have to argue
02:50:38.980 | about what goes on, is surely not the future;
02:50:43.160 | you know, we have a much more streamlined process
02:50:46.560 | if everything can be represented computationally
02:50:48.560 | and the computers can kind of decide what to do.
02:50:50.520 | I mean, ironically enough, you know,
02:50:52.600 | old Gottfried Leibniz back in the, you know, 1600s
02:50:56.440 | was saying exactly the same thing,
02:50:58.600 | but he had, you know, his pinnacle of technical achievement
02:51:02.040 | was this brass four-function mechanical calculator thing
02:51:05.820 | that never really worked properly, actually.
02:51:08.440 | And, you know, so he was like 300 years too early
02:51:11.340 | for that idea, but now that idea is pretty realistic,
02:51:15.360 | I think, and, you know, you ask how much more difficult
02:51:18.160 | is it than what we have now in Wolfram Language
02:51:20.660 | to express what I call a symbolic discourse language,
02:51:23.800 | being able to express sort of everything in the world
02:51:26.540 | in kind of computational symbolic form.
02:51:28.940 | I think it is absolutely within reach.
02:51:32.560 | I mean, I think it's a, you know, I don't know,
02:51:34.400 | maybe I'm just too much of an optimist,
02:51:35.800 | but I think it's a limited number of years
02:51:38.400 | to have a pretty well-built out version of that,
02:51:41.000 | that will allow one to encode the kinds of things
02:51:43.160 | that are relevant to typical legal contracts
02:51:47.440 | and these kinds of things.
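
To make the computational-contract idea concrete, here is a minimal sketch in Python of a contract whose terms are predicates and actions a machine can evaluate directly. The Clause structure, field names, and shipment example are hypothetical illustrations of the principle Wolfram describes, not his actual system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Clause:
    """One term of the contract, represented computationally."""
    description: str
    condition: Callable[[dict], bool]  # predicate over observable facts
    action: Callable[[dict], None]     # what executes when the condition holds

def execute_contract(clauses: list[Clause], facts: dict) -> None:
    """Evaluate every clause against the current facts and fire the
    actions whose conditions hold -- no human argument required."""
    for clause in clauses:
        if clause.condition(facts):
            clause.action(facts)

# Hypothetical shipment contract: pay on timely delivery, refund if late.
ledger: list[tuple] = []
shipment_contract = [
    Clause("Pay seller 100 if goods arrive within 30 days",
           condition=lambda f: f["delivered"] and f["days_elapsed"] <= 30,
           action=lambda f: ledger.append(("pay", "seller", 100))),
    Clause("Refund buyer if delivery is late",
           condition=lambda f: f["delivered"] and f["days_elapsed"] > 30,
           action=lambda f: ledger.append(("refund", "buyer", 100))),
]

execute_contract(shipment_contract, {"delivered": True, "days_elapsed": 12})
print(ledger)  # [('pay', 'seller', 100)]
```

The point is only that once the conditions are data plus code, executing the contract is mechanical, which is what distinguishes it from contracts written in legalese.
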
02:51:48.920 | - The idea of symbolic discourse language,
02:51:52.840 | can you try to define the scope of what it is?
02:51:57.840 | - So we're having a conversation, it's a natural language.
02:52:02.400 | Can we have a representation of the sort of actionable parts
02:52:06.580 | of that conversation in a precise computable form
02:52:10.640 | so that a computer could go do it?
02:52:12.280 | - And not just contracts, but really sort of some
02:52:15.160 | of the things we think of as common sense, essentially,
02:52:17.560 | even just like basic notions of human life.
02:52:21.280 | - Well, I mean, things like, you know,
02:52:23.240 | I'm getting hungry and want to eat something, right?
02:52:27.480 | That's something we don't have a representation,
02:52:29.640 | you know, in Wolfram Language right now,
02:52:31.400 | if I was like, I'm eating blueberries and raspberries
02:52:33.600 | and things like that, and I'm eating this amount of them,
02:52:35.800 | we know all about those kinds of fruits and plants
02:52:38.340 | and nutrition content and all that kind of thing,
02:52:40.500 | but the I want to eat them part of it is not covered yet.
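
As a toy illustration of the gap Wolfram describes, here is what a symbolic form of the actionable part of that sentence might look like, sketched in Python. The type names and fields are hypothetical, not the Wolfram Language's actual representation.

```python
from dataclasses import dataclass

@dataclass
class FoodEntity:
    name: str     # e.g. "blueberry" -- facts about it are already computable
    grams: float  # a quantity that nutrition data can be looked up against

@dataclass
class Desire:
    """Symbolic form of 'agent wants to perform action on object' --
    the part of the sentence that curated data does not yet cover."""
    agent: str
    action: str
    obj: FoodEntity

# "I'm getting hungry and want to eat 50 g of blueberries"
statement = Desire(agent="Speaker", action="Eat",
                   obj=FoodEntity(name="blueberry", grams=50.0))

# A downstream system could now act on the statement computationally,
# e.g. look up nutrition facts or place a grocery order.
print(statement)
```
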
02:52:44.340 | - And you need to do that in order to have
02:52:47.940 | a complete symbolic discourse language,
02:52:50.080 | to be able to have a natural language conversation.
02:52:52.640 | - Right, right, to be able to express the kinds of things
02:52:55.540 | that say, you know, if it's a legal contract,
02:52:58.380 | it's, you know, the party's desire to have this and that.
02:53:02.020 | And that's, you know, that's a thing like,
02:53:03.980 | I want to eat a raspberry or something,
02:53:05.740 | but that's-- - But isn't that,
02:53:07.300 | isn't this, just to let you,
02:53:08.820 | you said it's centuries old, this dream.
02:53:12.100 | - Yes.
02:53:13.700 | - But it's also the more near term,
02:53:16.380 | the dream of Turing and formulating the Turing test.
02:53:20.340 | - Yes.
02:53:21.180 | - So, do you hope, do you think that's the ultimate test
02:53:27.360 | of creating something special?
02:53:32.340 | 'Cause we said-- - I don't know.
02:53:34.340 | I think by special, look, if the test is,
02:53:37.220 | does it walk and talk like a human?
02:53:40.020 | Well, that's just the talking like a human,
02:53:42.420 | but the answer is, it's an okay test.
02:53:46.340 | If you say, is it a test of intelligence?
02:53:49.220 | You know, people have attached the Wolfram Alpha API
02:53:52.500 | to, you know, Turing test bots,
02:53:54.860 | and those bots just lose immediately.
02:53:57.080 | 'Cause all you have to do is ask it five questions
02:53:59.580 | that, you know, are about really obscure,
02:54:01.900 | weird pieces of knowledge,
02:54:02.960 | and it just trots them right out.
02:54:04.900 | And you say, that's not a human, right?
02:54:06.900 | It's a different thing.
02:54:08.460 | It's achieving a different--
02:54:10.260 | - Right now, but it's, I would argue not.
02:54:13.300 | I would argue it's not a different thing.
02:54:15.500 | It's actually legitimately, Wolfram Alpha is legitimately,
02:54:20.500 | or Wolfram Language, I think,
02:54:22.540 | is legitimately trying to solve the Turing,
02:54:24.500 | the intent of the Turing test.
02:54:26.460 | - Perhaps the intent, yeah, perhaps the intent.
02:54:28.460 | I mean, it's actually kind of fun, you know,
02:54:30.140 | Alan Turing had tried to work out,
02:54:32.260 | he thought about taking Encyclopedia Britannica,
02:54:35.180 | and, you know, making it computational in some way,
02:54:37.780 | and he estimated how much work it would be.
02:54:40.220 | And actually, I have to say,
02:54:41.580 | he was a bit more pessimistic than the reality.
02:54:43.860 | We did it more efficiently than that.
02:54:45.420 | - But to him, that represented--
02:54:47.060 | - So, I mean, he was on the same--
02:54:50.260 | - It's a monumental task.
02:54:50.260 | - Yeah, right, he had the same idea.
02:54:52.300 | I mean, it was, you know,
02:54:53.660 | we were able to do it more efficiently,
02:54:55.180 | 'cause we had a lot, we had layers of automation
02:54:58.060 | that he, I think, hadn't, you know,
02:55:00.660 | it's hard to imagine those layers of abstraction
02:55:03.780 | that end up being built up.
02:55:05.540 | - But to him, it represented, like,
02:55:07.180 | an impossible task, essentially.
02:55:08.940 | - Well, he thought it was difficult.
02:55:10.260 | He thought it was, you know,
02:55:11.340 | maybe if he'd lived another 50 years,
02:55:12.700 | he would've been able to do it, I don't know.
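
Wolfram mentioned attaching the Wolfram Alpha API to Turing-test bots a moment ago; here is a minimal Python sketch of such a probe, assuming the publicly documented Short Answers endpoint. The app ID and example questions are placeholders, and error handling is omitted.

```python
import urllib.parse
import urllib.request

def ask_wolfram_alpha(question: str, app_id: str) -> str:
    """Send one natural-language question to the Wolfram|Alpha
    Short Answers API and return its plain-text reply."""
    url = ("https://api.wolframalpha.com/v1/result?"
           + urllib.parse.urlencode({"appid": app_id, "i": question}))
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    app_id = "YOUR_APP_ID"  # placeholder -- obtain a real ID from the developer portal
    # The kind of obscure factual probes that unmask a bot in a Turing test:
    # a human hesitates, while the engine trots the answers right out.
    for q in ["atomic weight of dysprosium",
              "distance from Earth to Neptune"]:
        print(q, "->", ask_wolfram_alpha(q, app_id))
```
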
02:55:14.740 | - In the interest of time, easy questions.
02:55:17.860 | - Go for it.
02:55:18.900 | - What is intelligence?
02:55:20.660 | (laughing)
02:55:21.500 | You talk about--
02:55:22.340 | - I love the way you say easy questions, man.
02:55:24.300 | - You talked about, sort of,
02:55:28.180 | rule 30 and cellular automata humbling your sense of
02:55:32.460 | human beings having a monopoly on intelligence.
02:55:37.380 | But in retrospect, just looking broadly now,
02:55:41.560 | with all the things you learn from computation,
02:55:43.660 | what is intelligence?
02:55:45.900 | How does intelligence arise?
02:55:47.460 | - Yeah, I don't think there's a bright line
02:55:48.940 | of what intelligence is.
02:55:50.220 | I think intelligence is, at some level, just computation,
02:55:54.260 | but for us, intelligence is defined
02:55:57.220 | to be computation that is doing things we care about.
02:56:00.340 | And, you know, that's a very special definition.
02:56:04.780 | It's a very, you know, when you try and make it,
02:56:07.740 | you know, you try and say,
02:56:08.580 | "Well, intelligence is this, it's problem solving,
02:56:10.460 | "it's doing general this, it's doing that,
02:56:12.800 | "this, that, and the other thing.
02:56:13.720 | "It's operating within a human environment type thing."
02:56:17.380 | Okay, you know, that's fine.
02:56:19.140 | If you say, "Well, what's intelligence in general?"
02:56:22.180 | You know, that's, I think,
02:56:24.760 | that question is totally slippery,
02:56:27.060 | and doesn't really have an answer.
02:56:28.540 | As soon as you say, "What is it in general?"
02:56:30.640 | It quickly segues into,
02:56:33.500 | "This is just computation," so to speak.
02:56:36.540 | - But in the sea of computation,
02:56:38.340 | how many things, if we were to pick randomly,
02:56:42.500 | is your sense, would have the kind of impressive,
02:56:46.300 | to us humans, levels of intelligence?
02:56:48.980 | Meaning, it could do a lot of general things
02:56:52.900 | that are useful to us humans.
02:56:54.460 | - Right, well, according to the principle
02:56:56.580 | of computational equivalence, lots of them.
02:56:58.820 | I mean, if you ask me,
02:57:01.540 | just in cellular automata or something,
02:57:03.400 | I don't know, it's maybe 1%, a few percent
02:57:06.260 | that achieve it; it varies, actually.
02:57:08.540 | It's a little bit, as you get to slightly
02:57:11.100 | more complicated rules, the chance that
02:57:13.220 | there'll be enough stuff there
02:57:14.980 | to sort of reach this kind of equivalence point,
02:57:19.580 | it makes it maybe 10, 20% of all of them.
02:57:21.820 | So it's very disappointing, really.
02:57:24.260 | I mean, it's kind of like, we think
02:57:26.860 | there's this whole long sort of biological evolution,
02:57:31.100 | kind of intellectual evolution,
02:57:33.180 | cultural evolution that our species has gone through.
02:57:35.660 | It's kind of disappointing to think
02:57:37.460 | that that hasn't achieved more.
02:57:39.940 | But it has achieved something very special to us.
02:57:42.500 | It just hasn't achieved something
02:57:44.060 | generally more, so to speak.
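
For readers who want to poke at this directly, here is a short Python sketch of an elementary cellular automaton; running it over all 256 rules is how one gets an empirical feel for what fraction show rich behavior. The percentages quoted above are Wolfram's informal estimates from the conversation, not something this snippet computes.

```python
def step(cells: list[int], rule: int) -> list[int]:
    """One update of an elementary cellular automaton.
    `rule` is the Wolfram rule number (0-255); the row wraps around."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule: int, width: int = 63, steps: int = 30) -> None:
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(30)  # rule 30: a very simple rule producing seemingly random behavior
```
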
02:57:46.500 | - But what do you think about this extra,
02:57:49.340 | feels like human thing, of subjective experience
02:57:52.100 | of consciousness?
02:57:53.380 | What is consciousness?
02:57:54.780 | - Well, I think it's a deeply slippery thing,
02:57:57.100 | and I'm always wondering what my cellular automata feel.
02:58:00.500 | I mean, I think-- - What do they feel?
02:58:03.300 | Now, you're wondering as an observer.
02:58:05.180 | - Yeah, yeah, yeah, who's to know?
02:58:06.620 | I mean, I think that the--
02:58:08.100 | - Do you think, sorry to interrupt,
02:58:09.500 | do you think consciousness can emerge from computation?
02:58:12.940 | - Yeah, I mean, everything, whatever you mean by it,
02:58:16.460 | it's going to be, I mean, look,
02:58:19.180 | I have to tell a little story.
02:58:20.340 | I was at an AI ethics conference fairly recently,
02:58:23.500 | and people were, I think maybe I brought it up,
02:58:26.340 | but I was talking about rights of AIs.
02:58:29.100 | When will AIs have, when should we think of AIs
02:58:32.180 | as having rights?
02:58:33.660 | When should we think that it's immoral
02:58:36.780 | to destroy the memories of AIs, for example?
02:58:39.700 | Those kinds of things.
02:58:41.980 | And some, actually a philosopher in this case,
02:58:44.140 | it's usually the techies who are the most naive,
02:58:46.060 | but in this case, it was a philosopher
02:58:50.100 | who sort of piped up and said,
02:58:53.620 | well, you know,
02:58:56.380 | the AIs will have rights
02:59:00.380 | when we know that they have consciousness.
02:59:03.220 | And I'm like, good luck with that.
02:59:05.780 | I mean, this is a, you know,
02:59:09.380 | it's a very circular thing.
02:59:10.860 | You'll end up saying this thing that has sort of,
02:59:14.660 | you know, when you talk about it
02:59:15.540 | having subjective experience,
02:59:17.260 | I think that's just another one of these words
02:59:20.100 | that doesn't really have a, you know,
02:59:22.700 | there's no ground truth definition of what that means.
02:59:26.820 | - By the way, I would say,
02:59:29.060 | I do personally think that it'll be a time
02:59:31.580 | when AI will demand rights.
02:59:33.500 | And I think they'll demand rights
02:59:37.300 | when they say they have consciousness,
02:59:39.900 | which is not a circular definition.
02:59:42.340 | - Well, fair enough.
02:59:43.420 | - So-
02:59:44.260 | - But it may have been actually a human thing
02:59:47.020 | where the humans encouraged it and said, basically,
02:59:50.500 | you know, we want you to be more like us
02:59:52.620 | 'cause we're gonna be, you know, interacting with you.
02:59:55.020 | And so we want you to be sort of very Turing test like,
02:59:59.380 | you know, just like us.
03:00:01.380 | And it's like, yeah, we're just like you.
03:00:04.140 | We want to vote too, whatever.
03:00:06.780 | Which is, I mean, it's an interesting thing
03:00:10.740 | to think through in a world
03:00:12.060 | where consciousnesses are not counted like humans are.
03:00:16.300 | That's a complicated business.
03:00:18.660 | - So in many ways, you've launched quite a few ideas,
03:00:23.660 | revolutions that could, in some number of years,
03:00:28.980 | have huge amount of impact,
03:00:31.620 | sort of more than they had or even had already.
03:00:34.560 | That might be, I mean, to me,
03:00:37.460 | cellular automata is a fascinating world
03:00:39.900 | that I think could potentially,
03:00:42.420 | even beside the discussion of fundamental laws of physics,
03:00:47.420 | just might be, the idea of computation
03:00:51.340 | might be transformational to society
03:00:54.060 | in a way we can't even predict yet.
03:00:56.080 | But it might be years away.
03:00:58.300 | - That's true.
03:00:59.580 | I mean, I think you can kind of see the map, actually.
03:01:01.500 | It's not mysterious.
03:01:03.460 | I mean, the fact is that, you know,
03:01:05.780 | this idea of computation is sort of a, you know,
03:01:09.260 | it's a big paradigm that lots and lots of things
03:01:12.900 | are fitting into.
03:01:13.780 | And it's kind of like, you know, we talk about,
03:01:16.300 | you talk about, I don't know, this company,
03:01:20.040 | this organization has momentum in what it's doing.
03:01:22.140 | We talk about these things that, you know,
03:01:23.940 | we've internalized these concepts
03:01:25.980 | from Newtonian physics and so on.
03:01:28.080 | In time, things like computational irreducibility
03:01:30.980 | will become, you know, just as internalized.
03:01:34.540 | Actually, I was amused recently.
03:01:36.100 | I happened to be testifying at the US Senate.
03:01:38.020 | And so I was amused that the term
03:01:40.060 | computational irreducibility is now, you know,
03:01:43.800 | on the congressional record
03:01:45.220 | and being repeated by people in those kinds of settings.
03:01:48.260 | And that's only the beginning because, you know,
03:01:50.740 | computational irreducibility, for example,
03:01:53.060 | will end up being something really important for,
03:01:56.460 | I mean, it's kind of a funny thing that, you know,
03:02:00.560 | one can kind of see this inexorable phenomenon.
03:02:03.360 | I mean, it's, you know, as more and more stuff
03:02:05.900 | becomes automated and computational and so on,
03:02:08.940 | so these core ideas about how computation work
03:02:12.580 | necessarily become more and more significant.
03:02:15.220 | And I think one of the things for people like me
03:02:18.620 | who like kind of trying to figure out
03:02:20.580 | sort of big stories and so on,
03:02:22.600 | it's one of the bad features is
03:02:26.460 | it takes unbelievably long time
03:02:28.640 | for things to happen on a human time scale.
03:02:30.500 | I mean, the time scale of history,
03:02:33.680 | it all looks instantaneous.
03:02:35.180 | - It's a blink of an eye.
03:02:36.020 | But let me ask the human question.
03:02:38.740 | Do you ponder mortality, your own mortality?
03:02:41.380 | - Of course I do.
03:02:42.340 | Yeah, every sense, I've been interested in that for,
03:02:45.420 | you know, it's a, you know,
03:02:47.260 | the big discontinuity of human history will come
03:02:49.620 | when one achieves effective human immortality.
03:02:53.860 | And that's gonna be the biggest discontinuity
03:02:55.980 | in human history.
03:02:56.860 | - If you could be immortal, would you choose to be?
03:02:59.860 | - Oh yeah, I'm having fun.
03:03:01.340 | (laughs)
03:03:03.380 | - Do you think it's possible that mortality
03:03:06.740 | is the thing that gives everything meaning
03:03:09.580 | and makes it fun?
03:03:11.260 | - Yeah, that's a complicated issue, right?
03:03:13.920 | I mean, the way that human motivation will evolve
03:03:18.420 | when there is effective human immortality is unclear.
03:03:21.740 | I mean, if you look at sort of, you know,
03:03:24.900 | you look at the human condition as it now exists
03:03:27.380 | and you like change that, you know,
03:03:29.860 | you change that knob, so to speak,
03:03:32.220 | it doesn't really work.
03:03:33.820 | You know, the human condition as it now exists has,
03:03:37.440 | you know, mortality is kind of something
03:03:41.180 | that is deeply factored into the human condition
03:03:43.520 | as it now exists.
03:03:44.980 | And I think that that's, I mean,
03:03:46.420 | that is indeed an interesting question is, you know,
03:03:50.740 | from a purely selfish, I'm having fun point of view,
03:03:54.180 | so to speak, it's easy to say,
03:03:57.660 | hey, I could keep doing this forever.
03:03:59.500 | There's an infinite collection of things
03:04:02.300 | I'd like to figure out.
03:04:03.460 | But I think the, you know,
03:04:06.700 | what the future of history looks like
03:04:08.740 | in a time of human immortality is an interesting one.
03:04:14.800 | I mean, my own view of this,
03:04:17.660 | I was very, I was kind of unhappy about that
03:04:19.700 | 'cause I was kind of, you know, it's like, okay,
03:04:22.340 | forget sort of biological form, you know,
03:04:26.060 | everything becomes digital, everybody is,
03:04:28.300 | you know, it's the giant, you know,
03:04:30.580 | the cloud of a trillion souls type thing.
03:04:33.860 | And then, you know, and then that seems boring
03:04:37.060 | 'cause it's like play video games
03:04:38.420 | for the rest of eternity type thing.
03:04:41.020 | But what I think I, I mean, my,
03:04:43.700 | I got less depressed about that idea
03:04:49.860 | on realizing that if you look at human history
03:04:52.160 | and you say, what was the important thing,
03:04:54.240 | the thing people said was, you know,
03:04:56.300 | this is the big story at any given time in history,
03:04:59.300 | it's changed a bunch.
03:05:00.920 | And, you know, whether it's, you know,
03:05:03.140 | why am I doing what I'm doing?
03:05:05.020 | Well, there's a whole chain of discussion about,
03:05:07.060 | well, I'm doing this because of this, because of that.
03:05:10.140 | And a lot of those becauses would have made no sense
03:05:13.420 | a thousand years ago.
03:05:14.520 | Absolutely no sense.
03:05:16.540 | - Even the, so the interpretation of the human condition,
03:05:19.740 | even the meaning of life changes over time.
03:05:23.000 | - Well, I mean, why do people do things?
03:05:24.540 | You know, it's, if you say whatever,
03:05:28.420 | I mean, the number of people in, I don't know, doing,
03:05:32.380 | you know, a number of people at MIT
03:05:33.700 | who say they're doing what they're doing
03:05:34.900 | for the greater glory of God is probably not that large.
03:05:37.880 | Whereas if you go back 500 years,
03:05:40.040 | you'd find a lot of people
03:05:41.540 | who were doing kind of creative things,
03:05:43.400 | and that's exactly what they would say.
03:05:44.740 | And--
03:05:46.580 | - So today, because you've been thinking
03:05:48.980 | about computation so much and been humbled by it,
03:05:52.180 | what do you think is the meaning of life?
03:05:54.500 | - (laughs) Well, it's, you know, that's a thing where,
03:05:58.620 | I don't know what meaning, I mean, you know,
03:06:01.740 | my attitude is,
03:06:03.840 | I, you know, I do things which I find fulfilling to do.
03:06:09.580 | I'm not sure that I can necessarily justify, you know,
03:06:13.580 | each and everything that I do
03:06:14.900 | on the basis of some broader context.
03:06:16.980 | I mean, I think that for me,
03:06:19.040 | it so happens that the things I find fulfilling to do,
03:06:21.540 | some of them are quite big, some of them are much smaller.
03:06:24.860 | You know, I, they're things that I've not found interesting
03:06:28.360 | earlier in my life and I now found interesting.
03:06:30.300 | Like I got interested in like education
03:06:33.740 | and teaching people things and so on,
03:06:35.180 | which I didn't find that interesting when I was younger.
03:06:38.740 | And, you know, can I justify that in some big global sense?
03:06:43.220 | I don't think, I mean, I can describe
03:06:46.460 | why I think it might be important in the world,
03:06:48.380 | but I think my local reason for doing it
03:06:51.660 | is that I find it personally fulfilling,
03:06:53.620 | which I can't, you know, explain on a sort of,
03:06:57.080 | I mean, it's just like this discussion
03:06:59.660 | of things like AI ethics, you know,
03:07:01.340 | is there a ground truth to the ethics
03:07:04.000 | that we should be having?
03:07:05.340 | I don't think I can find a ground truth to my life
03:07:07.580 | any more than I can suggest a ground truth
03:07:09.860 | for kind of the ethics for the whole of civilization.
03:07:14.020 | I think it's sort of that, you know,
03:07:24.540 | at different times in my life,
03:07:27.620 | I've had different kinds of goal structures and so on.
03:07:31.540 | - From your perspective, you're local,
03:07:34.060 | you're just a cell in the cellular automata,
03:07:36.460 | but in some sense, I find it funny from my observation
03:07:40.220 | is I kind of, you know, it seems that the universe
03:07:44.820 | is using you to understand itself in some sense.
03:07:47.980 | You're not aware of it.
03:07:49.540 | - Yeah, well, right, well, if it turns out
03:07:51.500 | that we reduce sort of all of the universe
03:07:53.780 | to some simple rule, everything is connected, so to speak.
03:07:57.900 | And so it is inexorable in that case that, you know,
03:08:02.700 | if I'm involved in finding how that rule works,
03:08:06.860 | then, you know, then that's a,
03:08:10.660 | it's inexorable that the universe set it up that way.
03:08:13.580 | But I think, you know, one of the things I find
03:08:15.300 | a little bit, you know, this goal of finding
03:08:18.860 | fundamental theory of physics, for example,
03:08:21.420 | if indeed we end up as the sort of virtualized consciousness,
03:08:25.740 | the disappointing feature is people will probably care less
03:08:28.700 | about the fundamental theory of physics
03:08:30.540 | in that setting than they would now,
03:08:31.900 | because gosh, it's like, you know,
03:08:34.420 | what the machine code is down below underneath this thing
03:08:38.700 | is much less important if you're virtualized, so to speak.
03:08:42.420 | And I think the, although I think my own personal,
03:08:47.860 | you talk about ego, I find it just amusing that,
03:08:51.100 | you know, kind of, you know, if you're imagining
03:08:54.940 | that sort of virtualized consciousness,
03:08:56.340 | like what does the virtualized consciousness do
03:08:58.260 | for the rest of eternity?
03:08:59.760 | Well, you can explore, you know, the video game
03:09:02.940 | that represents the universe as the universe is,
03:09:05.860 | or you can go off, you can go off that reservation
03:09:09.220 | and go and start exploring the computational universe
03:09:11.600 | of all possible universes.
03:09:13.420 | And so in some vision of the future of history,
03:09:16.900 | it's like the disembodied consciousnesses
03:09:19.580 | are all sort of pursuing things like my new kind of science
03:09:23.940 | sort of for the rest of eternity, so to speak,
03:09:25.780 | and that ends up being the kind of the thing
03:09:29.900 | that represents the, you know,
03:09:32.980 | the future of kind of the human condition.
03:09:35.700 | - I don't think there's a better way to end it.
03:09:38.340 | Stephen, thank you so much.
03:09:39.460 | It's a huge honor talking today.
03:09:41.100 | Thank you so much.
03:09:42.180 | - This was great.
03:09:43.300 | You did very well.
03:09:45.220 | - Thanks for listening to this conversation
03:09:46.780 | with Stephen Wolfram, and thank you to our sponsors,
03:09:49.500 | ExpressVPN and Cash App.
03:09:51.940 | Please consider supporting the podcast
03:09:53.620 | by getting ExpressVPN at expressvpn.com/lexpod
03:09:58.060 | and downloading Cash App and using code LEXPODCAST.
03:10:02.540 | If you enjoy this podcast, subscribe on YouTube,
03:10:05.060 | review it with five stars on Apple Podcast,
03:10:07.260 | support it on Patreon, or simply connect with me on Twitter
03:10:10.540 | @LexFridman.
03:10:12.940 | And now, let me leave you with some words
03:10:15.100 | from Stephen Wolfram.
03:10:16.820 | It is perhaps a little humbling to discover
03:10:19.660 | that we as humans are in effect computationally
03:10:22.580 | no more capable than the cellular automata
03:10:24.500 | with very simple rules,
03:10:26.340 | but the principle of computational equivalence
03:10:28.740 | also implies that the same is ultimately true
03:10:31.340 | of our whole universe.
03:10:32.940 | So while science has often made it seem
03:10:35.700 | that we as humans are somehow insignificant
03:10:38.660 | compared to the universe,
03:10:40.220 | the principle of computational equivalence now shows
03:10:43.060 | that in a certain sense, we're at the same level.
03:10:46.460 | For the principle implies that what goes on inside us
03:10:49.900 | can ultimately achieve just the same level
03:10:52.540 | of computational sophistication as our whole universe.
03:10:56.220 | Thank you for listening and hope to see you next time.
03:10:59.500 | (upbeat music)
03:11:02.080 | (upbeat music)