Stephen Wolfram: Cellular Automata, Computation, and Physics | Lex Fridman Podcast #89
Chapters
0:00 Introduction
4:16 Communicating with an alien intelligence
12:11 Monolith in 2001: A Space Odyssey
29:06 What is computation?
44:54 Physics emerging from computation
74:10 Simulation
79:23 Fundamental theory of physics
88:01 Richard Feynman
99:57 Role of ego in science
107:21 Cellular automata
135:08 Wolfram language
175:14 What is intelligence?
177:47 Consciousness
182:36 Mortality
185:47 Meaning of life
00:00:00.000 |
The following is a conversation with Stephen Wolfram, 00:00:06.240 |
who is the founder and CEO of Wolfram Research, 00:00:12.600 |
Wolfram Language, and the new Wolfram Physics Project. 00:00:22.560 |
His book "A New Kind of Science" was one of the most influential books in my journey 00:00:25.600 |
in computer science and artificial intelligence. 00:00:28.980 |
It made me fall in love with the mathematical beauty 00:00:33.540 |
It is true that perhaps one of the criticisms of Stephen 00:00:43.520 |
from fully enjoying the content of his ideas. 00:00:46.240 |
We talk about this point in this conversation. 00:00:56.800 |
that refuses to surrender to the cautious ways 00:01:05.240 |
in looking past the peculiarities of human nature 00:01:15.540 |
is one of the most original minds of our time, 00:01:23.140 |
This conversation was recorded in November, 2019, 00:01:26.580 |
when the Wolfram Physics Project was underway, 00:01:28.980 |
but not yet ready for public exploration as it is now. 00:01:36.940 |
so this is round one, and stay tuned for round two soon. 00:01:48.320 |
support it on Patreon, or simply connect with me on Twitter, 00:02:11.500 |
by getting ExpressVPN at expressvpn.com/lexpod 00:02:16.160 |
and downloading Cash App and using code LEXPODCAST. 00:02:34.420 |
Since Cash App does fractional share trading, 00:02:36.940 |
let me mention that the order execution algorithm 00:02:40.820 |
to create the abstraction of fractional orders 00:02:51.200 |
that takes a step up to the next layer of abstraction 00:02:55.420 |
This makes trading more accessible for new investors 00:03:01.460 |
So again, if you get Cash App from the App Store, 00:03:07.020 |
you get $10, and Cash App will also donate $10 to FIRST, 00:03:11.060 |
an organization that is helping to advance robotics 00:03:13.620 |
and STEM education for young people around the world. 00:03:24.980 |
to get a discount and to support this podcast. 00:03:33.020 |
Press the big power on button and your privacy is protected. 00:03:36.220 |
And if you like, you can make it look like your location 00:03:44.500 |
Certainly, it allows you to access international versions 00:03:47.180 |
of streaming websites like the Japanese Netflix 00:03:51.680 |
ExpressVPN works on any device you can imagine. 00:04:01.180 |
Windows, Android, but it's available anywhere else too. 00:04:08.900 |
to get a discount and to support this podcast. 00:04:11.520 |
And now, here's my conversation with Stephen Wolfram. 00:04:18.540 |
helped create the alien language in the movie "Arrival." 00:04:22.060 |
So let me ask maybe a bit of a crazy question, 00:04:27.980 |
do you think we would be able to find a common language? 00:04:31.460 |
- Well, by the time we're saying aliens are visiting us, 00:04:38.300 |
because the concept of an alien actually visiting, 00:04:42.460 |
so to speak, we already know they're kind of things 00:04:49.520 |
in the same kind of physical setup that we do. 00:04:52.820 |
They're not, you know, it's not just radio signals. 00:04:57.540 |
It's an actual thing that shows up and so on. 00:05:05.960 |
Well, the best example we have of this right now is AI. 00:05:13.680 |
And the question is, how well do we communicate with AI? 00:05:20.420 |
and you open it up and it's like, what are you thinking? 00:05:26.140 |
It's not easy, but it's not absolutely impossible. 00:05:31.020 |
but given the setup of your question, aliens visiting, 00:05:37.620 |
one will be able to find some form of communication, 00:06:04.740 |
by saying you visit, but how would aliens visit? 00:06:12.060 |
and here we're using the imprecision of human language, 00:06:16.980 |
And if that's represented in computational language, 00:06:34.760 |
there's, you know, something, a physical embodiment 00:06:50.180 |
you know, photons in some very elaborate pattern. 00:06:53.340 |
We're imagining it's physical things made of atoms 00:07:13.380 |
oh, there'll be, you know, it'll be clear what it means 00:07:18.700 |
I've increasingly realized as a result of science 00:07:21.380 |
that I've done that there really isn't a bright line 00:07:28.980 |
So, you know, in our kind of everyday sort of discussion, 00:07:37.740 |
You know, we realize that there are computational processes 00:07:47.020 |
How do we distinguish that from the processes 00:07:52.100 |
the physical processes that go on in our brains? 00:07:56.260 |
How do we say the physical processes going on 00:07:59.980 |
that represent sophisticated computations in the weather? 00:08:03.020 |
Oh, that's not the same as the physical processes 00:08:05.220 |
that go on that represent sophisticated computations 00:08:13.860 |
is that there's kind of a thread of history and so on 00:08:17.620 |
that connects kind of what happens in different brains 00:08:23.020 |
And it's a, you know, what happens in the weather 00:08:26.940 |
by sort of a thread of civilizational history, so to speak, 00:08:44.220 |
because, you know, it's like that pulsar magnetosphere 00:08:48.220 |
that's generating these very elaborate radio signals. 00:08:51.180 |
You know, is that something that we should think of 00:08:57.780 |
you know, millions of years of processes going on 00:09:10.820 |
about extraterrestrial intelligence and where is it 00:09:14.940 |
of how come there's no other signs of intelligence 00:09:18.420 |
in the universe, my guess is that we've got sort of two 00:09:21.620 |
alien forms of intelligence that we're dealing with, 00:09:31.580 |
And my guess is people will sort of get comfortable 00:09:34.100 |
with the fact that both of these have been achieved 00:09:44.100 |
things we've created, digital things we've created 00:09:48.260 |
And they'll say, oh, we're kind of also used to the idea 00:09:55.020 |
except they don't share the sort of civilizational history 00:10:00.140 |
And so we don't, you know, they're a different branch. 00:10:03.740 |
I mean, it's similar to when you talk about life, 00:10:08.900 |
I think almost synonymously with intelligence, 00:10:15.300 |
the AIs would be upset to hear you equate those two things. 00:10:19.060 |
- Because they really probably implied biological life. 00:10:24.300 |
- But you're saying, I mean, we'll explore this more, 00:10:30.860 |
And so it's a full spectrum and we just make ourselves 00:10:44.860 |
at some level it's a little depressing to realize 00:10:46.780 |
that there's so little that's special about them. 00:10:52.780 |
And, you know, from Copernicus on, it's like, you know, 00:11:01.140 |
Well, then we were convinced there's something 00:11:02.860 |
very special about the chemistry that we have 00:11:10.700 |
Oh, this intelligence thing we have, that's really special. 00:11:17.780 |
it's kind of liberating for the following reason, 00:11:19.540 |
that you realize that what's special is the details of us, 00:11:29.820 |
we could wonder, oh, is something else gonna come along 00:11:32.340 |
and, you know, also have that abstract attribute? 00:11:42.860 |
of our civilization and so on, nothing else has that. 00:11:45.340 |
That's what, you know, that's our story, so to speak. 00:11:49.020 |
And that's sort of almost by definition special. 00:11:52.740 |
So I view it as not being such a, I mean, I was, 00:11:58.300 |
This is kind of, you know, how can we have self-respect 00:12:04.260 |
Then I realized the details of the things we do, 00:12:10.240 |
- So maybe on a small tangent, you just made me think of it, 00:12:15.800 |
but what do you make of the monoliths in "2001: A Space Odyssey" 00:12:23.020 |
and sparking the kind of particular intelligent computation 00:12:29.460 |
Is there anything interesting to get from that 00:12:35.900 |
- Yeah, I mean, I think what's fun about that is, 00:12:44.060 |
And in the, you know, Earth a million years ago, 00:12:47.100 |
whatever they were portraying with a bunch of apes 00:12:49.420 |
and so on, a thing that has that level of perfection 00:12:54.940 |
It seems very kind of constructed, very engineered. 00:13:01.540 |
What is the, you know, what's the techno signature? 00:13:03.860 |
So to speak, what is it that you see it somewhere 00:13:07.340 |
and you say, my gosh, that had to be engineered. 00:13:15.260 |
And you know, the perfect ones are very perfect. 00:13:22.540 |
it's a sign of sort of, it's a techno signature 00:13:35.100 |
What is the, you know, what is the right signature? 00:13:38.360 |
I mean, like, you know, Gauss, famous mathematician, 00:13:50.060 |
on the grounds that, it was a kind of cool idea, 00:13:54.340 |
it was on the grounds that the Martians would see that 00:13:56.980 |
and realize, gosh, there are mathematicians out there. 00:14:04.740 |
for the cultural achievements of our species. 00:14:15.100 |
that is a sign of intelligence in its creation 00:14:21.060 |
- Yeah, you talk about if we were to send a beacon, 00:14:32.860 |
it's a philosophically doomed issue to, I mean, 00:14:39.620 |
but it's kind of like, we are part of the universe. 00:14:47.140 |
Computation, which is sort of the thing that we are, 00:14:54.180 |
elaborate things we create, is surprisingly ubiquitous. 00:14:59.180 |
In other words, we might've thought that, you know, 00:15:02.220 |
we've built this whole giant engineering stack 00:15:07.020 |
that's led us to be able to do elaborate computations, 00:15:10.580 |
but this idea, the computations are happening 00:15:15.060 |
The only question is whether there's a thread 00:15:22.740 |
And so I think, I think this question of what do you send 00:15:49.820 |
And so, I don't know if you're familiar with "The Voyager," 00:16:00.780 |
- Her brainwaves when she was first falling in love 00:16:07.020 |
- That perhaps you would shut down the power of that 00:16:10.620 |
by saying we might as well send anything else, 00:16:14.060 |
All of it is kind of an interesting, peculiar thing 00:16:18.780 |
Well, I mean, I think it's kind of interesting, too, 00:16:23.100 |
one of the things that's kind of cute about that is, 00:16:33.660 |
And it has a diagram of how to play a phonograph record. 00:16:46.460 |
they're like, "I don't know what the heck this is." 00:16:52.500 |
forget the fact that it has some kind of helical track in it, 00:16:55.420 |
just image the whole thing and see what's there. 00:16:59.820 |
In only 30 years, our technology has kind of advanced 00:17:05.820 |
you know, mechanical track on a phonograph record 00:17:10.820 |
So, you know, that's a cautionary tale, I would say, 00:17:17.940 |
that in detail sort of leads by the nose some, 00:17:22.060 |
you know, the aliens or whatever to do something. 00:17:24.820 |
It's like, no, you know, best you're gonna do, as I say, 00:17:29.980 |
we would not build a helical scan thing with a needle. 00:17:33.980 |
We would just take some high-resolution imaging system 00:17:38.980 |
"Oh, it's a big nuisance that they put in a helix, 00:17:49.500 |
- Do you think, and this will get into trying to figure out 00:17:54.180 |
interpretability of AI, interpretability of computation, 00:18:03.940 |
if you put your alien hat on, figure out this record, 00:18:10.620 |
- Well, it's a question of what one wants to do. 00:18:14.500 |
- Understand what the other party was trying to communicate 00:18:18.020 |
or understand anything about the other party. 00:18:22.940 |
The issue is, it's like when people were trying to do 00:18:25.900 |
natural language understanding for computers, right? 00:18:33.660 |
In other words, you take your piece of English or whatever 00:18:37.060 |
and you say, "Gosh, my computer has understood this." 00:18:45.980 |
built Wolfram Alpha, you know, one of the things was, 00:18:50.060 |
it's, you know, it's doing question answering and so on, 00:18:52.300 |
and it needs to do natural language understanding. 00:19:03.340 |
the number one thing was we had an actual objective 00:19:08.100 |
We were trying to turn the natural language-- 00:19:14.340 |
Now, similarly, when you imagine your alien, you say, 00:19:25.660 |
if it converts to some representation where we can say, 00:19:30.340 |
"that's a representation that we can recognize 00:19:33.380 |
"represents understanding," then all well and good. 00:19:36.740 |
But actually the only ones that I think we can say 00:19:43.420 |
that we humans kind of recognize as being useful to us. 00:19:55.020 |
So are they a threat to us from a military perspective? 00:20:00.660 |
first kind of understanding that I'll be interested in. 00:20:06.100 |
that was sort of one of the key questions is, 00:20:12.740 |
- Right, but even that is, you know, it's a very unclear, 00:20:15.660 |
you know, it's like the, are you gonna hurt us? 00:20:17.620 |
That comes back to a lot of interesting AI ethics questions 00:20:20.340 |
because the, you know, we might make an AI that says, 00:20:31.380 |
because we wanna make sure we don't hurt you, so to speak, 00:20:33.500 |
because that's some, and then, well, something, you know, 00:20:39.300 |
And, you know, that sort of hurts me in some way. 00:20:50.380 |
And as we start thinking about things about AI ethics 00:20:53.380 |
and so on, that's, you know, something one has to address. 00:21:00.380 |
- Yeah, well, right, and I mean, I think ethics, 00:21:15.460 |
You know, you have to have a ground truth, so to speak, 00:21:23.900 |
So that gives one all kinds of additional complexity 00:21:27.940 |
- One convenient thing in terms of turning ethics 00:21:41.640 |
But then when you say survival of the species, right, 00:21:48.200 |
for example, let's say, forget about technology, 00:21:51.360 |
just, you know, hang out and, you know, be happy, 00:21:54.480 |
live our lives, go on to the next generation, 00:22:04.640 |
In terms of, you know, the attempt to do elaborate things 00:22:09.440 |
and the attempt to might be counterproductive 00:22:20.340 |
so okay, let's take that as a sort of thought experiment. 00:22:24.480 |
You know, you can say, well, what are the threats 00:22:29.760 |
You know, the super volcano, the asteroid impact, 00:22:35.040 |
Okay, so now we inventory these possible threats 00:22:37.920 |
and we say, let's make our species as robust as possible 00:22:42.900 |
I think in the end, it's a, it's sort of an unknowable thing 00:22:54.840 |
maximize the long-term, what does long-term mean? 00:23:03.680 |
You know, does long-term mean next thousand years? 00:23:16.920 |
Like if, you know, if your company gets bought 00:23:22.400 |
then they'll, you know, you can run a company just fine 00:23:38.840 |
for a thousand years, there's probably a certain set 00:23:41.040 |
of things that one would do to optimize that, 00:23:45.120 |
that would be a pretty big shame for the future of history, 00:23:53.080 |
it is what you realize is there's a whole sort of raft 00:23:58.280 |
of undecidability, computational irreducibility. 00:24:01.240 |
In other words, it's, I mean, one of the good things 00:24:04.320 |
about sort of the, what our civilization has gone through 00:24:11.560 |
is that there's a certain computational irreducibility 00:24:16.400 |
that you can look from the outside and just say, 00:24:20.360 |
At the end of the day, this is what's gonna happen. 00:24:22.520 |
You actually have to go through the process to find out. 00:24:25.240 |
And I think that's both, that feels better in the sense 00:24:35.640 |
And it's, but it also means that telling the AI, 00:24:40.640 |
go figure out, you know, what will be the best outcome? 00:24:44.160 |
Well, unfortunately, it's gonna come back and say, 00:24:48.320 |
We'd have to run all of those scenarios to see what happens. 00:24:55.280 |
we're thrown immediately into sort of standard issues 00:25:01.140 |
- So yeah, even if you get that the answer to the universe 00:25:16.720 |
- Right, well, I think it's saying to summarize, 00:25:22.440 |
- That's, if that is possible, it tells us, I mean, 00:25:25.800 |
the whole sort of structure of thinking about computation 00:25:28.800 |
and so on, and thinking about how stuff works. 00:25:32.000 |
If it's possible to say, and the answer is such and such, 00:25:42.880 |
because you're saying, if it's knowable, what the answer is, 00:25:51.120 |
But if we can know it, then something that we're dealing 00:25:56.000 |
So then the universe isn't the universe, so to speak. 00:26:13.680 |
It's hard, I mean, it's probably impossible, right? 00:26:19.000 |
And the universe appears, at least to the poets, 00:26:24.000 |
to be sufficiently complex that we won't be able 00:26:38.820 |
It means that we, that our little part of the universe 00:26:48.520 |
it is conceivable, the only way we'd be able to predict 00:26:54.440 |
we are the one place where there is computation 00:26:57.600 |
more special, more sophisticated than anything else 00:27:01.120 |
That's the only way we would have the ability to, 00:27:04.560 |
sort of the almost theological ability, so to speak, 00:27:20.280 |
of looping patterns that reoccur throughout the universe 00:27:27.560 |
but then it still becomes exceptionally difficult 00:27:34.040 |
- The most remarkable thing about the universe 00:27:42.920 |
- Absolutely, it's full of, I mean, physics is successful. 00:27:46.320 |
You know, it's full of laws that tell us a lot of detail 00:27:54.280 |
the 10 to the 90th particles in the universe, 00:28:00.120 |
they all follow basically the same physical laws, 00:28:03.440 |
and that's something, that's a very profound fact 00:28:08.280 |
What conclusion you draw from that is unclear. 00:28:18.320 |
Now, you know, people have different conclusions about it, 00:28:25.200 |
I've just restarted a long-running kind of interest of mine 00:28:41.480 |
We'll come to that, and I just had a lot of conversations 00:28:59.480 |
and what might be underlying the kind of physics 00:29:10.820 |
Operationally, computation is following rules. 00:29:18.720 |
is the process of systematically following rules, 00:29:21.840 |
and it is the thing that happens when you do that. 00:29:35.460 |
It can be something where there's a very simple input, 00:29:41.760 |
and you'd say there's not really much data going into this. 00:29:44.840 |
You could actually pack the initial conditions 00:29:56.000 |
- What I mean by that is something like this. 00:30:25.620 |
This computation is running in a CMOS silicon CPU. 00:30:29.760 |
This computation is running in a fluid system 00:30:36.280 |
that transcends the sort of detailed framework 00:31:11.140 |
it's intricate, complicated relationship with matter, 00:31:27.360 |
particles that carry force and particles that have mass. 00:31:30.680 |
These kinds of ideas, they seem to map to each other, 00:31:35.920 |
Is there a connection between energy and mass 00:31:40.300 |
and computation, or are these completely disjoint ideas? 00:31:45.440 |
The things that I'm trying to do about fundamental physics 00:31:52.700 |
but there is no known connection at this time. 00:32:09.540 |
people were making mechanical calculators of various kinds. 00:32:16.640 |
you go to the adding machine store, basically. 00:32:25.580 |
at least at the level of that kind of computation 00:32:32.200 |
There's the adding machine kind of computation, 00:32:34.220 |
there's the multiplying machine notion of computation, 00:32:42.480 |
particularly in the context of mathematical logic, 00:32:46.200 |
which would represent any reasonable function, right? 00:32:52.920 |
was one of the early ideas, and it didn't work. 00:33:01.560 |
using the primitives of primitive recursion, okay? 00:33:04.560 |
So then along comes 1931 and Gödel's theorem and so on. 00:33:16.960 |
Gödel basically showed how you could compile arithmetic, 00:33:21.120 |
how you could basically compile logical statements, 00:33:24.640 |
like this statement is unprovable, into arithmetic. 00:33:27.800 |
So what he essentially did was to show that arithmetic 00:33:34.200 |
that's capable of representing all kinds of other things. 00:33:41.120 |
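[The arithmetization trick described here can be sketched concretely. The following is a toy illustration only, not Gödel's actual construction: encode a sequence of symbol codes as one integer using prime-power factorization, so statements about symbol sequences become statements about numbers.]

```python
# Toy sketch of Godel's arithmetization idea: a sequence of positive symbol
# codes becomes a single integer (the i-th prime raised to the i-th code),
# and the sequence can be recovered by factoring. Illustrative only.

def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_encode(symbols):
    """Encode a sequence of positive integer codes as one number."""
    number = 1
    for p, s in zip(primes(len(symbols)), symbols):
        number *= p ** s
    return number

def godel_decode(number):
    """Recover the code sequence by reading off prime-power exponents."""
    symbols = []
    p = 2
    while number > 1:
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        symbols.append(exponent)
        p += 1  # advance to the next prime
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return symbols

# The sequence (3, 1, 2) becomes 2^3 * 3^1 * 5^2 = 600, and back again.
assert godel_encode([3, 1, 2]) == 600
assert godel_decode(600) == [3, 1, 2]
```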
Meanwhile, Alonzo Church had come up with lambda calculus. 00:33:44.520 |
And the surprising thing that was established very quickly 00:33:46.920 |
is the Turing machine idea about what computation might be 00:33:51.180 |
is exactly the same as the lambda calculus idea 00:33:55.840 |
And so, and then there started to be other ideas, 00:33:59.560 |
other kinds of representations of computation. 00:34:08.640 |
like those old adding machines and multiplying machines 00:34:15.380 |
and they were just different, but it isn't true. 00:34:30.680 |
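[The equivalence being described can be made tangible with Church numerals, the lambda-calculus representation of numbers. A small sketch: arithmetic built purely out of function application looks nothing like machine arithmetic, yet computes the same functions.]

```python
# Church numerals: numbers and arithmetic represented purely as functions,
# in the style of Alonzo Church's lambda calculus. Superficially unlike
# adding-machine arithmetic, yet it computes exactly the same functions.

zero = lambda f: lambda x: x                              # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))           # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))              # compose n(f), m times

def to_int(n):
    """Convert a Church numeral to an ordinary Python integer."""
    return n(lambda k: k + 1)(0)

def from_int(k):
    """Convert an ordinary integer to a Church numeral."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

# Lambda-calculus arithmetic agrees with ordinary arithmetic.
assert to_int(add(from_int(3))(from_int(4))) == 7
assert to_int(mul(from_int(3))(from_int(4))) == 12
```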
"Oh, Turing machines are kind of what computation is." 00:34:36.440 |
"No, no, no, it's just not how the universe works." 00:36:43.600 |
- Yeah, the universe is not a Turing machine. 00:34:49.120 |
of the things that we make in microprocessors 00:34:54.500 |
So probably, actually through my work in the 1980s 00:34:58.080 |
about sort of the relationship between computation 00:35:04.120 |
it became a little less clear that there would be, 00:35:13.160 |
and what happens in things like Turing machines. 00:35:14.920 |
And I think probably by now, people would mostly think, 00:35:19.920 |
and by the way, brains were another kind of element 00:35:23.320 |
Kurt Gödel didn't think that his notion of computation 00:35:26.380 |
or what amounted to his notion of computation 00:35:39.840 |
But so, you know, I would say by probably sometime 00:35:48.620 |
this notion of computation that could be captured 00:35:50.940 |
by things like Turing machines was reasonably robust. 00:36:00.680 |
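[The Turing machine notion of computation is small enough to sketch in full: a tape, a head, a state, and a transition table. A minimal simulator, with an example machine (my own toy choice, not one from the conversation) that increments a binary number:]

```python
# A minimal Turing machine simulator: state + tape symbol -> new state,
# new symbol, head move. The example machine increments a binary number by
# walking right to the end and then propagating the carry leftward.

def run_turing_machine(rules, tape, state, halt_state, max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move in {-1,+1})."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as '_'
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, '_')
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

increment_rules = {
    ('right', '0'): ('right', '0', +1),   # scan right over the digits
    ('right', '1'): ('right', '1', +1),
    ('right', '_'): ('carry', '_', -1),   # past the end: start the carry
    ('carry', '1'): ('carry', '0', -1),   # 1 + carry = 0, carry continues
    ('carry', '0'): ('halt',  '1', -1),   # 0 + carry = 1, done
    ('carry', '_'): ('halt',  '1', -1),   # carried past the left end
}

assert run_turing_machine(increment_rules, '1011', 'right', 'halt') == '1100'
assert run_turing_machine(increment_rules, '111', 'right', 'halt') == '1000'
```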
that's capable of being programmed to do anything 00:36:05.600 |
And, you know, this idea of universal computation, 00:36:10.560 |
this idea that you can have one piece of hardware 00:36:13.120 |
and program it with different pieces of software. 00:36:20.040 |
that's the idea that launched the computer revolution, 00:36:25.160 |
But the thing that's still kind of holding out 00:36:35.360 |
Seems like you want to make a universal computer, 00:36:37.760 |
you have to kind of have a microprocessor with, you know, 00:36:54.400 |
looking at these things called cellular automata, 00:36:57.100 |
which are really simple computational systems, 00:37:03.520 |
that even when their rules were very, very simple, 00:37:06.160 |
they were doing things that were as sophisticated 00:37:08.040 |
as they did when their rules were much more complicated. 00:37:12.280 |
this idea, oh, to get sophisticated computation, 00:37:15.240 |
you have to build something with very sophisticated rules. 00:37:23.600 |
that sophisticated computation was completely ubiquitous, 00:37:26.640 |
even in systems with incredibly simple rules. 00:37:31.320 |
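[The discovery described here is easy to reproduce: an elementary cellular automaton takes a few lines of code, and rule 30, one of the very simplest rules imaginable, already generates an intricate, seemingly random pattern from a single black cell. A minimal sketch:]

```python
# Elementary cellular automaton: each cell updates from itself and its two
# neighbors, according to an 8-entry rule table encoded in the bits of a
# number 0-255 (Wolfram's rule numbering). Rule 30 produces apparent
# randomness from a single black cell.

def ca_step(cells, rule):
    """One synchronous update of a row of 0/1 cells (fixed 0 boundaries)."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def evolve(rule, steps, width=63):
    """Run `steps` updates from a single black cell in the middle."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        history.append(row)
    return history

# Print the first 16 rows of rule 30 as a picture.
for row in evolve(30, 15):
    print(''.join('#' if c else '.' for c in row))
```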
that I call the principle of computational equivalence, 00:37:55.840 |
from the very, very, very simplest things you can imagine, 00:37:58.800 |
then quite quickly, you hit this kind of threshold 00:38:12.360 |
So this, you've opened with "A New Kind of Science." 00:38:17.880 |
that such simple things can create such complexity. 00:38:22.240 |
And yes, there's an equivalence, but it's not a fact. 00:38:25.040 |
It just appears to, I mean, as much as a fact 00:38:37.640 |
But let me ask sort of, you just brought up previously 00:38:41.960 |
kind of like the communities of computer scientists 00:39:01.040 |
equivalently complex Turing machine systems, right? 00:39:18.880 |
Do you see those things basically blending together? 00:39:21.740 |
Or is there still a mystery about how disjoint they are? 00:39:25.200 |
- Well, my guess is that they all blend together, 00:39:34.880 |
of computational equivalence is sort of a science fact. 00:39:37.200 |
And I was using air quotes for the science fact, 00:39:45.520 |
I mean, just to talk about that for a second, 00:39:52.340 |
it has a complicated epistemological character, 00:39:55.480 |
similar to things like the second law of thermodynamics, 00:40:00.680 |
The, you know, what is the second law of thermodynamics? 00:40:05.700 |
Is it a thing that is true of the physical world? 00:40:08.160 |
Is it something which is mathematically provable? 00:40:15.680 |
Is it in some sense, a definition of heat perhaps? 00:40:23.320 |
with the principle of computational equivalence. 00:40:28.240 |
is at the heart of the definition of computation, 00:40:37.640 |
and doesn't depend on the details of each individual system. 00:40:41.000 |
And that's why we can meaningfully talk about 00:40:47.000 |
oh, there's computation in Turing machine number 3785, 00:40:52.840 |
That's why there is a robust notion like that. 00:40:56.720 |
can we prove the principle of computational equivalence? 00:41:03.700 |
actually we've got some nice results along those lines 00:41:06.500 |
that say, throw me a random system with very simple rules. 00:41:13.320 |
we now know that even the very simplest rules 00:41:16.120 |
we can imagine of a certain type are universal 00:41:22.760 |
from the principle of computational equivalence. 00:41:24.200 |
So that's a nice piece of sort of mathematical evidence 00:41:27.000 |
for the principle of computational equivalence. 00:41:30.600 |
the simple rules creating sort of these complex behaviors, 00:41:44.400 |
That you've mentioned that you cross a threshold. 00:41:55.880 |
do there exist initial conditions for the system 00:41:59.040 |
that can be set up to essentially represent programs 00:42:03.800 |
to compute pi, to do whatever you want, right? 00:42:11.160 |
the simplest candidates that could conceivably 00:42:20.560 |
But this principle of computational equivalence, 00:42:28.320 |
It might be true for all these things we come up with, 00:42:30.400 |
the Turing machines, the cellular automata, whatever else. 00:42:47.380 |
because there's a sort of scientific induction issue. 00:42:51.120 |
You can say, well, it's true for all these brains, 00:42:58.560 |
the only way that that cannot be what happens is 00:43:04.240 |
and actually get a fundamental theory for physics 00:43:17.960 |
you know, right now with physics, we're like, 00:43:29.000 |
maybe these rules don't apply and something else applies. 00:43:35.200 |
But if we can get to the point where we actually have, 00:43:42.640 |
run this program and you will get our universe. 00:44:00.920 |
whoops, you know, you were right about all these things 00:44:04.400 |
but you're wrong about the physical universe. 00:44:08.080 |
about what's happening in the physical universe. 00:44:19.520 |
to kind of study the fundamental theory of physics. 00:44:28.800 |
to see that the universe really is computational 00:44:32.440 |
But I don't know because we're betting against, 00:44:34.800 |
we're betting against the universe, so to speak. 00:44:39.920 |
you know, when I spend a lot of my life building technology 00:44:45.120 |
And it's, there may be, it may have unexpected behavior, 00:44:50.320 |
For the universe, I'm not in that position, so to speak. 00:44:58.440 |
the fundamental laws of physics might emerge from? 00:45:02.240 |
So just to clarify, so you've done a lot of fascinating work 00:45:06.920 |
with kind of discrete kinds of computation that, 00:45:17.160 |
It's such a nice way to demonstrate that simple rules 00:45:42.680 |
because as soon as you have universal computation, 00:45:45.640 |
you can in principle simulate anything with anything. 00:45:52.640 |
were you to try to find our physical universe 00:45:57.440 |
in the computational universe of all possible programs, 00:46:00.280 |
would the ones that correspond to our universe 00:46:03.320 |
be small and simple enough that we might find them 00:46:10.520 |
We have got to have the right language in effect 00:46:12.840 |
for describing computation for that to be feasible. 00:46:15.880 |
So the thing that I've been interested in for a long time 00:46:17.920 |
is what are the most structureless structures 00:46:22.760 |
So in other words, if you say a cellular automaton, 00:46:25.560 |
it has a bunch of cells that are arrayed on a grid 00:46:28.420 |
and it's very, you know, and every cell is updated 00:46:33.480 |
when there's a click of a clock, so to speak, 00:46:38.520 |
and every cell gets updated at the same time. 00:46:41.040 |
That's a very specific, very rigid kind of thing. 00:46:55.120 |
that what we see, what emerges for us as physical space, 00:47:01.400 |
that is sort of arbitrarily unstructured underneath. 00:47:07.400 |
in kind of what are the most structureless structures 00:47:11.680 |
And actually what I had thought about for ages 00:47:14.880 |
is using graphs, networks, where essentially, 00:47:25.360 |
Back in the early days of quantum mechanics, for example, 00:47:27.560 |
people said, oh, for sure, space is gonna be discrete 00:47:30.960 |
'cause all these other things we're finding are discrete, 00:47:34.880 |
And so space and physics today is always treated 00:47:37.560 |
as this continuous thing, just like Euclid imagined it. 00:47:47.720 |
In other words, there are points that are arbitrarily small 00:47:51.360 |
and there's a continuum of possible positions of points. 00:47:56.800 |
And so, for example, if we look at, I don't know, 00:48:02.200 |
We can pour it, we can do all kinds of things continuously. 00:48:05.280 |
But actually we know, 'cause we know the physics of it, 00:48:07.660 |
that it consists of a bunch of discrete molecules 00:48:14.440 |
And so the possibility exists that that's true of space too. 00:48:21.680 |
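[One way to see how space-like behavior could come out of a pure network (a toy illustration of the general idea, not Wolfram's actual model): count how many nodes lie within graph distance r of a point. If that count grows like r^d, the network behaves d-dimensionally at that scale.]

```python
# Toy illustration of dimension emerging from a network: in a graph with no
# built-in coordinates, measure the "volume" of a ball of radius r by BFS.
# For a 2D grid graph the count grows quadratically in r.

from collections import deque

def grid_graph(n):
    """n x n grid of nodes, each linked to its lattice neighbors."""
    edges = {}
    for x in range(n):
        for y in range(n):
            nbrs = []
            if x > 0:     nbrs.append((x - 1, y))
            if x < n - 1: nbrs.append((x + 1, y))
            if y > 0:     nbrs.append((x, y - 1))
            if y < n - 1: nbrs.append((x, y + 1))
            edges[(x, y)] = nbrs
    return edges

def ball_size(edges, start, r):
    """Number of nodes within graph distance r of start (breadth-first search)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == r:
            continue
        for nbr in edges[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return len(seen)

# A radius-r ball in a 2D grid is a diamond with 2r^2 + 2r + 1 nodes,
# so purely graph-theoretic counting reveals the dimension d = 2.
g = grid_graph(41)
assert [ball_size(g, (20, 20), r) for r in (1, 2, 3)] == [5, 13, 25]
```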
but I've been interested in whether one can imagine 00:48:25.560 |
that underneath space and also underneath time 00:48:36.920 |
somehow fundamentally equivalent to a Turing machine, 00:48:44.640 |
essentially deals with integers, whole numbers, 00:48:52.560 |
- It can also store whatever the heck it did. 00:49:02.600 |
or sort of idealized physics or idealized mathematics, 00:49:16.400 |
- Are you comfortable with infinity in this context? 00:49:19.200 |
Are you comfortable in the context of computation? 00:49:24.440 |
- I think that the role of infinity is complicated. 00:49:26.760 |
Infinity is useful in conceptualizing things. 00:49:35.760 |
- But do you think infinity is part of the thing 00:49:42.360 |
I think there are many questions that you ask about, 00:49:47.500 |
Like when you say, is faster than light travel possible? 00:49:55.240 |
can you make something even arbitrarily large, 00:49:59.780 |
that will make faster than light travel possible? 00:50:03.240 |
Then you're thrown into dealing with infinity 00:50:11.640 |
and how one can make a computational infrastructure, 00:50:22.720 |
That you really have to be dealing with precise real numbers, 00:50:25.640 |
you're dealing with partial differential equations, 00:50:37.160 |
that there's sort of a continuum for everything 00:50:40.200 |
and then the things I'm thinking about are wrong. 00:50:45.680 |
if you're trying to sort of do things about nature, 00:50:51.120 |
It's not, for me personally, it's kind of a strange thing, 00:50:55.080 |
'cause I've spent a lot of my life building technology 00:50:57.540 |
where you can do something that nobody cares about, 00:51:00.620 |
but you can't be sort of wrong in that sense, 00:51:08.020 |
the sort of underlying computational infrastructure 00:51:12.460 |
so it's sort of inevitable it's gonna be fairly abstract, 00:51:25.720 |
you don't get to, if the model for the universe is simple, 00:51:38.960 |
All of those things have to-- - Those all have to be 00:51:46.880 |
what the sort of underlying structureless structure 00:51:46.880 |
Do you think human beings have the cognitive capacity 00:52:05.280 |
I think that, I mean, I'm right in the middle 00:52:08.540 |
- So I'm telling you that I-- - Do you think you'll 00:52:11.280 |
I mean, this human has a hard time understanding 00:52:22.340 |
21st century mathematics, starting from counting, 00:52:27.340 |
back in whenever counting was invented 50,000 years ago, 00:52:36.660 |
that allow us to get to higher levels of understanding. 00:52:39.380 |
And we see the same thing happening in language. 00:52:41.580 |
You know, when we invent a word for something, 00:52:52.720 |
which works this way, that way, the other way. 00:53:00.500 |
you start to be able to build on top of that. 00:53:05.820 |
I mean, science is about building these kind of waypoints 00:53:08.860 |
where we find this sort of cognitive mechanism 00:53:12.280 |
for understanding something, then we can build on top of it. 00:53:16.820 |
differential equations, we can build on top of that. 00:53:24.440 |
that we have to go all the way sort of from the sand 00:53:27.700 |
to the computer and there's no waypoints in between, 00:53:40.620 |
eventually from sand we'll get to the computer, right? 00:53:52.340 |
And that's a different question because for that, 00:54:02.460 |
And that's something that I think I am somewhat hopeful 00:54:08.140 |
Although, you know, as of literally today, if you ask me, 00:54:12.160 |
I'm confronted with things that I don't understand very well 00:54:16.540 |
- So this is a small pattern in a computation 00:54:33.660 |
so we didn't talk much about computational irreducibility, 00:54:42.380 |
which is question is you're doing a computation, 00:54:45.620 |
you can figure out what happens in the computation 00:54:47.860 |
just by running every step in the computation 00:54:51.500 |
Or you can say, let me jump ahead and figure out, 00:54:55.620 |
you know, have something smarter that figures out 00:54:57.700 |
what's gonna happen before it actually happens. 00:55:02.420 |
has been about that act of computational reducibility. 00:55:13.500 |
We just jump ahead 'cause we solved these equations. 00:55:16.300 |
Okay, so one of the things that is a consequence 00:55:18.540 |
of the principle of computational equivalence 00:55:22.020 |
Many, many systems will be computationally irreducible 00:55:25.300 |
in the sense that the only way to find out what they do 00:55:27.280 |
is just follow each step and see what happens. 00:55:30.400 |
Well, if you're saying, well, we, with our brains, 00:55:40.060 |
We can just use the power of our brains to jump ahead. 00:55:44.020 |
But if the principle of computational equivalence is right, 00:55:53.500 |
There's a little cellular automaton doing its computation. 00:55:56.280 |
And the principle of computational equivalence says, 00:55:58.660 |
these two computations are fundamentally equivalent. 00:56:03.360 |
we're a lot smarter than the cellular automaton 00:56:05.140 |
and jump ahead 'cause we're just doing computation 00:56:16.780 |
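[Editor's sketch, not from the conversation.] The point about computational irreducibility can be made concrete with a few lines of Python: for an elementary cellular automaton like the ones discussed here, the only general way anyone knows to learn the state after n steps is to compute all n of them.

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton.

    Each new cell depends on its left neighbor, itself, and its right
    neighbor; the 8 possible neighborhoods index into the bits of the
    rule number (Wolfram's standard numbering scheme).
    """
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def state_after(initial, steps, rule=30):
    """Computational irreducibility in miniature: there is no known
    shortcut formula for the state after `steps` updates of rule 30,
    so we just follow each step and see what happens."""
    cells = list(initial)
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# A single black cell in a row of 31 white cells.
row = [0] * 31
row[15] = 1
print(state_after(row, 2)[13:18])  # the known second rule-30 row: [1, 1, 0, 0, 1]
```

For simple rules (rule 0, rule 254, ...) you can jump ahead with a formula; for rule 30 no such shortcut has ever been found, which is the asymmetry the conversation is pointing at.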
I think that's both depressing and humbling and so on, 00:56:21.700 |
that we're all, we and the cellular automaton are the same. 00:56:39.180 |
But the problem is, to know whether you're right, 00:56:43.300 |
you have to have some computational reducibility 00:56:48.060 |
If the only way to know whether we get the universe 00:56:50.100 |
is just to run the universe, we don't get to do that 00:56:52.940 |
'cause it just ran for 14.6 billion years or whatever. 00:57:17.840 |
The question of whether they land in the right place 00:57:32.660 |
that relies on these pockets of reducibility. 00:57:39.300 |
But I think this question about how observers operate, 00:57:47.180 |
has been that every time we get more realistic 00:57:49.780 |
about observers, we learn a bit more about science. 00:58:00.500 |
They have to just wait for the light signal to arrive 00:58:12.140 |
They can only see the kind of large scale features 00:58:14.460 |
and that's why the second law of thermodynamics, 00:58:21.500 |
you wouldn't conclude something about thermodynamics. 00:58:28.780 |
You wouldn't be able to see this aggregate fact. 00:58:38.940 |
about the computation and other aspects of observers 00:58:46.620 |
In fact, my little team and I have a little theory 00:58:50.420 |
right now about how quantum mechanics may work, 00:58:56.380 |
about how the sort of thread of human consciousness 00:59:03.580 |
But there's several steps to explain what that's about. 00:59:06.540 |
- What do you make of the mess of the observer 00:59:11.740 |
Sort of the textbook definition with quantum mechanics 00:59:25.940 |
What do you make sense of that kind of observing? 00:59:29.460 |
- Well, I think actually the ideas we've recently had 00:59:49.780 |
that I started talking about 30 years ago now, 00:59:52.420 |
they say, "Oh no, that can't possibly be right. 00:59:56.700 |
Right, you say, "Okay, tell me what is the essence 01:00:02.260 |
"to know that I've got quantum mechanics, so to speak?" 01:00:09.740 |
with quantum computing and there are all these companies 01:00:18.020 |
And they're like, "Well, maybe you shouldn't do that yet. 01:00:22.780 |
And one of the questions that I've been curious about is, 01:00:25.180 |
"If I have five minutes with a quantum computer, 01:00:27.700 |
"how can I tell if it's really a quantum computer 01:00:29.820 |
"or whether it's a simulator at the other end?" 01:00:33.900 |
It turns out there isn't, it's like a lot of these questions 01:00:37.140 |
about sort of what is intelligence, what's life. 01:00:39.780 |
- That's a Turing test for quantum computing. 01:00:43.100 |
It's like, are you really a quantum computer? 01:00:48.420 |
Is it just a simulation or is it really a quantum computer? 01:00:59.220 |
of quantum mechanics and the completely separate thing 01:01:06.220 |
definite things happen, whereas quantum mechanics 01:01:10.600 |
Quantum mechanics is all about the amplitudes 01:01:21.420 |
- But to linger on the point, you've kind of mentioned 01:01:28.700 |
and this idea that it could perhaps have something 01:01:36.680 |
that there's a graph structure of nodes and edges 01:01:43.940 |
what is, in a sense, the most structureless structure 01:01:58.540 |
- By the way, the question itself is a beautiful one 01:02:11.020 |
Essentially, what is interesting about the sort of model 01:02:16.620 |
I have now is it's a little bit like what happened 01:02:23.540 |
maybe the model is this, I discover it's equivalent. 01:02:27.500 |
And that's quite encouraging because it's like, 01:02:30.580 |
I could say, well, I'm gonna look at trivalent graphs 01:02:35.700 |
Or I could look at this special kind of graph. 01:02:37.780 |
Or I could look at this kind of algebraic structure. 01:02:41.060 |
And turns out that the things I'm now looking at, 01:02:44.380 |
everything that I've imagined that is a plausible type 01:02:47.700 |
of structureless structure is equivalent to this. 01:02:54.860 |
well, so you might have some collection of tuples, 01:03:07.380 |
So you might have one, three, five, two, three, four, 01:03:15.500 |
triples of numbers, let's say, quadruples of numbers, 01:03:19.620 |
And you have all these sort of floating little tuples. 01:03:26.060 |
And that sort of floating collection of tuples, 01:03:34.940 |
The only thing that relates them is when a symbol 01:03:41.860 |
So if you have two tuples and they contain the same symbol, 01:03:56.900 |
- I told you it's abstract, but this is the-- 01:04:03.820 |
- Right, but so think about it in terms of a graph. 01:04:13.540 |
A graph is a set of pairs that say this node has an edge 01:04:19.780 |
So that's the, and a graph is just a collection 01:04:40.340 |
so that might represent the state of the universe. 01:04:46.540 |
And so the answer is that what I'm looking at 01:04:49.180 |
is transformation rules on these hypergraphs. 01:05:02.340 |
turn it into a piece of a hypergraph that looks like this. 01:05:05.240 |
So on a graph, it might be, when you see the subgraph, 01:05:08.140 |
when you see this thing with a bunch of edges hanging out 01:05:37.420 |
I suspect everything's discrete, even in time. 01:05:42.000 |
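[Editor's sketch.] The structure being described can be made concrete in a few lines of Python. The rewrite rule here is invented purely for illustration, not one of the actual Wolfram Physics Project rules: the state is nothing but a collection of tuples of node ids, and one update finds a piece matching a pattern and replaces it, minting a fresh node id.

```python
from itertools import count

fresh = count(100)  # a supply of brand-new node ids

def rewrite_step(state):
    """One update of a toy hypergraph rewriting system.

    The state is just a list of tuples; "nodes" exist only through
    shared ids appearing in more than one tuple. The made-up rule:
    replace the first pair (x, y) with (x, y) and (y, z),
    where z is a node id that has never appeared before.
    """
    for i, rel in enumerate(state):
        if len(rel) == 2:              # pattern: any 2-tuple (x, y)
            x, y = rel
            z = next(fresh)
            return state[:i] + [(x, y), (y, z)] + state[i + 1:]
    return state                       # no match: nothing to rewrite

state = [(1, 2)]
for _ in range(4):
    state = rewrite_step(state)
print(state)  # five relations now, knitted together only by shared ids
```

Nothing in the representation says "space" or "node" explicitly; any geometry has to emerge from which tuples share ids, which is the sense in which this is a structureless structure.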
- Okay, so the question is, where do you do the updates? 01:05:48.040 |
And you do them, the order in which the updates is done 01:05:54.820 |
so there may be many possible orderings for these updates. 01:06:11.660 |
- So in fact, all that you can be sensitive to 01:06:17.180 |
of how an event over there affects an event that's in you. 01:06:30.740 |
so the end result of that is all you're sensitive to 01:06:42.940 |
I'm simply saying, I'm simply making the argument 01:06:45.300 |
that what happens, the microscopic order of these rewrites 01:06:57.300 |
Because the only thing the observer can be affected by 01:07:10.640 |
You don't really have to look at this microscopic rewriting 01:07:14.440 |
So these rewrites are happening wherever they, 01:07:26.620 |
like what gets updated, the sequence of things is undefined. 01:07:32.700 |
- Is that's what you mean by the causal network, 01:07:35.300 |
- No, the causal network is given that an update has happened 01:07:39.920 |
Then the question is, is that event causally related to? 01:07:43.620 |
Does that event, if that event didn't happen, 01:07:49.700 |
- And so you build up this network of what affects what. 01:07:53.900 |
And so what that does, so when you build up that network, 01:07:57.500 |
that's kind of the observable aspect of the universe 01:08:02.060 |
- And so then you can ask questions about, you know, 01:08:09.780 |
Okay, so here's where it starts getting kind of interesting. 01:08:12.640 |
So for certain kinds of microscopic rewriting rules, 01:08:20.320 |
And so this is, okay, mathematical logic moment, 01:08:24.100 |
this is equivalent to the Church-Rosser property 01:08:28.740 |
And it's the same reason that if you're simplifying 01:08:33.140 |
you can say, oh, let me expand those terms out, 01:08:40.680 |
And that's, it's the same fundamental phenomenon 01:08:43.760 |
that causes for certain kinds of microscopic rewrite rules 01:08:47.580 |
that causes the causal network to be independent 01:08:58.820 |
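[Editor's sketch.] A familiar small-scale analogue of this order-independence (again just an illustration, not Wolfram's actual rules): treat sorting as a rewriting system whose single rule is "swap any adjacent out-of-order pair." That rule is confluent in the Church-Rosser sense, so no matter which applicable rewrite fires at each step, every run terminates in the same final state.

```python
import random

def rewrite_until_done(items, rng):
    """Apply the swap rule at randomly chosen sites until none remain.

    Each swap removes exactly one inversion, so the process always
    terminates; confluence means it always terminates at the same
    place -- the sorted list -- regardless of the rewrite order.
    """
    items = list(items)
    while True:
        sites = [i for i in range(len(items) - 1) if items[i] > items[i + 1]]
        if not sites:
            return items
        i = rng.choice(sites)          # nondeterministic update order
        items[i], items[i + 1] = items[i + 1], items[i]

start = [3, 1, 4, 1, 5, 9, 2, 6]
results = {tuple(rewrite_until_done(start, random.Random(seed)))
           for seed in range(20)}
print(results)  # a single element: every rewrite order lands in the same state
```

The intermediate states differ run to run; only the end result, the analogue of the causal network here, is invariant.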
I mean, the reason it's important is that that property, 01:09:03.820 |
special relativity says you can look at these sort of, 01:09:11.880 |
You can have different, you can be looking at your notion 01:09:14.600 |
of what space and what's time can be different, 01:09:17.580 |
depending on whether you're traveling at a certain speed, 01:09:19.520 |
depending on whether you're doing this, that, and the other. 01:09:21.900 |
But nevertheless, the laws of physics are the same. 01:09:24.020 |
That's what the principle of special relativity says, 01:09:36.900 |
is essentially equivalent to a change of reference frame, 01:09:39.140 |
or at least there's a sub part of how that works 01:09:41.740 |
that's equivalent to change of reference frame. 01:09:43.660 |
So, somewhat surprisingly, and sort of for the first time 01:09:50.100 |
microscopic theory to imply special relativity, 01:09:56.980 |
this is a, it's something where this other property, 01:10:00.340 |
causal invariance, which is also the property 01:10:03.640 |
that implies that there's a single thread of time 01:10:13.340 |
of an observer thinking that definite stuff happens. 01:10:16.760 |
Otherwise, you've got all these possible rewriting orders, 01:10:22.720 |
there's a notion of a definite thread of time. 01:10:27.920 |
even space, would be emergent from the system. 01:10:31.840 |
- So it's not a fundamental part of the system. 01:10:42.080 |
But the thing is that it's just like imagining, 01:10:46.920 |
and imagine you have something like a honeycomb graph, 01:10:54.920 |
it's just a bunch of nodes connected to other nodes. 01:10:58.520 |
you say that looks like a honeycomb, you know, lattice. 01:11:08.920 |
if you just connect all the nodes one to another, 01:11:11.620 |
and kind of a sort of linked list type structure, 01:11:26.760 |
And it's the same thing with these hypergraphs. 01:11:34.440 |
So we don't know, you know, this is one of these things, 01:11:36.920 |
we're kind of betting against nature, so to speak. 01:11:41.320 |
And so there are many other properties of this kind of system 01:11:45.000 |
that have a very beautiful, actually, and very suggestive. 01:11:49.160 |
And it will be very elegant if this turns out to be right, 01:11:56.440 |
Everything about space, everything about time, 01:11:59.200 |
everything about matter, it's all just emergent 01:12:02.420 |
from the properties of this extremely low-level system. 01:12:21.680 |
sort of hypergraph rewriting rule gives the universe. 01:12:25.360 |
Just run that hypergraph rewriting rule for enough times, 01:12:45.760 |
Let's say, turns out the minimal version of this, 01:12:53.600 |
is actually a single line of Wolfram Language code. 01:12:56.560 |
So that's, which I wasn't sure was gonna happen that way, 01:13:11.880 |
the specification of the rules might be slightly longer. 01:13:13.760 |
- How does that help you accept marveling in the beauty 01:13:18.080 |
and the elegance of the simplicity that creates the universe? 01:13:27.060 |
But so, the thing that is really strange to me, 01:13:29.400 |
and I haven't wrapped my brain around this yet, 01:13:35.580 |
one keeps on realizing that we're not special, 01:13:43.500 |
and yet, if we produce a rule for the universe, 01:13:49.460 |
and we can write it down in a couple of lines or something, 01:13:57.780 |
when many of the available universes, so to speak, 01:14:02.140 |
Might be, you know, a quintillion characters long. 01:14:05.460 |
Why did we get one of the ones that's simple? 01:14:07.660 |
And so, I haven't wrapped my brain around that issue yet. 01:14:16.220 |
is it possible that there is something outside of this, 01:14:33.300 |
we don't get to say much about what's outside our universe, 01:14:39.700 |
Now, can we make a sort of almost theological conclusion, 01:14:43.740 |
from being able to know how our particular universe works? 01:14:52.180 |
could we, and it relates again to this question 01:14:57.220 |
you know, we've got the rule for the universe. 01:15:13.820 |
And, you know, it's a periodic series of pulses, let's say. 01:15:22.940 |
does not necessarily mean that somebody created it, 01:15:27.520 |
or that we can even comprehend what would create it. 01:15:29.340 |
- Yeah, I mean, I think it's the ultimate version 01:15:38.620 |
is was our universe a piece of technology, so to speak? 01:15:43.700 |
Because, but I mean, it'll be, it's, I mean, you know, 01:15:54.300 |
It's going to be, you know, made by so-and-so. 01:15:57.060 |
But there's no way we could understand that, so to speak. 01:16:06.500 |
if we find a rule for the universe, we're not, 01:16:19.220 |
It's just saying that represents what our universe does, 01:16:23.780 |
laws of classical mechanics, differential equations, 01:16:26.380 |
whatever they are, represent what mechanical systems do. 01:16:37.100 |
just representing the behavior of those systems. 01:16:42.700 |
on the fascinating, perhaps slightly sci-fi question? 01:16:45.820 |
What's the gap between understanding the fundamental rules 01:16:49.740 |
that create a universe and engineering a system, 01:16:58.140 |
you've talked about, you know, nanoengineering, 01:17:13.940 |
I think the substrate on which the universe is operating 01:17:21.700 |
is that same substrate that the universe is operating in. 01:17:24.980 |
So if the universe is a bunch of hypergraphs being rewritten, 01:17:33.000 |
We don't get to, and if you ask the question, 01:17:51.640 |
- But so I've seen some beautiful cellular automata 01:17:53.960 |
that basically create copies of itself within itself, right? 01:17:57.120 |
So that's the question, whether it's possible to create, 01:18:00.600 |
like whether you need to understand the substrate 01:18:07.600 |
one of my slightly sci-fi thoughts about the future, 01:18:18.600 |
You get, because I've done this poll, informally at least, 01:18:22.360 |
it's curious, actually, you get a decent fraction 01:18:24.920 |
of people saying, oh yeah, that would be pretty interesting. 01:18:27.880 |
- I think that's becoming, surprisingly enough, more, 01:18:31.400 |
I mean, a lot of people are interested in physics 01:18:36.120 |
in a way that, like without understanding it, 01:18:42.160 |
a very small number of them, struggle to understand 01:18:46.200 |
- Right, I mean, I think that's somewhat true, 01:18:48.440 |
and in fact, in this project that I'm launching into 01:18:51.720 |
to try and find the fundamental theory of physics, 01:19:07.120 |
I mean, I figure one feature of this project is, 01:19:11.560 |
unlike technology projects that basically are what they are, 01:19:16.960 |
because it might be the case that it generates 01:19:21.460 |
with the physical universe that we happen to live in. 01:19:23.920 |
Well, okay, so we're talking about kind of the quest 01:19:33.000 |
it's kind of hard to find the fundamental theory of physics. 01:19:35.120 |
People weren't sure that that would be the case. 01:19:37.480 |
Back in the early days of applying mathematics to science, 01:19:43.920 |
oh, in 100 years we'll know everything there is to know 01:19:51.840 |
'cause every time we got to sort of a greater level 01:20:14.160 |
that's a kooky business, we'll never be able to do that. 01:20:23.520 |
and it's all good and we can figure out a lot of stuff. 01:20:35.840 |
it's actually kind of crazy thinking back on it, 01:20:38.600 |
because it's kind of like there was this long period 01:20:41.560 |
in civilization where people thought the ancients 01:20:43.400 |
had it all figured out and will never figure out 01:20:46.280 |
And to some extent, that's the way I felt about physics 01:20:49.520 |
when I was in the middle of doing it, so to speak. 01:20:56.640 |
and yes, there's probably something underneath this, 01:21:09.960 |
and I discovered that they do all kinds of things 01:21:13.400 |
that were completely at odds with the intuition 01:21:17.000 |
And so after that, after you see this tiny little program 01:21:20.800 |
that does all this amazingly complicated stuff, 01:21:23.360 |
then you start feeling a bit more ambitious about physics 01:21:26.320 |
and saying, maybe we could do this for physics too. 01:21:32.840 |
in this kind of idea of could we actually find 01:21:39.520 |
like quantum field theory and general relativity and so on. 01:21:41.280 |
And people perhaps don't realize as clearly as they might 01:21:48.040 |
quantum field theory, sort of the theory of small stuff 01:21:52.600 |
and general relativity, theory of gravitation 01:21:54.800 |
and large stuff, those are the two basic theories 01:22:10.760 |
But what's interesting is the foundations haven't changed 01:22:16.400 |
even though the foundations had changed several times 01:22:18.800 |
before that in the 200 years earlier than that. 01:22:22.500 |
And I think the kinds of things that I'm thinking about, 01:22:31.520 |
It's a different set of foundations and might be wrong, 01:22:52.620 |
it'd be a shame if we just didn't think to do it. 01:22:55.440 |
If people just said, oh, you'll never figure that stuff out. 01:23:09.580 |
It may be that it's kind of the wrong century 01:23:18.520 |
I think about things that I've tried to do in technology 01:23:21.580 |
where people thought about doing them a lot earlier. 01:23:38.640 |
And basically, we finally managed to do this, 01:23:43.600 |
And that's kind of the, in terms of life planning, 01:23:47.120 |
it's kind of like avoid things that can't be done 01:24:13.840 |
And I mean, it's already, even the things I've already done, 01:24:18.840 |
they're very, you know, it's very elegant actually, 01:24:36.960 |
- In your intuition, in terms of design universe, 01:24:47.080 |
- That's a little bit of a complicated question, 01:24:48.840 |
because when you're dealing with these things 01:24:53.800 |
- Even randomness is an emergent phenomenon perhaps? 01:25:01.360 |
pseudo-randomness and randomness are hard to distinguish. 01:25:18.160 |
without kind of yakking about very technical things. 01:25:27.720 |
because it slices between determinism and randomness 01:25:32.260 |
in a weird way that hasn't been sliced before, so to speak. 01:25:35.180 |
So like many of these questions that come up in science, 01:25:40.680 |
Turns out the real answer is it's neither of those things. 01:25:58.800 |
I mean, there's this question about a field like physics 01:26:02.200 |
and sort of the quest for fundamental theory and so on, 01:26:07.900 |
and there's the sort of the social aspect of what happens, 01:26:16.520 |
we're at, I don't know what it is, fourth generation, 01:26:19.880 |
I don't know what generation it is of physicists. 01:26:24.600 |
and for me, the foundations were like the pyramids, 01:26:39.820 |
where you're still dealing with the first generation 01:26:51.240 |
typically the pattern is some methodological advance occurs 01:26:55.440 |
and then there's a period of five years, 10 years, 01:26:59.280 |
where there's lots of things that are now made possible 01:27:04.120 |
whether it's, you know, I don't know, telescopes 01:27:06.840 |
or whether that's some mathematical method or something. 01:27:09.760 |
It's, you know, there's a, something happens, 01:27:14.760 |
a tool gets built and then you can do a bunch of stuff 01:27:18.600 |
and there's a bunch of low-hanging fruit to be picked 01:27:24.020 |
After that, all that low-hanging fruit is picked, 01:27:27.000 |
then it's a hard slog for the next however many decades 01:27:31.200 |
or century or more to get to the next sort of level 01:27:36.680 |
And it's kind of a, and it tends to be the case 01:27:41.480 |
I wouldn't say cruise mode 'cause it's really hard work, 01:27:44.160 |
but it's very hard work for very incremental progress. 01:27:48.040 |
- And in your career and some of the things you've taken on, 01:28:01.480 |
And a small tangent, when you were at Caltech, 01:28:05.200 |
did you get to interact with Richard Feynman at all? 01:28:13.520 |
In fact, and in fact, both when I was at Caltech 01:28:16.560 |
and after I left Caltech, we were both consultants 01:28:20.120 |
at this company called Thinking Machines Corporation, 01:28:22.200 |
which was just down the street from here actually, 01:28:36.560 |
But anyway, he was not into that kind of thing. 01:28:48.720 |
And for me, it's a mechanism to have a more effective machine 01:28:55.120 |
for actually getting things, figuring things out 01:28:59.520 |
- Did he think of it, 'cause essentially what you used, 01:29:03.560 |
I don't know if you were thinking of it that way, 01:29:09.920 |
to empower the exploration of the universe. 01:29:27.640 |
well, was involved with more mathematical computation 01:29:31.280 |
You know, he was quite, he had lots of advice 01:29:35.800 |
about the technical side of what we should do and so on. 01:29:39.320 |
- Do you have examples, memories or thoughts that-- 01:29:53.440 |
He had his own ways of thinking about sort of 01:30:03.440 |
and make a computer follow those intuitional methods. 01:30:11.080 |
what we do is we build this kind of bizarre industrial 01:30:14.400 |
machine that turns every integral into, you know, 01:30:21.320 |
And actually the big problem is turning the results 01:30:27.680 |
And actually Feynman did understand that to some extent. 01:30:43.240 |
and give it back and I still have my files now. 01:30:50.680 |
It's, I, you know, maybe if he'd lived another 20 years, 01:31:09.760 |
- What do you make of that difference? He was a genius 01:31:14.280 |
at creating sort of intuitive, like diving in, you know, 01:31:27.760 |
he was really, really, really good at is calculating stuff. 01:31:45.040 |
Wouldn't tell anybody about the complicated calculation 01:31:50.120 |
was to have the simple intuition about how everything works. 01:31:56.040 |
And, you know, because he'd done this calculation 01:32:07.720 |
And he wasn't meaning that maliciously, so to speak. 01:32:28.360 |
on quantum computers actually back in 1980, '81, 01:33:05.120 |
and he'd say, you know, "I don't understand this." 01:33:08.680 |
So there'd be some big argument about what was, 01:33:17.960 |
that we sort of realized about quantum computing 01:33:35.240 |
there's a remarkable sort of repetition of history 01:33:49.880 |
actually happened right down the street from here 01:33:54.880 |
I had been working on this particular cellular automaton 01:34:06.080 |
So, and actually of all silly physical things, 01:34:13.400 |
called the Connection Machine that that company was making, 01:34:13.400 |
on very, on actually on the same kind of printer 01:34:23.400 |
that people use to make layouts for microprocessors. 01:34:28.400 |
So one of these big, you know, large format printers 01:34:34.640 |
So, okay, so print this out, lots of very tiny cells. 01:34:44.240 |
And so it was very much a physical, you know, 01:35:06.880 |
going around with this big printout and so on?" 01:35:13.760 |
and then observed that that's what happened." 01:35:26.920 |
- Oh, that's such a beautiful sort of dichotomy there 01:35:32.680 |
is you really can't have an intuition about it 01:35:35.320 |
and you reduce it, I mean, you have to run it. 01:35:39.800 |
and especially brilliant physicists like Feynman 01:35:44.640 |
to say that you can't have a compressed, clean intuition 01:35:53.520 |
No, he was, I mean, I think he was sort of on the edge 01:35:56.280 |
of understanding that point about computation. 01:36:00.280 |
I think he always found computation interesting. 01:36:13.160 |
and just find one that does something interesting, right?" 01:36:16.840 |
Turns out, like, I missed it when I first saw it 01:36:24.640 |
"Oh, I'm gonna ignore that case because whatever." 01:36:36.320 |
How did you find yourself having a sufficiently open mind 01:36:40.520 |
to be open to watching rules and them revealing complexity? 01:36:44.920 |
- Yeah, I think that's an interesting question. 01:36:49.040 |
you live through these things and then you say, 01:37:10.280 |
and you're told, "Go figure out what's going on inside it." 01:37:16.240 |
and I started building my first computer language, 01:37:26.920 |
and find the primitives that they can all be made of. 01:37:30.280 |
But then you do something that's really different 01:37:35.280 |
"Now, you know, hopefully they'll be useful to people. 01:37:39.340 |
So you're essentially building an artificial universe 01:37:45.960 |
you're just building whatever you feel like building. 01:37:48.880 |
And that's, and so it was sort of interesting for me 01:38:01.120 |
And so I think that experience of making a computer language 01:38:04.760 |
which is essentially building your own universe, 01:38:06.680 |
so to speak, is, you know, that's kind of the, 01:38:11.160 |
that's what gave me a somewhat different attitude 01:38:15.600 |
It's like, let's just explore what can be done 01:38:22.880 |
of let's be constrained by how the universe actually is. 01:38:26.720 |
essentially you've, as opposed to being limited 01:38:41.600 |
- Right, and it's, well, it's, or a telescope, 01:38:46.720 |
- But there's something fundamentally different 01:38:49.520 |
I mean, it just, I'm hoping not to romanticize the notion, 01:38:54.520 |
but it's more general, the computer is more general 01:38:57.480 |
than a telescope. - It is, it's more general. 01:39:01.420 |
you know, people say, oh, such and such a thing 01:39:05.920 |
was almost discovered at such and such a time. 01:39:12.400 |
to actually understand stuff, or allows one to be open 01:39:15.140 |
to seeing what's going on, that's really hard. 01:39:18.200 |
And, you know, I think in, I've been fortunate in my life 01:39:34.640 |
another level of abstraction, and kind of be open 01:39:39.140 |
But, you know, it's always, I mean, I'm fully aware of, 01:39:43.480 |
I suppose, the fact that I have seen it a bunch of times 01:39:46.980 |
of how easy it is to miss the obvious, so to speak. 01:39:53.240 |
to not miss the obvious, although it may not succeed. 01:40:14.160 |
And in fact, somebody said that Newton didn't have an ego, 01:40:35.000 |
I mean, you know, I've had, look, I've spent more than half 01:40:40.840 |
And, you know, that is a, I think it's actually very, 01:40:47.260 |
it means that one's ego is not a distant thing. 01:40:51.340 |
It's a thing that one encounters every day, so to speak, 01:40:56.500 |
and with how one, you know, develops an organization 01:41:00.100 |
So, you know, it may be that if I'd been an academic, 01:41:26.420 |
I've been fortunate that I think I have reasonable 01:41:35.460 |
who at this point, if somebody tells me something 01:41:37.460 |
and I just don't understand it, my conclusion isn't 01:41:44.860 |
there's something wrong with what I'm being told. 01:41:47.500 |
And that was actually Dick Feynman used to have 01:41:49.300 |
that feature too, he never really believed in. 01:41:57.540 |
- Wow, so that's a fundamentally powerful property of ego 01:42:11.140 |
like when confronted with the fact that doesn't fit 01:42:14.180 |
the thing that you've really thought through, 01:42:16.700 |
sort of both the negative and the positive of ego. 01:42:19.820 |
Do you see the negative of that get in the way, 01:42:24.300 |
- Sure, there are mistakes I've made that are the result 01:42:27.140 |
of I'm pretty sure I'm right and turns out I'm not. 01:42:31.560 |
I mean, that's the, you know, but the thing is 01:42:34.620 |
that the idea that one tries to do things that, 01:42:44.380 |
and then one thinks, maybe I should try doing this myself, 01:42:52.100 |
well, people have been trying to do this for 100 years, 01:42:58.580 |
that I happened to start having some degree of success 01:43:01.860 |
in science and things when I was really young. 01:43:07.700 |
that I don't think I otherwise would have had. 01:43:09.860 |
And, you know, in a sense, I mean, I was fortunate 01:43:12.940 |
that I was working in a field, particle physics, 01:43:15.620 |
during its sort of golden age of rapid progress. 01:43:18.980 |
And that kind of gives one a false sense of achievement 01:43:24.940 |
that's gonna survive if you happen to be, you know, 01:43:30.580 |
- I mean, the reason I totally immediately understood 01:43:36.540 |
let me sort of just try to express my feelings 01:43:38.620 |
on the whole thing, is that if you don't allow 01:43:42.540 |
that kind of ego, then you would never write that book. 01:43:46.780 |
That you would say, well, people must have done this. 01:43:50.740 |
You would not keep digging. - Yeah, that's right. 01:43:52.460 |
- And I think that was, I think you have to take that ego 01:44:03.660 |
- But I think the other point about that book was, 01:44:06.380 |
it was a non-trivial question, how to take a bunch of ideas 01:44:12.340 |
They might, you know, their importance is determined 01:44:28.800 |
And so I had had the experience of sort of saying, 01:44:31.940 |
well, there are these things, there's a cellular automaton, 01:44:36.060 |
And people are like, oh, it must be just like this. 01:44:45.460 |
but you could have done sort of academically, 01:44:47.340 |
just publish, keep publishing small papers here and there. 01:44:53.060 |
You would get, like, as opposed to just dropping 01:45:05.500 |
that's like, you know, one possibility is like, 01:45:12.460 |
discovering these things in all these different areas. 01:45:16.340 |
But I decided that, you know, in the interests 01:45:19.980 |
and, you know, writing that book took me a decade anyway. 01:45:24.300 |
It's not, there's not a lot of wiggle room, so to speak. 01:45:26.220 |
One can't be wrong by a factor of three, so to speak, 01:45:30.980 |
That I, you know, I thought the best thing to do, 01:45:36.540 |
that most respects the intellectual content, so to speak, 01:45:41.540 |
is you just put it out with as much force as you can, 01:45:51.900 |
for example, I run a company which has my name on it, right? 01:46:05.740 |
It's about basically sort of taking responsibility 01:46:08.860 |
for what one's doing, and, you know, in a sense, 01:46:21.980 |
because in a sense, my company is sort of something 01:46:29.260 |
and I'm kind of just its mascot at some level. 01:46:32.340 |
I mean, I also happen to be a pretty, you know, 01:46:59.980 |
If a company succeeds or fails, he would just, 01:47:02.660 |
that emotionally, he would suffer through that. 01:47:09.460 |
- And also, Wolfram's a pretty good branding name, 01:47:16.500 |
- Yeah, so you made up for it with the last name. 01:47:19.780 |
Okay, so in 2002, you published "A New Kind of Science," 01:47:35.540 |
Can you briefly describe the vision, the hope, 01:47:40.540 |
the main idea presented in this 1,200-page book? 01:47:46.220 |
- Sure, although it took 1,200 pages to say in the book, 01:47:54.980 |
a good way to get into it is to look at, sort of, 01:47:56.940 |
the arc of history and to look at what's happened 01:48:00.940 |
I mean, there was this sort of big idea in science 01:48:11.100 |
Let's use sort of the formal idea of mathematical equations 01:48:14.860 |
to describe what might be happening in the world, 01:48:18.140 |
just using sort of logical argumentation and so on. 01:48:30.100 |
But I got interested in how one could generalize 01:48:34.820 |
There is a formal theory, there are definite rules, 01:48:44.500 |
mathematical rules, and we now have this sort of notion 01:48:50.680 |
Let's use the kinds of rules that can be embodied 01:49:25.420 |
And in a series of steps that you can represent 01:49:32.660 |
according to a rule that depends on the color 01:49:34.940 |
of the cell above it and to its left and right. 01:49:40.580 |
if the cell and its right neighbor are not the same, 01:49:48.660 |
and, or the cell on the left is black or something, 01:49:58.800 |
That rule, I'm not sure I said it exactly right, 01:50:10.740 |
So some rules, you get a very simple pattern. 01:50:18.440 |
You start them off from a sort of simple seed. 01:50:23.020 |
But other rules, and this was the big surprise 01:50:27.460 |
the simple computer experiments to find out what happens, 01:50:30.360 |
is that they produce very complicated patterns of behavior. 01:50:33.780 |
So for example, this rule 30 rule has the feature 01:50:38.060 |
you started from just one black cell at the top, 01:50:43.320 |
If you look like at the center column of cells, 01:50:48.920 |
It goes black, white, black, black, whatever it is. 01:50:51.580 |
That sequence seems for all practical purposes random. 01:50:58.960 |
you compute the digits of pi, 3.1415926, whatever. 01:51:16.000 |
they seem for all practical purposes completely random. 01:51:21.280 |
that even though the rule is very simple, much simpler, 01:51:31.260 |
you're still generating immensely complicated behavior. 01:51:35.900 |
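The rule just described, where each new cell's color is determined by the cell above it together with that cell's left and right neighbors, can be sketched in a few lines of Python (a toy reimplementation, not Wolfram's code); the rule number 30 simply encodes the output bit for each of the eight possible three-cell neighborhoods:

```python
# Minimal elementary cellular automaton: each new cell depends on the cell
# above it and that cell's left and right neighbors (8 neighborhoods total).
def step(cells, rule=30):
    """One step of an elementary CA; the row widens by one cell per side."""
    cells = [0, 0] + cells + [0, 0]  # pad so the pattern can grow outward
    return [
        (rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
        for i in range(1, len(cells) - 1)
    ]

def center_column(rule, steps):
    """Start from a single black cell and collect the center cell each step."""
    cells = [1]
    column = [1]
    for _ in range(steps):
        cells = step(cells, rule)
        column.append(cells[len(cells) // 2])
    return column

# The Rule 30 center column: for all practical purposes it looks random,
# much like the digits of pi mentioned above.
print("".join(map(str, center_column(30, 31))))
```

Running this reproduces the pattern described: a trivial rule, a single black cell as the seed, and a center column with no apparent regularity.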
I think you probably have said it and looked at it so long, 01:51:39.240 |
you forgot the magic of it, or perhaps you don't, 01:52:23.660 |
the kind of complexity, you mentioned rule 30, 01:52:46.860 |
And that's just, I mean, that's a magical idea. 01:53:03.220 |
- I mean, it's transformational about how you see the world. 01:53:07.580 |
I don't know, we can have all kinds of discussions 01:53:12.940 |
I sometimes think if I were on a desert island 01:53:17.780 |
and was, I don't know, maybe it was some psychedelics 01:53:29.140 |
For some reason, it's a deeply profound notion, 01:53:35.940 |
it was a very intuition-breaking thing to discover. 01:53:42.660 |
you point the computational telescope out there 01:53:45.660 |
and suddenly you see, I don't know, you know, 01:53:51.620 |
but suddenly you see something that's kind of 01:53:52.940 |
very unexpected, and Rule 30 was very unexpected for me. 01:54:05.580 |
- What would you say, yeah, what would you say? 01:54:07.340 |
- Yeah, I mean, I-- - What are we looking at, 01:54:17.500 |
for example, it says here, if you have a black cell 01:54:22.860 |
then the cell on the next step will be white. 01:54:25.420 |
And so here's the actual pattern that you get 01:54:27.900 |
starting off from a single black cell at the top there. 01:54:33.980 |
initial condition. - That's the initial thing, 01:54:38.940 |
and at every step, you're just applying this rule 01:54:48.300 |
you gotta get the, there's gotta be some trace 01:54:52.140 |
Okay, we'll run it, let's say, for 400 steps. 01:54:55.340 |
It's what it does, it's kind of aliasing a bit 01:54:59.580 |
there's a little bit of regularity over on the left. 01:55:02.380 |
But there's a lot of stuff here that just looks 01:55:15.640 |
- Your mind immediately starts, is there a pattern? 01:55:20.780 |
that's where the mind goes. - Well, right, so I spent, 01:55:24.320 |
and I thought, well, this is kind of interesting, 01:55:30.000 |
something will resolve into something simple. 01:55:32.580 |
And I did all kinds of analysis of using mathematics, 01:55:37.260 |
statistics, cryptography, whatever, whatever, 01:55:45.300 |
I started thinking, maybe there's a real phenomenon here 01:55:55.140 |
at the natural world and seeing all this complexity 01:56:12.820 |
And so the shock here was, even from something very simple, 01:56:19.360 |
Maybe this is getting at sort of the secret that nature has 01:56:23.020 |
that allows it to make really complex things, 01:56:25.820 |
even though its underlying rules may not be that complex. 01:56:43.380 |
- The truth of every sort of science discovery 01:56:49.260 |
I mean, I've spent, I happen to be interested 01:56:54.580 |
how did people come to figure out this or that thing? 01:56:57.660 |
And there's always a long kind of sort of preparatory, 01:57:05.740 |
and a mindset in which it's possible to see something. 01:57:15.940 |
I finally had a high resolution laser printer. 01:57:21.300 |
of the cellular automata, and I generated this one, 01:57:30.420 |
you know, I really should try to understand this, 01:57:35.140 |
this is, I really don't understand what's going on, 01:57:43.860 |
It was not, it was depressingly unsudden, so to speak, 01:57:51.420 |
like principle of computational equivalence, for example, 01:57:54.760 |
you know, I thought, well, that's a possible thing. 01:57:58.760 |
Still didn't know for sure that it's correct, 01:58:10.200 |
of studying the computational universe of simple programs, 01:58:13.400 |
it took me probably a decade, decade and a half 01:58:28.820 |
it's a good brownie point or something for the whole idea. 01:58:41.060 |
what's been interesting sort of in the arc of history 01:58:46.540 |
it's kind of like the mathematical equations approach. 01:59:11.720 |
as somebody who's kind of lived inside this paradigm shift, 01:59:17.380 |
I mean, no doubt in sort of the history of science, 01:59:19.840 |
that will be seen as an instantaneous paradigm shift, 01:59:22.860 |
but it sure isn't instantaneous when it's played out 01:59:28.320 |
And it's the kind of thing where it's sort of interesting 01:59:33.340 |
because in the dynamics of sort of the adoption 01:59:40.460 |
the younger the field, the faster the adoption typically, 01:59:48.300 |
who've studied this field, and it is the way it is, 02:00:05.560 |
And if I was, that makes me kind of thick-skinned 02:00:15.560 |
and anytime you write a book called something 02:00:21.080 |
the pitchforks will come out for the old kind of science. 02:00:27.780 |
I have to say that I was fully aware of the fact that 02:00:33.600 |
when you see sort of incipient paradigm shifts in science, 02:00:38.460 |
the vigor of the negative response upon early introduction 02:00:48.440 |
So in other words, if people just don't care, 02:00:56.720 |
that means you didn't really discover anything interesting. 02:01:09.720 |
Can you maybe talk about interesting properties 02:01:22.200 |
So I mean, the most interesting thing about cellular automata 02:01:27.040 |
is that it's hard to figure stuff out about them. 02:01:34.260 |
you try and bash them with some other technique, 02:01:46.160 |
that they're sort of showing irreducible computation. 02:01:55.600 |
- But there's specific formulations of that fact. 02:02:20.560 |
Now the question is, can you prove how random it is? 02:02:28.760 |
We haven't been able to show that it will never repeat. 02:02:31.520 |
We know that if there are two adjacent columns, 02:02:37.680 |
But just knowing whether that center column can ever repeat, 02:02:42.080 |
Another problem that I sort of put in my collection of, 02:02:49.120 |
you know, these three prize problems about Rule 30. 02:03:13.920 |
- The problem is, that's right, money's not the thing. 02:03:17.000 |
they're just clean formulations of challenges. 02:03:19.880 |
- It's just, you know, will it ever become periodic? 02:03:28.420 |
And the third problem is a little bit harder to state, 02:03:30.160 |
which is essentially, is there a way of figuring out 02:03:39.160 |
with less computational effort than about T steps? 02:03:43.320 |
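A brute-force sketch of what these problems ask (a toy check, assuming nothing beyond the rule itself): today, knowing the center cell at step T means simulating all T steps, and the most one can do empirically is confirm that no period appears in the finitely many cells we can compute:

```python
# Toy check of the first Rule 30 prize problem: does the center column
# ever become periodic? We can only test the cells we actually compute.
def rule30_center(steps):
    """Center column of Rule 30, grown from a single black cell at position 0."""
    black = {0}  # positions of black cells in the current row
    col = [1]
    for _ in range(steps):
        lo, hi = min(black) - 1, max(black) + 1
        # Rule 30: the new cell is black iff the 3-cell neighborhood value is 1-4
        black = {x for x in range(lo, hi + 1)
                 if (x - 1 in black) * 4 + (x in black) * 2 + (x + 1 in black)
                 in (1, 2, 3, 4)}
        col.append(1 if 0 in black else 0)
    return col

def shortest_period(seq):
    """Smallest p such that the whole sequence repeats with period p, else None."""
    for p in range(1, len(seq) // 2 + 1):
        if all(seq[i] == seq[i + p] for i in range(len(seq) - p)):
            return p
    return None

print(shortest_period(rule30_center(500)))  # prints None: no period found here
```

Note the asymmetry: finding a period would settle the question, but failing to find one in any finite prefix proves nothing, which is exactly why the problem is hard.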
So in other words, is there a way to jump ahead and say, 02:03:56.680 |
But both, I mean, you know, for any one of these, 02:03:59.080 |
one could prove that, you know, one could discover, 02:04:01.760 |
you know, we know what rule 30 does for a billion steps, 02:04:04.680 |
but, and maybe we'll know for a trillion steps 02:04:07.040 |
before too very long, but maybe at a quadrillion steps, 02:04:12.280 |
You might say, how could that possibly happen? 02:04:17.240 |
I thought, and this is typical of what happens 02:04:20.520 |
I thought, let me find an example where it looks like 02:04:27.520 |
And I found one, and it's just, you know, I did a search, 02:04:30.060 |
I searched, I don't know, maybe a million different rules 02:04:38.600 |
I kind of have this thing that I say in a kind of silly way 02:04:42.100 |
about the computational universe, which is, you know, 02:04:51.040 |
even though I can't imagine how it's gonna do it. 02:04:53.600 |
And, you know, I didn't think I would find one 02:04:55.760 |
that, you know, you would think after all these years 02:04:57.400 |
that when I found sort of all possible things, 02:05:05.300 |
that I would have gotten my intuition wrapped 02:05:10.160 |
these creatures are always, in the computational universe, 02:05:12.720 |
are always smarter than I'm gonna be, but, you know-- 02:05:18.080 |
And that makes it, that makes one feel very sort of, 02:05:21.800 |
it's humbling every time, because every time the thing is, 02:05:37.820 |
- Oh, it's my favorite, 'cause I found it first, 02:05:48.740 |
- And that doesn't prove anything about the other rules. 02:05:53.660 |
of how you go about trying to prove something 02:05:57.260 |
- Yes, and it also, all these things help build intuition. 02:06:02.180 |
that this was repetitive after a trillion steps, 02:06:15.140 |
I mean, it's, although it's sometimes challenging, 02:06:18.300 |
like the, you know, I put out a prize in 2007 02:06:29.380 |
and a young chap in England named Alex Smith, 02:06:29.380 |
The big new principle is the simplest Turing machine 02:06:54.960 |
that might have been universal actually is universal, 02:06:57.900 |
and it's incredibly much simpler than the Turing machines 02:07:00.500 |
that people already knew were universal before that, 02:07:07.940 |
is closer at hand than you might have thought, 02:07:13.100 |
in that particular case, were not terribly illuminated. 02:07:15.100 |
- It would be nice if the methods would also be elegant. 02:07:21.780 |
I mean, it's like a lot of, we've talked about earlier, 02:07:24.260 |
kind of opening up AIs and machine learning and things 02:07:28.460 |
and what's going on inside, and is it just step-by-step, 02:07:32.060 |
or can you sort of see the bigger picture more abstractly? 02:07:35.260 |
- It's unfortunate, I mean, with Fermat's last theorem proof, 02:07:44.620 |
I mean, it's not, it doesn't fit into the margins of a page. 02:07:50.300 |
one of the things is that's another consequence 02:07:54.460 |
this fact that there are even quite short results 02:07:58.600 |
in mathematics whose proofs are arbitrarily long. 02:08:09.580 |
Why is it the case, how have people managed to navigate 02:08:16.340 |
where they're not just thrown into, it's all undecidable? 02:08:23.560 |
- And that would be, that would have a poetic beauty to it 02:08:28.560 |
if people were to find something interesting about rule 30, 02:08:36.640 |
It wouldn't say anything about the broad irreducibility 02:08:39.640 |
of all computations, but it would nevertheless 02:08:45.280 |
- Well, yeah, but to me, it's like, in a sense, 02:08:50.120 |
establishing principle of computational equivalence, 02:08:53.000 |
it's a little bit like doing inductive science anywhere. 02:08:58.800 |
the more convinced you are that it's generally true. 02:09:01.440 |
I mean, we don't get to, whenever we do natural science, 02:09:04.940 |
we say, well, it's true here that this or that happens. 02:09:08.840 |
Can we prove that it's true everywhere in the universe? 02:09:17.440 |
We're establishing facts in the computational universe, 02:09:33.520 |
but what's the difference between the kind of computation, 02:09:36.680 |
now that we're talking about cellular automata, 02:09:39.280 |
what's the difference between the kind of computation, 02:09:48.200 |
through the process of evolution, and cellular automata? 02:09:52.220 |
I mean, we've kind of implied through the discussion 02:09:57.320 |
but we talked about the potential equivalence 02:10:02.920 |
and the kind of computation going on in Turing machines, 02:10:08.800 |
do you think there's something special or interesting 02:10:11.640 |
about the kind of computation that our bodies do? 02:10:15.640 |
- Right, well, let's talk about brains primarily. 02:10:25.500 |
in the sense that there's a lot of computation 02:10:35.680 |
It follows those rules, it does what it does. 02:10:38.120 |
The thing that's special about the computation 02:10:40.080 |
in our brains is that it's connected to our goals 02:10:53.960 |
of computation out there, how do you connect that 02:11:03.120 |
of how to do that, and what I've been interested in 02:11:08.880 |
that allows that something that both we humans 02:11:28.080 |
Well, you could say the same thing, actually, in physics. 02:11:44.720 |
magnetic disks, whatever, or we could use liquid crystals 02:11:58.200 |
these things that happen to exist in the physical universe 02:12:01.160 |
and making it be something that we care about 02:12:04.080 |
'cause we sort of entrain it into technology. 02:12:06.760 |
And it's the same thing in the computational universe 02:12:16.440 |
and we will go and sort of mine the computational universe 02:12:19.120 |
for something that's useful for some particular objective. 02:12:24.320 |
trying to sort of navigate the computational universe 02:12:29.120 |
that's where computational language comes in. 02:12:31.920 |
And, you know, a lot of what I've spent time doing 02:12:34.360 |
and building this thing we call Wolfram Language, 02:12:41.480 |
And kind of the goal there is to have a way to express 02:12:46.480 |
kind of computational thinking, computational thoughts 02:12:51.640 |
in a way that both humans and machines can understand. 02:12:54.200 |
So it's kind of like in the tradition of computer languages, 02:13:05.200 |
and let's specify, let's have a human way to specify, 02:13:10.600 |
at the level of the way that computers are built. 02:13:14.320 |
is representing sort of the whole world computationally, 02:13:24.720 |
things that have come to exist in our civilization 02:13:28.320 |
and the sort of knowledge base of our civilization, 02:13:44.880 |
but it's kind of the arc of what we've tried to do 02:13:48.360 |
in building this kind of computational language 02:13:53.560 |
of what happened when mathematical notation was invented. 02:14:01.720 |
They were always explaining their math in words, 02:14:06.520 |
And as soon as mathematical notation was invented, 02:14:15.760 |
When we deal with computational thinking about the world, 02:14:20.600 |
What is the kind of formalism that we can use 02:14:31.120 |
where we have a pretty full-scale computational language 02:15:10.540 |
what is Wolfram language in terms of sort of, 02:15:27.000 |
in terms of stuff you can play with, what is it? 02:15:31.360 |
What are the different ways to interact with it? 02:15:39.080 |
one is Mathematica, the other is Wolfram Alpha. 02:15:58.920 |
is you're typing little pieces of computational language, 02:16:04.560 |
- It's very kind of, there's like a symbolic. 02:16:11.360 |
so I mean, I don't know how to cleanly express that, 02:16:27.920 |
is just stuff that computers intrinsically do. 02:16:43.880 |
it's aimed to be an abstract language from the beginning. 02:17:05.600 |
Now that X could perfectly well be the city of Boston. 02:17:24.280 |
sort of computationally work with these different, 02:17:26.840 |
these kinds of things that exist in the world 02:17:29.920 |
or describe the world, that's really powerful. 02:17:32.520 |
And that's what, I mean, when I started designing, 02:17:44.360 |
I kind of wanted to have this sort of infrastructure 02:17:49.680 |
for computation, which was as fundamental as possible. 02:17:52.200 |
I mean, this is what I got for having been a physicist 02:17:54.520 |
and tried to find fundamental components of things 02:18:00.480 |
of transformation rules for symbolic expressions 02:18:09.480 |
And that's what we've been building from in Wolfram language 02:18:22.920 |
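As a toy illustration of that idea (my sketch, not Wolfram's actual design), one can represent symbolic expressions as nested tuples and apply a transformation rule wherever a pattern matches, with names ending in an underscore standing for pattern variables:

```python
# A toy rewrite engine: expressions are nested tuples like ("Plus", "y", 0),
# and a rule is a (pattern, replacement) pair applied bottom-up.
def match(pat, expr, env):
    """Match a pattern against an expression; names ending in '_' bind anything."""
    if isinstance(pat, str) and pat.endswith("_"):
        if pat in env and env[pat] != expr:
            return None
        return {**env, pat: expr}
    if isinstance(pat, tuple) and isinstance(expr, tuple) and len(pat) == len(expr):
        for p, e in zip(pat, expr):
            env = match(p, e, env)
            if env is None:
                return None
        return env
    return env if pat == expr else None

def subst(rep, env):
    """Substitute bound pattern variables into the replacement."""
    if isinstance(rep, tuple):
        return tuple(subst(r, env) for r in rep)
    return env.get(rep, rep) if isinstance(rep, str) else rep

def rewrite(expr, rule):
    """Apply one rule bottom-up, wherever it matches."""
    pat, rep = rule
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e, rule) for e in expr)
    env = match(pat, expr, {})
    return subst(rep, env) if env is not None else expr

# The rule "anything plus zero is itself", applied inside a larger expression:
rule = (("Plus", "a_", 0), "a_")
print(rewrite(("Times", ("Plus", ("Plus", "y", 0), 0), "z"), rule))
# → ('Times', 'y', 'z')
```

The point of the sketch is only the shape of the idea: a small set of transformation rules on symbolic expressions is enough to drive nontrivial computation.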
And it's really been built in a very different direction 02:18:32.000 |
it really is kind of wrapped around the operations 02:18:41.080 |
is to have the language itself be able to cover 02:18:54.320 |
You know, I could probably pick a random here. 02:18:56.800 |
I'm gonna pick just because, just for fun, I'll pick, 02:19:07.480 |
So let's just say random sample of 10 of them 02:19:21.400 |
between different types of Boolean expressions. 02:19:38.640 |
Well, we've got things about $RequesterAddress 02:19:38.640 |
- It's also graphical, sort of window-movable. 02:19:55.440 |
I want to pick another 10 'cause I think this is some, okay. 02:19:58.400 |
So yeah, there's a lot of infrastructure stuff here 02:20:01.040 |
that you see if you just start sampling at random, 02:20:03.520 |
there's a lot of kind of infrastructural things. 02:20:05.280 |
If you more, you know, if you more look at the- 02:20:07.560 |
- Some of the exciting machine learning stuff 02:20:12.600 |
I mean, you know, so one of those functions is, 02:20:21.240 |
Let's say current image and let's pick up an image, 02:20:26.160 |
- Tap that current image, accessing the webcam, 02:20:33.720 |
we can say image identify, open square brackets, 02:20:36.840 |
and then you just paste that picture in there. 02:20:39.640 |
- Image identify function running on the picture. 02:20:44.440 |
I look like a plunger because I got this great big thing 02:20:47.680 |
- Classify, so this image identify classifies 02:21:10.000 |
- That hopefully will not give you an existential crisis. 02:21:12.160 |
And then 8%, or I shouldn't say percent, but-- 02:21:30.560 |
- Retook a picture with a little bit more of your body. 02:21:43.880 |
so this is image identify as an example of one. 02:21:48.400 |
- And that's part of the, that's like part of the language. 02:21:55.080 |
I could say, I don't know, let's find the geo-nearest, 02:22:04.240 |
Let's find the 10, I wonder where it thinks here is. 02:22:09.240 |
Let's try finding the 10 volcanoes nearest here, okay? 02:22:14.760 |
- So geo-nearest volcano here, 10 nearest volcanoes. 02:22:32.800 |
and it's the, well, no, we're okay, we're okay. 02:22:39.440 |
But, you know, the fact that right in the language, 02:22:42.840 |
it knows about all the volcanoes in the world, 02:22:45.280 |
it knows, you know, computing what the nearest ones are, 02:22:48.280 |
it knows all the maps of the world and so on. 02:22:50.120 |
- It's a fundamentally different idea of what a language is. 02:22:53.040 |
That's why I like to talk about as a, you know, 02:22:58.520 |
- And just if you can comment briefly, I mean, 02:23:03.880 |
along with Wolfram Alpha, represents kind of what the dream 02:23:14.400 |
and from that extract the different hierarchies 02:23:40.160 |
And there's just some sense where in the 80s and 90s, 02:23:54.080 |
- But then out of that emerges kind of Wolfram language, 02:24:01.640 |
in some sense, those efforts were too modest. 02:24:07.480 |
and you actually can't do it with a particular area. 02:24:12.920 |
it's critical to have broad knowledge of the world 02:24:15.160 |
if you want to do good natural language understanding. 02:24:17.800 |
And you kind of have to bite off the whole problem. 02:24:20.280 |
If you say we're just gonna do the blocks world over here, 02:24:32.400 |
so the relationship between what we've tried to do 02:24:37.820 |
you know, in a sense, if you look at the development 02:24:43.200 |
there was kind of this notion pre 300 years ago or so now, 02:24:47.160 |
you want to figure something out about the world, 02:24:50.040 |
You can do things which just use raw human thought. 02:24:54.000 |
And then along came sort of modern mathematical science. 02:24:57.560 |
And we found ways to just sort of blast through that 02:25:03.560 |
Now we also know we can do that with computation and so on. 02:25:08.920 |
So when we look at how do we sort of encode knowledge 02:25:14.440 |
one way we could do it is start from scratch, 02:25:17.900 |
it's just a neural net figuring everything out. 02:25:20.800 |
But in a sense that denies the sort of knowledge 02:25:26.300 |
because in our civilization, we have learned lots of stuff. 02:25:29.480 |
We've surveyed all the volcanoes in the world, 02:25:32.440 |
we figured out lots of algorithms for this or that. 02:25:35.520 |
Those are things that we can encode computationally. 02:25:42.260 |
you don't have to start everything from scratch. 02:25:46.960 |
is to try and sort of capture the knowledge of the world 02:25:55.480 |
which were for a long time undoable by computers 02:26:06.440 |
which actually were pretty easy for humans to do 02:26:10.960 |
I think the thing that's interesting that's emerging now 02:26:19.240 |
and this kind of sort of much more statistical 02:26:22.880 |
kind of things like image identification and so on, 02:26:29.000 |
by having this sort of symbolic representation 02:26:33.500 |
that that's where things get really interesting 02:26:35.800 |
and where you can kind of symbolically represent patterns 02:26:42.260 |
that's kind of a part of the path forward, so to speak. 02:26:51.360 |
is not anywhere close to building the kind of wide world 02:26:56.360 |
of computable knowledge that Wolfram Language have built, 02:27:03.400 |
you've done the incredibly hard work of building this world, 02:27:09.400 |
can serve as tools to help you explore that world. 02:27:28.840 |
it's running in sort of a very efficient computational way, 02:27:32.040 |
but then there's sort of things like the interface 02:27:35.000 |
how do you do natural language understanding to get there? 02:27:40.920 |
That's, I mean, actually a good example right now 02:27:50.480 |
using essentially not learning-based methods, 02:27:58.480 |
- In terms of when people try to enter a query 02:28:01.200 |
and then converting, so the process of converting, 02:28:04.120 |
NLU defined beautifully as converting their query 02:28:27.440 |
go pick out all the cities in that text, for example. 02:28:30.480 |
And so a good example of, you know, so we do that, 02:28:32.800 |
we're using modern machine learning techniques. 02:28:36.240 |
And it's actually kind of an interesting process 02:28:51.960 |
And so we've got this kind of loop going between those, 02:29:04.800 |
that people have always dreamed about or talking about. 02:29:15.400 |
- You know, that's a, it's a complicated issue, 02:29:24.480 |
it involves ideas, and ideas are absorbed slowly 02:29:30.280 |
- And then there's sort of, like what we're talking about, 02:29:45.520 |
So it's interesting how the spread of ideas works. 02:29:48.520 |
- You know what's funny with Wolfram Language, 02:29:55.880 |
if you look at the, I would say, very high-end of R&D, 02:30:02.280 |
"Wow, that's a really, you know, impressive, smart person," 02:30:06.020 |
they're very often users of Wolfram Language, 02:30:09.720 |
If you look at the more sort of, it's a funny thing, 02:30:12.400 |
if you look at the more kind of, I would say, 02:30:14.960 |
people who are like, "Oh, we're just plodding away, 02:30:27.200 |
you know, the high-end, we've really been very successful 02:30:30.280 |
in for a long time, and it's, but with, you know, 02:30:38.500 |
my fault, in a sense, because it's kind of, you know, 02:30:50.260 |
sort of the best possible technical tower we can, 02:30:53.460 |
rather than sort of doing the commercial side of things 02:30:57.200 |
and pumping it out in sort of the most effective way. 02:31:00.040 |
- And there's an interesting idea that, you know, 02:31:03.420 |
by opening everything up, sort of the GitHub model, 02:31:07.880 |
but there's an interesting, I think I've heard you 02:31:10.040 |
discuss this, that that turns out not to work 02:31:12.940 |
in a lot of cases, like in this particular case, 02:31:18.720 |
about the integrity, the quality of the knowledge 02:31:31.520 |
- Yeah, it's not the nature of how things work. 02:31:41.700 |
maintaining a coherent vision over a long period of time 02:31:45.320 |
and doing not only the cool vision-related work, 02:31:49.620 |
but also the kind of mundane in the trenches, 02:31:58.820 |
That's the mundane, the fascinating and the mundane, 02:32:05.180 |
- Yeah, I mean, that's probably not the most, 02:32:09.460 |
in all these different cloud environments and so on. 02:32:12.120 |
That's pretty, you know, that's very practical stuff. 02:32:16.500 |
and have there be, take only a fraction of a millisecond 02:32:28.140 |
it's an interesting thing over the period of time, 02:32:30.060 |
you know, Wolfram Language has existed basically 02:32:30.060 |
for more than half of the total amount of time 02:32:35.580 |
that any language, any computer language has existed. 02:32:37.940 |
That is, computer language is maybe 60 years old, 02:32:46.180 |
So it's kind of a, and I think I was realizing recently 02:32:50.860 |
there's been more innovation in the distribution of software 02:32:54.580 |
than probably than in the structure of programming languages 02:32:59.180 |
And we, you know, we've been sort of trying to do our best 02:33:06.140 |
because I have a simple private company and so on 02:33:08.900 |
that doesn't have, you know, a bunch of investors, 02:33:11.380 |
you know, telling us we're gonna do this or that, 02:33:15.700 |
And so, for example, we're able to, oh, I don't know, 02:33:18.940 |
we have this free Wolfram Engine for developers, 02:33:27.540 |
and Wolfram Language at basically all major universities, 02:33:37.260 |
And, you know, we've been doing a progression of things. 02:33:40.840 |
I mean, different things like Wolfram Alpha, for example, 02:33:48.220 |
- Okay, Wolfram Alpha is a system for answering questions 02:33:51.860 |
where you ask a question with natural language 02:33:58.820 |
So the question could be something like, you know, 02:34:02.580 |
what's the population of Boston divided by New York 02:34:07.540 |
And it'll take those words and give you an answer. 02:34:22.580 |
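To make that concrete, here is a deliberately tiny, rule-based sketch of the idea (nothing like Wolfram Alpha's real pipeline; the grammar handles only this one query shape, and the population figures are made-up placeholders): the words are parsed into a symbolic expression, which is then evaluated against a knowledge base:

```python
# Toy NLU pipeline: natural language -> symbolic expression -> answer.
# The knowledge base and its numbers are hypothetical placeholders.
KNOWLEDGE = {
    ("population", "boston"): 675_000,
    ("population", "new york"): 8_500_000,
}

def parse(query):
    """'<prop> of <entity> divided by <prop> of <entity>' -> symbolic form."""
    left, right = query.lower().split(" divided by ")
    return ("Divide", parse_entity(left), parse_entity(right))

def parse_entity(phrase):
    prop, _, entity = phrase.strip().partition(" of ")
    return ("Lookup", prop.strip(), entity.strip())

def evaluate(expr):
    """Evaluate the symbolic expression against the knowledge base."""
    if expr[0] == "Lookup":
        return KNOWLEDGE[(expr[1], expr[2])]
    if expr[0] == "Divide":
        return evaluate(expr[1]) / evaluate(expr[2])

query = "population of Boston divided by population of New York"
print(parse(query))      # the intermediate symbolic representation
print(evaluate(parse(query)))  # roughly 0.079 with these placeholder numbers
```

The hard part, of course, is everything this sketch skips: free-form phrasings, ambiguity, and a knowledge base covering the world rather than two cities.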
belongs to Wolfram Alpha or to the Wolfram Language? 02:34:26.340 |
- We just call it the Wolfram Knowledge Base. 02:34:28.720 |
- I mean, that's been a big effort over the decades 02:34:33.580 |
And, you know, more of it flows in every second, so. 02:34:38.460 |
Like, that's one of the most incredible things. 02:34:41.500 |
Of course, in the long-term, Wolfram Language itself 02:34:53.740 |
So what's the process of building that knowledge base? 02:34:57.540 |
The fact that you, first of all, from the very beginning, 02:35:08.420 |
to the incredible knowledge base that you have now? 02:35:11.420 |
- Well, yeah, it was kind of scary at some level. 02:35:13.300 |
I mean, I had wondered about doing something like this 02:35:17.060 |
So it wasn't like I hadn't thought about it for a while. 02:35:19.100 |
- But most of us, most of the brilliant dreamers 02:35:22.980 |
give up such a difficult engineering notion at some point. 02:35:28.300 |
Well, the thing that happened with me, which was kind of, 02:35:30.980 |
it's a live-your-own-paradigm kind of theory. 02:35:36.660 |
I had assumed that to build something like Wolfram Alpha 02:35:40.020 |
would require sort of solving the general AI problem. 02:35:46.340 |
and I thought I don't really know how to do that, 02:35:49.660 |
Then I worked on my new kind of science project 02:35:52.460 |
and sort of exploring the computational universe 02:35:57.780 |
which say there is no bright line between the intelligent 02:36:02.940 |
So I thought, look, that's this paradigm I've built. 02:36:06.020 |
Now I have to eat that dog food myself, so to speak. 02:36:24.660 |
I remember I took the early team to a big reference library 02:36:45.180 |
The fact that you can walk into the reference library, 02:36:46.780 |
it's a big, big thing with lots of reference books 02:36:53.580 |
it's not the infinite corridor of, so to speak, 02:37:15.620 |
this area, this area, a few hundred areas and so on. 02:37:27.340 |
get used by sort of the world's experts in lots of areas. 02:37:37.740 |
and we're able to ask them for input and so on. 02:37:46.300 |
who helped us figure out what to do wouldn't be right. 02:37:50.180 |
'Cause our goal was to kind of get to the point 02:37:52.140 |
where we had sort of true expert level knowledge 02:38:01.060 |
on the basis of general knowledge in our civilization, 02:38:03.740 |
make it be automatic to be able to answer that question. 02:38:10.900 |
from the very beginning and it's now also used in Alexa. 02:38:13.660 |
And so it's people are kind of getting more of the, 02:38:21.980 |
I mean, in a sense, the question answering problem 02:38:24.900 |
was viewed as one of the sort of core AI problems 02:38:33.340 |
who was a well-known AI person from right around here. 02:38:38.100 |
And I remember when Wolfram Alpha was coming out, 02:38:38.100 |
it was a few weeks before it came out, I think, 02:38:54.860 |
And then he's talking about something different. 02:38:56.900 |
I said, "No, Marvin, this time it actually works." 02:38:56.900 |
Of course, we have a record of what he typed in, 02:39:09.060 |
- Can you share where his mind was in the testing space? 02:39:19.540 |
medical stuff and chemistry stuff and astronomy and so on. 02:39:24.540 |
And it was like, after a few minutes, he was like, 02:39:32.180 |
But that was kind of told you something about the state, 02:39:38.700 |
in a sense, by trying to solve the bigger problem, 02:39:41.620 |
we were able to actually make something that would work. 02:39:45.300 |
we had a bunch of completely unfair advantages. 02:39:47.700 |
For example, we already built a bunch of Wolfram Language, 02:39:54.100 |
I had the practical experience of building big systems. 02:40:03.140 |
to not just sort of give up in doing something like this. 02:40:12.500 |
I've worked on a bunch of big projects in my life. 02:40:39.540 |
And usually it does, something happens in a few years, 02:40:48.180 |
always the challenge is you end up with these projects 02:41:00.540 |
And that's an interesting sort of personal challenge. 02:41:12.740 |
but it's kind of making a bet that I can kind of, 02:41:17.740 |
I can do that as well as doing the incredibly 02:41:28.540 |
I just talked for the second time with Elon Musk 02:41:31.620 |
and you two share that quality a little bit of that optimism 02:41:46.020 |
you can call it ego, you can call it naivety, 02:41:48.620 |
you can call it optimism, whatever the heck it is, 02:41:50.780 |
but that's how you solve the impossible things. 02:42:00.940 |
a bit more confident and progressively able to, 02:42:03.740 |
you know, decide that these projects aren't crazy, 02:42:11.060 |
oh, I've done these projects and they're big, 02:42:18.260 |
And that's, you know, and that can be a trap. 02:42:20.580 |
And often these projects are of completely unknown, 02:42:29.820 |
- On the, sort of building this giant knowledge base 02:42:35.100 |
that's behind Wolfram Language, Wolfram Alpha, 02:42:43.380 |
What do you think about, for example, Wikipedia, 02:42:50.700 |
that's not converted into computable knowledge? 02:42:53.540 |
Do you think, if you look at Wolfram Language, 02:42:56.700 |
Wolfram Alpha, 20, 30, maybe 50 years down the line, 02:43:13.180 |
it doesn't include the understanding of information. 02:43:22.020 |
represented within-- - Sure, I would hope so. 02:43:26.740 |
- How hard is that problem, like closing that gap? 02:43:33.140 |
of answering general knowledge questions about the world, 02:43:35.600 |
we're in pretty good shape on that right now. 02:43:50.180 |
it might even be the specifications for, you know, 02:43:53.860 |
when it encounters this or that or the other? 02:43:58.060 |
Then, you know, write that in a computational language 02:44:01.860 |
and be able to express things about the world. 02:44:09.280 |
thing at this point in the, you know, tree of life, 02:44:18.100 |
when you start to get to some of the messy human things, 02:44:20.420 |
are those encodable into computable knowledge? 02:44:23.500 |
- Well, I think that it is a necessary feature 02:44:32.820 |
in a way that gets sort of quickly, you know, 02:44:42.340 |
in the question of automated content selection 02:44:46.680 |
So, you know, the Facebooks, Googles, Twitters, you know, 02:44:50.620 |
how do they rank the stuff they feed to us humans? 02:45:01.860 |
And what is the, what are the kind of principles behind that? 02:45:04.900 |
And what I kind of, well, a bunch of different things 02:45:08.820 |
But one thing that's interesting is being able, 02:45:13.060 |
you know, in fact, you're building sort of an AI ethics. 02:45:16.280 |
You have to build an AI ethics module in effect to decide, 02:45:27.380 |
that, you know, there's not gonna be one of these things. 02:45:30.020 |
It's not possible to decide, or it might be possible, 02:45:33.220 |
but it would be really bad for the future of our species 02:45:35.460 |
if we just decided there's this one AI ethics module, 02:45:44.940 |
And I kind of realized one has to sort of break it up. 02:45:50.900 |
and how one sort of has people sort of self-identify for, 02:45:57.420 |
it's sort of easier because it's for an individual. 02:46:13.300 |
sort of maybe in the, sort of have different AI systems 02:46:18.300 |
that have a certain kind of brand that they represent, 02:46:20.660 |
essentially, but you could have like, I don't know, 02:46:27.620 |
and then libertarian, and there's an Ayn Randian objectivist 02:46:34.780 |
I mean, it's almost encoding some of the ideologies 02:46:41.020 |
That didn't work out so well with the ideologies 02:46:47.420 |
everybody purchased that particular ethics system. 02:46:51.380 |
- And in the same, I suppose, could be done, encoded, 02:46:55.380 |
that system could be encoded into computational knowledge 02:47:06.860 |
Are you playing with those ideas in Wolfram Language? 02:47:12.780 |
Wolfram Language has sort of the best opportunity 02:47:15.740 |
to kind of express those essentially computational contracts 02:47:19.540 |
Now there's a bunch more work to be done to do it in practice 02:47:23.060 |
for deciding the, is this a credible news story? 02:47:29.460 |
I think that that's, you know, that's the question of what, 02:47:34.460 |
exactly what we get to do with that is, you know, 02:47:41.100 |
because there are these big projects that I think about, 02:47:44.180 |
like, you know, find the fundamental theory of physics. 02:47:48.340 |
Box number two, you know, solve the AI ethics problem 02:47:52.700 |
figure out how you rank all content, so to speak, 02:47:56.780 |
That's kind of a box number two, so to speak. 02:48:07.260 |
It's one of these things that's exactly like, 02:48:12.700 |
It's like, whose module do you use to rank that? 02:48:26.060 |
perhaps you have systems that operate under different, 02:48:40.340 |
I mean, I'm not really a politics-oriented person, 02:48:44.460 |
but, you know, in the kind of totalitarianism, 02:48:47.180 |
it's kind of like, you're gonna have this system, 02:49:00.540 |
I as another human, I'm gonna pick this system. 02:49:04.820 |
this case of automated content selection is a non-trivial, 02:49:09.820 |
but it is probably the easiest of the AI ethics situations 02:49:13.300 |
because it is each person gets to pick for themselves, 02:49:20.240 |
By the time you're dealing with other societal things, 02:49:28.700 |
all those kind of centralized kind of things. 02:49:34.660 |
each person can pick for themselves, so to speak. 02:49:38.520 |
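The idea sketched above, that each person picks which "ethics module" ranks their own feed, can be illustrated with a toy example. This is a hypothetical Python sketch, not any real platform's API; the module names and scoring rules are invented for illustration.

```python
# Toy illustration of per-user selectable ranking modules: the same feed
# items are ranked differently depending on which module the user chose.
from typing import Callable, Dict, List

RankModule = Callable[[dict], float]  # maps a content item to a score

MODULES: Dict[str, RankModule] = {
    # Hypothetical "branded" modules, per the idea of selectable AI ethics.
    "chronological": lambda item: -item["age_hours"],
    "engagement":    lambda item: item["likes"] + 2 * item["shares"],
    "low-virality":  lambda item: -item["shares"],
}

def rank_feed(items: List[dict], module_name: str) -> List[dict]:
    """Rank items by the scoring rule of the user's chosen module."""
    score = MODULES[module_name]
    return sorted(items, key=score, reverse=True)

items = [
    {"id": "a", "age_hours": 1, "likes": 5,  "shares": 50},
    {"id": "b", "age_hours": 8, "likes": 90, "shares": 1},
]
# The same two items come out in different orders under different modules.
engagement_order  = [i["id"] for i in rank_feed(items, "engagement")]
low_virality_order = [i["id"] for i in rank_feed(items, "low-virality")]
```

The point of the sketch is only that the ranking policy is a swappable, user-chosen component rather than one centralized module.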
where there's a necessary, public health is one example, 02:49:41.520 |
where that's not, where that doesn't get to be, you know, 02:49:45.120 |
something which people can, what they pick for themselves, 02:49:56.240 |
we need to move away into digital currency and so on, 02:50:04.240 |
and that's where, that's sort of the motivation 02:50:15.340 |
The idea of a computational contract is just to say, 02:50:18.360 |
you know, have something where all of the conditions 02:50:22.400 |
of the contract are represented in computational form, 02:50:24.900 |
so in principle, it's automatic to execute the contract. 02:50:33.000 |
the idea of legal contracts written in English 02:50:35.440 |
or legalese or whatever, and where people have to argue 02:50:43.160 |
you know, we have a much more streamlined process 02:50:46.560 |
if everything can be represented computationally 02:50:48.560 |
and the computers can kind of decide what to do. 02:50:52.600 |
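A computational contract as described here, where every condition is expressed in executable form so the contract can check and execute itself, can be sketched minimally. This is an illustrative Python toy, not Wolfram's actual computational contract machinery; the clause structure and the delivery example are invented.

```python
# Minimal sketch of a "computational contract": each clause's condition is
# an executable predicate, so evaluating the contract needs no human arbiter.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Clause:
    description: str
    condition: Callable[[Dict[str, Any]], bool]  # computable condition
    action: Callable[[Dict[str, Any]], None]     # what to do when it holds

def execute_contract(clauses: List[Clause], world: Dict[str, Any]) -> List[str]:
    """Evaluate every clause against the observed state; run actions that apply."""
    fired = []
    for clause in clauses:
        if clause.condition(world):
            clause.action(world)
            fired.append(clause.description)
    return fired

# Example: a delivery contract whose late-penalty clause executes automatically.
world = {"delivered_day": 12, "deadline_day": 10, "penalty_paid": 0}
clauses = [
    Clause(
        "late-delivery penalty",
        condition=lambda w: w["delivered_day"] > w["deadline_day"],
        action=lambda w: w.update(penalty_paid=100),
    )
]
fired = execute_contract(clauses, world)
```

Instead of arguing back and forth about legalese, the disputed question "was delivery late?" reduces to evaluating a predicate against recorded facts.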
old Gottfried Leibniz back in the, you know, 1600s 02:50:58.600 |
but he had, you know, his pinnacle of technical achievement 02:51:02.040 |
was this brass four-function mechanical calculator thing 02:51:08.440 |
And, you know, so he was like 300 years too early 02:51:11.340 |
for that idea, but now that idea is pretty realistic, 02:51:15.360 |
I think, and, you know, you ask how much more difficult 02:51:18.160 |
is it than what we have now in Wolfram Language 02:51:20.660 |
to express, I call it symbolic discourse language, 02:51:23.800 |
being able to express sort of everything in the world 02:51:32.560 |
I mean, I think it's a, you know, I don't know, 02:51:38.400 |
to have a pretty well-built out version of that, 02:51:41.000 |
that will allow one to encode the kinds of things 02:51:52.840 |
can you try to define the scope of what it is? 02:51:57.840 |
- So we're having a conversation, it's a natural language. 02:52:02.400 |
Can we have a representation of the sort of actionable parts 02:52:06.580 |
of that conversation in a precise computable form 02:52:12.280 |
- And not just contracts, but really sort of some 02:52:15.160 |
of the things we think of as common sense, essentially, 02:52:23.240 |
I'm getting hungry and want to eat something, right? 02:52:27.480 |
That's something we don't have a representation, 02:52:31.400 |
if I was like, I'm eating blueberries and raspberries 02:52:33.600 |
and things like that, and I'm eating this amount of them, 02:52:35.800 |
we know all about those kinds of fruits and plants 02:52:38.340 |
and nutrition content and all that kind of thing, 02:52:40.500 |
but the "I want to eat them" part of it is not covered yet. 02:52:50.080 |
to be able to have a natural language conversation. 02:52:52.640 |
- Right, right, to be able to express the kinds of things 02:52:55.540 |
that say, you know, if it's a legal contract, 02:52:58.380 |
it's, you know, the party's desire to have this and that. 02:53:16.380 |
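The "I want to eat blueberries" example above, turning the actionable part of an utterance into a symbolic expression, can be caricatured in a few lines. This is a deliberately naive toy and not Wolfram's actual symbolic discourse language; the `Desire` structure and the pattern match are invented for illustration.

```python
# Toy sketch: represent the actionable part of "I want to eat blueberries"
# as a symbolic expression instead of free text. Hypothetical structure only.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Desire:
    agent: str
    action: str
    obj: str

def to_symbolic(sentence: str) -> Optional[Desire]:
    """Very naive pattern match for sentences of the form 'I want to <verb> <object>'."""
    prefix = "I want to "
    if sentence.startswith(prefix):
        verb, _, obj = sentence[len(prefix):].partition(" ")
        return Desire(agent="speaker", action=verb, obj=obj)
    return None

d = to_symbolic("I want to eat blueberries")
```

The real problem is vastly harder than this pattern match, which is exactly the gap being described: the nutrition facts are computable today, the desire is not.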
the dream of Turing and formulating the Turing test. 02:53:21.180 |
- So, do you hope, do you think that's the ultimate test 02:53:49.220 |
You know, people have attached the Wolfram Alpha API 02:53:57.080 |
'Cause all you have to do is ask it five questions 02:54:15.500 |
It's actually legitimately, Wolfram Alpha is legitimately, 02:54:26.460 |
- Perhaps the intent, yeah, perhaps the intent. 02:54:32.260 |
he thought about taking Encyclopedia Britannica, 02:54:35.180 |
and, you know, making it computational in some way, 02:54:41.580 |
he was a bit more pessimistic than the reality. 02:54:55.180 |
'cause we had a lot, we had layers of automation 02:55:00.660 |
it's hard to imagine those layers of abstraction 02:55:12.700 |
he would've been able to do it, I don't know. 02:55:22.340 |
- I love the way you say easy questions, man. 02:55:28.180 |
rule 30 and cellular automata humbling your sense of 02:55:32.460 |
human beings having a monopoly on intelligence. 02:55:41.560 |
with all the things you learn from computation, 02:55:50.220 |
I think intelligence is, at some level, just computation, 02:55:57.220 |
to be computation that is doing things we care about. 02:56:00.340 |
And, you know, that's a very special definition. 02:56:04.780 |
It's a very, you know, when you try and make it, 02:56:08.580 |
"Well, intelligence is this, it's problem solving, 02:56:13.720 |
"It's operating within a human environment type thing." 02:56:19.140 |
If you say, "Well, what's intelligence in general?" 02:56:38.340 |
how many things, if we were to pick randomly, 02:56:42.500 |
is your sense, would have the kind of impressive, 02:57:14.980 |
to sort of reach this kind of equivalence point, 02:57:26.860 |
there's this whole long sort of biological evolution, 02:57:33.180 |
cultural evolution that our species has gone through. 02:57:39.940 |
But it has achieved something very special to us. 02:57:49.340 |
feels like human thing, of subjective experience 02:57:54.780 |
- Well, I think it's a deeply slippery thing, 02:57:57.100 |
and I'm always wondering what my cellular automata feel. 02:58:09.500 |
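The rule 30 cellular automaton mentioned earlier takes only a few lines to generate, which is part of why it is so striking: each cell's next state depends on its left neighbor, itself, and its right neighbor, read off from the binary expansion of the number 30. A minimal Python sketch (fixed-width grid with zero boundary cells, rather than the infinite tape of the idealized automaton):

```python
# Rule 30 elementary cellular automaton. The rule number's binary digits
# (30 = 0b00011110) give the next state for each 3-cell neighborhood.
RULE = 30

def step(cells):
    """Apply rule 30 once, treating out-of-range neighbors as 0."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> neighborhood) & 1)
    return out

# Start from a single black cell and evolve a few steps.
row = [0] * 7
row[3] = 1
history = [row]
for _ in range(3):
    row = step(row)
    history.append(row)
for r in history:
    print("".join("#" if c else "." for c in r))
```

Despite the trivial rule, the center column of the pattern this produces behaves, for all practical purposes, randomly; that is the humbling observation in the question above.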
do you think consciousness can emerge from computation? 02:58:12.940 |
- Yeah, I mean, everything, whatever you mean by it, 02:58:20.340 |
I was at an AI ethics conference fairly recently, 02:58:23.500 |
and people were, I think maybe I brought it up, 02:58:29.100 |
When will AIs have, when should we think of AIs 02:58:41.980 |
And some, actually a philosopher in this case, 02:58:44.140 |
it's usually the techies who are the most naive, 02:59:10.860 |
You'll end up saying this thing that has sort of, 02:59:17.260 |
I think that's just another one of these words 02:59:22.700 |
there's no ground truth definition of what that means. 02:59:44.260 |
- But it may have been actually a human thing 02:59:47.020 |
where the humans encouraged it and said, basically, 02:59:52.620 |
'cause we're gonna be, you know, interacting with you. 02:59:55.020 |
And so we want you to be sort of very Turing test like, 03:00:12.060 |
where consciousnesses are not counted like humans are. 03:00:18.660 |
- So in many ways, you've launched quite a few ideas, 03:00:23.660 |
revolutions that could, in some number of years, 03:00:31.620 |
sort of more than they had or even had already. 03:00:42.420 |
even beside the discussion of fundamental laws of physics, 03:00:59.580 |
I mean, I think you can kind of see the map, actually. 03:01:05.780 |
this idea of computation is sort of a, you know, 03:01:09.260 |
it's a big paradigm that lots and lots of things 03:01:13.780 |
And it's kind of like, you know, we talk about, 03:01:20.040 |
this organization has momentum in what it's doing. 03:01:28.080 |
In time, things like computational irreducibility 03:01:36.100 |
I happened to be testifying at the US Senate. 03:01:40.060 |
computational irreducibility is now, can be, you know, 03:01:45.220 |
and being repeated by people in those kinds of settings. 03:01:48.260 |
And that's only the beginning because, you know, 03:01:53.060 |
will end up being something really important for, 03:01:56.460 |
I mean, it's kind of a funny thing that, you know, 03:02:00.560 |
one can kind of see this inexorable phenomenon. 03:02:03.360 |
I mean, it's, you know, as more and more stuff 03:02:05.900 |
becomes automated and computational and so on, 03:02:08.940 |
so these core ideas about how computation work 03:02:12.580 |
necessarily become more and more significant. 03:02:15.220 |
And I think one of the things for people like me 03:02:42.340 |
Yeah, every sense, I've been interested in that for, 03:02:47.260 |
the big discontinuity of human history will come 03:02:49.620 |
when one achieves effective human immortality. 03:02:53.860 |
And that's gonna be the biggest discontinuity 03:02:56.860 |
- If you could be immortal, would you choose to be? 03:03:13.920 |
I mean, the way that human motivation will evolve 03:03:18.420 |
when there is effective human immortality is unclear. 03:03:24.900 |
you look at the human condition as it now exists 03:03:33.820 |
You know, the human condition as it now exists has, 03:03:41.180 |
that is deeply factored into the human condition 03:03:46.420 |
that is indeed an interesting question is, you know, 03:03:50.740 |
from a purely selfish, I'm having fun point of view, 03:04:08.740 |
in a time of human immortality is an interesting one. 03:04:19.700 |
'cause I was kind of, you know, it's like, okay, 03:04:33.860 |
And then, you know, and then that seems boring 03:04:49.860 |
on realizing that if you look at human history 03:04:56.300 |
this is the big story at any given time in history, 03:05:05.020 |
Well, there's a whole chain of discussion about, 03:05:07.060 |
well, I'm doing this because of this, because of that. 03:05:10.140 |
And a lot of those becauses would have made no sense 03:05:16.540 |
- Even the, so the interpretation of the human condition, 03:05:28.420 |
I mean, the number of people in, I don't know, doing, 03:05:34.900 |
for the greater glory of God is probably not that large. 03:05:48.980 |
about computation so much and been humbled by it, 03:05:54.500 |
- (laughs) Well, it's, you know, that's a thing where, 03:06:03.840 |
I, you know, I do things which I find fulfilling to do. 03:06:09.580 |
I'm not sure that I can necessarily justify, you know, 03:06:19.040 |
it so happens that the things I find fulfilling to do, 03:06:21.540 |
some of them are quite big, some of them are much smaller. 03:06:24.860 |
You know, I, they're things that I've not found interesting 03:06:28.360 |
earlier in my life and I now found interesting. 03:06:35.180 |
which I didn't find that interesting when I was younger. 03:06:38.740 |
And, you know, can I justify that in some big global sense? 03:06:46.460 |
why I think it might be important in the world, 03:06:53.620 |
which I can't, you know, explain on a sort of, 03:07:05.340 |
I don't think I can find a ground truth to my life 03:07:09.860 |
for kind of the ethics for the whole of civilization. 03:07:27.620 |
I've had different kind of goal structures and so on. 03:07:36.460 |
but in some sense, I find it funny from my observation 03:07:40.220 |
is I kind of, you know, it seems that the universe 03:07:44.820 |
is using you to understand itself in some sense. 03:07:53.780 |
to some simple rule, everything is connected, so to speak. 03:07:57.900 |
And so it is inexorable in that case that, you know, 03:08:02.700 |
if I'm involved in finding how that rule works, 03:08:10.660 |
it's inexorable that the universe set it up that way. 03:08:13.580 |
But I think, you know, one of the things I find 03:08:21.420 |
if indeed we end up as the sort of virtualized consciousness, 03:08:25.740 |
the disappointing feature is people will probably care less 03:08:34.420 |
what the machine code is down below underneath this thing 03:08:38.700 |
is much less important if you're virtualized, so to speak. 03:08:42.420 |
And I think the, although I think my own personal, 03:08:47.860 |
you talk about ego, I find it just amusing that, 03:08:51.100 |
you know, kind of, you know, if you're imagining 03:08:56.340 |
like what does the virtualized consciousness do 03:08:59.760 |
Well, you can explore, you know, the video game 03:09:02.940 |
that represents the universe as the universe is, 03:09:05.860 |
or you can go off, you can go off that reservation 03:09:09.220 |
and go and start exploring the computational universe 03:09:13.420 |
And so in some vision of the future of history, 03:09:19.580 |
are all sort of pursuing things like my new kind of science 03:09:23.940 |
sort of for the rest of eternity, so to speak, 03:09:35.700 |
- I don't think there's a better way to end it. 03:09:46.780 |
with Stephen Wolfram, and thank you to our sponsors, 03:09:53.620 |
by getting ExpressVPN at expressvpn.com/lexpod 03:09:58.060 |
and downloading Cash App and using code LEXPODCAST. 03:10:02.540 |
If you enjoy this podcast, subscribe on YouTube, 03:10:07.260 |
support it on Patreon, or simply connect with me on Twitter 03:10:19.660 |
that we as humans are in effect computationally 03:10:26.340 |
but the principle of computational equivalence 03:10:28.740 |
also implies that the same is ultimately true 03:10:40.220 |
the principle of computational equivalence now shows 03:10:43.060 |
that in a certain sense, we're at the same level. 03:10:46.460 |
For the principle implies that what goes on inside us 03:10:52.540 |
of computational sophistication as our whole universe. 03:10:56.220 |
Thank you for listening and hope to see you next time.