All-In Summit: Stephen Wolfram on computation, AI, and the nature of the universe
Chapters
0:00 Dave welcomes Stephen Wolfram to All-In Summit ’23!
2:37 Computational irreducibility
4:58 The paradox of simple heuristics
6:49 AI
8:57 Cellular automata
14:10 Limitations of AI
18:13 Syntax, logic, LLMs and other high-potential AI realms
23:57 Generative AI and interconcept space
26:20 The nature of the universe
29:54 Electrons – size, topology and structure
31:18 Time, spacetime, gravity and the boundaries of human observers
36:53 Persistence and other elements of consciousness humans take for granted
38:09 The concept of the ruliad
41:33 Joy
- Yeah, everybody else was scared away, I'm afraid. 00:00:19.400 |
Interview Stephen Wolfram on stage in 40 minutes. 00:00:40.460 |
- It's a huge honor to talk to Stephen Wolfram, 00:00:58.340 |
and quickly became a leader in the emerging field 00:01:09.700 |
He published his first scientific paper at age 15, 00:01:13.220 |
and had received his PhD in theoretical physics 00:01:21.780 |
quantum field theory, cosmology, and complexity, 00:01:24.700 |
discovering a number of fundamental connections 00:01:29.380 |
and inventing such concepts as computational irreducibility, 00:01:34.300 |
Wolfram's work led to a wide range of applications 00:01:45.500 |
to develop a new randomness generation system 00:01:47.740 |
and a new approach to computational fluid dynamics, 00:01:54.940 |
was a historic step that has defined a new dimension 00:02:05.580 |
such as Siri and Alexa and so on, among others. 00:02:13.020 |
- I worked at the Lawrence Berkeley National Lab 00:02:16.900 |
at the Center for Beam Physics for two and a half years, 00:02:24.020 |
So, that's when I first got to know about you and your work. 00:02:29.020 |
Who here has seen an interview that Stephen's done before, 00:02:36.180 |
- I wanna try and guide the conversation a little bit. 00:02:52.260 |
and I wanna try and connect with a broad audience 00:02:57.460 |
- Okay, so, at the base, computation is about 00:03:01.300 |
you specify rules, and then you let those rules, 00:03:05.460 |
you figure out what the consequences of those rules are. 00:03:10.700 |
the computer will run your program, it will generate output. 00:03:13.700 |
I would say that the bigger picture of this is 00:03:18.980 |
We can just use words, we can talk vaguely about things. 00:03:27.900 |
That's been done in logic, it's been done in mathematics, 00:03:30.580 |
it's done in its most general form in computation, 00:03:36.740 |
Then the question is, given that you have the rules, 00:03:41.700 |
do you then know everything about what will happen? 00:03:49.220 |
once you work out the equations and so on, then you're done. 00:04:05.180 |
It's kind of like if you just run the rules step by step, 00:04:08.140 |
step by step, it's making some pattern on screen 00:04:10.900 |
or whatever else, you can just run all those steps, 00:04:16.220 |
Then you can ask yourself, can you jump ahead? 00:04:19.900 |
I know the answer is going to be 42 or something at the end. 00:04:22.820 |
Well, the point is that that isn't in general possible. 00:04:26.100 |
That in general, you have to go through all the steps 00:04:32.500 |
of kind of the sort of prediction in science. 00:04:34.780 |
And it's something people have gotten very used to the idea 00:04:37.620 |
that with science, we can predict everything. 00:04:42.340 |
we see this whole phenomenon of computational irreducibility. 00:04:52.220 |
But from within science, we see this fundamental limitation, 00:04:59.500 |
- So we can't just skip ahead in a lot of cases. 00:05:02.700 |
We can't just create simple heuristics or simple solves 00:05:07.580 |
that avoid all of the hard work to simulate something, 00:05:24.860 |
we don't need to go through all of those steps of time. 00:05:27.300 |
We could just say, at the end, it will be 37 or something. 00:05:33.860 |
that it was worthwhile to sort of lead our lives 00:05:38.620 |
It will be a kind of, you don't really need time to progress. 00:05:41.380 |
You can always just say what the answer will be. 00:05:43.580 |
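The contrast being drawn here can be sketched in code. Below is a toy example, in Python, of a computationally *reducible* rule, one where you really can "just say what the answer will be" via a closed form instead of running every step. For irreducible rules, no such shortcut is known; this example only illustrates the reducible side of the trade-off.

```python
# A computationally reducible rule: x -> 2x + 1.
# Because it is linear, there is a closed-form shortcut,
# so we can "jump ahead" without running every step.

def step(x):
    return 2 * x + 1

def run_steps(x, n):
    # The slow way: apply the rule step by step.
    for _ in range(n):
        x = step(x)
    return x

def shortcut(x, n):
    # The reducible way: after n steps, x -> 2^n * x + (2^n - 1).
    return 2**n * x + (2**n - 1)

# The shortcut agrees with the step-by-step run.
assert run_steps(5, 20) == shortcut(5, 20)
```

For a rule like Rule 30 (discussed below), no analogue of `shortcut` is known: as far as anyone can tell, you have to run the steps.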
I mean, this kind of idea has many consequences. 00:05:57.820 |
I never want the AI system to do this very bad thing 00:06:03.340 |
So then you say, well, can I work out what it will do? 00:06:15.140 |
You kind of have this trade-off with an AI system. 00:06:17.340 |
You can either say, let's constrain it a lot. 00:06:23.220 |
But then, or let's let it sort of have its way 00:06:32.580 |
of the computational capabilities that it has. 00:06:36.220 |
We either can understand what's gonna happen, 00:06:42.500 |
or we always say, okay, we're going to run the risk 00:06:53.460 |
a lot of what people call AI are predictive models 00:07:01.540 |
that make some prediction of what the right next step 00:07:09.300 |
or a series of words to generate a chat response 00:07:14.300 |
through an LLM like ChatGPT or Bard or what have you. 00:07:17.740 |
Those are statistical models trained on past data. 00:07:22.900 |
Are they, is that different than the problems 00:07:32.340 |
the nature of the universe, solving bigger problems, 00:07:43.260 |
heuristical, statistical thing that just predicts stuff. 00:07:46.620 |
- Right, well, I mean, the computational universe 00:07:49.140 |
of possible programs, possible rules is vast. 00:07:54.060 |
I just wanna make sure everyone understands that. 00:08:02.300 |
- Yes, I mean, so people are used to writing programs 00:08:05.020 |
that are intended for particular human purposes. 00:08:07.620 |
But let's say you just write a program at random. 00:08:09.980 |
You just put in the program, it's a random program. 00:08:13.580 |
Question is, what does the random program do? 00:08:16.060 |
So a big thing that I discovered in the 1980s 00:08:24.860 |
I had assumed that if you want to do complicated things, 00:08:28.180 |
you would have to set up a complicated program. 00:08:32.300 |
Turns out that in nature, nature kind of discovered 00:08:37.540 |
You know, we see all this complexity in nature. 00:08:39.700 |
It seems like the big origin of that complexity 00:08:42.220 |
is just this phenomenon that even a very simple program 00:08:47.620 |
So, I mean, that's the, so this sort of universe-- 00:08:50.500 |
- Just give a quick example, if you wouldn't mind. 00:09:08.020 |
And you have a rule that says you go down the page, 00:09:13.820 |
You go down the page and you say the color of a cell 00:09:34.620 |
- It looks like a pyramid or triangle when it's done. 00:09:41.340 |
into this computational universe and see what's out there. 00:09:50.180 |
Rule 30, you start it off from one black cell 00:09:52.900 |
and it makes this really complicated pattern. 00:09:58.580 |
to a huge amount of effort-- - It looks designed. 00:10:04.820 |
And that's the only way that thing could have been created 00:10:23.260 |
it looks for all practical purposes completely random. 00:10:28.980 |
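The cellular automaton being described, go down the page, coloring each cell from the three cells above it, can be written in a few lines. This is a minimal Python sketch of Rule 30 started from a single black cell; the apparent randomness of the output from such a trivial rule is the phenomenon under discussion.

```python
# Minimal Rule 30 cellular automaton. Each new cell's color depends on
# the cell above it and that cell's two neighbors:
#   new cell = left XOR (center OR right)
# Starting from a single black cell (1), the pattern rapidly looks random.

def rule30_step(row):
    # Pad with white (0) cells so the pattern can grow outward.
    padded = [0] + row + [0]
    new = []
    for i in range(len(padded)):
        left = padded[i - 1] if i > 0 else 0
        center = padded[i]
        right = padded[i + 1] if i < len(padded) - 1 else 0
        new.append(left ^ (center | right))
    return new

row = [1]  # a single black cell
for _ in range(15):
    print("".join("█" if c else " " for c in row).center(40))
    row = rule30_step(row)
```

Running this prints the familiar nested-yet-irregular triangle; the center column in particular passes standard randomness tests despite the two-line rule.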
from some simple rule, when you see it produced, 00:10:34.060 |
It's kind of like a good analogy if you know, 00:10:38.100 |
you know, 3.14159, that's about as far as I can go. 00:10:56.980 |
the ratio of the circumference to the diameter of a circle, 00:11:08.020 |
it produces things that look very complicated. 00:11:17.540 |
- By the way, it's not such a simple computer. 00:11:19.820 |
Because it turns out, when you kind of try and rank, 00:11:34.060 |
how sophisticated are the computations it can do? 00:12:15.860 |
that complicated behavior to do any computation you want. 00:12:21.260 |
for this computational irreducibility phenomenon, 00:12:29.300 |
and you're trying to predict what it's going to do. 00:12:33.780 |
and you as the predictor, are computational systems. 00:12:37.620 |
So then the question is, can you, the predictor, 00:12:40.220 |
be so much smarter than the system you're predicting, 00:12:44.500 |
I know what you're going to do, I've got the answer. 00:12:52.900 |
So, this principle of computational equivalence says, 00:12:57.300 |
or mathematics, or statistics, or whatever else, 00:13:05.780 |
And that's why you can't make that prediction. 00:13:08.100 |
That's why computational irreducibility happens. 00:13:13.980 |
I mean, the thing that we have only just started mining 00:13:18.980 |
is this computational universe of all possible programs. 00:13:22.300 |
Most programs that we use today were engineered by people. 00:13:30.020 |
- So now we have a program that can make programs. 00:13:36.060 |
We can say, if we know what we want the program to do, 00:13:38.940 |
we can just search this computational universe 00:13:42.700 |
Often, I've done this for many years, for many purposes. 00:13:47.020 |
- Well, so, very simple example, actually, from Rule 30, 00:13:50.460 |
is you want to make a random number generator. 00:13:53.060 |
You say, how do I make something that makes good randomness? 00:14:00.660 |
and pretty soon you find one that makes good randomness. 00:14:03.620 |
You ask me, why does it make good randomness? 00:14:07.300 |
It's not something, there's no narrative explanation. 00:14:09.980 |
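The "search the computational universe" idea can be sketched concretely. Below is a toy Python version: enumerate all 256 elementary cellular-automaton rules, run each from a single black cell, and score the randomness of its center column with a crude balance test (a good generator should emit roughly 50% ones). The scoring criterion is an illustrative stand-in, not the statistical test battery one would use in practice.

```python
# Toy "search of the computational universe" for a randomness source:
# try all 256 elementary CA rules and rank them by how balanced the
# bits of their center column are.

def ca_step(row, rule):
    # Encode each 3-cell neighborhood as a number 0-7 and look up the
    # corresponding bit of the rule number.
    padded = [0, 0] + row + [0, 0]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def center_column(rule, steps=200):
    row = [1]
    bits = []
    for _ in range(steps):
        bits.append(row[len(row) // 2])
        row = ca_step(row, rule)
    return bits

def balance_score(bits):
    # 0.0 means a perfect 50/50 mix of ones and zeros.
    return abs(sum(bits) / len(bits) - 0.5)

candidates = sorted(range(256), key=lambda r: balance_score(center_column(r)))
print(candidates[:5])  # rules whose center columns look most balanced
```

As the conversation notes, a rule found this way (like Rule 30 itself) works, but there is no narrative explanation of *why* it produces good randomness.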
- So now with AI, we're generating a ton of programs, 00:14:19.300 |
to solve problems or figure stuff out for us? 00:14:23.540 |
AI is very limited in the computational universe. 00:14:27.500 |
- This is the connection I wanted to make, because, yeah. 00:14:30.060 |
the computational universe is all these possible rules. 00:14:32.900 |
We can talk later about whether the universe-- 00:14:43.340 |
I hope we're gonna talk about whether the physical universe-- 00:14:48.140 |
then we're gonna smoke weed, and then we're gonna go to lunch. 00:15:04.220 |
- All the programs, the library of possible programs. 00:15:10.220 |
Most of those things are things that we humans 00:15:14.700 |
"I don't know what the significance of that is." 00:15:22.460 |
a large language model, we've given it, you know, 00:15:31.980 |
that we humans have selected that we care about. 00:15:42.880 |
"that are like what you said you care about." 00:15:45.580 |
- And that's a tiny part of the computational universe. 00:16:13.740 |
A computational exercise, something we gotta figure out, 00:16:18.900 |
outside of what AI is possibly able to solve for today. 00:16:31.900 |
- Without esoteric topology and algebra or something. 00:16:42.540 |
- Okay, so we've got something we're trying to figure out. 00:16:45.140 |
This collection of cells, it behaves in this way. 00:16:49.860 |
and make a tumor, or is it going to eventually halt 00:16:58.000 |
where, you know, we could, if we knew enough detail, 00:17:00.780 |
we could simulate what every cell's gonna do. 00:17:02.740 |
- Every molecule, every atom, every cell, every interaction. 00:17:05.260 |
If we knew enough, we could simulate each of those steps. 00:17:10.260 |
answer that question. - Right, you can't jump ahead 00:17:11.720 |
and say, so I know this thing is never gonna turn 00:17:15.220 |
- Right, and so, simulating the physical universe, 00:17:30.420 |
becomes this thing where we don't have a simple heuristic, 00:17:34.220 |
a simple equation that says, based on this condition, 00:17:42.300 |
I mean, people have hoped that you can just write down 00:17:46.880 |
and then work out the answer directly from that. 00:17:54.260 |
about how the universe works, sort of philosophically, 00:18:10.540 |
the comet is gonna be in this place at this time, and so on. 00:18:15.060 |
that we can't solve with AI we have today, or can we? 00:18:20.060 |
And can you just help me and help everyone understand, 00:18:23.000 |
what are you excited about with respect to AI? 00:18:25.460 |
What is it that has happened in the last couple of months 00:18:30.380 |
and what does that allow us to do that we couldn't do 00:18:33.180 |
before using just raw approaches to computation? 00:18:36.700 |
- Okay, so I mean, several different things here. 00:18:41.540 |
we humans have been interested in a small part 00:18:46.660 |
AI is reflecting the part that we have been interested in. 00:18:50.260 |
That sort of AI is doing those kinds of things. 00:18:54.500 |
I mean, the big thing that happened a year ago 00:18:56.980 |
was the arrival of successful large language models. 00:19:16.340 |
and not just completely boring and irrelevant and so on. 00:19:20.540 |
And I think there's this jump that happened now. 00:19:20.540 |
There was a sort of previous jump in machine learning 00:19:25.300 |
that happened in 2012 with images and things like that, 00:19:27.500 |
So what's the significance of large language models? 00:19:39.380 |
that can successfully kind of complete an essay or something? 00:19:45.420 |
that it's kind of telling us a piece of science 00:19:50.900 |
It's a question of sort of how do you construct language? 00:19:56.220 |
that there's kind of a syntactic grammar of language, 00:19:58.300 |
you know, noun, verb, noun, et cetera, et cetera, et cetera. 00:20:05.660 |
is that there is a kind of semantic grammar of language. 00:20:12.020 |
And, you know, for example, people are always impressed 00:20:14.100 |
that the LLMs have figured out how to, quote, "reason." 00:20:20.020 |
logic is this thing that's kind of this formalization 00:20:25.340 |
and it's a formalization that was discovered, 00:20:26.980 |
you know, by Aristotle and people in antiquity. 00:20:32.540 |
is they looked at lots of speeches people had given, 00:20:43.140 |
That's exactly what the LLM has done as well. 00:20:45.140 |
It's noticed that there are these patterns of language 00:20:49.980 |
and that we call logic or reasoning or something like that. 00:20:55.900 |
the LLMs provide this kind of linguistic user interface. 00:20:59.300 |
We've had kind of graphical user interfaces and so on. 00:21:04.180 |
You say, you know, you've got some very small set of points 00:21:13.020 |
I'm going to send that report to somebody else. 00:21:14.700 |
They're probably going to feed it to their own LLM. 00:21:16.620 |
It's going to grind it down to some small set of results." 00:21:21.100 |
It's kind of, you know, it's allowing one to use language 00:21:26.540 |
there are a lot of these practical use cases for this. 00:21:29.820 |
-It's always seemed to me like the rate-limiting step 00:21:34.140 |
Like, the rate at which you and I are speaking to one another 00:21:39.100 |
Like, just a couple words a minute or something. 00:21:41.020 |
-The question is, what really is communication? 00:21:51.660 |
that are traveling through vibrations in the air to my ear. 00:22:09.980 |
one of the things that's sort of interesting is this, 00:22:16.900 |
You know, the structure of each of our brains is different. 00:22:19.580 |
So the particular nerve firings are different. 00:22:22.140 |
But we're trying to package up those thoughts 00:22:28.060 |
It kind of packages concepts into something transportable. 00:22:32.740 |
-Because they're outputting a packet of communication to me. 00:22:36.220 |
-But is there anything else that's exciting to you 00:22:47.420 |
-You know, they're telling us there is a science of LLMs, 00:22:53.300 |
There's kind of a bulk science of knowledge and things 00:22:56.460 |
that the LLMs are kind of showing us is there. 00:22:59.300 |
There's a kind of a science of the semantics of language, 00:23:08.540 |
and we now get to make science about that kind of nature. 00:23:11.900 |
Now, it's not obvious that we can make sort of science 00:23:14.300 |
where we can tell a narrative story about what's going on. 00:23:18.500 |
we're just sort of dumped into computational irreducibility, 00:23:22.500 |
-So there's this black box that the training model created. 00:23:29.500 |
I put a bunch of words in, a bunch of words come out. 00:23:32.180 |
Now you're saying we're going to try and understand the nature, 00:23:38.180 |
and that'll tell us a little bit something about -- 00:23:42.420 |
we'll discover that human language is much less -- 00:23:52.220 |
-In other words, it's showing us rules of human language 00:23:57.540 |
Now, if you ask, "What else do we learn from the AI?" 00:24:00.140 |
So I'll give you another example of something 00:24:08.980 |
as we can now see also making videos and so on. 00:24:11.580 |
There's this question of you go inside the AI, 00:24:18.860 |
is represented by some vector of 1,000 numbers, let's say, 00:24:21.660 |
the concept of a dog and other 1,000 numbers. 00:24:24.140 |
You just say, "Let's take these vectors of numbers, 00:24:34.700 |
Okay? So you can have these definite concepts, 00:24:39.060 |
in this sort of space of all possible concepts. 00:24:49.580 |
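The "interconcept space" idea, concepts as vectors you can move between, can be sketched in a few lines. The vectors below are made-up 8-dimensional stand-ins for the roughly 1,000-number embeddings mentioned above; only the interpolation mechanics are the point.

```python
import numpy as np

# Toy "interconcept space": concepts as vectors, with interpolation
# between them. These 8-dimensional vectors are invented for
# illustration; real models use learned embeddings of ~1,000 numbers.

cat = np.array([0.9, 0.1, 0.4, 0.7, 0.2, 0.8, 0.3, 0.5])
dog = np.array([0.8, 0.3, 0.6, 0.2, 0.9, 0.7, 0.1, 0.4])

def interpolate(a, b, t):
    """A point a fraction t of the way from concept a to concept b."""
    return (1 - t) * a + t * b

halfway = interpolate(cat, dog, 0.5)  # a "concept" between cat and dog
print(halfway)
```

Points along this line (and everywhere else in the space) mostly don't correspond to any word we have, which is the "interconcept space" being discussed.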
And the answer is there's a huge amount of stuff, 00:24:52.540 |
even from a generative AI that's learned from us. 00:25:04.860 |
that, you know, simple estimate, simple case. 00:25:12.500 |
is now filled with actual words that we have? 00:25:21.380 |
You know, what -- Yeah, that's a wrong number. 00:25:22.900 |
I didn't know that number. That's interesting. Yeah. 00:25:25.780 |
that's the -- You know, when it produces the -- 00:25:28.180 |
When it says, "What's the next word going to be?" 00:25:31.620 |
They're just trying to predict the next word. 00:25:36.340 |
which are the probabilities for each of the possible 00:25:42.100 |
or the next most likely one or whatever else. 00:25:48.860 |
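What "predict the next word" means mechanically can be sketched as follows: the model emits a score (logit) per candidate token, a softmax turns the scores into probabilities, and you either take the most likely token or sample from the distribution. The vocabulary and logits here are invented for illustration.

```python
import math
import random

# Sketch of next-token prediction: logits -> probabilities -> a choice.
# Vocabulary and logit values are made up for illustration.

vocab = ["cat", "dog", "the", "ran", "away"]
logits = [2.0, 1.5, 0.3, -1.0, 0.1]

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
best = vocab[probs.index(max(probs))]              # greedy: most likely token
sampled = random.choices(vocab, weights=probs)[0]  # stochastic: sample instead
print(best, dict(zip(vocab, (round(p, 3) for p in probs))))
```

The choice between the greedy pick and a weighted sample is exactly the "next most likely one or whatever else" being described.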
in the space of sort of all possible concepts, 00:26:00.300 |
of even the kind of concepts that are revealed 00:26:02.780 |
by the things that sort of we put out there on the Web 00:26:08.500 |
Yeah, there are 10 to the 80th atoms in the universe. 00:26:12.220 |
compared to a lot of the numbers one deals with. 00:26:19.420 |
Yeah, it's a -- We should talk about the universe. 00:26:30.300 |
Yeah, yeah. Please. -We got your point of view. 00:26:41.660 |
Yeah, so I mean -- -What does consensus have wrong? 00:26:44.180 |
Yeah, right. Well, so, I mean, the physics -- 00:26:47.580 |
This is like where you, like, let the reins go, 00:26:47.580 |
Yeah, so, like, we had a great chat at dinner 00:26:57.100 |
physics as we know it right now was kind of -- 00:27:07.460 |
quantum mechanics, the theory of small kinds of things; 00:27:10.300 |
and statistical mechanics, the theory of heat; 00:27:13.060 |
and sort of how in the second law of thermodynamics, 00:27:32.140 |
and I think we figured it out, which is kind of exciting. 00:27:35.260 |
Not something I expected to see in my lifetime, 00:27:38.140 |
not something I kind of expected to see for 50 or 100, 00:27:42.300 |
And kind of the number-one thing that you say, 00:27:52.380 |
You put things in any different place in space." 00:27:54.620 |
People have been arguing about whether space is continuous 00:27:59.020 |
-Discrete means that it's broken up into little pieces. 00:28:02.940 |
So, you know, this argument also happened for matter. 00:28:15.900 |
The answer is water is made of discrete molecules. 00:28:24.100 |
Space, people still think it's a continuous kind of thing. 00:28:27.220 |
So the kind of starting point of things I've tried to figure out 00:28:36.380 |
not like physical atoms like hydrogen and helium and things. 00:28:49.700 |
I mean, they're just all that one knows about them. 00:28:53.740 |
All one knows is that one is different from another. 00:28:56.940 |
-So there are these two discrete things next to each other. 00:29:01.060 |
-There's no next, right, because there is no space. 00:29:11.100 |
I mean, it's kind of like a giant friend network 00:29:18.500 |
And all the things that we observe-- electrons, 00:29:21.580 |
black holes, all these kinds of things-- they're all just 00:29:27.780 |
how could that possibly be the way things are? 00:29:35.680 |
say there's a definite eddy that's moving through the water, 00:29:38.380 |
yet it's made of lots of discrete molecules of water, 00:29:45.260 |
And so it is with electrons, et cetera, et cetera. 00:29:54.740 |
-So does each particle, as we know it-- an electron, 00:30:04.740 |
its relative connectedness to other particles? 00:30:15.780 |
-No, an electron is a pretty big thing relative 00:30:19.420 |
We don't know exactly how big, but it's a big, floppy thing. 00:30:26.860 |
has been assumed that electrons are actually infinitesimally 00:30:39.900 |
and you can either knot it or you can not knot it. 00:30:42.460 |
And there are lots of different ways you can make the knot, 00:30:44.700 |
but it's still-- it's either knotted or it's not knotted. 00:30:46.740 |
And there's either an electron or there isn't an electron. 00:31:01.060 |
And then if you look at a large scale, what are all these-- 00:31:04.780 |
and I should say, by the way, that these atoms of space, 00:31:07.500 |
the main thing that's happening is pieces of this network 00:31:25.100 |
The change through this physical network is time. 00:31:31.620 |
Time is the progressive change of the network. 00:31:35.500 |
One particle touches another one, touches another one. 00:31:48.580 |
And it happens to be operating on these hypergraphs 00:31:51.420 |
rather than just lines of black and white cells. 00:32:00.380 |
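The idea of a network updated by rewrite rules can be sketched in miniature. The rule below, keep each edge {x, y} and grow a fresh edge {y, z} off it, is an illustrative stand-in, not one of the actual Wolfram model rules; the point is just that a network plus a local rewrite rule yields step-by-step growth.

```python
# Toy network-rewriting system in the spirit of the hypergraph models
# described above. The rule here is invented for illustration:
#   {x, y}  ->  {x, y}, {y, z}   (z is a freshly created node)

def rewrite_step(edges, next_node):
    new_edges = []
    for (x, y) in edges:
        new_edges.append((x, y))          # keep the old edge
        new_edges.append((y, next_node))  # grow a fresh edge off node y
        next_node += 1
    return new_edges, next_node

edges, fresh = [(0, 1)], 2
for _ in range(4):
    edges, fresh = rewrite_step(edges, fresh)
print(len(edges), "edges after 4 steps")  # the network doubles each step
```

In this picture, "time" is just the progression of these rewrites, which is the claim made in the surrounding discussion.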
I don't know whether you'd call it an exercise, but yes. 00:32:07.180 |
And time is the progress of that computation. 00:32:16.940 |
No, I think that's a philosophically rather confused 00:32:19.780 |
I mean, there's a little bit deeper in the rabbit hole 00:32:31.700 |
is, just like you have all these molecules bouncing around, 00:32:34.460 |
they make sort of continuous fluids like water, 00:32:37.500 |
you can ask, what do all these atoms of space make? 00:32:50.940 |
could derive the properties of something like that 00:32:56.260 |
I mean, I should say, OK, this is a big, complicated subject. 00:33:11.380 |
The thing that's pretty interesting in all of this 00:33:14.220 |
is the way that the nature of us as observers 00:33:17.820 |
is critical to the kind of thing that we observe in-- 00:33:22.920 |
heard in the last year when I watched your interview 00:33:26.900 |
through this statement again, because this is so important. 00:33:41.020 |
looking at a bunch of molecules bouncing around in a gas, 00:33:44.180 |
and we say, what do we see in those molecules? 00:33:48.260 |
Well, we see things like the gas laws, pressure, volume, 00:33:52.300 |
We see things like the gas tends to get more random, 00:34:00.500 |
If we were observers who could trace every molecule, 00:34:11.460 |
are bounded in our computational capabilities. 00:34:13.900 |
We're not capable of untangling all those kinds of detail. 00:34:23.840 |
We can't trace the motion of each atom individually. 00:34:27.660 |
So our heuristic is PV = nRT, the gas laws. 00:34:27.660 |
as opposed to the energy of each individual atom. 00:34:45.240 |
We take an average of a lot of stuff to understand it. 00:34:49.180 |
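The averaging point can be made concrete with a tiny simulation: each individual molecular speed is effectively unpredictable, but the average kinetic energy, the quantity a bounded observer measures as temperature, is stable. The unit-variance speed distribution below is an arbitrary illustrative choice, not a physical model.

```python
import random

# A bounded observer's view of a gas: individual molecule speeds look
# random, but the average kinetic energy is stable and predictable.
# Speeds drawn from a unit-variance normal distribution for illustration.

random.seed(0)
n = 100_000
speeds = [random.gauss(0, 1) for _ in range(n)]
kinetic = [v * v / 2 for v in speeds]

avg = sum(kinetic) / n
print(round(avg, 2))  # close to 0.5 for unit-variance speeds
```

No single entry of `speeds` is predictable, yet `avg` barely moves between runs: that stability of the average is what the gas laws describe.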
And the point is that people haven't understood the fact 00:34:51.540 |
that we have to take that average, because what's 00:34:55.340 |
underneath is computationally irreducible, but we-- 00:35:06.620 |
that all these atoms of space operating in this network, 00:35:10.180 |
it's all this computationally irreducible process. 00:35:13.700 |
But observers like us, observers who are computationally bounded, 00:35:23.140 |
to correspond to the structure of space time. 00:35:27.140 |
Quantum mechanics is a little bit harder to understand. 00:35:31.260 |
is, in classical physics, you say, I throw a ball. 00:35:37.220 |
In quantum mechanics, you say, I do something. 00:35:39.900 |
There are many possible paths of history that can be followed. 00:35:42.500 |
We only get to work out things about averages of those paths. 00:35:45.980 |
So it's sort of there are many different paths of history 00:35:48.380 |
that are being pursued in these networks and things. 00:35:52.180 |
What's happening is that there are many different rewrites 00:36:01.740 |
to make use of the fact that there are many branches 00:36:03.940 |
of history that it can kind of follow in parallel. 00:36:06.740 |
But then the question is, well, how do we observe 00:36:10.420 |
Well, we are embedded in this whole system that 00:36:13.140 |
has all these branching paths of history and so on. 00:36:16.460 |
Well, we, our brains are full of sort of branching behavior 00:36:25.720 |
We are effectively-- the question you have to ask 00:36:36.000 |
of computational boundedness in its characteristics, 00:36:38.880 |
and it also has one other assumption, which is it assumes 00:36:56.900 |
from being able to see perhaps a different nature 00:37:02.500 |
I told you guys we were going to go like really-- 00:37:11.060 |
The big result is that observers like us inevitably 00:37:15.340 |
observe laws of physics like the ones we know. 00:37:20.940 |
isn't like us, that isn't computationally bounded, 00:37:25.320 |
It will observe different laws of the universe. 00:37:27.860 |
And so the laws of the universe are only based on our nature. 00:37:37.240 |
It's not like we have to know every detail of us. 00:37:45.200 |
there are other things that we probably take for granted 00:37:53.140 |
that are inevitable about the way that physics works. 00:38:03.420 |
because I am not the same atoms as I was a second ago, 00:38:13.180 |
I know this is a bit far-fetched from maybe your specialty, 00:38:17.080 |
And then what is this concept of consciousness 00:38:19.100 |
that we have where we think we're persistent in time, 00:38:27.140 |
And how do you think about this notion of consciousness 00:38:29.540 |
and the observer in the context of the universe? 00:38:33.480 |
and I think I'm a human body, and I'm really-- 00:38:38.580 |
I used to think there was a sort of hierarchy 00:38:45.860 |
it's just like the AIs are not doing all possible computation. 00:38:49.740 |
We are actually rather limited in our observation 00:38:57.620 |
We have this belief that we're persistent in time and so on. 00:39:00.540 |
Imagine what you would feel like, so to speak, 00:39:03.660 |
if you were much more extended in the universe. 00:39:10.820 |
we call the ruliad, which is this entangled limit of all 00:39:10.820 |
is just at some small point, some small region 00:39:23.540 |
And so it's-- you imagine what happens if you-- 00:39:31.300 |
we're effectively expanding in this kind of rulial space, 00:39:31.300 |
where it's just like we can send spacecraft out-- 00:39:45.540 |
so as we expand our science, as we expand the ideas that we 00:39:53.420 |
And so you might say, well, what happens if we expand? 00:40:16.780 |
He puts these two advertising execs in a room, 00:40:21.820 |
The name of the company, come up with a logo, 00:40:25.220 |
He goes out for a few hours, comes back, pulls off a thing. 00:40:27.980 |
He copied exactly what they-- he had written down exactly what 00:40:33.300 |
he subliminally put a little image in the cab. 00:40:35.660 |
He had some kids walk across the street with a logo. 00:40:43.140 |
And they thought that they were creative geniuses. 00:40:45.220 |
They're like, we're these high-paid ad execs. 00:40:49.060 |
And it always struck me as like the human is just 00:40:53.340 |
We're just the node in the neural net that takes the input, 00:41:07.740 |
that we are part of this broader computation? 00:41:13.820 |
they're cheating computational irreducibility, so to speak. 00:41:17.460 |
They're saying, I'm going to put this thing which 00:41:37.540 |
I mean, I think it's a funny thing, because when you think 00:41:47.340 |
and you realize that I'm a person who likes people. 00:42:05.340 |
one of the more difficult human things about doing science 00:42:20.140 |
the thing that I've done puts humans right back 00:42:22.780 |
in the middle of the picture with these things 00:42:25.180 |
about the fact that it matters what the observer is like, 00:42:28.220 |
realizing that in this space of possibilities, 00:42:32.020 |
that what we care about is this part that is the result 00:42:42.420 |
Guys, please join me in thanking Stephen Wolfram.