
All-In Summit: Stephen Wolfram on computation, AI, and the nature of the universe


Chapters

0:00 Dave welcomes Stephen Wolfram to All-In Summit '23!
2:37 Computational irreducibility
4:58 The paradox of simple heuristics
6:49 AI
8:57 Cellular automata
14:10 Limitations of AI
18:13 Syntax, logic, LLMs and other high-potential AI realms
23:57 Generative AI and interconcept space
26:20 The nature of the universe
29:54 Electrons – size, topology and structure
31:18 Time, spacetime, gravity and the boundaries of human observers
36:53 Persistence and other elements of consciousness humans take for granted
38:09 The concept of the ruliad
41:33 Joy


00:00:00.000 | (audience applauding)
00:00:03.160 | - Solo, no besties.
00:00:10.760 | - Yeah, everybody else was scared away, I'm afraid.
00:00:12.680 | - Yeah, or you scared them away.
00:00:14.640 | Yeah, I mean, it was a challenging prompt.
00:00:19.400 | Interview Stephen Wolfram on stage in 40 minutes.
00:00:21.400 | So here we go.
00:00:22.240 | (upbeat music)
00:00:24.820 | ♪ Let your winners ride ♪
00:00:27.060 | ♪ Rain Man David Sacks ♪
00:00:29.640 | ♪ I'm going all in ♪
00:00:31.460 | ♪ And instead ♪
00:00:32.300 | ♪ We open sourced it to the fans ♪
00:00:33.620 | ♪ And they've just gone crazy ♪
00:00:35.340 | ♪ I'm the queen of Quinoa ♪
00:00:37.540 | ♪ I'm going all in ♪
00:00:40.460 | - It's a huge honor to talk to Stephen Wolfram,
00:00:43.860 | creator of Mathematica, Wolfram Alpha,
00:00:47.380 | and the Wolfram Language,
00:00:48.500 | the author of A New Kind of Science,
00:00:49.980 | the originator of Wolfram Physics Project,
00:00:52.780 | and head of Wolfram Research.
00:00:54.380 | Stephen first used a computer in 1973,
00:00:58.340 | and quickly became a leader in the emerging field
00:01:00.660 | of scientific computing.
00:01:03.020 | In 1979, he began the construction of SMP,
00:01:06.220 | the first modern computer algebra system.
00:01:09.700 | He published his first scientific paper at age 15,
00:01:13.220 | and had received his PhD in theoretical physics
00:01:15.620 | from Caltech by 20.
00:01:17.040 | Wolfram's early scientific work
00:01:20.340 | was mainly in high energy physics,
00:01:21.780 | quantum field theory, cosmology, and complexity,
00:01:24.700 | discovering a number of fundamental connections
00:01:26.660 | between computation and nature,
00:01:29.380 | and inventing such concepts as computational irreducibility,
00:01:32.620 | which we'll talk about today.
00:01:34.300 | Wolfram's work led to a wide range of applications
00:01:37.300 | and provided the main scientific foundation
00:01:39.980 | for such initiatives as complexity theory
00:01:42.460 | and artificial life.
00:01:44.220 | Wolfram used his ideas
00:01:45.500 | to develop a new randomness generation system
00:01:47.740 | and a new approach to computational fluid dynamics,
00:01:50.100 | both of which are now in widespread use.
00:01:52.780 | The release of Wolfram Alpha in May of 2009
00:01:54.940 | was a historic step that has defined a new dimension
00:01:58.340 | for computation and AI,
00:01:59.940 | now relied on by millions of people
00:02:02.020 | to compute answers both directly
00:02:04.100 | and through intelligent assistants,
00:02:05.580 | such as Siri and Alexa, among others.
00:02:08.420 | So, thank you for being here.
00:02:11.780 | - Thanks for having me.
00:02:13.020 | - I worked at the Lawrence Berkeley National Lab
00:02:16.900 | at the Center for Beam Physics for two and a half years,
00:02:19.420 | and it was when I was an undergrad,
00:02:20.780 | and I worked exclusively in Mathematica,
00:02:22.860 | as I shared with you the other night.
00:02:24.020 | So, that's when I first got to know about you and your work.
00:02:29.020 | Who here has seen an interview that Stephen's done before,
00:02:33.420 | just to get a sense?
00:02:34.500 | - Okay.
00:02:36.180 | - I wanna try and guide the conversation a little bit.
00:02:38.300 | So, maybe we could start with computers.
00:02:41.500 | - Okay.
00:02:43.980 | - You talk about this concept of
00:02:48.580 | computational irreducibility.
00:02:50.580 | - Yes.
00:02:51.420 | - Maybe you can just,
00:02:52.260 | and I wanna try and connect with a broad audience
00:02:54.220 | in this conversation. - Yeah, yeah, right.
00:02:55.060 | So, what is computation?
00:02:56.420 | - What is computation?
00:02:57.460 | - Okay, so, at the base, computation is about
00:03:01.300 | you specify rules, and then you let those rules,
00:03:05.460 | you figure out what the consequences of those rules are.
00:03:08.020 | Computers are really good at that.
00:03:09.140 | You give a little program,
00:03:10.700 | the computer will run your program, it will generate output.
00:03:13.700 | I would say that the bigger picture of this is
00:03:17.100 | how do you formalize anything?
00:03:18.980 | We can just use words, we can talk vaguely about things.
00:03:21.900 | How do you actually put something down
00:03:23.860 | that is a kind of precise formalism
00:03:26.340 | that lets you work out what will happen?
00:03:27.900 | That's been done in logic, it's been done in mathematics,
00:03:30.580 | it's done in its most general form in computation,
00:03:32.820 | where the rules can be kind of anything
00:03:34.860 | you can specify to a computer.
00:03:36.740 | Then the question is, given that you have the rules,
00:03:39.420 | is that the end of the story?
00:03:40.660 | Once you have the rules,
00:03:41.700 | do you then know everything about what will happen?
00:03:44.060 | Well, that's kind of what one would assume,
00:03:46.180 | that the traditional view of science is
00:03:49.220 | once you work out the equations and so on, then you're done.
00:03:52.300 | Then you can predict everything,
00:03:53.940 | you've done all the hard work.
00:03:56.620 | Turns out this is not true.
00:03:58.220 | Turns out that even with very simple rules,
00:04:01.420 | the consequences of those rules
00:04:02.980 | can be arbitrarily hard to work out.
00:04:05.180 | It's kind of like if you just run the rules step by step,
00:04:08.140 | step by step, it's making some pattern on screen
00:04:10.900 | or whatever else, you can just run all those steps,
00:04:14.020 | see what happens, that's all good.
00:04:16.220 | Then you can ask yourself, can you jump ahead?
00:04:17.980 | Can you say, I know what's going to happen.
00:04:19.900 | I know the answer is going to be 42 or something at the end.
00:04:22.820 | Well, the point is that that isn't in general possible.
00:04:26.100 | That in general, you have to go through all the steps
00:04:28.620 | to work out what will happen.
00:04:30.020 | And that's kind of a fundamental limitation
00:04:32.500 | of kind of the sort of prediction in science.
00:04:34.780 | And it's something people have gotten very used to the idea
00:04:37.620 | that with science, we can predict everything.
00:04:40.980 | But from within science,
00:04:42.340 | we see this whole phenomenon of computational irreducibility.
00:04:45.220 | It's related to things like Gödel's theorem,
00:04:47.180 | undecidability, the halting problem,
00:04:49.660 | and all kinds of other ideas.
00:04:52.220 | But from within science, we see this fundamental limitation,
00:04:55.380 | this fundamental inability for us
00:04:57.860 | to be able to say what will happen.
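A minimal sketch of the idea in Python (the Collatz map is an illustrative choice of mine, not an example Wolfram uses in this conversation): the only known way to answer how long the process runs is to actually run every step.

```python
# Sketch: computational irreducibility in miniature. For the
# Collatz map (n -> n/2 if even, 3n+1 if odd), no formula is known
# that jumps ahead to the answer; you run the steps one by one.
# (That every n eventually reaches 1 is itself an open conjecture.)

def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1, by actually iterating."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

for n in (27, 97, 871):
    print(n, "takes", collatz_steps(n), "steps")
```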
00:04:59.500 | - So we can't just skip ahead in a lot of cases.
00:05:02.700 | We can't just create simple heuristics or simple solves
00:05:07.580 | that avoid all of the hard work to simulate something,
00:05:11.860 | to calculate something,
00:05:13.180 | to come up with the computational output
00:05:15.500 | of something we're trying to figure out.
00:05:16.860 | - In a sense, this is a good thing for us
00:05:18.860 | because it's kind of we lead our lives,
00:05:20.980 | time progresses, things happen.
00:05:23.660 | If we could just say,
00:05:24.860 | we don't need to go through all of those steps of time.
00:05:27.300 | We could just say, and the end, it will be 37 or something.
00:05:31.060 | That would be a bad feature for us feeling
00:05:33.860 | that it was worthwhile to sort of lead our lives
00:05:37.140 | and see time progress.
00:05:38.620 | It will be a kind of, you don't really need time to progress.
00:05:41.380 | You can always just say what the answer will be.
00:05:43.580 | I mean, this kind of idea has many consequences.
00:05:45.860 | I mean, for example, when it comes to AIs,
00:05:48.420 | you say, well, you've got this AI system
00:05:51.460 | and it's doing all kinds of things.
00:05:53.300 | Can you figure out what it will do?
00:05:55.460 | You might want to figure out what it will do
00:05:56.980 | 'cause you might want to say,
00:05:57.820 | I never want the AI system to do this very bad thing
00:06:00.700 | that I'm trying to prevent.
00:06:03.340 | So then you say, well, can I work out what it will do?
00:06:05.940 | Can I be sure that these rules
00:06:07.420 | that I'm putting in for the AI system
00:06:09.260 | will never have it do this very bad thing?
00:06:11.900 | Well, computational irreducibility says
00:06:13.420 | you can't guarantee that.
00:06:15.140 | You kind of have this trade-off with an AI system.
00:06:17.340 | You can either say, let's constrain it a lot.
00:06:21.340 | Then we can know what it will do.
00:06:23.220 | But then, or let's let it sort of have its way
00:06:27.300 | and do what it does.
00:06:28.780 | If we don't let it have its way,
00:06:31.220 | it's not really making use
00:06:32.580 | of the computational capabilities that it has.
00:06:35.180 | We kind of have this trade-off.
00:06:36.220 | We either can understand what's gonna happen,
00:06:38.260 | in which case we don't let our AIs
00:06:40.500 | really do what they can do,
00:06:42.500 | or we always say, okay, we're going to run the risk
00:06:46.380 | of the AI doing something unexpected.
00:06:49.140 | - So can we just, so AI,
00:06:53.460 | a lot of what people call AI are predictive models
00:06:58.100 | that are effectively built off of statistics
00:07:01.540 | that make some prediction of what the right next step
00:07:05.540 | in a sequence of things should be,
00:07:06.900 | whether it's a pixel to generate an image
00:07:09.300 | or a series of words to generate a chat response
00:07:14.300 | through an LLM like ChatGPT or Bard or what have you.
00:07:17.740 | Those are statistical models trained on past data.
00:07:22.900 | Are they, is that different than the problems
00:07:27.580 | in computation that you're talking about,
00:07:30.460 | about better understanding the universe,
00:07:32.340 | the nature of the universe, solving bigger problems,
00:07:35.260 | that AI has its limitation in how we think
00:07:37.620 | and talk about it today,
00:07:38.500 | and maybe you can connect computation
00:07:40.300 | and this idea of AI being this very simple,
00:07:43.260 | heuristical, statistical thing that just predicts stuff.
00:07:46.620 | - Right, well, I mean, the computational universe
00:07:49.140 | of possible programs, possible rules is vast.
00:07:52.860 | And there are many-- - Sorry, I just wanna,
00:07:54.060 | I just wanna make sure everyone understands that.
00:07:55.500 | The, should say the computational--
00:07:57.220 | - Yeah, the computational universe.
00:07:58.340 | So, you know-- - The thing,
00:07:59.940 | the set of things you can compute
00:08:01.460 | that you wanna try and compute.
00:08:02.300 | - Yes, I mean, so people are used to writing programs
00:08:05.020 | that are intended for particular human purposes.
00:08:07.620 | But let's say you just write a program at random.
00:08:09.980 | You just put in the program, it's a random program.
00:08:13.580 | Question is, what does the random program do?
00:08:16.060 | So a big thing that I discovered in the 1980s
00:08:18.780 | is the thing that greatly surprised me
00:08:20.780 | was even a very simple program
00:08:23.100 | can do very complicated things.
00:08:24.860 | I had assumed that if you want to do complicated things,
00:08:28.180 | you would have to set up a complicated program.
00:08:30.700 | Turns out that's not true.
00:08:32.300 | Turns out that in nature, nature kind of discovered
00:08:35.860 | this trick in a sense.
00:08:37.540 | You know, we see all this complexity in nature.
00:08:39.700 | It seems like the big origin of that complexity
00:08:42.220 | is just this phenomenon that even a very simple program
00:08:45.500 | can do very complicated things.
00:08:47.620 | So, I mean, that's the, so this sort of universe--
00:08:50.500 | - Just give a quick example, if you wouldn't mind.
00:08:52.100 | Like-- - Yeah, yeah, so I mean--
00:08:53.580 | - Like three genes in a genome, in DNA.
00:08:57.140 | - Okay, so my favorite example-- - Yes.
00:08:59.380 | - Are things called cellular automata.
00:09:01.100 | - Yes. - And they work like this.
00:09:02.180 | They have a line of cells.
00:09:04.220 | Each one is either black or white.
00:09:06.100 | It's an infinite line of cells.
00:09:08.020 | And you have a rule that says you go down the page,
00:09:11.340 | making successive lines of cells.
00:09:13.820 | You go down the page and you say the color of a cell
00:09:16.060 | on the next line will be determined
00:09:18.300 | by a very simple kind of lookup
00:09:20.500 | from the color of the cell right above it
00:09:22.020 | and to its left and right.
00:09:23.180 | Okay, so very simple setup.
00:09:24.900 | There are 256 possible rules
00:09:26.860 | with just two colors and nearest neighbors.
00:09:28.900 | You can just look at what all of them do.
00:09:30.780 | Many of them do very simple things.
00:09:32.260 | They'll just make some triangle of black--
00:09:34.620 | - It looks like a pyramid or triangle when it's done.
00:09:36.500 | - Right, right, right.
00:09:37.460 | And then my all-time favorite,
00:09:39.820 | it's kind of you turn the telescope
00:09:41.340 | into this computational universe and see what's out there.
00:09:43.540 | My all-time favorite discovery is rule 30
00:09:46.180 | in the numbering of these rules.
00:09:48.180 | You can specify that.
00:09:50.180 | Rule 30, you start it off from one black cell
00:09:52.900 | and it makes this really complicated pattern.
00:09:55.300 | It makes a pattern where if you just saw it,
00:09:56.940 | you would say somebody must have gone
00:09:58.580 | to a huge amount of effort-- - It looks designed.
00:10:00.660 | - Yes. - It looks like
00:10:01.500 | there was an architect-- - Yes.
00:10:02.940 | - That came in and designed that thing.
00:10:04.820 | And that's the only way that thing could have been created
00:10:06.420 | 'cause it's so beautiful and intricate and--
00:10:08.300 | - Right, right. - Resonates.
00:10:09.140 | But-- - But in fact--
00:10:09.980 | - It had two simple rules.
00:10:11.540 | Change, be black if the left is black
00:10:13.300 | and the right is white and be white if--
00:10:14.820 | - Yeah, right. - Right.
00:10:16.220 | - Those kinds of things.
00:10:17.060 | So that's the kind of setup.
00:10:18.260 | And for example, when you look at rule 30,
00:10:20.740 | you look at the center column of cells,
00:10:23.260 | it looks for all practical purposes completely random.
00:10:26.180 | Even though you know that it was really made
00:10:28.980 | from some simple rule, when you see it produced,
00:10:33.220 | it looks random.
00:10:34.060 | It's kind of like a good analogy if you know,
00:10:36.220 | you know, digits of pi, people memorize,
00:10:38.100 | you know, 3.14159, that's about as far as I can go.
00:10:41.340 | - Me too.
00:10:42.180 | You were one ahead of me.
00:10:43.220 | - It's, I, that's why I built software
00:10:45.700 | to do these things.
00:10:46.540 | We can go to, you know--
00:10:47.380 | - I don't know about that PhD at 20.
00:10:49.580 | (audience laughing)
00:10:51.860 | - But, you know, the point is that the rule
00:10:54.820 | for generating those digits, it's, you know,
00:10:56.980 | the ratio of the circumference diameter of a circle,
00:10:59.420 | there's a very definite rule,
00:11:00.500 | but once you've generated those digits,
00:11:02.660 | they seem completely random.
00:11:04.100 | - Yes.
00:11:04.940 | - It's the same kind of phenomenon,
00:11:05.820 | but, you know, you can have a simple rule,
00:11:08.020 | it produces things that look very complicated.
00:11:10.140 | - Yes.
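Rule 30 is easy to reproduce; here is a minimal Python sketch (my rendering, using Wolfram's standard numbering, in which bit k of the number 30 gives the new cell for the neighborhood whose three cells spell out k in binary):

```python
# Sketch: rule 30, an elementary cellular automaton. Each new cell
# is looked up from the (left, center, right) cells above it; the
# binary digits of 30 (00011110) are the table for the 8 patterns.

RULE = 30

def step(cells):
    """Compute the next row, padding the ends with white (0) cells."""
    padded = [0, 0] + cells + [0, 0]
    return [
        (RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [1]  # start from a single black cell
for _ in range(16):
    print("".join("#" if c else "." for c in row).center(41))
    row = step(row)
```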
00:11:10.980 | - Now, so--
00:11:11.820 | - So that's a simple computer.
00:11:14.820 | That's a simple computational exercise.
00:11:17.540 | - By the way, it's not such a simple computer.
00:11:19.820 | Because it turns out, when you kind of try and rank,
00:11:23.020 | you know, you set up a computer,
00:11:24.180 | you make it with electronics,
00:11:26.060 | or you might make it, you know,
00:11:27.220 | some mechanical computer from the past,
00:11:29.740 | or something like this.
00:11:30.820 | You ask the question,
00:11:32.100 | you build a computer of a certain kind,
00:11:34.060 | how sophisticated are the computations it can do?
00:11:36.380 | Is it just an adding machine?
00:11:37.780 | Is it just a multiplying machine?
00:11:39.500 | - Yes.
00:11:40.340 | - How far does it get?
00:11:41.220 | Big discovery from the 1930s is,
00:11:44.020 | you can make a fixed piece of hardware
00:11:46.580 | that's capable of running any program,
00:11:48.660 | that's capable of doing any computation.
00:11:50.860 | That's the discovery that launched
00:11:53.620 | the possibility of software,
00:11:55.220 | launched most of modern technology.
00:11:57.420 | So, one might have thought,
00:11:59.220 | you have to go to a lot of effort
00:12:00.500 | to make a universal computer.
00:12:02.780 | Turns out that's not true.
00:12:04.220 | It's a thing I call
00:12:05.060 | the principle of computational equivalence,
00:12:06.660 | which kind of tells one something
00:12:08.340 | about how far one has to go.
00:12:10.220 | And the answer is, pretty much,
00:12:11.380 | as soon as you see complicated behavior,
00:12:13.780 | the chances are, you can kind of use
00:12:15.860 | that complicated behavior to do any computation you want.
00:12:19.220 | And that's actually, that's the reason
00:12:21.260 | for this computational irreducibility phenomenon,
00:12:23.700 | because here's how it works.
00:12:24.900 | So, let's say you've got some system,
00:12:27.540 | and it's doing what it does,
00:12:29.300 | and you're trying to predict what it's going to do.
00:12:31.660 | So, both the system itself,
00:12:33.780 | and you as the predictor, are computational systems.
00:12:37.620 | So then the question is, can you, the predictor,
00:12:40.220 | be so much smarter than the system you're predicting,
00:12:43.420 | you can just jump ahead and say,
00:12:44.500 | I know what you're going to do, I've got the answer.
00:12:46.580 | Or are you stuck being kind of equivalent
00:12:49.140 | in computational sophistication
00:12:50.940 | to the system you're trying to predict?
00:12:52.900 | So, this principle of computational equivalence says,
00:12:55.300 | whether you have a brain, or a computer,
00:12:57.300 | or mathematics, or statistics, or whatever else,
00:12:59.860 | you are really just equivalent
00:13:01.220 | in your computational sophistication
00:13:03.300 | to the system that you're trying to predict.
00:13:05.780 | And that's why you can't make that prediction.
00:13:08.100 | That's why computational irreducibility happens.
00:13:10.860 | But, you know, you were asking about AI.
00:13:13.980 | I mean, the thing that we have only just started mining
00:13:18.980 | is this computational universe of all possible programs.
00:13:22.300 | Most programs that we use today were engineered by people.
00:13:26.820 | People said, I'm going to put this piece in,
00:13:28.660 | and that piece in, and that piece in.
00:13:30.020 | - So now we have a program that can make programs.
00:13:32.180 | - Yes, well, we have sort of a vast universe
00:13:34.780 | of possible programs.
00:13:36.060 | We can say, if we know what we want the program to do,
00:13:38.940 | we can just search this computational universe
00:13:41.460 | and find a program that does it.
00:13:42.700 | Often, I've done this for many years, for many purposes.
00:13:45.780 | - Just give me an example there.
00:13:47.020 | - Well, so, very simple example, actually, from Rule 30,
00:13:50.460 | is you want to make a random number generator.
00:13:53.060 | You say, how do I make something that makes good randomness?
00:13:56.460 | Well, you can just search the set
00:13:59.100 | of possible simple programs,
00:14:00.660 | and pretty soon you find one that makes good randomness.
00:14:03.620 | You ask me, why does it make good randomness?
00:14:05.780 | I don't know.
00:14:07.300 | It's not something, there's no narrative explanation.
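A toy version of that kind of search, reusing the rule 30 sketch from above (the 50/50 balance screen here is my own crude stand-in for a real battery of randomness tests):

```python
# Sketch: search all 256 elementary CA rules for ones whose center
# column looks balanced, a crude proxy for "makes good randomness".

def center_column(rule, steps):
    """Run `rule` from one black cell; collect the center cell."""
    cells, column = [1], []
    for _ in range(steps):
        column.append(cells[len(cells) // 2])
        padded = [0, 0] + cells + [0, 0]
        cells = [
            (rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)
        ]
    return column

survivors = []
for rule in range(256):
    col = center_column(rule, 400)
    balanced = 160 < sum(col) < 240          # roughly half ones
    aperiodic = col[:100] != col[100:200]    # not trivially repeating
    if balanced and aperiodic:
        survivors.append(rule)

print(survivors)  # rule 30 should be among them
```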
00:14:09.980 | - So now with AI, we're generating a ton of programs,
00:14:14.340 | and we now have a bigger space of programs,
00:14:16.940 | or bigger library to go select from,
00:14:19.300 | to solve problems or figure stuff out for us?
00:14:21.420 | Is that--
00:14:22.260 | - Actually, I think AI is sort of,
00:14:23.540 | AI is very limited in the computational universe.
00:14:25.820 | - Yes.
00:14:26.660 | - I mean, the computational universe--
00:14:27.500 | - This is the connection I wanted to make, because, yeah.
00:14:28.820 | - I mean, it's, you know,
00:14:30.060 | the computational universe is all these possible rules.
00:14:32.900 | We can talk later about whether the universe--
00:14:34.220 | - Sorry, I just want to be clear,
00:14:35.060 | 'cause you used the term universe
00:14:36.420 | in the sense of all the things,
00:14:38.260 | and I want to disconnect that
00:14:39.260 | from everyone's concept of universe.
00:14:40.740 | - Yes, yes.
00:14:41.580 | We're gonna talk about whether the,
00:14:43.340 | I hope we're gonna talk about whether the physical universe--
00:14:44.380 | - We are gonna talk about the universe
00:14:45.220 | and the nature of the universe, which is,
00:14:47.060 | we're gonna talk about consciousness,
00:14:48.140 | then we're gonna smoke weed, and then we're gonna go to lunch.
00:14:49.540 | (audience laughing)
00:14:52.860 | - We're gonna talk about, you know,
00:14:54.900 | whether our physical universe
00:14:56.460 | is part of this computational universe,
00:14:57.980 | but when I say computational universe,
00:15:00.340 | I just mean this very abstract idea
00:15:02.900 | of all these possible programs.
00:15:04.220 | - All the programs, the library of possible programs.
00:15:06.100 | - Yes. - Yeah.
00:15:06.940 | - And so, there are many, many things
00:15:08.700 | that those programs can do.
00:15:10.220 | Most of those things are things that we humans
00:15:12.620 | just look at them and say,
00:15:13.460 | "Well, that's kind of interesting.
00:15:14.700 | "I don't know what the significance of that is."
00:15:16.660 | They're very non-human kinds of things.
00:15:18.460 | - Yeah.
00:15:19.300 | - So, what have we done in AI?
00:15:20.620 | What we've done is we've given, for example,
00:15:22.460 | a large language model, we've given it, you know,
00:15:24.780 | four billion web pages, for example.
00:15:26.940 | We've given it kind of the specific parts
00:15:30.100 | of essentially the computational universe
00:15:31.980 | that we humans have selected that we care about.
00:15:34.540 | - Yes.
00:15:35.380 | - We've shown what we care about.
00:15:37.260 | And then what it's doing is to say,
00:15:39.220 | "Okay, I know what you humans care about,
00:15:41.620 | "so I'm going to make things
00:15:42.880 | "that are like what you said you care about."
00:15:44.740 | - Yes.
00:15:45.580 | - And that's a tiny part of the computational universe.
00:15:47.980 | - Right.
00:15:48.820 | Just like we saw in Caleb's video,
00:15:51.540 | his AI said, "This video is me telling you
00:15:55.220 | "that all the stuff you've talked about AI
00:15:57.600 | "is Terminators and shit-blowing stuff up,
00:15:59.660 | "and that's the limit of what we've done."
00:16:02.780 | - Right.
00:16:03.620 | - And it can only construct stuff
00:16:04.760 | from the limit of what we've done,
00:16:06.140 | recorded, seen, our data sets.
00:16:08.140 | - So, I mean, the thing is--
00:16:09.300 | - So, in terms of, so give us an example
00:16:11.220 | of something that needs to be computed.
00:16:13.740 | A computational exercise, something we gotta figure out,
00:16:17.220 | something we wanna solve,
00:16:18.900 | outside of what AI is possibly able to solve for today.
00:16:22.940 | - Well, I mean, any of these
00:16:24.300 | computationally irreducible problems,
00:16:25.920 | anything where we're asking--
00:16:26.760 | - Example, yeah.
00:16:27.580 | So just to connect it for people.
00:16:28.820 | - Yeah, yeah, yeah, right.
00:16:29.660 | I mean, oh gosh, to pick an area, I mean--
00:16:31.900 | - Without esoteric topology in algebra or something.
00:16:35.380 | - Yeah, yeah, right.
00:16:36.220 | I mean, you know, okay, here's an example.
00:16:38.060 | So, you've got a biological system,
00:16:40.020 | you've got a good model--
00:16:41.140 | - Great example, yeah, biology.
00:16:42.540 | - Okay, so we've got something we're trying to figure out.
00:16:45.140 | This collection of cells, it behaves in this way.
00:16:47.860 | Is it gonna grow forever and be really bad
00:16:49.860 | and make a tumor, or is it going to eventually halt
00:16:52.500 | and stop growing?
00:16:53.680 | Okay, that's a classic kind of
00:16:55.600 | computational irreducible type of problem,
00:16:58.000 | where, you know, we could, if we knew enough detail,
00:17:00.780 | we could simulate what every cell's gonna do.
00:17:02.740 | - Every molecule, every atom, every cell, every interaction.
00:17:05.260 | If we knew enough, we could simulate each of those steps.
00:17:07.780 | - Right, and we could--
00:17:08.620 | - And there's no easy way to solve that,
00:17:10.260 | answer that question. - Right, you can't jump ahead
00:17:11.720 | and say, so I know this thing is never gonna turn
00:17:13.980 | into a tumor, for example.
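To make that concrete, here is a deliberately toy Python sketch of my own (not a real biology model): cells on a line divide or die depending on local crowding, and the only way to learn whether the colony fizzles out or keeps growing is to run it.

```python
# Sketch: a toy interacting cell colony. Division depends on local
# crowding, so there's no obvious closed-form answer to "does it
# stop growing?"; we simulate step by step and watch.

import random

def simulate(p_divide, steps, seed=0):
    random.seed(seed)
    occupied = {0}  # one founding cell at the origin
    for t in range(steps):
        for site in list(occupied):
            crowded = (site - 1 in occupied) and (site + 1 in occupied)
            if crowded:
                occupied.discard(site)             # crowded cells die
            elif random.random() < p_divide:
                occupied.add(site + random.choice((-1, 1)))
        if not occupied:
            return t + 1, 0                        # colony died out
    return steps, len(occupied)

steps_run, population = simulate(p_divide=0.4, steps=200)
print(f"after {steps_run} steps: {population} cells")
```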
00:17:15.220 | - Right, and so, simulating the physical universe,
00:17:20.220 | whether you're simulating atoms in a cell,
00:17:23.940 | or space-time and discrete space-time
00:17:28.100 | or non-discrete space-time itself,
00:17:30.420 | becomes this thing where we don't have a simple heuristic,
00:17:34.220 | a simple equation that says, based on this condition,
00:17:36.900 | this is how things are gonna end up,
00:17:38.380 | but you have to actually go through
00:17:39.320 | a lot of calculated steps.
00:17:41.300 | - We haven't known that.
00:17:42.300 | I mean, people have hoped that you can just write down
00:17:44.760 | a formula for how physics works,
00:17:46.880 | and then work out the answer directly from that.
00:17:48.900 | That was the big advance.
00:17:50.380 | I mean, if you go back to antiquity,
00:17:52.660 | people were just trying to sort of reason
00:17:54.260 | about how the universe works, sort of philosophically,
00:17:57.060 | and then late 1600s, sort of big advance.
00:18:00.460 | We can write down a mathematical formula,
00:18:02.260 | we can use calculus, we can kind of just,
00:18:05.780 | essentially, jump ahead and say
00:18:07.460 | what's gonna happen in the universe.
00:18:08.460 | We can make a prediction, we can say,
00:18:10.540 | the comet is gonna be in this place at this time, and so on.
00:18:13.020 | - So there's all these hard problems
00:18:15.060 | that we can't solve with AI we have today, or can we?
00:18:20.060 | And can you just help me and help everyone understand,
00:18:23.000 | what are you excited about with respect to AI?
00:18:25.460 | What is it that has happened in the last couple of months
00:18:28.740 | and years that you were excited about,
00:18:30.380 | and what does that allow us to do that we couldn't do
00:18:33.180 | before using just raw approaches to computation?
00:18:36.700 | - Okay, so I mean, several different things here.
00:18:39.100 | I mean, the first thing to say is,
00:18:41.540 | we humans have been interested in a small part
00:18:44.940 | of what's computationally possible.
00:18:46.660 | AI is reflecting the part that we have been interested in.
00:18:50.260 | That sort of AI is doing those kinds of things.
00:18:53.140 | In terms of what's happened with AI,
00:18:54.500 | I mean, the big thing that happened a year ago
00:18:56.980 | was the arrival of successful large language models.
00:19:00.380 | And what does that tell us?
00:19:02.420 | I think that it was a surprise to everybody,
00:19:05.180 | including the people who were working
00:19:06.460 | on large language models,
00:19:07.780 | that we kind of got past, got to this point
00:19:10.820 | where they seemed reasonable to us humans,
00:19:13.500 | where they were producing text
00:19:14.700 | that was reasonable to us humans,
00:19:16.340 | and not just completely boring and irrelevant and so on.
00:19:20.540 | And I think there's this jump that happened now.
00:19:25.300 | In 2012, there was a sort of previous jump in machine learning
00:19:27.500 | that happened with images and things like that,
00:19:29.860 | image recognition and so on.
00:19:31.820 | So what's the significance of large language models?
00:19:34.180 | Well, one question is, why do they work?
00:19:36.340 | Why is it possible to make this neural net
00:19:39.380 | that can successfully kind of complete an essay or something?
00:19:43.900 | And I think the answer is
00:19:45.420 | that it's kind of telling us a piece of science
00:19:47.820 | that in a sense we should be embarrassed
00:19:49.300 | we hadn't figured out before.
00:19:50.900 | It's a question of sort of how do you construct language?
00:19:54.140 | And we've known forever
00:19:56.220 | that there's kind of a syntactic grammar of language,
00:19:58.300 | you know, noun, verb, noun, et cetera, et cetera, et cetera.
00:20:02.820 | But what the LLMs are showing us
00:20:05.660 | is that there is a kind of semantic grammar of language.
00:20:08.060 | There's a way of putting together sentences
00:20:10.300 | that could make sense.
00:20:12.020 | And, you know, for example, people are always impressed
00:20:14.100 | that the LLMs have figured out how to, quote, "reason."
00:20:17.700 | And I think what's happening is, you know,
00:20:20.020 | logic is this thing that's kind of this formalization
00:20:23.500 | of everyday language,
00:20:25.340 | and it's a formalization that was discovered,
00:20:26.980 | you know, by Aristotle and people in antiquity.
00:20:29.380 | And in a sense, probably one can think about
00:20:31.580 | the way they discovered it
00:20:32.540 | is they looked at lots of speeches people had given,
00:20:34.580 | and they said, "Which ones make sense?"
00:20:36.700 | Okay, there's a certain pattern
00:20:38.420 | of how things are said that makes sense.
00:20:40.860 | Let's formalize that. That's logic.
00:20:43.140 | That's exactly what the LLM has done as well.
00:20:45.140 | It's noticed that there are these patterns of language
00:20:47.660 | that you can kind of use again,
00:20:49.980 | and that we call logic or reasoning or something like that.
00:20:53.780 | So, you know, I think as a practical matter,
00:20:55.900 | the LLMs provide this kind of linguistic user interface.
00:20:59.300 | We've had kind of graphical user interfaces and so on.
00:21:02.020 | Now we have this linguistic user interface.
00:21:04.180 | You say, you know, you've got some very small set of points
00:21:07.820 | you want to make.
00:21:08.900 | You say, "I'm going to feed it to an LLM.
00:21:10.940 | It's going to puff it up into a big report.
00:21:13.020 | I'm going to send that report to somebody else.
00:21:14.700 | They're probably going to feed it to their own LLM.
00:21:16.620 | It's going to grind it down to some small set of results."
00:21:21.100 | It's kind of, you know, it's allowing one to use language
00:21:23.740 | as a transport layer.
00:21:25.140 | I think that's a, you know,
00:21:26.540 | there are a lot of these practical use cases for this.
00:21:29.820 | -It's always seemed to me like the rate-limiting step
00:21:32.620 | in humans is communication.
00:21:34.140 | Like, the rate at which you and I are speaking to one another
00:21:37.100 | is pretty low bandwidth.
00:21:39.100 | Like, just a couple words a minute or something.
00:21:41.020 | -The question is, what really is communication?
00:21:43.380 | You know, in our brains,
00:21:45.060 | there are all these neurons that are firing.
00:21:46.580 | -Right. -There's 100 billion of them
00:21:47.980 | in each of our brains.
00:21:48.900 | -And there's a lot of sensory input
00:21:50.260 | besides the words that you're saying
00:21:51.660 | that are traveling through vibrations in the air to my ear.
00:21:54.660 | And that's some information from you.
00:21:56.100 | But there's so much more information
00:21:57.580 | that the human brain can gather
00:21:59.420 | and is building models around all the time,
00:22:01.020 | making predictions around
00:22:01.860 | whether this light's gonna be on or off
00:22:03.180 | or that person's gonna go to the bathroom
00:22:04.900 | or sit back down or what have you.
00:22:07.060 | [ Laughter ]
00:22:08.580 | But it's -- -But, you know, I think
00:22:09.980 | one of the things that's sort of interesting is this,
00:22:11.900 | you know, we've got stuff in our brains.
00:22:14.020 | We are trying to package up those thoughts.
00:22:16.900 | You know, the structure of each of our brains is different.
00:22:19.580 | So the particular nerve firings are different.
00:22:22.140 | But we're trying to package up those thoughts
00:22:24.380 | in a kind of transportable way.
00:22:26.140 | That's what language tends to do.
00:22:28.060 | It kind of packages concepts into something transportable.
00:22:29.380 | -And that's what these LLMs have done.
00:22:31.580 | -Yes. I mean, they're --
00:22:32.740 | -Because they're outputting a packet of communication to me.
00:22:34.940 | -Yes. Yes. I mean, so --
00:22:36.220 | -But is there anything else that's exciting to you
00:22:37.860 | from a computational perspective?
00:22:41.700 | What else can the AIs do?
00:22:44.420 | And what else -- -Oh, okay.
00:22:45.620 | We learn a lot from the AIs. -Yes.
00:22:47.420 | -You know, they're telling us there is a science of LLMs,
00:22:50.900 | which is completely not worked out yet.
00:22:53.300 | There's kind of a bulk science of knowledge and things
00:22:56.460 | that the LLMs are kind of showing us is there.
00:22:59.300 | There's a kind of a science of the semantics of language,
00:23:01.860 | which LLMs are showing us is there.
00:23:03.500 | -Yes. -But we haven't found it yet.
00:23:04.940 | -Yes. -It's kind of like we just saw
00:23:06.700 | some new piece of nature,
00:23:08.540 | and we now get to make science about that kind of nature.
00:23:11.900 | Now, it's not obvious that we can make sort of science
00:23:14.300 | where we can tell a narrative story about what's going on.
00:23:16.940 | -Right. -It could be that we're just --
00:23:18.500 | we're just sort of dumped into computational irreducibility,
00:23:20.900 | and we just say it does this because --
00:23:22.500 | -So there's this black box that the training model created.
00:23:27.980 | That black box, we don't know what it does.
00:23:29.500 | I put a bunch of words in, a bunch of words come out.
00:23:31.180 | It's amazing.
00:23:32.180 | Now you're saying we're going to try and understand the nature,
00:23:36.060 | the graphs, the nature of that box,
00:23:38.180 | and that'll tell us a little bit something about --
00:23:40.860 | -I think we'll discover that, for example,
00:23:42.420 | we'll discover that human language is much less --
00:23:48.820 | it's much simpler to describe human language
00:23:50.900 | than we had thought. -Yeah.
00:23:52.220 | -In other words, it's showing us rules of human language
00:23:54.580 | that we didn't know. -Yes.
00:23:55.900 | -And that's an interesting thing.
00:23:57.540 | Now, if you ask, "What else do we learn from the AI?"
00:24:00.140 | So I'll give you another example of something
00:24:01.540 | I was playing with recently.
00:24:03.620 | So, you know, you use image generation,
00:24:06.620 | generative AI for making images,
00:24:08.980 | as we can now see also making videos and so on.
00:24:11.580 | There's this question of you go inside the AI,
00:24:14.700 | and you say, you know, inside the AI,
00:24:17.100 | you know, the concept of a cat
00:24:18.860 | is represented by some vector of 1,000 numbers, let's say,
00:24:21.660 | the concept of a dog and other 1,000 numbers.
00:24:24.140 | You just say, "Let's take these vectors of numbers,
00:24:26.620 | and let's just take arbitrary numbers.
00:24:29.180 | What does the AI think --
00:24:31.180 | What is the thing that corresponds
00:24:32.860 | to the sort of arbitrary vector of numbers?"
00:24:34.700 | Okay? So you can have these definite concepts,
00:24:37.100 | like cat and dog, their particular numbers
00:24:39.060 | in this sort of space of all possible concepts.
00:24:41.780 | And there's this idea --
00:24:43.060 | I've been calling it inter-concept space.
00:24:45.540 | What's between kind of the concept of a cat
00:24:48.060 | and the concept of a dog?
00:24:49.580 | And the answer is there's a huge amount of stuff,
00:24:52.540 | even from a generative AI that's learned from us.
00:24:55.940 | -Yeah. -It is finding
00:24:57.620 | these kind of inter-concept things...
00:25:00.100 | -Right. -...that are in between
00:25:01.780 | the things for which we have words.
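In code, the probe he is describing looks roughly like this; `cat_vec`, `dog_vec`, and `decode` below are hypothetical placeholders for a real model's embeddings and image decoder:

```python
# Sketch: stepping through "inter-concept space" by interpolating
# between two concept vectors. The vectors and decode() are
# hypothetical stand-ins, not a real model's internals.

import numpy as np

rng = np.random.default_rng(0)
cat_vec = rng.normal(size=1000)  # placeholder "cat" embedding
dog_vec = rng.normal(size=1000)  # placeholder "dog" embedding

def decode(vec):
    """Stand-in for a generative model's vector-to-image decoder."""
    return f"<image decoded from vector, norm {np.linalg.norm(vec):.2f}>"

# the points along the line from cat to dog have no word of their own
for t in np.linspace(0.0, 1.0, 5):
    between = (1 - t) * cat_vec + t * dog_vec
    print(f"t={t:.2f}:", decode(between))
```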
00:25:03.140 | -Right. -Sort of embarrassing to us
00:25:04.860 | that, you know, simple estimate, simple case.
00:25:07.980 | If you say what fraction of the space
00:25:10.260 | of all possible concepts, so to speak,
00:25:12.500 | is now filled with actual words that we have?
00:25:15.260 | -Yes. -Of the 50,000 words
00:25:16.860 | that are common in English, for example.
00:25:18.460 | -50,000. Okay. -What, um...
00:25:21.380 | You know, what -- Yeah, that's a wrong number.
00:25:22.900 | I didn't know that number. That's interesting. Yeah.
00:25:24.260 | Yeah, right. If you're an LLM person,
00:25:25.780 | that's the -- You know, when it produces the --
00:25:28.180 | When it says, "What's the next word going to be?"
00:25:30.300 | 'Cause that's what LLMs are always doing.
00:25:31.620 | They're just trying to predict the next word.
00:25:33.340 | What it's doing is it says,
00:25:34.700 | "Here's this list of 50,000 numbers,
00:25:36.340 | which are the probabilities for each of the possible
00:25:38.580 | 50,000 words in English."
00:25:40.420 | And then it'll pick the most likely one
00:25:42.100 | or the next most likely one or whatever else.
00:25:45.060 | But in any case, the -- You know, this --
00:25:47.340 | When you ask the question,
00:25:48.860 | in the space of sort of all possible concepts,
00:25:51.420 | how many do we have words for?
00:25:53.100 | The answer is it's one in 10 to the 600.
00:25:55.900 | Wow. -It's like we have --
00:25:57.420 | We have explored a tiny, tiny fraction
00:26:00.300 | of even the kind of concepts that are revealed
00:26:02.780 | by the things that sort of we put out there on the Web
00:26:05.380 | and so on. -That's more than
00:26:06.660 | there are atoms in the universe.
00:26:08.500 | Yeah, there are 10 to the 80th atoms in the universe.
00:26:10.700 | Right. 10 to the 80th is very small
00:26:12.220 | compared to a lot of the numbers one deals with.
00:26:14.660 | But, you know, I think the --
00:26:17.020 | [ Laughter ]
00:26:19.420 | Yeah, it's a -- We should talk about the universe.
00:26:21.660 | I want to talk about the universe.
00:26:23.020 | Yeah, right. -It's pretty cool.
00:26:24.380 | So, um...
00:26:25.660 | [ Laughter ]
00:26:28.140 | People want to talk about AI.
00:26:29.220 | That's why I wanted to just make sure.
00:26:30.300 | Yeah, yeah. Please. -We got your point of view.
00:26:31.500 | So it's appreciated.
00:26:34.700 | What is the nature of the universe?
00:26:37.340 | And what do we have wrong?
00:26:39.820 | What's that? -And what do we have wrong?
00:26:41.660 | Yeah, so I mean -- -What does consensus have wrong?
00:26:44.180 | Yeah, right. Well, so, I mean, the physics --
00:26:47.580 | This is like where you, like, let the rains go,
00:26:49.660 | and then you go.
00:26:51.220 | Yeah, so, like, we had a great chat at dinner
00:26:52.780 | the other night, so I'm excited.
00:26:53.860 | Yeah, go ahead. -It's, you know,
00:26:57.100 | physics as we know it right now was kind of --
00:27:00.500 | 100 years ago, big advances made in physics,
00:27:03.180 | three big theories in physics --
00:27:05.540 | general relativity, the theory of gravity;
00:27:07.460 | quantum mechanics, the theory of small kinds of things;
00:27:10.300 | and statistical mechanics, the theory of heat;
00:27:13.060 | and sort of the second law of thermodynamics,
00:27:15.820 | the law of entropy increase, things like this.
00:27:17.820 | Okay, so those three big theories
00:27:19.220 | that were invented about 100 years ago.
00:27:21.140 | Physics has been sort of on a gradual,
00:27:23.020 | incremental trajectory since then.
00:27:26.060 | I've been interested in trying to understand
00:27:27.580 | kind of what could be underneath everything
00:27:30.180 | that we see in the world,
00:27:32.140 | and I think we figured it out, which is kind of exciting.
00:27:35.260 | Not something I expected to see in my lifetime,
00:27:38.140 | not something I kind of expected to see for 50 or 100,
00:27:40.740 | more than that, years.
00:27:42.300 | And kind of the number-one thing that you say,
00:27:45.580 | "What do people get wrong?"
00:27:47.060 | One question is, "What is space?"
00:27:49.340 | Well, you know, people just sort of say,
00:27:50.900 | "Space is a continuous thing.
00:27:52.380 | You put things in any different place in space."
00:27:54.620 | People have been arguing about whether space is continuous
00:27:57.340 | or discrete since antiquity.
00:27:59.020 | -Discrete means that it's broken up into little pieces.
00:28:01.700 | -Yes, yes.
00:28:02.940 | So, you know, this argument also happened for matter.
00:28:06.100 | Is matter continuous or discrete?
00:28:07.900 | You know, you have water.
00:28:09.100 | Is it just this continuous fluid,
00:28:10.820 | or is it made of discrete kinds of things?
00:28:13.380 | That got answered about 120 years ago.
00:28:15.900 | The answer is water is made of discrete molecules.
00:28:18.500 | Same thing happened for light.
00:28:19.860 | Light is made of discrete photons.
00:28:21.820 | Space got kind of left out.
00:28:24.100 | Space, people still think it's a continuous kind of thing.
00:28:27.220 | So the kind of starting point of things I've tried to figure out
00:28:30.900 | is actually, no, that's not true.
00:28:33.100 | Space is discrete.
00:28:34.540 | There are atoms of space, so to speak,
00:28:36.380 | not like physical atoms like hydrogen and helium and things.
00:28:39.340 | They're just these sort of points.
00:28:42.300 | -They're slots where things can fit.
00:28:44.460 | Is that a way to think about it?
00:28:45.820 | -They're really just things.
00:28:48.580 | Nothing fits in them.
00:28:49.700 | I mean, they're just all that one knows about them.
00:28:52.380 | They're abstract things.
00:28:53.740 | All one knows is that one is different from another.
00:28:56.940 | -So there are these two discrete things next to each other.
00:29:00.460 | -There's no next.
00:29:01.060 | -There's no next, right, because there is no space.
00:29:03.340 | -We're about to build space.
00:29:04.420 | -So there is a graph.
00:29:05.820 | Is that a way to think about it?
00:29:06.820 | -Yeah.
00:29:07.340 | -A relationship between two things.
00:29:09.100 | -Right.
00:29:09.460 | Well, a relationship between several things.
00:29:11.100 | I mean, it's kind of like a giant friend network
00:29:13.460 | of the atoms of space.
00:29:14.380 | -Right.
00:29:15.100 | -And that's all there is.
00:29:16.420 | That's what our universe is made of.
00:29:18.500 | And all the things that we observe-- electrons,
00:29:21.580 | black holes, all these kinds of things-- they're all just
00:29:24.180 | features of this network.
00:29:25.740 | And it's kind of like you might say,
00:29:27.780 | how could that possibly be the way things are?
00:29:30.300 | If you think about something like water,
00:29:32.220 | you think about a little eddy in the water.
00:29:34.140 | That eddy is a thing where you can
00:29:35.680 | say there's a definite eddy that's moving through the water,
00:29:38.380 | yet it's made of lots of discrete molecules of water,
00:29:42.460 | yet we can identify it as a definite thing.
00:29:45.260 | And so it is with electrons, et cetera, et cetera.
00:29:48.460 | In the universe, it's all features
00:29:50.220 | of this giant network.
00:29:51.860 | And that's the way it seems to be.
00:29:54.740 | -So does each particle, as we know it-- an electron,
00:29:59.100 | a proton-- have a feature that defines
00:30:04.740 | its relative connectedness to other particles?
00:30:09.060 | And that definition, that little number,
00:30:11.540 | is what we look at as space as a whole?
00:30:14.180 | Is that a way to think about it?
00:30:15.780 | -No, an electron is a pretty big thing relative
00:30:18.140 | to the atoms of space.
00:30:19.420 | We don't know exactly how big, but it's a big, floppy thing.
00:30:23.620 | The feature-- I mean, right now, it
00:30:26.860 | has been assumed that electrons are actually infinitesimally
00:30:29.420 | small, but that doesn't seem to be true.
00:30:31.700 | But the thing that kind of defines something
00:30:34.260 | like an electron is kind of like it's
00:30:36.220 | sort of a topological kind of thing.
00:30:37.740 | It's like you can have a piece of string,
00:30:39.900 | and you can either knot it or you can not knot it.
00:30:42.460 | And there are lots of different ways you can make the knot,
00:30:44.700 | but it's still-- it's either knotted or it's not knotted.
00:30:46.740 | And there's either an electron or there isn't an electron.
00:30:49.700 | So the structure of space, it's kind
00:30:52.580 | of much like what happens between molecules
00:30:55.020 | and a fluid like water.
00:30:56.500 | There are all these discrete atoms of space,
00:30:58.940 | and they have these relations to each other.
00:31:01.060 | And then if you look at a large scale, what are all these--
00:31:04.780 | and I should say, by the way, that these atoms of space,
00:31:07.500 | the main thing that's happening is pieces of this network
00:31:10.460 | are getting rewritten.
00:31:11.660 | So a little piece of network--
00:31:12.900 | This is really important.
00:31:14.460 | I think this is really important because I'm
00:31:16.060 | going to ask the follow on question.
00:31:17.340 | All right.
00:31:17.840 | So what is time?
00:31:19.820 | Time is this kind of computational process
00:31:22.580 | where this network that represents--
00:31:25.100 | The change through this physical network is time.
00:31:29.200 | So time is a computational phenomenon.
00:31:31.620 | Time is the progressive change of the network.
00:31:35.500 | One particle touches another one, touches another one.
00:31:38.260 | This thing changes this one.
00:31:39.460 | This changes this one.
00:31:40.420 | Yeah.
00:31:41.060 | Sounds to me like you're describing
00:31:42.500 | your cellular automata.
00:31:44.780 | Yes, except it's very much like that.
00:31:46.780 | It's a computational process.
00:31:48.580 | And it happens to be operating on these hypergraphs
00:31:51.420 | rather than just lines of black and white cells.
00:31:53.420 | So the universe itself is a computer--
00:31:57.540 | --running a computational exercise.
00:32:00.380 | I don't know whether you'd call it an exercise, but yes.
00:32:02.940 | It is computing.
00:32:03.980 | Yes, it is computing.
00:32:04.900 | And that's what the progress of time is.
00:32:07.180 | And time is the progress of that computation.
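As a cartoon of what "the network getting rewritten" means (this particular rule, every edge sprouting a fresh node, is an illustrative toy of mine, not the Wolfram Physics Project's actual rule):

```python
# Sketch: "time" as successive rewrites of a network. Toy rule:
# each edge (x, y) additionally sprouts a new edge (y, z) to a
# freshly created node z, so the network grows at every step.

def rewrite(edges, next_node):
    new_edges = []
    for (x, y) in edges:
        new_edges.append((x, y))
        new_edges.append((y, next_node))  # edge to a brand-new node
        next_node += 1
    return new_edges, next_node

edges, fresh = [(0, 1)], 2
for step in range(5):
    edges, fresh = rewrite(edges, fresh)
    print(f"after step {step + 1}: {len(edges)} edges, {fresh} nodes")
```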
00:32:10.220 | And that is what people mean when they say,
00:32:12.060 | are we living in a simulation?
00:32:14.360 | [LAUGHTER]
00:32:16.940 | No, I think that's a philosophically rather confused
00:32:19.300 | thing.
00:32:19.780 | I mean, there's a little bit deeper in the rabbit hole
00:32:22.460 | that you have to go to understand why that--
00:32:24.780 | We'll do that at lunch.
00:32:25.780 | --doesn't really make sense.
00:32:27.180 | No, go ahead.
00:32:27.740 | But no, so I mean, then the question
00:32:31.700 | is, just like you have all these molecules bouncing around,
00:32:34.460 | they make sort of continuous fluids like water,
00:32:37.500 | you can ask, what do all these atoms of space make?
00:32:40.340 | Turns out they make Einstein's equations
00:32:43.100 | that describe gravity and describe
00:32:44.740 | the structure of space-time.
00:32:46.020 | So that's kind of a neat thing.
00:32:48.740 | Because it's not been imagined that one
00:32:50.940 | could derive the properties of something like that
00:32:54.820 | from something lower level.
00:32:56.260 | I mean, I should say, OK, this is a big, complicated subject.
00:33:00.300 | But the thing that--
00:33:03.020 | You guys doing OK?
00:33:04.460 | [APPLAUSE]
00:33:08.340 | OK, good.
00:33:11.380 | The thing that's pretty interesting in all of this
00:33:14.220 | is the way that the nature of us as observers
00:33:17.820 | is critical to the kind of thing that we observe in--
00:33:21.380 | This is the most important thing I've
00:33:22.920 | heard in the last year when I watched your interview
00:33:24.660 | a few weeks ago.
00:33:25.340 | So I just want you to walk everyone
00:33:26.900 | through this statement again, because this is so important.
00:33:30.340 | Well, all right.
00:33:31.140 | So the-- OK.
00:33:32.900 | The question is, for example, if we're
00:33:41.020 | looking at a bunch of molecules bouncing around in a gas,
00:33:44.180 | and we say, what do we see in those molecules?
00:33:48.260 | Well, we see things like the gas laws, pressure, volume,
00:33:51.620 | things like this.
00:33:52.300 | We see things like the gas tends to get more random,
00:33:55.140 | things like this.
00:33:55.900 | That's what we see, because we're observers
00:33:58.740 | with certain characteristics.
00:34:00.500 | If we were observers who could trace every molecule,
00:34:02.820 | do all the computations to figure out
00:34:04.400 | what all the molecules would do, we
00:34:06.140 | would make quite different conclusions
00:34:07.700 | about what happens in gases.
00:34:09.460 | It's because we are observers who
00:34:11.460 | are bounded in our computational capabilities.
00:34:13.900 | We're not capable of untangling all those kinds of detail.
00:34:17.060 | To see all those atoms, we have to look
00:34:18.980 | at the whole can of gas.
00:34:21.100 | We can't look at each atom individually.
00:34:23.340 | Right.
00:34:23.840 | We can't trace the motion of each atom individually.
00:34:27.660 | So our heuristic is PV equals nRT, the gas laws.
00:34:32.820 | Exactly.
00:34:33.460 | We look at that gas, and we say, this
00:34:35.820 | is the temperature, which reflects
00:34:37.500 | the average energy of the system,
00:34:39.260 | as opposed to the energy of each individual atom.
00:34:41.300 | And there's a lot of different energy states
00:34:42.580 | of all these different atoms.
00:34:43.840 | So we sum everything up.
00:34:45.240 | We take an average of a lot of stuff to understand it.
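Written out, the two levels of description being contrasted here are standard textbook physics (my formulation, not a quote from the conversation):

```latex
\[
\text{micro: } m\,\ddot{x}_i = F_i(x_1,\dots,x_N),\; i=1,\dots,N
\qquad\longrightarrow\qquad
\text{macro: } PV = nRT,
\]
\[
\text{with } \tfrac{3}{2}\,k_B T \;=\; \bigl\langle \tfrac{1}{2}\,m\,|v|^2 \bigr\rangle
\text{ (temperature as an average, not a per-molecule fact).}
\]
```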
00:34:48.660 | Right.
00:34:49.180 | And the point is that people haven't understood the fact
00:34:51.540 | that we have to take that average, because what's
00:34:55.340 | underneath is computationally irreducible, but we--
00:34:58.500 | We can't compute.
00:34:59.540 | --are computationally bounded.
00:35:01.540 | And it turns out that exact same phenomenon
00:35:04.100 | is the origin of space time and gravity,
00:35:06.620 | that all these atoms of space operating in this network,
00:35:10.180 | it's all this computationally irreducible process.
00:35:13.700 | But observers like us, observers who are computationally bounded,
00:35:18.180 | necessarily observe this kind of sort
00:35:20.940 | of aggregate behavior, which turns out
00:35:23.140 | to correspond to the structure of space time.
00:35:25.300 | Same happens in quantum mechanics.
00:35:27.140 | Quantum mechanics is a little bit harder to understand.
00:35:29.420 | One of the big features of quantum mechanics
00:35:31.260 | is, in classical physics, you say, I throw a ball.
00:35:35.500 | It follows a definite trajectory.
00:35:37.220 | In quantum mechanics, you say, I do something.
00:35:39.900 | There are many possible paths of history that can be followed.
00:35:42.500 | We only get to work out things about averages of those paths.
00:35:45.980 | So it's sort of there are many different paths of history
00:35:48.380 | that are being pursued in these networks and things.
00:35:52.180 | What's happening is that there are many different rewrites
00:35:54.560 | to the network that can occur.
00:35:56.100 | We get these many branches of history.
00:35:58.100 | And so then the question is--
00:35:59.940 | and like a quantum computer is trying
00:36:01.740 | to make use of the fact that there are many branches
00:36:03.940 | of history that it can kind of follow in parallel.
00:36:06.740 | But then the question is, well, how do we observe
00:36:09.100 | what's actually happening?
00:36:10.420 | Well, we are embedded in this whole system that
00:36:13.140 | has all these branching paths of history and so on.
00:36:15.340 | And we're limited.
00:36:16.460 | Well, we, our brains are full of sort of branching behavior
00:36:20.480 | and so on.
00:36:21.380 | But we are sort of computationally
00:36:24.620 | bounded in what we can do.
00:36:25.720 | We are effectively-- the question you have to ask
00:36:29.440 | is, how does sort of the branching brain
00:36:31.240 | perceive the branching universe?
00:36:33.280 | And as soon as the brain has this sort
00:36:36.000 | of computational boundedness in its characteristics,
00:36:38.880 | and it also has one other assumption, which is it assumes
00:36:41.760 | that it is persistent in time.
00:36:43.920 | So it's like we are-- at every moment,
00:36:46.240 | we are made of different atoms of space.
00:36:48.600 | Yet we believe that it's the same us
00:36:51.300 | at every successive moment.
00:36:52.700 | Right.
00:36:53.300 | As soon as we--
00:36:53.900 | So our concept of consciousness precludes us
00:36:56.900 | from being able to see perhaps a different nature
00:37:00.100 | of the universe, a different--
00:37:02.500 | I told you guys we were going to go like really--
00:37:04.500 | Right.
00:37:05.060 | So it's kind of like observers like us
00:37:07.860 | who are computationally bounded believe
00:37:09.420 | they're persistent in time.
00:37:11.060 | The big result is that observers like us inevitably
00:37:15.340 | observe laws of physics like the ones we know.
00:37:18.700 | And so imagine sort of the alien that
00:37:20.940 | isn't like us, that isn't computationally bounded,
00:37:23.020 | doesn't believe it's persistent in time.
00:37:25.320 | It will observe different laws of the universe.
00:37:27.860 | And so the laws of the universe are only based on our nature.
00:37:32.600 | But a very--
00:37:33.100 | That's what I thought was so interesting.
00:37:34.560 | Right.
00:37:35.060 | It's a very coarse feature of our nature.
00:37:37.240 | It's not like we have to know every detail of us.
00:37:40.020 | The laws of physics only require these--
00:37:42.460 | and actually, I suspect that as we--
00:37:45.200 | there are other things that we probably take for granted
00:37:47.620 | about the nature of us as observers.
00:37:49.620 | And as we start putting these in,
00:37:51.000 | we'll probably actually find more things
00:37:53.140 | that are inevitable about the way that physics works.
00:37:55.860 | So the belief, as I use that term,
00:37:58.840 | I kind of feel like I'm deluding myself,
00:38:03.420 | because I am not the same atoms as I was a second ago,
00:38:06.100 | a second ago, a second ago, a second ago.
00:38:09.700 | This belief that we have a self--
00:38:11.620 | talk about your understanding.
00:38:13.180 | I know this is a bit far-fetched from maybe your specialty,
00:38:15.500 | but I'm sure you have a point of view.
00:38:17.080 | And then what is this concept of consciousness
00:38:19.100 | that we have where we think we're persistent in time,
00:38:22.580 | where we have this concept of self-identity?
00:38:26.020 | Where does this all arise?
00:38:27.140 | And how do you think about this notion of consciousness
00:38:29.540 | and the observer in the context of the universe?
00:38:31.980 | I'm observing stuff in the universe,
00:38:33.480 | and I think I'm a human body, and I'm really--
00:38:35.820 | I'm a bunch of atoms floating around
00:38:37.240 | with a bunch of other atoms.
00:38:38.580 | I used to think there was a sort of hierarchy
00:38:40.900 | where consciousness was at the top,
00:38:42.980 | but I don't think that anymore.
00:38:44.260 | I think consciousness is a very--
00:38:45.860 | it's just like the AIs are not doing all possible computation.
00:38:49.740 | We are actually rather limited in our observation
00:38:53.900 | of the universe.
00:38:55.380 | We're localized in space.
00:38:57.620 | We have this belief that we're persistent in time and so on.
00:39:00.540 | Imagine what you would feel like, so to speak,
00:39:03.660 | if you were much more extended in the universe.
00:39:05.700 | If you were-- in fact, one of the things
00:39:08.060 | that we see in our models is this thing
00:39:10.820 | we call the ruliad, which is this entangled limit of all
00:39:14.700 | possible computations.
00:39:16.300 | And we are-- every mind, in a sense,
00:39:18.900 | is just at some small point, some small region
00:39:22.100 | in this ruliad.
00:39:23.540 | And so it's-- you imagine what happens if you--
00:39:28.820 | by the way, as we learn more in science,
00:39:31.300 | we're effectively expanding in this kind of rulial space,
00:39:34.180 | where it's just like we can send spacecraft out--
00:39:36.540 | Our aggregate consciousness.
00:39:38.880 | As a species.
00:39:40.900 | So just like we can send spacecraft out
00:39:43.340 | that explore more of the physical universe,
00:39:45.540 | so as we expand our science, as we expand the ideas that we
00:39:49.340 | use to describe the universe, we're
00:39:51.260 | expanding in this rulial space.
00:39:53.420 | And so you might say, well, what happens if we expand?
00:39:56.820 | That should be the future of civilization,
00:39:58.740 | to expand in rulial space, to expand
00:40:01.100 | our domain of understanding of things.
00:40:04.980 | This is a shifting consciousness,
00:40:07.420 | like hippie type question, but there's
00:40:11.700 | this guy in the UK named Derren Brown.
00:40:15.260 | He's a mentalist.
00:40:16.780 | He puts these two advertising execs in a room,
00:40:19.500 | and he tells them, hey, come up with an ad.
00:40:21.820 | The name of the company, come up with a logo,
00:40:24.020 | come up with a catchphrase.
00:40:25.220 | He goes out for a few hours, comes back, pulls off a thing.
00:40:27.980 | He copied exactly what they-- he had written down exactly what
00:40:30.860 | they were going to do.
00:40:31.900 | The way he did it is, as they drove over,
00:40:33.300 | he subliminally put a little image in the cab.
00:40:35.660 | He had some kids walk across the street with a logo.
00:40:38.020 | And they just basically were programmed
00:40:40.220 | to output what he asked them to do.
00:40:43.140 | And they thought that they were creative geniuses.
00:40:45.220 | They're like, we're these high-paid ad execs.
00:40:47.060 | Look at our genius.
00:40:47.860 | Look at what we did.
00:40:49.060 | And it always struck me as like the human is just
00:40:51.580 | the unconscious computer.
00:40:53.340 | We're just the node in the neural net that takes the input,
00:40:58.900 | gets sensory program output, and we're
00:41:01.380 | part of the computational exercise.
00:41:04.020 | Is that a way to think about this idea
00:41:07.740 | that we are part of this broader computation?
00:41:09.740 | And as we do that, this consciousness--
00:41:11.620 | I mean, the person you mentioned,
00:41:13.820 | they're cheating computational irreducibility, so to speak.
00:41:17.460 | They're saying, I'm going to put this thing which
00:41:19.820 | is going to be the answer.
00:41:21.380 | And that's-- the more interesting way
00:41:25.020 | to lead life, in a sense, is just
00:41:27.900 | by this process of letting time progress
00:41:30.900 | and this irreducible computation occur.
00:41:33.700 | And is that what gives you joy?
00:41:36.780 | Yeah, I think so.
00:41:37.540 | I mean, I think it's a funny thing, because when you think
00:41:40.900 | you know what's underneath the universe
00:41:43.620 | and how all these ideas fit together,
00:41:47.340 | and you realize that I'm a person who likes people.
00:41:53.940 | And so it seems very bizarre that I
00:41:55.940 | should be interested in these things that
00:41:58.540 | deconstruct everything about humans.
00:42:01.900 | And I realized at some point, one
00:42:03.540 | of the things about doing science that's
00:42:05.340 | one of the more difficult human things about doing science
00:42:07.860 | is you have to get rid of your prejudices
00:42:11.220 | about what might be true and just follow
00:42:14.300 | what the science actually says.
00:42:16.180 | And so I've done that for years.
00:42:17.980 | And I realized, actually, it turns out
00:42:20.140 | the thing that I've done puts humans right back
00:42:22.780 | in the middle of the picture with these things
00:42:25.180 | about the fact that it matters what the observer is like,
00:42:28.220 | realizing that in this space of possibilities,
00:42:32.020 | that what we care about is this part that is the result
00:42:36.460 | of human history and so on.
00:42:39.060 | You guys asked for more Science Corner,
00:42:40.700 | so I hope this fit the bill.
00:42:42.420 | Guys, please join me in thanking Stephen Wolfram.
00:42:45.100 | [APPLAUSE]
00:42:46.100 | [MUSIC PLAYING]
00:42:48.100 | Let your winners ride.
00:42:51.100 | Rain Man, David Sacks.
00:42:52.580 | [MUSIC PLAYING]
00:42:55.580 | And instead, we open source it to the fans,
00:42:57.580 | and they've just gone crazy with it.
00:42:59.580 | Love you besties.
00:43:00.380 | I'm the queen of Quinoa.
00:43:01.380 | [MUSIC PLAYING]