Scott Aaronson: Computational Complexity and Consciousness | Lex Fridman Podcast #130
Chapters
0:00 Introduction
3:31 Simulation
8:22 Theories of everything
14:02 Consciousness
36:16 Roger Penrose on consciousness
46:28 Turing test
50:16 GPT-3
58:46 Universality of computation
65:17 Complexity
71:23 P vs NP
83:41 Complexity of quantum computation
95:48 Pandemic
109:33 Love
00:00:00.000 |
The following is a conversation with Scott Aaronson, his second time on the podcast. 00:00:04.240 |
He is a professor at UT Austin, director of the Quantum Information Center, 00:00:10.320 |
and previously a professor at MIT. Last time we talked about quantum computing, 00:00:15.920 |
this time we talk about computational complexity, consciousness, and theories of everything. 00:00:24.160 |
I'm recording this intro, as you may be able to tell, in a very strange room in the middle of the 00:00:32.240 |
night. I'm not really sure how I got here or how I'm going to get out, but a Hunter S. Thompson saying, 00:00:40.560 |
I think, applies to today and the last few days and actually the last couple of weeks. 00:00:46.880 |
"Life should not be a journey to the grave with the intention of arriving safely in a pretty and 00:00:53.680 |
well-preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, 00:01:00.640 |
totally worn out, and loudly proclaiming, 'Wow, what a ride.'" So I figured whatever I'm up to here, 00:01:10.400 |
and yes, lots of wine is involved, I'm gonna have to improvise, hence this recording. Okay, 00:01:18.480 |
quick mention of each sponsor, followed by some thoughts related to the episode. 00:01:23.200 |
First sponsor is SimpliSafe, a home security company I use to monitor and protect my apartment, 00:01:28.720 |
though of course I'm always prepared with a fallback plan, as a man in this world must always 00:01:36.720 |
be. Second sponsor is Eight Sleep, a mattress that cools itself, measures heart rate variability, 00:01:46.080 |
has an app, and has given me yet another reason to look forward to sleep, including the all 00:01:52.880 |
important PowerNap. Third sponsor is ExpressVPN, the VPN I've used for many years to protect my 00:01:59.840 |
privacy on the internet. Finally, the fourth sponsor is BetterHelp, online therapy when you 00:02:07.200 |
want to face your demons with a licensed professional, not just by doing David Goggins-like 00:02:12.800 |
physical challenges like I seem to do on occasion. Please check out these sponsors in the description 00:02:18.800 |
to get a discount and to support the podcast. As a side note, let me say that this is the second 00:02:25.280 |
time I recorded a conversation outdoors. The first one was with Stephen Wolfram when it was actually 00:02:31.680 |
sunny out, in this case it was raining, which is why I found a covered outdoor patio. But I learned 00:02:38.960 |
a valuable lesson, which is that raindrops can be quite loud on the hard metal surface of a patio 00:02:45.840 |
cover. I did my best with the audio, I hope it still sounds okay to you. I'm learning, always 00:02:52.320 |
improving. In fact, as Scott says, if you always win, then you're probably doing something wrong. 00:02:59.120 |
To be honest, I get pretty upset with myself when I fail, small or big, but I've learned that this 00:03:06.880 |
feeling is priceless. It can be fuel when channeled into concrete plans of how to improve. 00:03:14.480 |
So if you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, 00:03:20.320 |
follow on Spotify, support on Patreon, or connect with me on Twitter @lexfridman. 00:03:25.920 |
And now, here's my conversation with Scott Aaronson. 00:03:29.760 |
Let's start with the most absurd question, but I've read you write some fascinating stuff about 00:03:35.520 |
it, so let's go there. Are we living in a simulation? What difference does it make? 00:03:43.760 |
Because if we are living in a simulation, it raises the question, how real does something 00:03:49.360 |
have to be in a simulation for it to be sufficiently immersive for us humans? 00:03:53.920 |
But I mean, even in principle, how could we ever know if we were in one, right? A perfect simulation, 00:03:59.920 |
by definition, is something that's indistinguishable from the real thing. 00:04:03.200 |
But we didn't say anything about perfect. It could be imperfect. 00:04:05.040 |
No, no, that's right. Well, if it was an imperfect simulation, if we could hack it, 00:04:10.240 |
you know, find a bug in it, then that would be one thing, right? If this was like the Matrix, 00:04:15.360 |
and there was a way for me to, you know, do flying Kung Fu moves or something by 00:04:20.400 |
hacking the simulation, well, then, you know, we would have to cross that bridge when we came to 00:04:24.400 |
it, wouldn't we? Right? I mean, at that point, you know, it's hard to see the difference between that 00:04:31.360 |
and just what people would ordinarily refer to as a world with miracles, you know? 00:04:36.800 |
What about from a different perspective, thinking about the universe as a computation, 00:04:41.840 |
like a program running on a computer? That's kind of a neighboring concept. 00:04:45.600 |
It is. It is an interesting and reasonably well-defined question to ask, 00:04:50.480 |
is the world computable? You know, does the world satisfy what we would call in CS the 00:04:56.880 |
Church-Turing thesis? That is, you know, could we take any physical system and simulate it to, 00:05:04.640 |
you know, any desired precision by a Turing machine, you know, given the appropriate input 00:05:09.680 |
data, right? And so far, I think the indications are pretty strong that our world does seem to 00:05:16.160 |
satisfy the Church-Turing thesis. At least if it doesn't, then we haven't yet discovered why not. 00:05:22.320 |
But now, does that mean that our universe is a simulation? Well, you know, that word seems to 00:05:29.760 |
suggest that there is some other larger universe in which it is running, right? And the problem 00:05:35.680 |
there is that if the simulation is perfect, then we're never going to be able to get any direct 00:05:41.600 |
evidence about that other universe. You know, we will only be able to see the effects of the 00:05:47.760 |
computation that is running in this universe. Well, let's imagine an analogy. Let's imagine a 00:05:54.240 |
PC, a personal computer, a computer. Is it possible with the advent of artificial intelligence for the 00:06:02.080 |
computer to look outside of itself to see, to understand its creator? I mean, is that a 00:06:10.400 |
ridiculous analogy? Well, I mean, with the computers that we actually have, I mean, first of all, 00:06:17.840 |
we all know that humans have done an imperfect job of enforcing the abstraction boundaries of 00:06:25.280 |
computers, right? You may try to confine some program to a playpen, but as soon as there's one 00:06:32.560 |
memory allocation error in the C program, then the program has gotten out of that playpen and 00:06:39.920 |
it can do whatever it wants, right? This is how most hacks work, you know, with the viruses and 00:06:45.600 |
worms and exploits. And you would have to imagine that an AI would be able to discover something 00:06:51.840 |
like that. Now, of course, if we could actually discover some exploit of reality itself, 00:06:58.640 |
then in some sense, we wouldn't have to philosophize about this, right? This would 00:07:06.880 |
no longer be a metaphysical conversation. This would just be a... 00:07:12.160 |
But the question is, what would that hack look like? 00:07:15.120 |
Yeah, well, I have no idea. I mean, Peter Shore, you know, a very famous person in quantum 00:07:22.640 |
computing, of course, has joked that maybe the reason why we haven't yet integrated general 00:07:29.280 |
relativity and quantum mechanics is that the part of the universe that depends on both of them was 00:07:35.520 |
actually left unspecified. And if we ever tried to do an experiment involving the singularity of a 00:07:42.320 |
black hole or something like that, then the universe would just generate an overflow error 00:07:49.680 |
Yeah, we would just crash the universe. Now, the universe has seemed to hold up pretty well for 00:07:58.080 |
14 billion years, right? So, you know, my Occam's razor kind of guess has to be that it will continue 00:08:08.560 |
to hold up. That the fact that we don't know the laws of physics governing some phenomenon 00:08:14.480 |
is not a strong sign that probing that phenomenon is going to crash the universe, 00:08:19.360 |
right? But, you know, of course, I could be wrong. 00:08:22.720 |
But do you think on the physics side of things, you know, there's been recently a few folks, 00:08:29.280 |
Eric Weinstein and Stephen Wolfram, that came out with the theory of everything. I think there's a 00:08:33.920 |
history of physicists dreaming and working on the unification of all the laws of physics. Do you 00:08:40.960 |
think it's possible that once we understand more physics, not necessarily the unification of the 00:08:47.200 |
laws, but just understand physics more deeply at the fundamental level, we'll be able to start 00:08:52.960 |
you know, I mean, part of this is humorous, but looking to see if there's any bugs in the universe 00:08:59.920 |
that could be exploited for, you know, traveling at not just speed of light, but just traveling 00:09:06.560 |
faster than our current spaceships can travel, all that kind of stuff? 00:09:10.960 |
Well, I mean, to travel faster than our current spaceships could travel, you wouldn't need to 00:09:15.920 |
find any bug in the universe, right? The known laws of physics, you know, let us go much faster, 00:09:21.360 |
up to the speed of light, right? And you know, when people want to go faster than the speed of 00:09:26.160 |
light, well, we actually know something about what that would entail, namely that, you know, 00:09:31.200 |
according to relativity, that seems to entail communication backwards in time, okay? So then 00:09:37.200 |
you have to worry about closed time-like curves and all of that stuff. So, you know, in some sense, 00:09:42.000 |
we sort of know the price that you have to pay for these things, right? 00:09:46.400 |
But under the current understanding of physics. 00:09:48.880 |
That's right, that's right. We can't, you know, say that they're impossible, but we know that 00:09:54.480 |
sort of a lot else in physics breaks, right? So now, regarding Eric Weinstein and Stephen Wolfram, 00:10:02.960 |
like I wouldn't say that either of them has a theory of everything. I would say that they have 00:10:07.920 |
ideas that they hope, you know, could someday lead to a theory of everything. 00:10:13.760 |
Well, I mean, certainly, let's say by theory of everything, you know, we don't literally mean a 00:10:19.200 |
theory of cats and of baseball and, you know, but we just mean it in the more limited sense of 00:10:25.440 |
everything, a fundamental theory of physics, right? Of all of the fundamental interactions 00:10:32.080 |
of physics. Of course, such a theory, even after we had it, you know, would leave the entire 00:10:38.960 |
question of all the emergent behavior, right? You know, to be explored. So it's only everything for 00:10:46.320 |
a specific definition of everything. Okay, but in that sense, I would say, of course, 00:10:50.400 |
that's worth pursuing. I mean, that is the entire program of fundamental physics, 00:10:55.360 |
right? All of my friends who do quantum gravity, who do string theory, who do anything like that, 00:11:02.800 |
Yeah, it's funny though, but I mean, Eric Weinstein talks about this. It is, 00:11:07.120 |
I don't know much about the physics world, but I know about the AI world. It is a little bit taboo 00:11:12.960 |
to talk about AGI, for example, on the AI side. So really, to talk about the big dream of the 00:11:23.520 |
community, I would say, because it seems so far away, it's almost taboo to bring it up because, 00:11:28.560 |
you know, it's seen as the kind of people that dream about creating a truly superhuman level 00:11:34.400 |
intelligence that's really far out there. People, because we're not even close to that. 00:11:39.200 |
And it feels like the same thing is true for the physics community. 00:11:42.480 |
I mean, Stephen Hawking certainly talked constantly about theory of everything, right? 00:11:48.880 |
You know, I mean, people, you know, use those terms who were, you know, some of the most 00:11:54.240 |
respected people in the whole world of physics, right? But I mean, I think that the distinction 00:12:00.880 |
that I would make is that people might react badly if you use the term in a way that suggests 00:12:07.200 |
that you, you know, thinking about it for five minutes have come up with this major new insight 00:12:12.880 |
about it, right? It's difficult. Stephen Hawking is not a great example because I think you can do 00:12:21.200 |
whatever the heck you want when you get to that level. And I certainly see like senior faculty, 00:12:28.720 |
you know, at that point, that's one of the nice things about getting older is you stop giving a 00:12:34.160 |
damn. But community as a whole, they tend to roll their eyes very quickly at stuff that's outside 00:12:40.080 |
the quote unquote mainstream. Well, let me put it this way. I mean, if you asked, you know, Ed 00:12:44.960 |
Witten, let's say, who is, you know, you might consider a leader of the string community and 00:12:49.760 |
thus, you know, very, very mainstream in a certain sense, but he would have no hesitation in saying, 00:12:55.440 |
you know, of course, you know, they're looking for a, you know, a unified description of nature, 00:13:03.120 |
of, you know, of general relativity, of quantum mechanics, of all the fundamental interactions 00:13:08.480 |
of nature, right? Now, you know, whether people would call that a theory of everything, whether 00:13:14.400 |
they would use that term, that might vary. You know, Lenny Susskind would definitely have no 00:13:19.360 |
problem telling you that, you know, if that's what we want, right? For me, who loves human beings 00:13:24.320 |
and psychology, it's kind of ridiculous to say a theory that unifies the laws of physics gets you 00:13:32.720 |
to understand everything. I would say you're not even close to understanding everything. 00:13:36.800 |
Yeah, right. Well, yeah, I mean, the word everything is a little ambiguous here, 00:13:41.280 |
right? Because, you know, and then people will get into debates about, you know, reductionism 00:13:46.080 |
versus emergentism and blah, blah, blah. And so in not wanting to say theory of everything, 00:13:52.400 |
people might just be trying to short circuit that debate and say, you know, look, you know, 00:13:57.040 |
yes, we want a fundamental theory of, you know, the particles and interactions of nature. 00:14:02.480 |
Let me bring up the next topic that people don't want to mention, although they're getting more 00:14:05.920 |
comfortable with it, is consciousness. You mentioned that you have a talk on consciousness 00:14:10.240 |
that I watched five minutes of, but the internet connection is really bad. 00:14:14.160 |
Was this my talk about, you know, refuting the integrated information theory? 00:14:18.800 |
Which was this particular account of consciousness that, yeah, I think 00:14:21.840 |
one can just show it doesn't work. But it's much harder to say what does work. 00:14:25.600 |
What does work, yeah. Let me ask, maybe it'd be nice to comment on, 00:14:30.160 |
you talk about also like the semi-hard problem of consciousness, or like almost hard, 00:14:37.600 |
Pretty hard. So maybe can you talk about that, their idea of the approach to modeling consciousness 00:14:47.040 |
and why you don't find it convincing? What is it, first of all? 00:14:49.680 |
Okay, well, so what I call the pretty hard problem of consciousness, this is my term, 00:14:55.920 |
although many other people have said something equivalent to this, okay? But it's just, you know, 00:15:02.400 |
the problem of, you know, giving an account of just which physical systems are conscious and 00:15:09.840 |
which are not. Or, you know, if there are degrees of consciousness, then quantifying how conscious they are. 00:15:17.280 |
Oh, awesome. So that's the pretty hard problem. 00:15:20.400 |
That's it, I'm adopting it. I love it. That's a good ring to it. 00:15:23.680 |
And so, you know, the infamous hard problem of consciousness is to explain how something like 00:15:29.760 |
consciousness could arise at all, you know, in a material universe, right? Or, you know, 00:15:34.560 |
why does it ever feel like anything to experience anything, right? And, you know, so I'm trying to 00:15:40.880 |
distinguish from that problem, right? And say, you know, no, okay, I would merely settle for an 00:15:46.880 |
account that could say, you know, is a fetus conscious? You know, if so, at which trimester? 00:15:52.560 |
You know, is a dog conscious? You know, what about a frog, right? 00:15:58.160 |
Or even as a precondition, you take that both these things are conscious, 00:16:03.680 |
Yeah, for example, yes. Yeah, yeah. I mean, if consciousness is some multi-dimensional vector, 00:16:09.360 |
well, just tell me in which respects these things are conscious and in which respect they aren't, 00:16:14.320 |
right? And, you know, and have some principled way to do it where you're not, you know, 00:16:18.960 |
carving out exceptions for things that you like or don't like, but could somehow take a description 00:16:24.800 |
of an arbitrary physical system. And then just based on the physical properties of that system, 00:16:32.080 |
or the informational properties, or how it's connected, or something like that, 00:16:36.800 |
just in principle calculate, you know, its degree of consciousness, right? I mean, this would be the 00:16:43.040 |
kind of thing that we would need, you know, if we wanted to address questions like, you know, 00:16:48.160 |
what does it take for a machine to be conscious, right? Or when should we regard AIs as being 00:16:55.120 |
conscious? So now this IIT, this integrated information theory, which has been put forward by 00:17:04.320 |
Giulio Tononi and a bunch of his collaborators over the last decade or two, this is noteworthy, 00:17:14.880 |
I guess, as a direct attempt to answer that question, to, you know, answer the, to address 00:17:21.040 |
the pretty hard problem, right? And they give a criterion that's just based on how a system is 00:17:28.720 |
connected. So you, so it's up to you to sort of abstract a system like a brain or a microchip 00:17:35.360 |
as a collection of components that are connected to each other by some pattern of connections, 00:17:41.120 |
you know, and to specify how the components can influence each other, you know, like where the 00:17:46.880 |
inputs go, you know, where they affect the outputs. But then once you've specified that, 00:17:51.920 |
then they give this quantity that they call phi, you know, the Greek letter phi. 00:17:56.880 |
And the definition of phi has actually changed over time. It changes from one paper to another, 00:18:02.880 |
but in all of the variations, it involves something about what we in computer science 00:18:08.640 |
would call graph expansion. So basically what this means is that they want, in order to get 00:18:14.720 |
a large value of phi, it should not be possible to take your system and partition it into two 00:18:22.160 |
components that are only weakly connected to each other. Okay. So whenever we take our system and 00:18:28.800 |
sort of try to split it up into two, then there should be lots and lots of connections going 00:18:33.600 |
between the two components. Okay. Well, I understand what that means on a graph. Do they 00:18:38.000 |
formalize what, how to construct such a graph or data structure, whatever, or is this, one of the 00:18:45.840 |
criticism I've heard you kind of say is that a lot of the very interesting specifics are usually 00:18:51.840 |
communicated through like natural language, like through words. So it's like the details aren't 00:18:58.320 |
always clear. Well, it's true. I mean, they have nothing even resembling a derivation of this phi. 00:19:05.520 |
Okay. So what they do is they state a whole bunch of postulates, you know, axioms that they think 00:19:11.600 |
that consciousness should satisfy. And then there's some verbal discussion. And then at some 00:19:16.800 |
point phi appears. Right. And this was the first thing that really made the hair on my neck stand up, 00:19:23.520 |
to be honest, because they are acting as if there's a derivation. They're acting as if, you 00:19:28.320 |
know, you're supposed to think that this is a derivation and there's nothing even remotely 00:19:33.040 |
resembling a derivation. They just pull the phi out of a hat completely. So is one of the key 00:19:38.400 |
criticisms to you is that details are missing or is there something more fundamental? That's not 00:19:42.160 |
even the key criticism. That's just a side point. Okay. The core of it is that I think that they 00:19:49.120 |
want to say that a system is more conscious, the larger its value of phi. And I think that that is 00:19:55.120 |
obvious nonsense. Okay. As soon as you think about it for like a minute, as soon as you think about 00:20:00.560 |
it in terms of, could I construct a system that had an enormous value of phi, like, you know, 00:20:06.880 |
even larger than the brain has, but that is just implementing an error correcting code, you know, 00:20:12.880 |
doing nothing that we would associate with, you know, intelligence or consciousness or any of it. 00:20:18.880 |
The answer is yes, it is easy to do that. Right. And so I wrote blog posts just making this point 00:20:24.800 |
that, yeah, it's easy to do that. Now, you know, Tononi's response to that was actually kind of 00:20:29.920 |
incredible. Right. I mean, I admired it in a way because instead of disputing any of it, 00:20:35.920 |
he just bit the bullet in the sense, you know, he was one of the most audacious bullet bitings 00:20:43.120 |
I've ever seen in my career. Okay. He said, okay, then fine. You know, this system that just applies 00:20:50.240 |
this error correcting code, it's conscious, you know, and if it has a much larger value of phi, 00:20:55.360 |
then you or me, it's much more conscious than you or me. You know, we just have to accept what the 00:21:01.040 |
theory says because, you know, science is not about confirming our intuitions, it's about 00:21:06.080 |
challenging them. And, you know, this is what my theory predicts, that this thing is conscious and, 00:21:11.680 |
you know, or super duper conscious and how are you going to prove me wrong? 00:21:15.760 |
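To make the kind of construction Scott is gesturing at concrete, here is a hedged toy sketch in Python. It is not Tononi's actual phi computation, which is far more involved; it only illustrates the two ingredients Scott describes: abstracting a system as components with a pattern of connections, and asking how many connections any split into two parts would sever. The "system" here is just a regular grid of XOR gates, the sort of thing Scott argues can score highly on integration-style measures while doing nothing we would call intelligent.

```python
import itertools
import random

# Toy "system as a graph of components", in the spirit of the abstraction
# Scott describes for IIT. This is NOT Tononi's phi; it is only meant to make
# the "grid of XOR gates" counterexample concrete.

def grid_xor_system(n):
    """n x n grid of cells; each cell's next state is the XOR (parity) of its
    neighbours. The 'computation' is nothing but parity propagation."""
    nodes = list(itertools.product(range(n), range(n)))
    edges = [((i, j), (i + di, j + dj))
             for (i, j) in nodes
             for (di, dj) in [(0, 1), (1, 0)]
             if i + di < n and j + dj < n]
    return nodes, edges

def step(state, n):
    """One synchronous update: every cell becomes the parity of its neighbours."""
    def neighbours(i, j):
        return [(i + di, j + dj) for di, dj in [(0, 1), (1, 0), (0, -1), (-1, 0)]
                if 0 <= i + di < n and 0 <= j + dj < n]
    return {(i, j): sum(state[v] for v in neighbours(i, j)) % 2
            for (i, j) in state}

def cut_size(edges, part_a):
    """Connections severed by splitting the system into part_a vs. the rest,
    a crude stand-in for the 'no weakly connected split' intuition."""
    return sum((u in part_a) != (v in part_a) for u, v in edges)

n = 30
nodes, edges = grid_xor_system(n)
left = {(i, j) for (i, j) in nodes if j < n // 2}
print("connections cut by a left/right split:", cut_size(edges, left))  # 30

state = {v: random.randint(0, 1) for v in nodes}
state = step(state, n)  # the system "runs", but it is just XOR gates firing
```

The point of the sketch is only that a system can be specified entirely by a connection graph and a trivial update rule; whether any cut-based quantity computed on that graph tracks consciousness is exactly what is in dispute.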
See, so the way I would argue against your blog post is I would say, yes, sure, you're right in 00:21:22.240 |
general, but for naturally arising systems developed through the process of evolution on Earth, 00:21:29.680 |
this rule of the larger phi being associated with more consciousness is correct. 00:21:34.400 |
Yeah, so that's not what he said at all, right? Because he wants this to be completely general. 00:21:41.760 |
Yeah, I mean, the whole interest of the theory is the, you know, the hope that it could be 00:21:46.400 |
completely general, apply to aliens, to computers, to animals, coma patients, to any of it, right? 00:21:55.120 |
And so he just said, well, you know, Scott is relying on his intuition, but, you know, 00:22:02.720 |
I'm relying on this theory. And, you know, to me it was almost like, you know, are we being serious 00:22:08.400 |
here? Like, you know, like, okay, yes, in science we try to learn highly non-intuitive things, 00:22:16.640 |
but what we do is we first test the theory on cases where we already know the answer, right? 00:22:23.120 |
Like if someone had a new theory of temperature, right, then, you know, maybe we could check that 00:22:28.320 |
it says that boiling water is hotter than ice. And then if it says that the sun is hotter than 00:22:34.240 |
anything, you know, you've ever experienced, then maybe we trust that extrapolation, right? 00:22:40.480 |
But like this theory, like if, you know, it's now saying that, you know, a gigantic grid, 00:22:48.320 |
like a regular grid of exclusive-OR gates can be way more conscious than a, you know, a person 00:22:54.800 |
or than any animal can be, you know, even if it, you know, is, you know, is so uniform that it 00:23:02.160 |
might as well just be a blank wall, right? And so now the point is, if this theory is sort of 00:23:07.920 |
getting wrong the question of whether a blank wall is, you know, more conscious than a person, then I would 00:23:15.920 |
So your sense is a blank wall is not more conscious than a human being. 00:23:22.160 |
Yeah, I mean, you could say that I am taking that as one of my axioms. 00:23:25.920 |
I'm saying that if a theory of consciousness is getting that wrong, then whatever it is talking 00:23:35.040 |
about at that point, I'm not going to call it consciousness. I'm going to use a different word. 00:23:40.640 |
You have to use a different word. I mean, it's possible just like with intelligence 00:23:45.120 |
that us humans conveniently define these very difficult to understand concepts in a very human 00:23:49.920 |
centric way. Just like a Turing test really seems to define intelligence as a thing that's human-like. 00:23:56.320 |
Right, but I would say that with any concept, you know, there's, you know, like we first need to 00:24:05.280 |
define it, right? And a definition is only a good definition if it matches what we thought we were 00:24:10.800 |
talking about, you know, prior to having a definition, right? And I would say that, you know, 00:24:15.600 |
phi as a definition of consciousness fails that test. That is my argument. 00:24:22.400 |
So, okay, so let's take a further step. So you mentioned that the universe might be 00:24:27.360 |
a Turing machine, so like it might be computation. 00:24:32.560 |
Simulatable by one. So what's your sense about consciousness? Do you think consciousness is 00:24:39.040 |
computation? That we don't need to go to any place outside of the computable universe to, 00:24:46.240 |
you know, to understand consciousness, to build consciousness, to measure consciousness, 00:24:54.080 |
I don't know. These are what, you know, have been called the vertiginous questions, right? 00:24:59.840 |
There's the questions like, you know, you get a feeling of vertigo when thinking about them, 00:25:05.120 |
right? I mean, I certainly feel like I am conscious in a way that is not reducible to 00:25:11.680 |
computation, but why should you believe me, right? I mean, and if you said the same to me, 00:25:19.360 |
But as computer scientists, I feel like a computer could achieve human level intelligence. 00:25:26.320 |
And that's actually a feeling and a hope. That's not a scientific belief. It's just we've built up 00:25:34.160 |
enough intuition, the same kind of intuition you use in your blog. You know, that's what scientists 00:25:38.640 |
do. I mean, some of it is a scientific method, but some of it is just damn good intuition. 00:25:42.880 |
I don't have a good intuition about consciousness. 00:25:45.840 |
Yeah. I'm not sure that anyone does or has in the, you know, 2,500 years that these things have been discussed. 00:25:52.640 |
But do you think we will? Like one of the, I got a chance to attend, can't wait to hear your 00:25:58.800 |
opinion on this, but attend the Neuralink event. And one of the dreams there is to, you know, 00:26:05.280 |
basically push neuroscience forward. And the hope with neuroscience is that we can inspect 00:26:12.400 |
the machinery from which all this fun stuff emerges and see, are we going to notice something 00:26:17.200 |
special, some special sauce from which something like consciousness or cognition emerges? 00:26:22.720 |
Yeah. Well, it's clear that we've learned an enormous amount about neuroscience. 00:26:27.040 |
We've learned an enormous amount about computation, you know, about machine learning, 00:26:31.600 |
about AI, how to get it to work. We've learned an enormous amount about the underpinnings of 00:26:38.560 |
the physical world, you know. And, you know, from one point of view, that's like an enormous distance 00:26:45.120 |
that we've traveled along the road to understanding consciousness. From another point of view, 00:26:49.920 |
you know, the distance still to be traveled on the road, you know, maybe seems no shorter than it ever was. 00:26:55.760 |
Right? So it's very hard to say. I mean, you know, these are questions like, in sort of trying to 00:27:02.480 |
have a theory of consciousness, there's sort of a problem where it feels like it's not just that 00:27:07.520 |
we don't know how to make progress, it's that it's hard to specify what could even count as progress. 00:27:13.360 |
Right? Because no matter what scientific theory someone proposed, someone else could come along 00:27:18.160 |
and say, "Well, you've just talked about the mechanism. You haven't said anything about 00:27:22.640 |
what breathes fire into the mechanism, what really makes there something that it's like to be it." 00:27:27.920 |
Right? And that seems like an objection that you could always raise, no matter, you know, 00:27:32.400 |
how much someone elucidated the details of how the brain works. 00:27:35.840 |
Okay, let's go to the Turing test and the Loebner Prize. I have this intuition, call me crazy, 00:27:40.000 |
but, that for a machine to pass the Turing test in its full, whatever the spirit of it is, 00:27:48.000 |
we can talk about how to formulate the perfect Turing test, that that machine has to be conscious. 00:27:54.880 |
Or we at least have to, I have a very low bar of what consciousness is. I tend to think that 00:28:03.280 |
the emulation of consciousness is as good as consciousness. So like consciousness is just a 00:28:08.640 |
dance, a social shortcut, like a nice, useful tool. But I tend to connect intelligence and 00:28:16.240 |
consciousness together. So by that, maybe just to ask, what role does consciousness play? 00:28:27.520 |
Well, look, I mean, it's almost tautologically true that if we had a machine that passed the 00:28:32.080 |
Turing test, then it would be emulating consciousness, right? So if your position is 00:28:36.960 |
that emulation of consciousness is consciousness, then by definition, any machine that passed the 00:28:43.920 |
Turing test would be conscious. But I mean, you could say that that is just a way to rephrase the 00:28:51.200 |
original question, is an emulation of consciousness necessarily conscious, right? And you can, 00:28:57.520 |
I hear, I'm not saying anything new that hasn't been debated ad nauseum in the literature, okay? 00:29:03.840 |
But you could imagine some very hard cases, like imagine a machine that passed the Turing test, 00:29:10.480 |
but it did so just by an enormous cosmological-sized lookup table that just cached 00:29:17.200 |
every possible conversation that could be had. 00:29:21.520 |
Well, yeah, but this is, I mean, the Chinese room actually would be doing some computation, 00:29:27.360 |
at least in Searle's version, right? Here, I'm just talking about a table lookup, okay? Now, 00:29:32.400 |
it's true that for conversations of a reasonable length, this lookup table would be so enormous 00:29:38.080 |
it wouldn't even fit in the observable universe, okay? But supposing that you could build a big 00:29:43.120 |
enough lookup table and then just pass the Turing test just by looking up what the person said, 00:29:50.080 |
right? Are you going to regard that as conscious? 00:29:52.960 |
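A minimal sketch of the lookup-table thought experiment, assuming a hypothetical table keyed by entire conversation prefixes; the whole point is that a real version would be cosmologically large, and that nothing beyond a dictionary lookup ever happens:

```python
# Hypothetical, absurdly small stand-in for the cosmological lookup table
# Scott describes: the key is the entire conversation so far, the value is
# the canned reply. No computation beyond a dictionary lookup occurs.
LOOKUP = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "Is Mount Everest bigger than a shoebox?"): "Yes, by quite a lot.",
}

def lookup_table_bot(history):
    """Reply by looking up the full conversation prefix; no reasoning happens."""
    return LOOKUP.get(tuple(history), "...")

conversation = ["Hello"]
reply = lookup_table_bot(conversation)   # "Hi there!"
conversation += [reply, "Is Mount Everest bigger than a shoebox?"]
print(lookup_table_bot(conversation))    # "Yes, by quite a lot."
```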
Okay, let me try to make this formal, and then you can shut it down. I think that the emulation 00:30:00.720 |
of something is that something, if there exists in that system a black box that's full of mystery. 00:30:13.920 |
So does that mean that consciousness is relative to the observer? Like, could something be conscious 00:30:18.400 |
for us but not conscious for an alien that understood better what was happening inside? 00:30:24.160 |
Yes, yes. So that if inside the black box is just a lookup table, the alien that saw that would say 00:30:29.840 |
this is not conscious. To us, another way to phrase the black box is layers of abstraction, 00:30:36.000 |
which make it very difficult to see to the actually underlying functionality of the system. 00:30:40.560 |
And then we observe just the abstraction, and so it looks like magic to us. But once we 00:30:45.680 |
understand the inner machinery, it stops being magic. And so like, that's a prerequisite, 00:30:52.000 |
is that you can't know how it works, some part of it. Because then there has to be, in our human mind, 00:30:58.160 |
an entry point for the magic. So that's a formal definition of the system. 00:31:05.520 |
Yeah, well look, I mean, I explored a view in this essay I wrote called "The Ghost in the 00:31:10.640 |
Quantum Turing Machine" seven years ago that is related to that, except that I did not want to 00:31:17.200 |
have consciousness be relative to the observer, right? Because I think that if consciousness 00:31:22.080 |
means anything, it is something that is experienced by the entity that is conscious, right? You know, 00:31:27.680 |
like, I don't need you to tell me that I'm conscious, right? Nor do you need me to tell 00:31:34.720 |
you that you are, right? But basically, what I explored there is, are there aspects of a system 00:31:44.160 |
like a brain that just could not be predicted, even with arbitrarily advanced future technologies? 00:31:52.000 |
It's because of chaos combined with quantum mechanical uncertainty, and things like that. 00:31:57.280 |
I mean, that actually could be a property of the brain, if true, that would distinguish it in a 00:32:04.880 |
principled way, at least from any currently existing computer. Not from any possible computer, 00:32:10.400 |
but from, yeah, yeah. - Let's do a thought experiment. 00:32:14.320 |
If I could give you all the information about the state you're in, the entire history of your life, basically explain away free will 00:32:22.640 |
with a lookup table, say that this was all predetermined, that everything you experienced 00:32:27.200 |
has already been predetermined, wouldn't that take away your consciousness? Wouldn't you, 00:32:31.520 |
yourself, wouldn't the experience of the world change for you in a way that you can't take back? 00:32:37.920 |
- Let me put it this way. If you could do like in a Greek tragedy, where you would just write down 00:32:43.520 |
a prediction for what I'm going to do, and then maybe you put the prediction in a sealed box, 00:32:49.360 |
and maybe you open it later, and you show that you knew everything I was going to do, or of course, 00:32:56.400 |
the even creepier version would be you tell me the prediction, and then I try to falsify it, 00:33:01.680 |
my very effort to falsify it makes it come true. Let's even forget that version, as convenient as 00:33:09.680 |
it is for fiction writers. Let's just do the version where you put the prediction into a 00:33:14.320 |
sealed envelope. But if you could reliably predict everything that I was going to do, 00:33:20.800 |
I'm not sure that that would destroy my sense of being conscious, but I think it really would 00:33:25.600 |
destroy my sense of having free will. And much, much more than any philosophical conversation 00:33:32.800 |
could possibly do that. And so I think it becomes extremely interesting to ask, could such predictions 00:33:41.120 |
be done, even in principle? Is it consistent with the laws of physics to make such predictions, 00:33:47.120 |
to get enough data about someone that you could actually generate such predictions without having 00:33:52.160 |
to kill them in the process, to slice their brain up into little slivers or something? 00:33:57.280 |
I mean, theoretically possible, right? 00:33:58.960 |
Well, I don't know. I mean, it might be possible, but only at the cost of destroying the 00:34:03.280 |
person, right? I mean, it depends on how low you have to go in sort of the substrate. Like, 00:34:10.320 |
if there was a nice digital abstraction layer, if you could think of each neuron as a kind of 00:34:16.080 |
transistor computing a digital function, then you could imagine some nanorobots that would go in 00:34:22.160 |
and would just scan the state of each transistor, of each neuron, and then make a good enough copy. 00:34:29.440 |
But if it was actually important to get down to the molecular or the atomic level, 00:34:35.520 |
then eventually you would be up against quantum effects. You would be up against the unclone 00:34:40.320 |
ability of quantum states. So I think it's a question of how good does the replica have to be 00:34:48.160 |
before you're going to count it as actually a copy of you or as being able to predict your actions. 00:34:53.600 |
And that's a totally open question. 00:34:55.520 |
Yeah, yeah, yeah. And especially once we say that, well, look, maybe there's no way 00:35:02.400 |
to make a deterministic prediction because we know that there's noise buffeting the brain around, 00:35:09.120 |
presumably even quantum mechanical uncertainty affecting the sodium ion channels, for example, 00:35:15.360 |
whether they open or they close. There's no reason why over a certain timescale that shouldn't be 00:35:23.680 |
amplified, just like we imagine happens with the weather or with any other chaotic system. 00:35:31.920 |
So if that stuff is important, then we would say, well, you're never going to be able to make an 00:35:44.560 |
accurate enough copy. But now the hard part is, well, what if someone can make a copy that no one 00:35:50.080 |
else can tell apart from you? It says the same kinds of things that you would have said, maybe 00:35:56.240 |
not exactly the same things because we agree that there's noise, but it says the same kinds of 00:36:01.280 |
things. And maybe you alone would say, no, I know that that's not me. I haven't felt my consciousness 00:36:09.520 |
leap over to that other thing. I still feel it localized in this version, right? Then why should 00:36:14.960 |
anyone else believe you? What are your thoughts? I'd be curious. You're a really good 00:36:19.120 |
person to ask, which is Penrose's, Roger Penrose's work on consciousness, saying that there is some 00:36:26.160 |
- with axons and so on - there might be some biological places where quantum mechanics can 00:36:32.240 |
come into play and through that create consciousness somehow. Yeah. Okay. 00:36:36.480 |
Are you familiar with his work at all? Of course. I read Penrose's books as a 00:36:41.280 |
teenager. They had a huge impact on me. Five or six years ago, I had the privilege to actually 00:36:47.200 |
talk these things over with Penrose at some length at a conference in Minnesota. And he is 00:36:54.800 |
an amazing personality. I admire the fact that he was even raising such audacious questions at all. 00:37:01.600 |
But to answer your question, I think the first thing we need to get clear on is that he is not 00:37:08.640 |
merely saying that quantum mechanics is relevant to consciousness. That would be tame compared to 00:37:16.320 |
what he is saying. He is saying that even quantum mechanics is not good enough. Because if supposing 00:37:23.680 |
for example that the brain were a quantum computer, well, that's still a computer. In fact, 00:37:28.720 |
a quantum computer can be simulated by an ordinary computer. It might merely need exponentially more 00:37:34.880 |
time in order to do so. So that's simply not good enough for him. So what he wants is for the brain 00:37:41.680 |
to be a quantum gravitational computer. Or he wants the brain to be exploiting as-yet-unknown 00:37:50.960 |
laws of quantum gravity, which would be uncomputable according to him. 00:37:56.880 |
Okay, yes, yes. That would be literally uncomputable. And I've asked him to clarify 00:38:02.400 |
this. But uncomputable, even if you had an oracle for the halting problem, or as high up as you want 00:38:11.280 |
to go in the usual hierarchy of uncomputability, he wants to go beyond all of that. So just to be 00:38:20.000 |
clear, if we're keeping count of how many speculations, there's probably at least five 00:38:26.160 |
or six of them, right? There's first of all that there is some quantum gravity theory that would 00:38:30.960 |
involve this kind of uncomputability, right? Most people who study quantum gravity would not agree 00:38:36.560 |
with that. They would say that what we've learned, what little we know about quantum gravity from 00:38:42.160 |
the AdS/CFT correspondence, for example, has been very much consistent with the broad idea 00:38:49.200 |
of nature being computable, right? But supposing that he's right about that, then what most 00:38:58.720 |
physicists would say is that whatever new phenomena there are in quantum gravity, they might be 00:39:05.520 |
relevant at the singularities of black holes. They might be relevant at the Big Bang. They are 00:39:13.280 |
plainly not relevant to something like the brain that is operating at ordinary temperatures 00:39:19.760 |
with ordinary chemistry and the physics underlying the brain. They would say that we have the 00:39:29.440 |
fundamental physics of the brain, they would say that we've pretty much completely known for 00:39:34.720 |
generations now, right? Because quantum field theory lets us sort of parametrize our ignorance, 00:39:42.160 |
right? I mean, Sean Carroll has made this case in great detail, right? That sort of whatever 00:39:47.920 |
new effects are coming from quantum gravity, they are sort of screened off by quantum field theory, 00:39:54.080 |
right? And this brings us to the whole idea of effective theories, right? But we have, 00:40:00.400 |
like in the standard model of elementary particles, right? We have a quantum field theory 00:40:07.280 |
that seems totally adequate for all of the terrestrial phenomena, right? The only things 00:40:13.360 |
that it doesn't explain are, well, first of all, the details of gravity, if you were to probe it 00:40:19.680 |
at extremes of curvature or at incredibly small distances. It doesn't explain dark matter. It 00:40:28.880 |
doesn't explain black hole singularities, right? But these are all very exotic things, very far 00:40:34.800 |
removed from our life on Earth, right? So for Penrose to be right, he needs these phenomena to 00:40:41.600 |
somehow affect the brain. He needs the brain to contain antennae that are sensitive to this-- 00:40:48.400 |
--to this as-yet-unknown physics, right? And then he needs a modification of quantum mechanics, 00:40:55.040 |
okay? So he needs quantum mechanics to actually be wrong, okay? He needs what he wants is what 00:41:02.160 |
he calls an objective reduction mechanism or an objective collapse. So this is the idea that once 00:41:08.560 |
quantum states get large enough, then they somehow spontaneously collapse, right? And this is an idea 00:41:18.960 |
that lots of people have explored. There's something called the GRW proposal that tries to 00:41:26.080 |
say something along those lines. And these are theories that actually make testable predictions, 00:41:31.920 |
right? Which is a nice feature that they have. But the very fact that they're testable may mean 00:41:36.640 |
that in the coming decades, we may well be able to test these theories and show that they're wrong, 00:41:43.840 |
right? We may be able to test some of Penrose's ideas. If not, not his ideas about consciousness, 00:41:50.480 |
but at least his ideas about an objective collapse of quantum states, right? And people have actually, 00:41:56.560 |
like Dirk Bouwmeester, have actually been working to try to do these experiments. 00:42:00.880 |
They haven't been able to do it yet to test Penrose's proposal, okay? But Penrose would 00:42:05.840 |
need more than just an objective collapse of quantum states, which would already be the 00:42:10.800 |
biggest development in physics for a century since quantum mechanics itself, okay? He would need 00:42:16.160 |
for consciousness to somehow be able to influence the direction of the collapse so that it wouldn't 00:42:23.520 |
be completely random, but that your dispositions would somehow influence the quantum state to 00:42:29.680 |
collapse more likely this way or that way, okay? Finally, Penrose says that all of this has to be 00:42:37.200 |
true because of an argument that he makes based on Gödel's incompleteness theorem, okay? 00:42:43.120 |
Now, like, I would say, the overwhelming majority of computer scientists and mathematicians 00:42:49.120 |
who have thought about this, I don't think that Gödel's incompleteness theorem can do what he 00:42:54.000 |
needs it to do here, right? I don't think that that argument is sound, okay? But that is sort of 00:43:00.720 |
the tower that you have to ascend to if you're going to go where Penrose goes. 00:43:04.560 |
And the intuition he uses with the incompleteness theorem is that basically 00:43:09.440 |
that there's important stuff that's not computable? Is that where he takes it? 00:43:13.120 |
No, it's not just that, because, I mean, everyone agrees that there are problems that 00:43:17.120 |
are uncomputable, right? That's a mathematical theorem, right? But what Penrose wants to say 00:43:22.960 |
is that, for example, there are statements, given any formal system for doing math, right? There 00:43:34.400 |
will be true statements of arithmetic that that formal system, if it's adequate for math at all, 00:43:41.200 |
if it's consistent and so on, will not be able to prove. A famous example being the statement 00:43:46.960 |
that that system itself is consistent, right? No good formal system can actually prove its own 00:43:53.600 |
consistency. That can only be done from a stronger formal system, which then can't prove its own 00:43:59.600 |
consistency and so on forever, okay? That's Gödel's theorem. But now, why is that relevant to 00:44:06.080 |
consciousness, right? Well, I mean, the idea that it might have something to do with consciousness 00:44:13.360 |
is an old one. Gödel himself apparently thought that it did. Lucas thought so, I think, in the 60s. 00:44:23.360 |
And Penrose is really just sort of updating what they and others had said. I mean, the idea that 00:44:30.640 |
Gödel's theorem could have something to do with consciousness was already around in 1950, when Alan Turing wrote 00:44:37.440 |
his article about the Turing test. He already was writing about that as an old and well-known idea 00:44:44.720 |
and as a wrong one that he wanted to dispense with, right? Okay, but the basic problem with 00:44:51.760 |
this idea is Penrose wants to say that, and all of his predecessors here want to say, that even 00:44:59.600 |
though this given formal system cannot prove its own consistency, we as humans, sort of looking at 00:45:07.520 |
it from the outside, can just somehow see its consistency, right? And the rejoinder to that 00:45:15.760 |
from the very beginning has been, "Well, can we really?" I mean, maybe he, Penrose, can, 00:45:23.680 |
but can the rest of us? And notice that it is perfectly plausible to imagine a computer 00:45:35.200 |
that, let's say, would not be limited to working within a single formal system, right? They could 00:45:41.120 |
say, "I am now going to adopt the hypothesis that my formal system is consistent, and I'm now going 00:45:47.840 |
to see what can be done from that stronger vantage point," and so on. And I'm going to add new 00:45:53.280 |
axioms to my system. Totally plausible. Gödel's theorem has absolutely nothing to say 00:45:59.520 |
against an AI that could repeatedly add new axioms. All it says is that there is no 00:46:05.920 |
absolute guarantee that when the AI adds new axioms that it will always be right. 00:46:11.760 |
Okay? And that's, of course, the point that Penrose pounces on, but the reply is obvious. 00:46:17.280 |
And it's one that Alan Turing made 70 years ago, namely, we don't have an absolute guarantee that 00:46:22.880 |
we're right when we add a new axiom. We never have, and plausibly we never will. 00:46:27.840 |
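For concreteness, the "keep adopting consistency as a new axiom" move Scott describes can be written as a tower of theories; a minimal sketch in standard notation, starting (as one possible choice) from Peano arithmetic:

```latex
T_0 = \mathsf{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n), \qquad n = 0, 1, 2, \ldots
```

By Gödel's second incompleteness theorem, each consistent T_n cannot prove Con(T_n), so the ascent never terminates; and nothing guarantees that a machine, or a human, climbing this tower only ever adds true axioms, which is exactly the Turing reply Scott is recounting.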
So on Alan Turing, you took part in the Loebner Prize? 00:46:34.240 |
I didn't. I mean, there was this kind of ridiculous claim that was made almost a decade ago about a chatbot passing the Turing test. 00:46:45.360 |
Yeah, I apologize. I guess you didn't participate as a judge in the Loebner Prize. 00:46:49.120 |
But you participated as a judge in that, I guess it was an exhibition event or something like that. 00:46:55.440 |
Eugene Goostman, that was just me writing a blog post, 00:46:59.360 |
because some journalist called me to ask about it. 00:47:03.200 |
I did chat with Eugene Goostman. I mean, it was available on the web, the chat. 00:47:07.680 |
So yeah. So all that happened was that a bunch of journalists started writing breathless articles 00:47:14.400 |
about the first chatbot that passes the Turing test. And it was this thing called Eugene Goostman 00:47:21.440 |
that was supposed to simulate a 13-year-old boy. And apparently someone had done some test where 00:47:29.120 |
people were less than perfect, let's say, distinguishing it from a human. 00:47:35.440 |
And they said, well, if you look at Turing's paper and you look at the percentages that he 00:47:41.600 |
talked about, then it seems like we're past that threshold. And I had a different way to look at it 00:47:50.400 |
instead of the legalistic way, like, let's just try the actual thing out and let's see what it can do 00:47:56.480 |
with questions like, is Mount Everest bigger than a shoebox? Or just the most obvious questions. 00:48:04.560 |
And the answer is, well, it just parries you because it doesn't know what you're talking about. 00:48:11.200 |
So just to clarify exactly in which way they're obvious. They're obvious in the sense that you 00:48:17.840 |
convert the sentences into the meaning of the objects they represent and then do some basic 00:48:22.960 |
obvious, we mean, common sense reasoning with the objects that the sentences represent. 00:48:29.200 |
Right, right. It was not able to answer or even intelligently respond to basic common 00:48:34.720 |
sense questions. But let me say something stronger than that. There was a famous chatbot in the '60s 00:48:39.920 |
called Eliza that managed to actually fool a lot of people. People would pour their hearts out into 00:48:48.400 |
this Eliza because it simulated a therapist. And most of what it would do is it would just throw 00:48:54.320 |
back at you whatever you said. And this turned out to be incredibly effective. Maybe therapists 00:49:02.400 |
know this. This is one of their tricks. But it really had some people convinced. 00:49:10.240 |
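A hedged toy sketch of the reflection trick Scott describes; this is not Weizenbaum's actual ELIZA script, just the "throw your own words back at you" idea in a few lines of Python:

```python
import re

# Very crude stand-in for the therapist trick: swap pronouns and hand the
# user's own statement back as a question. Not the real ELIZA pattern-matching
# scripts, only the core reflection idea.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are", "i'm": "you're"}

def reflect(sentence):
    words = re.findall(r"[\w']+", sentence.lower())
    return " ".join(REFLECT.get(w, w) for w in words)

def eliza_like_reply(user_input):
    return f"Why do you say that {reflect(user_input)}?"

print(eliza_like_reply("I am worried about my exams"))
# Why do you say that you are worried about your exams?
```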
But this thing was just like, I think it was literally just a few hundred lines of Lisp code. 00:49:16.880 |
Not only was it not intelligent, it wasn't especially sophisticated. It was a simple 00:49:24.080 |
little hobbyist program. And Eugene Goostman, from what I could see, was not a significant advance 00:49:29.920 |
compared to Eliza. And that was really the point I was making. And in some sense, you didn't need a 00:49:40.160 |
computer science professor to say this. Anyone who was looking at it and who just had an ounce of 00:49:48.880 |
sense could have said the same thing. But because these journalists were calling me, the first thing 00:49:56.800 |
I said was, "Well, no, I'm a quantum computing person. I'm not an AI person. You shouldn't ask 00:50:03.760 |
me." Then they said, "Look, you can go here and you can try it out." I said, "All right, all right, 00:50:07.680 |
so I'll try it out." But now this whole discussion, it got a whole lot more interesting in just the 00:50:14.720 |
last few months. Yeah, I'd love to hear your thoughts about GPT-3, the advancements in length. 00:50:19.280 |
In the last few months, the world has now seen a chat engine or a text engine, I should say, 00:50:28.000 |
called GPT-3. I think it does not pass a Turing test. There are no real claims that it passes 00:50:37.680 |
the Turing test. This comes out of the group at OpenAI and they've been relatively careful in 00:50:45.200 |
what they've claimed about the system. But I think this, as clearly as Eugene Goostman was not 00:50:53.760 |
an advance over Eliza, it is equally clear that this is a major advance over Eliza or really over 00:51:01.280 |
anything that the world has seen before. This is a text engine that can come up with on-topic, 00:51:10.320 |
reasonable-sounding completions to just about anything that you ask. You can ask it to write 00:51:16.000 |
a poem about topic X in the style of poet Y, and it will have a go at that. And it will do 00:51:23.520 |
not a great job, not an amazing job, but a passable job. Definitely as good as, 00:51:33.760 |
in many cases, I would say better than I would have done. You can ask it to write an essay, 00:51:40.240 |
like a student essay about pretty much any topic, and it will get something that I am pretty sure 00:51:45.680 |
would get at least a B- in most high school or even college classes. In some sense, the way that 00:51:55.360 |
it achieves this, Scott Alexander of the much mourned blog Slate Star Codex had a wonderful 00:52:04.000 |
way of putting it. He said that they basically just ground up the entire internet into a slurry. 00:52:09.680 |
To tell you the truth, I had wondered for a while why nobody had tried that. Why not write a chat 00:52:18.400 |
bot by just doing deep learning over a corpus consisting of the entire web? Now they finally 00:52:28.160 |
have done that. The results are very impressive. It's not clear that people can argue about whether 00:52:36.640 |
this is truly a step toward general AI or not, but this is an amazing capability that we didn't 00:52:45.440 |
have a few years ago. A few years ago, if you had told me that we would have it now, that would have 00:52:51.840 |
surprised me. I think that anyone who denies that is just not engaging with what's there. 00:52:57.280 |
Their model takes a large part of the internet and compresses it into a small number of parameters 00:53:05.200 |
relative to the size of the internet, and is able to, without fine tuning, 00:53:12.320 |
do a basic kind of a querying mechanism, just like you describe when you specify a kind of poet, 00:53:18.240 |
and then you want to write a poem. It somehow is able to do basically a lookup on the internet 00:53:23.120 |
of relevant things. How else do you explain it? 00:53:27.440 |
Well, okay. The training involved massive amounts of data from the internet, and actually took 00:53:34.080 |
lots and lots of computer power, lots of electricity. There are some very prosaic 00:53:40.080 |
reasons why this wasn't done earlier, right? But it costs some tens of millions of dollars, 00:53:46.400 |
Less, but approximately a few million dollars. 00:53:53.680 |
Oh, all right, all right. Thank you. I mean, as they scale it up, it will- 00:53:57.440 |
It'll cost. But then the hope is cost comes down, and all that kind of stuff. 00:54:01.600 |
But basically, it is a neural net, or what's now called a deep net, but they're 00:54:09.760 |
basically the same thing, right? So it's a form of algorithm that people have known about for 00:54:16.400 |
decades, right? But it is constantly trying to solve the problem, predict the next word, 00:54:23.680 |
right? So it's just trying to predict what comes next. It's not trying to decide what it should 00:54:32.160 |
say, what ought to be true. It's trying to predict what someone who had said all of the words up to that point would say next. 00:54:40.720 |
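To make the training objective concrete, here is a hedged minimal sketch of next-word prediction: a toy bigram counter over a tiny made-up corpus, nothing like GPT-3's actual transformer architecture or training pipeline, but the same "predict what comes next" objective Scott describes:

```python
from collections import Counter, defaultdict

# Toy illustration of the 'predict the next word' objective. GPT-3 does this
# with a huge transformer over internet-scale text; here a bigram count over
# a tiny corpus stands in for the same idea.
corpus = "the universe seems to satisfy the church turing thesis".split()

next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1          # training = counting what followed what

def predict(prev):
    """Return the most frequent continuation seen after `prev`."""
    return next_word[prev].most_common(1)[0][0] if next_word[prev] else None

print(predict("the"))   # 'universe' (ties broken by first-seen order)
```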
Although to push back on that, that's how it's trained, but- 00:54:44.240 |
But it's arguable that our very cognition could be a mechanism as that simple. 00:54:50.560 |
Oh, of course. Of course. I never said that it wasn't. 00:54:55.200 |
In some sense, if there is a deep philosophical question that's raised by GPT-3, then that is it, 00:55:02.800 |
right? Are we doing anything other than this predictive processing, 00:55:06.960 |
just constantly trying to fill in a blank of what would come next after what we just said up to this 00:55:13.680 |
point? Is that what I'm doing right now? Is it possible, so the intuition that a lot 00:55:19.280 |
of people have, "Well, look, this thing is not going to be able to reason," the Mount Everest 00:55:24.000 |
question." Do you think it's possible that GPT-5, 6, and 7 would be able to, with this exact same 00:55:31.040 |
process, begin to do something that looks like is indistinguishable to us humans from reasoning? 00:55:38.720 |
I mean, the truth is that we don't really know what the limits are, right? 00:55:44.000 |
Because what we've seen so far is that GPT-3 was basically the same thing as GPT-2, but just with 00:55:52.400 |
a much larger network, more training time, bigger training corpus, and it was very noticeably better 00:56:02.960 |
than its immediate predecessor. We don't know where you hit the ceiling here. I mean, that's 00:56:09.920 |
the amazing part and maybe also the scary part. Now, my guess would be that at some point, 00:56:18.560 |
there has to be diminishing returns. It can't be that simple, can it? Right? But I wish that I had 00:56:25.840 |
more to base that guess on. Right. Yeah. I mean, some people say that there 00:56:29.760 |
would be a limitation on the data: "We're going to hit a limit on the amount of data that's 00:56:33.760 |
on the internet." Yes. Yeah. So, sure. So, there's certainly that limit. I mean, there's also, 00:56:39.360 |
if you are looking for questions that will stump GPT-3, you can come up with some without much trouble. 00:56:48.320 |
Even getting it to learn how to balance parentheses, right? It doesn't do such a great 00:56:55.200 |
job. Right? Its failures are ironic, right? Like basic arithmetic. Right? And you think, 00:57:05.280 |
"Isn't that what computers are supposed to be best at? Isn't that where computers already had 00:57:09.760 |
us beat a century ago?" Right? And yet, that's where GPT-3 struggles. Right? But it's amazing 00:57:16.640 |
that it's almost like a young child in that way. Right? But somehow, because it is just trying to 00:57:25.600 |
predict what comes next, it doesn't know when it should stop doing that and start doing something 00:57:32.400 |
very different, like some more exact logical reasoning. Right? And so, one is naturally led 00:57:41.520 |
to guess that our brain sort of has some element of predictive processing, but that it's coupled to 00:57:48.080 |
other mechanisms. Right? That it's coupled to, first of all, visual reasoning, which GPT-3 also 00:57:53.920 |
doesn't have any of. Right? Although there's some demonstration that there's a lot of promise there. 00:57:57.920 |
Oh, yeah. It can complete images. That's right. Yeah. And using the exact same kind of transformer 00:58:03.200 |
mechanism, the same self-supervised mechanism, to watch videos on YouTube, 00:58:11.360 |
it'd be fascinating to think what kind of completions you could do. 00:58:14.240 |
Oh, yeah. No, absolutely. Although, if we asked it a word problem that involved reasoning about 00:58:20.400 |
the locations of things in space, I don't think it does such a great job on those. Right? To take an 00:58:25.440 |
example. And so, the guess would be, well, humans have a lot of predictive processing, a lot of just 00:58:31.840 |
filling in the blanks, but we also have these other mechanisms that we can couple to, or that we can 00:58:37.360 |
call as subroutines when we need to. And maybe to go further, one would want to integrate 00:58:44.480 |
other forms of reasoning. Let me go on another topic that is amazing, which is complexity. 00:58:52.320 |
And then start with the most absurdly romantic question of what's the most beautiful idea in 00:59:00.560 |
computer science or theoretical computer science to you? Like what, just early on in your 00:59:05.520 |
life or in general, has captivated you and just grabbed you? 00:59:08.560 |
I think I'm going to have to go with the idea of universality. If you're really asking for the most 00:59:14.800 |
beautiful. I mean, so universality is the idea that you put together a few simple operations. 00:59:23.040 |
In the case of Boolean logic, that might be the AND gate, the OR gate, the NOT gate. And then 00:59:30.560 |
your first guess is, okay, this is a good start, but obviously, as I want to do more complicated 00:59:36.800 |
things, I'm going to need more complicated building blocks to express that. And that was actually my 00:59:42.560 |
guess when I first learned what programming was. I mean, when I was an adolescent and someone showed 00:59:48.480 |
me Apple Basic and GW Basic, if anyone listening remembers that. But I thought, okay, well now, 01:00:00.000 |
I felt like this is a revelation. It's like finding out where babies come from. It's like 01:00:06.480 |
that level of, why didn't anyone tell me this before? But I thought, okay, this is just the 01:00:11.520 |
beginning. Now I know how to write a basic program, but to really write an interesting program, 01:00:18.080 |
like a video game, which had always been my dream as a kid to create my own Nintendo games. 01:00:25.120 |
But obviously, I'm going to need to learn some way more complicated form of programming than that. 01:00:30.800 |
But eventually I learned this incredible idea of universality. And that says that, no, 01:00:38.560 |
you throw in a few rules and then you already have enough to express everything. So for example, 01:00:45.840 |
the AND, the OR, and the NOT gate can all, or in fact, even just the AND and the NOT gate, 01:00:51.600 |
or even just the NAND gate, for example, is already enough to express any Boolean function 01:00:58.160 |
on any number of bits. You just have to string together enough of them. 01:01:03.440 |
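As a quick illustration of that claim, here is a minimal sketch of my own showing AND, OR, NOT, and even XOR built out of nothing but NAND gates.

```python
# NAND is universal: every other Boolean gate can be strung together from it.
def NAND(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a: bool) -> bool:
    return NAND(a, a)

def AND(a: bool, b: bool) -> bool:
    return NAND(NAND(a, b), NAND(a, b))

def OR(a: bool, b: bool) -> bool:
    return NAND(NAND(a, a), NAND(b, b))

def XOR(a: bool, b: bool) -> bool:
    t = NAND(a, b)
    return NAND(NAND(a, t), NAND(b, t))

# sanity check over all inputs
assert all(XOR(a, b) == (a != b) for a in (False, True) for b in (False, True))
assert all(AND(a, b) == (a and b) for a in (False, True) for b in (False, True))
```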
You can build the universe out of NAND gates. Yeah. The simple instructions of Basic are already 01:01:10.000 |
enough, at least in principle. If we ignore details like how much memory can be accessed 01:01:16.560 |
and stuff like that, that is enough to express what could be expressed by any programming language 01:01:21.760 |
whatsoever. And the way to prove that is very simple. We simply need to show that in Basic, 01:01:27.680 |
or whatever, we could write an interpreter or a compiler for whatever other programming language 01:01:34.160 |
we care about, like C or Java or whatever. And as soon as we had done that, then ipso facto, 01:01:40.720 |
anything that's expressible in C or Java is also expressible in Basic. 01:01:45.200 |
And so this idea of universality goes back at least to Alan Turing in the 1930s, when he 01:01:54.720 |
wrote down this incredibly simple pared-down model of a computer, the Turing machine, where 01:02:04.080 |
he pared down the instruction set to just read a symbol, write a symbol, move to the left, 01:02:11.760 |
move to the right, halt, change your internal state. That's it. And he proved that this could 01:02:21.120 |
simulate all kinds of other things. And so in fact, today we would say, well, we would call it 01:02:27.920 |
a Turing universal model of computation that has just the same expressive power that Basic or 01:02:36.320 |
Java or C++ or any of those other languages have, because anything in those other languages could 01:02:44.080 |
be compiled down to a Turing machine. Now, Turing also proved a different, related thing, 01:02:50.000 |
which is that there is a single Turing machine that can simulate any other Turing machine. 01:02:56.800 |
If you just describe that other machine on its tape, right? And likewise, there is a single 01:03:03.360 |
Turing machine that will run any C program, you know, if you just put it on its tape. That's 01:03:08.480 |
a second meaning of universality. First of all, he couldn't visualize it, 01:03:13.200 |
and that was in the 30s? Yeah, the 30s, that's right. 01:03:15.520 |
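Here is a toy sketch of my own of that second kind of universality: one fixed simulator that will run any Turing machine you hand it as data, in the form of a transition table. The little machine below, which just flips every bit on its tape, is a made-up example.

```python
# A single simulator that runs any machine described by its transition table.
def run_turing_machine(delta, tape, state="start", blank="_", max_steps=10_000):
    """delta maps (state, symbol) -> (new_state, symbol_to_write, move in {-1, +1});
    a missing entry means the machine halts."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in delta:
            break                                   # no rule: halt
        state, tape[head], move = delta[(state, symbol)]
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in sorted(cells) if i in tape)

flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    # no rule for the blank symbol, so the machine halts when it runs off the input
}

print(run_turing_machine(flip_bits, "01011"))       # -> 10100
```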
That's before computers really... I mean, I don't know how... I wonder what that felt like, 01:03:24.480 |
you know, learning that there's no Santa Claus or something. Because I don't know if that's 01:03:30.000 |
empowering or paralyzing, because it doesn't give you any... It's like, you can't write a 01:03:36.720 |
software engineering book and make that the first chapter and say, we're done. 01:03:40.400 |
Well, I mean, right. I mean, in one sense, it was this enormous flattening of the universe, right? 01:03:47.520 |
I had imagined that there was going to be some infinite hierarchy of more and more powerful 01:03:52.880 |
programming languages. And then I kicked myself for having such a stupid idea, but apparently 01:03:58.880 |
Gödel had had the same conjecture in the 30s. Oh, good. You're in good company. 01:04:03.200 |
Yeah, and then Gödel read Turing's paper, and he kicked himself, and he said, "Yeah, 01:04:09.360 |
I was completely wrong about that." Okay, but I had thought that maybe where I could contribute would 01:04:16.560 |
be to invent a new, more powerful programming language that lets you express things that could 01:04:21.760 |
never be expressed in Basic. And how would you do that? Obviously, you couldn't do it in 01:04:27.760 |
Basic itself, right? But there is this incredible flattening that happens once you learn what 01:04:34.000 |
universality is. But then it's also an opportunity, because it means once you know these rules, 01:04:42.560 |
then the sky is the limit, right? Then you have the same weapons at your disposal that the world's 01:04:49.840 |
greatest programmer has. It's now all just a question of how you wield them. 01:04:54.320 |
Right, exactly. So every problem is solvable, but some problems are harder than others. 01:05:00.960 |
Well, yeah, there's the question of how much time, of how hard is it to write a program? 01:05:06.960 |
And then there's also the questions of what resources does the program need? How much time, 01:05:12.080 |
how much memory? Those are much more complicated questions, of course, ones that we're still 01:05:16.080 |
struggling with today. Exactly. So you've, I don't know if you created Complexity Zoo, or... 01:05:21.200 |
I did create the Complexity Zoo. What is it? What's complexity? 01:05:24.960 |
Oh, all right, all right, all right. Complexity theory is the study of the inherent resources 01:05:31.360 |
needed to solve computational problems. So it's easiest to give an example. Let's say we want to 01:05:41.840 |
add two numbers, right? If I want to add them, if the numbers are twice as long, then it will take 01:05:49.440 |
me twice as long to add them, but only twice as long, right? It's no worse than that. 01:05:55.440 |
For a computer, or for a person, using pencil and paper for that matter. 01:06:00.400 |
Yeah, that's right. I mean, even if you just use the elementary school algorithm of just carrying, 01:06:05.440 |
you know, then it takes time that is linear in the length of the numbers, right? Now, 01:06:10.640 |
multiplication, if you use the elementary school algorithm, is harder because you have to multiply 01:06:17.040 |
each digit of the first number by each digit of the second one, and then deal with all the carries. 01:06:22.720 |
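Here is a rough sketch of my own that just counts the single-digit operations in the two grade-school routines; the inputs are arbitrary, and the only point is the scaling, linear for addition versus quadratic for multiplication.

```python
def add_digits(x, y):
    """Grade-school addition on digit lists (least significant digit first)."""
    out, carry, ops = [], 0, 0
    for i in range(max(len(x), len(y))):
        ops += 1                                    # one column handled per digit
        s = (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0) + carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return out, ops

def mul_digits(x, y):
    """Grade-school multiplication: every digit of x meets every digit of y."""
    out, ops = [0] * (len(x) + len(y)), 0
    for i, xd in enumerate(x):
        for j, yd in enumerate(y):
            ops += 1                                # one single-digit multiplication
            out[i + j] += xd * yd
    for k in range(len(out) - 1):                   # propagate the carries
        out[k + 1] += out[k] // 10
        out[k] %= 10
    return out, ops

for n in (10, 20, 40):
    x = [1] * n                                     # an n-digit number of all 1s
    print(n, add_digits(x, x)[1], mul_digits(x, x)[1])
# doubling the length doubles the additions but quadruples the digit multiplications
```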
So that's what we call a quadratic time algorithm, right? If the numbers become twice as long, 01:06:29.120 |
now you need four times as much time, okay? So now, as it turns out, people discovered much 01:06:38.000 |
faster ways to multiply numbers using computers. And today we know how to multiply two numbers that 01:06:45.200 |
are n digits long using a number of steps that's nearly linear in n. These are questions you can 01:06:51.120 |
ask, but now let's think about a different thing that people, you know, have encountered in 01:06:56.320 |
elementary school, factoring a number. Okay, take a number and find its prime factors, right? 01:07:03.040 |
And here, you know, if I give you a number with 10 digits, I ask you for its prime factors, 01:07:08.640 |
well, maybe it's even, so you know that two is a factor. You know, maybe it ends in zero, 01:07:13.600 |
so you know that 10 is a factor, right? But, you know, other than a few obvious things like that, 01:07:18.880 |
you know, if the prime factors are all very large, then it's not clear how you even get started, 01:07:24.320 |
right? You know, it seems like you have to do an exhaustive search among an enormous number 01:07:29.200 |
of factors. Now, and as many people might know, for better or worse, the security, you know, 01:07:39.280 |
of most of the encryption that we currently use to protect the internet is based on the belief, 01:07:45.200 |
and this is not a theorem, it's a belief, that factoring is an inherently hard problem 01:07:52.080 |
for our computers. We do know algorithms that are better than just trial division, 01:07:57.360 |
just trying all the possible divisors, but they are still basically exponential. 01:08:05.280 |
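To see why the naive approach blows up, here is a small sketch of my own of plain trial division; the modulus below is just a made-up example, the product of two primes of similar size. The number of candidate divisors grows like the square root of the number, which is exponential in the number of digits.

```python
import math

def trial_division(n):
    """Return a factor of n, counting how many candidate divisors were tried."""
    tried = 0
    for d in range(2, math.isqrt(n) + 1):
        tried += 1
        if n % d == 0:
            return d, tried
    return n, tried                                 # no divisor found: n is prime

# Worst case for trial division: a product of two similar-sized primes.
n = 999983 * 1000003
print(trial_division(n))    # roughly a million divisions for a 12-digit number
# Each extra pair of digits in n multiplies that work by about 10: exponential in the digit count.
```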
>> Yeah, exactly. So the fastest algorithms that anyone has discovered, 01:08:10.560 |
at least publicly discovered, you know, I'm assuming that the NSA doesn't know something 01:08:14.800 |
better, okay, but they take time that basically grows exponentially with the cube root of the 01:08:21.520 |
size of the number that you're factoring, right? So that cube root, that's the part that takes all 01:08:26.720 |
the cleverness, okay, but there's still an exponential, there's still an exponentiality 01:08:31.040 |
there. What that means is that, like, when people use a thousand bit keys for their cryptography, 01:08:37.360 |
that can probably be broken using the resources of the NSA or the world's other intelligence 01:08:42.800 |
agencies. You know, people have done analyses that say, you know, with a few hundred million 01:08:47.600 |
dollars of computer power, they could totally do this. And if you look at the documents that 01:08:52.480 |
Snowden released, you know, it looks a lot like they are doing that or something like that. It 01:08:59.040 |
would kind of be surprising if they weren't, okay? But, you know, if that's true, then in some ways, 01:09:05.200 |
that's reassuring, because if that's the best that they can do, then that would say that they 01:09:12.960 |
>> Right, then 2000-bit numbers would be beyond what even they could do. 01:09:16.880 |
>> They haven't found an efficient algorithm. That's where all the worries and the concerns of 01:09:21.360 |
quantum computing came in, that there could be some kind of shortcut around that. 01:09:24.160 |
>> Right. So, complexity theory is a, you know, is a huge part of, let's say, the theoretical core 01:09:31.280 |
of computer science. You know, it started in the '60s and '70s as, you know, sort of an, you know, 01:09:38.080 |
autonomous field. So it was, you know, already, you know, I mean, you know, it was well-developed 01:09:43.840 |
even by the time that I was born, okay? But I, in 2002, I made a website called The Complexity Zoo, 01:09:53.040 |
to answer your question, where I just tried to catalog the different complexity classes, 01:09:59.520 |
which are classes of problems that are solvable with different kinds of resources, okay? 01:10:04.800 |
So these are kind of, you know, you could think of complexity classes as like being almost to 01:10:11.680 |
theoretical computer science, like what the elements are to chemistry, right? They're sort of, 01:10:16.560 |
you know, they're our most basic objects in a certain way. 01:10:20.720 |
>> I feel like the elements have a characteristic to them where you can't just add an infinite number. 01:10:28.880 |
>> Well, you could, but beyond a certain point, they become unstable, right? So it's like, you 01:10:35.040 |
know, in theory, you can have atoms with, you know, and look, look, I mean, a neutron star, 01:10:40.400 |
you know, is a nucleus with, you know, untold billions of neutrons in it, of hadrons in it, 01:10:50.080 |
okay? But, you know, for sort of normal atoms, right, probably you can't get much above 100, 01:10:57.600 |
you know, atomic weight, 150 or so, or sorry, sorry, I mean, beyond 150 or so protons without 01:11:05.360 |
very quickly fissioning. With complexity classes, well, yeah, you can have an infinity of complexity 01:11:11.680 |
classes. But, you know, maybe there's only a finite number of them that are particularly 01:11:16.880 |
interesting, right? Just like with anything else, you know, you care about some more than about others. 01:11:22.960 |
>> So what kind of interesting classes are there? I mean, you could have just maybe say, 01:11:28.000 |
what are the, if you take any kind of computer science class, what are the classes you learn? 01:11:32.400 |
>> Good. Let me tell you sort of the biggest ones, the ones that you would learn first. So, 01:11:38.160 |
you know, first of all, there is P, that's what it's called, okay? It stands for polynomial time. 01:11:44.400 |
And this is just the class of all of the problems that you could solve with a conventional computer, 01:11:50.960 |
like your iPhone or your laptop, you know, by a completely deterministic algorithm, 01:11:57.120 |
right? Using a number of steps that grows only like the size of the input raised to some fixed 01:12:04.320 |
power, okay? So if your algorithm is linear time, like, you know, for adding numbers, okay, that 01:12:11.920 |
problem is in P. If you have an algorithm that's quadratic time, like the elementary school 01:12:17.920 |
algorithm for multiplying two numbers, that's also in P. Even if it was the size of the input 01:12:23.200 |
to the 10th power or to the 50th power, well, that wouldn't be very good in practice. But, 01:12:29.360 |
you know, formally, we would still count that. That would still be in P, okay? But if your 01:12:34.000 |
algorithm takes exponential time, meaning like if every time I add one more data point to your input, 01:12:43.280 |
if the time needed by the algorithm doubles, if you need time like two to the power of the amount 01:12:50.480 |
of input data, then that we call an exponential time algorithm, okay? And that is not polynomial, 01:12:58.000 |
okay? So P is all of the problems that have some polynomial time algorithm, okay? 01:13:04.720 |
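Just to put numbers on that distinction, here is a two-line illustration of my own: even a fairly steep polynomial is eventually dwarfed by an exponential.

```python
# polynomial (n^3) versus exponential (2^n) step counts
for n in (10, 30, 60, 100):
    print(n, n ** 3, 2 ** n)
# at n = 100, n^3 is a million steps, while 2^n is about 1.3 * 10^30: hopeless on any computer
```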
So that includes most of what we do with our computers on a day-to-day basis, you know, 01:13:09.760 |
all the, you know, sorting, basic arithmetic, you know, whatever is going on in your email reader 01:13:15.600 |
or in Angry Birds, okay? It's all in P. Then the next super important class is called NP. 01:13:23.040 |
That stands for non-deterministic polynomial, okay? Does not stand for not polynomial, which is a 01:13:30.240 |
common confusion. But NP was basically all of the problems where if there is a solution, 01:13:38.000 |
then it is easy to check the solution if someone shows it to you, okay? So actually a perfect 01:13:43.760 |
example of a problem in NP is factoring, the one I told you about before. Like if I gave you a 01:13:51.360 |
number with thousands of digits and I told you that, you know, I asked you, "Does this have at 01:13:58.800 |
least three non-trivial divisors?" Right? That might be a super hard problem to solve, right? 01:14:05.920 |
Might take you millions of years using any algorithm that's known, at least running on 01:14:10.640 |
our existing computers, okay? But if I simply showed you the divisors, I said, "Here are three 01:14:17.040 |
divisors of this number," then it would be very easy for you to ask your computer to just check 01:14:22.800 |
each one and see if it works. Just divide it in, see if there's any remainder, right? And if they 01:14:28.240 |
all go in, then you've checked. Well, I guess there were, right? So any problem where, you know, 01:14:36.080 |
whenever there's a solution, there is a short witness, a polynomial-size 01:14:42.320 |
witness that can be checked in polynomial time. That we call an NP problem, okay? 01:14:48.800 |
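Here is a tiny sketch of my own of the checking side of that definition, matching the divisor-checking Scott describes; the particular number and witness are made-up examples (the number is a product of two primes).

```python
def check_divisors(n, claimed_divisors):
    """Polynomial-time NP-style check: divide each claimed divisor in, look for a remainder."""
    return all(1 < d < n and n % d == 0 for d in claimed_divisors)

n = 999985999949                                  # hard direction: find the divisors yourself
print(check_divisors(n, [999983, 1000003]))       # True: both go in evenly
print(check_divisors(n, [999983, 1000033]))       # False: the second one leaves a remainder
```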
>>And yeah, so every problem that's in P is also in NP, right? Because, you know, 01:14:55.120 |
you could always just ignore the witness and just, you know, if a problem is in P, 01:14:58.800 |
you can just solve it yourself. Okay? But now the, in some sense, the central, you know, 01:15:04.640 |
mystery of theoretical computer science is, is every NP problem in P. So if you can easily check 01:15:11.760 |
the answer to a computational problem, does that mean that you can also easily find the answer? 01:15:17.840 |
>>Even though there's all these problems that appear to be very difficult to find the answer, 01:15:23.600 |
it's still an open question whether a fast algorithm exists. So what's your— 01:15:27.520 |
>>Because no one has proven that there's no way to do it, right? 01:15:29.680 |
>>It's arguably the most, I don't know, the most famous, the most maybe interesting, 01:15:36.560 |
maybe you disagree with that, problem in theoretical computer science. So what's your— 01:15:42.640 |
>>If you were to bet all your money, where do you put your money? 01:15:47.680 |
>>I like to say that if we were physicists, we would have just declared that to be a law of nature. 01:15:52.480 |
You know, just like, just like thermodynamics or something. 01:15:55.280 |
>>Just giving ourselves Nobel prizes for its discovery. 01:15:58.240 |
Yeah, yeah, no, and look, if later it turned out that we were wrong, we just give ourselves— 01:16:03.520 |
>>More Nobel prizes, yeah. I mean, you know, but yeah, because we're— 01:16:09.200 |
>>I mean, no, I mean, I mean, it's really just because we are mathematicians or descended 01:16:14.720 |
from mathematicians, you know, we have to call things conjectures that other people 01:16:19.760 |
would just call empirical facts or discoveries, right? But one shouldn't read more into that 01:16:24.960 |
difference in language, you know, about the underlying truth. 01:16:28.480 |
>>So, okay, so you're a good investor and good spender of money. So then let me ask— 01:16:32.880 |
>>Let me ask another way. Is it possible at all? And what would that look like if P indeed equals NP? 01:16:41.120 |
>>Well, I do think that it's possible. I mean, in fact, you know, when people really 01:16:45.040 |
pressed me on my blog for what odds would I put, I put, you know, two or three percent odds. 01:16:51.280 |
>>That P equals NP. Yeah, just because—well, you know, I mean, you really have to 01:16:56.960 |
think about, like, if there were 50, you know, mysteries like P versus NP, and if I made a guess 01:17:03.840 |
about every single one of them, would I expect to be right 50 times, right? And the truthful answer is no. 01:17:11.040 |
>>So, you know, and that's what you really mean in saying that, you know, you have, 01:17:15.760 |
you know, better than 98% odds for something, okay? But so, yeah, you know, I mean, there could 01:17:22.640 |
certainly be surprises. And look, if P equals NP, well, then there would be the further question 01:17:28.720 |
of, you know, is the algorithm actually efficient in practice, right? I mean, Don Knuth, who I know 01:17:35.280 |
that you've interviewed as well, right, he likes to conjecture that P equals NP, but that the 01:17:41.440 |
algorithm is so inefficient that it doesn't matter anyway, right? Now, I don't know, I've listened 01:17:47.040 |
to him say that. I don't know whether he says that just because he has an actual reason for 01:17:51.920 |
thinking it's true or just because it sounds cool, okay? But, you know, that's a logical 01:17:58.000 |
possibility, right, that the algorithm could be N to the 10,000 time, or it could even just be N 01:18:03.840 |
squared time, but with a leading constant of—it could be a googol times N squared, or something 01:18:09.440 |
like that. And in that case, the fact that P equals NP, well, it would, you know, ravage the 01:18:16.160 |
whole theory of complexity, and we would have to, you know, rebuild from the ground up. But in 01:18:21.200 |
practical terms, it might mean very little, right, if the algorithm was too inefficient to run. 01:18:27.360 |
If the algorithm could actually be run in practice, like if it had small enough constants, 01:18:33.600 |
you know, or if you could improve it to where it had small enough constants that it was 01:18:38.960 |
efficient in practice, then that would change the world, okay? 01:18:42.400 |
You think it would have, like, what kind of impact would it have? 01:18:44.320 |
Well, okay, I mean, here's an example. I mean, you could—well, okay, just for starters, 01:18:49.600 |
you could break basically all of the encryption that people use to protect the internet. 01:18:54.240 |
You could break Bitcoin and every other cryptocurrency, or, you know, 01:18:58.000 |
mine as much Bitcoin as you wanted, right? You know, become a super-duper billionaire, 01:19:09.360 |
Right, that's just for starters. That's a good point. 01:19:11.280 |
Now, your next move might be something like, you know, you now have, like, a theoretically optimal 01:19:17.440 |
way to train any neural network, to find parameters for any neural network, right? 01:19:22.240 |
So you could now say, like, is there any small neural network that generates the entire content 01:19:27.840 |
of Wikipedia, right? If, you know—and now the question is not, can you find it? The question 01:19:33.680 |
has been reduced to, does that exist or not? If it does exist, then the answer would be, yes, 01:19:39.360 |
you can find it, okay, if you had this algorithm in your hands, okay? You could ask your computer, 01:19:46.320 |
you know, I mean, P versus NP is one of these seven problems that carries this million-dollar 01:19:51.360 |
prize from the Clay Foundation. You know, if you solve it, you know, and others are the Riemann 01:19:56.560 |
hypothesis, the Poincare conjecture, which was solved, although the solver turned down the prize, 01:20:03.120 |
right, and four others. But what I like to say, the way that we can see that P versus NP is the 01:20:09.040 |
biggest of all of these questions, is that if you had this fast algorithm, then you could solve all 01:20:14.720 |
seven of them, okay? You just ask your computer, you know, is there a short proof of the Riemann 01:20:19.680 |
hypothesis, right? You know, that a machine could—in a language where a machine could verify 01:20:24.640 |
it, and provided that such a proof exists, then your computer finds it in a short amount of time, 01:20:30.080 |
without having to do a brute force search. Okay, so I mean, those are the stakes of what we're 01:20:34.480 |
talking about. But I hope that also helps to give your listeners some intuition of why I and most of 01:20:42.240 |
my colleagues would put our money on P not equaling NP. —Is it possible—I apologize, this is a really 01:20:48.720 |
dumb question, but is it possible to—that a proof will come out that P equals NP, but an algorithm 01:20:57.280 |
that makes P equals NP is impossible to find? Is that like crazy? —Okay, well, if P equals NP, 01:21:04.880 |
it would mean that there is such an algorithm. —That it exists, yeah. —But, you know, it would 01:21:12.880 |
mean that it exists. Now, you know, in practice, normally the way that we would prove anything 01:21:18.240 |
like that would be by finding the algorithm. —By finding one algorithm. —But there is such a thing 01:21:22.720 |
as a non-constructive proof that an algorithm exists. You know, this has really only reared 01:21:28.000 |
its head, I think, a few times in the history of our field, right? But, you know, it is theoretically 01:21:34.560 |
possible that such a thing could happen. But, you know, there are—even here, there are some amusing 01:21:40.960 |
observations that one could make. So there is this famous observation of Leonid Levin, who was, you 01:21:47.360 |
know, one of the original discoverers of NP completeness, right? And he said, "Well, consider 01:21:52.400 |
the following algorithm that, like, I guarantee will solve the NP problems efficiently, just as 01:21:59.520 |
provided that P equals NP." Okay? Here is what it does. It just runs, you know, it enumerates every 01:22:06.640 |
possible algorithm in a gigantic infinite list, right? From like, in like alphabetical order, 01:22:12.720 |
right? You know, and many of them maybe won't even compile, so we just ignore those, okay? But now 01:22:18.080 |
we just, you know, run the first algorithm, then we run the second algorithm, we run the first one 01:22:23.440 |
a little bit more, then we run the first three algorithms for a while, we run the first four 01:22:28.080 |
for a while. This is called dovetailing, by the way. This is a known trick in theoretical computer 01:22:34.960 |
science, okay? But we do it in such a way that, you know, whatever is the algorithm out there in 01:22:41.760 |
our list that solves NP complete, you know, the NP problems efficiently, will eventually hit that 01:22:48.000 |
one, right? And now the key is that whenever we hit that one, you know, by assumption it has to 01:22:54.560 |
solve the problem, it has to find the solution, and once it claims to find the solution, then we 01:22:59.680 |
can check that ourself, right? Because these are NP problems, then we can check it. Now, this is 01:23:05.360 |
utterly impractical, right? You know, you'd have to do this enormous, exhaustive search among all 01:23:11.680 |
the algorithms, but from a certain theoretical standpoint, that is merely a constant pre-factor. 01:23:18.800 |
>> That's merely a multiplier of your running time. So there are tricks like that one can do to say 01:23:23.600 |
that in some sense the algorithm would have to be constructive. But, you know, in the human sense, 01:23:30.480 |
you know, it is possible that, you know, it's conceivable that one could prove such a thing 01:23:35.280 |
via a non-constructive method. Is that likely? I don't think so, not personally. 01:23:41.360 |
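Here is a toy sketch of my own of the dovetailing trick in Levin's observation. It is only a model: the candidate "algorithms" are a hand-supplied list of generators that yield None while working and eventually yield a claimed answer, whereas a real universal search would enumerate every possible program; the factoring target is an arbitrary example.

```python
from itertools import count

def dovetail(candidates, verify):
    """Levin-style universal search, toy version: in round t, give each of the first t
    candidates t more steps; return the first answer the verifier accepts."""
    running, source = [], iter(candidates)
    for round_no in count(1):
        try:
            running.append(next(source)())          # bring one more candidate into play
        except StopIteration:
            pass                                    # no more candidates to add
        for gen in running:
            for _ in range(round_no):
                try:
                    result = next(gen)
                except StopIteration:
                    break                           # this candidate has given up
                if result is not None and verify(result):
                    return result

def useless():
    while True:
        yield None                                  # an algorithm that never finds anything

def factor_8051():
    n = 8051
    for d in range(2, n):
        yield None                                  # one step of work
        if n % d == 0:
            yield d                                 # claim an answer

# The verifier is cheap, as with any NP problem: just check the claimed factor.
print(dovetail([useless, factor_8051], verify=lambda d: 8051 % d == 0))   # -> 83
```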
>> So that's P and NP, but the Complexity Zoo is full of wonderful creatures. 01:23:52.640 |
>> Yeah, how do you get more? How are beings made? 01:23:56.160 |
>> I mean, just for starters, there is everything that we could do with a conventional computer 01:24:02.560 |
with a polynomial amount of memory, okay? But possibly an exponential amount of time 01:24:08.080 |
because we get to reuse the same memory over and over again. Okay, that is called PSPACE, 01:24:13.360 |
okay? And that's actually, we think, an even larger class than NP. Okay, well, 01:24:20.240 |
P is contained in NP, which is contained in PSPACE. And we think that those containments are strict. 01:24:26.240 |
>> And the constraint there is on the memory. The memory has to grow only polynomially. 01:24:32.880 |
>> That's right, that's right. But in PSPACE, we now have interesting things that were not in NP, 01:24:38.240 |
like as a famous example, you know, from a given position in chess, you know, does white or black 01:24:45.200 |
have the win? Let's say, assuming, provided that the game lasts only for a reasonable number of 01:24:50.720 |
moves, okay? Or likewise for Go, okay? And, you know, even for the generalizations of these games 01:24:57.040 |
to arbitrary size boards, right? Because with an eight by eight board, you could say that's just a 01:25:01.360 |
constant size problem. You just, you know, in principle, you just solve it in O of one time, 01:25:06.240 |
right? But so we really mean the generalizations of, you know, games to arbitrary size boards here. 01:25:14.080 |
Or another thing in PSPACE would be like, I give you some really hard constraint satisfaction 01:25:21.920 |
problem, like, you know, a traveling salesperson or, you know, packing boxes into the trunk of 01:25:28.880 |
your car or something like that. And I ask not just, is there a solution, which would be an NP 01:25:33.920 |
problem, but I ask how many solutions are there, okay? That, you know, count the number of valid 01:25:41.200 |
solutions. Those problems lie in a complexity class called sharp P, or, like, 01:25:49.120 |
it's written #P, like hashtag P. >> Got it. 01:25:51.760 |
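Here is a tiny brute-force illustration of my own of the gap between the NP-style question and the sharp P-style question, using subset-sum as a stand-in constraint problem; the numbers are arbitrary, and the enumeration is of course exponential.

```python
from itertools import combinations

def subset_sum_solutions(nums, target):
    """Return every subset of nums that sums to target (brute force over 2^n subsets)."""
    hits = []
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                hits.append(combo)
    return hits

nums, target = [3, 5, 7, 11, 14, 20], 25
sols = subset_sum_solutions(nums, target)
print(len(sols) > 0)    # the NP-style question: is there a solution?
print(len(sols))        # the sharp P-style question: how many solutions are there?
```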
>> Okay, which sits between NP and PSPACE. There's all the problems that you can do in 01:25:57.920 |
exponential time, okay? That's called EXP. So, and by the way, it was proven in the 60s that EXP 01:26:08.400 |
is larger than P, okay? So we know that much. We know that there are problems that are solvable in 01:26:14.400 |
exponential time that are not solvable in polynomial time, okay? In fact, we even know, 01:26:20.560 |
we know that there are problems that are solvable in N cubed time that are not solvable in N squared 01:26:25.920 |
time. >> And that, those don't help us with the controversy between P and NP at all. 01:26:29.840 |
>> Unfortunately, it seems not, or certainly not yet, right? The techniques that we use to 01:26:36.400 |
establish those things, they're very, very related to how Turing proved the unsolvability of the 01:26:41.120 |
halting problem, but they seem to break down when we're comparing two different resources, 01:26:46.640 |
like time versus space, or like, you know, P versus NP, okay? But, you know, I mean, there's 01:26:53.360 |
what you can do with a randomized algorithm, right? That can sometimes, you know, has some 01:26:58.560 |
probability of making a mistake. That's called BPP, bounded error probabilistic polynomial time. 01:27:05.200 |
>> Wow. >> And then, of course, there's one that's very close to my own heart, what you can 01:27:09.360 |
efficiently do, do in polynomial time using a quantum computer, okay? And that's called BQP, 01:27:16.000 |
right? And so, you know, what-- >> What's understood about that class, 01:27:19.840 |
maybe, as a comment. >> Okay, so P is contained in BPP, which is contained in BQP, which is contained 01:27:26.320 |
in PSPACE, okay? So, in fact, BQP is contained in something very similar to sharp P: 01:27:33.280 |
it's basically, you know, contained in P with the magic power to solve sharp P 01:27:39.760 |
problems, okay? >> Why is BQP contained in P space? 01:27:44.480 |
>> Oh, that's an excellent question. So there is, well, I mean, one has to prove that, okay? But 01:27:52.400 |
the proof, you could think of it as using Richard Feynman's picture of quantum mechanics, 01:28:00.960 |
which is that you can always, you know, we haven't really talked about quantum mechanics in this 01:28:06.640 |
conversation. We did in our previous one. >> Yeah, we did last time. 01:28:09.600 |
>> But yeah, we did last time, okay? But basically, you could always think of a quantum computation 01:28:16.160 |
as like a branching tree of possibilities, where each possible path that you could take 01:28:24.000 |
through, you know, the space has a complex number attached to it called an amplitude, okay? And now, 01:28:30.960 |
the rule is, you know, when you make a measurement at the end, well, you see a random answer, 01:28:36.080 |
okay? But quantum mechanics is all about calculating the probability that you're 01:28:40.720 |
going to see one potential answer versus another one, right? And the rule for calculating the 01:28:47.120 |
probability that you'll see some answer is that you have to add up the amplitudes for all of the 01:28:53.120 |
paths that could have led to that answer. And then, you know, that's a complex number, so that, 01:28:58.560 |
you know, how could that be a probability? Then you take the squared absolute value of the result. 01:29:04.400 |
That gives you a number between zero and one, okay? So, yeah, I just summarized quantum mechanics in 01:29:10.960 |
like 30 seconds, okay? >> Yeah, in a few sentences. 01:29:13.040 |
>> But now, you know, what this already tells us is that anything I can do with a quantum computer, 01:29:19.440 |
I could simulate with a classical computer if I only have exponentially more time, okay? And why 01:29:25.520 |
is that? Because if I have exponential time, I could just write down this entire branching tree 01:29:31.920 |
and just explicitly calculate each of these amplitudes, right? You know, that will be very 01:29:37.600 |
inefficient, but it will work, right? It's enough to show that quantum computers could not solve the 01:29:43.600 |
halting problem, or, you know, they could never do anything that is literally uncomputable in 01:29:48.960 |
Turing's sense, okay? But now, as I said, there is even a stronger result, which says that BQP 01:29:55.520 |
is contained in P space. The way that we prove that is that we say, "If all I want is to calculate 01:30:03.200 |
the probability of some particular output happening, you know, which is all I need to 01:30:08.240 |
simulate a quantum computer, really, then I don't need to write down the entire quantum state, 01:30:13.520 |
which is an exponentially large object. All I need to do is just calculate what is the amplitude 01:30:20.240 |
for that final state, and to do that, I just have to sum up all the amplitudes that lead to that 01:30:27.120 |
state. Okay, so that's an exponentially large sum, but I can calculate it just reusing the same 01:30:33.280 |
memory over and over for each term in the sum. >> And hence the P, in the P space. 01:30:39.360 |
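Here is a minimal path-sum sketch of my own of that argument, on a one-qubit toy circuit of two Hadamard gates: to get one output amplitude, you sum the product of gate entries over every path of intermediate basis states, never storing the whole state vector.

```python
from itertools import product
import math

# Hadamard gate entries, keyed as (out, in)
H = {(0, 0): 1 / math.sqrt(2), (0, 1): 1 / math.sqrt(2),
     (1, 0): 1 / math.sqrt(2), (1, 1): -1 / math.sqrt(2)}

def amplitude(gates, start, end):
    """Amplitude <end| gates[-1] ... gates[0] |start> for one qubit, summed path by path."""
    total = 0.0
    for mids in product((0, 1), repeat=len(gates) - 1):
        path = (start,) + mids + (end,)
        amp = 1.0
        for gate, (a, b) in zip(gates, zip(path, path[1:])):
            amp *= gate[(b, a)]                     # entry <b|gate|a>
        total += amp                                # paths interfere: amplitudes can cancel
    return total

# Two Hadamards act like the identity: the two paths to |1> cancel exactly.
print(round(amplitude([H, H], start=0, end=0), 6))  # -> 1.0
print(round(amplitude([H, H], start=0, end=1), 6))  # -> 0.0
```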
>> So what, out of that whole complexity zoo, and it could be BQP, what do you find is the most, 01:30:46.000 |
the class that captured your heart the most? Was the most beautiful class that's just, yeah. 01:30:53.440 |
>> Well, I used as my email address bqpqpoly@gmail.com, just because BQP/qpoly, 01:31:03.680 |
well, amazingly, no one had taken it. >> Amazing. 01:31:06.720 |
>> But this is a class that I was involved in defining, proving the first theorems about 01:31:14.240 |
in 2003 or so, so it was close to my heart. But this is like, if we extended bqp, which is the 01:31:22.560 |
class of everything we can do efficiently with a quantum computer, to allow quantum advice, 01:31:28.880 |
which means imagine that you had some special initial state that could somehow help you do 01:31:35.440 |
computation, and maybe such a state would be exponentially hard to prepare, but maybe somehow 01:31:44.080 |
these states were formed in the Big Bang or something, and they've just been sitting around 01:31:48.000 |
ever since, right? If you found one, this state could be, like, ultra-powerful; there are no 01:31:53.600 |
limits on how powerful it could be, except that this state doesn't know in advance which input 01:31:59.760 |
you've got, right? It only knows the size of your input. That's BQP/qpoly. So that's one that I 01:32:07.360 |
just personally happen to love, okay? But if you're asking, there's a class that I think is way 01:32:16.800 |
more beautiful or fundamental than a lot of people, even within this field, realize that it is. 01:32:24.000 |
That class is called SZK, or statistical zero knowledge. And there's a very, very easy way to 01:32:31.760 |
define this class, which is to say, suppose that I have two algorithms that each sample from 01:32:38.240 |
probability distributions, right? So each one just outputs random samples according to possibly 01:32:45.360 |
different distributions. And now the question I ask is, let's say distributions over strings of 01:32:51.520 |
n bits, so over an exponentially large space. Now I ask, are these two distributions close or far 01:32:59.120 |
as probability distributions? Okay, any problem that can be reduced to that, that can be put into 01:33:05.280 |
that form, is an SZK problem. And the way that this class was originally discovered was completely 01:33:12.320 |
different from that, and was kind of more complicated. It was discovered as the class 01:33:17.360 |
of all of the problems that have a certain kind of what's called zero knowledge proof. 01:33:22.160 |
Zero knowledge proofs are one of the central ideas in cryptography. Shafi Goldwasser and Silvio 01:33:29.760 |
Micali won the Turing Award for inventing them, and they're at the core of even some 01:33:35.200 |
cryptocurrencies that people use nowadays. But zero knowledge proofs are ways of proving to 01:33:44.400 |
someone that something is true, like that there is a solution to this optimization problem, 01:33:52.880 |
or that these two graphs are isomorphic to each other or something, but without revealing why 01:33:58.640 |
it's true, without revealing anything about why it's true. SZK is all of the problems for which 01:34:06.480 |
there is such a proof that doesn't rely on any cryptography. And if you wonder, how could such 01:34:13.680 |
a thing possibly exist? Well, imagine that I had two graphs, and I wanted to convince you 01:34:21.040 |
that these two graphs are not isomorphic, meaning I cannot permute one of them so that it's the same 01:34:27.040 |
as the other one. That might be a very hard statement to prove. You might have to do a very 01:34:33.760 |
exhaustive enumeration of all the different permutations before you were convinced that 01:34:38.640 |
it was true. But what if there were some all-knowing wizard that said to you, "Look, 01:34:43.760 |
I'll tell you what. Just pick one of the graphs randomly, then randomly permute it, 01:34:49.120 |
then send it to me, and I will tell you which graph you started with. 01:34:54.480 |
And I will do that every single time." Right? >> Let me load that in. Okay, 01:34:59.280 |
that's fine. I got it. >> Yeah. And let's say that that wizard did that 100 times, and it 01:35:04.080 |
was right every time. Now, if the graphs were isomorphic, then it would have been flipping 01:35:09.600 |
a coin each time. It would have had only a 1 in 2 to the 100 power chance of guessing right each time. 01:35:16.960 |
So if it's right every time, then now you're statistically convinced that these graphs are 01:35:22.880 |
not isomorphic, even though you've learned nothing new about why they are. >> So 01:35:27.440 |
fascinating. >> So yeah. So SZK is all of the problems that have protocols like that one, 01:35:33.440 |
but it has this beautiful other characterization. It's shown up again and again in my own work, 01:35:39.360 |
in a lot of people's work. And I think that it really is one of the most fundamental classes. 01:35:44.720 |
It's just that people didn't realize that when it was first discovered. 01:35:47.520 |
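Here is a toy run of my own of the graph non-isomorphism protocol just described: the verifier secretly picks one of two graphs, randomly relabels it, and the all-powerful prover (here, brute force over all relabelings) must say which one it was. The two small graphs are made-up examples.

```python
import random
from itertools import permutations

G0 = {(0, 1), (1, 2), (2, 3), (3, 0)}               # a 4-cycle
G1 = {(0, 1), (0, 2), (0, 3), (1, 2)}               # has a degree-3 vertex: not isomorphic to G0

def relabel(graph, perm):
    return {tuple(sorted((perm[u], perm[v]))) for (u, v) in graph}

def prover_guess(challenge):
    """Unbounded prover: brute-force over every relabeling of G0."""
    for perm in permutations(range(4)):
        if relabel(G0, dict(enumerate(perm))) == challenge:
            return 0
    return 1

rounds, correct = 20, 0
for _ in range(rounds):
    secret = random.randrange(2)                        # verifier's secret coin flip
    perm = dict(enumerate(random.sample(range(4), 4)))  # random relabeling
    challenge = relabel((G0, G1)[secret], perm)
    if prover_guess(challenge) == secret:
        correct += 1

# If the graphs really are non-isomorphic, the prover wins every round;
# if they were isomorphic, it could only guess, succeeding with probability 1/2 each time.
print(correct, "out of", rounds)
```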
>> So we're living in the middle of a pandemic currently. How has your life been 01:35:54.400 |
changed? Or no, better to ask, how has your perspective of the world changed with this 01:35:59.600 |
world-changing event of a pandemic overtaking the entire world? 01:36:03.680 |
>> Yeah. Well, I mean, all of our lives have changed. I guess as with no other event since I 01:36:10.640 |
was born, you would have to go back to World War II for something, I think, of this magnitude, 01:36:16.880 |
on the way that we live our lives. As for how it has changed my worldview, 01:36:21.760 |
I think that the failure of institutions like the CDC, like other institutions that we thought were 01:36:33.440 |
trustworthy, like a lot of the media, was staggering, was absolutely breathtaking. 01:36:40.800 |
It is something that I would not have predicted. I think I wrote on my blog that it's fascinating 01:36:50.000 |
to re-watch the movie Contagion from a decade ago that correctly foresaw so many aspects of what was 01:36:58.800 |
going on. An airborne virus originates in China, spreads to much of the world, shuts everything 01:37:07.680 |
down until a vaccine can be developed, everyone has to stay at home. It gets an enormous number 01:37:16.240 |
of things right. But the one thing that they could not imagine is that in this movie, everyone from 01:37:22.960 |
the government is hyper-competent, hyper-dedicated to the public good. 01:37:30.160 |
>> Yeah, they're the best of the best. And there are these conspiracy theorists 01:37:36.480 |
who think this is all fake news, there's not really a pandemic. And those are some random 01:37:42.960 |
people on the internet who the hyper-competent government people have to oppose. In trying to 01:37:49.920 |
envision the worst thing that could happen, there was a failure of imagination. The movie makers 01:37:56.960 |
did not imagine that the conspiracy theorists and the incompetence and the nutcases would have 01:38:03.920 |
captured our institutions and be the ones actually running things. 01:38:07.840 |
>> So you had a certain... I love competence in all walks of life. I get so 01:38:13.680 |
much energy, I'm so excited by people who do an amazing job. And I, like you, 01:38:17.520 |
or maybe you can clarify, but I had maybe not intuition, but I hope that government at its 01:38:23.200 |
best could be ultra-competent. First of all, two questions. How do you explain the lack of 01:38:30.400 |
competence? And the other, maybe on the positive side, how can we build a more competent government? 01:38:36.160 |
>> Well, there's an election in two months. I mean... 01:38:39.680 |
>> But you have a faith that the election process... 01:38:41.680 |
>> It's not going to fix everything, but I feel like there is a ship that is sinking, 01:38:47.920 |
and you could at least stop the sinking. But I think that there are much, much deeper problems. 01:38:54.160 |
I mean, I think that it is plausible to me that a lot of the failures with the CDC, 01:39:03.440 |
with some of the other health agencies even predate Trump, predate the right-wing populism 01:39:11.440 |
that has taken over much of the world now. And I think that it is very... I've actually been 01:39:23.760 |
strongly in favor of rushing vaccines. I thought that we could have done human challenge trials, 01:39:34.320 |
which were not done. We could have had volunteers to actually get vaccines, get exposed to COVID. 01:39:46.080 |
>> So innovative ways of accelerating what we've done previously over a long amount of time. 01:39:50.320 |
>> I thought that each month that a vaccine is closer is like trillions of dollars. 01:39:57.760 |
>> And of course lives, at least hundreds of thousands of lives. 01:40:02.160 |
>> Are you surprised that it's taken this long? We still don't have a plan. There's still 01:40:06.480 |
not a feeling like anyone is actually doing anything in terms of alleviating any kind of plan. 01:40:13.200 |
So there's a bunch of stuff. There's vaccine, but you could also do a testing infrastructure where 01:40:17.680 |
everybody's tested nonstop with contact tracing, all that kind of stuff. 01:40:21.200 |
>> Well, I mean, I'm as surprised as almost everyone else. I mean, this is a historic failure. 01:40:26.800 |
It is one of the biggest failures in the 240 year history of the United States. And we should be 01:40:33.440 |
crystal clear about that. And one thing that I think has been missing, even from the more 01:40:41.520 |
competent side is the World War II mentality. The mentality of, "Let's just... If we can, 01:40:54.640 |
by breaking a whole bunch of rules, get a vaccine in even half the amount of time as we thought, 01:41:01.680 |
then let's just do that because we have to weigh all of the moral qualms we have about doing that 01:41:13.600 |
>> And one key little aspect to that that's deeply important to me, 01:41:17.360 |
and we'll go into that topic next, is the World War II mentality wasn't just about breaking all 01:41:23.680 |
the rules to get the job done. There was a togetherness to it. So if I were president right 01:41:30.400 |
now, it seems quite elementary to unite the country because we're facing a crisis. It's easy 01:41:38.800 |
to make the virus the enemy. And it's very surprising to me that the division has increased 01:41:45.680 |
as opposed to decreased. That's heartbreaking. 01:41:48.640 |
>> Yeah, well, look, I mean, it's been said by others that this is the first time 01:41:52.160 |
in the country's history that we have a president who does not even pretend to want to unite the 01:41:58.080 |
country. I mean, Lincoln, who fought a civil war, said he wanted to unite the country. 01:42:06.560 |
And I do worry enormously about what happens if the results of this election are contested. 01:42:17.520 |
Will there be violence as a result of that, and will we have a clear path of succession? 01:42:22.320 |
And look, I mean, we're going to find out the answers to this in two months, and if none of 01:42:28.880 |
that happens, maybe I'll look foolish. But I am willing to go on the record and say, 01:42:34.560 |
>> Yeah, I've been reading The Rise and Fall of the Third Reich. 01:42:37.680 |
So if I can--this is like one little voice to put out there that I think November 01:42:46.400 |
will be a really critical month for people to breathe and put love out there. Do not put 01:42:53.360 |
anger out there; anger in that context, no matter who wins, no matter what is said, 01:42:59.120 |
may destroy our country, may destroy the world because of 01:43:04.560 |
the power of the country. So it's really important to be patient, loving, empathetic. 01:43:09.520 |
One of the things that troubles me is that even people on the left are unable to have 01:43:15.840 |
a love and respect for people who voted for Trump. They can't imagine that there's good 01:43:20.560 |
people that could vote for the opposite side. 01:43:22.560 |
>> Oh, I know there are, because I know some of them, right? I mean, you know, 01:43:26.640 |
it's still, you know, maybe it baffles me, but you know, I know such people. 01:43:31.440 |
>> Let me ask you this. It's also heartbreaking to me on the topic of cancel 01:43:36.320 |
culture. So in the machine learning community, I've seen it a little bit, that there's 01:43:41.840 |
aggressive attacking of people who are trying to have a nuanced conversation about things. 01:43:47.120 |
And it's troubling because it feels like nuanced conversation is the only way to talk about 01:43:55.360 |
difficult topics. And when there's a thought police and speech police on any nuanced conversation 01:44:02.320 |
that everybody has to, like in an Animal Farm chant, repeat that racism is bad and sexism is bad, 01:44:08.880 |
which are things that everybody believes, and they can't possibly say anything nuanced, 01:44:14.480 |
it feels like it goes against any kind of progress, from my kind of shallow perspective. 01:44:19.440 |
But you've written a little bit about cancel culture. Do you have thoughts there? 01:44:23.440 |
>> Well, look, I mean, to say that I am opposed to, you know, this trend of cancellations 01:44:31.920 |
or of, you know, shouting people down rather than engaging them, that would be a massive 01:44:36.000 |
understatement, right? And I feel like, you know, I have put my money where my mouth is, 01:44:41.680 |
you know, not as much as some people have, but, you know, I've tried to do something. I mean, 01:44:46.800 |
I have defended, you know, some unpopular people and unpopular, you know, ideas on my blog. I've, 01:44:54.480 |
you know, tried to defend, you know, norms of open discourse of, you know, 01:45:02.160 |
reasoning with our opponents, even when I've been shouted down for that on social media, 01:45:07.760 |
you know, called a racist, called a sexist, all of those things. And which, by the way, I should 01:45:12.160 |
say, you know, I would be perfectly happy to, you know, say, you know, if we had time to say, 01:45:17.040 |
you know, 10,000 times, you know, my hatred of racism, of sexism, of homophobia, right? 01:45:25.600 |
But what I don't want to do is to cede to some particular political faction, the right to define 01:45:33.600 |
exactly what is meant by those terms, to say, "Well, then you have to agree with all of these 01:45:38.720 |
other extremely contentious positions, or else you are a misogynist, or else you are a racist, 01:45:45.440 |
right?" I say that, "Well, no, you know, don't, like, don't I, or, you know, don't people like 01:45:52.720 |
me also get a say in the discussion about, you know, what is racism, about what is going to be 01:45:58.480 |
the most effective to combat racism, right?" And, you know, this cancellation mentality, I think, 01:46:06.240 |
is spectacularly ineffective at its own professed goal of, you know, combating racism and sexism. 01:46:13.200 |
What's a positive way out? So I try to, I don't know if you see what I do on Twitter, but on 01:46:20.160 |
Twitter, I mostly, in my whole, in my life, I've actually, it's who I am to the core, is like, 01:46:26.000 |
I really focus on the positive, and I try to put love out there in the world. And still, 01:46:31.440 |
I get attacked. And I look at that, and I wonder, like... 01:46:38.240 |
Like, I haven't actually said anything difficult and nuanced. You talk about somebody like 01:46:43.920 |
Steven Pinker, who, I actually don't know the full range of things that he's attacked for, 01:46:51.040 |
but he tries to say difficult, he tries to be thoughtful about difficult topics. 01:46:55.920 |
And obviously, he just gets slaughtered by... 01:47:00.000 |
Well, I mean, yes, but it's also amazing how well Steve has withstood it. I mean, 01:47:06.480 |
he just survived an attempt to cancel him just a couple of months ago, right? 01:47:10.960 |
Psychologically, he survives it too, which worries me, because I don't think I can. 01:47:15.360 |
Yeah, I've gotten to know Steve a bit. He is incredibly unperturbed by this stuff. 01:47:19.920 |
And I admire that, and I envy it. I wish that I could be like that. I mean, my impulse when 01:47:26.240 |
I'm getting attacked is I just want to engage every single anonymous person on Twitter and 01:47:32.560 |
Reddit who is saying mean stuff about me. And I want to just say, "Well, look, can we just 01:47:36.960 |
talk this over for an hour, and then you'll see that I'm not that bad." 01:47:41.360 |
And sometimes that even works. The problem is then there's the 20,000 other ones, right? 01:47:46.640 |
And that's not... But psychologically, does that wear on you? 01:47:50.640 |
It does, it does. But yeah, I mean, in terms of what is the solution, I mean, I wish I knew, 01:47:56.080 |
right? In a certain way, these problems are maybe harder than P versus NP, right? I mean, 01:48:04.240 |
but I think that part of it has to be for... I think that there's a lot of sort of silent support 01:48:11.200 |
for what I'll call the open discourse side, the reasonable enlightenment side. And I think that 01:48:18.080 |
support has to become less silent, right? I think that a lot of people agree that a lot of these 01:48:27.280 |
cancellations and attacks are ridiculous, but are just afraid to say so, right? Or else they'll get 01:48:34.080 |
shouted down as well, right? That's just the standard witch hunt dynamic, which of course, 01:48:38.880 |
this faction understands and exploits to its great advantage. But if more people just said, 01:48:47.760 |
"We're not going to stand for this, right? Guess what? We're against racism too, but 01:48:57.760 |
what you're doing is ridiculous," right? And the hard part is it takes a lot of mental energy. It 01:49:04.000 |
takes a lot of time. Even if you feel like you're not going to be canceled or you're staying on the 01:49:09.920 |
safe side, it takes a lot of time to phrase things in exactly the right way and to respond to 01:49:18.320 |
everything people say. But I think that the more people speak up from all political persuasions, 01:49:27.760 |
from all walks of life, then the easier it is to move forward. 01:49:32.720 |
Since we've been talking about love, can you—last time I talked to you about the meaning of life a 01:49:39.600 |
little bit, but here has—it's a weird question to ask a computer scientist, but has love for other 01:49:47.120 |
human beings, for things, for the world around you played an important role in your life? 01:49:54.000 |
Have you—it's easy for a world-class computer scientist, you could even call yourself a 01:50:03.200 |
physicist, to be lost in the books. Has the connection to other humans, love for other 01:50:09.440 |
humans played an important role? I love my kids. I love my wife. I love my parents. 01:50:19.600 |
I am probably not different from most people in loving their families and in that being very 01:50:28.320 |
important in my life. Now, I should remind you that I am a theoretical computer scientist. 01:50:36.320 |
If you're looking for deep insight about the nature of love, you're probably looking in the 01:50:40.640 |
wrong place to ask me, but sure, it's been important. 01:50:45.920 |
But is there something from a computer science perspective to be said about love? 01:50:50.160 |
Is that even beyond into the realm of consciousness? 01:50:56.480 |
There was this great cartoon, I think it was one of the classic XKCDs, where it shows a heart, 01:51:04.560 |
and it's like squaring the heart, taking the Fourier transform of the heart, integrating the 01:51:10.720 |
heart, each thing, and then it says, "My normal approach is useless here." 01:51:16.880 |
I'm so glad I asked this question. I think there's no better way to end this, Scott. 01:51:23.440 |
I hope we get a chance to talk again. This has been an amazing, 01:51:26.080 |
cool experiment to do it outside. I'm really glad you made it out. 01:51:29.120 |
Yeah, well, I appreciate it a lot. It's been a pleasure, 01:51:31.520 |
and I'm glad you were able to come out to Austin. 01:51:35.600 |
Thanks for listening to this conversation with Scott Aaronson, and thank you to our sponsors, 01:51:41.200 |
8sleep, SimpliSafe, ExpressVPN, and BetterHelp. Please check out these sponsors in the description 01:51:48.480 |
to get a discount and to support this podcast. If you enjoy this thing, subscribe on YouTube, 01:51:54.960 |
review it with 5 Stars on Apple Podcasts, follow on Spotify, support on Patreon, 01:52:03.600 |
And now let me leave you with some words from Scott Aaronson that I also gave to you in the 01:52:08.320 |
introduction, which is, "If you always win, then you're probably doing something wrong." 01:52:14.560 |
Thank you for listening and for putting up with the intro and outro in this strange room in the 01:52:21.200 |
middle of nowhere, and I very much hope to see you next time in many more ways than one.