Max Tegmark: AI and Physics | Lex Fridman Podcast #155
Chapters
0:00 Introduction
2:49 AI and physics
16:07 Can AI discover new laws of physics?
24:57 AI safety
42:33 Extinction of human species
53:31 How to fix fake news and misinformation
75:05 Autonomous weapons
90:28 The man who prevented nuclear war
100:36 Elon Musk and AI
114:14 AI alignment
120:16 Consciousness
129:20 Richard Feynman
133:30 Machine learning and computational physics
144:28 AI and creativity
155:42 Aliens
171:25 Mortality
00:00:00.000 |
The following is a conversation with Max Tegmark, 00:00:10.960 |
He is a physicist and artificial intelligence researcher 00:00:14.800 |
at MIT, co-founder of the Future of Life Institute, 00:00:21.360 |
and author of Life 3.0: Being Human in the Age of Artificial Intelligence. 00:00:42.880 |
Most recently at the intersection of AI and physics, 00:00:49.960 |
that divide us by controlling the information we see, 00:00:55.000 |
and all other kinds of complex social phenomena 00:01:01.440 |
and brilliant people I have the fortune of knowing. 00:01:18.480 |
So the choice is: wisdom, caffeine, sanity, or privacy. 00:01:25.000 |
And if you wish, click the sponsor links below 00:01:27.480 |
to get a discount and to support this podcast. 00:01:32.160 |
many of the researchers in the machine learning 00:01:42.720 |
Because our current algorithms are seen as useful but dumb, 00:01:46.160 |
it's difficult to imagine how they may become destructive 00:01:53.000 |
I understand this mindset, but it's very troublesome. 00:02:00.480 |
reminiscent of a lobster sitting in a pot of lukewarm water 00:02:21.640 |
to define the trajectory of the interplay of technology 00:02:26.960 |
I think that the future of human civilization 00:02:29.680 |
very well may be at stake over this very question 00:02:32.840 |
of the role of artificial intelligence in our society. 00:02:36.240 |
If you enjoy this thing, subscribe on YouTube, 00:02:38.160 |
review it on Apple Podcasts, follow on Spotify, 00:02:40.960 |
support on Patreon, or connect with me on Twitter, 00:02:45.260 |
And now, here's my conversation with Max Tegmark. 00:02:51.440 |
but you were actually episode number one of this podcast 00:02:55.280 |
just a couple of years ago, and now we're back. 00:02:59.280 |
And it so happens that a lot of exciting things 00:03:01.800 |
happened in both physics and artificial intelligence, 00:03:05.600 |
both fields that you're super passionate about. 00:03:08.480 |
Can we try to catch up to some of the exciting things 00:03:14.040 |
especially in the context of the way it's cracking open 00:03:21.180 |
- Yeah, I'd love to, especially now as we start 2021 here. 00:03:30.840 |
Not the ones necessarily that media wrote about, 00:03:47.720 |
that they could cause if we're not smart enough 00:03:50.480 |
You know, what do we learn basically from this? 00:03:56.220 |
is the AI Institute for Artificial Intelligence and Fundamental Interactions. 00:04:06.300 |
- The idea is something I'm very on fire with, 00:04:13.340 |
And you know, it's been almost five years now 00:04:22.120 |
And in the beginning, I noticed a lot of my colleagues, 00:04:32.280 |
- But then gradually, I, together with some colleagues, 00:04:39.080 |
of the other professors in our physics department 00:04:50.620 |
MIT and a bunch of neighboring universities here also. 00:04:56.500 |
who were looking at me funny have stopped asking 00:05:06.780 |
AI can help physics a lot to do better physics, 00:05:17.600 |
My colleague, Marin Soljačić, for example, 00:05:31.340 |
dramatically less energy use, faster, better. 00:05:43.600 |
and a different, maybe more audacious attitude. 00:05:52.480 |
where you're just trying to make things that work 00:05:55.080 |
and being more interested in maybe selling them 00:06:01.200 |
and proving theorems about that they will always work. 00:06:11.020 |
they didn't just train with machine learning, 00:06:12.800 |
oh, let's fire it a little bit more to the left, 00:06:17.600 |
No, we figured out Newton's laws of gravitation 00:06:24.040 |
and got a really deep fundamental understanding. 00:06:26.440 |
And that's what gives us such confidence in rockets. 00:06:40.640 |
will be understood at a really, really deep level. 00:06:43.720 |
So we trust them not 'cause some sales rep told us to, 00:06:53.200 |
that they will always do what we expect them to do. 00:06:57.640 |
So it's interesting, if you look at big breakthroughs 00:07:00.760 |
that have happened in machine learning this year, 00:07:09.880 |
but if you just think about not that many years ago, 00:07:16.120 |
where the MIT robot comes out of the car and face plants, 00:07:38.320 |
You can look at GPT-3 that can spout off English texts, 00:07:44.840 |
which sometimes really, really blows you away. 00:07:49.440 |
You can look at Google DeepMind's MuZero, 00:07:53.480 |
which doesn't just kick our butt in Go and Chess and Shogi, 00:08:00.360 |
and you don't even have to teach it the rules now. 00:08:10.000 |
And that's fine if it's just some dancing robots, 00:08:13.760 |
and the worst thing that can happen is they face plant, 00:08:19.680 |
is that they make a bad move and lose the game. 00:08:25.400 |
your self-driving car or your nuclear power plant. 00:08:28.720 |
And we've seen already that even though Hollywood 00:08:33.680 |
had all these movies where they try to make us worry 00:08:35.720 |
about the wrong things, like machines turning evil, 00:08:39.160 |
the actual bad things that have happened with automation 00:09:00.400 |
Was it that that little simple system was evil? 00:09:03.920 |
But we didn't understand it as well as we should have. 00:09:12.400 |
- We didn't even understand that we didn't understand. 00:09:15.680 |
The humility is really at the core of being a scientist. 00:09:19.800 |
I think step one, if you wanna be a scientist, 00:09:24.160 |
you understand things when you actually don't. 00:09:27.800 |
- That's probably good advice for humans in general. 00:09:29.480 |
- I think humility in general can do us good. 00:09:33.080 |
Like why did we have the wrong theory of gravity 00:09:35.880 |
ever from Aristotle onward until Galileo's time? 00:09:47.280 |
until it realizes that its natural motion is down. 00:09:51.040 |
- Because people just kind of assumed Aristotle was right, 00:10:01.880 |
Why did we believe that time flows at the same rate 00:10:20.560 |
There was something to be discovered about the 737 MAX. 00:10:26.520 |
and tested it better, we would have found it. 00:10:30.640 |
that's been done by automation so far, I would say. 00:10:33.720 |
So I don't know if you, did you hear of a company 00:10:38.040 |
- So good, that means you didn't invest in them earlier. 00:10:46.960 |
They didn't understand it as well as they thought. 00:10:49.480 |
And it went on losing about 10 million bucks per minute 00:11:03.480 |
Something they didn't fully understand, right? 00:11:11.000 |
which is quite rare still, but in factory accidents, 00:11:19.120 |
that a human is different from an auto part or whatever. 00:11:22.200 |
So this is where I think there's so much opportunity 00:11:33.680 |
And if you look at all these systems that we talked about, 00:11:37.040 |
from reinforcement learning systems and dancing robots 00:11:42.040 |
to all these neural networks that power GPT-3 00:11:55.880 |
you have no idea how their brain works, right? 00:11:58.080 |
Except the human brain at least has been error corrected 00:12:04.440 |
in a way that some of these systems have not, right? 00:12:15.960 |
- That's a good line, intelligible intelligence. 00:12:18.440 |
- Yeah, that we shouldn't settle for something 00:12:20.360 |
that seems intelligent, but it should be intelligible 00:12:23.080 |
so that we actually trust it because we understand it, right? 00:12:35.160 |
can I tell you why I'm optimistic about this? 00:12:46.520 |
and we're just gonna have to learn to live with this. 00:12:55.880 |
building their own, it's super simple what happens inside. 00:13:01.280 |
and then you do a bunch of operations on them, 00:13:06.960 |
and some other numbers come out, that's the output of it. 00:13:09.920 |
And then there are a bunch of knobs you can tune. 00:13:13.600 |
And when you change them, it affects the computation, 00:13:18.120 |
And then you just give the computer some definition of good 00:13:24.720 |
And often you go like, wow, that's really good. 00:13:46.840 |
Many of my colleagues seem willing to settle for that. 00:13:48.920 |
And I'm like, no, that's like the halfway point. 00:13:51.720 |
Some have even gone as far as sort of guessing 00:14:17.960 |
the output changes only smoothly if you tweak your knobs. 00:14:23.880 |
And then you can use all these powerful methods 00:14:31.680 |
That's the fundamental idea of machine learning, 00:14:37.240 |
Suppose you wrote this algorithm instead in Python 00:14:45.600 |
they just changed random letters in your code. 00:14:51.440 |
You change one thing and instead of saying print, 00:15:05.240 |
- And just to clarify, the changing of the different letters 00:15:07.440 |
in a program would not be a differentiable process. 00:15:10.640 |
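(As an illustrative aside, not part of the conversation: below is a minimal sketch of the "knobs" idea just described, assuming a toy one-parameter model and made-up data. The point is that the loss changes smoothly as the knob moves, so gradient descent can tune it, which is exactly what random edits to program text would not allow.)

```python
# Illustrative toy (invented for this aside): one "knob" w, tuned by gradient descent.
# The key property is that the loss varies smoothly with w, so we can follow the slope.

# Toy data: targets y = 3 * x, so the "right" knob setting is w = 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0      # initial knob setting
lr = 0.01    # learning rate: how big a nudge to take each step

for step in range(1000):
    # Gradient of the mean squared error of the model y_hat = w * x with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad   # nudge the knob downhill; only meaningful because the loss is differentiable in w

print(round(w, 3))   # ~3.0: the knob has been tuned to fit the data
```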
- It would make it an invalid program, typically. 00:15:13.760 |
And then you wouldn't even know if you changed more letters 00:15:29.320 |
- So you don't like the poetry of the mystery 00:15:32.840 |
of neural networks as the source of its power? 00:15:35.120 |
- I generally like poetry, but not in this case. 00:15:39.160 |
It's so misleading and above all, it shortchanges us. 00:15:46.440 |
we can accomplish because, so what we've been doing 00:15:51.440 |
train the mysterious neural network to do something well. 00:15:54.900 |
And then step two, do some additional AI techniques 00:15:59.580 |
to see if we can now transform this black box 00:16:08.540 |
This AI Feynman project that we just published. 00:16:11.600 |
So we took the 100 most famous or complicated equations 00:16:22.600 |
in the first place, the Feynman Lectures on Physics. 00:16:29.480 |
what goes into the formula is six different variables 00:16:36.000 |
So then you can make like a giant Excel spreadsheet 00:16:39.420 |
You put in just random numbers for the six columns 00:16:41.680 |
for those six input variables and then you calculate 00:16:44.340 |
with a formula of the seventh column, the output. 00:16:46.800 |
So maybe it's like the force equals in the last column 00:16:51.680 |
And now the task is, okay, if I don't tell you 00:16:53.860 |
what the formula was, can you figure that out 00:17:00.100 |
- This problem is called symbolic regression. 00:17:03.500 |
If I tell you that the formula is what we call 00:17:06.260 |
a linear formula, so it's just that the output is 00:17:09.020 |
some sum of all the inputs, times some constants, 00:17:18.680 |
We do it all the time in science and engineering. 00:17:21.340 |
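(An illustrative sketch of that easy linear special case, using an invented table like the spreadsheet described above: six columns of random inputs, a seventh column computed from a hidden linear formula, and ordinary least squares recovering the constants. The coefficients are made up for the demo.)

```python
import numpy as np

rng = np.random.default_rng(0)

# The "spreadsheet": 1000 rows, six columns of random input variables.
X = rng.uniform(-1, 1, size=(1000, 6))

# Hidden linear formula that produced the seventh column (coefficients invented for the demo).
true_c = np.array([2.0, -1.0, 0.5, 3.0, 0.0, -2.5])
y = X @ true_c

# If we know the formula is linear, recovering the constants is just least squares.
recovered, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(recovered, 3))   # matches true_c: the easy case of symbolic regression
```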
But the general one, if it's more complicated functions 00:17:26.940 |
it's a very, very hard one and probably impossible 00:17:30.600 |
to do fast in general just because the number of formulas 00:17:37.780 |
Just like the number of passwords you can make 00:17:46.380 |
a neural network that can actually approximate the formula, 00:17:48.980 |
you just trained it, even if you don't understand 00:18:03.220 |
and put in all sorts of other data that wasn't 00:18:05.340 |
in the original training data and use that to discover 00:18:17.420 |
So we were able to solve all of those 100 formulas, 00:18:20.020 |
discover them automatically, plus a whole bunch 00:18:22.980 |
It's actually kind of humbling to see that this code, 00:18:28.020 |
which anyone who wants now, who's listening to this, 00:18:30.260 |
can type pip install aifeynman on their computer 00:18:34.540 |
It can actually do what Johannes Kepler spent four years 00:18:40.180 |
Until he's like, "Finally, Eureka, this is an ellipse!" 00:18:42.980 |
This will do it automatically for you in one hour, right? 00:18:46.900 |
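(Purely as a miniature illustration, not the actual AI Feynman pipeline: one classic way to "rediscover" a law like Newton's F = G m1 m2 / r^2 from a data table is to take logarithms, which turns the unknown exponents into a linear regression problem. The measurement ranges below are invented.)

```python
import numpy as np

rng = np.random.default_rng(1)
G = 6.674e-11

# Fake measurement table: masses m1, m2 and separation r, with F computed from the hidden law.
m1 = rng.uniform(1e3, 1e6, 5000)
m2 = rng.uniform(1e3, 1e6, 5000)
r = rng.uniform(1.0, 1e3, 5000)
F = G * m1 * m2 / r**2

# log F = log G + a*log m1 + b*log m2 + c*log r  ->  linear regression recovers a, b, c (and G).
A = np.column_stack([np.ones_like(F), np.log(m1), np.log(m2), np.log(r)])
coef, *_ = np.linalg.lstsq(A, np.log(F), rcond=None)
logG, a, b, c = coef
print(round(a, 3), round(b, 3), round(c, 3))   # ~1, 1, -2: the exponents of the inverse-square law
print(np.exp(logG))                            # ~6.674e-11
```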
Or Max Planck, he was looking at how much radiation 00:18:51.020 |
comes out at different wavelengths from a hot object 00:18:59.980 |
I'm actually excited about seeing if we can discover 00:19:04.980 |
not just old formulas again, but new formulas 00:19:11.940 |
- I do like this process of using kind of a neural network 00:19:14.660 |
to find some basic insights and then dissecting 00:19:27.260 |
the explainability issue of really trying to analyze 00:19:35.580 |
in order to come up with the final, beautiful, 00:19:44.180 |
And the reason I'm so optimistic that it can be generalized 00:19:48.420 |
to so much more is because that's exactly what we do 00:19:57.060 |
if his dad threw him an apple, he would catch it. 00:20:04.420 |
that he had trained to predict the parabolic orbit 00:20:11.940 |
it also has this same ability of deep learning 00:20:15.380 |
to figure out how the ball is gonna move and catch it. 00:20:18.220 |
But Galileo went one step further when he got older. 00:20:31.660 |
And he helped revolutionize physics as we know it, right? 00:20:36.580 |
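(A tiny, invented example of that extra distillation step: fit a parabola to noisy trajectory points and read off the symbolic coefficients, one of which encodes gravity.)

```python
import numpy as np

rng = np.random.default_rng(2)
g, v0, h0 = 9.81, 12.0, 1.5          # gravity, initial upward speed, initial height (invented values)

t = np.linspace(0, 2, 50)            # "observations" of a thrown apple, with measurement noise
y = h0 + v0 * t - 0.5 * g * t**2 + rng.normal(0, 0.02, t.size)

a2, a1, a0 = np.polyfit(t, y, 2)     # the distilled symbolic form: y = a0 + a1*t + a2*t^2
print(round(-2 * a2, 2))             # ~9.81: the parabola's quadratic term reveals g
print(round(a1, 2), round(a0, 2))    # ~12.0 and ~1.5: the throw speed and initial height
```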
- So there was a basic neural network in there 00:20:43.380 |
of observing different kinds of trajectories. 00:21:00.740 |
what that complicated black box neural network was doing. 00:21:09.100 |
and similarly, this is how Newton got Newton's laws, 00:21:11.860 |
which is why Elon can send rockets to the space station now. 00:21:20.140 |
And it's so simple that we can actually describe it 00:21:24.940 |
We've talked about it just in the context of physics now, 00:21:35.500 |
just like dogs and cats and chipmunks and blue jays, 00:21:41.860 |
But then we humans do this additional step on top of that, 00:21:44.460 |
where we then distill out certain high-level knowledge 00:21:52.900 |
in a symbolic form, in English in this case, right? 00:21:59.240 |
and we believe that we are information processing entities, 00:22:02.900 |
then we should be able to make machine learning 00:22:05.920 |
- Well, do you think the entire thing could be learning? 00:22:10.140 |
Because this dissection process, like for AI Feynman, 00:22:14.140 |
the secondary stage feels like something like reasoning, 00:22:21.300 |
the more basic kind of differentiable learning. 00:22:25.300 |
Do you think the whole thing could be differentiable learning? 00:22:30.500 |
basically neural networks on top of each other? 00:22:35.940 |
- I mean, that's a really interesting question. 00:22:42.940 |
is a bunch of neurons doing their thing, right? 00:22:50.380 |
what algorithms are being used in your brain? 00:22:56.180 |
I think we've gone a little bit backwards historically, 00:22:58.740 |
because we humans first discovered good old-fashioned AI, 00:23:03.100 |
the logic-based AI that we often called GOFAI, 00:23:08.300 |
And then more recently, we did machine learning, 00:23:16.020 |
So we think of machine learning with neural networks 00:23:20.580 |
and the logic-based AI as the old-fashioned thing. 00:23:32.780 |
an eagle has a better vision system than I have, 00:23:37.020 |
and dogs are just as good at catching tennis balls as I am. 00:23:42.460 |
All this stuff which is done by training a neural network 00:23:48.020 |
is something so many of our animal friends can do, 00:23:58.180 |
It's more to do with this logic-based stuff, right, 00:24:01.620 |
where we can extract out information in symbols, 00:24:14.900 |
that could multiply numbers real fast and manipulate symbols, 00:24:25.060 |
and do a lot of this inscrutable black box neural networks. 00:24:37.420 |
So if we ever wanna get artificial general intelligence 00:24:40.980 |
that can do all jobs as well as humans can, right, 00:24:47.180 |
to be able to combine the neural networks with symbolic, 00:24:52.180 |
combine the old AI with the new AI in a good way. 00:24:57.380 |
and there seems to be basically two strategies 00:25:03.680 |
and the other one I find much more encouraging. 00:25:09.740 |
- The one that scares the heebie-jeebies out of me 00:25:13.280 |
ever bigger systems that we still don't understand 00:25:22.340 |
I think it's just such a reckless thing to do. 00:25:24.260 |
And unfortunately, and if we actually succeed as a species 00:25:40.540 |
- So it's that 44-minute money-losing problem 00:25:40.540 |
we have to worry about people using machines. 00:26:06.900 |
They're short of AGI, but powerful enough to do bad things. 00:26:11.800 |
and if anyone is not worried particularly about advanced AI, 00:26:21.060 |
your least favorite leader on the planet right now. 00:26:32.980 |
this incredibly powerful AI under their control 00:26:36.660 |
and can use it to impose their will on the whole planet. 00:27:12.980 |
and in a way where everyone agrees it's kinda good, 00:27:25.700 |
- Oh yeah, I'm sure Hitler thought he was doing good. 00:27:37.880 |
- I think Mao Zedong thought what he was doing 00:27:51.300 |
'cause I told you that there were two different routes 00:27:53.460 |
we could get to artificial general intelligence, 00:28:06.180 |
That, I think, is the most unsafe and reckless approach. 00:28:23.060 |
like for the first step to get the intuition, 00:28:27.060 |
but then we're gonna spend also serious resources 00:28:30.660 |
on other AI techniques for demystifying this black box 00:28:41.040 |
but that we actually understand what it's doing. 00:28:45.980 |
that this car here will never be hacked when it's driving, 00:28:58.140 |
but it works well in certain other kinds of codes. 00:29:01.020 |
That approach, I think, is much more promising. 00:29:05.180 |
That's exactly why I'm working on it, frankly, 00:29:07.180 |
not just because I think it's cool for science, 00:29:09.460 |
but because I think the more we understand these systems, 00:29:24.300 |
about something as complicated as a neural network? 00:29:30.820 |
there has to be a neural network in the end either. 00:29:34.340 |
Like, we discovered that Newton's laws of gravity 00:29:41.500 |
into the navigation system of Elon Musk's rocket anymore. 00:29:48.820 |
or I don't know what language he uses exactly. 00:29:57.860 |
has done a lot of really great research on this, 00:30:03.800 |
they don't just go fire at random or malfunction, right? 00:30:07.600 |
And there's even a whole operating system called seL4 00:30:07.600 |
- One day, I hope that will be something you can say 00:30:22.680 |
about the OS that's running on our laptops too. 00:30:34.220 |
to help do the proofs and so on as well, right, 00:30:36.380 |
then it's much easier to verify that a proof is correct 00:30:40.140 |
than to come up with a proof in the first place. 00:31:06.060 |
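(For a concrete flavor of machine-checked proof, here is a deliberately tiny example in Lean 4; verifying something like the seL4 kernel is vastly larger, but the principle is the same: the proof checker mechanically accepts or rejects the proof, and checking is easy even when finding the proof was hard.)

```lean
-- A trivially small machine-checked theorem: addition of natural numbers commutes.
-- Lean's kernel verifies the proof term automatically when the file is compiled.
theorem add_commutes (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```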
- Yeah, although some of those proofs are pretty complicated, 00:31:14.500 |
You know, we kinda, even with the security of systems, 00:31:17.520 |
there's a kinda cynicism that pervades people 00:31:21.780 |
who think about this, which is like, oh, it's hopeless. 00:31:26.040 |
exactly like you're saying with neural networks, 00:31:27.920 |
oh, it's hopeless to understand what's happening. 00:31:32.080 |
well, there's always going to be attack vectors, 00:31:46.440 |
and it's not out of the realm of possibility. 00:31:49.600 |
Just like people didn't understand the movement 00:31:54.580 |
- It's entirely possible that within, hopefully soon, 00:32:12.340 |
I think, of course, if you're selling computers 00:32:15.300 |
that get hacked a lot, that's in your interest as a company 00:32:17.340 |
that people think it's impossible to make it safe, 00:32:19.400 |
so nobody's going to get the idea of suing you. 00:32:34.900 |
You don't need the music player to be super safe 00:32:42.200 |
If someone hacks it and starts playing music you don't like, 00:32:49.600 |
and say the drive computer that controls your safety 00:32:53.120 |
must be completely physically decoupled entirely 00:32:57.640 |
and it must physically be such that it can't take on 00:33:03.240 |
It can have, ultimately, some operating system on it 00:33:12.320 |
that it's always going to do what it's supposed to do. 00:33:22.180 |
and say what are the few systems in our company 00:33:31.840 |
And then they can save money by going for the El Cheapo, 00:33:43.200 |
that there'll be unintentional failures, I think, 00:33:46.320 |
there are two quite separate risks here, right? 00:33:49.600 |
which is that the goals are noble of the human. 00:33:52.640 |
The human says, "I want this airplane to not crash 00:34:08.320 |
If you set that aside, there's also the separate question. 00:34:13.360 |
How do you make sure that the goals of the pilot 00:34:17.400 |
are actually aligned with the goals of the passenger? 00:34:28.080 |
that the goals are aligned here, the alignment problem. 00:34:44.920 |
so we could launch the first research program 00:35:11.800 |
There was this depressed pilot named Andreas Lubitz, 00:35:19.040 |
He just told the computer to change the altitude 00:35:26.560 |
And it had the freaking topographical map of the Alps 00:35:35.560 |
no, we never want airplanes to fly into mountains 00:35:41.980 |
And so we have to think beyond just the technical issues 00:35:46.980 |
and think about how do we align in general incentives 00:35:59.140 |
should be taught whatever kindergarten ethics 00:36:08.260 |
then go on autopilot mode, send an email to the cops 00:36:12.980 |
and land at the nearest airport. 00:36:12.980 |
should just be programmed by the manufacturer 00:36:22.180 |
so that it will never accelerate into a human ever. 00:36:36.540 |
oh, you know, US and China, different views on, 00:36:45.460 |
They just hadn't thought to do the alignment. 00:36:50.980 |
the vast majority have to do with poor alignment. 00:36:55.020 |
I mean, think about, let's go back really big 00:37:01.220 |
- Yeah, so long ago in evolution, we had these genes 00:37:05.100 |
and they wanted to make copies of themselves. 00:37:09.380 |
So some genes said, hey, I'm gonna build a brain 00:37:21.820 |
to get copied more, to align your brain's incentives 00:37:33.260 |
and it wanted you to make copies of the genes. 00:37:45.340 |
So that was successful value alignment done on the genes. 00:37:49.140 |
They created something more intelligent than themselves, 00:37:51.620 |
but they made sure to try to align the values. 00:38:01.700 |
hey, yeah, we really like this business about sex 00:38:10.260 |
So we're gonna hack the genes and use birth control. 00:38:14.700 |
And I really feel like drinking a Coca-Cola right now, 00:38:26.980 |
how we can actually subvert their intentions. 00:38:34.860 |
creating other non-human entities with a lot of power, 00:39:02.900 |
would do things that were good for us and not bad for us, 00:39:05.980 |
we created institutions to keep them in check. 00:39:08.300 |
Like if the local supermarket sells poisonous food, 00:39:17.540 |
have to spend some years reflecting behind bars, right? 00:39:29.460 |
birth control, if you're a powerful corporation, 00:39:31.860 |
you also have an incentive to try to hack the institutions 00:39:46.020 |
So if they can figure out a way of bribing regulators, 00:39:54.420 |
and made laws against corruption and bribery. 00:39:58.580 |
Then in the late 1800s, Teddy Roosevelt realized that, 00:40:07.280 |
had like a bigger budget than the state of Massachusetts 00:40:10.140 |
and they were doing a lot of very corrupt stuff. 00:40:15.500 |
to try to align these other non-human entities, 00:40:18.460 |
the companies, again, more with the incentives 00:40:23.040 |
It's not surprising though that this is a battle 00:40:27.180 |
Now we have even larger companies than we ever had before. 00:40:39.640 |
I think people make a mistake of getting all too, 00:40:43.100 |
black and white, thinking about things in terms of good and evil, 00:40:47.940 |
like arguing about whether corporations are good or evil 00:40:57.020 |
It's a tool and you can use it for great things 00:41:05.140 |
And if you have good incentives to the corporation, 00:41:12.680 |
then it's gonna start maybe marketing addictive drugs 00:41:16.780 |
to people and you'll have an opioid epidemic, right? 00:41:19.440 |
It's all about, we should not make the mistake 00:41:27.380 |
good, evil thing about corporations or robots. 00:41:30.500 |
We should focus on putting the right incentives in place. 00:41:33.420 |
My optimistic vision is that if we can do that, 00:41:38.460 |
We're not doing so great with that right now, 00:41:40.580 |
either on AI, I think, or on other intelligent, 00:41:50.180 |
that's gonna start up now in the Biden administration 00:41:53.620 |
who was an active member of the board of Raytheon, 00:42:16.700 |
maybe we need another Teddy Roosevelt to come along again 00:42:19.540 |
and say, "Hey, we want what's good for all Americans. 00:42:23.460 |
"And we need to go do some serious realigning again 00:42:26.540 |
"of the incentives that we're giving to these big companies. 00:42:35.820 |
just like you beautifully described the history 00:42:37.740 |
of this whole thing, it all started with the genes, 00:42:42.660 |
by all the unintended consequences that happened since. 00:42:56.920 |
a way to realign the values or keep the values aligned. 00:43:03.820 |
Like different leaders, different humans pop up 00:43:10.700 |
Do you want, I mean, do you have an explanation why that is? 00:43:19.620 |
somehow fundamentally different than with AI systems, 00:43:23.100 |
where you're no longer dealing with something 00:43:26.420 |
that was a direct, maybe companies are the same, 00:43:30.180 |
a direct byproduct of the evolutionary process? 00:43:33.340 |
- I think there is one thing which has changed. 00:43:40.280 |
That's why I think there's about a 50% chance 00:43:42.260 |
if we take the dumb route with artificial intelligence 00:43:46.100 |
that humanity will be extinct in this century. 00:44:04.220 |
who was the king because his dad was the king, you know? 00:44:10.580 |
of having this powerful kingdom or empire of any sort, 00:44:15.280 |
because then it could prevent a lot of local squabbles. 00:44:20.780 |
And their incentives of different cities in the kingdom 00:44:27.220 |
Harari, Yuval Noah Harari, has a beautiful piece 00:44:35.340 |
And then we also, Harari says, invented money 00:44:37.620 |
for that reason, so we could have better alignment 00:44:40.660 |
and we could do trade even with people we didn't know. 00:44:47.840 |
What's changed is that it happens on ever larger scales, 00:44:54.760 |
So now we can communicate over larger distances, 00:45:02.960 |
but our planet is not getting bigger anymore. 00:45:05.480 |
So in the past, you could have one experiment 00:45:08.120 |
that just totally screwed up, like Easter Island, 00:45:12.020 |
where they actually managed to have such poor alignment 00:45:17.660 |
there was no one else to come back and replace them, right? 00:45:31.540 |
And that's a mistake I would rather that we don't make 00:45:35.860 |
In the past, of course, history is full of fiascos, right? 00:45:42.200 |
And then, okay, now there's this nice uninhabited land here, 00:45:46.000 |
some other people could move in and organize things better. 00:45:52.720 |
is that technology gives us so much more empowerment, 00:45:57.720 |
right, both to do good things and also to screw up. 00:46:09.940 |
if he wanted to kill as many people as he could, 00:46:12.660 |
how many could he really take out with a rock and a stick 00:46:37.300 |
So the scale of the damage is bigger than we can do. 00:46:41.300 |
there's obviously no law of physics that says 00:46:46.060 |
that technology will never get powerful enough 00:46:57.220 |
And it's not at all unfeasible in our lifetime 00:47:00.260 |
that someone could design a designer pandemic 00:47:14.420 |
here's an intuition, maybe it's completely naive, 00:47:18.980 |
which it seems, and maybe it's a biased experience 00:47:22.860 |
that I have, but it seems like the most brilliant people 00:47:35.860 |
like they really want to do good for the world 00:47:51.020 |
there'll be much more of the ones that are doing good 00:48:00.180 |
us always like last minute coming up with a solution. 00:48:11.660 |
it feels like to me, either leading up to that before 00:48:30.060 |
But could that be a fundamental law of human nature? 00:48:40.900 |
good is beneficial, and therefore we'll be all right. 00:49:03.060 |
And I think, in fact, I think it can be dangerous 00:49:10.780 |
Like if someone tells you, you never have to worry 00:49:13.740 |
then you're not gonna put in a smoke detector 00:49:16.980 |
Even if it's sometimes very simple precautions, 00:49:20.000 |
If you're like, oh, the government is gonna take care 00:49:23.500 |
of everything for us, I can always trust my politicians. 00:49:25.820 |
We can always, we abdicate our own responsibility. 00:49:32.180 |
Maybe I'm actually gonna have to myself step up 00:49:41.420 |
we can develop all this ever more powerful technology 00:49:50.920 |
but like billions of years throughout our universe. 00:50:00.380 |
- Well, I just mean, so you're absolutely right. 00:50:06.420 |
to take responsibility and to build the solutions 00:50:17.300 |
If you or anyone listening to this is completely confident 00:50:24.460 |
on handling any future crisis with engineered pandemics 00:50:40.340 |
around the world has handled this flawlessly? 00:50:43.340 |
- That's a really sad and disappointing reality 00:50:45.780 |
that hopefully is a wake up call for everybody. 00:50:54.780 |
It was disappointing to see how inefficient we were 00:51:04.660 |
in a privacy-preserving way and spreading that data 00:51:10.980 |
- Yeah, I think when something bad happens to me, 00:51:27.780 |
but then I try to focus on what did I learn from this 00:51:31.060 |
that can make me a better person in the future? 00:51:33.100 |
And there's usually something to be learned when I fail. 00:51:43.460 |
And you mentioned there a really good lesson. 00:51:46.420 |
We were not as resilient as we thought we were 00:51:50.540 |
and we were not as prepared maybe as we wish we were. 00:51:54.020 |
You can even see very stark contrast around the planet. 00:51:57.340 |
South Korea, they have over 50 million people. 00:52:10.340 |
Well, the short answer is that they had prepared. 00:52:19.260 |
incredibly quick to get on it with very rapid testing 00:52:30.100 |
They never even had to have the kind of big lockdowns 00:52:36.460 |
it's not that the Koreans are just somehow better people. 00:52:36.460 |
was because they had already had a pretty bad hit 00:52:45.380 |
from the SARS outbreak, which never became a pandemic, 00:52:45.380 |
that rather than just wait for the next pandemic 00:53:06.340 |
or the next problem with AI getting out of control 00:53:09.820 |
or anything else, maybe we should just actually 00:53:16.820 |
to have people very systematically do some horizon scanning 00:53:20.460 |
and say, okay, what are the things that could go wrong? 00:53:23.340 |
And let's duke it out and see which are the more likely ones 00:53:25.820 |
and which are the ones that are actually actionable 00:53:29.820 |
- So one of the observations as one little ant/human 00:53:36.540 |
that I am of disappointment is the political division 00:53:50.860 |
sort of what happened and understanding what happened deeply 00:53:59.020 |
and more about there's different truths out there. 00:54:13.260 |
that doesn't seem to get at any kind of notion of the truth. 00:54:16.540 |
It's not like some kind of scientific process. 00:54:28.660 |
of trying to rethink, you mentioned corporations, 00:54:33.660 |
there's one of the other collective intelligence systems 00:54:37.340 |
that have emerged through all of this is social networks. 00:54:42.580 |
is the spread of information on the internet, 00:54:48.300 |
there's all different kinds of news sources and so on. 00:54:50.620 |
And so you said like that's from first principles, 00:55:06.420 |
I've spent nights and weekends on ever since the lockdown. 00:55:14.540 |
that you had this hope that in your experience, 00:55:21.260 |
Frankly, I feel the same about all people by and large. 00:55:48.420 |
that those Danes in Denmark, they're such terrible people, 00:55:48.420 |
because they've done all these terrible things 00:56:00.740 |
And we're seeing so much of this today in the world, 00:56:11.700 |
that China is bad and Russia is bad and Venezuela is bad, 00:56:22.980 |
"Oh, those who voted for the other party are bad people." 00:56:40.300 |
I think it's pretty obvious that it has, again, 00:56:50.620 |
where you might know 30 people in total, right? 00:56:56.300 |
for assessing who you could trust and who you could not. 00:56:58.540 |
And if someone told you that Joe there is a jerk, 00:57:09.100 |
that that's actually not quite accurate, right? 00:57:19.460 |
So, okay, so where does the news project come in? 00:57:23.180 |
Well, throughout history, you can go read Machiavelli 00:57:26.580 |
from the 1400s and you'll see how already then 00:57:41.260 |
What's new is machine learning meets propaganda. 00:57:49.100 |
Some people like to blame certain individuals, 00:57:54.220 |
many people blame Donald Trump and say it was his fault. 00:57:59.220 |
I think Donald Trump just had this extreme skill 00:58:04.860 |
at playing this game in the machine learning algorithm age, 00:58:16.020 |
and other companies, and I'm not badmouthing them, 00:58:19.420 |
I have a lot of friends who work for these companies, 00:58:21.660 |
good people, they deployed machine learning algorithms 00:58:27.100 |
to just maximize the time people spent watching ads. 00:58:38.940 |
They just noticed, oh, we're getting more ad revenue, great. 00:58:42.220 |
It took a long time until they even realized why and how 00:58:48.140 |
'Cause of course, what the machine learning figured out was 00:58:51.420 |
that the by far most effective way of gluing you 00:58:54.500 |
to your little rectangle was to show you things 00:58:57.420 |
that triggered strong emotions, anger, et cetera, resentment. 00:59:01.780 |
And if it was true or not, didn't really matter. 00:59:07.700 |
It was also easier to find stories that weren't true. 00:59:10.580 |
If you weren't limited to true stories, that was just one less limitation. 00:59:15.340 |
- And before long, we got these amazing filter bubbles 00:59:30.780 |
There's less than half as many journalists now in America, 00:59:39.300 |
You just couldn't compete with the online advertising. 00:59:42.780 |
So all of a sudden, most people are not getting, 00:59:51.300 |
And most people only get news in their little bubble. 00:59:54.980 |
So along comes now some people like Donald Trump 00:59:58.460 |
who figured out, among the first successful politicians 01:00:01.580 |
to figure out how to really play this new game 01:00:11.020 |
He didn't create, the fundamental conditions were created 01:00:15.020 |
by machine learning taking over the news media. 01:00:19.020 |
So this is what motivated my little COVID project here. 01:00:22.940 |
So I said before, machine learning and tech in general, 01:00:36.020 |
was mainly used by the big players, big tech, 01:00:39.700 |
to manipulate people into watching as many ads as possible, 01:00:49.500 |
So I thought, well, machine learning algorithms 01:00:54.260 |
They can run on your smartphone for free also 01:00:57.880 |
There's no reason why they only have to help the big guy 01:01:25.900 |
And then if you just slide the left-right slider 01:01:33.700 |
- Yeah, there's the one, the most obvious one 01:01:36.280 |
is the one that has left-right labeled on it. 01:01:40.580 |
you go to the right, you see a very different truth 01:01:44.220 |
- Oh, that's literally left and right on the-- 01:01:48.500 |
- Yeah, so if you're reading about immigration, 01:01:59.260 |
is just to be able to recognize the techniques people use. 01:02:08.200 |
I think many people are under the misconception 01:02:23.920 |
And yes, of course, sometimes there's fake news 01:02:26.640 |
where someone just claims something that's false, right? 01:02:30.800 |
Like, oh, Hillary Clinton just got divorced or something. 01:02:33.840 |
But what we see much more of is actually just omissions. 01:02:41.700 |
which just won't be mentioned by the left or the right 01:02:47.420 |
And then they'll mention other ones very, very, very much. 01:02:50.680 |
So for example, we've had a number of stories 01:03:02.540 |
about the Biden family's, Hunter Biden's financial dealings. 01:03:06.240 |
Surprise, surprise, they don't get equal coverage 01:03:18.940 |
But the great news is if you're a normal American citizen 01:03:25.000 |
then slide, slide, you can just look at both sides 01:03:29.420 |
and you'll see all those political corruption stories. 01:03:33.340 |
It's really liberating to just take in both sides, 01:03:40.480 |
It somehow unlocks your mind to think on your own, 01:03:53.880 |
was much more aware that they're surrounded by propaganda. 01:03:58.080 |
- That is so interesting what you're saying, actually. 01:04:01.560 |
So Noam Chomsky, who used to be our MIT colleague, 01:04:15.060 |
if you have a really totalitarian government, 01:04:18.820 |
People will do what you want them to do anyway 01:04:37.520 |
When I talk to colleagues, science colleagues, 01:04:52.520 |
That means the propaganda in the Western media 01:05:07.420 |
you realize there's also something very optimistic there 01:05:22.780 |
a much more accurate idea of what's actually going on, right? 01:05:35.540 |
sometimes I feel that some of us in the academic bubble 01:05:38.540 |
are too arrogant about this and somehow think, 01:05:48.260 |
we read only our media and don't see through things. 01:05:52.140 |
Anyone who looks at both sides like this in comparison, 01:06:05.820 |
tried to blame the individual for being manipulated, 01:06:09.020 |
much like big tobacco tried to blame the individuals 01:06:13.740 |
And then later on, our government stepped up and said, 01:06:24.660 |
It's very convenient for big tech to blame. 01:06:27.620 |
So it's just people who are so dumb and get fooled. 01:06:36.000 |
People just wanna hear what they already believe. 01:06:38.360 |
But Professor David Rand at MIT actually partly debunked that 01:06:52.700 |
Suppose, for example, that you have a company 01:07:00.340 |
And someone says, you know, Lex, I hate to tell you this, 01:07:06.700 |
Would you be like, shut up, I don't wanna hear it. 01:07:19.060 |
and the guy next to you is clearly from the opposite side 01:07:22.900 |
of the political spectrum, but is very respectful 01:07:26.780 |
and polite to you, wouldn't you be kind of interested 01:07:29.040 |
to hear a bit about how he or she thinks about things? 01:07:36.380 |
respectful disagreement now because, for example, 01:08:01.100 |
by them having put a deliberately ugly picture 01:08:03.180 |
of Donald Trump on the front page or something, 01:08:06.580 |
So this news aggregator also has a nuance slider, 01:08:13.300 |
to make it easier to get exposed to actually more 01:08:25.420 |
people are mostly aware of is the left-right, 01:08:29.180 |
because both left and right are very powerful here. 01:08:33.420 |
Both of them have well-funded TV stations and newspapers, 01:08:38.380 |
But there's another one, the establishment slider, 01:08:59.500 |
that's what you're gonna read in both the big media, 01:09:01.860 |
mainstream media on the left and on the right, of course. 01:09:04.620 |
And powerful companies can push back very hard, 01:09:08.220 |
like tobacco companies pushed back very hard back in the day 01:09:10.820 |
when some newspapers started writing articles 01:09:15.380 |
So it was hard to get a lot of coverage about it initially. 01:09:20.860 |
Of course, in any country, when you read their media, 01:09:23.100 |
you're mainly gonna be reading a lot of articles 01:09:27.380 |
and the other countries are the bad guys, right? 01:09:30.380 |
So if you wanna have a really more nuanced understanding, 01:09:50.340 |
On the geopolitical scale, it's just as much as ever, 01:09:54.260 |
you know, big Cold War now, US, China, and so on. 01:09:57.620 |
And if you wanna get a more nuanced understanding 01:10:03.500 |
then it's really fun to look at this establishment slider, 01:10:05.940 |
because it turns out there are tons of little newspapers, 01:10:11.340 |
who sometimes challenge establishment and say, 01:10:14.460 |
you know, maybe we shouldn't actually invade Iraq right now. 01:10:17.780 |
Maybe this weapons of mass destruction thing is BS. 01:10:20.380 |
If you look at the journalism research afterwards, 01:10:36.200 |
this evidence seems a bit sketchy, and maybe we, 01:10:42.260 |
Most people didn't even know they existed, right? 01:10:44.580 |
Yet it would have been better for American national security 01:10:50.140 |
I think it harmed America's national security, actually, 01:10:55.540 |
in that kind of thinking, too, from those small sources. 01:11:02.620 |
it's more about kind of the reach of the broadcast, 01:11:16.220 |
or like skepticism towards out-of-the-box thinking. 01:11:20.380 |
There's a lot of interest in that kind of thing. 01:11:22.020 |
Do you see this news project or something like it 01:11:39.100 |
You're calling it your little project in 2020, 01:11:43.980 |
but how does that become the new way we consume information? 01:11:48.500 |
- I hope, first of all, just to plant a little seed there, 01:11:51.020 |
because normally, the big barrier of doing anything 01:12:01.100 |
pay a tiny amount of money each month to Amazon 01:12:06.940 |
The point is not to make any money off of it, 01:12:09.340 |
and we just train machine learning algorithms 01:12:14.860 |
So if it actually gets good enough at some point 01:12:28.180 |
I think there's a real opportunity for machine learning 01:12:41.420 |
it's been mostly the other way around so far. 01:12:44.860 |
and then they tell people, "This is the truth. 01:12:49.660 |
but it can just as well go the other way around. 01:12:54.980 |
that maybe this will be a great thing for democracy, 01:12:59.620 |
and maybe machine learning and things like this 01:13:03.780 |
And I have to say, I think it's more important than ever now 01:13:07.140 |
because this is very linked also to the whole future of life 01:13:16.060 |
Frankly, it's pretty clear if you look on the one 01:13:19.980 |
or two generation, three generation timescale 01:13:21.940 |
that there are only two ways this can end geopolitically. 01:13:42.300 |
when the weapons just keep getting ever more powerful 01:13:55.420 |
but the fact of the matter is what's good for America 01:14:03.660 |
It would be if this was some sort of zero-sum game, 01:14:10.060 |
when the only way one country could get more resources 01:14:18.940 |
Some countries kept getting bigger and others smaller, 01:14:18.940 |
So the optimistic outcome is that the big winner 01:14:34.900 |
in this century is gonna be America and China and Russia 01:14:39.100 |
because technology just makes us all healthier and wealthier 01:14:41.820 |
and we just find some way of keeping the peace 01:14:48.620 |
there are some pretty powerful forces right now 01:14:50.500 |
that are pushing in exactly the opposite direction 01:14:56.540 |
that this ever more powerful tech we're building 01:15:05.420 |
- Yeah, even look at just military AI now, right? 01:15:07.860 |
It was so awesome to see these dancing robots. 01:15:13.940 |
But one of the biggest growth areas in robotics now 01:15:19.300 |
And 2020 was like the best marketing year ever 01:15:36.260 |
Oh yeah, we wanna build autonomous weapons too. 01:15:48.420 |
that they bought from China, bombing Libyans. 01:15:52.100 |
And on the other side, you had our other ally, Turkey, 01:15:56.220 |
They had no skin in the game, any of these other countries. 01:16:01.660 |
And of course, it was the Libyans who really got screwed. 01:16:08.020 |
again, Turkey is sending drones built by this company 01:16:19.580 |
So MIT has a direct responsibility for ultimately this, 01:16:24.380 |
And so because it was militarily so effective, 01:16:31.260 |
Oh yeah, yeah, let's go build ever more autonomy 01:16:45.580 |
some sort of future Terminator scenario right now 01:16:48.300 |
should start focusing on the fact that we have 01:16:52.460 |
two much more urgent threats happening from machine learning. 01:16:55.060 |
One of them is the whole destruction of democracy 01:16:59.940 |
where our flow of information is being manipulated 01:17:09.860 |
out-of-control arms race in at least autonomous weapons 01:17:20.700 |
for the race of, for the autonomous weapons race. 01:17:25.460 |
when they proved decisive in the battlefield. 01:17:28.380 |
And these ones are still not fully autonomous, 01:17:45.780 |
a skin color or whatever, and it flies away and does it. 01:17:53.180 |
and all the other superpowers should put the kibosh on this 01:18:15.580 |
"look, we don't want there to be a $500 weapon 01:18:19.340 |
"of mass destruction that all our enemies can afford, 01:18:31.660 |
It's in America's interest that the powerful weapons 01:18:34.300 |
are all really expensive, so only we can afford them, 01:18:37.580 |
or maybe some more stable adversaries, right? 01:18:46.460 |
And that's why you never hear about them now. 01:18:52.900 |
for the big powerhouses in terms of the big nations 01:18:57.900 |
in the world to agree that autonomous weapons 01:19:00.380 |
is not a race we wanna be on, that it doesn't end well. 01:19:05.560 |
in mass proliferation, and every terrorist everywhere 01:19:15.940 |
about being assassinated every time they go outdoors 01:19:21.820 |
And even if the US and China and everyone else 01:19:25.900 |
could just agree that you can only build these weapons 01:19:31.220 |
that would be a huge win for the superpowers, 01:19:43.180 |
But hey, you could say the same about bioweapons. 01:19:49.300 |
Of course they could build some nasty bioweapon 01:19:54.060 |
they don't want to 'cause they think it's disgusting 01:19:58.340 |
even if there's some sort of nutcase and want to, 01:20:02.020 |
it's very likely that some of their grad students 01:20:05.160 |
because everyone else thinks it's so disgusting. 01:20:10.420 |
a fair bit of cheating on the bioweapons ban, 01:20:13.460 |
but no countries used them because it was so stigmatized 01:20:17.460 |
that it just wasn't worth revealing that they had cheated. 01:20:22.340 |
- You talk about drones, but you kind of think 01:20:30.620 |
- But you're not taking the next intellectual step 01:20:36.260 |
You're kind of saying the problem with drones 01:20:38.660 |
is that you're removing yourself from direct violence, 01:20:42.340 |
therefore you're not able to sort of maintain 01:20:51.380 |
if this is automated, and just exactly as you said, 01:20:58.660 |
then the technology's gonna get better and better and better, 01:21:01.280 |
which means getting cheaper and cheaper and cheaper. 01:21:17.620 |
between the tech industry and autonomous weapons 01:21:20.420 |
to where you could have smartphone type of cheapness. 01:21:30.820 |
that's able to maintain flight autonomously for you 01:21:36.260 |
You could see that going into the autonomous weapon space. 01:21:43.260 |
or discussed enough in the public, do you think? 01:21:45.660 |
You see those dancing Boston Dynamics robots, 01:21:55.340 |
They have this fear, like, oh, this'll be Terminator 01:21:58.620 |
in some, I don't know, unspecified 20, 30, 40 years. 01:22:04.380 |
this is some much less dramatic version of that 01:22:11.140 |
It's not gonna be legged, it's not gonna be dancing, 01:22:17.180 |
to use artificial intelligence to kill humans. 01:22:22.900 |
I think the reason we imagine them holding guns 01:22:24.980 |
is just 'cause you've all seen Arnold Schwarzenegger. 01:22:32.700 |
That's not gonna be the main military use of them. 01:22:35.340 |
They might be useful in law enforcement in the future, 01:22:40.260 |
do you want robots showing up at your house with guns 01:23:07.540 |
is basically a fancy little remote-controlled airplane. 01:23:14.420 |
and the decision ultimately about whether to kill somebody 01:23:19.420 |
And this is a line I think we should never cross. 01:23:27.940 |
I think algorithms should never make life or death decisions. 01:23:37.740 |
Well, first of all, these are expensive, right? 01:23:40.540 |
So for example, when Azerbaijan had all these drones 01:23:47.640 |
they started trying to jerry-rig little cheap things, 01:23:51.060 |
fly around, but then of course, the Armenians would jam them 01:24:02.980 |
if we're piloting something from far away, speed of light, 01:24:09.580 |
it would be nice to eliminate that jamming possibility 01:24:12.500 |
and the time delay by having it fully autonomous. 01:24:17.980 |
but now you might be crossing that exact line. 01:24:20.260 |
You might program it to just, oh yeah, the air drone, 01:24:26.160 |
And whenever you find someone who is a bad guy, 01:24:31.760 |
Now the machine is making these sort of decisions. 01:24:36.680 |
well, that's morally fine because we are the good guys 01:24:40.560 |
and we will tell it the definition of bad guy 01:24:52.080 |
that they're gonna use our definition of bad guy. 01:24:58.720 |
Or maybe there will be some weird ethnic group 01:25:03.720 |
who decides that someone of another ethnic group, 01:25:10.240 |
The thing is, human soldiers, with all our faults, 01:25:17.080 |
Like, no, it's not okay to kill kids and civilians. 01:25:26.640 |
It's like the perfect Adolf Eichmann on steroids. 01:25:40.720 |
Do we really wanna make machines that are like that? 01:25:43.800 |
Like completely amoral and will take the user's definition 01:25:55.460 |
That's, I think, the big argument for why we wanna, 01:26:03.520 |
And I think you can tell there's a lot of very active debate 01:26:10.160 |
and undoubtedly in other militaries around the world also, 01:26:21.900 |
so that things just don't totally spiral out of control. 01:26:33.560 |
'Cause some people tell me, "Oh, just give up." 01:26:35.960 |
- But again, so Matthew Meselson, again, from Harvard, 01:26:43.580 |
he had exactly this criticism also with bioweapons. 01:26:47.760 |
People were like, "How can you check for sure 01:26:51.720 |
And he told me this, I think, really ingenious insight. 01:27:14.100 |
Because if it's an enemy, if it's another big state, 01:27:19.100 |
like suppose China and the US have signed the treaty, 01:27:28.900 |
Now suppose you are China and you have cheated 01:27:33.900 |
and secretly developed some clandestine little thing, 01:27:39.220 |
Well, you're like, "Okay, what's the probability 01:27:43.400 |
If the probability is 100%, of course we're not gonna do it. 01:27:48.980 |
But if the probability is 5% that we're gonna get caught, 01:27:52.620 |
then it's gonna be a huge embarrassment for us. 01:28:00.060 |
so it doesn't really make an enormous difference 01:28:06.520 |
- And that feeds the stigma that you kind of establish, 01:28:11.580 |
like this fabric, this universal stigma over the thing. 01:28:14.660 |
- Exactly, it's very reasonable for them to say, 01:28:16.580 |
"Well, we probably get away with it, but if we don't, 01:28:21.780 |
and then they're gonna go full tilt with their program 01:28:25.020 |
and now we have all these weapons against us, 01:28:32.100 |
And again, look what happened with bioweapons. 01:28:36.940 |
When was the last time you read about a bioterrorism attack? 01:28:40.180 |
The only deaths I really know about with bioweapons 01:28:42.700 |
that have happened, when we Americans managed to kill 01:28:46.980 |
you know, the idiot who sent them to Tom Daschle 01:28:50.900 |
And similarly, in Sverdlovsk in the Soviet Union, 01:29:00.020 |
and it leaked out and killed a bunch of Russians. 01:29:04.460 |
50 years, just two own goals by the superpowers, 01:29:12.060 |
what they think about biology, they think it's great. 01:29:19.780 |
This is how I want to think about AI in the future. 01:29:24.900 |
as a source of all these great solutions to our problems, 01:29:34.660 |
- Yeah, it's kind of brilliant that the bioweapons 01:29:40.820 |
I mean, of course, they're still a huge source of danger, 01:29:43.380 |
but we figured out some way of creating rules 01:29:54.620 |
whatever that game theoretic stability is, of course. 01:29:59.220 |
And you're kind of screaming from the top of the mountain 01:30:09.620 |
with the Future of Life Institute Awards. You've pointed out 01:30:12.220 |
that with nuclear weapons, 01:30:17.220 |
we could have destroyed ourselves quite a few times. 01:30:21.260 |
And it's a learning experience that is very costly. 01:30:31.100 |
we gave it the first time to this guy, Vasily Arkhipov. 01:30:34.860 |
He was on, most people haven't even heard of him. 01:30:41.980 |
Has, in my opinion, made the greatest positive contribution 01:30:52.100 |
like I'm just over the top, but let me tell you the story 01:31:14.140 |
But we didn't know that this nuclear submarine 01:31:17.900 |
actually was a nuclear submarine with a nuclear torpedo. 01:31:20.780 |
We also didn't know that they had authorization 01:31:34.540 |
The temperature was about 110, 120 Fahrenheit on board. 01:31:43.500 |
And at that point, these giant explosions start happening 01:31:53.940 |
And one of them shouted, "We're all gonna die, 01:32:00.340 |
if there had been a giant mushroom cloud all of a sudden 01:32:05.020 |
but since everybody had their hands on the triggers, 01:32:10.700 |
to think that it could have led to an all-out nuclear war, 01:32:15.900 |
What actually took place was they needed three people 01:32:22.420 |
There was the Communist Party political officer. 01:32:26.220 |
And the third man was this guy, Vasily Arkhipov, 01:32:30.100 |
For some reason, he was just more chill than the others, 01:32:36.100 |
I don't want us as a species rely on the right person 01:32:41.540 |
We tracked down his family living in relative poverty 01:32:55.900 |
It was incredibly moving to get to honor them for this. 01:32:59.140 |
The next year, we gave this Future of Life Award to Stanislav Petrov. 01:33:05.600 |
- So he was in charge of the Soviet early warning station, 01:33:14.500 |
It said that there were five US missiles coming in. 01:33:21.140 |
we probably wouldn't be having this conversation. 01:33:23.180 |
He decided, based on just mainly gut instinct, 01:33:32.500 |
And I'm very glad he wasn't replaced by an AI 01:33:35.060 |
that was just automatically following orders. 01:33:37.500 |
And then we gave the third one to Matthew Meselson. 01:33:46.580 |
not avoiding something bad, but for something good. 01:34:01.140 |
COVID, on average, kills less than 1% of people who get it. 01:34:08.180 |
And ultimately, Viktor Zhdanov and Bill Foege, 01:34:13.180 |
most of my colleagues have never heard of either of them, 01:34:17.500 |
one American, one Russian, they did this amazing effort. 01:34:22.020 |
Not only was Zhdanov able to get the US and the Soviet Union 01:34:25.220 |
to team up against smallpox during the Cold War, 01:34:27.980 |
but Bill Foege came up with this ingenious strategy 01:34:30.340 |
for making it actually go all the way to defeat the disease 01:34:37.620 |
And as a result, we went from 15 million deaths 01:34:48.140 |
- To zero deaths, of course, this year and forever. 01:34:51.940 |
There have been 200 million people, they estimate, 01:35:01.620 |
And the reason we wanna celebrate these sort of people 01:35:05.700 |
Science is so awesome when you use it for good. 01:35:10.140 |
- And those awards actually, the variety there, 01:35:19.340 |
it's kind of exciting to think that these average humans, 01:35:26.180 |
of other humans that came before them, evolution, 01:35:35.300 |
that stopped the annihilation of the human race. 01:35:49.820 |
which is to build solution to the existential crises 01:36:07.020 |
- Yeah, and the best is when they work together. 01:36:08.660 |
Arkhipov, I wish I could have met him, of course, 01:36:16.740 |
combining all the best traits that we in America admire 01:36:23.180 |
He never even told anyone about this during his whole life, 01:36:26.100 |
even though you think he had some bragging rights, right? 01:36:34.300 |
And second, the reason he did the right thing 01:36:47.340 |
It was partly because he had been the captain 01:36:49.780 |
on another submarine that had a nuclear reactor meltdown. 01:36:53.060 |
And it was his heroism that helped contain this. 01:37:01.460 |
And I think for him, that gave him this gut feeling 01:37:06.940 |
and the Soviet Union, the whole world is gonna go through 01:37:11.060 |
what I saw my dear crew members suffer through. 01:37:17.660 |
And second, though, not just the gut, the mind, right? 01:37:20.620 |
He was, for some reason, just a very level-headed personality 01:37:23.940 |
and a very smart guy, which is exactly what we want 01:37:30.100 |
I'll never forget Neil Armstrong when he's landing on the moon 01:37:34.540 |
And he doesn't even change, when they say 30 seconds, 01:37:37.420 |
he doesn't even change the tone of voice, just keeps going. 01:37:47.420 |
I don't think the Americans are trying to sink us. 01:38:02.700 |
and he said, listen, listen, it's alternating. 01:38:06.900 |
One loud explosion on the left, one on the right. 01:38:14.260 |
And he's like, I think this is them trying to send us 01:38:25.740 |
And somehow, this is how he then managed to ultimately, 01:38:40.140 |
And yeah, so this is some of the best in humanity. 01:38:44.220 |
I guess coming back to what we talked about earlier, 01:38:45.820 |
is the combination of the neural network, the instinctive, 01:38:48.580 |
with, I'm tearing up here, I'm getting emotional. 01:38:51.660 |
But he was just, he is one of my superheroes. 01:39:00.460 |
- Especially in that time, there's something about the, 01:39:05.380 |
people are used to this kind of idea of being the individual, 01:39:11.180 |
I think in the Soviet Union, under communism, 01:39:25.900 |
- Yeah, there's echoes of that with Chernobyl, 01:39:43.900 |
- But I think we need to think of people like this, 01:40:03.740 |
and almost have a nuclear war by mistake now and then, 01:40:06.660 |
'cause relying on luck is not a good long-term strategy. 01:40:09.620 |
If you keep playing Russian roulette over and over again, 01:40:16.720 |
of having an accidental nuclear war every year, 01:40:27.900 |
and there's a lot of very concrete things we can do 01:40:36.580 |
- On the AI front, if we just linger on that for a second. 01:40:41.020 |
So you're friends with, you often talk with Elon Musk, 01:40:54.980 |
Do you have a sense, we've already talked about 01:40:59.740 |
the things we should be worried about with AI. 01:41:01.580 |
Do you have a sense of the shape of his fears in particular 01:41:05.420 |
about AI, of which subset of what we've talked about, 01:41:10.160 |
whether it's creating, it's that direction of creating 01:41:26.820 |
is it the manipulation by big corporations of that 01:41:31.800 |
or individual evil people to use that for destruction 01:41:37.420 |
Do you have a sense of where his thinking is on this? 01:41:42.460 |
yeah, I certainly have a model of how he thinks. 01:41:47.460 |
It's actually very much like the way I think also. 01:41:51.100 |
I just want to push back on when you said evil people. 01:41:54.660 |
I don't think it's a very helpful concept, evil people. 01:42:02.380 |
but they usually do it because they think it's a good thing 01:42:15.540 |
I believe in the fundamental goodness of humanity 01:42:24.300 |
people generally want to do good and be good. 01:42:35.380 |
we'll likely be able to do good in the way that's aligned 01:42:42.180 |
- Yeah, and it's not just the individual people 01:42:53.260 |
but we also would need to align other non-human entities. 01:42:56.300 |
We talked about corporations, there has to be institutions 01:43:00.940 |
and we should make sure that what the countries do 01:43:03.500 |
is actually good for the species as a whole, et cetera. 01:43:15.260 |
which is one of the reasons I like him so much 01:43:29.860 |
not just what's going to happen in the next election cycle, 01:43:32.540 |
but in millennia, millions and billions of years from now. 01:43:36.040 |
And when you look in this more cosmic perspective, 01:43:39.280 |
it's so obvious that we're gazing out into this universe 01:43:46.220 |
with life being an almost imperceptibly tiny perturbation. 01:43:56.460 |
Mars is obviously just the first stop on this cosmic journey. 01:44:01.460 |
And precisely because he thinks more long-term, 01:44:04.960 |
it's much more clear to him than to most people 01:44:09.520 |
that what we do with this Russian roulette thing 01:44:11.300 |
we keep playing with our nukes is a really poor strategy, 01:44:26.620 |
in the sense that he wants an awesome future for humanity. 01:44:30.860 |
He wants it to be us that control the machines 01:44:48.400 |
that has no further say in the matter, right? 01:44:50.220 |
That's not my idea of an inspiring future either. 01:45:04.220 |
Whenever I have a bad day, that's what I think about. 01:45:09.260 |
- It makes me sad that for us individual humans, 01:45:20.060 |
- Yeah, I mean, I think of our universe sometimes 01:45:22.220 |
as an organism that has only begun to wake up a tiny bit. 01:45:25.120 |
Just like the very first little glimmers of consciousness 01:45:30.100 |
you have in the morning when you start coming around. 01:45:59.100 |
I define consciousness as subjective experience, 01:46:09.380 |
So beauty is an experience, meaning is an experience, 01:46:15.820 |
If there was no conscious experience observing these galaxies 01:46:20.260 |
If we do something dumb with advanced AI in the future here 01:46:30.460 |
If there is nothing else with telescopes in our universe, 01:46:33.540 |
then it's kind of game over for beauty and meaning 01:46:38.100 |
And I think that would be just such an opportunity lost, 01:46:49.620 |
for all the dumb media bias reasons we talked about. 01:46:52.420 |
They want to print precisely the things about Elon 01:47:07.700 |
'cause I was in the front row when he gave that talk. 01:47:13.860 |
They had Buzz Aldrin there from the moon landing, 01:47:20.860 |
And he had this amazing Q&A, might've gone for an hour. 01:47:23.940 |
And they talked about rockets and Mars and everything. 01:47:29.620 |
who's actually in my class asked him, "What about AI?" 01:47:35.220 |
and they take this out of context, print it, goes viral. 01:47:39.420 |
- Was it like, with AI, we're summoning the demon, 01:47:59.860 |
that there is so much awesomeness in the future 01:48:07.740 |
I get so pissed off when people try to cast him 01:48:20.500 |
about artificial general intelligence are Luddites, 01:48:26.980 |
you have some of the most outspoken people making warnings 01:48:31.980 |
are people like Professor Stuart Russell from Berkeley 01:48:35.660 |
who's written the best-selling AI textbook, you know. 01:48:38.380 |
So claiming that he's a Luddite who doesn't understand AI, 01:48:44.260 |
the joke is really on the people who said it, 01:48:52.660 |
They think that Elon and Stuart Russell and others 01:48:56.660 |
are worried about the dancing robots picking up an AR-15 01:49:04.340 |
They think they're worried about robots turning evil. 01:49:20.020 |
their goals accomplished, even if they clash with our goals. 01:49:25.900 |
Why did we humans drive the West African black rhino extinct? 01:49:30.900 |
Is it because we're malicious, evil rhinoceros haters? 01:49:41.860 |
So the point is just we don't wanna put ourselves 01:49:51.860 |
if we haven't first figured out how to align the goals. 01:49:55.540 |
I think we could do it if we worked really hard on it 01:49:57.540 |
because I spent a lot of time around intelligent entities 01:50:01.460 |
that were more intelligent than me, my mom and my dad. 01:50:17.820 |
So those click-through optimization algorithms 01:50:26.740 |
with what was good for democracy, it turned out. 01:50:36.180 |
And that's exactly why that's why we should be concerned 01:50:39.980 |
- Do you think it's possible that with systems 01:50:43.900 |
like Neuralink and brain-computer interfaces, 01:50:59.980 |
So one of them is having a symbiosis with AI, 01:51:05.580 |
where we're like stuck together in this weird relationship, 01:51:10.340 |
whether it's biological or in some kind of other way. 01:51:25.940 |
talking to these intelligible, self-doubting AIs, 01:51:33.580 |
like we're self-doubting and full of uncertainty, 01:51:37.620 |
and then have our AI systems that are full of uncertainty, 01:51:46.220 |
I would say that because we don't know for sure 01:51:48.580 |
what if any of our, which of any of our ideas will work, 01:51:56.700 |
any of these things to work and just barge ahead, 01:51:59.860 |
then our species is probably gonna go extinct this century. 01:52:05.540 |
You think this crisis we're facing is a 21st century crisis. 01:52:16.100 |
- On a hard drive somewhere. - On a hard drive somewhere. 01:52:22.300 |
like there'll be future, Future of Life Institute awards 01:52:26.260 |
for people that have done something about AI. 01:52:35.300 |
We just totally wipe ourselves out, you know, like on Easter Island. 01:52:39.900 |
No, there are still 79 years left of it, right? 01:52:44.300 |
Think about how far we've come just in the last 30 years. 01:52:47.700 |
So we can talk more about what might go wrong, 01:52:55.780 |
Is it Neuralink or Russell's approach or whatever? 01:52:59.780 |
I think, you know, when we did the Manhattan Project, 01:53:08.460 |
there were four different techniques for enriching uranium and getting out the uranium-235. 01:53:16.700 |
So, you know, what we did, we tried all four of them. 01:53:19.500 |
Here, I think it's analogous where there's the greatest 01:53:25.940 |
And of course, US national security by implication. 01:53:31.500 |
that's guaranteed to work, but we have a lot of ideas. 01:53:34.660 |
So we should invest pretty heavily in pursuing all of them 01:53:36.860 |
with an open mind and hope that one of them at least works. 01:53:40.540 |
The good news is the century is long, you know, 01:53:57.100 |
It's gonna actually be, it's the most difficult problem 01:54:04.260 |
rather than, you know, begin thinking about it 01:54:05.860 |
the night before some people who've had too much Red Bull 01:54:09.580 |
And we have to, coming back to your question, 01:54:11.860 |
we have to pursue all of these different avenues and see. 01:54:22.140 |
is most likely to destroy itself in this century? 01:54:27.540 |
Yeah, so if the crises, many of the crises we're facing 01:54:33.540 |
are really before us within the next 100 years, 01:54:42.340 |
make known the unknowns and solve those problems 01:54:48.220 |
starting with the biggest existential crisis? 01:55:06.700 |
For AI alignment, we can break it into three sub-problems. 01:55:14.420 |
You want first to make machines understand our goals, 01:55:18.380 |
then adopt our goals, and then retain our goals. 01:55:26.140 |
The problem when Andreas Lubitz told his autopilot to fly into the Alps was that the computer 01:55:34.340 |
didn't even understand anything about his goals. 01:55:58.260 |
oh, it's so hard, we should start with the simple stuff, 01:56:04.100 |
just put in all the goals that we all agree on already. 01:56:07.220 |
And then have a habit of whenever machines get smarter, 01:56:10.500 |
so they can understand one level higher goals, 01:56:16.900 |
The second challenge is getting them to adopt the goals. 01:56:23.220 |
But when you have self-learning systems like children, 01:56:44.020 |
First, they're too dumb to understand what we want, 01:56:47.100 |
And then they have this period of some years, 01:56:50.420 |
when they're both smart enough to understand them, 01:57:08.540 |
The third one is, how do you make sure they keep the goals, 01:57:11.380 |
if they keep learning more and getting smarter? 01:57:14.580 |
Many sci-fi movies are about how you have something 01:57:27.420 |
Now they're just gathering dust in the basement. 01:57:29.820 |
If we create machines that are really on board 01:57:43.460 |
self-improving system retain certain basic goals? 01:57:47.480 |
- That said, a lot of adult people still play with Legos. 01:57:55.500 |
- So not all AI systems have to maintain the goals, right? 01:58:00.220 |
- Yeah, so there's a lot of talented AI researchers now 01:58:08.940 |
Of the billions that go into building AI more powerful, 01:58:21.580 |
but we should greatly accelerate the investment 01:58:25.020 |
And also make sure, this was very embarrassing last year, 01:58:29.380 |
but the NSF decided to give out six of these big institutes. 01:58:33.720 |
We got one of them for AI and science, you asked me about. 01:58:37.100 |
Another one was supposed to be for AI safety research. 01:58:51.220 |
that actually goes into AI safety research also 01:58:58.020 |
And then at the higher level, you asked this question, 01:59:11.080 |
Again, 'cause if you solve only the technical problem, 01:59:15.560 |
- If we can get our machines to just blindly obey 01:59:21.940 |
so we can always trust that it will do what we want, 01:59:26.180 |
that might be great for the owner of the robot, 01:59:28.480 |
but it might not be so great for the rest of humanity 01:59:31.420 |
if that person is your least favorite world leader 01:59:35.740 |
So we also have to take a look at how we apply alignment, 01:59:48.580 |
make sure that the playing field is not rigged 01:59:51.460 |
so that corporations are given the right incentives 02:00:00.980 |
to do things that are both good for their people 02:00:06.860 |
And this is not just something for AI nerds to geek out on. 02:00:10.300 |
This is an interesting challenge for political scientists, 02:00:16.820 |
- So one of the magical things that perhaps makes 02:00:53.060 |
Or is there something fundamental to consciousness 02:00:56.820 |
that is, is there something about consciousness 02:00:59.860 |
that is fundamental to humans and humans only? 02:01:15.980 |
whether the information is processed by carbon atoms 02:01:34.980 |
you said consciousness is information processing. 02:01:37.700 |
So meaning, I think you had a quote of something like, 02:01:42.700 |
it's information knowing itself, that kind of thing. 02:01:47.740 |
- I think consciousness is the way information feels 02:01:53.660 |
We don't know exactly what those complex ways are. 02:01:56.140 |
It's clear that most of the information processing 02:02:07.900 |
even though it's clearly being done by your body. 02:02:12.140 |
When you go jogging, there's a lot of complicated stuff 02:02:25.780 |
just sends an email, hey, I'm gonna keep jogging 02:02:33.220 |
but somehow there is some of the information processing, 02:02:44.140 |
that I hope one day we'll have some equation for 02:02:50.980 |
here there is some consciousness, here there is not. 02:02:53.900 |
Oh, don't boil that lobster because it's feeling pain 02:02:59.860 |
Right now we treat this as sort of just metaphysics, 02:03:02.460 |
but it would be very useful in emergency rooms 02:03:09.740 |
and is conscious or if they are actually just out. 02:03:14.580 |
And in the future, if you build a very, very intelligent 02:03:20.100 |
I think you'd like to know if you should feel guilty 02:03:22.500 |
by shutting it down or if it's just like a zombie 02:03:26.180 |
going through the motions like a fancy tape recorder. 02:03:28.880 |
Once we can make progress on the science of consciousness 02:03:34.060 |
and figure out what is conscious and what isn't, 02:03:38.340 |
then we, assuming we wanna create positive experiences 02:03:43.340 |
and not suffering, we'll probably choose to build 02:03:48.900 |
some machines that are deliberately unconscious 02:03:59.700 |
And maybe we'll choose to create helper robots 02:04:16.900 |
And so there's a place, I think, for everybody 02:04:24.400 |
- But I know for sure that I would, if I had a robot, 02:04:31.060 |
emotional connection with it, I would be very creeped out 02:04:36.820 |
Now, today you can buy a little talking doll for a kid, 02:04:45.340 |
will often think that this is actually conscious 02:04:47.820 |
and even tell real secrets to it that then go on the internet 02:04:52.580 |
I would not wanna be just hacked and tricked like this. 02:04:58.060 |
If I was gonna be developing real emotional connections 02:05:01.580 |
with a robot, I would wanna know that this is actually real. 02:05:15.560 |
after we understand the science of consciousness, 02:05:17.540 |
you're saying we'll be able to come up with tools 02:05:19.780 |
that can measure consciousness and definitively say 02:05:26.060 |
it says it's experiencing. - Yeah, kind of by definition. 02:05:28.300 |
If it is a physical phenomenon, information processing, 02:05:31.540 |
and we know that some information processing is conscious 02:05:34.020 |
and some isn't, well, then there is something there 02:05:35.980 |
to be discovered with the methods of science. 02:05:38.020 |
Giulio Tononi has stuck his neck out the farthest 02:05:41.100 |
and written down some equations for a theory. 02:05:47.060 |
But I applaud that kind of efforts to sort of take this, 02:05:50.460 |
say this is not just something that philosophers 02:06:00.580 |
I think what we would probably choose to do, as I said, 02:06:19.020 |
should not be making a bunch of machines that suffer 02:06:23.580 |
And if at any point someone decides to upload themselves, 02:06:34.580 |
- Suppose he uploads himself into this robo-Ray, 02:06:38.460 |
and it talks like him and acts like him and laughs like him, 02:06:42.100 |
and before he powers off his biological body, 02:06:49.540 |
This robot is not having any subjective experience, right? 02:06:52.760 |
If humanity gets replaced by machine descendants 02:06:59.820 |
which do all these cool things and build spaceships 02:07:05.620 |
and it turns out that they are all unconscious, 02:07:11.420 |
wouldn't that be like the ultimate zombie apocalypse, right? 02:07:18.020 |
- Yeah, I have a sense that there's some kind of, 02:07:22.780 |
we'll understand that there's some kind of continuum, 02:07:28.020 |
And we'll probably understand, just like you said, 02:07:33.940 |
that love is indeed a trick that we play on each other, 02:07:37.780 |
that we humans are, we convince ourselves we're conscious, 02:07:48.140 |
with a philosophical thought here about the love part? 02:07:57.960 |
And then maybe you can go and get depressed about that. 02:08:01.840 |
But I think that would be the wrong conclusion, actually. 02:08:21.120 |
but they just want me to make copies of them. 02:08:24.560 |
the whole enjoyment of food is also a scam like this. 02:08:32.600 |
I love pistachio ice cream, and I can tell you, 02:08:38.240 |
I enjoy pistachio ice cream every bit as much, 02:08:55.840 |
Ultimately, all of my brain is also just something 02:08:58.680 |
the genes built to copy themselves, but so what? 02:09:01.360 |
I'm grateful that, yeah, thanks genes for doing this, 02:09:09.560 |
thank you very much, and not just the pistachio ice cream, 02:09:12.480 |
but also the love I feel for my amazing wife, 02:09:15.480 |
and all the other delights of being conscious. 02:09:20.240 |
Actually, Richard Feynman, I think, said this so well, 02:09:25.080 |
he is also the guy who really got me into physics. 02:09:31.240 |
oh, science kind of just is the party pooper, 02:09:36.280 |
When like, you have a beautiful flower, says the artist, 02:09:39.680 |
and then the scientist is gonna deconstruct that 02:09:49.920 |
and make you not feel guilty about falling in love. 02:09:58.880 |
see that this is a beautiful flower, thank you very much. 02:10:00.960 |
Maybe I can't draw as good a painting as you, 02:10:09.360 |
But in addition to that, Feynman said, as a scientist, 02:10:12.200 |
I see even more beauty that the artist did not see, right? 02:10:16.960 |
Suppose this is a flower on a blossoming apple tree, 02:10:21.080 |
you could say this tree has more beauty in it 02:10:29.040 |
This is one of my favorite Feynman quotes ever. 02:10:33.760 |
and bound it in using the flaming heat of the sun, 02:10:42.760 |
it's really beautiful to think that this is being reversed. 02:10:45.120 |
Now the tree is going, the wood is going back into air, 02:10:48.560 |
and in this flaming, beautiful dance of the fire 02:10:52.520 |
that the artist can see is the flaming light of the sun 02:11:14.840 |
I can understand that there is even more nuance 02:11:20.520 |
At this very visceral level, you can fall in love 02:11:23.680 |
just as much as someone who knows nothing about neuroscience 02:11:26.680 |
but you can also appreciate this even greater beauty in it. 02:11:38.600 |
just a hot blob of plasma expanding? 02:11:50.960 |
and then the electric force bound in electrons 02:11:53.080 |
and made atoms, and then they clustered from gravity, 02:11:55.280 |
and you got planets and stars and this and that, 02:12:01.840 |
and you started getting what went from seeming 02:12:13.280 |
And then this goal-orientedness through evolution 02:12:15.800 |
got ever more sophisticated where you got ever more, 02:12:20.160 |
which is kind of like DeepMind's MuZero on steroids, 02:12:25.160 |
the ultimate self-play is not what DeepMind's AI does against itself; it's all of life playing 02:12:34.480 |
against each other in the game of survival of the fittest. 02:12:38.960 |
Now, when you had really dumb bacteria living 02:12:42.320 |
in a simple environment, there wasn't much incentive 02:12:45.200 |
to get intelligent, but then the life made environment 02:12:49.520 |
more complex, and then there was more incentive 02:12:52.080 |
to get even smarter, and that gave the other organisms 02:12:57.560 |
and then here we are now, just like MuZero learned 02:13:05.560 |
from playing against itself. By just playing against each other, 02:13:08.600 |
All the quarks here on our planet and electrons 02:13:11.360 |
have created giraffes and elephants and humans and love. 02:13:16.360 |
I just find that really beautiful, and to me, 02:13:34.600 |
to the beginning of our conversation a little bit, 02:13:49.280 |
throughout the history of physics of coming up 02:13:58.440 |
and then the ultimate of that would be a theory 02:14:01.120 |
of everything that combines everything together. 02:14:03.680 |
Do you think it's possible that, well, one, we humans, 02:14:07.440 |
but perhaps AI systems will figure out a theory of physics 02:14:33.340 |
So, and gradually, yeah, unless we go extinct 02:14:39.720 |
I think it's very likely that our understanding 02:14:48.080 |
that our technology will no longer be limited 02:14:53.080 |
by human intelligence, but instead be limited 02:15:02.160 |
I think as AI progresses, it'll just be limited 02:15:05.840 |
by the speed of light and other physical limits, 02:15:09.280 |
which will mean it's gonna be just dramatically 02:15:15.320 |
- Do you think it's a fundamentally mathematical pursuit 02:15:22.120 |
that govern our universe from a mathematical perspective? 02:15:33.520 |
Or is there some other more computational ideas, 02:15:43.120 |
It's really interesting to look out at the landscape 02:15:48.000 |
So here you come now with this big new hammer. 02:15:51.480 |
and that's, you know, where are there some nails 02:15:53.400 |
that you can help with here that you can hammer? 02:15:56.640 |
Ultimately, if machine learning gets to the point 02:16:02.840 |
it will be able to help across the whole space of science. 02:16:06.040 |
But maybe we can anchor it by starting a little bit 02:16:08.160 |
right now near term and see how we kind of move forward. 02:16:19.400 |
we are able to collect way more data every hour 02:16:28.760 |
And machine learning is already being used very effectively, 02:16:38.760 |
to detect the ripples in the fabric of space-time 02:16:44.640 |
caused by enormous black holes crashing into each other 02:16:49.920 |
Machine learning is running and taking it right now, 02:16:53.800 |
and it's really helping all these experimental fields. 02:16:57.560 |
There is a separate front of physics, computational physics, 02:17:05.680 |
So we had to do all our computations by hand, right? 02:17:19.920 |
Then we started to get little calculators and computers 02:17:28.880 |
kind of a shift from GOFAI (good old-fashioned AI) computational physics 02:17:40.040 |
What I mean by that is most computational physics 02:17:48.520 |
the intelligence of how to do the computation 02:17:51.200 |
Just as when Garry Kasparov got his posterior kicked by IBM's Deep Blue, where 02:17:56.920 |
humans had programmed in exactly how to play chess. 02:18:09.880 |
which is the best sort of GOFAI chess program. 02:18:13.560 |
By learning, and we're seeing more of that now, 02:18:25.080 |
whose goal is basically to take the periodic table 02:18:28.320 |
and just compute the whole thing from first principles. 02:18:31.080 |
This is not the search for theory of everything. 02:18:34.840 |
We already know the theory that's supposed to produce 02:18:53.720 |
But the math is just too hard for us to solve. 02:18:57.160 |
We have not been able to start with these equations 02:18:59.240 |
and solve them to the extent that we can predict, oh yeah. 02:19:03.960 |
and this is what the spectrum of the carbon atom looks like. 02:19:22.120 |
because you're going down to the subatomic scale, 02:19:29.000 |
that we still haven't been able to calculate things 02:19:31.840 |
as accurately as we measure them in many cases. 02:19:35.000 |
And now machine learning is really revolutionizing this. 02:19:37.560 |
So my colleague, Phiala Shanahan at MIT, for example, 02:19:40.080 |
she's been using this really cool machine learning technique 02:19:52.240 |
by having the AI learn how to do things faster. 02:20:00.720 |
an enormous amount of supercomputer time to do physics 02:20:11.720 |
we want to be able to know what we're seeing. 02:20:15.120 |
And so it's a very simple conceptual problem. 02:20:19.560 |
Newton solved it for classical gravity hundreds of years ago, 02:20:24.720 |
but the two-body problem is still not fully solved. 02:20:30.760 |
because they won't just orbit each other forever anymore, 02:20:33.560 |
two things, they give off gravitational waves, 02:20:43.480 |
as a function of the masses of the two black holes, 02:20:56.160 |
Wouldn't it be great if you can use machine learning 02:21:04.760 |
Now you can use the expensive old GOFAI calculation 02:21:09.360 |
as the truth, and then see if machine learning 02:21:32.320 |
and try to figure out the mass of the proton, 02:21:37.720 |
and try to look at how all the galaxies get formed in there. 02:21:41.360 |
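To make the pattern concrete, here is a minimal sketch of the surrogate idea described above: run the slow, trusted calculation to generate training examples, fit a fast learned model to them, and then use the cheap model in place of the expensive one. The function `expensive_waveform`, its two-mass parameterization, and the network size are illustrative stand-ins, not the actual LIGO, lattice QCD, or cosmology codes.

```python
# Sketch: learn a fast surrogate for an expensive physics calculation.
# 'expensive_waveform' is a hypothetical stand-in for a slow, trusted solver.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_waveform(m1, m2):
    # Placeholder for a costly ground-truth computation (e.g. a numerical
    # relativity run); here just a smooth function of the two masses.
    return np.sin(m1 * m2) + 0.1 * (m1 + m2) ** 2

rng = np.random.default_rng(0)
masses = rng.uniform(1.0, 3.0, size=(500, 2))            # sampled input parameters
truth = np.array([expensive_waveform(a, b) for a, b in masses])

# Fit a small neural network to imitate the expensive solver.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(masses, truth)

# Once trained, predictions are nearly instantaneous compared with the solver.
test_masses = rng.uniform(1.0, 3.0, size=(5, 2))
print(surrogate.predict(test_masses))
```

The design choice is the one described in the conversation: the old, slow computation stays around as the source of truth, and the learned model is only trusted where it reproduces it.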
There again, there are a lot of very cool ideas right now 02:21:51.600 |
is you kind of make the data yourself, right? 02:22:00.240 |
and seeing what can we hammer with machine learning, right? 02:22:02.160 |
So we talked about experimental data, big data, 02:22:09.520 |
Then we talked about taking the expensive computations 02:22:13.440 |
we're doing now and figuring out how to do them 02:22:28.800 |
This is something closest to what I've been doing 02:22:33.840 |
We talked earlier about the whole AI Feynman project 02:22:51.160 |
about if this is sort of a search problem in some sense. 02:22:54.880 |
That's very deep actually what you said, because it is. 02:22:57.720 |
Suppose I ask you to prove some mathematical theorem. 02:23:05.520 |
logical steps that you can write out with symbols. 02:23:08.720 |
And once you find it, it's very easy to write a program 02:23:16.880 |
Well, because there are ridiculously many possible 02:23:19.760 |
candidate proofs you could write down, right? 02:23:29.960 |
that's 10 to the power of 1,000 possible proofs, 02:23:34.200 |
which is way more than there are atoms in our universe. 02:23:36.800 |
So you could say it's trivial to prove these things. 02:23:39.200 |
You just write a computer program to generate all strings, 02:23:55.120 |
You just want to search the space of all strings of symbols 02:23:59.920 |
to find the one, find one that is the proof, right? 02:24:02.960 |
And there's a whole area of machine learning called search. 02:24:12.360 |
It's easier in cases where there's a clear measure of good, 02:24:23.840 |
That's why we talked about neural networks work so well. 02:24:32.320 |
of figuring out the intuition of good, essentially. 02:24:46.880 |
sometimes 20 steps ahead was not a calculation 02:24:53.760 |
about different patterns, about board positions, 02:25:06.320 |
AlphaGo Zero was the first one that did the self-play. 02:25:12.200 |
It was able to learn through self-play mechanism, 02:25:16.840 |
- But just like you said, it's so fascinating to think 02:25:32.280 |
in the cranium of the great mathematicians of humanity. 02:25:41.840 |
when we said intuition is something different. 02:26:04.280 |
Because for Deep Blue, there was no intuition. 02:26:08.640 |
Or rather, humans had programmed in some intuition: 02:26:15.040 |
count the pawn as one point, the bishop as three points, 02:26:20.320 |
You add it all up and then you add some extra points 02:26:22.480 |
for passed pawns and subtract if the opponent has it 02:26:31.600 |
Just very brute force, tried many, many moves ahead, 02:26:35.000 |
all these combinations in a pruned tree search. 02:26:48.800 |
it's just brute force search but it has no intuition. 02:27:00.880 |
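As a rough illustration of what "humans had programmed in some intuition" means in practice, here is a toy hand-coded evaluation function in that spirit. The piece values and the passed-pawn bonus are the kind of numbers a human expert chooses by hand; they are illustrative, not IBM's actual Deep Blue parameters.

```python
# Sketch of a hand-coded evaluation function in the spirit described above:
# the "intuition" is a set of numbers and rules a human chose, nothing learned.
PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}

def count_passed_pawns(board, color):
    # Stub: a real engine would scan files for opposing pawns; omitted in this sketch.
    return 0

def evaluate(board):
    """board: dict square -> (color, piece letter). Positive scores favor White."""
    score = 0.0
    for color, piece in board.values():
        sign = 1.0 if color == "white" else -1.0
        score += sign * PIECE_VALUES.get(piece, 0.0)
    # Hand-tuned bonus for passed pawns, exactly the kind of rule humans added.
    score += 0.5 * (count_passed_pawns(board, "white") - count_passed_pawns(board, "black"))
    return score

# Example: White has an extra bishop, so the score comes out at +3.0.
position = {"e1": ("white", "K"), "e8": ("black", "K"), "c4": ("white", "B")}
print(evaluate(position))
```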
yes, it does also do some of that tree search, 02:27:06.560 |
which in geek speak is called a value function 02:27:11.120 |
and comes up with a number for how good is that position. 02:27:21.080 |
And MuZero is the coolest or scariest of all, 02:27:35.320 |
regardless of whether it's chess or Go or Shogi 02:27:38.640 |
or Pac-Man or Ms. Pac-Man or Breakout or Space Invaders 02:27:45.840 |
and it gets this intuition after a while for what's good. 02:27:49.760 |
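By contrast, a learned value function is fitted to data rather than written down by hand. This toy sketch is not DeepMind's AlphaZero or MuZero; it only illustrates the idea of training a network to map positions to expected outcomes, using made-up position encodings and made-up self-play results.

```python
# Toy illustration of a *learned* value function: instead of hand-coded rules,
# fit a model to (position, eventual outcome) pairs collected from self-play.
# The feature encoding and the data here are invented for the sketch.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Pretend each row encodes a board position as 64 numbers, and each label is
# the game result seen from that position: +1 win, 0 draw, -1 loss.
positions = rng.normal(size=(2000, 64))
outcomes = np.tanh(positions @ rng.normal(size=64) / 8.0)   # stand-in for real results

value_net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=1)
value_net.fit(positions, outcomes)

# During search, the engine would call this learned value estimate
# instead of a hand-coded formula.
new_position = rng.normal(size=(1, 64))
print(value_net.predict(new_position))
```

In a real system the positions and outcomes would come from the engine's own self-play games, which is where the "intuition for what's good" gets distilled.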
So this is very hopeful for science, I think, 02:28:02.480 |
One of the most fun things in my science career 02:28:06.400 |
is when I've been able to prove some theorem about something 02:28:08.640 |
and it's very heavily intuition guided, of course. 02:28:14.280 |
I have a hunch that this reminds me a little bit 02:28:17.720 |
about this other proof I've seen for this thing. 02:28:34.880 |
I think it's gonna be able to help physics too. 02:28:38.600 |
- Do you think there'll be a day when an AI system 02:28:45.120 |
let's say 90% plus, wins a Nobel Prize in physics? 02:28:52.000 |
'cause we humans don't like to give prizes to machines. 02:28:54.840 |
It'll give it to the humans behind the system. 02:28:57.600 |
You could argue that AI has already been involved 02:29:01.600 |
maybe some to the black holes and stuff like that. 02:29:03.600 |
- Yeah, we don't like giving prizes to other life forms. 02:29:13.160 |
But do you think that we might be able to see 02:29:16.040 |
something like that in our lifetimes when AI? 02:29:21.880 |
that makes us think about a Nobel Prize seriously 02:29:38.760 |
we might be able to see that in our lifetimes? 02:29:51.880 |
to do a computation that gives them the Nobel Prize, 02:29:54.920 |
nobody's gonna dream of giving the prize to the computer. 02:29:57.160 |
They're gonna be like, "That was just a tool." 02:30:06.080 |
But what's gonna change is the ubiquity of machine learning. 02:30:26.760 |
says, "Oh, I don't know anything about computers," 02:30:35.400 |
there is a magic moment, though, like with AlphaZero, 02:30:48.960 |
in a way where you feel like it's another entity. 02:30:54.920 |
the way certain people are looking at the work of AlphaZero, 02:31:03.080 |
in the sense that it doesn't feel like a tool. 02:31:08.920 |
So there is a magic difference where you're like, 02:31:12.480 |
if an AI system is able to come up with an insight 02:31:43.120 |
of the 21st century science is exactly what you're saying, 02:31:48.800 |
will be doing machine learning to some degree. 02:32:03.080 |
that's super surprising and they'll make us question 02:32:12.200 |
I think the question of isn't if it's gonna happen, 02:32:19.440 |
the time when that happens is also more or less 02:32:22.280 |
the same time when we get artificial general intelligence. 02:32:25.600 |
And then we have a lot bigger things to worry about 02:32:28.200 |
than whether we should get the Nobel Prize or not, right? 02:32:54.600 |
about loss of control when machines get to AGI 02:32:58.960 |
across the board, when they can do everything, all our jobs. 02:33:07.920 |
We talked at length about how the hacking of our minds 02:33:12.600 |
with algorithms trying to get us glued to our screens, 02:33:47.120 |
the implication is always that they're coming to kill us. 02:33:50.000 |
And maybe you shouldn't have worried about that 02:34:08.240 |
into buying things that maybe we didn't need, 02:34:22.080 |
are actually much more hackable than we thought. 02:34:24.800 |
And the ultimate insult is that we are actually 02:34:27.080 |
getting hacked by the machine learning algorithms 02:34:35.800 |
because how do you feel about the cute puppies? 02:34:47.720 |
but boy, are our cute puppies good at hacking us, right? 02:34:52.520 |
persuade us to feed them and do all these things. 02:34:57.480 |
- Other than being cute and making us feel good, right? 02:35:04.960 |
if pretty dumb machine learning algorithms can hack us too. 02:35:09.080 |
- Not to speak of cats, which is another level. 02:35:15.680 |
let us not think about evil creatures in this world. 02:35:43.080 |
You've mentioned offline that there might be a link 02:36:03.680 |
perhaps the entirety of the observable universe, 02:36:06.360 |
we might be the only intelligent civilization here. 02:36:17.720 |
So I have a few little questions around this. 02:36:33.120 |
in which way do you think you would be surprised? 02:36:48.420 |
Is it because the nature of their intelligence 02:36:51.320 |
or the nature of their life is totally different 02:37:04.480 |
Or maybe because we're being protected from signals? 02:37:08.800 |
All those explanations for why we haven't heard 02:37:13.800 |
a big, loud, like red light that says we're here. 02:37:21.720 |
So there are actually two separate things there 02:37:35.500 |
when you're going from simple bacteria-like things 02:38:08.240 |
that's actually gotten far enough to invent telescopes. 02:38:12.320 |
So let's talk about maybe both of them in turn 02:38:14.980 |
The first one, if you look at the N equals one, 02:38:25.900 |
futzing around on this planet with life, right? 02:38:38.160 |
then the things gradually accelerated, right? 02:38:41.300 |
Then the dinosaurs spent over 100 million years 02:38:43.620 |
stomping around here without even inventing smartphones. 02:38:49.780 |
we've only spent 400 years going from Newton to us, right? 02:38:59.260 |
when I was a little kid, there was no internet even. 02:39:02.620 |
So I think it's pretty likely for in this case 02:39:08.180 |
That we're either gonna really get our act together 02:39:12.180 |
and start spreading life into space this century, 02:39:12.180 |
what happened on this Earth is very atypical. 02:39:25.780 |
And for some reason, what's more common on other planets 02:39:31.440 |
futzing around with the ham radio and things, 02:39:33.720 |
but they just never really take it to the next level 02:39:54.120 |
whether it's a population explosion or a nuclear explosion, 02:39:58.020 |
It's that the next step triggers a step after that. 02:40:01.500 |
So today's technology enables tomorrow's technology, 02:40:13.820 |
of course, the steps can come faster and faster. 02:40:17.020 |
On the other question that I might be wrong about, 02:40:19.200 |
that's the much more controversial one, I think. 02:40:24.960 |
if the first one, if it's true that most civilizations 02:40:28.360 |
spend only a very short amount of their total time 02:40:46.220 |
then that should apply also elsewhere out there. 02:41:08.740 |
some really cool galactic construction projects 02:41:13.340 |
- Would we be able to recognize them, do you think? 02:41:20.060 |
could it be just existing in some other dimension? 02:41:28.500 |
that it changes completely where we won't be able to detect? 02:41:32.900 |
- We have to be, honestly, very humble about this. 02:41:37.020 |
the number one principle of being a scientist 02:41:39.780 |
is you have to be humble and willing to acknowledge 02:41:42.220 |
that everything we think, guess, might be totally wrong. 02:41:52.380 |
and not disturb the flora and fauna around them 02:41:59.960 |
If you have millions of civilizations out there 02:42:03.700 |
all it takes is one with a more ambitious mentality 02:42:19.620 |
We're still gonna notice that expansionist one, right? 02:42:23.020 |
And it seems like quite the stretch to assume that, 02:42:28.140 |
that there are probably a billion or more planets 02:42:34.540 |
and many of them were formed over a billion years 02:42:40.820 |
So if you actually assume also that life happens 02:42:45.060 |
kind of automatically on an Earth-like planet, 02:42:47.540 |
I think it's quite the stretch to then go and say, 02:42:52.260 |
okay, so there are another billion civilizations out there 02:42:59.460 |
and not a single one decided to go Hitler on the galaxy 02:43:05.420 |
or not a single one decided for more benevolent reasons 02:43:16.700 |
you challenged me to be, that I might be wrong about, 02:43:21.180 |
So Frank Drake, when he wrote down the Drake equation, 02:43:31.660 |
when you multiply together the whole product. 02:43:41.060 |
how common is it that a solar system even has a planet? 02:43:46.500 |
- Earth-like planets, we know we have better-- 02:43:48.540 |
- There are a dime a dozen, there are many, many of them, 02:43:55.220 |
I'm a big supporter of the SETI project and its cousins, 02:44:04.020 |
all we have is still unconvincing hints, nothing more. 02:44:12.580 |
If there were 100 million other human-like civilizations 02:44:20.380 |
to notice some of them with today's technology, 02:44:29.640 |
we can rule out that there is a human-level civilization 02:44:32.320 |
on the moon, and in fact, the many nearby solar systems, 02:44:39.320 |
that there is something like Earth sitting in a galaxy 02:44:50.240 |
given that there are all these planets there. 02:44:58.840 |
So my argument, which might very well be wrong, 02:45:01.120 |
it's very simple, really, it just goes like this. 02:45:09.640 |
on a random planet, it could be 10 to the minus one, 02:45:15.400 |
10 to the minus 20, 10 to the minus 30, 10 to the minus 40. 02:45:27.460 |
it's again equally likely that it's 10 to the 10 meters away, 02:45:30.560 |
10 to the 20 meters away, 10 to the 30 meters away. 02:45:33.480 |
We have some nerdy ways of talking about this 02:45:35.680 |
with Bayesian statistics and a uniform log prior, 02:45:43.760 |
okay, there are all these orders of magnitude. 02:46:10.920 |
it could be 10 to the 10 meters, 10 to the 20, 02:46:25.460 |
- And here is the edge of our observable universe already. 02:46:33.700 |
then you're basically 100% guaranteed that there is. 02:46:48.740 |
there's actually significantly less than one, I think. 02:46:52.240 |
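Here is a back-of-the-envelope sketch of that reasoning. The prior range on the distance to our nearest neighbor civilization (10^10 to 10^40 meters) and the treatment of the observable universe as a sphere of radius roughly 4 x 10^26 meters are assumptions chosen for illustration; the point is only that a log-uniform prior spread over many orders of magnitude leaves a lot of probability outside our cosmic horizon.

```python
# Back-of-the-envelope sketch: if we are equally uncertain across orders of
# magnitude about the distance to the nearest other civilization, how often
# does it land inside our observable universe?
import numpy as np

rng = np.random.default_rng(42)
LOG10_MIN, LOG10_MAX = 10.0, 40.0     # assumed prior range, in log10(meters)
LOG10_HORIZON = 26.6                  # observable universe radius ~ 4e26 meters

log_distance = rng.uniform(LOG10_MIN, LOG10_MAX, size=1_000_000)
p_inside = np.mean(log_distance < LOG10_HORIZON)

print(f"P(nearest civilization within our horizon) ~ {p_inside:.2f}")
# With these made-up bounds the answer is about (26.6 - 10) / 30 ~ 0.55, and it
# only shrinks if the plausible range extends further; nothing close to certainty.
```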
And I think there's a moral lesson from this, 02:47:03.760 |
oh, it's fine if we nuke our planet or ruin the climate 02:47:10.280 |
because I know there is this nice Star Trek fleet out there. 02:47:15.160 |
They're gonna swoop in and take over where we failed. 02:47:19.840 |
that the Easter Island losers wiped themselves out. 02:47:26.640 |
If it's actually the case that it might be up to us 02:47:32.020 |
and only us, the whole future of intelligent life 02:47:40.440 |
it really puts a lot of responsibility on our shoulders. 02:48:00.160 |
This is about as far from that as you can come. 02:48:06.920 |
on our little spinning ball here in our lifetime 02:48:13.880 |
for the entire future of life in our universe. 02:48:20.280 |
I mean, the other, a very similar kind of empowering aspect 02:48:27.680 |
say there is a huge number of intelligent civilizations 02:48:39.880 |
And just like you said, it's clear that that, 02:48:45.920 |
the one possible great filter seems to be coming 02:49:02.920 |
Nick Bostrom has articulated this really beautifully too. 02:49:06.080 |
Every time yet another search for life on Mars 02:49:09.480 |
comes back negative or something, I'm like, yes, yes! 02:49:17.880 |
You already made the argument in broad brush there, right? 02:49:24.560 |
there is a crap ton of planets out there that are Earth-like 02:49:29.680 |
and we also know that most of them do not seem to have 02:49:37.280 |
There's clearly one step along the evolutionary, 02:49:39.520 |
at least one filter roadblock in going from no life 02:49:48.200 |
Is it in front of us or is it behind us, right? 02:49:54.080 |
and we keep finding all sorts of little mice on Mars 02:50:15.640 |
it just sets their clock and then after a little while 02:50:18.160 |
it goes poof for one reason or other and wipes itself out. 02:50:26.120 |
Whereas if it turns out that there is a great filter 02:50:39.120 |
or even the first ribosome or whatever, right? 02:50:43.280 |
Or maybe you have lots of planets with dinosaurs and cows 02:50:47.120 |
but for some reason they tend to get stuck there 02:50:50.800 |
All of those are huge boosts for our own odds 02:50:58.800 |
It doesn't matter how hard or unlikely it was 02:51:01.680 |
that we got past that roadblock because we already did. 02:51:11.440 |
So that's why I think the fact that life is rare 02:51:20.560 |
but also something we should actually hope for. 02:51:33.120 |
maybe prospering beyond any kind of great filter. 02:51:39.400 |
Does it make you sad that you may not witness some of the, 02:51:47.880 |
some of the biggest questions in the universe actually, 02:51:53.720 |
Does it make you sad that you may not be able 02:51:55.560 |
to see some of these exciting things come to fruition 02:52:07.200 |
my dad made this remark that life is fundamentally tragic. 02:52:10.840 |
And I'm like, what are you talking about, daddy? 02:52:15.640 |
now I feel I totally understand what he means. 02:52:21.960 |
And then suddenly we find out that actually, you know, 02:52:37.280 |
- No, not in the sense that I think anything terrible 02:52:46.040 |
is gonna happen after I die or anything like that. 02:52:51.000 |
but it's more that it makes me very acutely aware 02:53:00.240 |
And it's a steady reminder to just live life to the fullest 02:53:04.720 |
and really enjoy it because it is finite, you know. 02:53:08.080 |
And I think actually, and we all get the regular reminders 02:53:23.760 |
to be an immortal being, if they might even enjoy 02:53:27.040 |
some of the wonderful things of life a little bit less, 02:53:35.200 |
- Do you think that could be a feature, not a bug, 02:53:53.440 |
that the reason the pistachio ice cream is delicious 02:53:57.000 |
is the fact that you're going to die one day? 02:54:00.000 |
And you will not have all the pistachio ice cream 02:54:10.560 |
I do think I appreciate the pistachio ice cream a lot more 02:54:12.840 |
knowing that I will, there's only a finite number of times 02:54:17.840 |
And I can only remember a finite number of times 02:54:30.560 |
I also think though that death is a little bit overrated 02:54:35.560 |
in the sense that it comes from a sort of outdated view 02:54:45.640 |
Because if you ask, okay, what is it that's going to die 02:54:52.040 |
When I say I feel sad about the idea of myself dying, 02:54:56.000 |
am I really sad that this skin cell here is gonna die? 02:54:59.200 |
Of course not, 'cause it's gonna die next week anyway 02:55:04.020 |
And it's not any of my cells that I'm associating really 02:55:08.440 |
with who I really am, nor is it any of my atoms 02:55:14.040 |
In fact, basically all of my atoms get replaced 02:55:25.880 |
It's the information processing, that's where my memories are, 02:55:30.880 |
that's my values, my dreams, my passion, my love. 02:55:43.560 |
And frankly, not all of that will die when my body dies. 02:55:55.120 |
But many of his ideas that he felt made him very him 02:56:04.120 |
I try to keep a little bit of him alive in myself. 02:56:09.640 |
- Yeah, he almost came alive for a brief moment 02:56:13.360 |
- Yeah, and this honestly gives me some solace. 02:56:19.360 |
I feel if I can actually share a bit about myself, 02:56:25.360 |
that my students feel worthy enough to copy and adopt 02:56:35.680 |
now I live on also a little bit in them, right? 02:56:38.280 |
And so being a teacher is a little bit of what I, 02:56:51.760 |
to making me a little teeny bit less mortal, right? 02:56:55.720 |
Because I'm not, at least not all gonna die all at once, 02:57:06.960 |
the things that we felt was the most awesome about them, 02:57:17.240 |
but it's a very beautiful idea you bring up there. 02:57:19.800 |
I think we should stop this old-fashioned materialism 02:57:23.160 |
and just equate who we are with our quarks and electrons. 02:57:34.620 |
Now, if you look a little bit towards the future, right, 02:57:40.520 |
one thing which really sucks about humans dying 02:57:46.200 |
and memories and stories and ethics and so on 02:57:49.920 |
will be copied by those around them, hopefully, 02:57:52.960 |
a lot of it can't be copied and just dies with them, 02:57:57.160 |
That's the fundamental reason why we find it so tragic 02:58:00.900 |
when someone goes from having all this information there 02:58:15.000 |
The only reason it's so hard to make a backup of your brain 02:58:25.760 |
there's no reason for why it has to die at all 02:58:36.680 |
You can copy not just some of it, but all of it, right? 02:58:42.820 |
you can get immortality because all the information 02:58:54.100 |
It's also with that, very much the whole individualism 02:59:01.220 |
The reason that we make such a big difference 02:59:06.220 |
we're a little bit limited in how much we can copy. 02:59:08.100 |
Like I would just love to go back to the beginning 02:59:10.460 |
and copy, like I would just love to go like this 02:59:13.340 |
and copy your Russian skills, Russian speaking skills. 02:59:27.980 |
- Just copying paste freely, then that loses completely, 02:59:31.860 |
it washes away the sense of what immortality is. 02:59:35.180 |
- And also individuality a little bit, right? 02:59:39.340 |
maybe we would feel much more collaborative with each other 02:59:43.540 |
if we can just, hey, I'll give you my Russian, 02:59:45.660 |
you can give me your Russian and I'll give you whatever, 02:59:52.060 |
but whatever else you want from my brain, right? 03:00:10.220 |
what it would feel like to be a super intelligent machine, 03:00:15.220 |
but I'm quite confident that however it feels 03:00:22.460 |
will be very, very different from how it is for us. 03:00:30.540 |
seems to be pretty important at this particular moment. 03:00:40.660 |
- Sorry, this is the world's worst translation. 03:00:45.820 |
It's such a huge honor that you spent time with me. 03:01:07.740 |
and with the other, like this ripple effect of friends, 03:01:12.740 |
including Elon and everybody else that you inspire. 03:01:18.980 |
I feel so fortunate that you're doing this podcast 03:01:23.620 |
and getting so many interesting voices out there 03:01:27.780 |
into the ether and not just the five-second sound bites, 03:01:30.940 |
but so many of the interviews I've watched you do. 03:01:36.140 |
in a way which we sorely need in this day and age. 03:01:38.380 |
And that I got to be number one, I feel super honored. 03:01:44.660 |
Thanks for listening to this conversation with Max Tegmark. 03:02:05.820 |
And if you wish, click the sponsor links below 03:02:08.740 |
to get a discount and to support this podcast. 03:02:11.860 |
And now let me leave you with some words from Max Tegmark. 03:02:15.100 |
If consciousness is the way that information feels when it is processed in certain complex ways, then it must be substrate-independent. 03:02:24.220 |
It's only the structure of the information processing 03:02:26.660 |
that matters, not the structure of the matter doing the information processing. 03:02:31.900 |
Thank you for listening and hope to see you next time.