Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
Chapters
0:00 Introduction
2:27 Is there intelligent life out there
3:02 We don't mean all of space
5:42 Intelligent life
7:22 Space and intelligence
15:32 Does consciousness have an experience
19:04 Self-preservation instinct
20:19 Fear of death
22:59 Intelligence and consciousness
27:17 AI
31:39 Quantum Mechanics
33:50 The Future
36:56 Creativity
42:05 Intelligent Machines
44:38 Human Values
46:30 Is it possible
49:01 Cellular automata
51:28 Information processing
53:46 Communication
00:00:00.000 |
As part of MIT course 6.S099 Artificial General Intelligence, 00:00:04.180 |
I've gotten the chance to sit down with Max Tegmark. 00:00:08.660 |
He's a physicist, spent a large part of his career 00:00:11.900 |
studying the mysteries of our cosmological universe, 00:00:21.640 |
and the existential risks of artificial intelligence. 00:00:25.780 |
Amongst many other things, he's the co-founder 00:00:29.020 |
of the Future of Life Institute, author of two books, 00:00:35.180 |
First, "Our Mathematical Universe," second is "Life 3.0." 00:00:40.180 |
He's truly an out-of-the-box thinker and a fun personality, 00:00:45.460 |
If you'd like to see more of these videos in the future, 00:00:47.980 |
please subscribe and also click the little bell icon 00:01:07.900 |
It's really where philosophy and engineering come together, 00:01:16.460 |
"in just staying alive, but in finding something to live for." 00:01:36.140 |
radio frequency interference, RFI, look it up. 00:01:42.900 |
from local radio stations can bleed into the audio 00:01:49.300 |
It's an exceptionally difficult sound source to remove. 00:01:55.500 |
how to avoid RFI in the future during recording sessions. 00:02:11.220 |
Of course, this is an exceptionally difficult noise to remove. 00:02:25.020 |
and I hope you're still able to enjoy this conversation. 00:02:36.260 |
When I give public lectures, I often ask for a show of hands 00:02:39.460 |
who thinks there's intelligent life out there 00:02:42.020 |
somewhere else, and almost everyone put their hands up. 00:02:47.420 |
"Oh, there's so many galaxies out there, there's gotta be." 00:02:58.020 |
When we talk about our universe, first of all, 00:03:11.460 |
from which light has had time to reach us so far 00:03:25.940 |
that's gotten to the point of building telescopes 00:03:34.500 |
The probability of it happening on any given planet 00:03:42.580 |
And what we do know is that the number can't be super high, 00:03:47.580 |
'cause there's over a billion Earth-like planets 00:03:52.860 |
many of which are billions of years older than Earth. 00:04:01.900 |
that any super advanced civilization has come here at all. 00:04:05.620 |
And so that's the famous Fermi paradox, right? 00:04:13.460 |
what the probability is of getting life on a given planet, 00:04:27.660 |
that our nearest neighbor is 10 to the 16 meters away, 00:04:33.940 |
Now, by the time you get much less than 10 to the 16 already 00:04:40.180 |
we pretty much know there is nothing else that close. 00:04:48.780 |
- Yeah, they would have discovered us long ago, 00:04:51.500 |
we would have probably noted some engineering projects 00:05:00.060 |
So my guess is actually that we are the only life in here 00:05:05.060 |
that's gotten to the point of building advanced tech, 00:05:10.820 |
puts a lot of responsibility on our shoulders, 00:05:20.140 |
have an accidental nuclear war or go extinct somehow, 00:05:22.780 |
because there's a sort of Star Trek-like situation out there 00:05:25.980 |
where some other life forms are gonna come and bail us out 00:05:30.420 |
I think they're lulling us into a false sense of security. 00:05:38.740 |
and make the best of it just in case it is down to us. 00:05:48.780 |
so it's unique from a sort of statistical view 00:05:55.820 |
how difficult is it for intelligent life to come about 00:05:59.020 |
with the kind of advanced tech building life? 00:06:01.320 |
Is it implied in your statement that it's really difficult 00:06:07.580 |
- Well, I think what we know is that going from no life 00:06:18.660 |
than actually settling our whole universe with life. 00:06:26.500 |
which is some great filter as it's sometimes called, 00:06:33.460 |
It's either, that roadblock is either behind us 00:06:40.980 |
I'm super excited every time we get a new report 00:06:45.380 |
from NASA saying they failed to find any life on Mars. 00:06:54.140 |
or some very low level kind of stepping stone 00:07:07.260 |
that this level of life is kind of a dime a dozen, 00:07:13.060 |
As soon as a civilization gets advanced technology, 00:07:16.900 |
they get into some stupid fight with themselves and poof. 00:07:29.900 |
I think you've also begun to explore the other universe, 00:07:42.740 |
So is there a common thread between your interest 00:07:45.180 |
and the way you think about space and intelligence? 00:07:50.960 |
I was already very fascinated by the biggest questions. 00:07:57.180 |
And I felt that the two biggest mysteries of all in science 00:08:00.460 |
were our universe out there and our universe in here. 00:08:20.220 |
for greatly deepening our understanding of this. 00:08:26.620 |
- Yeah, 'cause I think a lot of people view intelligence 00:08:37.980 |
artificial general intelligence is science fiction. 00:08:50.120 |
And this is also a blob of quarks and electrons. 00:08:55.380 |
because I'm made of different kinds of quarks. 00:09:05.180 |
It's all about the pattern of the information processing. 00:09:08.620 |
And this means that there's no law of physics 00:09:15.680 |
which can help us by being incredibly intelligent 00:09:20.020 |
and help us crack mysteries that we couldn't. 00:09:21.740 |
In other words, I think we've really only seen 00:09:40.140 |
as you're saying, subjective experience emerge, 00:09:50.980 |
So again, I think many people have underestimated 00:10:04.140 |
because somehow we're missing some ingredient that we need, 00:10:08.740 |
or some new consciousness particle or whatever. 00:10:11.800 |
I happen to think that we're not missing anything 00:10:32.220 |
and that's why I like to think about this idea 00:10:38.820 |
for an arbitrary physical system to be conscious 00:10:49.580 |
you know, this attitude you have to be made of carbon atoms 00:10:54.100 |
- So something about the information processing 00:11:01.340 |
describing various fundamental aspects of the world. 00:11:05.900 |
maybe someone who's watching this will come up 00:11:07.700 |
with the equations that information processing 00:11:12.140 |
I'm quite convinced there is a big discovery to be made there 00:11:16.740 |
'cause let's face it, we know that some information 00:11:20.620 |
processing is conscious 'cause we are conscious, 00:11:25.620 |
but we also know that a lot of information processing 00:11:27.820 |
is not conscious, like most of the information processing 00:11:29.880 |
happening in your brain right now is not conscious. 00:11:32.900 |
There are like 10 megabytes per second coming in, 00:11:38.300 |
You're not conscious about your heartbeat regulation 00:11:44.620 |
to like read what it says here, you look at it 00:11:51.060 |
You're like, your consciousness is like the CEO 00:11:53.820 |
that got an email at the end with the final answer. 00:12:05.240 |
We're actually studying it a little bit in my lab 00:12:07.040 |
here at MIT, but I also think it's just a really urgent 00:12:12.200 |
For starters, I mean, if you're an emergency room doctor 00:12:15.000 |
and you have an unresponsive patient coming in, 00:12:17.320 |
wouldn't it be great if in addition to having a CT scanner, 00:12:20.320 |
you had a consciousness scanner that could figure out 00:12:26.800 |
whether this person is actually having locked-in syndrome 00:12:32.200 |
And in the future, imagine if we build robots 00:12:36.800 |
or the machine that we can have really good conversations 00:12:41.320 |
with, which I think is most likely to happen, right? 00:12:44.680 |
Wouldn't you want to know like if your home helper robot 00:12:47.600 |
is actually experiencing anything or just like a zombie? 00:12:51.200 |
I mean, would you prefer, what would you prefer? 00:12:53.960 |
Would you prefer that it's actually unconscious 00:12:56.040 |
so that you don't have to feel guilty about switching it off 00:12:58.640 |
or giving boring chores or what would you prefer? 00:13:06.620 |
I would prefer the appearance of consciousness. 00:13:09.040 |
But the question is whether the appearance of consciousness 00:13:18.320 |
do you think we need to understand what consciousness is, 00:13:23.640 |
in order to build something like an AGI system? 00:13:30.560 |
And I think we will probably be able to build things 00:13:36.160 |
But if we want to make sure that what happens 00:13:41.040 |
So it's a wonderful controversy you're raising there 00:13:45.040 |
where you have basically three points of view 00:13:52.880 |
that both conclude that the hard problem of consciousness 00:13:56.080 |
On one hand, you have some people like Daniel Dennett 00:14:01.560 |
because consciousness is the same thing as intelligence. 00:14:06.520 |
So anything which acts conscious is conscious, 00:14:19.960 |
"because of course machines can never be conscious." 00:14:24.600 |
You never have to feel guilty about how you treat them. 00:14:39.600 |
who say that actually some information processing 00:14:48.320 |
And I think we've just been a little bit lazy, 00:14:52.140 |
kind of running away from this problem for a long time. 00:14:55.040 |
It's been almost taboo to even mention the C word 00:15:05.440 |
And there are ways we can even test any theory 00:15:24.020 |
if you realized that it was just a glossed up tape recorder, 00:15:27.780 |
you know, that was just a zombie, sort of faking emotion? 00:15:31.660 |
Would you prefer that it actually had an experience 00:15:41.540 |
- It's such a difficult question because, you know, 00:15:45.100 |
it's like when you're in a relationship and you say, 00:15:49.840 |
It's like asking, well, do they really love you back? 00:15:55.120 |
Don't you really want them to actually love you? 00:15:58.200 |
It's hard to, it's hard to really know the difference 00:16:03.520 |
between everything seeming like there's consciousness 00:16:20.760 |
So Mass General Hospital is right across the river, right? 00:16:23.760 |
- Suppose you're going in for a medical procedure 00:16:29.280 |
what we're going to do is we're going to give you 00:16:31.000 |
muscle relaxants so you won't be able to move 00:16:35.900 |
but you won't be able to do anything about it. 00:16:43.400 |
What's the difference that you're conscious about it 00:16:48.620 |
or not if there's no behavioral change, right? 00:17:01.100 |
So actually being able to have subjective experiences, 00:17:21.740 |
It's okay to boil lobsters because we asked them 00:17:41.140 |
So I'm a little bit nervous when I hear people 00:17:49.940 |
it's a really fascinating science question, what it is. 00:18:03.820 |
Boston Dynamics, humanoid robot being sort of 00:18:09.940 |
it starts pushing on this consciousness question. 00:18:22.340 |
needs to have a body or something like a body? 00:18:31.780 |
- I do think it helps a lot to have a physical embodiment 00:18:51.420 |
Your eyes are closed, you're not getting any sensory input, 00:18:55.980 |
but there's still an experience there, right? 00:19:04.820 |
it's just the information processing itself in your brain, 00:19:15.140 |
is the reason you wanna have a body and a physical, 00:19:23.900 |
is because you want to be able to preserve something. 00:19:31.740 |
you need to have some kind of embodiment of self 00:19:37.900 |
- Well, now we're getting a little bit anthropomorphic, 00:19:45.100 |
maybe talking about self-preservation instincts. 00:19:57.060 |
'cause those that didn't have those self-preservation genes 00:20:02.940 |
But if you build an artificial general intelligence, 00:20:06.860 |
the mind space that you can design is much, much larger 00:20:10.020 |
than just a specific subset of minds that can evolve. 00:20:24.020 |
Like imagine if you could just, first of all, 00:20:28.140 |
So suppose you could back yourself up every five minutes 00:20:34.100 |
"I'm gonna lose the last five minutes of experiences 00:20:46.740 |
which we could easily do if we were silicon-based, right? 00:20:57.860 |
So I don't think we should take for granted at all 00:20:59.900 |
that AGI will have to have any of those sort of 00:21:07.300 |
On the other hand, you know, this is really interesting 00:21:10.100 |
because I think some people go too far and say, 00:21:13.700 |
"Of course, we don't have to have any concerns either 00:21:22.620 |
That there's a very nice set of arguments going back 00:21:26.220 |
to Steve Omohundro and Nick Bostrom and others 00:21:28.500 |
just pointing out that when we build machines, 00:21:32.260 |
we normally build them with some kind of goal, 00:21:38.460 |
And as soon as you put in a goal into machine, 00:21:44.580 |
it'll break that down into a bunch of sub-goals. 00:21:57.940 |
you have a little robot and you tell it to go down 00:22:03.940 |
make you cook you an Italian dinner, you know, 00:22:06.100 |
and then someone mugs it and tries to break it on the way. 00:22:09.420 |
That robot has an incentive to not get destroyed 00:22:14.660 |
because otherwise it's gonna fail in cooking your dinner. 00:22:19.500 |
but it really wants to complete the dinner cooking goal 00:22:22.860 |
so it will have a self-preservation instinct. 00:22:27.900 |
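A minimal toy sketch of the sub-goal argument being made here (the agent, the survival probabilities, and the numbers are all hypothetical, chosen only to make the logic concrete): an agent that maximizes the chance of completing its dinner-cooking goal automatically prefers actions that keep it from being destroyed.

```python
# Toy illustration (not from the conversation): an agent that maximizes the
# probability of finishing its task ends up preferring the action that keeps it
# running -- self-preservation emerges as an instrumental sub-goal.

P_FINISH_IF_RUNNING = 0.9    # assumed chance of cooking dinner if the robot survives
P_FINISH_IF_DESTROYED = 0.0  # a destroyed robot cooks nothing

def expected_task_value(p_survive: float) -> float:
    """Expected probability of completing the dinner goal."""
    return p_survive * P_FINISH_IF_RUNNING + (1 - p_survive) * P_FINISH_IF_DESTROYED

# Two candidate behaviours when someone tries to break the robot (made-up numbers):
actions = {
    "ignore the attacker": 0.3,     # assumed survival probability
    "evade / protect itself": 0.8,
}

for action, p_survive in actions.items():
    print(f"{action}: expected goal completion = {expected_task_value(p_survive):.2f}")

best = max(actions, key=lambda a: expected_task_value(actions[a]))
print("Chosen behaviour:", best)  # the self-preserving option wins
```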
- And similarly, if you give any kind of more 00:22:35.420 |
it's very likely to wanna acquire more resources 00:22:39.780 |
And it's exactly from those sort of sub-goals 00:22:43.740 |
that some of the concerns about AGI safety come. 00:22:47.060 |
You give it some goal that seems completely harmless 00:23:17.180 |
so that's the kind of mind space evolution created 00:23:20.500 |
that we're sort of almost obsessed about self-preservation. 00:23:24.380 |
You don't think that's necessary to be afraid of death? 00:23:29.380 |
So not just a kind of sub-goal of self-preservation 00:23:34.860 |
but more like fundamentally sort of have the finite thing 00:23:44.060 |
Do I think it's necessary for what precisely? 00:23:47.380 |
- For intelligence, but also for consciousness. 00:24:04.380 |
before we can agree on whether it's necessary 00:24:07.740 |
we should be clear on how we define those two words 00:24:17.100 |
and they couldn't agree on how to define intelligence even. 00:24:36.660 |
I would say AlphaGo, AlphaZero is quite intelligent. 00:24:40.100 |
I don't think AlphaZero has any fear of being turned off 00:24:43.100 |
because it doesn't understand the concept of it even. 00:24:53.900 |
If certain plants have any kind of experience, 00:24:58.540 |
or there's nothing they can do about it anyway, 00:25:07.580 |
not just about being conscious, but maybe having 00:25:21.380 |
maybe there perhaps it does help having a backdrop 00:25:27.900 |
No, let's make the most of this, let's live to the fullest. 00:25:31.220 |
So if you knew you were gonna just live forever, 00:25:39.580 |
it would be an incredibly boring life living forever. 00:25:43.940 |
So in the sort of loose subjective terms that you said 00:25:52.820 |
is yeah, it seems that the finiteness of it is important. 00:26:02.100 |
everything in our universe is ultimately probably finite, 00:26:08.260 |
- Big Crunch or Big... what's the infinite expansion? 00:26:24.620 |
than our ancestors thought, but they're still pretty hard 00:26:29.620 |
to squeeze in an infinite number of compute cycles, 00:26:50.580 |
we should build our civilization as if it's all finite 00:27:04.780 |
or how would you try to define human level intelligence 00:27:10.700 |
Where is consciousness part of that definition? 00:27:13.300 |
- No, consciousness does not come into this definition. 00:27:20.340 |
but there are very many different kinds of goals 00:27:22.820 |
You can have a goal to be a good chess player, 00:27:34.340 |
isn't something you can measure by just one number 00:27:37.980 |
No, no, there are some people who are better at this, 00:27:42.380 |
Right now we have machines that are much better than us 00:27:50.100 |
memorizing large databases, playing chess, playing Go, 00:27:56.300 |
But there's still no machine that can match a human child 00:28:29.260 |
but the big impact doesn't necessarily have to wait 00:28:32.980 |
until machines are better than us at everything. 00:28:35.420 |
The really big change doesn't come exactly at the moment 00:28:45.220 |
becoming better than us at doing most of the jobs that we do, 00:28:48.820 |
because that takes away much of the demand for human labor. 00:28:55.620 |
when they become better than us at AI research. 00:29:05.700 |
by the human research and development cycle of years, 00:29:20.860 |
by 40,000 equivalent pieces of software or whatever, 00:29:25.860 |
right, then there's no reason that has to be years. 00:29:44.380 |
which gives rise to this incredibly fun controversy 00:29:48.700 |
about whether there can be intelligence explosion, 00:29:51.820 |
so-called singularity, as Vernor Vinge called it. 00:30:10.020 |
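A cartoon model of the argument above (my own illustration, not Tegmark's math): if each R&D cycle multiplies research capability by a fixed factor, and the cycle time shrinks in proportion to that capability once software is doing the research, then the cycle times form a geometric series and unbounded capability is reached in bounded total time.

```python
# Toy model of an "intelligence explosion" (illustration only, not from the episode):
# assume each R&D cycle doubles capability and takes time inversely proportional
# to the current capability of whoever is doing the research.

capability = 1.0          # arbitrary units; 1.0 = human-level research ability
base_cycle_years = 2.0    # assumed length of one human-driven R&D cycle
elapsed = 0.0

for cycle in range(1, 11):
    cycle_time = base_cycle_years / capability  # faster researchers -> shorter cycles
    elapsed += cycle_time
    capability *= 2                             # assumed improvement per cycle
    print(f"cycle {cycle:2d}: capability x{capability:6.0f}, total time {elapsed:.3f} years")

# The cycle times are 2, 1, 0.5, ... years, so the total time converges to
# 2 * base_cycle_years (here 4 years) no matter how many cycles you run.
```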
human level intelligence? - Yeah, human level, yeah. 00:30:15.740 |
which is better than us at all cognitive tasks, 00:30:19.380 |
or better than any human at all cognitive tasks. 00:30:25.820 |
It's when they can, when they're better than us 00:30:34.860 |
get better than us at anything by just studying up. 00:30:42.180 |
of the complexity of goals it's able to accomplish. 00:30:46.860 |
And that's certainly a very clear definition of human level. 00:31:11.700 |
a sort of a slightly different direction on creativity, 00:31:24.700 |
and perhaps better means having contradiction, 00:31:36.540 |
Let me ask, what's your favorite equation, first of all? 00:31:39.660 |
I know they're all like your children, but... 00:31:50.900 |
it can calculate everything to do with atoms, 00:32:01.140 |
a beautiful, mysterious formulation of our world. 00:32:08.740 |
it perhaps doesn't have the same beauty as physics does, 00:32:15.620 |
Andrew Wiles, who proved Fermat's Last Theorem. 00:32:29.900 |
everybody tried to prove it, everybody failed. 00:32:32.580 |
And so here's this guy comes along and eventually proves it 00:32:37.380 |
and then fails to prove it and then proves it again in '94. 00:32:41.300 |
And he said like the moment when everything connected 00:32:47.860 |
That moment when you finally realized the connecting piece 00:32:58.740 |
And I just stared at it in disbelief for 20 minutes. 00:33:01.980 |
Then during the day I walked around the department 00:33:12.860 |
It was the most important moment of my working life. 00:33:20.740 |
and it kind of made me think of what would it take? 00:33:24.780 |
And I think we have all been there at small levels. 00:33:37.180 |
- I wouldn't mention myself in the same breath 00:33:43.420 |
but I've certainly had a number of aha moments 00:33:48.420 |
when I realized something very cool about physics. 00:33:56.100 |
In fact, some of my favorite discoveries I made, 00:33:58.420 |
I later realized that they had been discovered earlier 00:34:01.180 |
by someone who sometimes got quite famous for it. 00:34:07.540 |
the emotional experience you have when you realize it. 00:34:12.260 |
So what would it take in that moment, that wow, 00:34:17.380 |
So what do you think it takes for an intelligent system, 00:34:21.500 |
an AGI system, an AI system to have a moment like that? 00:34:26.820 |
'cause there are actually two parts to it, right? 00:34:29.260 |
One of them is, can it accomplish that proof? 00:34:51.460 |
Can you build machines that are that intelligent? 00:34:57.340 |
that can independently come up with that level of proofs, 00:35:03.460 |
The second question is a question about consciousness. 00:35:07.320 |
When will we, how likely is it that such a machine 00:35:28.320 |
it views it as somehow something very positive 00:35:54.540 |
In a way, my absolutely worst nightmare would be that 00:36:04.320 |
the distant future, maybe our cosmos is teeming 00:36:15.680 |
by the time our species eventually fizzles out, 00:36:21.600 |
'cause we're so proud of our descendants here. 00:36:28.640 |
the consciousness problem and we haven't realized 00:36:34.880 |
than a tape recorder hasn't any kind of experience. 00:36:37.880 |
So the whole thing has just become a play for empty benches. 00:36:41.640 |
That would be like the ultimate zombie apocalypse. 00:36:47.260 |
that we have these beings which can really appreciate 00:37:22.960 |
But that's not necessarily what I mean by creativity. 00:37:29.600 |
where the sea is rising for there to be something creative 00:37:41.860 |
- My hunch is that we should think of creativity 00:37:49.480 |
And we have to be very careful with human vanity 00:37:56.660 |
where we have this tendency to very often want to say, 00:38:09.180 |
if we ask ourselves to write down a definition 00:38:16.940 |
what we mean by Andrew Wiles, what he did there, 00:38:24.280 |
and it's not like taking 573 and multiplying it by 224 00:38:29.280 |
by just a step of straightforward cookbook like rules. 00:38:34.800 |
You can maybe make a connection between two things 00:38:39.640 |
that people have never thought was connected. 00:38:47.660 |
and this is actually one of the most important aspects of it. 00:38:53.080 |
Maybe the reason we humans tend to be better at it 00:38:55.540 |
than traditional computers is because it's something 00:38:58.380 |
that comes more naturally if you're a neural network 00:39:01.260 |
than if you're a traditional logic gate-based 00:39:07.740 |
and if you activate here, activate here, activate here, 00:39:27.100 |
I want to travel around the world instead this month. 00:39:36.080 |
And it does everything that you would have done 00:39:39.840 |
That would, in my mind, involve a lot of creativity. 00:39:43.500 |
- Yeah, so it's actually a beautiful way to put it. 00:39:45.720 |
I think we do try to grasp at the definition of intelligence 00:39:50.740 |
and the definition of intelligence is everything 00:40:01.260 |
And maybe creativity is just one of the things, 00:40:07.060 |
- I don't think we need to be that defensive. 00:40:09.920 |
I don't think anything good comes out of saying, 00:40:18.640 |
Contrary-wise, there are many examples in history 00:40:21.080 |
of where trying to pretend that we're somehow superior 00:40:33.600 |
Nazi Germany, they said that they were somehow superior 00:40:40.120 |
Today, we still do a lot of cruelty to animals 00:41:01.140 |
I don't think we should try to found our self-worth 00:41:16.440 |
and the meaning of life from the experiences that we have. 00:41:27.520 |
even if there are other people who are smarter than me. 00:41:35.840 |
and then I suddenly realize, oh, he has a Nobel Prize, 00:42:10.500 |
if there's machines that are more intelligent, 00:42:13.240 |
you naturally think that that's not going to be 00:42:25.040 |
And they might be clever about certain topics 00:42:27.520 |
and you can have fun having a few drinks with them. 00:42:30.900 |
- Well, also, another example we can all relate to 00:42:37.000 |
of why it doesn't have to be a terrible thing 00:42:45.560 |
I mean, our parents were much more intelligent than us. 00:42:50.700 |
Because their goals were aligned with our goals. 00:42:53.940 |
And that I think is really the number one key issue 00:43:00.740 |
- Value aligned, the value alignment problem, exactly. 00:43:03.100 |
'Cause people who see too many Hollywood movies 00:43:12.160 |
They worry about some machines suddenly turning evil. 00:43:14.920 |
It's not malice that is the concern, it's competence. 00:43:21.360 |
By definition, intelligence makes you very competent. 00:43:30.520 |
computer playing is the less intelligent one. 00:43:35.360 |
as the ability to accomplish the goal of winning, right? 00:43:38.160 |
It's gonna be the more intelligent one that wins. 00:43:40.560 |
And if you have a human and then you have an AGI 00:43:50.440 |
So I was just reading about this particular rhinoceros 00:43:55.440 |
species that was driven extinct just a few years ago. 00:43:59.160 |
And a bummer, I was looking at this cute picture 00:44:05.120 |
and why did we humans drive it to extinction? 00:44:09.400 |
Wasn't because we were evil rhino haters as a whole, 00:44:12.840 |
it was just because our goals weren't aligned 00:44:16.040 |
and it didn't work out so well for the rhinoceros 00:44:27.200 |
we have to make sure that it learns to understand our goals, 00:44:34.240 |
and that it adopts our goals, and it retains those goals. 00:44:40.600 |
is us as human beings trying to formulate our values. 00:44:45.600 |
So you could think of the United States Constitution 00:45:09.480 |
So for the value alignment problem and the solution to it, 00:45:30.200 |
There's the technical value alignment problem, 00:45:33.280 |
of figuring out just how to make machines understand 00:45:46.080 |
And since it's not like we have any great consensus 00:45:49.800 |
what mechanism should we create then to aggregate 00:46:02.000 |
- If we refuse to talk about it and then AGI gets built, 00:46:05.760 |
who's going to be actually making the decision 00:46:08.600 |
It's going to be a bunch of dudes in some tech company. 00:46:11.400 |
Are they necessarily so representative of all of humankind 00:46:23.080 |
to future human happiness just because they're good 00:46:26.560 |
I'd much rather have this be a really inclusive 00:46:32.640 |
so you create a beautiful vision that includes 00:46:38.920 |
and various perspectives on discussing rights, freedoms, 00:46:42.160 |
human dignity, but how hard is it to come to that consensus? 00:46:46.600 |
Do you think it's certainly a really important thing 00:46:54.280 |
- I think there's no better way to guarantee failure 00:46:59.160 |
than to refuse to talk about it or refuse to try. 00:47:02.880 |
And I also think it's a really bad strategy to say, 00:47:05.680 |
okay, let's first have a discussion for a long time. 00:47:08.640 |
And then once we've reached complete consensus, 00:47:13.440 |
No, we shouldn't let perfect be the enemy of good. 00:47:16.600 |
Instead, we should start with the kindergarten ethics 00:47:35.760 |
Yet the September 11 hijackers were able to do that. 00:47:40.920 |
Andreas Lubitz, this depressed Germanwings pilot, 00:47:44.120 |
when he flew his passenger jet into the Alps, 00:47:55.080 |
And even though it had the GPS maps, everything, 00:48:04.440 |
Where the problem is not that we don't agree. 00:48:11.520 |
and make sure that from now on airplanes will just, 00:48:17.000 |
but we'll just refuse to do something like that. 00:48:19.840 |
Go into safe mode, maybe lock the cockpit door, 00:48:26.840 |
in our world as well now, where it's really quite, 00:48:34.200 |
Even in cars, we've had enough vehicle terrorism attacks 00:48:53.640 |
But most of those people don't have the technical expertise 00:48:56.360 |
to figure out how to work around something like that. 00:49:15.960 |
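A toy sketch of the "refuse and go into safe mode" idea described above (the terrain database, sector name, and numbers are all hypothetical): the flight computer checks a commanded altitude against known terrain before accepting it.

```python
# Toy sketch of a machine that refuses clearly harmful commands (all names and
# numbers are hypothetical): check a commanded altitude against a terrain map
# and refuse anything that would fly the aircraft into the ground.

TERRAIN_ELEVATION_M = {"alps_sector_7": 3800}  # made-up terrain database
SAFETY_MARGIN_M = 300

def accept_altitude_command(sector: str, commanded_altitude_m: float) -> bool:
    """Return True only if the commanded altitude clears the terrain."""
    floor = TERRAIN_ELEVATION_M.get(sector, 0) + SAFETY_MARGIN_M
    if commanded_altitude_m < floor:
        print(f"REFUSED: {commanded_altitude_m} m is below safe floor {floor} m; entering safe mode")
        return False
    return True

accept_altitude_command("alps_sector_7", 1500)   # refused
accept_altitude_command("alps_sector_7", 5000)   # accepted
```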
of having these kinds of conversations about, 00:49:24.000 |
- Great, so, but that also means describing these things 00:49:31.320 |
So one thing, we had a few conversations with Stephen Wolfram. 00:49:34.880 |
I'm not sure if you're familiar with Stephen Wolfram. 00:49:38.400 |
- So he has, he works with a bunch of things, 00:49:42.120 |
but cellular automata, these simple computable things, 00:49:49.960 |
we probably have already within these systems 00:50:04.080 |
to try to at least form a question out of this is, 00:50:09.880 |
to think that we can have intelligent systems, 00:50:12.680 |
but we don't know how to describe something to them 00:50:18.680 |
in explainable AI, trying to get AI to explain itself. 00:50:22.040 |
So what are your thoughts of natural language processing 00:50:30.080 |
How do we explain something to it, to machines? 00:50:35.280 |
- So there are two separate parts to your question there. 00:50:42.400 |
which is super interesting, and we'll get to that in a sec. 00:50:53.000 |
I don't think there's anything in any cellular automaton 00:50:56.480 |
or anything or the internet itself or whatever 00:51:24.480 |
to formulating consciousness as information processing. 00:51:29.480 |
You can think of intelligence as information processing. 00:51:33.000 |
You can think of the entire universe as these particles 00:51:41.400 |
You don't think there is something with the power 00:52:11.960 |
the hardware processing power is already out there 00:52:16.920 |
can think of it as being a computer already, right? 00:52:23.840 |
how it evolves the water waves in the River Charles 00:52:28.480 |
Seth Lloyd has pointed out, my colleague here, 00:52:32.960 |
think of our entire universe as being a quantum computer. 00:52:40.360 |
because you can even within this physics computer 00:52:45.040 |
We can even build actually laptops and stuff. 00:52:49.040 |
It's just that most of the compute power that nature has, 00:52:52.080 |
it's in my opinion, kind of wasting on boring stuff 00:52:54.280 |
like simulating yet another ocean wave somewhere 00:53:17.240 |
And even just like computing what's going to happen 00:53:21.120 |
for the next five seconds in this water bottle, 00:53:29.960 |
But that does not mean that this water bottle has AGI 00:53:37.080 |
like I've written my book, done this interview. 00:53:40.240 |
- And I don't think it's just communication problems. 00:53:46.840 |
- Although Buddhists say when they watch the water 00:53:54.880 |
- Communication is also very important though 00:53:56.520 |
because I mean, look, part of my job is being a teacher 00:54:01.240 |
and I know some very intelligent professors even 00:54:06.240 |
who just have a really hard time communicating. 00:54:14.560 |
you have to also be able to simulate their own mind. 00:54:18.360 |
- Build a good enough model of their mind 00:54:20.680 |
that you can say things that they will understand. 00:54:28.280 |
if you have a computer that makes some cancer diagnosis 00:54:40.840 |
and this is my diagnosis, boop, boop, beep, beep. 00:54:45.120 |
Doesn't really instill a lot of confidence, right? 00:54:54.360 |
- So what kind of, I think you're doing a little bit work 00:54:59.360 |
What do you think are the most promising avenues? 00:55:06.680 |
of being able to actually use human interpretable methods 00:55:13.120 |
So being able to talk to a system and it talk back to you, 00:55:16.000 |
or is there some more fundamental problems to be solved? 00:55:23.520 |
but there are also more nerdy fundamental problems. 00:55:53.880 |
- I don't know, my wife has some calculations. 00:56:02.600 |
Check it out, some of them are just mind-blowing. 00:56:12.400 |
You go talk to Demis Hassabis and others from DeepMind, 00:56:34.560 |
And even if you have natural language processing 00:56:43.560 |
So I think there's a whole spectrum of fun challenges 00:56:52.240 |
and transforming it into something equally good, 00:56:57.760 |
equally intelligent, but that's more understandable. 00:57:09.760 |
the power grid, the trading on the stock market, 00:57:14.320 |
it's absolutely crucial that we can trust these AIs 00:57:38.840 |
needs to be based on things you can actually understand, 00:57:41.200 |
preferably even make, preferably even prove theorems on. 00:57:50.680 |
it's less reassuring than if someone actually has a proof. 00:57:56.000 |
but still it says that under no circumstances 00:57:58.840 |
is this car just going to swerve into oncoming traffic. 00:58:02.320 |
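A toy illustration of what "prove theorems on" could look like in miniature (a stand-in for real formal verification; the controller and lane geometry are hypothetical): exhaustively check that a simple, understandable steering rule can never push the car out of its lane from any discretized in-lane state.

```python
# Toy "verification by exhaustive checking" (a stand-in for real formal methods;
# the controller and bounds are hypothetical).

LANE_HALF_WIDTH_M = 1.5   # car may deviate at most this far toward oncoming traffic

def steering_correction(offset_m: float) -> float:
    """Simple proportional controller: steer back toward lane center."""
    return max(-0.5, min(0.5, -0.4 * offset_m))  # clamped correction per step, in meters

def next_offset(offset_m: float) -> float:
    return offset_m + steering_correction(offset_m)

# Check every discretized starting offset inside the lane (1 cm steps):
violations = [
    o / 100
    for o in range(-150, 151)
    if abs(next_offset(o / 100)) > LANE_HALF_WIDTH_M
]
print("property holds for all checked states" if not violations else violations)
```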
- And that kind of information helps build trust 00:58:08.080 |
at least awareness that your goals, your values are aligned. 00:58:16.400 |
This absolutely pathetic state of cybersecurity that we have, 00:58:21.440 |
where is it, three billion Yahoo accounts were hacked, 00:58:26.080 |
almost every American's credit card and so on. 00:58:34.200 |
It's ultimately happening because we have software 00:58:41.280 |
That's why the bugs hadn't been found, right? 00:58:44.880 |
And I think AI can be used very effectively for offense, 00:58:48.640 |
for hacking, but it can also be used for defense, 00:58:55.480 |
and creating systems that are built in different ways 00:59:08.800 |
of course, a bunch of people ask about your paper, 00:59:17.240 |
on deep learning, these kind of simplified models 00:59:20.120 |
of our own brains have been able to do some successful 00:59:27.560 |
and now with alpha zero and so on, do some clever things. 00:59:36.560 |
I think there are a number of very important insights, 00:59:47.160 |
One of them is when you look at the human brain 00:59:48.960 |
and you see it's very complicated, 10 to the 11 neurons, 00:59:51.480 |
and there are all these different kinds of neurons 00:59:53.320 |
and yada, yada, and there's been this long debate 00:59:57.200 |
of different kinds is actually necessary for intelligence. 01:00:00.040 |
We can now, I think, quite convincingly answer 01:00:03.360 |
that question of no, it's enough to have just one kind. 01:00:11.080 |
and it's ridiculously simple, simple mathematical thing. 01:00:17.280 |
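For concreteness, the "ridiculously simple mathematical thing" is, in its standard textbook form, just a weighted sum of inputs passed through a nonlinearity (this is the generic artificial neuron, not a quote from the paper):

```python
import math

# The standard artificial neuron: weighted sum plus bias, passed through a
# simple nonlinearity (here a sigmoid). One kind of unit, repeated many times.

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.2, 0.7, 0.1], [1.5, -2.0, 0.3], 0.5))
```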
it's not, if you have a gas with waves in it, 01:00:20.360 |
it's not the detailed nature of the molecules that matter, 01:00:37.000 |
because it wasn't evolved just to be intelligent, 01:00:45.840 |
and self-repairing and evolutionarily attainable. 01:00:58.340 |
before we fully understand how our brains work. 01:01:00.920 |
Just like we understood how to build flying machines 01:01:04.960 |
long before we were able to build a mechanical bird. 01:01:07.800 |
- Yeah, that's right, you've given the example exactly 01:01:20.920 |
did you see the TED talk with this German mechanical bird? 01:01:29.360 |
because it turned out the way we came up with was simpler 01:01:37.520 |
Another lesson, which is more what our paper was about. 01:01:42.520 |
First, I as a physicist thought it was fascinating 01:01:45.840 |
how there's a very close mathematical relationship 01:01:50.800 |
and a lot of things that we've studied in physics 01:01:50.800 |
And when you look a little more closely at this, 01:02:10.720 |
whoa, there's something crazy here that doesn't make sense. 01:02:13.520 |
'Cause we know that if you even want to build 01:02:21.080 |
tell apart cat pictures and dog pictures, right? 01:02:35.840 |
there's two to the power of 1 million possible images, 01:02:38.760 |
which is way more than there are atoms in our universe. 01:02:40.920 |
So in order to, and then for each one of those, 01:02:51.000 |
is a list of more numbers than there are atoms 01:02:56.760 |
So clearly I can't store that under the hood of my GPU 01:03:04.000 |
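Spelling out the counting argument for a one-megapixel black-and-white image (my numbers, chosen only to make the scales concrete):

```latex
\underbrace{2^{10^{6}}}_{\text{possible } 1000\times1000 \text{ binary images}}
\;\approx\; 10^{301{,}030}
\;\gg\;
\underbrace{\sim 10^{80}}_{\text{atoms in the observable universe}}
```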
Well, it means that out of all of the problems 01:03:07.480 |
that you could try to solve with a neural network, 01:03:28.800 |
that we actually care about given the laws of physics, 01:03:34.880 |
And amazingly, they're basically the same part. 01:03:37.800 |
- Yeah, it's almost like our world was created for, 01:03:41.400 |
- Yeah, but you could say maybe the world created us 01:03:50.360 |
with neural networks precisely for that reason. 01:03:56.040 |
is very, very well adapted to solving the kind of problems 01:04:02.040 |
that nature kept presenting our ancestors with, right? 01:04:05.560 |
So it makes sense that why do we have a brain 01:04:14.240 |
which could never solve it, it wouldn't have evolved. 01:04:22.480 |
We also realize that there's been earlier work 01:04:32.120 |
but we were able to show an additional cool fact there, 01:04:34.760 |
which is that even incredibly simple problems, 01:04:43.440 |
you can write a few lines of code, boom, done, trivial. 01:04:46.740 |
If you just try to do that with a neural network 01:05:01.440 |
more neurons than there are atoms in our universe. 01:05:22.720 |
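An example of the kind of "few lines of code" problem being contrasted with a shallow network here: multiplying n numbers is a one-liner in code, while, as I recall from Tegmark's work with Lin and Rolnick (treat the exact bounds as theirs, not mine), a single-hidden-layer network needs a number of neurons growing exponentially in n, whereas a deep network needs only roughly linearly many.

```python
from functools import reduce
from operator import mul

# The "few lines of code, boom, done" version of a problem that is hard for a
# *shallow* network: multiply n numbers together.

def product(xs):
    return reduce(mul, xs, 1)

print(product([3, 5, 7, 11]))   # 1155
```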
and what are your thoughts about quantum computing 01:05:27.240 |
and the role of this kind of computational unit 01:05:41.080 |
the way they get AGI is building a quantum computer. 01:05:45.520 |
- Because the word quantum sounds cool and so on. 01:05:48.480 |
- First of all, I think we don't need quantum computers 01:06:02.520 |
- I even wrote a paper about that many years ago. 01:06:08.200 |
how long it takes until the quantum computerness 01:07:12.980 |
so that it becomes as good as possible at this thing. 01:07:26.800 |
if you have a very high dimensional landscape, 01:07:37.280 |
If I want to know what's the lowest energy state 01:07:43.560 |
but nature will happily figure this out for you 01:07:57.360 |
And quantum mechanics even uses some clever tricks, 01:08:00.320 |
which today's machine learning systems don't. 01:08:05.240 |
and you get stuck in a little local minimum here, 01:08:24.760 |
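A one-dimensional cartoon of the point about local minima (my own illustration; the annealing step is a classical stand-in for the quantum "tunneling" trick, not a simulation of it): plain gradient descent settles into the shallower well of a double-well landscape, while a search that occasionally accepts uphill moves usually finds the deeper one.

```python
import math
import random

# Double-well landscape: two minima, the one near x = -1 is deeper.
def energy(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

# Plain gradient descent from inside the shallow basin: it settles near x ~ +0.96.
x = 0.9
for _ in range(2000):
    x -= 0.01 * grad(x)
print("gradient descent ends at", round(x, 3), "energy", round(energy(x), 3))

# Simulated annealing (classical analogy for escaping local minima): it can
# accept uphill moves early on and usually ends near the deeper well at x ~ -1.
random.seed(0)
x, temperature = 0.9, 1.0
for _ in range(5000):
    candidate = x + random.gauss(0, 0.2)
    delta = energy(candidate) - energy(x)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature = max(1e-3, temperature * 0.999)
print("annealing ends at", round(x, 3), "energy", round(energy(x), 3))
```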
Okay, so as a component of kind of the learning process, 01:08:29.800 |
- Let me ask, sort of wrapping up here a little bit, 01:08:33.480 |
let me return to the questions of our human nature 01:08:49.160 |
Do you think the way we human beings fall in love 01:09:00.400 |
Do you think we would ever see that kind of connection? 01:09:15.040 |
with the rising sea levels that we'll be able to achieve? 01:09:17.400 |
Or do you think that's something that's ultimately, 01:09:21.800 |
relative to the other goals is not achievable? 01:09:27.640 |
there's a very wide range of guesses, as you know, 01:09:30.880 |
among AI researchers, when we're going to get AGI. 01:09:33.760 |
Some people, you know, like our friend, Rodney Brooks, 01:09:37.680 |
says it's going to be hundreds of years at least. 01:09:58.160 |
I think we shouldn't spend so much time asking, 01:10:10.320 |
Hey, we're the ones creating this future, right? 01:10:22.640 |
it's just some sort of incredibly boring zombie-like future 01:10:25.280 |
where there's all these mechanical things happening 01:10:26.880 |
and there's no passion, no emotion, no experience, maybe even. 01:10:30.040 |
No, I would, of course, much rather prefer it 01:10:40.480 |
are subjective experience, passion, inspiration, love. 01:10:44.520 |
If we can create a future where those things do exist, 01:11:09.200 |
- A lot of people that seriously study this problem 01:11:19.280 |
are the ones that are not beneficial to humanity. 01:11:39.640 |
- No, I don't think panicking is gonna help in any way. 01:11:43.000 |
It's not gonna increase chances of things going well either. 01:11:45.920 |
Even if you are in a situation where there is a real threat, 01:11:58.540 |
First of all, it's important when we think about this thing, 01:12:08.480 |
Everything we love about society and civilization 01:12:15.320 |
with machine intelligence and not anymore lose our loved one 01:12:21.080 |
and things like this, of course, we should aspire to that. 01:12:26.640 |
reminding ourselves that the reason we try to solve problems 01:12:29.120 |
is not just because we're trying to avoid gloom, 01:12:33.520 |
but because we're trying to do something great. 01:12:37.680 |
I think the really important question is to ask, 01:12:50.160 |
I find it quite funny often when I'm in discussion panels 01:12:55.000 |
about these things, how the people who work for companies 01:13:03.340 |
nothing to worry about, nothing to worry about. 01:13:04.980 |
And it's only academics sometimes express concerns. 01:13:09.840 |
That's not surprising at all if you think about it. 01:13:15.320 |
that it's hard to make a man believe in something 01:13:18.160 |
when his income depends on not believing in it. 01:13:20.240 |
And frankly, we know a lot of these people in companies 01:13:24.200 |
that they're just as concerned as anyone else. 01:13:28.600 |
that's not something you want to go on record saying 01:13:31.680 |
who are going to put a picture of a Terminator robot 01:13:47.040 |
first of all, are we going to just dismiss this, the risks, 01:13:51.040 |
and say, well, let's just go ahead and build machines 01:13:54.660 |
that can do everything we can do better and cheaper. 01:13:57.740 |
Let's just make ourselves obsolete as fast as possible. 01:14:14.840 |
What are the shared goals that we can really aspire towards? 01:14:28.360 |
then you can think about the obstacles you want to avoid. 01:14:30.680 |
I often get students coming in right here into my office 01:14:38.080 |
If all she can say is, oh, maybe I'll have cancer. 01:14:42.640 |
- Focus on the obstacles instead of the goals. 01:14:44.400 |
- She's just going to end up a hypochondriac paranoid. 01:14:55.920 |
That's, I think, a much, much healthier attitude. 01:15:01.080 |
And I feel it's very challenging to come up with a vision 01:15:06.080 |
for the future, which we are unequivocally excited about. 01:15:14.840 |
Talking about what kind of society do we want to create? 01:15:18.320 |
What do we want it to mean to be human in the age of AI, 01:15:30.560 |
and gradually start converging towards some future 01:15:45.680 |
if I try to wrap this up in a more succinct way, 01:15:59.160 |
that doesn't overpower us, but that empowers us. 01:16:04.160 |
- And think of the many various ways that can do that, 01:16:14.920 |
that believes human level intelligence is required 01:16:14.920 |
that would actually be something we would enjoy using 01:16:26.320 |
And certainly there's a lot of other types of robots 01:16:31.080 |
So focusing on those and then coming up with the obstacles, 01:16:34.040 |
coming up with the ways that that can go wrong 01:16:38.360 |
- And just because you can build an autonomous vehicle, 01:16:41.720 |
even if you could build one that would drive just fine, 01:16:53.200 |
there's some things that we find very meaningful to do. 01:16:57.360 |
And that doesn't mean we have to stop doing them 01:17:04.240 |
just the day someone built a tennis robot and beat me. 01:17:07.520 |
- People are still playing chess and even go. 01:17:14.200 |
even some people are advocating basic income, replace jobs. 01:17:20.960 |
to just hand out cash to people for doing nothing, 01:17:27.800 |
a lot more teachers and nurses and the kind of jobs 01:17:30.680 |
which people often find great fulfillment in doing. 01:17:34.600 |
I get very tired of hearing politicians saying, 01:17:41.680 |
If we can have more serious research and thought 01:18:07.360 |
And I think for a lot of people in the 20th century, 01:18:10.280 |
going to the moon, going to space was an inspiring thing. 01:18:29.360 |
that it's not funded as greatly as it could be, 01:18:36.640 |
except in the killer bots, Terminator kind of view, 01:18:43.800 |
perhaps excited by the possible positive future 01:18:48.200 |
- And we should be, because politicians usually 01:18:51.000 |
just focus on the next election cycle, right? 01:18:53.360 |
The single most important thing I feel we humans have learned 01:19:07.160 |
again and again, realizing that everything we thought existed 01:19:10.280 |
was just a small part of something grander, right? 01:19:38.600 |
Wouldn't it be kind of lame if all we ever aspired to 01:19:42.720 |
was to stay in Cambridge, Massachusetts forever 01:19:47.200 |
even though Earth was going to continue on for longer? 01:19:52.840 |
on the cosmic scale, life can flourish on Earth, 01:19:57.840 |
not for four years, but for billions of years. 01:20:00.880 |
I can even tell you about how to move it out of harm's way 01:20:04.520 |
And then we have so much more resources out here, 01:20:09.520 |
which today, maybe there are a lot of other planets 01:20:19.560 |
seems as far as we can tell to be largely dead, 01:20:23.640 |
And yet we have the opportunity to help life flourish 01:20:44.120 |
- Yeah, and that's, I think, why it's really exciting 01:20:51.960 |
because he's literally going out into that space, 01:20:54.560 |
really exploring our universe, and it's wonderful. 01:20:57.080 |
- That is exactly why Elon Musk is so misunderstood, right? 01:21:02.080 |
Misconstrued him as some kind of pessimistic doomsayer. 01:21:11.000 |
appreciates these amazing opportunities that we'll squander 01:21:16.720 |
And we're not just gonna wipe out the next generation, 01:21:20.680 |
and this incredible opportunity that's out there, 01:21:29.520 |
that it would be better to do without technology, 01:21:32.600 |
let me just mention that if we don't improve our technology, 01:21:36.440 |
the question isn't whether humanity is gonna go extinct. 01:21:39.440 |
The question is just whether we're gonna get taken out 01:21:41.200 |
by the next big asteroid, or the next super volcano, 01:21:44.840 |
or something else dumb that we could easily prevent 01:21:49.880 |
And if we want life to flourish throughout the cosmos, 01:21:54.800 |
As I mentioned in a lot of detail in my book right there, 01:21:59.880 |
even many of the most inspired sci-fi writers, 01:22:04.880 |
I feel, have totally underestimated the opportunities 01:22:09.160 |
for space travel, especially to other galaxies, 01:22:11.240 |
because they weren't thinking about the possibility of AGI, 01:22:18.440 |
So that goes to your view of AGI that enables our progress,