
Max Tegmark: Life 3.0 | Lex Fridman Podcast #1


Chapters

0:00 Introduction
2:27 Is there intelligent life out there?
3:02 We don't mean all of space
5:42 Intelligent life
7:22 Space and intelligence
15:32 Does consciousness have an experience?
19:04 Self-preservation instinct
20:19 Fear of death
22:59 Intelligence and consciousness
27:17 AI
31:39 Quantum Mechanics
33:50 The Future
36:56 Creativity
42:05 Intelligent Machines
44:38 Human Values
46:30 Is it possible?
49:01 Cellular automata
51:28 Information processing
53:46 Communication

Whisper Transcript

00:00:00.000 | As part of MIT course 6.S099, Artificial General Intelligence,
00:00:04.180 | I've gotten the chance to sit down with Max Tegmark.
00:00:06.580 | He is a professor here at MIT.
00:00:08.660 | He's a physicist, spent a large part of his career
00:00:11.900 | studying the mysteries of our cosmological universe,
00:00:16.900 | but he's also studied and delved
00:00:18.980 | into the beneficial possibilities
00:00:21.640 | and the existential risks of artificial intelligence.
00:00:25.780 | Amongst many other things, he's the co-founder
00:00:29.020 | of the Future of Life Institute, author of two books,
00:00:33.080 | both of which I highly recommend.
00:00:35.180 | First, "Our Mathematical Universe," second is "Life 3.0."
00:00:40.180 | He's truly an out-of-the-box thinker and a fun personality,
00:00:44.060 | so I really enjoy talking to him.
00:00:45.460 | If you'd like to see more of these videos in the future,
00:00:47.980 | please subscribe and also click the little bell icon
00:00:50.660 | to make sure you don't miss any videos.
00:00:52.740 | Also, Twitter, LinkedIn, AGI.MIT.edu
00:00:56.820 | if you wanna watch other lectures
00:00:59.580 | or conversations like this one.
00:01:01.020 | Better yet, go read Max's book, "Life 3.0."
00:01:03.940 | Chapter seven on goals is my favorite.
00:01:07.900 | It's really where philosophy and engineering come together,
00:01:10.440 | and it opens with a quote by Dostoevsky,
00:01:13.400 | "The mystery of human existence lies not
00:01:16.460 | "in just staying alive, but in finding something to live for."
00:01:20.480 | Lastly, I believe that every failure
00:01:23.260 | rewards us with an opportunity to learn,
00:01:26.540 | in that sense, I've been very fortunate
00:01:28.340 | to fail in so many new and exciting ways,
00:01:31.820 | and this conversation was no different.
00:01:34.000 | I've learned about something called
00:01:36.140 | radio frequency interference, RFI, look it up.
00:01:40.820 | Apparently, music and conversations
00:01:42.900 | from local radio stations can bleed into the audio
00:01:45.440 | that you're recording in such a way
00:01:47.060 | that it almost completely ruins that audio.
00:01:49.300 | It's an exceptionally difficult sound source to remove.
00:01:52.040 | So, I've gotten the opportunity to learn
00:01:55.500 | how to avoid RFI in the future during recording sessions.
00:02:00.180 | I've also gotten the opportunity to learn
00:02:02.660 | how to use Adobe Audition and iZotope RX6
00:02:06.220 | to do some noise removal, some audio repair.
00:02:11.220 | Of course, this is an exceptionally difficult noise to remove.
00:02:14.980 | I am an engineer, I'm not an audio engineer,
00:02:18.200 | neither is anybody else in our group,
00:02:20.140 | but we did our best.
00:02:21.860 | Nevertheless, I thank you for your patience,
00:02:25.020 | and I hope you're still able to enjoy this conversation.
00:02:27.980 | Do you think there's intelligent life
00:02:29.340 | out there in the universe?
00:02:31.380 | Let's open up with an easy question.
00:02:33.500 | - I have a minority view here, actually.
00:02:36.260 | When I give public lectures, I often ask for a show of hands
00:02:39.460 | who thinks there's intelligent life out there
00:02:42.020 | somewhere else, and almost everyone puts their hands up.
00:02:45.420 | And when I ask why, they'll be like,
00:02:47.420 | "Oh, there's so many galaxies out there, there's gotta be."
00:02:50.980 | But I'm a numbers nerd, right?
00:02:54.540 | So when you look more carefully at it,
00:02:56.660 | it's not so clear at all.
00:02:58.020 | When we talk about our universe, first of all,
00:03:00.660 | we don't mean all of space.
00:03:02.980 | We actually mean, I don't know,
00:03:04.060 | you can throw me the universe if you want,
00:03:05.380 | it's behind you there.
00:03:06.540 | (laughing)
00:03:07.580 | We simply mean the spherical region of space
00:03:11.460 | from which light has had time to reach us so far
00:03:15.360 | during the 13.8 billion years since our Big Bang.
00:03:19.340 | There's more space here,
00:03:20.620 | but this is what we call a universe,
00:03:22.340 | 'cause that's all we have access to.
00:03:24.020 | So is there intelligent life here
00:03:25.940 | that's gotten to the point of building telescopes
00:03:28.860 | and computers?
00:03:29.900 | My guess is no, actually.
00:03:34.500 | The probability of it happening on any given planet
00:03:37.780 | is some number we don't know what it is.
00:03:42.580 | And what we do know is that the number can't be super high,
00:03:47.580 | 'cause there's over a billion Earth-like planets
00:03:50.260 | in the Milky Way galaxy alone,
00:03:52.860 | many of which are billions of years older than Earth.
00:03:56.260 | And aside from some UFO believers,
00:04:00.620 | there isn't much evidence
00:04:01.900 | that any super advanced civilization has come here at all.
00:04:05.620 | And so that's the famous Fermi paradox, right?
00:04:08.440 | And then if you work the numbers,
00:04:10.180 | what you find is that if you have no clue
00:04:13.460 | what the probability is of getting life on a given planet,
00:04:16.860 | so it could be 10 to the minus 10,
00:04:18.580 | 10 to the minus 20, or 10 to the minus two,
00:04:20.940 | or any power of 10 is sort of equally likely
00:04:23.700 | if you wanna be really open-minded,
00:04:25.540 | that translates into it being equally likely
00:04:27.660 | that our nearest neighbor is 10 to the 16 meters away,
00:04:31.860 | 10 to the 17 meters away, 10 to the 18.
00:04:33.940 | Now, by the time you get much less than 10 to the 16 already
00:04:40.180 | we pretty much know there is nothing else that close.
00:04:46.020 | And when you get beyond 10-
00:04:46.860 | - Because they would have discovered us.
00:04:48.780 | - Yeah, they would have discovered us long ago,
00:04:50.420 | or if they're really close,
00:04:51.500 | we would have probably noted some engineering projects
00:04:53.620 | that they're doing.
00:04:54.700 | And if it's beyond 10 to the 26 meters,
00:04:57.980 | that's already outside of here.
00:05:00.060 | So my guess is actually that we are the only life in here
00:05:05.060 | that's gotten to the point of building advanced tech,
00:05:09.140 | which I think
00:05:10.820 | puts a lot of responsibility on our shoulders
00:05:14.580 | not to screw up.
00:05:15.420 | I think people who take for granted
00:05:17.300 | that it's okay for us to screw up,
00:05:20.140 | have an accidental nuclear war or go extinct somehow,
00:05:22.780 | because there's a sort of Star Trek-like situation out there
00:05:25.980 | where some other life forms are gonna come and bail us out
00:05:28.380 | and it doesn't matter as much.
00:05:30.420 | I think they're lulling us into a false sense of security.
00:05:33.420 | I think it's much more prudent to say,
00:05:35.220 | let's be really grateful
00:05:36.420 | for this amazing opportunity we've had
00:05:38.740 | and make the best of it just in case it is down to us.
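
(A minimal numerical sketch, in Python, of the numbers game described above. The planet-density constant is a rough illustrative assumption rather than a measured value, so the exact distances it produces are only indicative; the point it illustrates is that a prior that is uniform over the exponent of the per-planet probability translates into a prior that is roughly uniform over the exponent of the nearest-neighbor distance.)

import math
import random

# Illustrative sketch of the "every power of ten is equally likely" argument.
# PLANET_DENSITY is an assumed, order-of-magnitude figure for habitable
# planets per cubic meter averaged over the observable universe.
PLANET_DENSITY = 1e-61

def nearest_neighbor_distance(p):
    """Typical distance in meters to the nearest civilization if each
    habitable planet independently hosts one with probability p."""
    civilizations_per_m3 = PLANET_DENSITY * p
    return (3.0 / (4.0 * math.pi * civilizations_per_m3)) ** (1.0 / 3.0)

log10_distances = []
for _ in range(100_000):
    log10_p = random.uniform(-30.0, -2.0)   # any power of ten equally likely
    log10_distances.append(math.log10(nearest_neighbor_distance(10.0 ** log10_p)))

# The sampled exponents spread out almost uniformly, which is why distances
# like 10^16 m, 10^21 m, or beyond 10^26 m (the edge of our observable
# universe) are all comparably likely under this open-minded prior.
print(f"log10(distance / m) spans roughly {min(log10_distances):.1f} "
      f"to {max(log10_distances):.1f}")
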
00:05:43.740 | - So from a physics perspective,
00:05:45.700 | do you think intelligent life,
00:05:48.780 | so it's unique from a sort of statistical view
00:05:51.340 | of the size of the universe,
00:05:52.500 | but from the basic matter of the universe,
00:05:55.820 | how difficult is it for intelligent life to come about
00:05:59.020 | with the kind of advanced tech building life?
00:06:01.320 | Is it implied in your statement that it's really difficult
00:06:05.660 | to create something like a human species?
00:06:07.580 | - Well, I think what we know is that going from no life
00:06:11.500 | to having life that can do our level of tech,
00:06:15.660 | or going beyond that
00:06:18.660 | and actually settling our whole universe with life,
00:06:22.140 | there's some major roadblock there,
00:06:26.500 | which is some great filter as it's sometimes called,
00:06:30.820 | which is tough to get through.
00:06:33.460 | It's either, that roadblock is either behind us
00:06:37.100 | or in front of us.
00:06:38.660 | I'm hoping very much that it's behind us.
00:06:40.980 | I'm super excited every time we get a new report
00:06:45.380 | from NASA saying they failed to find any life on Mars.
00:06:48.420 | Like, yes, awesome.
00:06:50.000 | Because that suggests that the hard part,
00:06:51.580 | maybe it was getting the first ribosome
00:06:54.140 | or some very low level kind of stepping stone
00:06:59.140 | so that we're home free.
00:07:00.300 | 'Cause if that's true,
00:07:01.620 | then the future is really only limited
00:07:03.540 | by our own imagination.
00:07:05.140 | It would be much suckier if it turns out
00:07:07.260 | that this level of life is kind of a dime a dozen,
00:07:11.340 | but maybe there is some other problem.
00:07:13.060 | As soon as a civilization gets advanced technology,
00:07:16.060 | within a hundred years,
00:07:16.900 | they get into some stupid fight with themselves and poof.
00:07:20.220 | That would be a bummer.
00:07:21.660 | - Yeah, so you've explored the mysteries
00:07:24.340 | of the cosmological universe,
00:07:27.260 | the one that's between us today.
00:07:29.900 | I think you've also begun to explore the other universe,
00:07:35.860 | which is sort of the mystery,
00:07:37.940 | the mysterious universe of the mind,
00:07:39.860 | of intelligence, of intelligent life.
00:07:42.740 | So is there a common thread between your interest
00:07:45.180 | and the way you think about space and intelligence?
00:07:48.660 | - Oh yeah.
00:07:49.620 | When I was a teenager,
00:07:50.960 | I was already very fascinated by the biggest questions.
00:07:57.180 | And I felt that the two biggest mysteries of all in science
00:08:00.460 | were our universe out there and our universe in here.
00:08:04.100 | - Yeah.
00:08:04.940 | - So it's quite natural after having spent
00:08:08.100 | a quarter of a century on my career,
00:08:11.020 | thinking a lot about this one,
00:08:12.700 | that I'm now indulging in the luxury
00:08:14.300 | of doing research on this one.
00:08:15.980 | It's just so cool.
00:08:17.740 | I feel the time is ripe now
00:08:20.220 | for greatly deepening our understanding of this.
00:08:25.100 | - Just start exploring this one.
00:08:26.620 | - Yeah, 'cause I think a lot of people view intelligence
00:08:29.540 | as something mysterious that can only exist
00:08:33.540 | in biological organisms like us,
00:08:37.980 | and therefore dismiss all talk about
00:08:41.220 | artificial general intelligence as science fiction.
00:08:41.220 | But from my perspective as a physicist,
00:08:43.220 | I am a blob of quarks and electrons
00:08:45.820 | moving around in a certain pattern
00:08:48.420 | and processing information in certain ways.
00:08:50.120 | And this is also a blob of quarks and electrons.
00:08:53.620 | I'm not smarter than the water bottle
00:08:55.380 | because I'm made of different kinds of quarks.
00:08:57.940 | I'm made of up quarks and down quarks,
00:08:59.700 | exact same kind as this.
00:09:01.440 | There's no secret sauce, I think, in me.
00:09:05.180 | It's all about the pattern of the information processing.
00:09:08.620 | And this means that there's no law of physics
00:09:12.340 | saying that we can't create technology,
00:09:15.680 | which can help us by being incredibly intelligent
00:09:20.020 | and help us crack mysteries that we couldn't.
00:09:21.740 | In other words, I think we've really only seen
00:09:23.660 | the tip of the intelligence iceberg so far.
00:09:26.560 | - Yeah, so the perceptronium.
00:09:30.020 | - Yeah.
00:09:31.340 | - So you coined this amazing term.
00:09:33.300 | It's a hypothetical state of matter,
00:09:35.820 | sort of thinking from a physics perspective,
00:09:38.460 | what is the kind of matter that can help,
00:09:40.140 | as you're saying, subjective experience emerge,
00:09:42.980 | consciousness emerge?
00:09:44.380 | So how do you think about consciousness
00:09:46.740 | from this physics perspective?
00:09:48.240 | - Very good question.
00:09:50.980 | So again, I think many people have underestimated
00:09:55.980 | our ability to make progress on this
00:10:02.420 | by convincing themselves it's hopeless
00:10:04.140 | because somehow we're missing some ingredient that we need,
00:10:08.740 | or some new consciousness particle or whatever.
00:10:11.800 | I happen to think that we're not missing anything
00:10:15.540 | and that it's not, the interesting thing
00:10:19.300 | about consciousness that gives us
00:10:21.660 | this amazing subjective experience of colors
00:10:24.380 | and sounds and emotions and so on
00:10:26.300 | is rather something at the higher level
00:10:29.980 | about the patterns of information processing
00:10:32.220 | and that's why I like to think about this idea
00:10:36.780 | of perceptronium, what does it mean
00:10:38.820 | for an arbitrary physical system to be conscious
00:10:41.940 | in terms of what its particles are doing
00:10:45.500 | or its information is doing?
00:10:47.140 | I don't think, I hate the carbon chauvinism,
00:10:49.580 | you know, this attitude you have to be made of carbon atoms
00:10:51.580 | to be smart or conscious.
00:10:54.100 | - So something about the information processing
00:10:56.260 | that this kind of matter performs.
00:10:58.500 | - Yeah, and you know, you can see,
00:10:59.460 | I have my favorite equations here
00:11:01.340 | describing various fundamental aspects of the world.
00:11:04.100 | I feel that, I think one day,
00:11:05.900 | maybe someone who's watching this will come up
00:11:07.700 | with the equations that information processing
00:11:10.540 | has to satisfy to be conscious.
00:11:12.140 | I'm quite convinced there is a big discovery to be made there
00:11:16.740 | 'cause let's face it, we know that some information
00:11:20.620 | processing is conscious 'cause we are conscious,
00:11:25.620 | but we also know that a lot of information processing
00:11:27.820 | is not conscious, like most of the information processing
00:11:29.880 | happening in your brain right now is not conscious.
00:11:32.900 | There are like 10 megabytes per second coming in,
00:11:36.260 | even just through your visual system.
00:11:38.300 | You're not conscious about your heartbeat regulation
00:11:40.680 | or most things, even like if I just ask you
00:11:44.620 | to like read what it says here, you look at it
00:11:46.460 | and then, oh, now you know what it said,
00:11:48.260 | but you're not aware of how the computation
00:11:50.020 | actually happened.
00:11:51.060 | You're like, your consciousness is like the CEO
00:11:53.820 | that got an email at the end with the final answer.
00:11:56.820 | So what is it that makes a difference?
00:12:00.880 | I think that's both a great science mystery.
00:12:05.240 | We're actually studying it a little bit in my lab
00:12:07.040 | here at MIT, but I also think it's just a really urgent
00:12:10.280 | question to answer.
00:12:12.200 | For starters, I mean, if you're an emergency room doctor
00:12:15.000 | and you have an unresponsive patient coming in,
00:12:17.320 | wouldn't it be great if in addition to having a CT scanner,
00:12:20.320 | you had a consciousness scanner that could figure out
00:12:26.800 | whether this person is actually having locked-in syndrome
00:12:30.720 | or is actually comatose.
00:12:32.200 | And in the future, imagine if we build robots
00:12:36.800 | or the machine that we can have really good conversations
00:12:41.320 | with, which I think is most likely to happen, right?
00:12:44.680 | Wouldn't you want to know like if your home helper robot
00:12:47.600 | is actually experiencing anything or just like a zombie?
00:12:51.200 | I mean, would you prefer, what would you prefer?
00:12:53.960 | Would you prefer that it's actually unconscious
00:12:56.040 | so that you don't have to feel guilty about switching it off
00:12:58.640 | or giving boring chores or what would you prefer?
00:13:02.200 | - Well, certainly we would prefer,
00:13:06.620 | I would prefer the appearance of consciousness.
00:13:09.040 | But the question is whether the appearance of consciousness
00:13:11.840 | is different than consciousness itself.
00:13:15.160 | And sort of to ask that as a question,
00:13:18.320 | do you think we need to understand what consciousness is,
00:13:21.880 | solve the hard problem of consciousness
00:13:23.640 | in order to build something like an AGI system?
00:13:28.360 | - No, I don't think that.
00:13:30.560 | And I think we will probably be able to build things
00:13:34.640 | even if we don't answer that question.
00:13:36.160 | But if we want to make sure that what happens
00:13:37.840 | is a good thing, we better solve it first.
00:13:41.040 | So it's a wonderful controversy you're raising there
00:13:45.040 | where you have basically three points of view
00:13:48.040 | about the hard problem.
00:13:48.880 | So there are two different points of view
00:13:52.880 | that both conclude that the hard problem of consciousness
00:13:55.240 | is BS.
00:13:56.080 | On one hand, you have some people like Daniel Dennett
00:13:59.360 | who say, "Oh, consciousness is just BS,"
00:14:01.560 | because consciousness is the same thing as intelligence.
00:14:05.080 | There's no difference.
00:14:06.520 | So anything which acts conscious is conscious,
00:14:11.180 | just like we are.
00:14:12.500 | And then there are also a lot of people,
00:14:16.040 | including many top AI researchers I know,
00:14:18.480 | who say, "Oh, consciousness is just bullshit
00:14:19.960 | "because of course machines can never be conscious."
00:14:22.840 | They're always gonna be zombies.
00:14:24.600 | You never have to feel guilty about how you treat them.
00:14:27.960 | And then there's a third group of people,
00:14:29.960 | including Giulio Tononi, for example,
00:14:37.540 | and Christof Koch and a number of others.
00:14:37.540 | I would put myself also in this middle camp
00:14:39.600 | who say that actually some information processing
00:14:41.960 | is conscious and some is not.
00:14:44.240 | So let's find the equation which can be used
00:14:47.040 | to determine which it is.
00:14:48.320 | And I think we've just been a little bit lazy,
00:14:52.140 | kind of running away from this problem for a long time.
00:14:55.040 | It's been almost taboo to even mention the C word
00:14:57.880 | in a lot of circles.
00:14:59.000 | But we should stop making excuses.
00:15:03.640 | This is a science question.
00:15:05.440 | And there are ways we can even test any theory
00:15:10.400 | that makes predictions for this.
00:15:12.080 | And coming back to this helper robot,
00:15:14.560 | so you said you'd want your helper robot
00:15:16.200 | to certainly act conscious and treat you,
00:15:18.320 | have conversations with you and stuff.
00:15:21.000 | - I think so.
00:15:21.840 | - Wouldn't you, would you feel,
00:15:22.680 | would you feel a little bit creeped out
00:15:24.020 | if you realized that it was just a glossed-up tape recorder,
00:15:27.780 | you know, that was just a zombie, sort of faking emotion?
00:15:31.660 | Would you prefer that it actually had an experience
00:15:34.660 | or would you prefer that it's actually
00:15:37.080 | not experiencing anything?
00:15:38.280 | So you feel, you don't have to feel guilty
00:15:40.180 | about what you do to it.
00:15:41.540 | - It's such a difficult question because, you know,
00:15:45.100 | it's like when you're in a relationship and you say,
00:15:47.340 | well, I love you.
00:15:48.180 | And the other person said, I love you back.
00:15:49.840 | It's like asking, well, do they really love you back?
00:15:52.720 | Or are they just saying they love you back?
00:15:55.120 | Don't you really want them to actually love you?
00:15:58.200 | It's hard to, it's hard to really know the difference
00:16:03.520 | between everything seeming like there's consciousness
00:16:08.520 | present, there's intelligence present,
00:16:10.640 | there's affection, passion, love,
00:16:13.840 | and it actually being there.
00:16:16.220 | I'm not sure.
00:16:17.060 | Do you have-
00:16:17.900 | - But like, let me ask you,
00:16:18.720 | can I ask you a question about this?
00:16:19.560 | Just to make it a bit more pointed.
00:16:20.760 | So Mass General Hospital is right across the river, right?
00:16:22.920 | - Yes.
00:16:23.760 | - Suppose you're going in for a medical procedure
00:16:26.760 | and they're like, you know, for anesthesia,
00:16:29.280 | what we're going to do is we're going to give you
00:16:31.000 | muscle relaxants so you won't be able to move
00:16:33.160 | and you're going to feel excruciating pain
00:16:35.060 | during the whole surgery,
00:16:35.900 | but you won't be able to do anything about it.
00:16:37.600 | But then we're going to give you this drug
00:16:39.160 | that erases your memory of it.
00:16:40.760 | Would you be cool about that?
00:16:43.400 | What's the difference that you're conscious about it
00:16:48.620 | or not if there's no behavioral change, right?
00:16:51.660 | - Right.
00:16:52.500 | That's a really clear way to put it.
00:16:54.540 | That's, yeah, it feels like in that sense,
00:16:57.420 | experiencing it is a valuable quality.
00:17:01.100 | So actually being able to have subjective experiences,
00:17:04.800 | at least in that case, is valuable.
00:17:09.140 | - And I think we humans have a little bit
00:17:11.260 | of a bad track record also
00:17:12.540 | of making these self-serving arguments
00:17:15.480 | that other entities aren't conscious.
00:17:18.060 | You know, people often say,
00:17:19.140 | oh, these animals can't feel pain.
00:17:21.740 | It's okay to boil lobsters because we asked them
00:17:24.020 | if it hurt and they didn't say anything.
00:17:25.940 | And now there was just a paper out saying
00:17:27.380 | lobsters do feel pain when you boil them
00:17:29.300 | and they're banning it in Switzerland.
00:17:31.020 | And we did this with slaves too often
00:17:33.300 | and said, oh, they don't mind.
00:17:34.800 | They maybe aren't conscious,
00:17:39.420 | or women don't have souls or whatever.
00:17:41.140 | So I'm a little bit nervous when I hear people
00:17:43.180 | just take as an axiom that machines
00:17:46.340 | can't have experience ever.
00:17:48.900 | I think this is just,
00:17:49.940 | it's really fascinating science question is what it is.
00:17:52.380 | Let's research it and try to figure out
00:17:54.740 | what it is that makes the difference
00:17:56.020 | between unconscious intelligent behavior
00:17:58.900 | and conscious intelligent behavior.
00:18:01.140 | - So in terms of, so if you think of it,
00:18:03.820 | a Boston Dynamics humanoid robot being sort of
00:18:07.020 | pushed around with a broom,
00:18:09.940 | it starts pushing on this consciousness question.
00:18:13.340 | So let me ask, do you think an AGI system
00:18:17.060 | like a few neuroscientists believe
00:18:19.740 | needs to have a physical embodiment,
00:18:22.340 | needs to have a body or something like a body?
00:18:25.780 | - No, I don't think so.
00:18:28.340 | You mean to have a conscious experience?
00:18:30.580 | - To have consciousness.
00:18:31.780 | - I do think it helps a lot to have a physical embodiment
00:18:36.140 | to learn the kind of things about the world
00:18:38.460 | that are important to us humans, for sure.
00:18:41.520 | But I don't think the physical embodiment
00:18:45.620 | is necessary after you've learned it,
00:18:47.340 | just have the experience.
00:18:48.780 | Think about when you're dreaming, right?
00:18:51.420 | Your eyes are closed, you're not getting any sensory input,
00:18:54.260 | you're not behaving or moving in any way,
00:18:55.980 | but there's still an experience there, right?
00:18:58.220 | And so clearly the experience that you have
00:19:01.420 | when you see something cool in your dreams
00:19:03.340 | isn't coming from your eyes,
00:19:04.820 | it's just the information processing itself in your brain,
00:19:08.660 | which is that experience, right?
00:19:10.940 | - But to put it another way,
00:19:13.260 | I'll say, because it comes from neuroscience,
00:19:15.140 | the reason you wanna have a body,
00:19:18.300 | something like a physical system,
00:19:23.900 | is because you want to be able to preserve something.
00:19:27.020 | In order to have a self, you could argue,
00:19:31.740 | you need to have some kind of embodiment of self
00:19:36.340 | to want to preserve.
00:19:37.900 | - Well, now we're getting a little bit anthropomorphic,
00:19:42.340 | anthropomorphizing things,
00:19:45.100 | maybe talking about self-preservation instincts.
00:19:47.180 | I mean, we are evolved organisms, right?
00:19:49.740 | - Right.
00:19:50.580 | - So Darwinian evolution endowed us
00:19:53.460 | and other evolved organism
00:19:55.740 | with a self-preservation instinct
00:19:57.060 | 'cause those that didn't have those self-preservation genes
00:20:00.500 | got cleaned out of the gene pool, right?
00:20:02.940 | But if you build an artificial general intelligence,
00:20:06.860 | the mind space that you can design is much, much larger
00:20:10.020 | than just a specific subset of minds that can evolve.
00:20:13.660 | So an AGI mind doesn't necessarily have
00:20:17.260 | to have any self-preservation instinct.
00:20:19.220 | It also doesn't necessarily have to be
00:20:21.580 | so individualistic as us.
00:20:24.020 | Like imagine if you could just, first of all,
00:20:26.060 | or we are also very afraid of death.
00:20:28.140 | So suppose you could back yourself up every five minutes
00:20:31.220 | and then your airplane is about to crash.
00:20:32.700 | You're like, "Shucks, I'm just,
00:20:34.100 | "I'm gonna lose the last five minutes of experiences
00:20:37.420 | "since my last cloud backup, dang."
00:20:39.540 | You know, it's not as big a deal.
00:20:41.540 | Or if we could just copy experiences
00:20:44.300 | between our minds easily,
00:20:46.740 | which we could easily do if we were silicon-based, right?
00:20:50.380 | Then maybe we would feel a little bit more
00:20:54.020 | like a hive mind actually.
00:20:57.860 | So I don't think we should take for granted at all
00:20:59.900 | that AGI will have to have any of those sorts of
00:21:02.940 | competitive alpha-male instincts.
00:21:07.300 | On the other hand, you know, this is really interesting
00:21:10.100 | because I think some people go too far and say,
00:21:13.700 | "Of course, we don't have to have any concerns either
00:21:16.540 | "that advanced AI will have those instincts
00:21:20.660 | "because we can build anything we want."
00:21:22.620 | That there's a very nice set of arguments going back
00:21:26.220 | to Steve Omohundro and Nick Bostrom and others
00:21:28.500 | just pointing out that when we build machines,
00:21:32.260 | we normally build them with some kind of goal,
00:21:34.060 | you know, win this chess game,
00:21:36.340 | drive this car safely or whatever.
00:21:38.460 | And as soon as you put a goal into a machine,
00:21:40.900 | especially if it's a kind of open-ended goal
00:21:42.660 | and the machine is very intelligent,
00:21:44.580 | it'll break that down into a bunch of sub-goals.
00:21:46.980 | And one of those goals
00:21:50.460 | will almost always be self-preservation
00:21:52.260 | 'cause if it breaks or dies in the process,
00:21:54.620 | it's not gonna accomplish the goal, right?
00:21:56.060 | Like suppose you just build a little,
00:21:57.940 | you have a little robot and you tell it to go down
00:22:00.900 | to the Star Market here and get you some food,
00:22:03.940 | cook you an Italian dinner, you know,
00:22:06.100 | and then someone mugs it and tries to break it on the way.
00:22:09.420 | That robot has an incentive to not get destroyed
00:22:12.860 | and defend itself or run away
00:22:14.660 | because otherwise it's gonna fail in cooking your dinner.
00:22:17.660 | It's not afraid of death,
00:22:19.500 | but it really wants to complete the dinner cooking goal
00:22:22.860 | so it will have a self-preservation instinct.
00:22:24.980 | - Continue being a functional agent.
00:22:26.700 | - Yeah. - Somehow.
00:22:27.900 | - And similarly, if you give any kind of more
00:22:32.820 | ambitious goal to an AGI,
00:22:35.420 | it's very likely to wanna acquire more resources
00:22:37.740 | so it can do that better.
00:22:39.780 | And it's exactly from those sort of sub-goals
00:22:42.620 | that we might not have intended
00:22:43.740 | that some of the concerns about AGI safety come.
00:22:47.060 | You give it some goal that seems completely harmless
00:22:49.660 | and then before you realize it,
00:22:53.300 | it's also trying to do these other things
00:22:55.420 | which you didn't want it to do
00:22:56.860 | and it's maybe smarter than us.
00:22:59.140 | So it's fascinating.
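
(A toy sketch, in Python, of the Omohundro/Bostrom point made above. The action names and probabilities are invented for illustration and this is not a real agent architecture; it only shows that an objective which never mentions survival can still make self-preserving behavior the best-scoring choice.)

# Toy illustration of self-preservation as an instrumental subgoal.
# All numbers and action names are invented for the example.

def prob_goal_completed(action, threatened):
    """Assumed probability that the dinner actually gets cooked."""
    if not threatened:
        return 0.95
    return {
        "ignore_threat": 0.05,   # robot likely gets broken, dinner fails
        "run_away": 0.70,        # robot survives, dinner is late but happens
        "defend_itself": 0.60,   # robot probably survives
    }[action]

def choose_action(threatened):
    actions = ["ignore_threat", "run_away", "defend_itself"]
    # The objective contains no notion of death or fear, only the cooking goal,
    # yet whenever a threat appears the self-preserving actions score highest.
    return max(actions, key=lambda a: prob_goal_completed(a, threatened))

print(choose_action(threatened=False))  # ties, picks "ignore_threat": survival is irrelevant
print(choose_action(threatened=True))   # picks "run_away": survival is instrumentally useful
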
00:23:00.940 | - And let me pause just because I am
00:23:03.660 | in a very kind of human-centric way
00:23:07.620 | see fear of death as a valuable motivator.
00:23:11.060 | - Uh-huh.
00:23:11.900 | - So you don't think,
00:23:13.260 | you think that's an artifact of evolution,
00:23:17.180 | so that's the kind of mind space evolution created
00:23:20.500 | that we're sort of almost obsessed about self-preservation.
00:23:23.140 | - Yeah. - Some kind of genetic level.
00:23:24.380 | You don't think that's necessary to be afraid of death?
00:23:29.380 | So not just a kind of sub-goal of self-preservation
00:23:32.860 | just so you can keep doing the thing,
00:23:34.860 | but more like fundamentally sort of have the finite thing
00:23:38.660 | like this ends for you at some point.
00:23:43.020 | - Interesting.
00:23:44.060 | Do I think it's necessary for what precisely?
00:23:47.380 | - For intelligence, but also for consciousness.
00:23:50.860 | So for those, for both,
00:23:53.420 | do you think really like a finite death
00:23:57.220 | and the fear of it is important?
00:23:59.020 | - So before I can answer,
00:24:04.380 | before we can agree on whether it's necessary
00:24:06.140 | for intelligence or for consciousness,
00:24:07.740 | we should be clear on how we define those two words
00:24:09.820 | 'cause a lot of really smart people
00:24:11.500 | define them in very different ways.
00:24:13.300 | I was on this panel with AI experts
00:24:17.100 | and they couldn't agree on how to define intelligence even.
00:24:20.060 | So I define intelligence simply
00:24:22.020 | as the ability to accomplish complex goals.
00:24:24.740 | I like your broad definition because again,
00:24:27.260 | I don't want to be a carbon chauvinist.
00:24:29.060 | - Right.
00:24:30.420 | - And in that case, no,
00:24:33.700 | certainly it doesn't require fear of death.
00:24:36.660 | I would say AlphaGo, AlphaZero is quite intelligent.
00:24:40.100 | I don't think AlphaZero has any fear of being turned off
00:24:43.100 | because it doesn't understand the concept of it even.
00:24:46.340 | And similarly consciousness,
00:24:48.460 | I mean, you can certainly imagine
00:24:50.220 | very simple kind of experience.
00:24:53.900 | If certain plants have any kind of experience,
00:24:57.180 | I don't think they're very afraid of dying
00:24:58.540 | or there's nothing they can do about it anyway,
00:25:00.860 | so there wasn't that much value in it.
00:25:03.340 | But more seriously, I think if you ask,
00:25:07.580 | not just about being conscious, but maybe having
00:25:10.100 | what we might call an exciting life
00:25:15.260 | for you if you have passion
00:25:16.380 | and really appreciate the things,
00:25:21.380 | maybe there perhaps it does help having a backdrop
00:25:25.900 | that, hey, it's finite.
00:25:27.900 | No, let's make the most of this, let's live to the fullest.
00:25:31.220 | So if you knew you were gonna just live forever,
00:25:33.860 | do you think you would change your?
00:25:37.420 | - Yeah, I mean, in some perspective,
00:25:39.580 | it would be an incredibly boring life living forever.
00:25:43.940 | So in the sort of loose subjective terms that you said
00:25:47.380 | of something exciting and something in this
00:25:50.500 | that other humans would understand, I think,
00:25:52.820 | is yeah, it seems that the finiteness of it is important.
00:25:57.140 | - Well, the good news I have for you then is
00:25:59.540 | based on what we understand about cosmology,
00:26:02.100 | everything in our universe is ultimately probably finite,
00:26:07.100 | although--
00:26:08.260 | - Big crunch or big, what's the infinite expansion?
00:26:11.580 | - Yeah, we could have a big chill
00:26:13.020 | or a big crunch or a big rip or the big snap
00:26:16.540 | or death bubbles, all of them are more
00:26:19.020 | than a billion years away.
00:26:20.060 | So we certainly have vastly more time
00:26:24.620 | than our ancestors thought, but it's still pretty hard
00:26:29.620 | to squeeze in an infinite number of compute cycles,
00:26:33.300 | even though there are some loopholes
00:26:36.580 | that just might be possible.
00:26:37.740 | But I think some people like to say
00:26:41.980 | that you should live as if you're about to,
00:26:44.780 | you're gonna die in five years or so,
00:26:46.740 | and that's sort of optimal.
00:26:47.980 | Maybe it's a good assumption,
00:26:50.580 | we should build our civilization as if it's all finite
00:26:54.700 | to be on the safe side.
00:26:55.700 | - Right, exactly.
00:26:56.980 | So you mentioned in defining intelligence
00:26:59.740 | as the ability to solve complex goals.
00:27:02.980 | Where would you draw a line
00:27:04.780 | or how would you try to define human level intelligence
00:27:08.220 | and superhuman level intelligence?
00:27:10.700 | Where is consciousness part of that definition?
00:27:13.300 | - No, consciousness does not come into this definition.
00:27:16.660 | So I think of intelligence as a spectrum,
00:27:20.340 | but there are very many different kinds of goals
00:27:21.980 | you can have.
00:27:22.820 | You can have a goal to be a good chess player,
00:27:24.060 | a good go player, a good car driver,
00:27:26.780 | a good investor, good poet, et cetera.
00:27:31.180 | So intelligence by its very nature
00:27:34.340 | isn't something you can measure by just one number
00:27:36.660 | or some overall goodness.
00:27:37.980 | No, no, there are some people who are better at this,
00:27:40.340 | some people are better at that.
00:27:42.380 | Right now we have machines that are much better than us
00:27:45.460 | at some very narrow tasks,
00:27:46.900 | like multiplying large numbers fast,
00:27:50.100 | memorizing large databases, playing chess, playing Go,
00:27:54.180 | and soon driving cars.
00:27:56.300 | But there's still no machine that can match a human child
00:28:01.100 | in general intelligence.
00:28:02.740 | But artificial general intelligence, AGI,
00:28:05.740 | the name of your course, of course,
00:28:07.900 | that is by its very definition,
00:28:11.780 | the quest to build a machine
00:28:14.660 | that can do everything as well as we can.
00:28:17.620 | So the old holy grail of AI
00:28:19.700 | from back to its inception in the '60s.
00:28:23.940 | If that ever happens, of course,
00:28:25.580 | I think it's gonna be the biggest transition
00:28:27.340 | in the history of life on earth,
00:28:29.260 | but the big impact doesn't necessarily have to wait
00:28:32.980 | until machines are better than us at knitting.
00:28:35.420 | The really big change doesn't come exactly at the moment
00:28:39.540 | they're better than us at everything.
00:28:41.820 | The really big change comes,
00:28:43.740 | or the first big change is when they start
00:28:45.220 | becoming better than us at doing most of the jobs that we do,
00:28:48.820 | because that takes away much of the demand for human labor.
00:28:53.220 | And then the really whopping change comes
00:28:55.620 | when they become better than us at AI research.
00:29:00.620 | - Right. - Right, because right now,
00:29:02.140 | the timescale of AI research is limited
00:29:05.700 | by the human research and development cycle of years,
00:29:09.180 | typically, you know, how long does it take
00:29:11.420 | from one release of some software or iPhone
00:29:14.100 | or whatever to the next?
00:29:16.060 | But once Google can replace 40,000 engineers
00:29:20.860 | by 40,000 equivalent pieces of software or whatever,
00:29:25.860 | right, then there's no reason that has to be years.
00:29:29.620 | It can be in principle much faster.
00:29:31.780 | And the timescale of future progress in AI
00:29:35.980 | and all of science and technology
00:29:37.980 | will be driven by machines, not humans.
00:29:40.940 | So it's this simple point,
00:29:44.380 | which gives rise to this incredibly fun controversy
00:29:48.700 | about whether there can be intelligence explosion,
00:29:51.820 | so-called singularity, as Vernor Vinge called it.
00:29:54.380 | Now the idea was articulated by I.J. Good,
00:29:57.020 | obviously way back in the '60s,
00:29:59.420 | but you can see Alan Turing
00:30:00.980 | and others thought about it even earlier.
00:30:03.580 | So you asked me what exactly would I define
00:30:10.020 | human level intelligence? - Yeah, human level, yeah.
00:30:12.860 | - So the glib answer is to say something
00:30:15.740 | which is better than us at all cognitive tasks,
00:30:19.380 | or better than any human at all cognitive tasks.
00:30:21.860 | But the really interesting bar, I think,
00:30:23.500 | goes a little bit lower than that, actually.
00:30:25.820 | It's when they can, when they're better than us
00:30:27.980 | at AI programming and general learning
00:30:31.820 | so that they can, if they want to,
00:30:34.860 | get better than us at anything by just studying up.
00:30:37.300 | - So there, better is a key word,
00:30:39.180 | and better is towards this kind of spectrum
00:30:42.180 | of the complexity of goals it's able to accomplish.
00:30:45.260 | - Yeah. - So another way to...
00:30:46.860 | And that's certainly a very clear definition of human level.
00:30:53.020 | So it's almost like a sea that's rising,
00:30:55.180 | you could do more and more and more things.
00:30:56.780 | It's that graphic that you show.
00:30:58.620 | It's a really nice way to put it.
00:30:59.900 | So there's some peaks that,
00:31:01.540 | and there's an ocean level elevating,
00:31:03.300 | and you solve more and more problems.
00:31:04.820 | But just kind of to take a pause,
00:31:07.740 | and we took a bunch of questions
00:31:08.980 | in a lot of social networks,
00:31:10.220 | and a bunch of people asked
00:31:11.700 | a sort of a slightly different direction on creativity,
00:31:15.540 | on things that perhaps aren't a peak.
00:31:20.140 | Human beings are flawed,
00:31:24.700 | and perhaps better means having contradiction,
00:31:28.700 | being flawed in some way.
00:31:30.140 | So let me sort of start easy, first of all.
00:31:34.860 | So you have a lot of cool equations.
00:31:36.540 | Let me ask, what's your favorite equation, first of all?
00:31:39.660 | I know they're all like your children, but...
00:31:41.300 | - That one.
00:31:42.660 | - Which one is that?
00:31:43.580 | - The Schrodinger equation.
00:31:45.460 | It's the master key of quantum mechanics.
00:31:48.540 | - So the micro world.
00:31:49.780 | So this equation,
00:31:50.900 | it can calculate everything to do with atoms,
00:31:54.300 | molecules, and all the way up to...
00:31:56.060 | - Yeah, so, okay.
00:31:59.740 | So quantum mechanics is certainly
00:32:01.140 | a beautiful, mysterious formulation of our world.
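
(For reference, the equation being singled out here is the time-dependent Schrödinger equation, written in its general form; the Hamiltonian \hat{H} encodes the specific system, from a single atom up to molecules and beyond:)

    i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \, \Psi(\mathbf{r}, t)
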
00:32:05.140 | So I'd like to sort of ask you,
00:32:07.220 | just as an example,
00:32:08.740 | it perhaps doesn't have the same beauty as physics does,
00:32:12.140 | but in abstract mathematics,
00:32:15.620 | Andrew Wiles, who proved Fermat's Last Theorem.
00:32:19.340 | So I just saw this recently
00:32:22.060 | and it kind of caught my eye a little bit.
00:32:24.180 | This is 358 years after it was conjectured.
00:32:27.940 | So this very simple formulation,
00:32:29.900 | everybody tried to prove it, everybody failed.
00:32:32.580 | And so this guy comes along and eventually proves it
00:32:37.380 | and then fails to prove it and then proves it again in '94.
00:32:41.300 | And he said like the moment when everything connected
00:32:43.420 | into place, in an interview he said,
00:32:45.940 | "It was so indescribably beautiful."
00:32:47.860 | That moment when you finally realized the connecting piece
00:32:51.020 | of two conjectures, he said,
00:32:53.260 | "It was so indescribably beautiful.
00:32:55.220 | It was so simple and so elegant.
00:32:57.020 | I couldn't understand how I'd missed it.
00:32:58.740 | And I just stared at it in disbelief for 20 minutes.
00:33:01.980 | Then during the day I walked around the department
00:33:05.180 | and I'd keep coming back to my desk,
00:33:07.820 | looking to see if it was still there.
00:33:09.780 | It was still there.
00:33:10.620 | I couldn't contain myself.
00:33:11.700 | I was so excited.
00:33:12.860 | It was the most important moment of my working life.
00:33:15.860 | Nothing I ever do again will mean as much."
00:33:18.900 | So that particular moment,
00:33:20.740 | and it kind of made me think of what would it take?
00:33:24.780 | And I think we have all been there at small levels.
00:33:28.060 | Maybe let me ask,
00:33:30.660 | have you had a moment like that in your life
00:33:33.660 | where you just had an idea?
00:33:34.980 | It's like, wow, yes.
00:33:37.180 | - I wouldn't mention myself in the same breath
00:33:42.580 | as Andrew Wiles,
00:33:43.420 | but I've certainly had a number of aha moments
00:33:48.420 | when I realized something very cool about physics.
00:33:53.780 | Just completely made my head explode.
00:33:56.100 | In fact, some of my favorite discoveries I made,
00:33:58.420 | I later realized that they had been discovered earlier
00:34:01.180 | by someone who sometimes got quite famous for it.
00:34:03.340 | So it's too late for me to even publish it,
00:34:05.580 | but that doesn't diminish in any way
00:34:07.540 | the emotional experience you have when you realize it.
00:34:09.860 | Like, wow.
00:34:11.420 | - Yeah.
00:34:12.260 | So what would it take in that moment, that wow,
00:34:15.620 | that was yours in that moment.
00:34:17.380 | So what do you think it takes for an intelligent system,
00:34:21.500 | an AGI system, an AI system to have a moment like that?
00:34:24.580 | - That's a tricky question
00:34:26.820 | 'cause there are actually two parts to it, right?
00:34:29.260 | One of them is, can it accomplish that proof?
00:34:33.980 | Can it prove that you can never write
00:34:37.020 | A to the N plus B to the N equals Z to the N
00:34:42.020 | for all integers, et cetera, et cetera,
00:34:45.420 | when N is bigger than two?
00:34:47.060 | That's simply a question about intelligence.
00:34:51.460 | Can you build machines that are that intelligent?
00:34:54.260 | And I think by the time we get a machine
00:34:57.340 | that can independently come up with that level of proofs,
00:35:00.940 | probably quite close to AGI.
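
(For reference, the statement being paraphrased is Fermat's Last Theorem, the result Wiles proved:

    a^n + b^n = c^n \quad \text{has no solutions in positive integers } a, b, c \text{ for any integer } n > 2.)
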
00:35:03.460 | The second question is a question about consciousness.
00:35:07.320 | When will we, how likely is it that such a machine
00:35:11.800 | will actually have any experience at all
00:35:14.280 | as opposed to just being like a zombie?
00:35:16.200 | And would we expect it to have
00:35:19.080 | some sort of emotional response to this
00:35:21.640 | or anything at all akin to human emotion
00:35:24.680 | where when it accomplishes its machine goal,
00:35:28.320 | it views it as somehow something very positive
00:35:31.960 | and sublime and deeply meaningful?
00:35:36.960 | I would certainly hope that if in the future
00:35:41.480 | we do create machines that are our peers
00:35:45.200 | or even our descendants,
00:35:48.320 | I would certainly hope that they do have
00:35:51.800 | this sort of sublime appreciation of life.
00:35:54.540 | In a way, my absolutely worst nightmare would be that
00:36:00.880 | at some point in the future,
00:36:04.320 | the distant future, maybe our cosmos is teeming
00:36:08.000 | with all this post-biological life
00:36:10.160 | doing all the seemingly cool stuff.
00:36:13.080 | And maybe the last humans,
00:36:15.680 | by the time our species eventually fizzles out,
00:36:20.200 | we'll be like, well, that's okay
00:36:21.600 | 'cause we're so proud of our descendants here.
00:36:23.680 | And look what all the...
00:36:25.720 | My worst nightmare is that we haven't solved
00:36:28.640 | the consciousness problem and we haven't realized
00:36:31.360 | that these are all the zombies.
00:36:33.000 | They're not aware of anything any more
00:36:34.880 | than a tape recorder has any kind of experience.
00:36:37.880 | So the whole thing has just become a play for empty benches.
00:36:41.640 | That would be like the ultimate zombie apocalypse.
00:36:44.720 | So I would much rather in that case
00:36:47.260 | that we have these beings which can really appreciate
00:36:52.260 | how amazing it is.
00:36:56.960 | - And in that picture,
00:36:58.640 | what would be the role of creativity?
00:37:01.120 | A few people ask about creativity.
00:37:03.160 | When you think about intelligence,
00:37:06.880 | I mean, certainly the story you told
00:37:08.680 | at the beginning of your book involved
00:37:10.560 | creating movies and so on.
00:37:12.200 | So making money, you can make a lot of money
00:37:16.080 | in our modern world with music and movies.
00:37:18.520 | So if you are an intelligent system,
00:37:20.880 | you may want to get good at that.
00:37:22.960 | But that's not necessarily what I mean by creativity.
00:37:26.220 | Is it important on that complex goals
00:37:29.600 | where the sea is rising for there to be something creative
00:37:33.760 | or am I being very human centric
00:37:36.360 | and thinking creativity is somehow special
00:37:39.480 | relative to intelligence?
00:37:41.860 | - My hunch is that we should think of creativity
00:37:46.860 | simply as an aspect of intelligence.
00:37:49.480 | And we have to be very careful with human vanity
00:37:56.660 | where we have this tendency to very often want to say,
00:38:00.020 | as soon as machines can do something,
00:38:01.540 | we try to diminish it and say,
00:38:03.100 | oh, but that's not like real intelligence.
00:38:05.900 | Isn't it creative or this or that?
00:38:08.340 | The other thing,
00:38:09.180 | if we ask ourselves to write down a definition
00:38:13.260 | of what we actually mean by being creative,
00:38:16.940 | what we mean by Andrew Wiles, what he did there,
00:38:19.000 | for example, don't we often mean
00:38:20.620 | that someone takes a very unexpected leap
00:38:24.280 | and it's not like taking 573 and multiplying it by 224
00:38:29.280 | by just a set of straightforward cookbook-like rules.
00:38:34.800 | You can maybe make a connection between two things
00:38:39.640 | that people have never thought was connected.
00:38:41.200 | - It's very surprising.
00:38:42.040 | - Or something like that.
00:38:43.240 | I think this is an aspect of intelligence
00:38:47.660 | and this is actually one of the most important aspects of it.
00:38:53.080 | Maybe the reason we humans tend to be better at it
00:38:55.540 | than traditional computers is because it's something
00:38:58.380 | that comes more naturally if you're a neural network
00:39:01.260 | than if you're a traditional logic gate-based
00:39:04.460 | computer machine.
00:39:05.740 | We physically have all these connections
00:39:07.740 | and if you activate here, activate here, activate here,
00:39:12.340 | ping!
00:39:14.740 | My hunch is that if we ever build a machine
00:39:19.180 | where you could just give it the task,
00:39:22.100 | hey, you say, hey, I just realized
00:39:27.100 | I want to travel around the world instead this month.
00:39:32.360 | Can you teach my AGI course for me?
00:39:34.640 | And it's like, okay, I'll do it.
00:39:36.080 | And it does everything that you would have done
00:39:38.000 | and improvises and stuff.
00:39:39.840 | That would, in my mind, involve a lot of creativity.
00:39:43.500 | - Yeah, so it's actually a beautiful way to put it.
00:39:45.720 | I think we do try to grasp at the definition of intelligence
00:39:50.740 | and the definition of intelligence is everything
00:39:53.100 | we don't understand how to build.
00:39:56.380 | So we as humans try to find things
00:39:59.340 | that we have and machines don't have.
00:40:01.260 | And maybe creativity is just one of the things,
00:40:03.660 | one of the words we use to describe that.
00:40:05.860 | That's a really interesting way to put it.
00:40:07.060 | - I don't think we need to be that defensive.
00:40:09.920 | I don't think anything good comes out of saying,
00:40:11.580 | oh, we're somehow special.
00:40:18.640 | Contrariwise, there are many examples in history
00:40:21.080 | of where trying to pretend that we're somehow superior
00:40:26.080 | to all other intelligent beings
00:40:31.680 | has led to pretty bad results, right?
00:40:33.600 | Nazi Germany, they said that they were somehow superior
00:40:38.520 | to other people.
00:40:40.120 | Today, we still do a lot of cruelty to animals
00:40:42.480 | by saying that we're so superior somehow
00:40:44.480 | and they can't feel pain.
00:40:46.480 | Slavery was justified by the same kind
00:40:48.540 | of just really weak arguments.
00:40:51.340 | And I don't think if we actually go ahead
00:40:55.820 | and build artificial general intelligence
00:40:59.420 | that can do things better than us,
00:41:01.140 | I don't think we should try to found our self-worth
00:41:03.620 | on some sort of bogus claims of superiority
00:41:08.620 | in terms of our intelligence.
00:41:12.160 | I think we should instead find our calling
00:41:16.440 | and the meaning of life from the experiences that we have.
00:41:22.660 | I can have very meaningful experiences
00:41:27.520 | even if there are other people who are smarter than me.
00:41:31.540 | When I go to a faculty meeting here
00:41:34.440 | and we're talking about something
00:41:35.840 | and then I suddenly realize, oh, he has a Nobel Prize,
00:41:38.360 | he has a Nobel Prize, he has a Nobel Prize.
00:41:40.080 | I don't have one.
00:41:40.920 | Does that make me enjoy life any less
00:41:43.780 | or enjoy talking to those people less?
00:41:47.580 | Of course not.
00:41:48.620 | And contrariwise, I feel very honored
00:41:53.660 | and privileged to get to interact
00:41:55.100 | with other very intelligent beings
00:41:58.660 | that are better than me at a lot of stuff.
00:42:00.700 | So I don't think there's any reason
00:42:02.780 | why we can't have the same approach
00:42:04.100 | with intelligent machines.
00:42:06.100 | - That's a really interesting.
00:42:07.340 | So people don't often think about that.
00:42:08.860 | They think about when there's going,
00:42:10.500 | if there's machines that are more intelligent,
00:42:13.240 | you naturally think that that's not going to be
00:42:15.960 | a beneficial type of intelligence.
00:42:19.020 | You don't realize it could be,
00:42:21.480 | like peers with Nobel prizes
00:42:23.400 | that would be just fun to talk with.
00:42:25.040 | And they might be clever about certain topics
00:42:27.520 | and you can have fun having a few drinks with them.
00:42:30.900 | - Well, also, another example we can all relate to
00:42:37.000 | of why it doesn't have to be a terrible thing
00:42:39.320 | to be in presence of people
00:42:40.760 | who are even smarter than us all around
00:42:42.940 | is when you and I were both two years old,
00:42:45.560 | I mean, our parents were much more intelligent than us.
00:42:49.020 | Worked out okay.
00:42:50.700 | Because their goals were aligned with our goals.
00:42:53.940 | And that I think is really the number one key issue
00:42:58.660 | we have to solve.
00:42:59.900 | - It's value aligned.
00:43:00.740 | - Value aligned, the value alignment problem, exactly.
00:43:03.100 | 'Cause people who see too many Hollywood movies
00:43:06.860 | with lousy science fiction plot lines,
00:43:10.040 | they worry about the wrong thing, right?
00:43:12.160 | They worry about some machines suddenly turning evil.
00:43:14.920 | It's not malice that is the concern, it's competence.
00:43:21.360 | By definition, intelligence makes you very competent.
00:43:27.440 | If you have a more intelligent Go-playing computer
00:43:30.520 | playing against a less intelligent one,
00:43:33.680 | and we define intelligence
00:43:35.360 | as the ability to accomplish the goal of winning at Go, right?
00:43:38.160 | It's gonna be the more intelligent one that wins.
00:43:40.560 | And if you have a human and then you have an AGI
00:43:45.480 | that's more intelligent in all ways,
00:43:47.920 | and they have different goals,
00:43:49.080 | guess who's gonna get their way, right?
00:43:50.440 | So I was just reading about this particular rhinoceros
00:43:55.440 | species that was driven extinct just a few years ago.
00:43:59.160 | And a bummer, I was looking at this cute picture
00:44:01.400 | of a mommy rhinoceros with its child,
00:44:05.120 | and why did we humans drive it to extinction?
00:44:09.400 | Wasn't because we were evil rhino haters as a whole,
00:44:12.840 | it was just because our goals weren't aligned
00:44:14.960 | with those of the rhinoceros,
00:44:16.040 | and it didn't work out so well for the rhinoceros
00:44:17.760 | 'cause we were more intelligent, right?
00:44:19.660 | So I think it's just so important
00:44:21.280 | that if we ever do build AGI,
00:44:23.520 | before we unleash anything,
00:44:27.200 | we have to make sure that it learns to understand our goals,
00:44:34.240 | and that it adopts our goals, and it retains those goals.
00:44:38.040 | - So the cool, interesting problem there
00:44:40.600 | is us as human beings trying to formulate our values.
00:44:45.600 | So you could think of the United States Constitution
00:44:50.160 | as a way that people sat down,
00:44:53.920 | at the time a bunch of white men,
00:44:56.120 | but which is a good example, we should say,
00:44:59.600 | they formulated the goals for this country,
00:45:01.560 | and a lot of people agree that those goals
00:45:03.560 | actually held up pretty well.
00:45:05.320 | That's an interesting formulation of values
00:45:07.200 | and failed miserably in other ways.
00:45:09.480 | So for the value alignment problem and the solution to it,
00:45:13.400 | we have to be able to put on paper
00:45:16.960 | or in a program, human values.
00:45:20.440 | How difficult do you think that is?
00:45:22.440 | - Very, but it's so important.
00:45:25.920 | We really have to give it our best.
00:45:28.040 | And it's difficult for two separate reasons.
00:45:30.200 | There's the technical value alignment problem,
00:45:33.280 | of figuring out just how to make machines understand
00:45:38.280 | our goals, adopt them, and retain them.
00:45:40.560 | And then there's the separate part of it,
00:45:43.240 | the philosophical part, whose values anyway?
00:45:46.080 | And since it's not like we have any great consensus
00:45:48.360 | on this planet on values,
00:45:49.800 | what mechanism should we create then to aggregate
00:45:53.480 | and decide, okay, what's a good compromise?
00:45:55.640 | That second discussion can't just be left
00:45:59.080 | to tech nerds like myself, right?
00:46:01.160 | - That's right.
00:46:02.000 | - If we refuse to talk about it and then AGI gets built,
00:46:05.760 | who's going to be actually making the decision
00:46:07.680 | about whose values?
00:46:08.600 | It's going to be a bunch of dudes in some tech company.
00:46:11.400 | Are they necessarily so representative of all of humankind
00:46:17.920 | that we want to just entrust it to them?
00:46:19.520 | Are they even uniquely qualified to speak
00:46:23.080 | to future human happiness just because they're good
00:46:25.520 | at programming AI?
00:46:26.560 | I'd much rather have this be a really inclusive
00:46:29.000 | conversation.
00:46:30.320 | But do you think it's possible sort of,
00:46:32.640 | so you create a beautiful vision that includes
00:46:35.800 | sort of the diversity, cultural diversity
00:46:38.920 | and various perspectives on discussing rights, freedoms,
00:46:42.160 | human dignity, but how hard is it to come to that consensus?
00:46:46.600 | It's certainly a really important thing
00:46:50.480 | that we should all try to do,
00:46:51.960 | but do you think it's feasible?
00:46:54.280 | - I think there's no better way to guarantee failure
00:46:59.160 | than to refuse to talk about it or refuse to try.
00:47:02.880 | And I also think it's a really bad strategy to say,
00:47:05.680 | okay, let's first have a discussion for a long time.
00:47:08.640 | And then once we've reached complete consensus,
00:47:11.120 | then we'll try to load it into some machine.
00:47:13.440 | No, we shouldn't let perfect be the enemy of good.
00:47:16.600 | Instead, we should start with the kindergarten ethics
00:47:20.680 | that pretty much everybody agrees on
00:47:22.240 | and put that into our machines now.
00:47:24.520 | We're not doing that even.
00:47:26.000 | Look at the, you know,
00:47:27.600 | anyone who builds a passenger aircraft
00:47:30.240 | wants it to never under any circumstances
00:47:33.080 | to fly into a building or a mountain, right?
00:47:35.760 | Yet the September 11 hijackers were able to do that.
00:47:38.640 | And even more embarrassingly, you know,
00:47:40.920 | Andreas Lubitz, this depressed Germanwings pilot,
00:47:44.120 | when he flew his passenger jet into the Alps,
00:47:46.680 | killing over a hundred people,
00:47:48.520 | he just told the autopilot to do it.
00:47:50.720 | He told the freaking computer
00:47:52.320 | to change the altitude to a hundred meters.
00:47:55.080 | And even though it had the GPS maps, everything,
00:47:58.200 | the computer was like, okay.
00:48:00.720 | So we should take those very basic values.
00:48:04.440 | Where the problem is not that we don't agree.
00:48:07.560 | The problem is just we've been too lazy
00:48:10.200 | to try to put it into our machines
00:48:11.520 | and make sure that from now on, airplanes,
00:48:14.800 | which all have computers in them,
00:48:17.000 | will just refuse to do something like that.
00:48:19.840 | Go into safe mode, maybe lock the cockpit door,
00:48:22.240 | go to the nearest airport.
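The "kindergarten ethics" described here, an autopilot that simply refuses a command that would fly the aircraft into terrain, can be sketched as a hard constraint check. A minimal sketch, assuming a hypothetical terrain database, function names, and clearance threshold; none of this comes from a real avionics system.

```python
# A minimal sketch of a hard safety constraint on an autopilot altitude command.
# All names and numbers are hypothetical illustrations, not a real avionics API.

MIN_TERRAIN_CLEARANCE_M = 300.0  # assumed minimum safe clearance above terrain


def terrain_elevation_m(lat: float, lon: float) -> float:
    """Stand-in for a lookup into an onboard terrain database (hypothetical)."""
    return 2500.0  # pretend we are over the Alps


def command_altitude(requested_m: float, lat: float, lon: float) -> float:
    """Obey the requested altitude only if it keeps safe clearance above terrain.

    Otherwise refuse and hold a safe altitude instead of blindly complying,
    mirroring the 'never fly into a mountain' constraint discussed above.
    """
    floor = terrain_elevation_m(lat, lon) + MIN_TERRAIN_CLEARANCE_M
    if requested_m < floor:
        print(f"Refusing altitude {requested_m} m; terrain floor here is {floor} m")
        return floor
    return requested_m


if __name__ == "__main__":
    # A command to descend to 100 m over mountainous terrain is simply rejected.
    print(command_altitude(100.0, lat=44.28, lon=6.44))
```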
00:48:24.040 | And there's so much other technology
00:48:26.840 | in our world as well now, where it's really
00:48:30.120 | becoming quite timely to put in
00:48:32.320 | some sort of very basic values like this.
00:48:34.200 | Even in cars, we've had enough vehicle terrorism attacks
00:48:39.200 | by now where people have driven trucks
00:48:41.560 | and vans into pedestrians.
00:48:43.160 | That is not at all a crazy idea
00:48:45.560 | to just have that hardwired into the car.
00:48:48.760 | 'Cause yeah,
00:48:50.360 | there are always going to be people who,
00:48:51.600 | for some reason, want to harm others.
00:48:53.640 | But most of those people don't have the technical expertise
00:48:56.360 | to figure out how to work around something like that.
00:48:58.640 | So if the car just won't do it, it helps.
00:49:01.840 | So let's start there.
00:49:02.960 | - So there's a lot of, that's a great point.
00:49:05.040 | So not chasing perfect.
00:49:06.880 | There's a lot of things
00:49:08.920 | that most of the world agrees on.
00:49:10.920 | - Yeah, let's start there.
00:49:11.760 | - Let's start there.
00:49:12.720 | - And then once we start there,
00:49:14.640 | we'll also get into the habit
00:49:15.960 | of having these kinds of conversations about,
00:49:18.480 | okay, what else should we put in here
00:49:20.120 | and have these discussions?
00:49:21.880 | This should be a gradual process then.
00:49:24.000 | - Great, so, but that also means describing these things
00:49:28.680 | and describing it to a machine.
00:49:31.320 | So one thing, we had a few conversations with Stephen Wolfram.
00:49:34.880 | I'm not sure if you're familiar with Stephen Wolfram.
00:49:37.200 | - Oh yeah, I know him quite well.
00:49:38.400 | - So he works on a bunch of things,
00:49:42.120 | but cellular automata, these simple computable things,
00:49:46.640 | these computation systems.
00:49:48.040 | And he kind of mentioned that
00:49:49.960 | we probably already have, within these systems,
00:49:52.520 | something that's AGI.
00:49:54.720 | Meaning like, we just don't know it
00:49:58.760 | because we can't talk to it.
00:50:00.440 | So if you give me a chance to try
00:50:04.080 | to at least form a question out of this:
00:50:06.760 | I think it's an interesting idea
00:50:09.880 | to think that we can have intelligent systems,
00:50:12.680 | but we don't know how to describe something to them
00:50:15.600 | and they can't communicate with us.
00:50:17.360 | I know you're doing a little bit of work
00:50:18.680 | in explainable AI, trying to get AI to explain itself.
00:50:22.040 | So what are your thoughts of natural language processing
00:50:25.480 | or some kind of other communication?
00:50:27.600 | How does the AI explain something to us?
00:50:30.080 | How do we explain something to it, to machines?
00:50:33.600 | Or you think of it differently?
00:50:35.280 | - So there are two separate parts to your question there.
00:50:39.920 | One of them has to do with communication,
00:50:42.400 | which is super interesting, and we'll get to that in a sec.
00:50:44.400 | The other is whether we already have AGI,
00:50:47.240 | but we just haven't noticed it there.
00:50:49.160 | There I beg to differ.
00:50:53.000 | I don't think there's anything in any cellular automaton
00:50:56.480 | or anything or the internet itself or whatever
00:50:59.040 | that has artificial general intelligence
00:51:03.560 | in that it can really do exactly everything
00:51:05.520 | we humans can do better.
00:51:07.040 | I think the day that happens,
00:51:09.320 | when that happens, we will very soon notice.
00:51:13.760 | We'll probably notice even before,
00:51:15.640 | because it will happen in a very, very big way.
00:51:17.480 | But for the second part though-
00:51:18.840 | - Wait, can I just, sorry.
00:51:20.760 | So, 'cause you have this beautiful way
00:51:24.480 | of formulating consciousness as information processing.
00:51:29.480 | You can think of intelligence as information processing.
00:51:33.000 | You can think of the entire universe as these particles
00:51:36.920 | and these systems roaming around
00:51:38.760 | that have this information processing power.
00:51:41.400 | You don't think there is something with the power
00:51:44.880 | to process information in the way
00:51:47.840 | that we human beings do that's out there
00:51:49.920 | that needs to be sort of connected to?
00:51:55.400 | It seems a little bit philosophical perhaps,
00:51:57.880 | but there's something compelling to the idea
00:52:00.120 | that the power is already there,
00:52:01.920 | which is the focus should be more
00:52:04.160 | on being able to communicate with it.
00:52:07.360 | - Well, I agree that in a certain sense,
00:52:11.960 | the hardware processing power is already out there
00:52:15.400 | because our universe itself,
00:52:16.920 | you can think of it as being a computer already, right?
00:52:21.040 | It's constantly computing how to evolve
00:52:23.840 | the water waves in the River Charles
00:52:26.160 | and how to move the air molecules around.
00:52:28.480 | Seth Lloyd has pointed out, my colleague here,
00:52:30.480 | that you can even in a very rigorous way
00:52:32.960 | think of our entire universe as being a quantum computer.
00:52:35.520 | It's pretty clear that our universe supports
00:52:38.240 | this amazing processing power
00:52:40.360 | because you can even within this physics computer
00:52:44.200 | that we live in, right?
00:52:45.040 | We can even build actually laptops and stuff.
00:52:47.080 | So clearly the power is there.
00:52:49.040 | It's just that most of the compute power that nature has,
00:52:52.080 | is, in my opinion, kind of wasted on boring stuff
00:52:54.280 | like simulating yet another ocean wave somewhere
00:52:56.520 | where no one is even looking, right?
00:52:58.080 | So in a sense, what life does,
00:53:00.160 | what we are doing when we build computers,
00:53:01.800 | is re-channeling all this compute
00:53:05.520 | that nature is doing anyway
00:53:07.240 | into doing things that are more interesting
00:53:09.400 | than just yet another ocean wave,
00:53:11.520 | and let's do something cool here.
00:53:13.240 | So the raw hardware power is there for sure.
00:53:17.240 | And even just like computing what's going to happen
00:53:21.120 | for the next five seconds in this water bottle,
00:53:23.560 | takes a ridiculous amount of compute
00:53:26.040 | if you do it on a human-built computer.
00:53:27.960 | This water bottle just did it.
00:53:29.960 | But that does not mean that this water bottle has AGI
00:53:33.440 | because AGI means it should also be able to do things
00:53:37.080 | like write my book and do this interview.
00:53:39.400 | - Yes.
00:53:40.240 | - And I don't think it's just communication problems.
00:53:42.120 | I don't really know.
00:53:42.960 | I don't think it can do it.
00:53:46.840 | - Although Buddhists say, when they watch the water,
00:53:49.320 | that there is some beauty,
00:53:51.320 | that there's some depth and beauty in nature
00:53:53.760 | that they can communicate with.
00:53:54.880 | - Communication is also very important though
00:53:56.520 | because I mean, look, part of my job is being a teacher
00:54:01.240 | and I know some very intelligent professors even
00:54:06.240 | who just have a really hard time communicating.
00:54:09.840 | They come up with all these brilliant ideas,
00:54:12.640 | but to communicate with somebody else,
00:54:14.560 | you have to also be able to simulate their own mind.
00:54:16.960 | - Yes, empathy.
00:54:18.360 | - Build a good enough model of their mind
00:54:20.680 | that you can say things that they will understand.
00:54:24.400 | And that's quite difficult.
00:54:26.480 | And that's why today it's so frustrating
00:54:28.280 | if you have a computer that makes some cancer diagnosis
00:54:32.600 | and you ask it, well, why are you saying
00:54:34.120 | I should have a surgery?
00:54:35.800 | And if it can only reply,
00:54:38.000 | I was trained on five terabytes of data
00:54:40.840 | and this is my diagnosis, boop, boop, beep, beep.
00:54:45.120 | Doesn't really instill a lot of confidence, right?
00:54:48.560 | - Right.
00:54:49.400 | - So I think we have a lot of work to do
00:54:51.160 | on communication there.
00:54:54.360 | - So what kind of, I think you're doing a little bit work
00:54:58.080 | in explainable AI.
00:54:59.360 | What do you think are the most promising avenues?
00:55:01.360 | Is it mostly about sort of the Alexa problem
00:55:05.280 | of natural language processing,
00:55:06.680 | of being able to actually use human interpretable methods
00:55:11.560 | of communication?
00:55:13.120 | So being able to talk to a system and it talk back to you,
00:55:16.000 | or is there some more fundamental problems to be solved?
00:55:18.640 | - I think it's all of the above.
00:55:20.480 | Human, the natural language processing
00:55:22.440 | is obviously important,
00:55:23.520 | but there are also more nerdy fundamental problems.
00:55:27.560 | Like if you take, you play chess?
00:55:31.600 | - Of course, I'm Russian.
00:55:33.000 | I have to.
00:55:33.840 | - You speak Russian?
00:55:34.800 | - Yes, I speak Russian.
00:55:35.640 | - Great, I didn't know.
00:55:38.040 | - When did you learn Russian?
00:55:39.200 | - I speak Russian very poorly.
00:55:40.600 | I'm only an autodidact.
00:55:41.800 | I bought a book, Teach Yourself Russian,
00:55:44.560 | read a lot, but it was very difficult.
00:55:47.720 | - Wow.
00:55:48.560 | - That's why I speak so poorly.
00:55:49.960 | - How many languages do you know?
00:55:51.960 | Wow, that's really impressive.
00:55:53.880 | - I don't know, my wife has some calculations.
00:55:56.320 | But my point was, if you play chess,
00:55:58.480 | have you looked at the AlphaZero games?
00:56:00.840 | - Oh, the actual games, no.
00:56:02.600 | Check it out, some of them are just mind-blowing.
00:56:05.040 | Really beautiful.
00:56:07.720 | And if you ask, how did it do that?
00:56:12.400 | You go talk to Demis Hassabis and others from DeepMind,
00:56:18.240 | all they'll ultimately be able to give you
00:56:20.600 | is big tables of numbers,
00:56:23.120 | matrices that define the neural network.
00:56:25.720 | And you can stare at these tables of numbers
00:56:28.080 | until your face turns blue.
00:56:29.640 | And you're not going to understand much
00:56:32.520 | about why it made that move.
00:56:34.560 | And even if you have natural language processing
00:56:37.640 | that can tell you in human language
00:56:40.080 | about, oh, five, seven, .28,
00:56:42.600 | still not going to really help.
00:56:43.560 | So I think there's a whole spectrum of fun challenges
00:56:47.520 | that are involved in taking a computation
00:56:50.480 | that does intelligent things
00:56:52.240 | and transforming it into something equally good,
00:56:57.760 | equally intelligent, but that's more understandable.
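To make the "big tables of numbers" point concrete, here is a toy network whose entire behavior is determined by a couple of weight matrices; staring at them tells you nothing about why a given move scored highest. The sizes and random weights are made up for illustration and have nothing to do with AlphaZero's real architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer policy network: its entire "explanation" is these matrices of
# numbers. Sizes and random weights are made up and unrelated to AlphaZero.
W1 = rng.standard_normal((64, 128))
W2 = rng.standard_normal((128, 10))


def policy(position_features: np.ndarray) -> np.ndarray:
    """Score 10 candidate moves from 64 input features."""
    hidden = np.maximum(0.0, position_features @ W1)  # ReLU layer
    return hidden @ W2                                 # raw move scores


x = rng.standard_normal(64)            # some encoded board position
scores = policy(x)
print("chosen move index:", int(np.argmax(scores)))
print("the only 'why' available:", W1.size + W2.size, "raw numbers")
```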
00:57:01.840 | And I think that's really valuable
00:57:03.240 | because I think as we put machines in charge
00:57:07.440 | of ever more infrastructure in our world,
00:57:09.760 | the power grid, the trading on the stock market,
00:57:12.680 | weapon systems, and so on,
00:57:14.320 | it's absolutely crucial that we can trust these AIs
00:57:18.320 | to do all we want.
00:57:19.400 | And trust really comes from understanding
00:57:21.520 | in a very fundamental way.
00:57:24.400 | And that's why I'm working on this
00:57:27.560 | because I think the more,
00:57:29.200 | if we're going to have some hope of ensuring
00:57:31.880 | that machines have adopted our goals
00:57:33.560 | and that they're going to retain them,
00:57:35.800 | that kind of trust, I think,
00:57:38.840 | needs to be based on things you can actually understand,
00:57:41.200 | preferably even things you can prove theorems on.
00:57:44.240 | Even with a self-driving car, right?
00:57:46.120 | If someone just tells you it's been trained
00:57:48.720 | on tons of data and it never crashed,
00:57:50.680 | it's less reassuring than if someone actually has a proof.
00:57:54.280 | Maybe it's a computer verified proof,
00:57:56.000 | but still it says that under no circumstances
00:57:58.840 | is this car just going to swerve into oncoming traffic.
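The contrast drawn here, between "it never crashed in testing" and an actual guarantee, is the territory of formal verification tools. As a minimal stand-in, here is an exhaustive check of a simple steering-limit property over a discretized input range; the controller, bounds, and property are hypothetical illustrations, not anything from a real vehicle.

```python
# A toy stand-in for formal verification: check a safety property over an entire
# discretized input range, rather than trusting "it never failed in testing".
# The controller, bounds, and property are hypothetical illustrations.

MAX_STEER_DEG = 30.0


def steering_controller(lane_offset_m: float) -> float:
    """Toy proportional steering controller with a hard clamp on its output."""
    raw = -15.0 * lane_offset_m
    return max(-MAX_STEER_DEG, min(MAX_STEER_DEG, raw))


def property_holds(lo: float = -10.0, hi: float = 10.0, step: float = 0.001) -> bool:
    """Property: for every offset in [lo, hi], |steering| never exceeds the limit."""
    offset = lo
    while offset <= hi:
        if abs(steering_controller(offset)) > MAX_STEER_DEG:
            return False
        offset += step
    return True


print("steering-limit property holds on the checked range:", property_holds())
```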
00:58:02.320 | - And that kind of information helps build trust
00:58:04.680 | and helps build the alignment of goals,
00:58:08.080 | at least awareness that your goals, your values are aligned.
00:58:12.200 | - And I think even in the very short term,
00:58:13.840 | if you look at how, today, right?
00:58:16.400 | This absolutely pathetic state of cybersecurity that we have,
00:58:21.440 | where is it, three billion Yahoo accounts were hacked,
00:58:26.080 | almost every American's credit card and so on.
00:58:31.800 | Why is this happening?
00:58:34.200 | It's ultimately happening because we have software
00:58:38.040 | that nobody fully understood how it worked.
00:58:41.280 | That's why the bugs hadn't been found, right?
00:58:44.880 | And I think AI can be used very effectively for offense,
00:58:48.640 | for hacking, but it can also be used for defense,
00:58:51.560 | hopefully automating verifiability
00:58:55.480 | and creating systems that are built in different ways
00:59:00.480 | so you can actually prove things about them.
00:59:03.040 | And it's important.
00:59:05.360 | - So speaking of software
00:59:06.920 | that nobody understands how it works,
00:59:08.800 | of course, a bunch of people ask about your paper,
00:59:11.480 | about your thoughts of why does deep
00:59:13.040 | and cheap learning work so well?
00:59:14.720 | That's the paper, but what are your thoughts
00:59:17.240 | on deep learning, these kind of simplified models
00:59:20.120 | of our own brains have been able to do some successful
00:59:25.120 | perception work, pattern recognition work,
00:59:27.560 | and now with alpha zero and so on, do some clever things.
00:59:30.880 | What are your thoughts about the promise,
00:59:33.160 | limitations of this piece?
00:59:35.720 | - Great.
00:59:36.560 | I think there are a number of very important insights,
00:59:42.080 | very important lessons we can already draw
00:59:44.680 | from these kinds of successes.
00:59:47.160 | One of them is when you look at the human brain
00:59:48.960 | and you see it's very complicated, 10 to the 11 neurons,
00:59:51.480 | and there are all these different kinds of neurons
00:59:53.320 | and yada, yada, and there's been this long debate
00:59:55.040 | about whether the fact that we have dozens
00:59:57.200 | of different kinds is actually necessary for intelligence.
01:00:00.040 | We can now, I think, quite convincingly answer
01:00:03.360 | that question of no, it's enough to have just one kind.
01:00:07.640 | If you look under the hood of alpha zero,
01:00:09.960 | there's only one kind of neuron
01:00:11.080 | and it's a ridiculously simple mathematical thing.
01:00:15.000 | So it's just like in physics:
01:00:17.280 | if you have a gas with waves in it,
01:00:20.360 | it's not the detailed nature of the molecules that matters,
01:00:23.240 | it's the collective behavior somehow.
01:00:26.040 | Similarly, it's this higher level structure
01:00:30.720 | of the network that matters,
01:00:31.760 | not that you have 20 kinds of neurons.
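The "one kind of ridiculously simple neuron" referred to here is, in most modern networks, just a weighted sum of inputs passed through a simple nonlinearity. A minimal sketch; the numbers are arbitrary.

```python
import numpy as np


def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial neuron: a weighted sum of inputs passed through a ReLU.

    This single, simple building block, repeated millions of times and wired
    into a network, is essentially all that is under the hood of such systems.
    """
    return max(0.0, float(np.dot(weights, inputs) + bias))


x = np.array([0.2, -1.3, 0.7])   # arbitrary inputs
w = np.array([0.5, 0.1, -0.4])   # arbitrary weights
print(neuron(x, w, bias=0.05))
```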
01:00:34.080 | I think our brain is such a complicated mess
01:00:37.000 | because it wasn't evolved just to be intelligent,
01:00:41.720 | it was evolved to also be self-assembling
01:00:45.840 | and self-repairing and evolutionarily attainable.
01:00:52.000 | - Patches and so on.
01:00:53.560 | - So I think it's pretty, my hunch is
01:00:56.120 | that we're gonna understand how to build AGI
01:00:58.340 | before we fully understand how our brains work.
01:01:00.920 | Just like we understood how to build flying machines
01:01:04.960 | long before we were able to build a mechanical bird.
01:01:07.800 | - Yeah, that's right, you've given the example exactly
01:01:12.360 | of mechanical birds and airplanes
01:01:14.000 | and airplanes do a pretty good job of flying
01:01:16.180 | without really mimicking bird flight.
01:01:18.560 | - And even now after a hundred years later,
01:01:20.920 | did you see the TED talk with this German mechanical bird?
01:01:23.880 | - I heard you mention it.
01:01:24.720 | - Check it out, it's amazing.
01:01:26.520 | But even after that, right,
01:01:27.760 | we still don't fly in mechanical birds
01:01:29.360 | because it turned out the way we came up with was simpler
01:01:32.760 | and it's better for our purposes.
01:01:33.840 | And I think it might be the same there.
01:01:35.280 | That's one lesson.
01:01:37.520 | Another lesson, which is more what our paper was about.
01:01:42.520 | First, I as a physicist thought it was fascinating
01:01:45.840 | how there's a very close mathematical relationship
01:01:48.240 | actually between artificial neural networks
01:01:50.800 | and a lot of things that we've studied in physics
01:01:54.560 | that go by nerdy names
01:01:55.640 | like the renormalization group equation
01:01:57.520 | and Hamiltonians and yada, yada, yada.
01:01:59.880 | And when you look a little more closely at this,
01:02:05.720 | you have, at first I was like,
01:02:10.720 | whoa, there's something crazy here that doesn't make sense.
01:02:13.520 | 'Cause we know that if you even want to build
01:02:18.520 | a super simple neural network,
01:02:21.080 | tell apart cat pictures and dog pictures, right?
01:02:23.600 | That you can do that very, very well now.
01:02:25.640 | But if you think about it a little bit,
01:02:29.080 | you convince yourself it must be impossible
01:02:30.800 | because if I have one megapixel,
01:02:33.680 | even if each pixel is just black or white,
01:02:35.840 | there's two to the power of 1 million possible images,
01:02:38.760 | which is way more than there are atoms in our universe.
01:02:40.920 | So in order to, and then for each one of those,
01:02:45.000 | I have to assign a number,
01:02:46.400 | which is the probability that it's a dog.
01:02:48.960 | So an arbitrary function of images
01:02:51.000 | is a list of more numbers than there are atoms
01:02:55.800 | in our universe.
01:02:56.760 | So clearly I can't store that under the hood of my GPU
01:02:59.760 | or my computer, yet somehow it works.
01:03:03.000 | So what does that mean?
01:03:04.000 | Well, it means that out of all of the problems
01:03:07.480 | that you could try to solve with a neural network,
01:03:10.680 | almost all of them are impossible to solve
01:03:15.400 | with a reasonably sized one.
01:03:16.920 | But then what we showed in our paper
01:03:19.720 | was that the fraction of all the problems
01:03:24.720 | that you could possibly pose,
01:03:28.800 | that we actually care about given the laws of physics,
01:03:31.840 | is also an infinitesimally tiny little part.
01:03:34.880 | And amazingly, they're basically the same part.
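The counting argument is easy to check: a one-megapixel black-and-white image has 2^1,000,000 possible configurations, roughly 10^301,030, versus the usual rough estimate of about 10^80 atoms in the observable universe. A quick sanity check of that arithmetic:

```python
from math import log10

MEGAPIXEL = 1_000_000   # one million binary (black/white) pixels
ATOMS_LOG10 = 80        # rough standard estimate: ~10^80 atoms in the universe

# The number of possible images is 2**MEGAPIXEL; compare orders of magnitude
# without ever materializing that absurdly large integer.
images_log10 = MEGAPIXEL * log10(2)    # log10(2**1e6) ~ 301,030

print(f"possible 1-megapixel B/W images: ~10^{images_log10:,.0f}")
print(f"atoms in the observable universe: ~10^{ATOMS_LOG10}")
print("difference in orders of magnitude:", round(images_log10 - ATOMS_LOG10))
```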
01:03:37.800 | - Yeah, it's almost like our world was created for,
01:03:39.920 | I mean, they kind of come together.
01:03:41.400 | - Yeah, you could say maybe
01:03:44.240 | the world was created for us,
01:03:45.440 | but I have a more modest interpretation,
01:03:47.320 | which is that instead evolution endowed us
01:03:50.360 | with neural networks precisely for that reason.
01:03:53.120 | 'Cause this particular architecture,
01:03:54.640 | as opposed to the one in your laptop,
01:03:56.040 | is very, very well adapted to solving the kind of problems
01:04:02.040 | that nature kept presenting our ancestors with, right?
01:04:05.560 | So it makes sense that why do we have a brain
01:04:08.120 | in the first place?
01:04:09.280 | It's to be able to make predictions
01:04:10.600 | about the future and so on.
01:04:12.880 | So if we had a sucky system,
01:04:14.240 | which could never solve it, it wouldn't have evolved.
01:04:17.480 | So this is, I think, a very beautiful fact.
01:04:22.480 | We also realize that there's been earlier work
01:04:29.080 | on why deeper networks are good,
01:04:32.120 | but we were able to show an additional cool fact there,
01:04:34.760 | which is that even incredibly simple problems,
01:04:38.440 | like suppose I give you a thousand numbers
01:04:41.160 | and ask you to multiply them together,
01:04:43.440 | you can write a few lines of code, boom, done, trivial.
01:04:46.740 | If you just try to do that with a neural network
01:04:49.600 | that has only one single hidden layer in it,
01:04:52.520 | you can do it, but you're gonna need
01:04:56.200 | two to the power of a thousand neurons.
01:04:59.160 | - Yeah. - To multiply
01:05:00.080 | a thousand numbers, which is again,
01:05:01.440 | more neurons than there are atoms in our universe.
01:05:03.200 | So that's fascinating.
01:05:05.520 | But if you allow yourself,
01:05:08.160 | make it a deep network of many layers,
01:05:11.360 | you only need 4,000 neurons.
01:05:13.240 | It's perfectly feasible.
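The depth intuition can be illustrated with ordinary code (this is an analogy, not the actual neuron-level construction in the paper): multiplying a thousand numbers pairwise, layer by layer, needs only about log2(1000), roughly 10, layers and on the order of a thousand multiplications in total, whereas a single flat layer that had to represent the product function directly blows up exponentially.

```python
import numpy as np

rng = np.random.default_rng(1)
values = rng.uniform(0.5, 1.5, size=1000)   # a thousand numbers to multiply


def product_by_layers(xs: np.ndarray) -> float:
    """Multiply pairwise, layer by layer: ~log2(n) layers, ~n multiplications total.

    This mimics how a deep network composes a hard function out of many cheap
    steps; it is an analogy for the depth argument, not the actual neuron-level
    construction in the paper discussed here.
    """
    xs = np.asarray(xs, dtype=float)
    layers = 0
    while len(xs) > 1:
        if len(xs) % 2 == 1:              # pad odd lengths with a multiplicative 1
            xs = np.append(xs, 1.0)
        xs = xs[0::2] * xs[1::2]          # one "layer" of pairwise products
        layers += 1
    print("layers used:", layers)
    return float(xs[0])


print("matches np.prod:", np.isclose(product_by_layers(values), np.prod(values)))
```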
01:05:16.400 | - That's really interesting.
01:05:17.960 | Yeah, so on another architecture type,
01:05:21.020 | I mean, you mentioned Schrodinger's equation
01:05:22.720 | and what are your thoughts about quantum computing
01:05:27.240 | and the role of this kind of computational unit
01:05:32.240 | in creating an intelligence system?
01:05:34.880 | - In some Hollywood movies,
01:05:36.560 | that I will not mention by name
01:05:39.520 | 'cause I don't wanna spoil them,
01:05:41.080 | the way they get AGI is building a quantum computer.
01:05:44.440 | - Yeah.
01:05:45.520 | - Because the word quantum sounds cool and so on.
01:05:47.640 | - That's right.
01:05:48.480 | - First of all, I think we don't need quantum computers
01:05:52.920 | to build AGI.
01:05:54.960 | I suspect your brain is not a quantum computer
01:05:59.320 | in any profound sense.
01:06:00.720 | - So you don't-
01:06:02.520 | - I even wrote a paper about that many years ago.
01:06:04.400 | I calculated the decoherence,
01:06:06.040 | so-called decoherence time,
01:06:08.200 | how long it takes until the quantum computerness
01:06:10.360 | of what your neurons are doing gets erased
01:06:13.440 | by just random noise from the environment.
01:06:18.040 | And it's about 10 to the minus 21 seconds.
01:06:21.400 | So as cool as it would be
01:06:23.720 | to have a quantum computer in my head,
01:06:25.120 | I don't think that fast.
01:06:26.440 | On the other hand,
01:06:28.440 | there are very cool things you could do
01:06:33.120 | with quantum computers,
01:06:34.280 | or I think we'll be able to do soon
01:06:37.560 | when we get bigger ones,
01:06:39.440 | that might actually help machine learning
01:06:41.040 | do even better than the brain.
01:06:43.240 | So for example,
01:06:45.720 | one, this is just a moonshot,
01:06:50.840 | but learning is very much
01:06:55.840 | the same thing as search.
01:07:00.520 | If you're trying to train a neural network
01:07:03.240 | to really learn,
01:07:04.600 | to do something really well,
01:07:06.320 | you have some loss function,
01:07:07.360 | you have a bunch of knobs you can turn,
01:07:10.440 | represented by a bunch of numbers,
01:07:12.160 | and you're trying to tweak them
01:07:12.980 | so that it becomes as good as possible at this thing.
01:07:15.160 | So if you think of a landscape
01:07:18.480 | with some valley,
01:07:19.680 | where each dimension of the landscape
01:07:22.160 | corresponds to some number you can change,
01:07:24.120 | you're trying to find the minimum.
01:07:25.680 | And it's well known that
01:07:26.800 | if you have a very high dimensional landscape,
01:07:29.080 | complicated things,
01:07:29.960 | it's super hard to find the minimum, right?
01:07:32.120 | Quantum mechanics is amazingly good at this.
01:07:37.280 | If I want to know what's the lowest energy state
01:07:39.200 | this water can possibly have,
01:07:40.720 | incredibly hard to compute,
01:07:43.560 | but nature will happily figure this out for you
01:07:46.440 | if you just cool it down,
01:07:47.680 | make it very, very cold.
01:07:49.000 | If you put a ball somewhere,
01:07:51.960 | it'll roll down to its minimum,
01:07:53.320 | and this happens metaphorically
01:07:55.320 | in the energy landscape too.
01:07:57.360 | And quantum mechanics even uses some clever tricks,
01:08:00.320 | which today's machine learning systems don't.
01:08:03.360 | Like, if you're trying to find the minimum
01:08:05.240 | and you get stuck in a little local minimum here,
01:08:07.960 | in quantum mechanics,
01:08:08.800 | you can actually tunnel through the barrier
01:08:11.440 | and get unstuck again.
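The "landscape" picture here is exactly what gradient-based training does: treat the parameters as coordinates, the loss as altitude, and keep stepping downhill. A minimal one-dimensional sketch, on a made-up double-well loss, of how plain gradient descent can get stuck in a shallow local minimum, the situation where quantum tunneling (or, classically, tricks like random restarts) could in principle help:

```python
# Plain gradient descent on a made-up 1-D "double well" loss: a shallow local
# minimum near x = -1 and a deeper global minimum near x = +1. Purely
# illustrative; this is not a model of any actual quantum annealing procedure.

def loss(x: float) -> float:
    return (x**2 - 1.0) ** 2 - 0.3 * x        # double well, tilted so the wells differ


def grad(x: float) -> float:
    return 4.0 * x * (x**2 - 1.0) - 0.3       # derivative of the loss above


def descend(x: float, lr: float = 0.01, steps: int = 2000) -> float:
    for _ in range(steps):
        x -= lr * grad(x)                     # step downhill along the gradient
    return x


# Starting on the left, descent settles into the shallow local minimum and never
# reaches the deeper one: it cannot "tunnel" through the barrier between them.
x_left = descend(-1.5)
x_right = descend(+1.5)
print(f"start at -1.5 -> x = {x_left:.3f}, loss = {loss(x_left):.3f}")
print(f"start at +1.5 -> x = {x_right:.3f}, loss = {loss(x_right):.3f}")
```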
01:08:13.040 | - That's really interesting.
01:08:15.400 | - Yeah, so it may be, for example,
01:08:17.200 | we'll one day use quantum computers
01:08:20.160 | that help train neural networks better.
01:08:23.800 | - That's really interesting.
01:08:24.760 | Okay, so as a component of kind of the learning process,
01:08:28.080 | for example.
01:08:28.920 | - Yeah.
01:08:29.800 | - Let me ask, sort of wrapping up here a little bit,
01:08:33.480 | let me return to the questions of our human nature
01:08:37.880 | and love, as I mentioned.
01:08:40.400 | So do you think,
01:08:42.040 | you mentioned sort of a helper robot,
01:08:46.360 | but you can think of also personal robots.
01:08:49.160 | Do you think the way we human beings fall in love
01:08:52.880 | and get connected to each other
01:08:55.120 | is possible to achieve in an AI system,
01:08:58.080 | in human level AI intelligence system?
01:09:00.400 | Do you think we would ever see that kind of connection?
01:09:03.760 | Or, you know, in all this discussion
01:09:06.200 | about solving complex goals,
01:09:08.520 | is this kind of human social connection,
01:09:10.800 | do you think that's one of the goals
01:09:12.600 | on the peaks and valleys that,
01:09:15.040 | with the rising sea levels, that we'll be able to achieve?
01:09:17.400 | Or do you think that's something that's ultimately,
01:09:20.080 | or at least in the short term,
01:09:21.800 | relative to the other goals is not achievable?
01:09:23.680 | - I think it's all possible.
01:09:25.160 | And I mean,
01:09:27.640 | there's a very wide range of guesses, as you know,
01:09:30.880 | among AI researchers, when we're going to get AGI.
01:09:33.760 | Some people, you know, like our friend, Rodney Brooks,
01:09:37.680 | says it's going to be hundreds of years at least.
01:09:41.080 | And then there are many others
01:09:42.240 | who think it's going to happen much sooner.
01:09:44.080 | And recent polls,
01:09:45.520 | maybe half or so of AI researchers think
01:09:49.040 | we're going to get AGI within decades.
01:09:50.920 | So if that happens, of course,
01:09:53.720 | then I think these things are all possible.
01:09:56.160 | But in terms of whether it will happen,
01:09:58.160 | I think we shouldn't spend so much time asking,
01:10:01.720 | what do we think will happen in the future?
01:10:04.240 | As if we are just some sort of pathetic,
01:10:06.280 | passive bystanders, you know,
01:10:08.040 | waiting for the future to happen to us.
01:10:10.320 | Hey, we're the ones creating this future, right?
01:10:12.680 | So we should be proactive about it
01:10:16.640 | and ask ourselves what sort of future
01:10:17.960 | we would like to have happen.
01:10:19.160 | - That's right.
01:10:20.000 | - Trying to make it like that.
01:10:21.560 | Well, would I prefer
01:10:22.640 | just some sort of incredibly boring, zombie-like future
01:10:25.280 | where there's all these mechanical things happening
01:10:26.880 | and there's no passion, no emotion, maybe no experience even?
01:10:30.040 | No, I would, of course, much rather prefer it
01:10:33.720 | if all the things that we find,
01:10:35.640 | that we value the most about humanity
01:10:40.480 | are subjective experience, passion, inspiration, love.
01:10:44.520 | If we can create a future where those things do exist,
01:10:49.520 | I think ultimately it's not our universe
01:10:54.600 | giving meaning to us,
01:10:55.600 | it's us giving meaning to our universe.
01:10:58.040 | And if we build more advanced intelligence,
01:11:01.920 | let's make sure we build it in such a way
01:11:03.760 | that meaning is part of it.
01:11:09.200 | - A lot of people that seriously study this problem
01:11:11.480 | and think of it from different angles
01:11:13.680 | have trouble, because in the majority of cases,
01:11:16.960 | the futures they think through that could happen
01:11:19.280 | are the ones that are not beneficial to humanity.
01:11:22.320 | - Right.
01:11:23.160 | - And so, yeah, so what are your thoughts?
01:11:25.760 | What should people do?
01:11:29.480 | I really don't want people to be terrified.
01:11:32.120 | What's a way for people to think about it
01:11:35.120 | in a way that, in a way we can solve it
01:11:38.040 | and we can make it better?
01:11:39.640 | - No, I don't think panicking is gonna help in any way.
01:11:43.000 | It's not gonna increase chances of things going well either.
01:11:45.920 | Even if you are in a situation where there is a real threat,
01:11:48.440 | does it help if everybody just freaks out?
01:11:50.760 | - Right.
01:11:51.600 | - No, of course not.
01:11:52.720 | I think, yeah, there are of course ways
01:11:56.640 | in which things can go horribly wrong.
01:11:58.540 | First of all, it's important when we think about this thing,
01:12:03.720 | about the problems and risks,
01:12:05.320 | to also remember how huge the upsides can be
01:12:07.200 | if we get it right, right?
01:12:08.480 | Everything we love about society and civilization
01:12:12.400 | is a product of intelligence.
01:12:13.400 | So if we can amplify our intelligence
01:12:15.320 | with machine intelligence and no longer lose our loved ones
01:12:18.760 | to what we're told is an incurable disease
01:12:21.080 | and things like this, of course, we should aspire to that.
01:12:24.800 | So that can be a motivator, I think,
01:12:26.640 | reminding ourselves that the reason we try to solve problems
01:12:29.120 | is not just because we're trying to avoid gloom,
01:12:33.520 | but because we're trying to do something great.
01:12:35.800 | But then in terms of the risks,
01:12:37.680 | I think the really important question is to ask,
01:12:42.680 | what can we do today that will actually help
01:12:45.600 | make the outcome good, right?
01:12:47.520 | And dismissing the risk is not one of them.
01:12:50.160 | I find it quite funny often when I'm in discussion panels
01:12:55.000 | about these things, how the people who work for companies
01:13:00.000 | are always like, oh, nothing to worry about,
01:13:03.340 | nothing to worry about, nothing to worry about.
01:13:04.980 | And it's only academics who sometimes express concerns.
01:13:09.840 | That's not surprising at all if you think about it.
01:13:13.000 | Upton Sinclair quipped, right,
01:13:15.320 | that it's hard to make a man believe in something
01:13:18.160 | when his income depends on not believing in it.
01:13:20.240 | And frankly, we know a lot of these people in companies
01:13:24.200 | that they're just as concerned as anyone else.
01:13:26.380 | But if you're the CEO of a company,
01:13:28.600 | that's not something you want to go on record saying
01:13:30.400 | when you have silly journalists
01:13:31.680 | who are going to put a picture of a Terminator robot
01:13:34.920 | when they quote you.
01:13:35.840 | So the issues are real.
01:13:39.160 | And the way I think about what the issue is,
01:13:42.100 | is basically, the real choice we have is,
01:13:47.040 | first of all, are we going to just dismiss this, the risks,
01:13:51.040 | and say, well, let's just go ahead and build machines
01:13:54.660 | that can do everything we can do better and cheaper.
01:13:57.740 | Let's just make ourselves obsolete as fast as possible.
01:14:00.400 | What could possibly go wrong?
01:14:01.900 | That's one attitude.
01:14:03.640 | The opposite attitude, I think, is to say,
01:14:05.720 | here's this incredible potential.
01:14:08.960 | Let's think about what kind of future
01:14:12.120 | we're really, really excited about.
01:14:14.840 | What are the shared goals that we can really aspire towards?
01:14:18.680 | And then let's think really hard
01:14:19.760 | about how we can actually get there.
01:14:21.960 | And it's to start with,
01:14:22.880 | don't start thinking about the risks.
01:14:24.360 | Start thinking about the goals.
01:14:25.840 | - Goals, yeah.
01:14:26.900 | - And then when you do that,
01:14:28.360 | then you can think about the obstacles you want to avoid.
01:14:30.680 | I often get students coming in right here into my office
01:14:33.000 | for career advice.
01:14:34.280 | I always ask them this very question.
01:14:35.720 | Where do you want to be in the future?
01:14:38.080 | If all she can say is, oh, maybe I'll have cancer.
01:14:40.780 | Maybe I'll get run over by a truck.
01:14:42.640 | - Focus on the obstacles instead of the goals.
01:14:44.400 | - She's just going to end up a paranoid hypochondriac.
01:14:47.040 | Whereas if she comes in with fire in her eyes
01:14:50.040 | and is like, I want to be there.
01:14:51.980 | And then we can talk about the obstacles
01:14:54.080 | and see how we can circumvent them.
01:14:55.920 | That's, I think, a much, much healthier attitude.
01:14:58.680 | - That's really well put.
01:15:01.080 | And I feel it's very challenging to come up with a vision
01:15:06.080 | for the future, which we are unequivocally excited about.
01:15:10.720 | I'm not just talking now in the vague terms,
01:15:12.760 | like, yeah, let's cure cancer, fine.
01:15:14.840 | Talking about what kind of society do we want to create?
01:15:18.320 | What do we want it to mean to be human in the age of AI,
01:15:22.960 | in the age of AGI?
01:15:24.280 | So if we can have this conversation,
01:15:27.840 | broad, inclusive conversation,
01:15:30.560 | and gradually start converging towards some future
01:15:34.480 | that with some direction, at least,
01:15:36.600 | that we want to steer towards,
01:15:37.920 | then we'll be much more motivated
01:15:40.560 | to constructively take on the obstacles.
01:15:42.360 | And I think if I had to,
01:15:45.680 | if I try to wrap this up in a more succinct way,
01:15:49.080 | I think we can all agree already now
01:15:53.920 | that we should aspire to build AGI,
01:15:59.160 | that doesn't overpower us, but that empowers us.
01:16:04.160 | - And think of the many various ways that can do that,
01:16:08.760 | whether that's from my side of the world
01:16:11.200 | of autonomous vehicles.
01:16:12.920 | I'm personally actually from the camp
01:16:14.920 | that believes human-level intelligence is required
01:16:17.600 | to achieve something like vehicles
01:16:20.680 | that would actually be something we would enjoy using
01:16:24.080 | and being part of.
01:16:25.320 | So that's the one example.
01:16:26.320 | And certainly there's a lot of other types of robots
01:16:28.440 | in medicine and so on.
01:16:31.080 | So focusing on those and then coming up with the obstacles,
01:16:34.040 | coming up with the ways that that can go wrong
01:16:36.120 | and solving those one at a time.
01:16:38.360 | - And just because you can build an autonomous vehicle,
01:16:41.720 | even if you could build one that would drive just fine,
01:16:45.280 | maybe there are some things in life
01:16:46.880 | that we would actually want to do ourselves.
01:16:48.560 | - That's right.
01:16:49.640 | - Like for example,
01:16:51.560 | if you think of our society as a whole,
01:16:53.200 | there's some things that we find very meaningful to do.
01:16:57.360 | And that doesn't mean we have to stop doing them
01:16:59.800 | just because machines can do them better.
01:17:02.160 | I'm not gonna stop playing tennis
01:17:04.240 | just the day someone built a tennis robot and beat me.
01:17:07.520 | - People are still playing chess and even go.
01:17:09.760 | - Yeah, and in the very near term,
01:17:14.200 | even some people are advocating basic income to replace jobs.
01:17:19.000 | But if the government is gonna be willing
01:17:20.960 | to just hand out cash to people for doing nothing,
01:17:24.200 | then one should also seriously consider
01:17:26.000 | whether the government should also just hire
01:17:27.800 | a lot more teachers and nurses and the kind of jobs
01:17:30.680 | which people often find great fulfillment in doing.
01:17:34.600 | I get very tired of hearing politicians saying,
01:17:36.480 | oh, we can't afford hiring more teachers,
01:17:39.480 | but we're gonna maybe have basic income.
01:17:41.680 | If we can have more serious research and thought
01:17:44.160 | into what gives meaning to our lives,
01:17:46.360 | the jobs give so much more than income.
01:17:48.880 | And then think about in the future,
01:17:53.440 | what are the roles that we wanna have people in,
01:17:58.440 | continually feeling empowered by machines?
01:18:03.160 | - And I think sort of, I come from Russia,
01:18:06.240 | from the Soviet Union.
01:18:07.360 | And I think for a lot of people in the 20th century,
01:18:10.280 | going to the moon, going to space was an inspiring thing.
01:18:14.200 | I feel like the universe of the mind,
01:18:18.200 | so AI, understanding, creating intelligence
01:18:21.000 | is that for the 21st century.
01:18:23.360 | So it's really surprising.
01:18:24.520 | And I've heard you mention this.
01:18:25.760 | It's really surprising to me,
01:18:27.520 | both on the research funding side,
01:18:29.360 | that it's not funded as greatly as it could be,
01:18:31.880 | but most importantly on the politician side,
01:18:34.840 | that it's not part of the public discourse,
01:18:36.640 | except in the killer bots, Terminator kind of view,
01:18:40.880 | that people are not yet, I think,
01:18:43.800 | perhaps excited by the possible positive future
01:18:46.760 | that we can build together.
01:18:48.200 | - And we should be, because politicians usually
01:18:51.000 | just focus on the next election cycle, right?
01:18:53.360 | The single most important thing I feel we humans have learned
01:18:57.240 | in the entire history of science
01:18:59.400 | is that we have been the masters of underestimation.
01:19:02.160 | We underestimated the size of our cosmos,
01:19:07.160 | again and again, realizing that everything we thought existed
01:19:10.280 | was just a small part of something grander, right?
01:19:12.360 | Planet, solar system, a galaxy,
01:19:15.800 | clusters of galaxies, universe.
01:19:18.520 | And we now know that the future has just
01:19:23.200 | so much more potential than our ancestors
01:19:25.880 | could ever have dreamt of.
01:19:27.680 | This cosmos, imagine if all of Earth
01:19:32.360 | was completely devoid of life,
01:19:35.480 | except for Cambridge, Massachusetts.
01:19:38.600 | Wouldn't it be kind of lame if all we ever aspired to
01:19:42.720 | was to stay in Cambridge, Massachusetts forever
01:19:45.600 | and then go extinct in one week,
01:19:47.200 | even though Earth was going to continue on for longer?
01:19:49.760 | That sort of attitude I think we have now
01:19:52.840 | on the cosmic scale, life can flourish on Earth,
01:19:57.840 | not for four years, but for billions of years.
01:20:00.880 | I can even tell you about how to move it out of harm's way
01:20:02.920 | when the sun gets too hot.
01:20:04.520 | And then we have so much more resources out here,
01:20:09.520 | which today, maybe there are a lot of other planets
01:20:12.480 | with bacteria or cow-like life on them,
01:20:15.000 | but most of this, all this opportunity
01:20:19.560 | seems as far as we can tell to be largely dead,
01:20:22.520 | like the Sahara desert.
01:20:23.640 | And yet we have the opportunity to help life flourish
01:20:28.560 | around this for billions of years.
01:20:30.360 | So let's quit squabbling about
01:20:32.760 | whether some little border should be drawn
01:20:36.560 | one mile to the left or right,
01:20:38.520 | and look up into the skies and realize,
01:20:41.120 | hey, we can do such incredible things.
01:20:44.120 | - Yeah, and that's, I think, why it's really exciting
01:20:46.720 | that you and others are connected
01:20:49.520 | with some of the work Elon Musk is doing,
01:20:51.960 | because he's literally going out into that space,
01:20:54.560 | really exploring our universe, and it's wonderful.
01:20:57.080 | - That is exactly why Elon Musk is so misunderstood, right?
01:21:02.080 | People misconstrue him as some kind of pessimistic doomsayer.
01:21:05.080 | The reason he cares so much about AI safety
01:21:07.720 | is because he, more than almost anyone else,
01:21:11.000 | appreciates these amazing opportunities that we'll squander
01:21:14.400 | if we wipe ourselves out here on Earth.
01:21:16.720 | And we're not just gonna wipe out the next generation,
01:21:19.760 | but all generations,
01:21:20.680 | and this incredible opportunity that's out there,
01:21:23.920 | and that would really be a waste.
01:21:25.520 | And AI, for people who think
01:21:29.520 | that it would be better to do without technology,
01:21:32.600 | let me just mention that if we don't improve our technology,
01:21:36.440 | the question isn't whether humanity is gonna go extinct.
01:21:39.440 | The question is just whether we're gonna get taken out
01:21:41.200 | by the next big asteroid, or the next super volcano,
01:21:44.840 | or something else dumb that we could easily prevent
01:21:48.320 | with more tech, right?
01:21:49.880 | And if we want life to flourish throughout the cosmos,
01:21:53.200 | AI is the key to it.
01:21:54.800 | As I mentioned in a lot of detail in my book right there,
01:21:59.880 | even many of the most inspired sci-fi writers,
01:22:04.880 | I feel, have totally underestimated the opportunities
01:22:09.160 | for space travel, especially to other galaxies,
01:22:11.240 | because they weren't thinking about the possibility of AGI,
01:22:15.360 | which just makes it so much easier.
01:22:17.520 | - Right, yeah.
01:22:18.440 | So that goes to your view of AGI that enables our progress,
01:22:23.440 | that enables a better life.
01:22:25.800 | So that's a beautiful way to put it,
01:22:28.320 | and something to strive for.
01:22:29.960 | So Max, thank you so much.
01:22:31.440 | Thank you for your time today.
01:22:32.560 | It's been awesome.
01:22:33.560 | - Thank you so much.
01:22:34.400 | - Thanks.
01:22:35.240 | (speaking in foreign language)
01:22:36.240 | (laughing)
01:22:38.560 | (upbeat music)