
Max Tegmark: AI and Physics | Lex Fridman Podcast #155


Chapters

0:00 Introduction
2:49 AI and physics
16:07 Can AI discover new laws of physics?
24:57 AI safety
42:33 Extinction of human species
53:31 How to fix fake news and misinformation
75:05 Autonomous weapons
90:28 The man who prevented nuclear war
100:36 Elon Musk and AI
114:14 AI alignment
120:16 Consciousness
129:20 Richard Feynman
133:30 Machine learning and computational physics
144:28 AI and creativity
155:42 Aliens
171:25 Mortality

Whisper Transcript

00:00:00.000 | The following is a conversation with Max Tegmark,
00:00:02.840 | his second time on the podcast.
00:00:04.760 | In fact, the previous conversation
00:00:07.120 | was episode number one of this very podcast.
00:00:10.960 | He is a physicist and artificial intelligence researcher
00:00:14.800 | at MIT, co-founder of the Future of Life Institute,
00:00:18.840 | and author of "Life 3.0:
00:00:21.360 | Being Human in the Age of Artificial Intelligence."
00:00:24.560 | He's also the head of a bunch of other
00:00:26.560 | huge fascinating projects,
00:00:28.440 | and has written a lot of different things
00:00:30.560 | that you should definitely check out.
00:00:32.120 | He has been one of the key humans
00:00:34.480 | who has been outspoken about long-term
00:00:36.400 | existential risks of AI,
00:00:38.160 | and also its exciting possibilities
00:00:40.480 | and solutions to real world problems.
00:00:42.880 | Most recently at the intersection of AI and physics,
00:00:46.400 | and also in re-engineering the algorithms
00:00:49.960 | that divide us by controlling the information we see,
00:00:53.160 | and thereby creating bubbles
00:00:55.000 | and all other kinds of complex social phenomena
00:00:58.240 | that we see today.
00:00:59.640 | In general, he's one of the most passionate
00:01:01.440 | and brilliant people I have the fortune of knowing.
00:01:04.360 | I hope to talk to him many more times
00:01:06.200 | on this podcast in the future.
00:01:08.280 | Quick mention of our sponsors,
00:01:10.000 | the Jordan Harbinger Show,
00:01:12.200 | Four Sigmatic Mushroom Coffee,
00:01:14.360 | BetterHelp Online Therapy,
00:01:16.200 | and ExpressVPN.
00:01:18.480 | So the choices, wisdom, caffeine, sanity, or privacy.
00:01:23.480 | Choose wisely, my friends.
00:01:25.000 | And if you wish, click the sponsor links below
00:01:27.480 | to get a discount and to support this podcast.
00:01:30.560 | As a side note, let me say that
00:01:32.160 | many of the researchers in the machine learning
00:01:35.400 | and artificial intelligence communities
00:01:37.760 | do not spend much time thinking deeply
00:01:40.400 | about existential risks of AI.
00:01:42.720 | Because our current algorithms are seen as useful but dumb,
00:01:46.160 | it's difficult to imagine how they may become destructive
00:01:49.240 | to the fabric of human civilization
00:01:51.240 | in the foreseeable future.
00:01:53.000 | I understand this mindset, but it's very troublesome.
00:01:56.120 | To me, this is both a dangerous
00:01:58.320 | and uninspiring perspective,
00:02:00.480 | reminiscent of a lobster sitting in a pot of lukewarm water
00:02:03.960 | that a minute ago was cold.
00:02:06.160 | I feel a kinship with this lobster.
00:02:08.640 | I believe that already the algorithms
00:02:10.560 | that drive our interaction on social media
00:02:12.960 | have an intelligence and power
00:02:15.000 | that far outstrip the intelligence and power
00:02:17.360 | of any one human being.
00:02:19.200 | Now really is the time to think about this,
00:02:21.640 | to define the trajectory of the interplay of technology
00:02:24.920 | and human beings in our society.
00:02:26.960 | I think that the future of human civilization
00:02:29.680 | very well may be at stake over this very question
00:02:32.840 | of the role of artificial intelligence in our society.
00:02:36.240 | If you enjoy this thing, subscribe on YouTube,
00:02:38.160 | review it on Apple Podcasts, follow on Spotify,
00:02:40.960 | support on Patreon, or connect with me on Twitter,
00:02:43.840 | @lexfridman.
00:02:45.260 | And now, here's my conversation with Max Tegmark.
00:02:48.840 | So people might not know this,
00:02:51.440 | but you were actually episode number one of this podcast
00:02:55.280 | just a couple of years ago, and now we're back.
00:02:59.280 | And it so happens that a lot of exciting things
00:03:01.800 | happened in both physics and artificial intelligence,
00:03:05.600 | both fields that you're super passionate about.
00:03:08.480 | Can we try to catch up to some of the exciting things
00:03:11.640 | happening in artificial intelligence,
00:03:14.040 | especially in the context of the way it's cracking open
00:03:17.740 | the different problems of the sciences?
00:03:21.180 | - Yeah, I'd love to, especially now as we start 2021 here.
00:03:25.760 | It's a really fun time to think about
00:03:27.400 | what were the biggest breakthroughs in AI?
00:03:30.840 | Not the ones necessarily that media wrote about,
00:03:33.040 | but that really matter.
00:03:34.440 | And what does that mean for our ability
00:03:37.000 | to do better science?
00:03:38.640 | What does it mean for our ability
00:03:41.240 | to help people around the world?
00:03:44.320 | And what does it mean for new problems
00:03:47.720 | that they could cause if we're not smart enough
00:03:49.640 | to avoid them?
00:03:50.480 | You know, what do we learn basically from this?
00:03:53.260 | - Yes, absolutely.
00:03:54.100 | So one of the amazing things you're part of
00:03:56.220 | is the AI Institute for Artificial Intelligence
00:03:58.980 | and Fundamental Interactions.
00:04:02.020 | What's up with this institute?
00:04:03.580 | What are you working on?
00:04:05.020 | What are you thinking about?
00:04:06.300 | - The idea is something I'm very on fire with,
00:04:10.420 | which is basically AI meets physics.
00:04:13.340 | And you know, it's been almost five years now
00:04:16.740 | since I shifted my own MIT research
00:04:19.680 | from physics to machine learning.
00:04:22.120 | And in the beginning, I noticed a lot of my colleagues,
00:04:24.200 | even though they were polite about it,
00:04:25.760 | were like kind of, "What is Max doing?
00:04:29.120 | What is this weird stuff?"
00:04:30.320 | - He's lost his mind.
00:04:32.280 | - But then gradually, I, together with some colleagues,
00:04:36.600 | were able to persuade more and more
00:04:39.080 | of the other professors in our physics department
00:04:42.640 | to get interested in this.
00:04:44.140 | And now we got this amazing NSF center,
00:04:47.620 | so 20 million bucks for the next five years,
00:04:50.620 | MIT and a bunch of neighboring universities here also.
00:04:54.420 | And I noticed now those colleagues
00:04:56.500 | who were looking at me funny have stopped asking
00:04:58.860 | what the point is of this,
00:05:01.620 | because it's becoming more clear.
00:05:03.700 | And I really believe that, of course,
00:05:06.780 | AI can help physics a lot to do better physics,
00:05:10.720 | but physics can also help AI a lot,
00:05:14.280 | both by building better hardware.
00:05:17.600 | My colleague, Marin Soljačić, for example,
00:05:19.840 | is working on an optical chip
00:05:22.960 | for much faster machine learning
00:05:24.400 | where the computation is done
00:05:26.420 | not by moving electrons around,
00:05:29.000 | but by moving photons around,
00:05:31.340 | dramatically less energy use, faster, better.
00:05:33.820 | We can also help AI a lot, I think,
00:05:38.600 | by having a different set of tools
00:05:43.600 | and a different, maybe more audacious attitude.
00:05:46.680 | AI has, to a significant extent,
00:05:51.040 | been an engineering discipline
00:05:52.480 | where you're just trying to make things that work
00:05:55.080 | and being more interested in maybe selling them
00:05:57.520 | than in figuring out exactly how they work
00:06:01.200 | and proving theorems about that they will always work.
00:06:04.560 | Contrast that with physics.
00:06:06.080 | When Elon Musk sends a rocket
00:06:08.600 | to the International Space Station,
00:06:11.020 | they didn't just train with machine learning,
00:06:12.800 | oh, let's fire it a little bit more to the left,
00:06:14.720 | a bit more to the right,
00:06:15.560 | oh, that also missed, let's try here.
00:06:17.600 | No, we figured out Newton's laws of gravitation
00:06:21.840 | and other things
00:06:24.040 | and got a really deep fundamental understanding.
00:06:26.440 | And that's what gives us such confidence in rockets.
00:06:31.480 | And my vision is that in the future,
00:06:36.480 | all machine learning systems
00:06:38.200 | that actually have impact on people's lives
00:06:40.640 | will be understood at a really, really deep level.
00:06:43.720 | So we trust them not 'cause some sales rep told us to,
00:06:47.560 | but because they've earned our trust.
00:06:49.400 | And really safety-critical things even prove
00:06:53.200 | that they will always do what we expect them to do.
00:06:56.200 | That's very much the physics mindset.
00:06:57.640 | So it's interesting, if you look at big breakthroughs
00:07:00.760 | that have happened in machine learning this year,
00:07:03.200 | from dancing robots, it's pretty fantastic,
00:07:08.800 | not just because it's cool,
00:07:09.880 | but if you just think about not that many years ago,
00:07:13.480 | this YouTube video at this DARPA challenge
00:07:16.120 | where the MIT robot comes out of the car and face plants,
00:07:19.800 | how far we've come in just a few years.
00:07:24.400 | Similarly, AlphaFold2,
00:07:29.560 | crushing the protein folding problem.
00:07:31.760 | We can talk more about implications
00:07:33.640 | for medical research and stuff,
00:07:34.960 | but hey, that's huge progress.
00:07:38.320 | You can look at GPT-3 that can spout off English texts,
00:07:44.840 | which sometimes really, really blows you away.
00:07:49.440 | You can look at Google DeepMind's MuZero,
00:07:53.480 | which doesn't just kick our butt in Go and Chess and Shogi,
00:07:58.480 | but also in all these Atari games,
00:08:00.360 | and you don't even have to teach it the rules now.
00:08:03.560 | What all of those have in common is,
00:08:05.840 | besides being powerful,
00:08:07.480 | is we don't fully understand how they work.
00:08:10.000 | And that's fine if it's just some dancing robots,
00:08:13.760 | and the worst thing that can happen is they face plant,
00:08:17.040 | or if they're playing Go,
00:08:18.600 | and the worst thing that can happen
00:08:19.680 | is that they make a bad move and lose the game.
00:08:22.040 | It's less fine if that's what's controlling
00:08:25.400 | your self-driving car or your nuclear power plant.
00:08:28.720 | And we've seen already that even though Hollywood
00:08:33.680 | had all these movies where they try to make us worry
00:08:35.720 | about the wrong things, like machines turning evil,
00:08:39.160 | the actual bad things that have happened with automation
00:08:42.840 | have not been machines turning evil.
00:08:45.560 | They've been caused by overtrust
00:08:48.400 | in things we didn't understand
00:08:50.160 | as well as we thought we did.
00:08:51.400 | Even very simple automated systems,
00:08:54.280 | like what Boeing put into the 737 MAX,
00:08:59.120 | killed a lot of people.
00:09:00.400 | Was it that that little simple system was evil?
00:09:02.920 | Of course not.
00:09:03.920 | But we didn't understand it as well as we should have.
00:09:07.360 | - And we trusted without understanding.
00:09:10.600 | - Exactly.
00:09:11.440 | - Hence the overtrust.
00:09:12.400 | - We didn't even understand that we didn't understand.
00:09:15.680 | The humility is really at the core of being a scientist.
00:09:19.800 | I think step one, if you wanna be a scientist,
00:09:21.880 | is don't ever fool yourself into thinking
00:09:24.160 | you understand things when you actually don't.
00:09:26.120 | - Yes.
00:09:26.960 | - Right?
00:09:27.800 | - That's probably good advice for humans in general.
00:09:29.480 | - I think humility in general can do us good.
00:09:31.320 | But in science, it's so spectacular.
00:09:33.080 | Like why did we have the wrong theory of gravity
00:09:35.880 | ever from Aristotle onward until Galileo's time?
00:09:40.520 | Why would we believe something so dumb
00:09:42.520 | as that if I throw this water bottle,
00:09:44.600 | it's gonna go up with constant speed
00:09:47.280 | until it realizes that its natural motion is down.
00:09:49.760 | - It changes its mind.
00:09:51.040 | - Because people just kind of assumed Aristotle was right,
00:09:55.320 | he's an authority, we understand that.
00:09:57.680 | Why did we believe things like
00:09:59.320 | that the sun is going around the earth?
00:10:01.880 | Why did we believe that time flows at the same rate
00:10:04.600 | for everyone until Einstein?
00:10:06.400 | Same exact mistake over and over again.
00:10:08.560 | We just weren't humble enough to acknowledge
00:10:11.760 | that we actually didn't know for sure.
00:10:13.920 | We assumed we knew.
00:10:15.720 | So we didn't discover the truth
00:10:17.280 | because we assumed there was nothing there
00:10:18.840 | to be discovered, right?
00:10:20.560 | There was something to be discovered about the 737 MAX.
00:10:24.360 | And if you had been a bit more suspicious
00:10:26.520 | and tested it better, we would have found it.
00:10:28.680 | And it's the same thing with most harm
00:10:30.640 | that's been done by automation so far, I would say.
00:10:33.720 | So I don't know if you, did you hear of a company
00:10:35.520 | called Knight Capital?
00:10:37.200 | - No.
00:10:38.040 | - So good, that means you didn't invest in them earlier.
00:10:40.520 | (both laughing)
00:10:42.080 | They deployed this automated trading system.
00:10:44.440 | - Yes.
00:10:45.600 | - All nice and shiny.
00:10:46.960 | They didn't understand it as well as they thought.
00:10:49.480 | And it went about losing 10 million bucks per minute
00:10:53.040 | for 44 minutes straight.
00:10:54.720 | - No.
00:10:55.560 | - Until someone presumably was like,
00:10:56.960 | "Oh no, shut this off."
00:10:58.760 | You know, was it evil?
00:11:00.480 | No, it was again, misplaced trust.
00:11:03.480 | Something they didn't fully understand, right?
00:11:05.240 | And there have been so many,
00:11:08.480 | even when people have been killed by robots,
00:11:11.000 | which is quite rare still, but in factory accidents,
00:11:14.280 | it's in every single case been not malice,
00:11:17.560 | just that the robot didn't understand
00:11:19.120 | that a human is different from an auto part or whatever.
00:11:22.200 | So this is where I think there's so much opportunity
00:11:28.040 | for a physics approach, where you just aim
00:11:31.400 | for a higher level of understanding.
00:11:33.680 | And if you look at all these systems that we talked about,
00:11:37.040 | from reinforcement learning systems and dancing robots
00:11:42.040 | to all these neural networks that power GPT-3
00:11:46.240 | and Go playing software and stuff,
00:11:49.560 | they're all basically black boxes,
00:11:52.640 | much like not so different
00:11:54.480 | from if you teach a human something,
00:11:55.880 | you have no idea how their brain works, right?
00:11:58.080 | Except the human brain at least has been error corrected
00:12:02.200 | during many, many centuries of evolution
00:12:04.440 | in a way that some of these systems have not, right?
00:12:07.480 | And my MIT research is entirely focused
00:12:10.600 | on demystifying this black box.
00:12:12.760 | Intelligible intelligence is my slogan.
00:12:15.960 | - That's a good line, intelligible intelligence.
00:12:18.440 | - Yeah, that we shouldn't settle for something
00:12:20.360 | that seems intelligent, but it should be intelligible
00:12:23.080 | so that we actually trust it because we understand it, right?
00:12:26.520 | Like, again, Elon trusts his rockets
00:12:28.920 | because he understands Newton's laws
00:12:30.640 | and thrust and how everything works.
00:12:32.880 | And let me tell you why,
00:12:35.160 | can I tell you why I'm optimistic about this?
00:12:36.880 | - Yes.
00:12:37.720 | - I think we've made a bit of a mistake
00:12:41.280 | where some people still think that somehow
00:12:44.280 | we're never gonna understand neural networks
00:12:46.520 | and we're just gonna have to learn to live with this.
00:12:49.520 | It's this very powerful black box.
00:12:52.240 | Basically, for those who haven't spent time
00:12:55.880 | building their own, it's super simple what happens inside.
00:12:59.000 | You send in a long list of numbers
00:13:01.280 | and then you do a bunch of operations on them,
00:13:04.880 | multiply by matrices, et cetera, et cetera,
00:13:06.960 | and some other numbers come out, that's the output of it.
00:13:09.920 | And then there are a bunch of knobs you can tune.
00:13:13.600 | And when you change them, it affects the computation,
00:13:16.720 | the input-output relation.
00:13:18.120 | And then you just give the computer some definition of good
00:13:21.000 | and it keeps optimizing these knobs
00:13:22.640 | until it performs as good as possible.
00:13:24.720 | And often you go like, wow, that's really good.
00:13:27.120 | This robot can dance,
00:13:28.680 | or this machine is beating me at chess now.
00:13:30.840 | And in the end, you have something
00:13:33.440 | which even though you can look inside it,
00:13:35.120 | you have very little idea of how it works.
00:13:38.680 | You can print out tables
00:13:40.240 | of all the millions of parameters in there.
00:13:43.200 | Is it crystal clear now how it's working?
00:13:44.880 | No, of course not, right?
00:13:46.840 | Many of my colleagues seem willing to settle for that.
00:13:48.920 | And I'm like, no, that's like the halfway point.
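As a rough illustration of the picture described here, a list of numbers goes in, gets multiplied by matrices with a nonlinearity in between, and a list of numbers comes out, with the matrix entries playing the role of the tunable knobs. A minimal sketch (the layer sizes and the tanh squashing are arbitrary choices for illustration, not anything from the conversation):

```python
import numpy as np

# The weight matrices W1 and W2 are the "knobs" that training keeps tuning.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # knobs for the first matrix multiply
W2 = rng.normal(size=(1, 8))   # knobs for the second matrix multiply

def network(x, W1, W2):
    """Send in a list of numbers, multiply by matrices, squash, get numbers out."""
    hidden = np.tanh(W1 @ x)   # elementwise nonlinearity between the matrix multiplies
    return W2 @ hidden

x = rng.normal(size=4)          # the list of input numbers
print(network(x, W1, W2))       # the output numbers
```

Printing out all of W1 and W2 is the "table of millions of parameters" problem in miniature: you can see every number and still have little idea of what the function is doing.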
00:13:51.720 | Some have even gone as far as sort of guessing
00:13:57.560 | that the inscrutability of this
00:14:00.480 | is where some of the power comes from
00:14:02.760 | and some sort of mysticism.
00:14:05.040 | I think that's total nonsense.
00:14:06.800 | I think the real power of neural networks
00:14:10.240 | comes not from inscrutability,
00:14:12.160 | but from differentiability.
00:14:15.040 | And what I mean by that is simply that
00:14:17.960 | the output changes only smoothly if you tweak your knobs.
00:14:23.880 | And then you can use all these powerful methods
00:14:26.640 | we have for optimization in science.
00:14:28.360 | We can just tweak them a little bit
00:14:29.520 | and see did that get better or worse?
00:14:31.680 | That's the fundamental idea of machine learning,
00:14:33.920 | that the machine itself can keep optimizing
00:14:36.080 | until it gets better.
00:14:37.240 | Suppose you wrote this algorithm instead in Python
00:14:41.880 | or some other programming language.
00:14:43.640 | And then what the knobs did was
00:14:45.600 | they just changed random letters in your code.
00:14:48.280 | Now it would just epically fail.
00:14:51.440 | You change one thing and instead of saying print,
00:14:53.400 | it says sint, syntax error.
00:14:56.800 | You don't even know, was that for the better
00:14:58.720 | or for the worse, right?
00:14:59.880 | This to me is, this is what I believe
00:15:02.440 | is the fundamental power of neural networks.
00:15:05.240 | - And just to clarify, the changing of the different letters
00:15:07.440 | in a program would not be a differentiable process.
00:15:10.640 | - It would make it an invalid program, typically.
00:15:13.760 | And then you wouldn't even know if you changed more letters
00:15:16.760 | if it would make it work again.
00:15:18.560 | - So that's the magic of neural networks,
00:15:22.120 | the inscrutability.
00:15:23.360 | - The differentiability, that every setting
00:15:26.040 | of the parameters is a program
00:15:27.320 | and you can tell is it better or worse.
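To make the contrast concrete, here is a minimal sketch of "tweak a knob a little and see if it got better or worse" on a toy one-knob model. The finite-difference gradient below stands in for the automatic differentiation real frameworks use; the data and step size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)   # toy data generated with the "true" knob = 3.0

def loss(w):
    """The 'definition of good' (lower is better): squared error on the toy data."""
    return np.mean((w * x - y) ** 2)

w, step, eps = 0.0, 0.1, 1e-5
for _ in range(100):
    # Because the loss changes smoothly with the knob, a small nudge tells us which way helps.
    grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    w -= step * grad
print(w)   # ends up close to 3.0
```

Mutating a random character of the program's source code, by contrast, gives no such "slightly better or slightly worse" signal; it usually just produces a syntax error.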
00:15:29.320 | - So you don't like the poetry of the mystery
00:15:32.840 | of neural networks as the source of its power?
00:15:35.120 | - I generally like poetry, but not in this case.
00:15:39.160 | It's so misleading and above all, it shortchanges us.
00:15:42.880 | It makes us underestimate the good things
00:15:46.440 | we can accomplish because, so what we've been doing
00:15:48.640 | in my group is basically step one,
00:15:51.440 | train the mysterious neural network to do something well.
00:15:54.900 | And then step two, do some additional AI techniques
00:15:59.580 | to see if we can now transform this black box
00:16:02.680 | into something equally intelligent
00:16:05.320 | that you can actually understand.
00:16:07.160 | So for example, I'll give you one example.
00:16:08.540 | This AI Feynman project that we just published.
00:16:11.600 | So we took the 100 most famous or complicated equations
00:16:16.600 | from one of my favorite physics textbooks.
00:16:21.160 | In fact, the one that got me into physics
00:16:22.600 | in the first place, the Feynman lectures on physics.
00:16:25.920 | And so you have a formula, maybe it has,
00:16:29.480 | what goes into the formula is six different variables
00:16:34.200 | and then what comes out is one.
00:16:36.000 | So then you can make like a giant Excel spreadsheet
00:16:38.080 | of seven columns.
00:16:39.420 | You put in just random numbers for the six columns
00:16:41.680 | for those six input variables and then you calculate
00:16:44.340 | with a formula of the seventh column, the output.
00:16:46.800 | So maybe it's like the force equals in the last column
00:16:50.440 | some function of the other.
00:16:51.680 | And now the task is, okay, if I don't tell you
00:16:53.860 | what the formula was, can you figure that out
00:16:57.340 | from looking at my spreadsheet I gave you?
00:16:59.020 | - Yes.
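A minimal sketch of the spreadsheet setup just described: six columns of random inputs and a seventh column computed from a hidden formula. The formula below (a simplified Newtonian-gravity expression) and the value ranges are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Six columns of random input values.
G, m1, m2, x, y, z = rng.uniform(1.0, 5.0, size=(6, n))
# The hidden formula fills in the seventh column.
F = G * m1 * m2 / (x**2 + y**2 + z**2)

table = np.column_stack([G, m1, m2, x, y, z, F])
np.savetxt("gravity_table.txt", table)   # a whitespace-separated "spreadsheet", target column last
print(table[:3])
```

The task is then to recover the formula for that last column from the table alone.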
00:17:00.100 | - This problem is called symbolic regression.
00:17:03.500 | If I tell you that the formula is what we call
00:17:06.260 | a linear formula, so it's just that the output is
00:17:09.020 | some sum of all the things you input, times some constants,
00:17:15.840 | that's the famous easy problem we can solve.
00:17:18.680 | We do it all the time in science and engineering.
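For the linear special case just mentioned, the recovery really is routine. A minimal sketch with ordinary least squares on synthetic data (the coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                      # six input columns
true_c = np.array([2.0, -1.0, 0.5, 0.0, 3.0, 1.5])
y = X @ true_c                                      # output = sum of the inputs times some constants

c, *_ = np.linalg.lstsq(X, y, rcond=None)           # ordinary least squares recovers the constants
print(np.round(c, 3))                               # -> [ 2.  -1.   0.5  0.   3.   1.5]
```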
00:17:21.340 | But the general one, if it's more complicated functions
00:17:24.500 | with logarithms or cosines or other math,
00:17:26.940 | it's a very, very hard one and probably impossible
00:17:30.600 | to do fast in general just because the number of formulas
00:17:34.620 | with n symbols just grows exponentially.
00:17:37.780 | Just like the number of passwords you can make
00:17:39.380 | grows dramatically with length.
00:17:41.320 | But we had this idea that if you first have
00:17:46.380 | a neural network that can actually approximate the formula,
00:17:48.980 | you just trained it, even if you don't understand
00:17:50.740 | how it works, that can be a first step
00:17:54.940 | towards actually understanding how it works.
00:17:57.340 | So that's what we do first.
00:17:59.420 | And then we study that neural network now
00:18:03.220 | and put in all sorts of other data that wasn't
00:18:05.340 | in the original training data and use that to discover
00:18:08.620 | simplifying properties of the formula.
00:18:11.420 | And that lets us break it apart often
00:18:13.140 | into many simpler pieces in a kind of divide
00:18:15.540 | and conquer approach.
00:18:17.420 | So we were able to solve all of those 100 formulas,
00:18:20.020 | discover them automatically, plus a whole bunch
00:18:22.140 | of other ones.
00:18:22.980 | It's actually kind of humbling to see that this code,
00:18:28.020 | which anyone who wants now, who's listening to this,
00:18:30.260 | can type pip install AI Feynman on the computer
00:18:33.700 | and run it.
00:18:34.540 | It can actually do what Johannes Kepler spent four years
00:18:38.020 | doing when he stared at Mars data.
00:18:40.180 | Until he's like, "Finally, Eureka, this is an ellipse!"
00:18:42.980 | This will do it automatically for you in one hour, right?
00:18:46.900 | Or Max Planck, he was looking at how much radiation
00:18:51.020 | comes out at different wavelengths from a hot object
00:18:54.140 | and discovered the famous blackbody formula.
00:18:57.180 | This discovers it automatically.
00:18:59.980 | I'm actually excited about seeing if we can discover
00:19:04.980 | not just old formulas again, but new formulas
00:19:09.660 | that no one has seen before.
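For anyone who wants to try this on a table like the one built earlier, here is a hedged sketch of how the released code is typically invoked. The package name (aifeynman), the function run_aifeynman, its argument order, and the expected file format (whitespace-separated columns with the target last) are taken from my recollection of the project's README and should be treated as assumptions to check against the current documentation:

```python
# Assumed API of the released AI Feynman package -- verify against its README before relying on it.
import aifeynman

aifeynman.run_aifeynman(
    "./",                 # directory containing the data file (assumed argument)
    "gravity_table.txt",  # whitespace-separated columns, target variable last (assumed format)
    30,                   # time budget for the brute-force symbolic search (assumed meaning)
    "14ops.txt",          # which set of mathematical operations to try (assumed)
    polyfit_deg=3,        # maximum degree for polynomial fits (assumed)
    NN_epochs=400,        # training epochs for the interpolating neural network (assumed)
)
```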
00:19:11.940 | - I do like this process of using kind of a neural network
00:19:14.660 | to find some basic insights and then dissecting
00:19:18.380 | the neural network to then gain the final.
00:19:21.620 | So that's, in that way, you're forcing
00:19:27.260 | the explainability issue of really trying to analyze
00:19:32.020 | a neural network for the things it knows
00:19:35.580 | in order to come up with the final, beautiful,
00:19:38.340 | simple theory underlying the initial system
00:19:42.220 | that you were looking at.
00:19:43.060 | - I love that.
00:19:44.180 | And the reason I'm so optimistic that it can be generalized
00:19:48.420 | to so much more is because that's exactly what we do
00:19:52.020 | as human scientists.
00:19:53.500 | Think of Galileo, whom we mentioned, right?
00:19:55.660 | I bet when he was a little kid,
00:19:57.060 | if his dad threw him an apple, he would catch it.
00:20:01.860 | Because he had a neural network in his brain
00:20:04.420 | that he had trained to predict the parabolic orbit
00:20:07.220 | of apples that are thrown under gravity.
00:20:09.900 | If you throw a tennis ball to a dog,
00:20:11.940 | it also has this same ability of deep learning
00:20:15.380 | to figure out how the ball is gonna move and catch it.
00:20:18.220 | But Galileo went one step further when he got older.
00:20:21.980 | He went back and was like, "Wait a minute.
00:20:24.100 | (Lex laughs)
00:20:26.100 | "I can write down a formula for this.
00:20:27.900 | "Y equals X squared, a parabola."
00:20:31.660 | And he helped revolutionize physics as we know it, right?
00:20:36.580 | - So there was a basic neural network in there
00:20:38.820 | from childhood that captured the experiences
00:20:43.380 | of observing different kinds of trajectories.
00:20:46.460 | And then he was able to go back in
00:20:48.260 | with another extra little neural network
00:20:51.020 | and analyze all those experiences
00:20:53.180 | and be like, "Wait a minute.
00:20:54.540 | "There's a deeper rule here."
00:20:56.300 | - Exactly.
00:20:57.140 | He was able to distill out in symbolic form
00:21:00.740 | what that complicated black box neural network was doing.
00:21:04.020 | Not only did the formula he got
00:21:06.940 | ultimately become more accurate,
00:21:09.100 | and similarly, this is how Newton got Newton's laws,
00:21:11.860 | which is why Elon can send rockets to the space station now.
00:21:15.660 | So it's not only more accurate,
00:21:17.260 | but it's also simpler, much simpler.
00:21:20.140 | And it's so simple that we can actually describe it
00:21:22.340 | to our friends and each other, right?
00:21:24.940 | We've talked about it just in the context of physics now,
00:21:28.780 | but hey, isn't this what we're doing
00:21:31.340 | when we're talking to each other also?
00:21:33.420 | We go around with our neural networks,
00:21:35.500 | just like dogs and cats and chipmunks and blue jays,
00:21:38.740 | and we experience things in the world.
00:21:41.860 | But then we humans do this additional step on top of that,
00:21:44.460 | where we then distill out certain high-level knowledge
00:21:48.780 | that we've extracted from this in a way
00:21:50.740 | that we can communicate it to each other
00:21:52.900 | in a symbolic form, in English in this case, right?
00:21:56.660 | So if we can do it,
00:21:59.240 | and we believe that we are information processing entities,
00:22:02.900 | then we should be able to make machine learning
00:22:04.860 | that does it also.
00:22:05.920 | - Well, do you think the entire thing could be learning?
00:22:10.140 | Because this dissection process, like for AI Feynman,
00:22:14.140 | the secondary stage feels like something like reasoning,
00:22:19.220 | and the initial step feels like more like
00:22:21.300 | the more basic kind of differentiable learning.
00:22:25.300 | Do you think the whole thing could be differentiable learning?
00:22:29.100 | Do you think the whole thing could be
00:22:30.500 | basically neural networks on top of each other?
00:22:32.340 | It's like turtles all the way down?
00:22:33.820 | Can it be neural networks all the way down?
00:22:35.940 | - I mean, that's a really interesting question.
00:22:37.900 | We know that in your case,
00:22:39.560 | it is neural networks all the way down,
00:22:41.320 | because that's all you have in your skull,
00:22:42.940 | is a bunch of neurons doing their thing, right?
00:22:45.900 | But if you ask the question more generally,
00:22:50.380 | what algorithms are being used in your brain?
00:22:54.180 | I think it's super interesting to compare.
00:22:56.180 | I think we've gone a little bit backwards historically,
00:22:58.740 | because we humans first discovered good old-fashioned AI,
00:23:03.100 | the logic-based AI that we often call GOFAI,
00:23:06.900 | for good old-fashioned AI.
00:23:08.300 | And then more recently, we did machine learning,
00:23:12.780 | because it required bigger computers,
00:23:14.220 | so we had to discover it later.
00:23:16.020 | So we think of machine learning with neural networks
00:23:19.300 | as the modern thing,
00:23:20.580 | and the logic-based AI as the old-fashioned thing.
00:23:23.180 | But if you look at evolution on Earth,
00:23:27.820 | it's actually been the other way around.
00:23:29.900 | I would say that, for example,
00:23:32.780 | an eagle has a better vision system than I have,
00:23:37.020 | and dogs are just as good at catching tennis balls as I am.
00:23:42.460 | All this stuff which is done by training a neural network
00:23:46.060 | and not interpreting it in words,
00:23:48.020 | is something so many of our animal friends can do,
00:23:51.900 | at least as well as us, right?
00:23:53.780 | What is it that we humans can do
00:23:55.460 | that the chipmunks and the eagles cannot?
00:23:58.180 | It's more to do with this logic-based stuff, right,
00:24:01.620 | where we can extract out information in symbols,
00:24:06.620 | in language, and now even with equations,
00:24:10.260 | if you're a scientist, right?
00:24:12.260 | So basically what happened was,
00:24:13.620 | first we built these computers
00:24:14.900 | that could multiply numbers real fast and manipulate symbols,
00:24:18.180 | and we felt they were pretty dumb.
00:24:20.660 | And then we made neural networks
00:24:22.740 | that can see as well as a cat can
00:24:25.060 | and do a lot of this inscrutable black box neural networks.
00:24:29.240 | What we humans can do also
00:24:31.860 | is put the two together in a useful way.
00:24:34.140 | - Yes, in our own brain.
00:24:36.260 | - Yes, in our own brain.
00:24:37.420 | So if we ever wanna get artificial general intelligence
00:24:40.980 | that can do all jobs as well as humans can, right,
00:24:45.180 | then that's what's gonna be required,
00:24:47.180 | to be able to combine the neural networks with symbolic,
00:24:52.180 | combine the old AI with the new AI in a good way.
00:24:55.340 | We do it in our brains,
00:24:57.380 | and there seems to be basically two strategies
00:24:59.860 | I see in industry now.
00:25:01.140 | One scares the heebie-jeebies out of me,
00:25:03.680 | and the other one I find much more encouraging.
00:25:05.940 | - Okay, which one?
00:25:07.180 | Can we break them apart?
00:25:08.420 | Which of the two? (laughs)
00:25:09.740 | - The one that scares the heebie-jeebies out of me
00:25:11.740 | is this attitude that we're just gonna make
00:25:13.280 | ever bigger systems that we still don't understand
00:25:15.940 | until they can be as smart as humans.
00:25:18.420 | What could possibly go wrong, right?
00:25:22.340 | I think it's just such a reckless thing to do.
00:25:24.260 | And unfortunately, if we actually succeed as a species
00:25:28.340 | in building artificial general intelligence
00:25:30.180 | while we still have no clue how it works,
00:25:31.940 | I think there's at least a 50% chance
00:25:35.260 | we're gonna be extinct before too long.
00:25:37.100 | It's just gonna be an utter epic own goal.
00:25:40.540 | - So it's that 44-minute losing money problem
00:25:45.380 | or the paperclip problem
00:25:47.380 | where we don't understand how it works,
00:25:49.500 | and it just, in a matter of seconds,
00:25:51.300 | runs away in some kind of direction
00:25:52.800 | that's going to be very problematic.
00:25:54.500 | - Even long before you have to worry
00:25:56.820 | about the machines themselves
00:25:59.180 | somehow deciding to do things to us,
00:26:02.660 | we have to worry about people using machines.
00:26:06.900 | that fall short of AGI, and power, to do bad things.
00:26:09.900 | I mean, just take a moment,
00:26:11.800 | and if anyone is not worried particularly about advanced AI,
00:26:18.100 | just take 10 seconds and just think about
00:26:21.060 | your least favorite leader on the planet right now.
00:26:23.780 | Don't tell me who it is.
00:26:25.220 | I'm gonna keep this apolitical.
00:26:26.840 | But just see the face in front of you,
00:26:28.900 | that person, for 10 seconds.
00:26:30.540 | Now imagine that that person has
00:26:32.980 | this incredibly powerful AI under their control
00:26:36.660 | and can use it to impose their will on the whole planet.
00:26:38.780 | How does that make you feel?
00:26:40.180 | - Yeah.
00:26:43.780 | Can we break that apart just briefly?
00:26:48.720 | For the 50% chance that we'll run
00:26:51.820 | to trouble with this approach,
00:26:53.680 | do you see the bigger worry in that leader
00:26:56.940 | or humans using the system to do damage,
00:27:00.700 | or are you more worried,
00:27:03.420 | and I think I'm in this camp,
00:27:05.440 | more worried about accidental,
00:27:07.540 | unintentional destruction of everything?
00:27:10.940 | So humans trying to do good,
00:27:12.980 | and in a way where everyone agrees it's kinda good,
00:27:17.500 | it's just that they're trying to do good
00:27:18.940 | without understanding.
00:27:20.160 | 'Cause I think every evil leader in history
00:27:22.580 | thought they're, to some degree,
00:27:24.500 | thought they were trying to do good.
00:27:25.700 | - Oh yeah, I'm sure Hitler thought he was doing good.
00:27:28.140 | - Yeah, Stalin.
00:27:29.620 | I've been reading a lot about Stalin.
00:27:31.220 | I'm sure Stalin, he legitimately thought
00:27:34.820 | that communism was good for the world,
00:27:36.700 | and that he was doing good.
00:27:37.880 | - I think Mao Zedong thought what he was doing
00:27:39.700 | with the Great Leap Forward was good too.
00:27:41.380 | Yeah.
00:27:42.220 | I'm actually concerned about both of those.
00:27:45.660 | Before, I promised to answer this in detail,
00:27:48.460 | but before we do that,
00:27:49.780 | let me finish answering the first question,
00:27:51.300 | 'cause I told you that there were two different routes
00:27:53.460 | we could get to artificial general intelligence,
00:27:55.140 | and one scares the heebie-jeebies out of me,
00:27:57.140 | which is this one where we build something,
00:27:59.300 | we just say bigger neural networks,
00:28:01.020 | ever more hardware,
00:28:02.060 | and just train the heck out of more data,
00:28:03.820 | and poof, now it's very powerful.
00:28:06.180 | That, I think, is the most unsafe and reckless approach.
00:28:11.880 | The alternative to that is the intelligent,
00:28:15.340 | intelligible intelligence approach instead,
00:28:18.060 | where we say neural networks is just a tool
00:28:23.060 | like for the first step to get the intuition,
00:28:27.060 | but then we're gonna spend also serious resources
00:28:30.660 | on other AI techniques for demystifying this black box
00:28:35.260 | and figuring out what it's actually doing
00:28:37.620 | so we can convert it into something
00:28:39.820 | that's equally intelligent,
00:28:41.040 | but that we actually understand what it's doing.
00:28:44.100 | Maybe we can even prove theorems about it,
00:28:45.980 | that this car here will never be hacked when it's driving,
00:28:50.140 | because here's the proof.
00:28:51.420 | There is a whole science of this.
00:28:55.180 | It doesn't work for neural networks.
00:28:57.100 | There are big black boxes,
00:28:58.140 | but it works well in certain other kinds of codes.
00:29:01.020 | That approach, I think, is much more promising.
00:29:05.180 | That's exactly why I'm working on it, frankly,
00:29:07.180 | not just because I think it's cool for science,
00:29:09.460 | but because I think the more we understand these systems,
00:29:14.120 | the better the chances that we can make them
00:29:16.740 | do the things that are good for us,
00:29:18.420 | that are actually intended, not unintended.
00:29:21.600 | - So you think it's possible to prove things
00:29:24.300 | about something as complicated as a neural network?
00:29:27.440 | That's the hope?
00:29:28.540 | - Well, ideally, there's no reason
00:29:30.820 | there has to be a neural network in the end either.
00:29:33.500 | Right?
00:29:34.340 | Like, we discovered that Newton's laws of gravity
00:29:36.580 | with neural network in Newton's head.
00:29:39.740 | - Yes.
00:29:40.580 | - But that's not the way it's programmed
00:29:41.500 | into the navigation system of Elon Musk's rocket anymore.
00:29:46.460 | - Right.
00:29:47.300 | - It's written in C++,
00:29:48.820 | or I don't know what language he uses exactly.
00:29:51.140 | - Yeah.
00:29:51.980 | - And then there are software tools
00:29:52.820 | called symbolic verification,
00:29:54.660 | DARPA and the US military
00:29:57.860 | has done a lot of really great research on this,
00:30:00.560 | 'cause they really want to understand
00:30:01.960 | that when they build weapon systems,
00:30:03.800 | they don't just go fire at random or malfunction, right?
00:30:07.600 | And there's even a whole operating system called seL4
00:30:11.520 | that's been developed by a DARPA grant
00:30:12.960 | where you can actually mathematically prove
00:30:16.200 | that this thing can never be hacked.
00:30:18.040 | - Wow.
00:30:20.400 | - One day, I hope that will be something you can say
00:30:22.680 | about the OS that's running on our laptops too.
00:30:25.220 | As you know, we're not there,
00:30:26.980 | but I think we should be ambitious, frankly.
00:30:29.900 | - Yeah.
00:30:30.740 | - And if we can use machine learning
00:30:34.220 | to help do the proofs and so on as well, right,
00:30:36.380 | then it's much easier to verify that a proof is correct
00:30:40.140 | than to come up with a proof in the first place.
00:30:43.060 | That's really the core idea here.
00:30:45.140 | If someone comes on your podcast and says
00:30:47.540 | they proved the Riemann hypothesis
00:30:49.820 | or some new sensational new theorem,
00:30:52.920 | it's much easier for someone else,
00:30:57.680 | take some smart math grad students to check,
00:31:00.000 | oh, there's an error here in equation five,
00:31:02.600 | or this really checks out
00:31:04.040 | than it was to discover the proof.
00:31:06.060 | - Yeah, although some of those proofs are pretty complicated,
00:31:09.040 | but yes, it's still nevertheless much easier
00:31:11.120 | to verify the proof.
00:31:12.920 | I love the optimism.
00:31:14.500 | You know, we kinda, even with the security of systems,
00:31:17.520 | there's a kinda cynicism that pervades people
00:31:21.780 | who think about this, which is like, oh, it's hopeless.
00:31:24.960 | I mean, in the same sense,
00:31:26.040 | exactly like you're saying with neural networks,
00:31:27.920 | oh, it's hopeless to understand what's happening.
00:31:30.480 | With security, people are just like,
00:31:32.080 | well, there's always going to be attack vectors,
00:31:37.080 | like ways to attack the system.
00:31:40.840 | But you're right, we're just very new
00:31:42.240 | with these computational systems.
00:31:44.120 | We're new with these intelligent systems,
00:31:46.440 | and it's not out of the realm of possibility.
00:31:49.600 | Just like people didn't understand the movement
00:31:51.900 | of the stars and the planets and so on.
00:31:53.740 | - Yeah.
00:31:54.580 | - It's entirely possible that within, hopefully soon,
00:31:58.340 | but it could be within 100 years,
00:32:00.420 | we start to have obvious laws of gravity
00:32:03.660 | about intelligence, and God forbid,
00:32:07.820 | about consciousness, too.
00:32:09.100 | That one is--
00:32:11.020 | - Agreed.
00:32:12.340 | I think, of course, if you're selling computers
00:32:15.300 | that get hacked a lot, that's in your interest as a company
00:32:17.340 | that people think it's impossible to make it safe,
00:32:19.400 | so nobody's going to get the idea of suing you.
00:32:21.140 | But I want to really inject optimism here.
00:32:23.700 | It's absolutely possible to do much better
00:32:29.540 | than we're doing now.
00:32:30.380 | And your laptop does so much stuff.
00:32:34.900 | You don't need the music player to be super safe
00:32:38.020 | in your future self-driving car, right?
00:32:42.200 | If someone hacks it and starts playing music you don't like,
00:32:45.200 | the world won't end.
00:32:47.940 | But what you can do is you can break out
00:32:49.600 | and say the drive computer that controls your safety
00:32:53.120 | must be completely physically decoupled entirely
00:32:55.960 | from the entertainment system,
00:32:57.640 | and it must physically be such that it can't take on
00:33:01.140 | over-the-air updates while you're driving.
00:33:03.240 | It can have, ultimately, some operating system on it
00:33:10.000 | which is symbolically verified and proven
00:33:12.320 | that it's always going to do what it's supposed to do.
00:33:17.880 | We can basically have,
00:33:19.100 | and companies should take that attitude too.
00:33:20.660 | They should look at everything they do
00:33:22.180 | and say what are the few systems in our company
00:33:25.880 | that threaten the whole life of the company
00:33:27.440 | if they get hacked, you know,
00:33:28.720 | and have the highest standards for them.
00:33:31.840 | And then they can save money by going for the El Cheapo,
00:33:34.600 | poorly understood stuff for the rest.
00:33:36.980 | This is very feasible, I think.
00:33:38.960 | And coming back to the bigger question
00:33:41.760 | that you worried about,
00:33:43.200 | that there'll be unintentional failures, I think,
00:33:46.320 | there are two quite separate risks here, right?
00:33:48.280 | We talked a lot about one of them,
00:33:49.600 | which is that the goals are noble of the human.
00:33:52.640 | The human says, "I want this airplane to not crash
00:33:56.920 | 'cause this is not Mohammed Atta
00:33:58.640 | now flying the airplane," right?
00:34:00.480 | And now there's this technical challenge
00:34:03.240 | of making sure that the autopilot
00:34:05.500 | is actually gonna behave as the pilot wants.
00:34:08.320 | If you set that aside, there's also the separate question.
00:34:13.360 | How do you make sure that the goals of the pilot
00:34:17.400 | are actually aligned with the goals of the passenger?
00:34:19.640 | How do you make sure very much more broadly
00:34:22.420 | that if we can all agree as a species
00:34:24.580 | that we would like things to kind of go well
00:34:26.120 | for humanity as a whole,
00:34:28.080 | that the goals are aligned here, the alignment problem.
00:34:31.480 | And yeah, there's been a lot of progress
00:34:35.960 | in the sense that there's suddenly
00:34:39.080 | huge amounts of research going on about it.
00:34:42.000 | I'm very grateful to Elon Musk
00:34:43.360 | for giving us that money five years ago
00:34:44.920 | so we could launch the first research program
00:34:46.640 | on technical AI safety and alignment.
00:34:49.480 | There's a lot of stuff happening.
00:34:51.240 | But I think we need to do more
00:34:54.100 | than just make sure little machines
00:34:55.640 | always do what their owners tell them to.
00:34:57.280 | That wouldn't have prevented September 11th
00:35:00.120 | if Mohammed Atta said, "Okay, autopilot,
00:35:03.000 | please fly into World Trade Center."
00:35:05.640 | And it's like, okay.
00:35:07.680 | That even happened in a different situation.
00:35:11.800 | There was this depressed pilot named Andreas Lubitz,
00:35:15.680 | who told his Germanwings passenger jet
00:35:17.600 | to fly into the Alps.
00:35:19.040 | He just told the computer to change the altitude
00:35:21.600 | to 100 meters or something like that.
00:35:23.280 | And you know what the computer said?
00:35:25.360 | Okay.
00:35:26.560 | And it had the freaking topographical map of the Alps
00:35:29.520 | in there, it had GPS, everything.
00:35:31.440 | No one had bothered teaching it
00:35:33.080 | even the basic kindergarten ethics of like,
00:35:35.560 | no, we never want airplanes to fly into mountains
00:35:39.560 | under any circumstances.
00:35:41.980 | And so we have to think beyond just the technical issues
00:35:46.980 | and think about how do we align in general incentives
00:35:52.100 | on this planet for the greater good?
00:35:54.740 | So starting with simple stuff like that,
00:35:56.580 | every airplane that has a computer in it
00:35:59.140 | should be taught whatever kindergarten ethics
00:36:01.900 | that's smart enough to understand.
00:36:03.220 | Like, no, don't fly into fixed objects
00:36:05.980 | if the pilot tells you to do so,
00:36:08.260 | then go on autopilot mode, send an email to the cops
00:36:12.980 | and land at the latest airport, nearest airport.
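The "kindergarten ethics" being described amounts to a hard precondition check on operator commands. A minimal sketch of the idea for the altitude example; every name, field, and threshold here is hypothetical, chosen only to show the shape of such a guard:

```python
from dataclasses import dataclass

@dataclass
class AltitudeCommand:
    target_altitude_m: float    # altitude the pilot asked for
    terrain_altitude_m: float   # terrain height along the commanded path, from the onboard map

def accept_altitude_command(cmd: AltitudeCommand, margin_m: float = 300.0) -> bool:
    """Hypothetical guard: refuse any commanded altitude that would fly into terrain."""
    return cmd.target_altitude_m >= cmd.terrain_altitude_m + margin_m

# A Germanwings-style command (descend to 100 m over the Alps) would simply be rejected.
print(accept_altitude_command(AltitudeCommand(target_altitude_m=100.0, terrain_altitude_m=2500.0)))  # False
```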
00:36:16.300 | Any car with a forward facing camera
00:36:19.660 | should just be programmed by the manufacturer
00:36:22.180 | so that it will never accelerate into a human ever.
00:36:25.040 | That would avoid things like the Nice attack
00:36:30.300 | and many horrible terrorist vehicle attacks
00:36:32.620 | where they deliberately did that, right?
00:36:35.140 | This was not some sort of thing,
00:36:36.540 | oh, you know, US and China, different views on,
00:36:39.860 | no, there was not a single car manufacturer
00:36:42.940 | in the world who wanted the cars to do this.
00:36:45.460 | They just hadn't thought to do the alignment.
00:36:47.180 | And if you look at more broadly,
00:36:49.180 | problems that happen on this planet,
00:36:50.980 | the vast majority have to do with poor alignment.
00:36:55.020 | I mean, think about, let's go back really big
00:36:58.380 | 'cause I know you're so good at that.
00:37:00.380 | - Let's go big, yeah.
00:37:01.220 | - Yeah, so long ago in evolution, we had these genes
00:37:05.100 | and they wanted to make copies of themselves.
00:37:07.620 | That's really all they cared about.
00:37:09.380 | So some genes said, hey, I'm gonna build a brain
00:37:14.380 | on this body I'm in so that I can get better
00:37:17.420 | at making copies of myself.
00:37:18.900 | And then they decided for their benefit
00:37:21.820 | to get copied more, to align your brain's incentives
00:37:24.860 | with their incentives.
00:37:26.020 | So it didn't want you to starve to death,
00:37:29.640 | so it gave you an incentive to eat
00:37:33.260 | and it wanted you to make copies of the genes.
00:37:36.580 | So it gave you incentive to fall in love
00:37:38.620 | and do all sorts of naughty things
00:37:42.460 | to make copies of itself, right?
00:37:45.340 | So that was successful value alignment done on the genes.
00:37:49.140 | They created something more intelligent than themselves,
00:37:51.620 | but they made sure to try to align the values.
00:37:54.140 | But then something went a little bit wrong
00:37:57.060 | against the idea of what the genes wanted
00:37:59.600 | because a lot of humans discovered,
00:38:01.700 | hey, yeah, we really like this business about sex
00:38:05.580 | that the genes have made us enjoy,
00:38:07.660 | but we don't wanna have babies right now.
00:38:10.260 | So we're gonna hack the genes and use birth control.
00:38:14.700 | And I really feel like drinking a Coca-Cola right now,
00:38:19.220 | but I don't wanna get a potbelly,
00:38:20.660 | so I'm gonna drink Diet Coke.
00:38:22.540 | We have all these things we've figured out
00:38:25.660 | because we're smarter than the genes,
00:38:26.980 | how we can actually subvert their intentions.
00:38:29.700 | So it's not surprising that we humans now,
00:38:33.500 | when we're in the role of these genes,
00:38:34.860 | creating other non-human entities with a lot of power,
00:38:37.700 | have to face the same exact challenge.
00:38:39.500 | How do we make other powerful entities
00:38:41.780 | have incentives that are aligned with ours
00:38:44.580 | so they won't hack them?
00:38:47.060 | Corporations, for example, right?
00:38:48.780 | We humans decided to create corporations
00:38:51.360 | because it can benefit us greatly.
00:38:53.520 | Now all of a sudden there's a supermarket.
00:38:55.180 | I can go buy food there.
00:38:56.340 | I don't have to hunt.
00:38:57.260 | Awesome.
00:38:59.580 | And then to make sure that this corporation
00:39:02.900 | would do things that were good for us and not bad for us,
00:39:05.980 | we created institutions to keep them in check.
00:39:08.300 | Like if the local supermarket sells poisonous food,
00:39:12.540 | then the owners of the supermarket
00:39:17.540 | have to spend some years reflecting behind bars, right?
00:39:22.180 | So we created incentives to align them.
00:39:25.720 | But of course, just like we were able to see
00:39:27.480 | through this thing and you,
00:39:29.460 | birth control, if you're a powerful corporation,
00:39:31.860 | you also have an incentive to try to hack the institutions
00:39:35.100 | that are supposed to govern you
00:39:36.340 | 'cause you ultimately as a corporation
00:39:38.180 | have an incentive to maximize your profit.
00:39:40.940 | Just like you have an incentive
00:39:42.080 | to maximize the enjoyment your brain has,
00:39:44.180 | not for your genes.
00:39:46.020 | So if they can figure out a way of bribing regulators,
00:39:50.460 | then they're gonna do that.
00:39:52.380 | In the US, we kind of caught onto that
00:39:54.420 | and made laws against corruption and bribery.
00:39:58.580 | Then in the late 1800s, Teddy Roosevelt realized that,
00:40:03.580 | no, we were still being kind of hacked
00:40:05.340 | 'cause the Massachusetts Railroad Companies
00:40:07.280 | had like a bigger budget than the state of Massachusetts
00:40:10.140 | and they were doing a lot of very corrupt stuff.
00:40:13.600 | So he did the whole trust busting thing
00:40:15.500 | to try to align these other non-human entities,
00:40:18.460 | the companies, again, more with the incentives
00:40:21.660 | of Americans as a whole.
00:40:23.040 | It's not surprising though that this is a battle
00:40:26.100 | you have to keep fighting.
00:40:27.180 | Now we have even larger companies than we ever had before.
00:40:31.460 | And of course, they're gonna try to, again,
00:40:34.340 | subvert the institutions, not because,
00:40:39.640 | I think people make a mistake of getting all too
00:40:43.100 | black-and-white, thinking about things in terms of good and evil,
00:40:47.940 | like arguing about whether corporations are good or evil
00:40:51.380 | or whether robots are good or evil.
00:40:54.020 | A robot isn't good or evil.
00:40:57.020 | It's a tool and you can use it for great things
00:40:59.300 | like robotic surgery or for bad things.
00:41:02.020 | And a corporation also is a tool, of course.
00:41:05.140 | And if you have good incentives to the corporation,
00:41:07.460 | it'll do great things like start a hospital
00:41:09.620 | or a grocery store.
00:41:10.980 | If you have really bad incentives,
00:41:12.680 | then it's gonna start maybe marketing addictive drugs
00:41:16.780 | to people and you'll have an opioid epidemic, right?
00:41:19.440 | It's all about, we should not make the mistake
00:41:25.620 | of getting into some sort of fairy tale,
00:41:27.380 | good, evil thing about corporations or robots.
00:41:30.500 | We should focus on putting the right incentives in place.
00:41:33.420 | My optimistic vision is that if we can do that,
00:41:35.740 | then we can really get good things.
00:41:38.460 | We're not doing so great with that right now,
00:41:40.580 | either on AI, I think, or on other intelligent,
00:41:44.380 | non-human entities like big companies.
00:41:46.540 | We just have a new Secretary of Defense
00:41:50.180 | that's gonna start up now in the Biden administration
00:41:53.620 | who was an active member of the board of Raytheon,
00:41:58.140 | for example. - I hope, yeah.
00:41:59.220 | - So I have nothing against Raytheon.
00:42:03.300 | I'm not a pacifist,
00:42:05.660 | but there's an obvious conflict of interest
00:42:08.540 | if someone is in the job where they decide
00:42:12.340 | who they're gonna contract with.
00:42:14.260 | And I think somehow we have,
00:42:16.700 | maybe we need another Teddy Roosevelt to come along again
00:42:19.540 | and say, "Hey, we want what's good for all Americans.
00:42:23.460 | "And we need to go do some serious realigning again
00:42:26.540 | "of the incentives that we're giving to these big companies.
00:42:29.820 | "And then we're gonna be better off."
00:42:33.900 | - It seems that naturally with human beings,
00:42:35.820 | just like you beautifully described the history
00:42:37.740 | of this whole thing, it all started with the genes,
00:42:40.780 | and they're probably pretty upset
00:42:42.660 | by all the unintended consequences that happened since.
00:42:45.600 | But it seems that it kinda works out.
00:42:48.700 | Like it's in this collective intelligence
00:42:51.140 | that emerges at the different levels.
00:42:53.500 | It seems to find sometimes last minute
00:42:56.920 | a way to realign the values or keep the values aligned.
00:43:00.940 | It's almost, it finds a way.
00:43:03.820 | Like different leaders, different humans pop up
00:43:07.580 | all over the place that reset the system.
00:43:10.700 | Do you want, I mean, do you have an explanation why that is?
00:43:15.260 | Or is that just survivor bias?
00:43:17.260 | And also, is that different,
00:43:19.620 | somehow fundamentally different than with AI systems,
00:43:23.100 | where you're no longer dealing with something
00:43:26.420 | that was a direct, maybe companies are the same,
00:43:30.180 | a direct byproduct of the evolutionary process?
00:43:33.340 | - I think there is one thing which has changed.
00:43:36.180 | That's why I'm not all optimistic.
00:43:40.280 | That's why I think there's about a 50% chance
00:43:42.260 | if we take the dumb route with artificial intelligence
00:43:46.100 | that humanity will be extinct in this century.
00:43:50.240 | First, just the big picture, yeah,
00:43:53.580 | companies need to have the right incentives.
00:43:56.720 | Even governments, right?
00:43:58.980 | We used to have governments,
00:44:00.900 | usually there were just some king, you know,
00:44:04.220 | who was the king because his dad was the king, you know?
00:44:07.180 | And then there were some benefits
00:44:10.580 | of having this powerful kingdom or empire of any sort,
00:44:15.280 | because then it could prevent a lot of local squabbles.
00:44:17.980 | So at least everybody in that region
00:44:19.380 | would stop warring against each other.
00:44:20.780 | And their incentives of different cities in the kingdom
00:44:24.180 | became more aligned, right?
00:44:25.140 | That was the whole selling point.
00:44:27.220 | Harari, Yuval Noah Harari, has a beautiful piece
00:44:31.500 | on how empires were collaboration enablers.
00:44:35.340 | And then we also, Harari says, invented money
00:44:37.620 | for that reason, so we could have better alignment
00:44:40.660 | and we could do trade even with people we didn't know.
00:44:44.140 | So this sort of stuff has been playing out
00:44:45.840 | since time immemorial, right?
00:44:47.840 | What's changed is that it happens on ever larger scales,
00:44:51.520 | right, technology keeps getting better
00:44:53.480 | because science gets better.
00:44:54.760 | So now we can communicate over larger distances,
00:44:57.600 | transport things fast over larger distances.
00:44:59.840 | And so the entities get ever bigger,
00:45:02.960 | but our planet is not getting bigger anymore.
00:45:05.480 | So in the past, you could have one experiment
00:45:08.120 | that just totally screwed up, like Easter Island,
00:45:12.020 | where they actually managed to have such poor alignment
00:45:15.220 | that when the people there went extinct,
00:45:17.660 | there was no one else to come back and replace them, right?
00:45:20.620 | If Elon Musk doesn't get us to Mars,
00:45:24.060 | and then we go extinct on a global scale,
00:45:27.740 | then we're not coming back.
00:45:29.020 | That's the fundamental difference.
00:45:31.540 | And that's a mistake I would rather that we don't make
00:45:34.740 | for that reason.
00:45:35.860 | In the past, of course, history is full of fiascos, right?
00:45:39.860 | But it was never the whole planet.
00:45:42.200 | And then, okay, now there's this nice uninhabited land here,
00:45:46.000 | some other people could move in and organize things better.
00:45:49.440 | This is different.
00:45:50.760 | The second thing which is also different
00:45:52.720 | is that technology gives us so much more empowerment,
00:45:57.720 | right, both to do good things and also to screw up.
00:46:00.560 | In the Stone Age, even if you had someone
00:46:02.960 | whose goals were really poorly aligned,
00:46:04.800 | like maybe he was really pissed off
00:46:06.720 | because his Stone Age girlfriend dumped him
00:46:08.760 | and he just wanted to,
00:46:09.940 | if he wanted to kill as many people as he could,
00:46:12.660 | how many could he really take out with a rock and a stick
00:46:15.180 | before he was overpowered, right?
00:46:17.220 | Just a handful, right?
00:46:18.960 | Now, with today's technology,
00:46:23.780 | if we have an accidental nuclear war
00:46:25.660 | between Russia and the US,
00:46:27.900 | which we've almost had about a dozen times,
00:46:31.100 | and then we have a nuclear winter,
00:46:32.340 | it could take out seven billion people,
00:46:34.780 | or six billion people, we don't know.
00:46:37.300 | So the scale of the damage we can do is so much bigger.
00:46:40.460 | And if,
00:46:41.300 | there's obviously no law of physics that says
00:46:46.060 | that technology will never get powerful enough
00:46:48.100 | that we could wipe out our species entirely.
00:46:51.740 | That would just be fantasy to think
00:46:53.640 | that science is somehow doomed
00:46:55.100 | to not get more powerful than that, right?
00:46:57.220 | And it's not at all unfeasible in our lifetime
00:47:00.260 | that someone could design a designer pandemic
00:47:03.100 | which spreads as easily as COVID,
00:47:04.660 | but just basically kills everybody.
00:47:06.880 | We already had smallpox.
00:47:08.500 | It killed one third of everybody who got it.
00:47:10.700 | - What do you think of the,
00:47:14.420 | here's an intuition, maybe it's completely naive,
00:47:16.820 | and this optimistic intuition I have,
00:47:18.980 | which it seems, and maybe it's a biased experience
00:47:22.860 | that I have, but it seems like the most brilliant people
00:47:25.940 | I've met in my life all are really
00:47:29.940 | like fundamentally good human beings.
00:47:33.700 | And not like naive good,
00:47:35.860 | like they really want to do good for the world
00:47:38.020 | in a way that, well, maybe is aligned
00:47:39.920 | to my sense of what good means.
00:47:41.820 | And so I have a sense that the,
00:47:45.260 | the people that will be defining
00:47:48.980 | the very cutting edge of technology,
00:47:51.020 | there'll be much more of the ones that are doing good
00:47:53.980 | versus the ones that are doing evil.
00:47:55.860 | So the race, I'm optimistic on the,
00:48:00.180 | us always like last minute coming up with a solution.
00:48:03.100 | So if there's an engineered pandemic
00:48:06.500 | that has the capability to destroy
00:48:09.300 | most of the human civilization,
00:48:11.660 | it feels like to me, either leading up to that before
00:48:15.900 | or as it's going on, there will be,
00:48:19.260 | we're able to rally the collective genius
00:48:22.500 | of the human species.
00:48:23.820 | I can tell by your smile that you're
00:48:26.140 | at least some percentage doubtful.
00:48:30.060 | But could that be a fundamental law of human nature?
00:48:35.020 | That evolution only creates,
00:48:36.860 | it creates like karma is beneficial,
00:48:40.900 | good is beneficial, and therefore we'll be all right.
00:48:44.300 | - I hope you're right.
00:48:46.780 | I really would love it if you're right,
00:48:48.700 | if there's some sort of law of nature
00:48:50.540 | that says that we always get lucky
00:48:51.820 | in the last second because of karma.
00:48:53.560 | But you know, I prefer,
00:48:57.620 | I prefer not playing it so close
00:49:00.980 | and gambling on that.
00:49:03.060 | And I think, in fact, I think it can be dangerous
00:49:06.500 | to have too strong faith in that
00:49:08.140 | because it makes us complacent.
00:49:10.780 | Like if someone tells you, you never have to worry
00:49:12.500 | about your house burning down,
00:49:13.740 | then you're not gonna put in a smoke detector
00:49:15.340 | 'cause why would you need to, right?
00:49:16.980 | Even if it's sometimes very simple precautions,
00:49:19.040 | we don't take them.
00:49:20.000 | If you're like, oh, the government is gonna take care
00:49:23.500 | of everything for us, I can always trust my politicians.
00:49:25.820 | We can always, we abdicate our own responsibility.
00:49:28.700 | I think it's a healthier attitude to say,
00:49:30.300 | yeah, maybe things will work out.
00:49:32.180 | Maybe I'm actually gonna have to myself step up
00:49:34.700 | and take responsibility.
00:49:36.320 | And the stakes are so huge.
00:49:39.660 | I mean, if we do this right,
00:49:41.420 | we can develop all this ever more powerful technology
00:49:44.860 | and cure all diseases and create a future
00:49:47.580 | where humanity is healthy and wealthy
00:49:49.300 | for not just the next election cycle,
00:49:50.920 | but like billions of years throughout our universe.
00:49:53.880 | That's really worth working hard for.
00:49:55.700 | And not just sitting and hoping
00:49:58.860 | for some sort of fairy tale karma.
00:50:00.380 | - Well, I just mean, so you're absolutely right.
00:50:02.460 | From the perspective of the individual,
00:50:03.940 | like for me, the primary thing should be
00:50:06.420 | to take responsibility and to build the solutions
00:50:10.580 | that your skillset allows you to build.
00:50:12.060 | - Yeah, which is a lot.
00:50:13.620 | I think we underestimate often very much
00:50:15.360 | how much good we can do.
00:50:17.300 | If you or anyone listening to this is completely confident
00:50:22.220 | that our government would do a perfect job
00:50:24.460 | on handling any future crisis with engineered pandemics
00:50:28.620 | or future AI, I actually-
00:50:31.020 | - The one or two people out there.
00:50:35.780 | - Based on what actually happened in 2020,
00:50:40.340 | do you feel that the government by and large
00:50:40.340 | around the world has handled this flawlessly?
00:50:43.340 | - That's a really sad and disappointing reality
00:50:45.780 | that hopefully is a wake up call for everybody.
00:50:49.340 | For the scientists, for the engineers,
00:50:52.820 | for the researchers in AI especially.
00:50:54.780 | It was disappointing to see how inefficient we were
00:50:59.780 | at collecting the right amount of data
00:51:04.660 | in a privacy-preserving way and spreading that data
00:51:07.660 | and utilizing that data to make decisions,
00:51:09.780 | all that kind of stuff.
00:51:10.980 | - Yeah, I think when something bad happens to me,
00:51:13.900 | I made myself a promise many years ago
00:51:17.780 | that I would not be a whiner.
00:51:22.420 | And when something bad happens to me,
00:51:24.180 | of course it's a process of disappointment,
00:51:27.780 | but then I try to focus on what did I learn from this
00:51:31.060 | that can make me a better person in the future?
00:51:33.100 | And there's usually something to be learned when I fail.
00:51:36.260 | And I think we should all ask ourselves,
00:51:38.740 | what can we learn from the pandemic
00:51:42.020 | about how we can do better in the future?
00:51:43.460 | And you mentioned there a really good lesson.
00:51:46.420 | We were not as resilient as we thought we were
00:51:50.540 | and we were not as prepared maybe as we wish we were.
00:51:54.020 | You can even see very stark contrast around the planet.
00:51:57.340 | South Korea, they have over 50 million people.
00:52:01.820 | Do you know how many deaths they have
00:52:03.060 | from COVID last time I checked?
00:52:04.620 | - No. - It's about 500.
00:52:07.220 | Why is that?
00:52:10.340 | Well, the short answer is that they had prepared.
00:52:15.340 | They were incredibly quick,
00:52:19.260 | incredibly quick to get on it with very rapid testing
00:52:23.500 | and contact tracing and so on,
00:52:25.540 | which is why they never had more cases
00:52:28.100 | than they could contact trace effectively.
00:52:30.100 | They never even had to have the kind of big lockdowns
00:52:32.100 | we had in the West.
00:52:33.740 | But the deeper answer to it,
00:52:36.460 | it's not just that the Koreans are somehow better people.
00:52:39.100 | The reason I think they were better prepared
00:52:40.860 | was because they had already had a pretty bad hit
00:52:45.380 | from the SARS outbreak, which never became a pandemic,
00:52:49.940 | something like 17 years ago, I think.
00:52:52.100 | So it was a kind of fresh memory that,
00:52:53.700 | we need to be prepared for pandemics.
00:52:55.980 | So they were, right?
00:52:57.020 | And so maybe this is a lesson here
00:53:01.260 | for all of us to draw from COVID
00:53:03.300 | that rather than just wait for the next pandemic
00:53:06.340 | or the next problem with AI getting out of control
00:53:09.820 | or anything else, maybe we should just actually
00:53:14.100 | set aside a tiny fraction of our GDP
00:53:16.820 | to have people very systematically do some horizon scanning
00:53:20.460 | and say, okay, what are the things that could go wrong?
00:53:23.340 | And let's duke it out and see which are the more likely ones
00:53:25.820 | and which are the ones that are actually actionable
00:53:28.780 | and then be prepared.
00:53:29.820 | - So one of the observations of disappointment, as the one little
00:53:36.540 | ant/human that I am, is the political division
00:53:40.940 | over information
00:53:45.540 | that I observed this year, that it seemed
00:53:48.940 | the discussion was less about
00:53:50.860 | sort of what happened and understanding what happened deeply
00:53:59.020 | and more about there being different truths out there.
00:54:04.060 | And it's like a argument,
00:54:05.420 | my truth is better than your truth.
00:54:07.620 | And it's like red versus blue or different,
00:54:10.460 | like it was like this ridiculous discourse
00:54:13.260 | that doesn't seem to get at any kind of notion of the truth.
00:54:16.540 | It's not like some kind of scientific process.
00:54:18.980 | Even science got politicized in ways
00:54:21.020 | that's very heartbreaking to me.
00:54:23.500 | You have an exciting project on the AI front
00:54:28.660 | of trying to rethink, you mentioned corporations,
00:54:33.660 | there's one of the other collective intelligence systems
00:54:37.340 | that have emerged through all of this is social networks.
00:54:40.540 | And just the spread, the internet,
00:54:42.580 | is the spread of information on the internet,
00:54:46.380 | our ability to share that information,
00:54:48.300 | there's all different kinds of news sources and so on.
00:54:50.620 | And so you said like that's from first principles,
00:54:53.180 | let's rethink how we think about the news,
00:54:57.300 | how we think about information.
00:54:59.060 | Can you talk about this amazing effort
00:55:02.500 | that you're undertaking?
00:55:03.660 | - Oh, I'd love to.
00:55:04.580 | This has been my big COVID project.
00:55:06.420 | I've spent nights and weekends on ever since the lockdown.
00:55:10.740 | To segue into this actually,
00:55:13.140 | let me come back to what you said earlier,
00:55:14.540 | that you had this hope that in your experience,
00:55:17.100 | people who you felt were very talented
00:55:18.860 | or often idealistic and wanted to do good.
00:55:21.260 | Frankly, I feel the same about all people by and large.
00:55:26.060 | There are always exceptions,
00:55:27.140 | but I think the vast majority of everybody,
00:55:29.420 | regardless of education and whatnot,
00:55:31.380 | really are fundamentally good, right?
00:55:34.260 | So how can it be that people still do
00:55:36.620 | so much nasty stuff, right?
00:55:39.060 | I think it has everything to do with this,
00:55:41.060 | with the information that we're given.
00:55:43.700 | If you go into Sweden 500 years ago
00:55:47.140 | and you start telling all the farmers
00:55:48.420 | that those Danes in Denmark, they're so terrible people,
00:55:51.900 | and we have to invade them
00:55:53.460 | because they've done all these terrible things
00:55:56.340 | that you can't fact check yourself.
00:55:58.300 | A lot of Swedes did that, right?
00:56:00.740 | And we're seeing so much of this today in the world,
00:56:05.740 | both geopolitically, where we are told
00:56:11.700 | that China is bad and Russia is bad and Venezuela is bad,
00:56:15.100 | and people in those countries are often told
00:56:17.100 | that we are bad.
00:56:18.380 | And we also see it at a micro level,
00:56:21.820 | where people are told that,
00:56:22.980 | "Oh, those who voted for the other party are bad people."
00:56:25.740 | It's not just an intellectual disagreement,
00:56:27.540 | but they're bad people.
00:56:30.660 | And we're getting ever more divided.
00:56:33.820 | And so how do you reconcile this
00:56:35.860 | with this intrinsic goodness in people?
00:56:40.300 | I think it's pretty obvious that it has, again,
00:56:42.620 | to do with this, with the information
00:56:44.500 | that we're fed and given, right?
00:56:47.300 | We evolved to live in small groups
00:56:50.620 | where you might know 30 people in total, right?
00:56:52.900 | So you then had a system that was quite good
00:56:56.300 | for assessing who you could trust and who you could not.
00:56:58.540 | And if someone told you that Joe there is a jerk,
00:57:03.500 | but you had interacted with him yourself
00:57:05.780 | and seen him in action,
00:57:07.300 | you would quickly realize maybe
00:57:09.100 | that that's actually not quite accurate, right?
00:57:12.460 | But now most of the people on the planet
00:57:14.260 | are people we've never met,
00:57:16.020 | it's very important that we have a way
00:57:17.220 | of trusting the information we're given.
00:57:19.460 | So, okay, so where does the news project come in?
00:57:23.180 | Well, throughout history, you can go read Machiavelli
00:57:26.580 | from the 1400s and you'll see how already then
00:57:28.700 | they were busy manipulating people
00:57:30.100 | with propaganda and stuff.
00:57:31.660 | Propaganda is not new at all.
00:57:35.740 | And the incentives to manipulate people
00:57:37.740 | is just not new at all.
00:57:40.060 | What is it that's new?
00:57:41.260 | What's new is machine learning meets propaganda.
00:57:45.780 | That's what's new.
00:57:46.820 | That's why this has gotten so much worse.
00:57:49.100 | Some people like to blame certain individuals,
00:57:51.740 | like in my liberal university bubble,
00:57:54.220 | many people blame Donald Trump and say it was his fault.
00:57:56.980 | I see it differently.
00:57:59.220 | I think Donald Trump just had this extreme skill
00:58:04.860 | at playing this game in the machine learning algorithm age,
00:58:08.620 | a game he couldn't have played 10 years ago.
00:58:12.740 | So what's changed?
00:58:13.700 | What's changed is, well, Facebook and Google
00:58:16.020 | and other companies, and I'm not badmouthing them,
00:58:19.420 | I have a lot of friends who work for these companies,
00:58:21.660 | good people, they deployed machine learning algorithms
00:58:25.540 | just to increase their profit a little bit,
00:58:27.100 | to just maximize the time people spent watching ads.
00:58:31.300 | And they had totally underestimated
00:58:33.300 | how effective they were gonna be.
00:58:35.100 | This was, again, the black box,
00:58:36.900 | non-intelligible intelligence.
00:58:38.940 | They just noticed, oh, we're getting more ad revenue, great.
00:58:42.220 | It took a long time until they even realized why and how
00:58:44.860 | and how damaging this was for society.
00:58:48.140 | 'Cause of course, what the machine learning figured out was
00:58:51.420 | that the by far most effective way of gluing you
00:58:54.500 | to your little rectangle was to show you things
00:58:57.420 | that triggered strong emotions, anger, et cetera, resentment.
00:59:01.780 | And if it was true or not, didn't really matter.
00:59:07.700 | It was also easier to find stories that weren't true.
00:59:10.580 | If you weren't limited to what's true, well, truth is just a limitation.
00:59:12.140 | - Right, that's a very limiting factor.
00:59:15.340 | - And before long, we got these amazing filter bubbles
00:59:19.980 | on a scale we had never seen before.
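A minimal, hypothetical sketch of the dynamic Max is describing: if the only objective a feed optimizes is predicted watch time, and emotionally charged items predict more watch time, then accuracy never enters the ranking at all. Everything below (names, numbers, the toy engagement model) is invented for illustration and is not any platform's actual system.

```python
# Hypothetical illustration of engagement-only ranking; not any real platform's code.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    emotional_charge: float  # 0..1, how strongly it triggers anger/resentment
    accuracy: float          # 0..1, how well it matches reality

def predicted_watch_time(article: Article) -> float:
    # Assumed toy engagement model: attention tracks emotional charge,
    # and accuracy simply does not appear in the objective.
    return 10.0 * article.emotional_charge

def rank_feed(articles: list[Article]) -> list[Article]:
    # Maximize expected time-on-site: sort purely by predicted watch time.
    return sorted(articles, key=predicted_watch_time, reverse=True)

feed = rank_feed([
    Article("Calm, accurate policy explainer", emotional_charge=0.2, accuracy=0.9),
    Article("Outrageous claim about the other side", emotional_charge=0.9, accuracy=0.3),
])
print([a.title for a in feed])  # the outrage bait is ranked first
```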
00:59:21.940 | Couple this to the fact that also
00:59:23.820 | the online news media was so effective
00:59:28.540 | that they killed a lot of print journalism.
00:59:30.780 | There's less than half as many journalists now in America,
00:59:35.180 | I believe, as there was a generation ago.
00:59:39.300 | You just couldn't compete with the online advertising.
00:59:42.780 | So all of a sudden, most people are not getting,
00:59:47.580 | even reading newspapers.
00:59:48.660 | They get their news from social media.
00:59:51.300 | And most people only get news in their little bubble.
00:59:54.980 | So along comes now some people like Donald Trump
00:59:58.460 | who figured out, among the first successful politicians
01:00:01.580 | to figure out how to really play this new game
01:00:04.180 | and become very, very influential.
01:00:06.020 | But I think Donald Trump simply,
01:00:08.580 | well, he took advantage of it.
01:00:11.020 | He didn't create it; the fundamental conditions were created
01:00:15.020 | by machine learning taking over the news media.
01:00:19.020 | So this is what motivated my little COVID project here.
01:00:22.940 | So I said before, machine learning and tech in general,
01:00:27.140 | it's not evil, but it's also not good.
01:00:29.060 | It's just a tool that you can use
01:00:31.580 | for good things or bad things.
01:00:32.700 | And as it happens, machine learning and news
01:00:36.020 | was mainly used by the big players, big tech,
01:00:39.700 | to manipulate people into watch as many ads as possible,
01:00:43.220 | which had this unintended consequence
01:00:44.860 | of really screwing up our democracy
01:00:46.660 | and fragmenting it into filter bubbles.
01:00:49.500 | So I thought, well, machine learning algorithms
01:00:53.180 | were basically free.
01:00:54.260 | They can run on your smartphone for free also
01:00:56.220 | if someone gives them away to you, right?
01:00:57.880 | There's no reason why they only have to help the big guy
01:01:00.740 | to manipulate the little guy.
01:01:03.020 | They can just as well help the little guy
01:01:05.300 | to see through all the manipulation attempts
01:01:07.900 | from the big guy.
01:01:08.740 | So we did this project, it's called,
01:01:10.660 | you can go to improvethenews.org.
01:01:12.820 | The first thing we've built
01:01:14.300 | is this little news aggregator.
01:01:16.620 | Looks a bit like Google News,
01:01:17.860 | except it has these sliders on it
01:01:19.420 | to help you break out of your filter bubble.
01:01:21.740 | So if you're reading, you can click, click,
01:01:24.420 | and go to your favorite topic.
01:01:25.900 | And then if you just slide the left-right slider
01:01:31.460 | all the way over to the left.
01:01:32.580 | - There's two sliders, right?
01:01:33.700 | - Yeah, there's the one, the most obvious one
01:01:36.280 | is the one that has left-right labeled on it.
01:01:38.780 | You go to left, you get one set of articles,
01:01:40.580 | you go to the right, you see a very different truth
01:01:43.380 | appearing.
01:01:44.220 | - Oh, that's literally left and right on the--
01:01:46.700 | - Political spectrum.
01:01:47.660 | - On the political spectrum.
01:01:48.500 | - Yeah, so if you're reading about immigration,
01:01:50.780 | for example, it's very, very noticeable.
01:01:55.580 | And I think step one, always,
01:01:57.100 | if you wanna not get manipulated,
01:01:59.260 | is just to be able to recognize the techniques people use.
01:02:02.980 | So it's very helpful to just see
01:02:04.940 | how they spin things on the two sides.
01:02:08.200 | I think many people are under the misconception
01:02:11.280 | that the main problem is fake news.
01:02:14.120 | It's not.
01:02:14.960 | I had an amazing team of MIT students
01:02:18.560 | where we did an academic project
01:02:20.280 | to use machine learning to detect
01:02:22.240 | the main kinds of bias over the summer.
01:02:23.920 | And yes, of course, sometimes there's fake news
01:02:26.640 | where someone just claims something that's false, right?
01:02:30.800 | Like, oh, Hillary Clinton just got divorced or something.
01:02:33.840 | But what we see much more of is actually just omissions.
01:02:38.140 | - If you go to, there's some stories
01:02:41.700 | which just won't be mentioned by the left or the right
01:02:45.500 | because it doesn't suit their agenda.
01:02:47.420 | And then they'll mention other ones very, very, very much.
01:02:50.680 | So for example, we've had a number of stories
01:02:55.680 | about the Trump family's financial dealings.
01:02:59.600 | And then there's been some stories
01:03:02.540 | about the Biden family's, Hunter Biden's financial dealings.
01:03:06.240 | Surprise, surprise, they don't get equal coverage
01:03:08.540 | on the left and the right.
01:03:10.420 | One side loves to cover Hunter Biden's stuff
01:03:14.300 | and one side loves to cover the Trump stuff.
01:03:16.380 | You can never guess which is which, right?
01:03:18.940 | But the great news is if you're a normal American citizen
01:03:22.440 | and you dislike corruption in all its forms,
01:03:25.000 | then slide, slide, you can just look at both sides
01:03:29.420 | and you'll see all those political corruption stories.
01:03:33.340 | It's really liberating to just take in the both sides,
01:03:38.340 | the spin on both sides.
01:03:40.480 | It somehow unlocks your mind to think on your own,
01:03:43.800 | to realize that, I don't know,
01:03:47.080 | it's the same thing that was useful
01:03:49.160 | in the Soviet Union times for when everybody
01:03:53.880 | was much more aware that they're surrounded by propaganda.
01:03:58.080 | - That is so interesting what you're saying, actually.
01:04:01.560 | So Noam Chomsky, who used to be our MIT colleague,
01:04:04.980 | once said that propaganda is to democracy
01:04:08.460 | what violence is to totalitarianism.
01:04:12.740 | And what he means by that is
01:04:15.060 | if you have a really totalitarian government,
01:04:17.460 | you don't need propaganda.
01:04:18.820 | People will do what you want them to do anyway
01:04:23.740 | out of fear, right?
01:04:25.140 | But otherwise, you need propaganda.
01:04:28.860 | So I would say actually that the propaganda
01:04:30.700 | is much higher quality in democracies,
01:04:33.320 | much more believable.
01:04:34.800 | And it's really-- - That's brilliant.
01:04:36.400 | - It's really striking.
01:04:37.520 | When I talk to colleagues, science colleagues,
01:04:40.040 | like from Russia and China and so on,
01:04:41.880 | I notice they are actually much more aware
01:04:45.680 | of the propaganda in their own media
01:04:47.760 | than many of my American colleagues are
01:04:49.400 | about the propaganda in Western media.
01:04:51.680 | - That's brilliant.
01:04:52.520 | That means the propaganda in the Western media
01:04:54.400 | is just better. - Yes!
01:04:56.080 | - That's so brilliant. - Everything's better
01:04:57.360 | in the West, even the propaganda.
01:04:58.800 | (laughing)
01:05:00.640 | - So, but there's--
01:05:02.400 | (laughing)
01:05:05.080 | - That's good.
01:05:05.920 | - But once you realize that,
01:05:07.420 | you realize there's also something very optimistic there
01:05:09.340 | that you can do about it, right?
01:05:10.500 | Because first of all, omissions,
01:05:12.980 | as long as there's no outright censorship,
01:05:16.960 | you can just look at both sides
01:05:18.540 | and pretty quickly piece together
01:05:22.780 | a much more accurate idea of what's actually going on, right?
01:05:26.180 | - And develop a natural skepticism, too.
01:05:28.100 | - Yeah, yeah.
01:05:28.940 | - Develop an analytical, scientific mind
01:05:31.660 | about the way you're taking the information.
01:05:32.940 | - Yeah, and I think, I have to say,
01:05:35.540 | sometimes I feel that some of us in the academic bubble
01:05:38.540 | are too arrogant about this and somehow think,
01:05:41.500 | oh, it's just people who aren't as educated
01:05:44.660 | as us, who are fooled.
01:05:45.860 | When we are often just as gullible also,
01:05:48.260 | we read only our media and don't see through things.
01:05:52.140 | Anyone who looks at both sides like this in comparison,
01:05:54.620 | well, we immediately start noticing
01:05:56.380 | the shenanigans being pulled.
01:05:58.100 | And I think what I tried to do with this app
01:06:01.900 | is that the big tech has to some extent
01:06:05.820 | tried to blame the individual for being manipulated,
01:06:09.020 | much like big tobacco tried to blame the individuals
01:06:12.360 | entirely for smoking.
01:06:13.740 | And then later on, our government stepped up and said,
01:06:16.940 | actually, you can't just blame little kids
01:06:19.620 | for starting to smoke.
01:06:20.460 | We have to have more responsible advertising
01:06:22.440 | and this and that.
01:06:23.540 | I think it's a bit the same here.
01:06:24.660 | It's very convenient for big tech to blame the individual,
01:06:27.620 | to say it's just people who are so dumb and get fooled.
01:06:30.120 | The blame usually comes in saying,
01:06:34.180 | oh, it's just human psychology.
01:06:36.000 | People just wanna hear what they already believe.
01:06:38.360 | But Professor David Rand at MIT actually partly debunked that
01:06:43.140 | with a really nice study showing that people
01:06:45.300 | tend to be interested in hearing things
01:06:47.620 | that go against what they believe
01:06:49.900 | if it's presented in a respectful way.
01:06:52.700 | Suppose, for example, that you have a company
01:06:57.560 | and you're just about to launch this project
01:06:59.180 | and you're convinced it's gonna work.
01:07:00.340 | And someone says, you know, Lex, I hate to tell you this,
01:07:04.380 | but this is gonna fail and here's why.
01:07:06.700 | Would you be like, shut up, I don't wanna hear it.
01:07:08.940 | La la la la la la la la la.
01:07:10.660 | Would you?
01:07:11.500 | You would be interested, right?
01:07:13.020 | And also, if you're on an airplane,
01:07:15.520 | back in the pre-COVID times, you know,
01:07:19.060 | and the guy next to you is clearly from the opposite side
01:07:22.900 | of the political spectrum, but is very respectful
01:07:26.780 | and polite to you, wouldn't you be kind of interested
01:07:29.040 | to hear a bit about how he or she thinks about things?
01:07:32.940 | - Of course.
01:07:33.900 | - But it's not so easy to find out
01:07:36.380 | respectful disagreement now because, for example,
01:07:38.880 | if you are a Democrat and you're like,
01:07:42.020 | oh, I wanna see something on the other side,
01:07:43.640 | so you just go Breitbart.com.
01:07:46.160 | And then after the first 10 seconds,
01:07:48.040 | you feel deeply insulted by something.
01:07:50.260 | It's not gonna work.
01:07:53.440 | Or if you take someone who votes Republican
01:07:56.620 | and they go to something on the left
01:07:58.380 | and they just get very offended very quickly
01:08:01.100 | by them having put a deliberately ugly picture
01:08:03.180 | of Donald Trump on the front page or something,
01:08:05.260 | it doesn't really work.
01:08:06.580 | So this news aggregator also has a nuance slider,
01:08:10.760 | which you can pull to the right and then,
01:08:13.300 | to make it easier to get exposed to actually more
01:08:16.000 | sort of academic style or more respectful
01:08:18.140 | portrayals of different views.
01:08:22.300 | And finally, the one kind of bias I think
01:08:25.420 | people are mostly aware of is the left-right,
01:08:28.340 | because it's so obvious,
01:08:29.180 | because both left and right are very powerful here.
01:08:33.420 | Both of them have well-funded TV stations and newspapers,
01:08:36.760 | and it's kind of hard to miss.
01:08:38.380 | But there's another one, the establishment slider,
01:08:41.940 | which is also really fun.
01:08:44.100 | I love to play with it.
01:08:45.820 | That's more about corruption.
01:08:47.620 | Because if you have a society
01:08:53.260 | where almost all the powerful entities
01:08:57.340 | want you to believe a certain thing,
01:08:59.500 | that's what you're gonna read in both the big media,
01:09:01.860 | mainstream media on the left and on the right, of course.
01:09:04.620 | And powerful companies can push back very hard,
01:09:08.220 | like tobacco companies pushed back very hard back in the day
01:09:10.820 | when some newspapers started writing articles
01:09:13.540 | about tobacco being dangerous.
01:09:15.380 | So it was hard to get a lot of coverage about it initially.
01:09:18.420 | And also if you look geopolitically, right?
01:09:20.860 | Of course, in any country, when you read their media,
01:09:23.100 | you're mainly gonna be reading a lot of articles
01:09:24.900 | about how our country is the good guy,
01:09:27.380 | and the other countries are the bad guys, right?
01:09:30.380 | So if you wanna have a really more nuanced understanding,
01:09:33.340 | like the British used to be told
01:09:37.580 | that the French were the bad guys,
01:09:38.820 | and the French used to be told
01:09:39.860 | that the British were the bad guys.
01:09:41.900 | Now they visit each other's countries a lot
01:09:45.700 | and have a much more nuanced understanding.
01:09:47.380 | I don't think there's gonna be any more wars
01:09:48.860 | between France and Germany.
01:09:50.340 | On the geopolitical scale, though, it's just as bad as ever,
01:09:54.260 | you know, big Cold War now, US, China, and so on.
01:09:57.620 | And if you wanna get a more nuanced understanding
01:10:01.180 | of what's happening geopolitically,
01:10:03.500 | then it's really fun to look at this establishment slider,
01:10:05.940 | because it turns out there are tons of little newspapers,
01:10:09.380 | both on the left and on the right,
01:10:11.340 | who sometimes challenge establishment and say,
01:10:14.460 | you know, maybe we shouldn't actually invade Iraq right now.
01:10:17.780 | Maybe this weapons of mass destruction thing is BS.
01:10:20.380 | If you look at the journalism research afterwards,
01:10:23.660 | you can actually see that quite clearly,
01:10:25.140 | that both CNN and Fox were very pro,
01:10:29.180 | let's get rid of Saddam,
01:10:30.620 | there are weapons of mass destruction.
01:10:32.580 | Then there were a lot of smaller newspapers.
01:10:34.660 | They were like, wait a minute,
01:10:36.200 | this evidence seems a bit sketchy.
01:10:39.340 | But of course, they were so hard to find.
01:10:42.260 | Most people didn't even know they existed, right?
01:10:44.580 | Yet it would have been better for American national security
01:10:47.420 | if those voices had also come up.
01:10:50.140 | I think it harmed America's national security, actually,
01:10:52.540 | that we invaded Iraq.
01:10:53.780 | - And arguably, there's a lot more interest
01:10:55.540 | in that kind of thinking, too, from those small sources.
01:11:00.460 | So like, when you say big,
01:11:02.620 | it's more about kind of the reach of the broadcast,
01:11:07.620 | but it's not big in terms of the interest.
01:11:12.060 | I think there's a lot of interest
01:11:14.100 | in that kind of anti-establishment,
01:11:16.220 | or like skepticism towards out-of-the-box thinking.
01:11:20.380 | There's a lot of interest in that kind of thing.
01:11:22.020 | Do you see this news project or something like it
01:11:26.900 | being basically taken over the world
01:11:30.580 | as the main way we consume information?
01:11:32.940 | Like, how do we get there?
01:11:35.140 | Like, how do we, you know?
01:11:37.340 | So, okay, the idea is brilliant.
01:11:39.100 | You're calling it your little project in 2020,
01:11:43.980 | but how does that become the new way we consume information?
01:11:48.500 | - I hope, first of all, just to plant a little seed there,
01:11:51.020 | because normally, the big barrier of doing anything
01:11:55.340 | in media is you need a ton of money,
01:11:57.260 | but this costs no money at all.
01:11:59.300 | I've just been paying myself,
01:12:01.100 | pay a tiny amount of money each month to Amazon
01:12:03.100 | to run the thing in their cloud.
01:12:04.700 | There will never be any ads.
01:12:06.940 | The point is not to make any money off of it,
01:12:09.340 | and we just train machine learning algorithms
01:12:11.660 | to classify the articles and stuff,
01:12:13.180 | so it just kind of runs by itself.
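As a rough sketch of how the slider idea could work, assuming each article has already been scored by classifiers along a few bias axes (left-right, establishment, nuance): the reader's slider positions then act as a filter over those scores. This is a hypothetical illustration of the concept, not improvethenews.org's actual implementation; all names and numbers are made up.

```python
# Hypothetical slider-based selection over pre-classified articles.
from dataclasses import dataclass

@dataclass
class ScoredArticle:
    title: str
    left_right: float      # -1 = far-left outlet, +1 = far-right outlet
    establishment: float   # -1 = anti-establishment, +1 = establishment
    nuance: float          #  0 = clickbait tone, 1 = careful, respectful tone

def select(articles, left_right_slider, establishment_slider, min_nuance=0.0, width=0.5):
    """Keep articles whose scores fall near the reader's slider settings."""
    return [
        a for a in articles
        if abs(a.left_right - left_right_slider) <= width
        and abs(a.establishment - establishment_slider) <= width
        and a.nuance >= min_nuance
    ]

corpus = [
    ScoredArticle("Immigration story, left outlet", -0.8, 0.6, 0.4),
    ScoredArticle("Immigration story, right outlet", 0.8, 0.6, 0.4),
    ScoredArticle("Skeptical small-outlet analysis", 0.1, -0.9, 0.8),
]

# Slide all the way right, then all the way left, and compare the coverage.
print([a.title for a in select(corpus, left_right_slider=1.0, establishment_slider=0.5)])
print([a.title for a in select(corpus, left_right_slider=-1.0, establishment_slider=0.5)])
```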
01:12:14.860 | So if it actually gets good enough at some point
01:12:17.740 | that it starts catching on, it could scale,
01:12:20.700 | and if other people carbon copy it
01:12:23.100 | and make other versions that are better,
01:12:24.940 | that's the more the merrier.
01:12:28.180 | I think there's a real opportunity for machine learning
01:12:32.900 | to empower the individual
01:12:35.220 | against the likes of the powerful players.
01:12:38.860 | As I said in the beginning here,
01:12:41.420 | it's been mostly the other way around so far.
01:12:43.140 | The big players have the AI,
01:12:44.860 | and then they tell people, "This is the truth.
01:12:48.100 | This is how it is,"
01:12:49.660 | but it can just as well go the other way around.
01:12:52.220 | And when the internet was born, actually,
01:12:53.860 | a lot of people had this hope
01:12:54.980 | that maybe this will be a great thing for democracy,
01:12:57.540 | make it easier to find out about things,
01:12:59.620 | and maybe machine learning and things like this
01:13:01.420 | can actually help again.
01:13:03.780 | And I have to say, I think it's more important than ever now
01:13:07.140 | because this is very linked also to the whole future of life
01:13:12.180 | as we discussed earlier.
01:13:13.900 | We're getting this ever more powerful tech.
01:13:16.060 | Frankly, it's pretty clear if you look on the one
01:13:19.980 | or two generation, three generation timescale
01:13:21.940 | that there are only two ways this can end geopolitically.
01:13:24.940 | Either it ends great for all humanity
01:13:27.740 | or it ends terribly for all of us.
01:13:31.700 | There's really no in between.
01:13:33.780 | And we're all stuck in this together,
01:13:36.420 | because technology knows no borders,
01:13:39.180 | and you can't have people fighting
01:13:42.300 | when the weapons just keep getting ever more powerful
01:13:45.220 | indefinitely.
01:13:47.100 | Eventually, the luck runs out.
01:13:50.420 | And right now we have, I love America,
01:13:55.420 | but the fact of the matter is what's good for America
01:13:59.900 | is not opposite in the long term
01:14:01.740 | to what's good for other countries.
01:14:03.660 | It would be if this was some sort of zero-sum game,
01:14:07.420 | like it was thousands of years ago
01:14:10.060 | when the only way one country could get more resources
01:14:13.300 | was to take land from other countries,
01:14:15.020 | 'cause that was basically the resource.
01:14:17.740 | Look at the map of Europe.
01:14:18.940 | Some countries kept getting bigger and smaller,
01:14:21.420 | endless wars.
01:14:22.380 | But then since 1945,
01:14:25.420 | there hasn't been any war in Western Europe,
01:14:27.260 | and they all got way richer because of tech.
01:14:29.980 | So the optimistic outcome is that the big winner
01:14:34.900 | in this century is gonna be America and China and Russia
01:14:38.260 | and everybody else,
01:14:39.100 | because technology just makes us all healthier and wealthier
01:14:41.820 | and we just find some way of keeping the peace
01:14:44.700 | on this planet.
01:14:45.540 | But I think, unfortunately,
01:14:48.620 | there are some pretty powerful forces right now
01:14:50.500 | that are pushing in exactly the opposite direction
01:14:52.620 | and trying to demonize other countries,
01:14:54.620 | which just makes it more likely
01:14:56.540 | that this ever more powerful tech we're building
01:14:59.340 | is gonna be used in disastrous ways.
01:15:02.260 | - Yeah, for aggression versus cooperation,
01:15:04.580 | that kind of thing.
01:15:05.420 | - Yeah, even look at just military AI now, right?
01:15:07.860 | It was so awesome to see these dancing robots.
01:15:12.180 | I loved it, right?
01:15:13.940 | But one of the biggest growth areas in robotics now
01:15:17.500 | is, of course, autonomous weapons.
01:15:19.300 | And 2020 was like the best marketing year ever
01:15:23.620 | for autonomous weapons,
01:15:24.420 | because in both Libya, it's a civil war,
01:15:27.620 | and in Nagorno-Karabakh,
01:15:29.660 | they made the decisive difference, right?
01:15:34.540 | And everybody else is like watching this.
01:15:36.260 | Oh yeah, we wanna build autonomous weapons too.
01:15:39.020 | In Libya, you had, on one hand,
01:15:44.020 | our ally, the United Arab Emirates,
01:15:46.500 | that were flying their autonomous weapons
01:15:48.420 | that they bought from China, bombing Libyans.
01:15:52.100 | And on the other side, you had our other ally, Turkey,
01:15:54.300 | flying their drones.
01:15:56.220 | None of these other countries had any skin in the game.
01:16:01.660 | And of course, it was the Libyans who really got screwed.
01:16:04.260 | In Nagorno-Karabakh, you had actually,
01:16:08.020 | again, Turkey is sending drones built by this company
01:16:12.260 | that was actually founded by a guy
01:16:15.300 | who went to MIT AeroAstro.
01:16:17.100 | Do you know that?
01:16:17.940 | - No. - Bayraktar, yeah.
01:16:19.580 | So MIT has a direct responsibility for ultimately this,
01:16:22.740 | and a lot of civilians were killed there.
01:16:24.380 | And so because it was militarily so effective,
01:16:28.500 | now suddenly there's a huge push.
01:16:31.260 | Oh yeah, yeah, let's go build ever more autonomy
01:16:35.740 | into these weapons, and it's gonna be great.
01:16:39.460 | And I think actually,
01:16:42.900 | people who are obsessed about
01:16:45.580 | some sort of future Terminator scenario right now
01:16:48.300 | should start focusing on the fact that we have
01:16:52.460 | two much more urgent threats happening from machine learning.
01:16:55.060 | One of them is the whole destruction of democracy
01:16:57.940 | that we've talked about now,
01:16:59.940 | where our flow of information is being manipulated
01:17:03.620 | by machine learning.
01:17:04.460 | And the other one is that right now,
01:17:06.980 | this is the year when the big arms race,
01:17:09.860 | out-of-control arms race in at least autonomous weapons
01:17:12.140 | is gonna start, or it's gonna stop.
01:17:14.700 | - So you have a sense that there is,
01:17:16.500 | like 2020 was an instrumental catalyst
01:17:20.700 | for the race of, for the autonomous weapons race.
01:17:24.260 | - Yeah, 'cause it was the first year
01:17:25.460 | when they proved decisive in the battlefield.
01:17:28.380 | And these ones are still not fully autonomous,
01:17:31.060 | mostly they're remote controlled, right?
01:17:32.660 | But we could very quickly make things about
01:17:37.660 | the size and cost of a smartphone,
01:17:41.820 | which you just put in the GPS coordinates
01:17:44.300 | or the face of the one you wanna kill,
01:17:45.780 | a skin color or whatever, and it flies away and does it.
01:17:48.660 | The real good reason why the US
01:17:53.180 | and all the other superpowers should put the kibosh on this
01:17:57.140 | is the same reason we decided
01:17:59.420 | to put the kibosh on bioweapons.
01:18:01.660 | So we gave the Future of Life Award
01:18:05.060 | that we can talk more about later,
01:18:06.340 | to Matthew Meselson from Harvard before,
01:18:08.100 | for convincing Nixon to ban bioweapons.
01:18:10.340 | And I asked him, "How did you do it?"
01:18:13.580 | And he was like, "Well, I just said,
01:18:15.580 | "look, we don't want there to be a $500 weapon
01:18:19.340 | "of mass destruction that all our enemies can afford,
01:18:24.040 | "even non-state actors."
01:18:26.660 | And Nixon was like, "Good point."
01:18:31.660 | It's in America's interest that the powerful weapons
01:18:34.300 | are all really expensive, so only we can afford them,
01:18:37.580 | or maybe some more stable adversaries, right?
01:18:40.220 | Nuclear weapons are like that,
01:18:42.960 | but bioweapons were not like that.
01:18:44.920 | That's why we banned them.
01:18:46.460 | And that's why you never hear about them now.
01:18:48.380 | That's why we love biology.
01:18:50.320 | - So you have a sense that it's possible
01:18:52.900 | for the big powerhouses in terms of the big nations
01:18:57.900 | in the world to agree that autonomous weapons
01:19:00.380 | is not a race we wanna be on, that it doesn't end well.
01:19:03.740 | - Yeah, because we know it's just gonna end
01:19:05.560 | in mass proliferation, and every terrorist everywhere
01:19:08.460 | is gonna have these super cheap weapons
01:19:10.260 | that they will use against us.
01:19:11.760 | And our politicians have to constantly worry
01:19:15.940 | about being assassinated every time they go outdoors
01:19:18.220 | by some anonymous little mini-drone.
01:19:20.900 | We don't want that.
01:19:21.820 | And even if the US and China and everyone else
01:19:25.900 | could just agree that you can only build these weapons
01:19:28.520 | if they cost at least 10 million bucks,
01:19:31.220 | that would be a huge win for the superpowers,
01:19:34.700 | and frankly for everybody.
01:19:36.200 | People often push back and say,
01:19:40.420 | well, it's so hard to prevent cheating.
01:19:43.180 | But hey, you could say the same about bioweapons.
01:19:45.760 | Take any of your MIT colleagues in biology.
01:19:49.300 | Of course they could build some nasty bioweapon
01:19:51.980 | if they really wanted to, but first of all,
01:19:54.060 | they don't want to 'cause they think it's disgusting
01:19:56.380 | 'cause of the stigma, and second,
01:19:58.340 | even if there's some sort of nutcase and want to,
01:20:02.020 | it's very likely that some of their grad students
01:20:04.120 | or someone would rat them out
01:20:05.160 | because everyone else thinks it's so disgusting.
01:20:07.900 | And in fact, we now know there was even
01:20:10.420 | a fair bit of cheating on the bioweapons ban,
01:20:13.460 | but no countries used them because it was so stigmatized
01:20:17.460 | that it just wasn't worth revealing that they had cheated.
01:20:22.340 | - You talk about drones, but you kind of think
01:20:26.060 | that drones is the remote operation.
01:20:28.900 | - Which they are mostly still.
01:20:30.620 | - But you're not taking the next intellectual step
01:20:34.500 | of like, where does this go?
01:20:36.260 | You're kind of saying the problem with drones
01:20:38.660 | is that you're removing yourself from direct violence,
01:20:42.340 | therefore you're not able to sort of maintain
01:20:44.900 | the common humanity required
01:20:46.380 | to make the proper decisions strategically.
01:20:48.740 | But that's the criticism as opposed to like,
01:20:51.380 | if this is automated, and just exactly as you said,
01:20:55.500 | if you automate it and there's a race,
01:20:58.660 | then the technology's gonna get better and better and better,
01:21:01.280 | which means getting cheaper and cheaper and cheaper.
01:21:03.740 | And unlike perhaps nuclear weapons,
01:21:06.060 | which is connected to resources in a way,
01:21:10.260 | like it's hard to get the--
01:21:11.780 | - It's hard to engineer.
01:21:13.740 | - It feels like there's too much overlap
01:21:17.620 | between the tech industry and autonomous weapons
01:21:20.420 | to where you could have smartphone type of cheapness.
01:21:24.420 | If you look at drones, for $1,000,
01:21:29.300 | you can have an incredible system
01:21:30.820 | that's able to maintain flight autonomously for you
01:21:34.620 | and take pictures and stuff.
01:21:36.260 | You could see that going into the autonomous weapon space.
01:21:41.420 | But why is that not thought about
01:21:43.260 | or discussed enough in the public, do you think?
01:21:45.660 | You see those dancing Boston Dynamics robots,
01:21:48.980 | and everybody has this kind of,
01:21:50.740 | like as if this is a far future.
01:21:55.340 | They have this fear, like, oh, this'll be Terminator
01:21:58.620 | in some, I don't know, unspecified 20, 30, 40 years.
01:22:03.060 | And they don't think about, well,
01:22:04.380 | this is some much less dramatic version of that
01:22:09.140 | is actually happening now.
01:22:11.140 | It's not gonna be legged, it's not gonna be dancing,
01:22:14.840 | but it already has the capability
01:22:17.180 | to use artificial intelligence to kill humans.
01:22:20.260 | - Yeah, the Boston Dynamics leg robots,
01:22:22.900 | I think the reason we imagine them holding guns
01:22:24.980 | is just 'cause you've all seen Arnold Schwarzenegger.
01:22:28.420 | That's our reference point.
01:22:30.580 | That's pretty useless.
01:22:32.700 | That's not gonna be the main military use of them.
01:22:35.340 | They might be useful in law enforcement in the future,
01:22:38.700 | and there's a whole debate about,
01:22:40.260 | do you want robots showing up at your house with guns
01:22:42.660 | telling you what to do, who'll be perfectly obedient
01:22:45.460 | to whatever dictator controls them?
01:22:47.540 | But let's leave that aside for a moment
01:22:49.220 | and look at what's actually relevant now.
01:22:51.300 | So there's a spectrum of things you can do
01:22:54.780 | with AI in the military.
01:22:55.780 | And again, to put my card on the table,
01:22:57.580 | I'm not the pacifist.
01:22:58.740 | I think we should have good defense.
01:23:01.240 | So for example, a Predator drone
01:23:07.540 | is basically a fancy little remote-controlled airplane.
01:23:10.540 | There's a human piloting it,
01:23:14.420 | and the decision ultimately about whether to kill somebody
01:23:17.100 | with it is made by a human still.
01:23:19.420 | And this is a line I think we should never cross.
01:23:23.900 | There's a current DoD policy.
01:23:25.900 | Again, you have to have a human in the loop.
01:23:27.940 | I think algorithms should never make life or death decisions.
01:23:31.540 | They should be left to humans.
01:23:34.140 | Now, why might we cross that line?
01:23:37.740 | Well, first of all, these are expensive, right?
01:23:40.540 | So for example, when Azerbaijan had all these drones
01:23:45.540 | and Armenia didn't have any,
01:23:47.640 | they started trying to jerry-rig little cheap things,
01:23:51.060 | fly around, but then of course, the Armenians would jam them
01:23:54.060 | or the Azeris would jam them.
01:23:55.660 | And remote-controlled things can be jammed.
01:23:58.340 | That makes them inferior.
01:24:00.060 | Also, there's a bit of a time delay between,
01:24:02.980 | if we're piloting something from far away, speed of light,
01:24:07.480 | and the human has a reaction time as well,
01:24:09.580 | it would be nice to eliminate that jamming possibility
01:24:12.500 | in the time delay by having it fully autonomous.
01:24:15.300 | But if you do that,
01:24:17.980 | now you might be crossing that exact line.
01:24:20.260 | You might program it to just, oh yeah, the air drone,
01:24:23.220 | go hover over this country for a while.
01:24:26.160 | And whenever you find someone who is a bad guy,
01:24:29.980 | kill them.
01:24:31.760 | Now the machine is making these sort of decisions.
01:24:34.280 | And some people who defend this still say,
01:24:36.680 | well, that's morally fine because we are the good guys
01:24:40.560 | and we will tell it the definition of bad guy
01:24:43.580 | that we think is moral.
01:24:44.920 | But now it would be very naive to think
01:24:49.280 | that if ISIS buys that same drone,
01:24:52.080 | that they're gonna use our definition of bad guy.
01:24:54.640 | Maybe for them, bad guy is someone wearing
01:24:56.440 | a US Army uniform.
01:24:58.720 | Or maybe there will be some weird ethnic group
01:25:03.720 | who decides that someone of another ethnic group,
01:25:08.760 | they are the bad guys.
01:25:10.240 | The thing is, human soldiers, with all our faults,
01:25:14.480 | we still have some basic wiring in us.
01:25:17.080 | Like, no, it's not okay to kill kids and civilians.
01:25:24.840 | An autonomous weapon has none of that.
01:25:24.840 | It's just gonna do whatever is programmed.
01:25:26.640 | It's like the perfect Adolf Eichmann on steroids.
01:25:30.520 | Like, they told him, Adolf Eichmann,
01:25:33.400 | you wanted to do this and this and this
01:25:34.840 | to make the Holocaust more efficient.
01:25:36.400 | And he was like, "Jawohl."
01:25:38.760 | And off he went and did it, right?
01:25:40.720 | Do we really wanna make machines that are like that?
01:25:43.800 | Like completely amoral and will take the user's definition
01:25:46.980 | of who's the bad guy?
01:25:48.520 | And do we then wanna make them so cheap
01:25:50.720 | that all our adversaries can have them?
01:25:52.400 | Like, what could possibly go wrong?
01:25:55.460 | That's, I think, the big argument for why we wanna,
01:26:00.200 | this year, really put the kibosh on this.
01:26:03.520 | And I think you can tell there's a lot of very active debate
01:26:08.280 | even going on within the US military,
01:26:10.160 | and undoubtedly in other militaries around the world also,
01:26:13.120 | about whether we should have
01:26:13.960 | some sort of international agreement
01:26:15.680 | to at least require that these weapons
01:26:18.920 | have to be above a certain size and cost,
01:26:21.900 | so that things just don't totally spiral out of control.
01:26:26.900 | And finally, just for your question,
01:26:31.680 | but is it possible to stop it?
01:26:33.560 | 'Cause some people tell me, "Oh, just give up."
01:26:35.960 | But again, so Matthew Meselson, again, from Harvard,
01:26:41.440 | right, the bioweapons hero,
01:26:43.580 | he faced exactly this criticism also with bioweapons.
01:26:47.760 | People were like, "How can you check for sure
01:26:49.920 | that the Russians aren't cheating?"
01:26:51.720 | And he told me this, I think, really ingenious insight.
01:26:57.980 | He said, "You know, Max, some people think
01:27:01.540 | you have to have inspections and things,
01:27:03.660 | and you have to make sure
01:27:05.660 | that you can catch the cheaters with 100% chance.
01:27:08.960 | You don't need 100%," he said.
01:27:10.820 | "1% is usually enough."
01:27:14.100 | Because if it's an enemy, if it's another big state,
01:27:19.100 | like suppose China and the US have signed the treaty,
01:27:22.020 | drawing a certain line and saying,
01:27:24.380 | "Yeah, these kind of drones are okay,
01:27:26.260 | but these fully autonomous ones are not."
01:27:28.900 | Now suppose you are China and you have cheated
01:27:33.900 | and secretly developed some clandestine little thing,
01:27:36.000 | or you're thinking about doing it.
01:27:37.580 | What's your calculation that you do?
01:27:39.220 | Well, you're like, "Okay, what's the probability
01:27:41.900 | that we're gonna get caught?"
01:27:43.400 | If the probability is 100%, of course we're not gonna do it.
01:27:52.620 | But if the probability is only 5% that we're gonna get caught,
01:27:55.140 | then it's gonna be a huge embarrassment for us if we are,
01:28:00.060 | and we still have our nuclear weapons anyway,
01:28:04.520 | so cheating doesn't really make an enormous difference
01:28:06.520 | in terms of deterring the US.
01:28:06.520 | - And that feeds the stigma that you kind of establish,
01:28:11.580 | like this fabric, this universal stigma over the thing.
01:28:14.660 | - Exactly, it's very reasonable for them to say,
01:28:16.580 | "Well, we probably get away with it, but if we don't,
01:28:19.660 | then the US will know we cheated,
01:28:21.780 | and then they're gonna go full tilt with their program
01:28:23.740 | and say, 'Look, the Chinese are cheaters,
01:28:25.020 | and now we have all these weapons against us,
01:28:27.100 | and that's bad.'"
01:28:27.940 | The stigma alone is very, very powerful.
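To make that back-of-the-envelope deterrence logic concrete, here is a toy expected-value calculation in the spirit of Meselson's point, with entirely made-up numbers: when the marginal benefit of cheating is small (you already have a nuclear deterrent) and being exposed carries a large stigma cost, even a modest detection probability makes cheating a bad bet.

```python
# Toy deterrence arithmetic; all payoffs are invented for illustration.
def expected_value_of_cheating(p_caught: float,
                               benefit_if_undetected: float,
                               cost_if_caught: float) -> float:
    return (1 - p_caught) * benefit_if_undetected - p_caught * cost_if_caught

benefit = 1.0    # small marginal military gain on top of an existing deterrent
cost = 120.0     # stigma, plus the other side going full tilt with its own program

for p in (0.01, 0.05, 0.25):
    ev = expected_value_of_cheating(p, benefit, cost)
    print(f"p_caught = {p:.2f}  ->  expected value of cheating = {ev:+.2f}")
# With a strong enough stigma, even a 1% chance of being caught flips the decision.
```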
01:28:32.100 | And again, look what happened with bioweapons.
01:28:34.540 | It's been 50 years now.
01:28:36.940 | When was the last time you read about a bioterrorism attack?
01:28:40.180 | The only deaths I really know about with bioweapons
01:28:42.700 | that have happened, when we Americans managed to kill
01:28:45.540 | some of our own with anthrax,
01:28:46.980 | you know, the idiot who sent them to Tom Daschle
01:28:49.300 | and others in letters, right?
01:28:50.900 | And similarly, in Sverdlovsk in the Soviet Union,
01:28:55.900 | they had some anthrax in some lab there.
01:28:57.900 | Maybe they were cheating or who knows,
01:29:00.020 | and it leaked out and killed a bunch of Russians.
01:29:02.540 | I'd say that's a pretty good success, right?
01:29:04.460 | 50 years, just two own goals by the superpowers,
01:29:08.420 | and then nothing.
01:29:09.580 | And that's why whenever I ask anyone
01:29:12.060 | what they think about biology, they think it's great.
01:29:15.380 | Associated with new cures, new diseases,
01:29:18.140 | maybe a good vaccine.
01:29:19.780 | This is how I want to think about AI in the future.
01:29:22.180 | And I want others to think about AI too,
01:29:24.900 | as a source of all these great solutions to our problems,
01:29:27.900 | not as, "Oh, AI, oh yeah, that's the reason
01:29:31.940 | I feel scared going outside these days."
01:29:34.660 | - Yeah, it's kind of brilliant that the bioweapons
01:29:37.980 | and nuclear weapons, we've figured out,
01:29:40.820 | I mean, of course, they're still a huge source of danger,
01:29:43.380 | but we figured out some way of creating rules
01:29:47.780 | and social stigma over these weapons
01:29:51.460 | that then creates a stability to our,
01:29:54.620 | whatever that game theoretic stability is, of course.
01:29:57.180 | - Exactly.
01:29:58.020 | - And we don't have that with AI.
01:29:59.220 | And you're kind of screaming from the top of the mountain
01:30:03.780 | about this, that we need to find that
01:30:05.540 | because just like, it's very possible
01:30:09.620 | with the future of life, as you've pointed out,
01:30:12.220 | Institute Awards pointed out that with nuclear weapons,
01:30:17.220 | we could have destroyed ourselves quite a few times.
01:30:21.260 | And it's a learning experience that is very costly.
01:30:26.260 | - We gave this future of life award,
01:30:31.100 | we gave it the first time to this guy, Vasily Arkhipov.
01:30:34.860 | He was on, most people haven't even heard of him.
01:30:37.700 | - Yeah, can you say who he is?
01:30:38.860 | - Vasily Arkhipov.
01:30:41.980 | Has, in my opinion, made the greatest positive contribution
01:30:46.980 | to humanity of any human in modern history.
01:30:50.180 | And maybe it sounds like hyperbole here,
01:30:52.100 | like I'm just over the top, but let me tell you the story
01:30:54.700 | and I think maybe you'll agree.
01:30:56.280 | So during the Cuban Missile Crisis,
01:30:58.380 | we Americans first didn't know
01:31:02.020 | that the Russians had sent four submarines,
01:31:04.440 | but we caught two of them.
01:31:06.900 | And we didn't know that,
01:31:09.340 | so we dropped practice depth charges
01:31:11.260 | on the one that he was on,
01:31:12.580 | trying to force it to the surface.
01:31:14.140 | But we didn't know that this submarine
01:31:17.900 | actually was a nuclear submarine with a nuclear torpedo.
01:31:20.780 | We also didn't know that they had authorization
01:31:22.900 | to launch it without clearance from Moscow.
01:31:25.340 | And we also didn't know
01:31:26.260 | that they were running out of electricity.
01:31:28.460 | Their batteries were almost dead.
01:31:30.100 | They were running out of oxygen.
01:31:32.060 | Sailors were fainting left and right.
01:31:34.540 | The temperature was about 110, 120 Fahrenheit on board.
01:31:39.380 | It was really hellish conditions,
01:31:41.180 | really just a kind of doomsday.
01:31:43.500 | And at that point, these giant explosions start happening
01:31:46.520 | from the Americans dropping these depth charges.
01:31:48.420 | The captain thought World War III had begun.
01:31:50.940 | They decided that they were gonna launch
01:31:52.760 | the nuclear torpedo.
01:31:53.940 | And one of them shouted, "We're all gonna die,
01:31:56.380 | "but we're not gonna disgrace our Navy."
01:31:59.180 | We don't know what would have happened
01:32:00.340 | if there had been a giant mushroom cloud all of a sudden
01:32:03.660 | against the Americans,
01:32:05.020 | but since everybody had their hands on the triggers,
01:32:07.580 | you don't have to be too creative
01:32:10.700 | to think that it could have led to an all-out nuclear war,
01:32:13.300 | in which case we wouldn't be having
01:32:14.420 | this conversation now, right?
01:32:15.900 | What actually took place was they needed three people
01:32:18.780 | to approve this.
01:32:21.260 | The captain had said yes.
01:32:22.420 | There was the Communist Party political officer.
01:32:24.340 | He also said, "Yes, let's do it."
01:32:26.220 | And the third man was this guy, Vasily Arkhipov,
01:32:29.260 | who said, "Nyet."
01:32:30.100 | For some reason, he was just more chill than the others,
01:32:34.500 | and he was the right man at the right time.
01:32:36.100 | I don't want us as a species to rely on the right person
01:32:39.980 | being there at the right time.
01:32:41.540 | We tracked down his family living in relative poverty
01:32:46.980 | outside Moscow.
01:32:48.180 | We flew his daughter.
01:32:50.660 | He had passed away.
01:32:51.660 | And we flew them to London.
01:32:54.580 | They had never been to the West even.
01:32:55.900 | It was incredibly moving to get to honor them for this.
01:32:59.140 | The next year, we gave this Future Life Award
01:33:01.620 | to Stanislav Petrov.
01:33:03.940 | Have you heard of him?
01:33:04.780 | - Yes.
01:33:05.600 | - So he was in charge of the Soviet early warning station,
01:33:09.740 | which was built with Soviet technology
01:33:12.580 | and honestly not that reliable.
01:33:14.500 | It said that there were five US missiles coming in.
01:33:17.020 | Again, if they had launched at that point,
01:33:21.140 | we probably wouldn't be having this conversation.
01:33:23.180 | He decided, based on just mainly gut instinct,
01:33:28.180 | to just not escalate this.
01:33:32.500 | And I'm very glad he wasn't replaced by an AI
01:33:35.060 | that was just automatically following orders.
01:33:37.500 | And then we gave the third one to Matthew Meselson.
01:33:39.700 | Last year, we gave this award to these guys
01:33:44.180 | who actually use technology for good,
01:33:46.580 | not avoiding something bad, but for something good.
01:33:49.980 | The guys who eliminated this disease,
01:33:51.960 | which is way worse than COVID,
01:33:53.560 | that had killed half a billion people
01:33:56.940 | in its final century, smallpox.
01:33:59.380 | So we mentioned it earlier.
01:34:01.140 | COVID, on average, kills less than 1% of people who get it.
01:34:05.220 | Smallpox, about 30%.
01:34:08.180 | And ultimately, Viktor Zhdanov and Bill Foege,
01:34:13.180 | most of my colleagues have never heard of either of them,
01:34:17.500 | one American, one Russian, they did this amazing effort.
01:34:22.020 | Not only was Zhdanov able to get the US and the Soviet Union
01:34:25.220 | to team up against smallpox during the Cold War,
01:34:27.980 | but Bill Foege came up with this ingenious strategy
01:34:30.340 | for making it actually go all the way to defeat the disease
01:34:34.740 | without funding for vaccinating everyone.
01:34:37.620 | And as a result, we went from 15 million deaths
01:34:42.380 | from smallpox the year I was born,
01:34:44.420 | so what do we have in COVID now?
01:34:45.660 | A little bit short of 2 million, right?
01:34:47.100 | - Yes.
01:34:48.140 | - To zero deaths, of course, this year and forever.
01:34:51.940 | There have been 200 million people, they estimate,
01:34:54.820 | who would have died since then by smallpox
01:34:57.220 | had it not been for this.
01:34:58.100 | So isn't science awesome?
01:34:59.780 | - Yeah, it does.
01:35:00.620 | - When you use it for good.
01:35:01.620 | And the reason we wanna celebrate these sort of people
01:35:04.300 | is to remind them of this.
01:35:05.700 | Science is so awesome when you use it for good.
01:35:10.140 | - And those awards actually, the variety there,
01:35:13.500 | paints a very interesting picture.
01:35:14.900 | So the first two are looking at,
01:35:19.340 | it's kind of exciting to think that these average humans,
01:35:22.700 | in some sense, that are products of billions
01:35:26.180 | of other humans that came before them, evolution,
01:35:30.180 | and some little, you said gut, you know,
01:35:33.380 | but there's something in there
01:35:35.300 | that stopped the annihilation of the human race.
01:35:40.300 | And that's a magical thing,
01:35:43.060 | but that's like this deeply human thing.
01:35:45.260 | And then there's the other aspect
01:35:47.420 | where it's also very human,
01:35:49.820 | which is to build solutions to the existential crises
01:35:54.460 | that we're facing, like to build it,
01:35:56.340 | to take the responsibility and to come up
01:35:58.780 | with different technologies and so on.
01:36:00.660 | And both of those are deeply human,
01:36:04.100 | the gut and the mind, whatever that is.
01:36:07.020 | - Yeah, and the best is when they work together.
01:36:08.660 | Arkhipov, I wish I could have met him, of course,
01:36:11.420 | but he had passed away.
01:36:13.260 | He was really a fantastic military officer,
01:36:16.740 | combining all the best traits that we in America admire
01:36:19.860 | in our military, because first of all,
01:36:21.940 | he was very loyal, of course.
01:36:23.180 | He never even told anyone about this during his whole life,
01:36:26.100 | even though you think he had some bragging rights, right?
01:36:28.420 | But he just was like, this is just business,
01:36:29.980 | just doing my job.
01:36:31.540 | It only came out later after his death.
01:36:34.300 | And second, the reason he did the right thing
01:36:37.100 | was not 'cause he was some sort of liberal,
01:36:39.220 | or some sort of, not because he was just,
01:36:43.940 | oh, you know, peace and love.
01:36:47.340 | It was partly because he had been the captain
01:36:49.780 | on another submarine that had a nuclear reactor meltdown.
01:36:53.060 | And it was his heroism that helped contain this.
01:36:58.020 | That's why he died of cancer later also.
01:36:59.740 | But he'd seen many of his crew members die.
01:37:01.460 | And I think for him, that gave him this gut feeling
01:37:04.140 | that if there's a nuclear war between the US
01:37:06.940 | and the Soviet Union, the whole world is gonna go through
01:37:11.060 | what I saw my dear crew members suffer through.
01:37:13.740 | It wasn't just an abstract thing for him.
01:37:15.820 | I think it was real.
01:37:17.660 | And second, though, not just the gut, the mind, right?
01:37:20.620 | He was, for some reason, just a very level-headed personality
01:37:23.940 | and a very smart guy, which is exactly what we want
01:37:28.180 | our best fighter pilots to be also, right?
01:37:30.100 | I'll never forget Neil Armstrong when he's landing on the moon
01:37:32.860 | and almost running out of gas.
01:37:34.540 | And he doesn't even change, when they say 30 seconds,
01:37:37.420 | he doesn't even change the tone of voice, just keeps going.
01:37:39.660 | Arkhipov, I think, was just like that.
01:37:41.820 | So when the explosions start going off
01:37:43.460 | and his captain is screaming,
01:37:44.580 | and we should nuke them and all, he's like,
01:37:47.420 | I don't think the Americans are trying to sink us.
01:37:54.300 | I think they're trying to send us a message.
01:37:58.060 | That's pretty badass coolness.
01:38:00.500 | 'Cause he said, if they wanted to sink us,
01:38:02.700 | and he said, listen, listen, it's alternating.
01:38:06.900 | One loud explosion on the left, one on the right.
01:38:10.180 | One on the left, one on the right.
01:38:12.100 | He was the only one to notice this pattern.
01:38:14.260 | And he's like, I think this is them trying to send us
01:38:18.700 | a signal that they want us to surface,
01:38:22.820 | and they're not gonna sink us.
01:38:25.740 | And somehow, this is how he then managed to ultimately,
01:38:30.740 | with his combination of gut,
01:38:34.620 | and also just cool analytical thinking,
01:38:37.940 | was able to deescalate the whole thing.
01:38:40.140 | And yeah, so this is some of the best in humanity.
01:38:44.220 | I guess coming back to what we talked about earlier,
01:38:45.820 | is the combination of the neural network, the instinctive,
01:38:48.580 | with, I'm tearing up here, I'm getting emotional.
01:38:51.660 | But he was just, he is one of my superheroes.
01:38:55.780 | Having both the heart and the mind combined.
01:39:00.460 | - Especially in that time, there's something about the,
01:39:03.700 | I mean, this is a very, in America,
01:39:05.380 | people are used to this kind of idea of being the individual,
01:39:09.100 | of on your own thinking.
01:39:11.180 | I think in the Soviet Union, under communism,
01:39:15.500 | it's actually much harder to do that.
01:39:17.620 | - Oh yeah, he didn't even, he even got,
01:39:19.980 | he didn't get any accolades either
01:39:21.860 | when he came back for this, right?
01:39:24.260 | They just wanted to hush the whole thing up.
01:39:25.900 | - Yeah, there's echoes of that with Chernobyl,
01:39:27.980 | there are all kinds of echoes of that.
01:39:32.460 | And that's a really hopeful thing,
01:39:34.380 | that amidst big centralized powers,
01:39:37.580 | whether it's companies or states,
01:39:39.960 | there's still the power of the individual
01:39:42.500 | to think on their own, to act.
01:39:43.900 | - But I think we need to think of people like this,
01:39:46.940 | not as a panacea we can always count on,
01:39:50.180 | but rather as a wake-up call.
01:39:54.120 | Because of them, because of Arkhipov,
01:39:58.580 | we are alive to learn from this lesson,
01:40:01.380 | to learn from the fact that we shouldn't
01:40:02.700 | keep playing Russian roulette
01:40:03.740 | and almost have a nuclear war by mistake now and then,
01:40:06.660 | 'cause relying on luck is not a good long-term strategy.
01:40:09.620 | If you keep playing Russian roulette over and over again,
01:40:11.420 | the probability of surviving
01:40:12.540 | just drops exponentially with time.
01:40:15.100 | And if you have some probability
01:40:16.720 | of having an accidental nuclear war every year,
01:40:18.700 | the probability of not having one
01:40:20.180 | also drops exponentially.
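A quick way to make the exponential point concrete (a back-of-the-envelope sketch, with the annual risk $p$ as a purely illustrative number):

$$P(\text{no accidental war in } n \text{ years}) = (1-p)^n = e^{\,n\ln(1-p)} \approx e^{-pn},$$

so even a seemingly small $p = 1\%$ per year leaves only $(0.99)^{100} \approx 37\%$ odds of getting through a century without one.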
01:40:21.220 | I think we can do better than that.
01:40:22.860 | So I think the message is very clear,
01:40:26.020 | once in a while shit happens,
01:40:27.900 | and there's a lot of very concrete things we can do
01:40:31.380 | to reduce the risk of things like that
01:40:34.580 | happening in the first place.
01:40:36.580 | - On the AI front, if we can just linger on that for a second.
01:40:41.020 | So you're friends with, you often talk with Elon Musk,
01:40:44.140 | throughout history, you've done a lot
01:40:46.700 | of interesting things together.
01:40:48.700 | He has a set of fears about the future
01:40:52.300 | of artificial intelligence, AGI.
01:40:54.980 | Do you have a sense, we've already talked about
01:40:59.740 | the things we should be worried about with AI.
01:41:01.580 | Do you have a sense of the shape of his fears in particular
01:41:05.420 | about AI, of which subset of what we've talked about,
01:41:10.160 | whether it's creating, it's that direction of creating
01:41:15.160 | sort of these giant computation systems
01:41:17.500 | that are not explainable,
01:41:19.140 | that they're not intelligible intelligence,
01:41:21.820 | or is it the, and then as a branch of that,
01:41:26.820 | is it the manipulation by big corporations of that
01:41:31.800 | or individual evil people to use that for destruction
01:41:35.340 | or the unintentional consequences?
01:41:37.420 | Do you have a sense of where his thinking is on this?
01:41:40.260 | - From my many conversations with Elon,
01:41:42.460 | yeah, I certainly have a model of how he thinks.
01:41:47.460 | It's actually very much like the way I think also.
01:41:49.900 | I'll elaborate on it a bit.
01:41:51.100 | I just want to push back on when you said evil people.
01:41:54.660 | I don't think it's a very helpful concept, evil people.
01:41:59.660 | Sometimes people do very, very bad things,
01:42:02.380 | but they usually do it because they think it's a good thing
01:42:05.460 | because somehow other people had told them
01:42:07.800 | that that was a good thing or given them
01:42:09.980 | incorrect information or whatever, right?
01:42:15.540 | I believe in the fundamental goodness of humanity
01:42:18.420 | that if we educate people well
01:42:21.700 | and they find out how things really are,
01:42:24.300 | people generally want to do good and be good.
01:42:27.280 | - Hence the value alignment.
01:42:30.380 | - Yes.
01:42:31.220 | - It's about information, about knowledge,
01:42:33.700 | and then once we have that,
01:42:35.380 | we'll likely be able to do good in the way that's aligned
01:42:40.380 | with everybody else who thinks the same way.
01:42:42.180 | - Yeah, and it's not just the individual people
01:42:44.060 | we have to align.
01:42:45.020 | So we don't just want people to be educated
01:42:49.620 | to know the way things actually are
01:42:51.220 | and to treat each other well,
01:42:53.260 | but we also would need to align other non-human entities.
01:42:56.300 | We talked about corporations, there has to be institutions
01:42:58.580 | so that what they do is actually good
01:42:59.980 | for the country they're in,
01:43:00.940 | and we should make sure that what the countries do
01:43:03.500 | is actually good for the species as a whole, et cetera.
01:43:07.820 | Coming back to Elon,
01:43:08.700 | yeah, my understanding of how Elon sees this
01:43:13.660 | is really quite similar to my own,
01:43:15.260 | which is one of the reasons I like him so much
01:43:18.240 | and enjoy talking with him so much.
01:43:19.380 | I feel he's quite different from most people
01:43:22.980 | in that he thinks much more than most people
01:43:27.740 | about the really big picture,
01:43:29.860 | not just what's going to happen in the next election cycle,
01:43:32.540 | but in millennia, millions and billions of years from now.
01:43:36.040 | And when you look in this more cosmic perspective,
01:43:39.280 | it's so obvious that we're gazing out into this universe
01:43:43.060 | that as far as we can tell is mostly dead
01:43:46.220 | with life being an almost imperceptibly tiny perturbation.
01:43:50.500 | And he sees this enormous opportunity
01:43:52.580 | for our universe to come alive,
01:43:54.260 | for us to become an interplanetary species.
01:43:56.460 | Mars is obviously just first stop on this cosmic journey.
01:44:01.460 | And precisely because he thinks more long-term,
01:44:04.960 | it's much more clear to him than to most people
01:44:09.520 | that what we do with this Russian roulette thing
01:44:11.300 | we keep playing with our nukes is a really poor strategy,
01:44:15.300 | really reckless strategy.
01:44:16.700 | And also that we're just building
01:44:18.580 | these ever more powerful AI systems
01:44:20.260 | that we don't understand
01:44:21.620 | is also a really reckless strategy.
01:44:23.780 | I feel Elon is very much a humanist
01:44:26.620 | in the sense that he wants an awesome future for humanity.
01:44:30.860 | He wants it to be us that control the machines
01:44:35.860 | rather than the machines that control us.
01:44:39.380 | And why shouldn't we insist on that?
01:44:42.060 | We're building them after all, right?
01:44:44.540 | Why should we build things that just make us
01:44:46.500 | into some little cog in the machinery
01:44:48.400 | that has no further say in the matter, right?
01:44:50.220 | That's not my idea of an inspiring future either.
01:44:54.540 | - Yeah, if you think on the cosmic scale
01:44:57.860 | in terms of both time and space,
01:44:59.800 | so much is put into perspective.
01:45:02.620 | - Yeah.
01:45:04.220 | Whenever I have a bad day, that's what I think about.
01:45:06.420 | It immediately makes me feel better.
01:45:09.260 | - It makes me sad that for us individual humans,
01:45:13.500 | at least for now, the ride ends too quickly.
01:45:16.020 | We don't get to experience the cosmic scale.
01:45:20.060 | - Yeah, I mean, I think of our universe sometimes
01:45:22.220 | as an organism that has only begun to wake up a tiny bit.
01:45:25.120 | Just like the very first little glimmers of consciousness
01:45:30.100 | you have in the morning when you start coming around.
01:45:32.140 | - Before the coffee.
01:45:33.180 | - Before the coffee.
01:45:34.300 | Even before you get out of bed,
01:45:35.820 | before you even open your eyes.
01:45:37.540 | You start to wake up a little bit.
01:45:40.100 | Oh, there's something here.
01:45:41.500 | That's very much how I think of what we are.
01:45:47.140 | All those galaxies out there,
01:45:48.580 | I think they're really beautiful.
01:45:51.180 | But why are they beautiful?
01:45:52.820 | They're beautiful because conscious entities
01:45:55.060 | are actually observing them,
01:45:56.980 | experiencing them through our telescopes.
01:45:59.100 | I define consciousness as subjective experience,
01:46:05.860 | whether it be colors or emotions or sounds.
01:46:09.380 | So beauty is an experience, meaning is an experience,
01:46:13.740 | purpose is an experience.
01:46:15.820 | If there was no conscious experience observing these galaxies
01:46:18.940 | they wouldn't be beautiful.
01:46:20.260 | If we do something dumb with advanced AI in the future here
01:46:24.900 | and Earth originating, life goes extinct.
01:46:29.340 | And that was it for this.
01:46:30.460 | If there is nothing else with telescopes in our universe,
01:46:33.540 | then it's kind of game over for beauty and meaning
01:46:36.900 | and purpose in the whole universe.
01:46:38.100 | And I think that would be just such an opportunity lost,
01:46:40.980 | frankly.
01:46:41.820 | And I think when Elon points this out,
01:46:46.060 | he gets very unfairly maligned in the media
01:46:49.620 | for all the dumb media bias reasons we talked about.
01:46:52.420 | They want to print precisely the things about Elon
01:46:55.660 | out of context that are really clickbaity.
01:46:58.300 | Like he has gotten so much flack
01:47:00.460 | for this summoning the demon statement.
01:47:03.420 | I happen to know exactly the context
01:47:07.700 | 'cause I was in the front row when he gave that talk.
01:47:09.740 | It was at MIT, you'll be pleased to know.
01:47:11.300 | It was the AeroAstro anniversary.
01:47:13.860 | They had Buzz Aldrin there from the moon landing,
01:47:16.780 | the whole house, Kresge Auditorium,
01:47:19.020 | packed with MIT students.
01:47:20.860 | And he had this amazing Q&A, might've gone for an hour.
01:47:23.940 | And they talked about rockets and Mars and everything.
01:47:27.180 | At the very end, this one student
01:47:29.620 | who's actually in my class asked him, "What about AI?"
01:47:33.220 | Elon makes this one comment
01:47:35.220 | and they take this out of context, print it, goes viral.
01:47:39.420 | - Was it like with AI, we're summoning the demons,
01:47:41.660 | something like that?
01:47:42.500 | - Mm-hmm, and try to cast him
01:47:43.900 | as some sort of doom and gloom dude.
01:47:47.460 | You know Elon.
01:47:48.700 | - Yes.
01:47:49.660 | - He's not the doom and gloom dude.
01:47:51.980 | He is such a positive visionary.
01:47:53.980 | And the whole reason he warns about this
01:47:55.660 | is because he realizes more than most
01:47:57.700 | what the opportunity cost is of screwing up,
01:47:59.860 | that there is so much awesomeness in the future
01:48:02.340 | that we can and our descendants can enjoy
01:48:05.460 | if we don't screw up, right?
01:48:07.740 | I get so pissed off when people try to cast him
01:48:10.340 | as some sort of technophobic Luddite.
01:48:14.300 | And at this point, it's kind of ludicrous
01:48:18.460 | when I hear people say that people who worry
01:48:20.500 | about artificial general intelligence are Luddites,
01:48:24.560 | because of course, if you look more closely,
01:48:26.980 | some of the most outspoken people making warnings
01:48:31.980 | are people like Professor Stuart Russell from Berkeley
01:48:35.660 | who's written the best-selling AI textbook, you know.
01:48:38.380 | So claiming that he's a Luddite who doesn't understand AI,
01:48:44.260 | the joke is really on the people who said it,
01:48:46.500 | but I think more broadly,
01:48:48.220 | this message has really not sunk in at all,
01:48:50.780 | what it is that people worry about.
01:48:52.660 | They think that Elon and Stuart Russell and others
01:48:56.660 | are worried about the dancing robots picking up an AR-15
01:49:01.660 | and going on a rampage, right?
01:49:04.340 | They think they're worried about robots turning evil.
01:49:08.420 | They're not, I'm not.
01:49:10.020 | The risk is not malice, it's competence.
01:49:15.020 | The risk is just that we build some systems
01:49:17.540 | that are incredibly competent,
01:49:18.780 | which means they're always gonna get
01:49:20.020 | their goals accomplished, even if they clash with our goals.
01:49:24.060 | That's the risk.
01:49:25.900 | Why did we humans drive the West African black rhino extinct?
01:49:30.900 | Is it because we're malicious, evil rhinoceros haters?
01:49:35.460 | No, it's just 'cause our goals didn't align
01:49:38.700 | with the goals of those rhinos
01:49:39.900 | and tough luck for the rhinos.
01:49:41.860 | So the point is just we don't wanna put ourselves
01:49:47.340 | in the position of those rhinos,
01:49:48.740 | creating something more powerful than us
01:49:51.860 | if we haven't first figured out how to align the goals.
01:49:54.540 | And I am optimistic.
01:49:55.540 | I think we could do it if we worked really hard on it
01:49:57.540 | because I spent a lot of time around intelligent entities
01:50:01.460 | that were more intelligent than me, my mom and my dad.
01:50:04.620 | And I was little and that was fine
01:50:08.180 | 'cause their goals were actually aligned
01:50:09.740 | with mine quite well.
01:50:10.780 | But we've seen today many examples
01:50:14.660 | of where the goals of our powerful systems
01:50:16.900 | are not so aligned.
01:50:17.820 | So those click-through optimization algorithms
01:50:22.820 | that have polarized social media, right?
01:50:25.140 | They were actually pretty poorly aligned
01:50:26.740 | with what was good for democracy, it turned out.
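As a toy sketch of the kind of misalignment being described here, consider a ranking rule whose only objective is predicted clicks; the field names and numbers below are invented for illustration and are not any real platform's algorithm:

```python
# Toy illustration of an engagement-only objective. Invented names and
# numbers; not any real platform's ranking code.

def predicted_click_probability(post):
    # Hypothetical learned score: engagement rises with both relevance and
    # emotional intensity, so outrage-heavy posts do well by this metric.
    return min(1.0, post["relevance"] + 0.3 * post["outrage_score"])

def rank_feed(candidate_posts):
    # The objective mentions only clicks -- nothing about accuracy,
    # civility, or what is good for democracy -- so none of that
    # can influence what gets shown.
    return sorted(candidate_posts, key=predicted_click_probability, reverse=True)
```

Nothing in that objective refers to truth or democratic health, which is the sense in which such a system is "poorly aligned" even though it works exactly as specified.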
01:50:29.580 | And again, almost all the problems we've had
01:50:32.220 | in machine learning so far came
01:50:34.380 | not from malice, but from poor alignment.
01:50:36.180 | And that's exactly why we should be concerned
01:50:38.940 | about it in the future.
01:50:39.980 | - Do you think it's possible that with systems
01:50:43.900 | like Neuralink and brain-computer interfaces,
01:50:47.260 | again, thinking of the cosmic scale,
01:50:49.980 | Elon has talked about this,
01:50:51.780 | but others have as well throughout history
01:50:54.540 | of figuring out the exact mechanism
01:50:57.900 | of how to achieve that kind of alignment.
01:50:59.980 | So one of them is having a symbiosis with AI,
01:51:03.140 | which is like coming up with clever ways
01:51:05.580 | where we're like stuck together in this weird relationship,
01:51:10.340 | whether it's biological or in some kind of other way.
01:51:14.180 | Do you think that's a possibility
01:51:17.220 | of having that kind of symbiosis?
01:51:19.220 | Or do we wanna instead kind of focus
01:51:20.940 | on these distinct entities of us humans
01:51:25.940 | talking to these intelligible, self-doubting AIs,
01:51:31.740 | maybe like Stuart Russell thinks about it,
01:51:33.580 | like we're self-doubting and full of uncertainty,
01:51:37.620 | and then have our AI systems that are full of uncertainty,
01:51:39.740 | we communicate back and forth,
01:51:41.540 | and in that way achieve symbiosis?
01:51:43.740 | - I honestly don't know.
01:51:46.220 | I would say that because we don't know for sure
01:51:48.580 | which, if any, of our ideas will work,
01:51:52.220 | but we do know that if we don't,
01:51:55.220 | I'm pretty convinced that if we don't get
01:51:56.700 | any of these things to work and just barge ahead,
01:51:59.860 | then our species is probably gonna go extinct this century.
01:52:03.740 | I think-- - This century.
01:52:05.540 | You think this crisis we're facing is a 21st century crisis.
01:52:10.540 | - Oh yeah. - This century
01:52:12.460 | will be remembered. (laughs)
01:52:16.100 | - On a hard drive somewhere. - On a hard drive somewhere.
01:52:18.540 | - Or maybe by future generations as like,
01:52:22.300 | like there'll be future, Future of Life Institute awards
01:52:26.260 | for people that have done something about AI.
01:52:30.700 | - It could also end even worse,
01:52:31.900 | where we're not superseded
01:52:33.740 | by leaving any AI behind either.
01:52:35.300 | We just totally wipe ourselves out, you know, like on Easter Island.
01:52:38.500 | Our century is long.
01:52:39.900 | You know, there are still 79 years left of it, right?
01:52:44.300 | Think about how far we've come just in the last 30 years.
01:52:47.700 | So we can talk more about what might go wrong,
01:52:52.700 | but you asked me this really good question
01:52:54.580 | about what's the best strategy.
01:52:55.780 | Is it Neuralink or Russell's approach or whatever?
01:52:59.780 | I think, you know, when we did the Manhattan Project,
01:53:04.780 | we didn't know if any of our four ideas
01:53:08.460 | for enriching uranium and getting out the uranium-235
01:53:11.740 | were gonna work.
01:53:12.900 | But we felt this was really important
01:53:14.780 | to get it before Hitler did.
01:53:16.700 | So, you know, what we did, we tried all four of them.
01:53:19.500 | Here, I think it's analogous where there's the greatest
01:53:24.100 | threat that's ever faced our species.
01:53:25.940 | And of course, US national security by implication.
01:53:29.260 | We don't know, we don't have any method
01:53:31.500 | that's guaranteed to work, but we have a lot of ideas.
01:53:34.660 | So we should invest pretty heavily in pursuing all of them
01:53:36.860 | with an open mind and hope that one of them at least works.
01:53:40.540 | These are, the good news is the century is long, you know,
01:53:45.340 | and it might take decades until we have
01:53:48.860 | artificial general intelligence.
01:53:50.180 | So we have some time, hopefully.
01:53:52.740 | But it takes a long time for us to solve
01:53:55.260 | these very, very difficult problems.
01:53:57.100 | It's gonna actually be, it's the most difficult problem
01:53:59.140 | we've ever tried to solve as a species.
01:54:01.340 | So we have to start now,
01:54:04.260 | rather than, you know, begin thinking about it
01:54:05.860 | the night before some people who've had too much Red Bull
01:54:08.740 | switch it on.
01:54:09.580 | And we have to, coming back to your question,
01:54:11.860 | we have to pursue all of these different avenues and see.
01:54:14.260 | - If you were my investment advisor,
01:54:16.820 | and I was trying to invest in the future,
01:54:19.900 | how do you think the human species
01:54:22.140 | is most likely to destroy itself in this century?
01:54:27.540 | Yeah, so if the crises, many of the crises we're facing
01:54:33.540 | are really before us within the next 100 years,
01:54:37.260 | how do we make the unknowns known
01:54:42.340 | and solve those problems
01:54:46.660 | to avoid the biggest,
01:54:48.220 | starting with the biggest existential crisis?
01:54:51.940 | - So as your investment advisor,
01:54:53.180 | how are you planning to make money
01:54:54.740 | on us destroying ourselves, I have to ask?
01:54:57.340 | - I don't know.
01:54:58.180 | It might be the Russian origins.
01:55:00.100 | Somehow it's involved.
01:55:02.860 | - At the micro level of detailed strategies,
01:55:04.740 | of course, these are unsolved problems.
01:55:06.700 | For AI alignment, we can break it into three sub-problems.
01:55:12.260 | That are all unsolved, I think.
01:55:14.420 | You want first to make machines understand our goals,
01:55:18.380 | then adopt our goals, and then retain our goals.
01:55:23.380 | So to hit on all three real quickly.
01:55:26.140 | The problem when Andreas Lubitz told his autopilot
01:55:31.100 | to fly into the Alps was that the computer
01:55:34.340 | didn't even understand anything about his goals.
01:55:39.060 | It was too dumb.
01:55:40.500 | It could have understood, actually.
01:55:41.980 | But you would have had to put some effort in
01:55:45.300 | as the system designer.
01:55:46.860 | Don't fly into mountains.
01:55:48.860 | So that's the first challenge.
01:55:49.940 | How do you program into computers
01:55:52.140 | human values, human goals?
01:55:55.300 | We could start, rather than saying,
01:55:58.260 | oh, it's so hard, we should start with the simple stuff,
01:56:00.340 | as I said.
01:56:01.180 | Self-driving cars, airplanes,
01:56:04.100 | just put in all the goals that we all agree on already.
01:56:07.220 | And then have a habit of whenever machines get smarter,
01:56:10.500 | so they can understand one level higher goals,
01:56:14.140 | put them in too.
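A minimal sketch of what "putting in the goals we already agree on" could look like at the lowest level; all names here (propose_action, terrain_altitude_m, and so on) are invented for this illustration rather than taken from any real avionics system:

```python
# Toy sketch of hard-coding an agreed-on goal ("don't fly into mountains")
# as a constraint around whatever the controller proposes. Invented names;
# not a real autopilot API.

MIN_TERRAIN_CLEARANCE_M = 300.0

def safe_autopilot_step(state, propose_action, terrain_altitude_m):
    """Accept the proposed action only if the predicted altitude keeps a safe
    clearance over the terrain; otherwise override with a climb command."""
    action = propose_action(state)
    predicted_altitude = state["altitude_m"] + action["climb_rate_mps"] * state["dt_s"]
    if predicted_altitude - terrain_altitude_m(state["position"]) < MIN_TERRAIN_CLEARANCE_M:
        return {"climb_rate_mps": 10.0}  # the agreed-on goal takes priority
    return action
```

The point of the sketch is only that a goal like "don't fly into mountains" can be stated explicitly and checked; the habit described above is to keep adding such goals as machines become able to understand higher-level ones.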
01:56:16.900 | The second challenge is getting them to adopt the goals.
01:56:20.780 | It's easy for situations like that,
01:56:22.260 | where you just program it in.
01:56:23.220 | But when you have self-learning systems like children,
01:56:25.940 | any parent knows that there's a difference
01:56:32.380 | between getting our kids to understand
01:56:33.980 | what we want them to do,
01:56:34.820 | and to actually adopt our goals.
01:56:37.620 | With humans, with children,
01:56:39.580 | unfortunately, they go through this phase.
01:56:44.020 | First, they're too dumb to understand what we want,
01:56:47.100 | what our goals are.
01:56:47.100 | And then they have this period of some years,
01:56:50.420 | when they're both smart enough to understand them,
01:56:52.100 | and malleable enough that we have a chance
01:56:53.540 | to raise them well.
01:56:55.420 | And then they become teenagers,
01:56:56.860 | and it's kind of too late.
01:56:59.220 | But we have this window.
01:57:00.620 | With machines, the challenge is
01:57:01.940 | the intelligence might grow so fast
01:57:04.140 | that that window is pretty short.
01:57:05.940 | So that's a research problem.
01:57:08.540 | The third one is, how do you make sure they keep the goals,
01:57:11.380 | if they keep learning more and getting smarter?
01:57:14.580 | Many sci-fi movies are about how you have something
01:57:17.460 | which initially was aligned,
01:57:18.540 | but then things kind of go off the rails.
01:57:20.380 | And my kids were very, very excited
01:57:24.700 | about their Legos when they were little.
01:57:27.420 | Now they're just gathering dust in the basement.
01:57:29.820 | If we create machines that are really on board
01:57:32.620 | with the goal of taking care of humanity,
01:57:34.380 | we don't want them to get as bored with us,
01:57:36.340 | as my kids got with Legos.
01:57:39.540 | So this is another research challenge.
01:57:41.940 | How can you make some sort of recursively
01:57:43.460 | self-improving system retain certain basic goals?
01:57:47.480 | - That said, a lot of adult people still play with Legos.
01:57:50.940 | So maybe we succeeded with the Legos.
01:57:52.780 | - Maybe, I like your optimism.
01:57:54.660 | (laughing)
01:57:55.500 | - So not all AI systems have to maintain the goals, right?
01:57:58.820 | Some just some fraction.
01:58:00.220 | - Yeah, so there's a lot of talented AI researchers now
01:58:04.940 | who have heard of this and wanna work on it.
01:58:07.300 | Not so much funding for it yet.
01:58:08.940 | Of the billions that go into building AI more powerful,
01:58:16.260 | it's only a minuscule fraction
01:58:18.340 | so far going into the safety research.
01:58:20.080 | My attitude is generally we should not try
01:58:20.080 | to slow down the technology,
01:58:21.580 | but we should greatly accelerate the investment
01:58:23.420 | in this sort of safety research.
01:58:25.020 | And also make sure, this was very embarrassing last year,
01:58:29.380 | but the NSF decided to give out six of these big institutes.
01:58:33.720 | We got one of them for AI and science, which you asked me about.
01:58:37.100 | Another one was supposed to be for AI safety research.
01:58:39.800 | And they gave it to people studying oceans
01:58:43.580 | and climate and stuff.
01:58:44.980 | So I'm all for studying oceans and climates,
01:58:49.380 | but we need to actually have some money
01:58:51.220 | that actually goes into AI safety research also
01:58:53.460 | and doesn't just get grabbed by whatever.
01:58:55.500 | That's a fantastic investment.
01:58:58.020 | And then at the higher level, you asked this question,
01:59:00.580 | okay, what can we do?
01:59:02.760 | What are the biggest risks?
01:59:04.120 | I think we cannot just consider this
01:59:08.840 | to be only a technical problem.
01:59:11.080 | Again, 'cause if you solve only the technical problem,
01:59:13.720 | can I play with your robot?
01:59:14.720 | - Yes, please.
01:59:15.560 | - If we can get our machines to just blindly obey
01:59:20.620 | the orders we give them,
01:59:21.940 | so we can always trust that it will do what we want,
01:59:26.180 | that might be great for the owner of the robot,
01:59:28.480 | but it might not be so great for the rest of humanity
01:59:31.420 | if that person is that least favorite world leader
01:59:34.140 | or whatever you imagine, right?
01:59:35.740 | So we have to also take a look at how to apply alignment,
01:59:40.060 | not just to machines,
01:59:42.020 | but to all the other powerful structures.
01:59:44.600 | That's why it's so important
01:59:45.780 | to strengthen our democracy again.
01:59:47.060 | As I said, to have institutions,
01:59:48.580 | make sure that the playing field is not rigged
01:59:51.460 | so that corporations are given the right incentives
01:59:54.860 | to do the things that both make profit
01:59:57.280 | and are good for people,
01:59:58.900 | to make sure that countries have incentives
02:00:00.980 | to do things that are both good for their people
02:00:03.380 | and don't screw up the rest of the world.
02:00:06.860 | And this is not just something for AI nerds to geek out on.
02:00:10.300 | This is an interesting challenge for political scientists,
02:00:13.100 | economists, and so many other thinkers.
02:00:16.820 | - So one of the magical things that perhaps makes
02:00:22.860 | this earth quite unique is that it's home
02:00:27.280 | to conscious beings.
02:00:28.840 | So you mentioned consciousness.
02:00:30.440 | Perhaps as a small aside,
02:00:35.000 | because we didn't really get specific
02:00:36.720 | to how we might do the alignment.
02:00:39.440 | Like you said, it's just a really important
02:00:41.080 | research problem.
02:00:41.920 | But do you think engineering consciousness
02:00:44.720 | into AI systems is a possibility?
02:00:49.880 | Is it something that we might one day do?
02:00:53.060 | Or is there something fundamental to consciousness,
02:00:56.820 | something about consciousness,
02:00:59.860 | that is fundamental to humans and humans only?
02:01:02.360 | - I think it's possible.
02:01:04.620 | I think both consciousness and intelligence
02:01:08.340 | are information processing,
02:01:10.740 | certain types of information processing.
02:01:13.480 | And that fundamentally, it doesn't matter
02:01:15.980 | whether the information is processed by carbon atoms
02:01:19.020 | in neurons and brains or by silicon atoms
02:01:22.660 | and so on in our technology.
02:01:25.940 | Some people disagree.
02:01:28.260 | This is what I think as a physicist.
02:01:30.200 | - That consciousness is the same kind of,
02:01:34.980 | you said consciousness is information processing.
02:01:37.700 | So meaning, I think you had a quote of something like,
02:01:42.700 | it's information knowing itself, that kind of thing.
02:01:47.740 | - I think consciousness is the way information feels
02:01:51.060 | when it's being processed.
02:01:51.980 | - Once people die, yeah.
02:01:52.820 | - In complex ways.
02:01:53.660 | We don't know exactly what those complex ways are.
02:01:56.140 | It's clear that most of the information processing
02:01:59.260 | in our brains does not create an experience.
02:02:01.740 | We're not even aware of it.
02:02:03.620 | Like for example, you're not aware
02:02:06.340 | of your heartbeat regulation right now,
02:02:07.900 | even though it's clearly being done by your body.
02:02:10.660 | It's just kind of doing its own thing.
02:02:12.140 | When you go jogging, there's a lot of complicated stuff
02:02:15.340 | about how you put your foot down.
02:02:17.860 | And we know it's hard.
02:02:18.740 | That's why robots used to fall over so much.
02:02:20.620 | But you're mostly unaware about it.
02:02:22.760 | Your brain, your CEO consciousness module
02:02:25.780 | just sends an email, hey, I'm gonna keep jogging
02:02:28.140 | along this path.
02:02:29.180 | The rest is on autopilot, right?
02:02:31.620 | So most of it is not conscious,
02:02:33.220 | but somehow there is some of the information processing,
02:02:36.660 | which is we don't know what exactly.
02:02:41.660 | I think this is a science problem
02:02:44.140 | that I hope one day we'll have some equation for
02:02:47.660 | or something so we can be able to build
02:02:49.060 | a consciousness detector and say, yeah,
02:02:50.980 | here there is some consciousness, here there is not.
02:02:53.900 | Oh, don't boil that lobster because it's feeling pain
02:02:56.620 | or it's okay because it's not feeling pain.
02:02:59.860 | Right now we treat this as sort of just metaphysics,
02:03:02.460 | but it would be very useful in emergency rooms
02:03:06.900 | to know if a patient has locked in syndrome
02:03:09.740 | and is conscious or if they are actually just out.
02:03:14.580 | And in the future, if you build a very, very intelligent
02:03:17.740 | helper robot to take care of you,
02:03:20.100 | I think you'd like to know if you should feel guilty
02:03:22.500 | by shutting it down or if it's just like a zombie
02:03:26.180 | going through the motions like a fancy tape recorder.
02:03:28.880 | Once we can make progress on the science of consciousness
02:03:34.060 | and figure out what is conscious and what isn't,
02:03:38.340 | then we, assuming we wanna create positive experiences
02:03:43.340 | and not suffering, we'll probably choose to build
02:03:48.900 | some machines that are deliberately unconscious
02:03:51.780 | that do incredibly boring, repetitive jobs
02:03:56.780 | in an iron mine somewhere or whatever.
02:03:59.700 | And maybe we'll choose to create helper robots
02:04:03.120 | for the elderly that are conscious
02:04:05.340 | so that people don't just feel creeped out,
02:04:07.060 | that the robot is just faking it
02:04:10.180 | when it acts like it's sad or happy.
02:04:12.140 | - Like you said, elderly, I think everybody
02:04:14.500 | gets pretty deeply lonely in this world.
02:04:16.900 | And so there's a place, I think, for everybody
02:04:19.660 | to have a connection with conscious beings,
02:04:21.660 | whether they're human or otherwise.
02:04:24.400 | - But I know for sure that I would, if I had a robot,
02:04:28.820 | if I was gonna develop any kind of personal,
02:04:31.060 | emotional connection with it, I would be very creeped out
02:04:33.820 | if I knew at an intellectual level
02:04:35.260 | that the whole thing was just a fraud.
02:04:36.820 | Now, today you can buy a little talking doll for a kid,
02:04:41.820 | which will say things, and the little child
02:04:45.340 | will often think that this is actually conscious
02:04:47.820 | and even tell it real secrets that then go on the internet
02:04:50.420 | with all sorts of creepy repercussions.
02:04:52.580 | I would not wanna be just hacked and tricked like this.
02:04:58.060 | If I was gonna be developing real emotional connections
02:05:01.580 | with a robot, I would wanna know that this is actually real.
02:05:05.420 | It's acting conscious, acting happy
02:05:08.100 | because it actually feels it.
02:05:09.900 | And I think this is not sci-fi.
02:05:11.420 | I think-- - It's possible to measure,
02:05:13.580 | to come up with tools and make,
02:05:15.560 | after we understand the science of consciousness,
02:05:17.540 | you're saying we'll be able to come up with tools
02:05:19.780 | that can measure consciousness and definitively say
02:05:23.020 | this thing is experiencing the things
02:05:26.060 | it says it's experiencing. - Yeah, kind of by definition.
02:05:28.300 | If it is a physical phenomenon, information processing,
02:05:31.540 | and we know that some information processing is conscious
02:05:34.020 | and some isn't, well, then there is something there
02:05:35.980 | to be discovered with the methods of science.
02:05:38.020 | Giulio Tononi has stuck his neck out the farthest
02:05:41.100 | and written down some equations for a theory.
02:05:43.620 | Maybe that's right, maybe it's wrong.
02:05:45.700 | We certainly don't know.
02:05:47.060 | But I applaud that kind of efforts to sort of take this,
02:05:50.460 | say this is not just something that philosophers
02:05:53.940 | can have beer and muse about,
02:05:56.340 | but something we can measure and study.
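Very schematically, the flavor of such an equation is to quantify how irreducible a system's information is to that of its parts; this cartoon version (not Tononi's actual published formalism, which is considerably more involved) only captures the spirit:

$$\Phi(S) \;\sim\; \min_{\{A,B\}\,:\,A \cup B = S} \Big[\, I(S) - I(A) - I(B) \,\Big],$$

where $I(\cdot)$ loosely stands for the information a (sub)system generates about its own states, and the minimum runs over ways of cutting the system in two; consciousness, on this view, corresponds to a large $\Phi$.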
02:05:58.740 | And bringing that back to us,
02:06:00.580 | I think what we would probably choose to do, as I said,
02:06:02.980 | is if we cannot figure this out,
02:06:04.580 | choose to be quite mindful
02:06:09.020 | about what sort of consciousness, if any,
02:06:11.300 | we put in different machines that we have.
02:06:13.740 | And certainly, we wouldn't wanna make,
02:06:19.020 | should not be making a bunch of machines that suffer
02:06:21.700 | without us even knowing it, right?
02:06:23.580 | And if at any point someone decides to upload themselves,
02:06:28.260 | like Ray Kurzweil wants to do,
02:06:30.060 | I don't know if you've had him on your show.
02:06:31.420 | - We agreed, but then COVID happened,
02:06:32.900 | so we're waiting it out a little bit.
02:06:34.580 | - Suppose he uploads himself into this robo-Ray,
02:06:38.460 | and it talks like him and acts like him and laughs like him,
02:06:42.100 | and before he powers off his biological body,
02:06:44.780 | he would probably be pretty disturbed
02:06:47.660 | if he realized that there's no one home.
02:06:49.540 | This robot is not having any subjective experience, right?
02:06:52.760 | If humanity gets replaced by machine descendants
02:06:59.820 | which do all these cool things and build spaceships
02:07:02.260 | and go to intergalactic rock concerts,
02:07:05.620 | and it turns out that they are all unconscious,
02:07:09.980 | just going through the motions,
02:07:11.420 | wouldn't that be like the ultimate zombie apocalypse, right?
02:07:16.180 | Just a play for empty benches?
02:07:18.020 | - Yeah, I have a sense that there's some kind of,
02:07:21.180 | once we understand consciousness better,
02:07:22.780 | we'll understand that there's some kind of continuum,
02:07:25.620 | and it would be a greater appreciation.
02:07:28.020 | And we'll probably understand, just like you said,
02:07:30.460 | it'd be unfortunate if it's a trick.
02:07:32.420 | We'll probably definitively understand
02:07:33.940 | that love is indeed a trick that we play on each other,
02:07:37.780 | that we humans are, we convince ourselves we're conscious,
02:07:40.960 | but we're really, us and trees and dolphins
02:07:45.220 | are all the same kind of consciousness.
02:07:46.620 | - Can I try to cheer you up a little bit
02:07:48.140 | with a philosophical thought here about the love part?
02:07:50.260 | - Yes, let's do it.
02:07:51.360 | - You might say, okay, yeah,
02:07:54.380 | love is just a collaboration enabler.
02:07:57.960 | And then maybe you can go and get depressed about that.
02:08:01.840 | But I think that would be the wrong conclusion, actually.
02:08:05.080 | I know that the only reason I enjoy food
02:08:08.680 | is because my genes hacked me,
02:08:11.040 | and they don't want me to starve to death.
02:08:13.760 | Not because they care about me
02:08:16.760 | consciously enjoying succulent delights
02:08:19.080 | of pistachio ice cream,
02:08:21.120 | but they just want me to make copies of them.
02:08:23.360 | The whole thing, so in a sense,
02:08:24.560 | the whole enjoyment of food is also a scam like this.
02:08:29.000 | But does that mean I shouldn't take pleasure
02:08:31.320 | in this pistachio ice cream?
02:08:32.600 | I love pistachio ice cream, and I can tell you,
02:08:34.960 | I know this is an experimental fact,
02:08:38.240 | I enjoy pistachio ice cream every bit as much,
02:08:41.640 | even though I scientifically know exactly
02:08:43.680 | what kind of scam this was.
02:08:46.880 | - Your genes really appreciate
02:08:48.680 | that you like the pistachio ice cream.
02:08:50.480 | - Well, but my mind appreciates it too,
02:08:53.120 | and I have a conscious experience right now.
02:08:55.840 | Ultimately, all of my brain is also just something
02:08:58.680 | the genes built to copy themselves, but so what?
02:09:01.360 | I'm grateful that, yeah, thanks genes for doing this,
02:09:05.000 | but now it's my brain that's in charge here,
02:09:07.640 | and I'm gonna enjoy my conscious experience,
02:09:09.560 | thank you very much, and not just the pistachio ice cream,
02:09:12.480 | but also the love I feel for my amazing wife,
02:09:15.480 | and all the other delights of being conscious.
02:09:20.240 | Actually, Richard Feynman, I think, said this so well,
02:09:25.080 | he is also the guy who really got me into physics.
02:09:27.960 | Some artist friend said that,
02:09:31.240 | oh, science kind of just is the party pooper,
02:09:34.520 | it kind of ruins the fun, right?
02:09:36.280 | When like, you have a beautiful flower, says the artist,
02:09:39.680 | and then the scientist is gonna deconstruct that
02:09:41.600 | into just a blob of quarks and electrons,
02:09:44.160 | and Feynman just pushed back on that
02:09:46.080 | in such a beautiful way,
02:09:47.480 | which I think also can be used to push back
02:09:49.920 | and make you not feel guilty about falling in love.
02:09:53.200 | So here's what Feynman basically said,
02:09:54.880 | he said to his friend,
02:09:56.920 | yeah, I can also, as a scientist,
02:09:58.880 | see that this is a beautiful flower, thank you very much.
02:10:00.960 | Maybe I can't draw as good a painting as you,
02:10:03.280 | 'cause I'm not as talented an artist,
02:10:04.560 | but yeah, I can really see the beauty in it,
02:10:06.800 | and it also looks beautiful to me.
02:10:09.360 | But in addition to that, Feynman said, as a scientist,
02:10:12.200 | I see even more beauty that the artist did not see, right?
02:10:16.960 | Suppose this is a flower on a blossoming apple tree,
02:10:21.080 | you could say this tree has more beauty in it
02:10:23.840 | than just the colors and the fragrance.
02:10:26.400 | This tree is made of air, Feynman wrote.
02:10:29.040 | This is one of my favorite Feynman quotes ever.
02:10:31.240 | And it took the carbon out of the air
02:10:33.760 | and bound it in using the flaming heat of the sun,
02:10:36.160 | you know, to turn the air into a tree,
02:10:38.600 | and when you burn logs in your fireplace,
02:10:42.760 | it's really beautiful to think that this is being reversed.
02:10:45.120 | Now the tree is going, the wood is going back into air,
02:10:48.560 | and in this flaming, beautiful dance of the fire
02:10:52.520 | that the artist can see is the flaming light of the sun
02:10:55.880 | that was bound in to turn the air into the tree,
02:10:59.120 | and then the ashes are the little residue
02:11:01.440 | that didn't come from the air,
02:11:02.560 | that the tree sucked out of the ground.
02:11:04.280 | Feynman said, these are beautiful things,
02:11:06.160 | and science just adds, it doesn't subtract.
02:11:10.000 | And I feel exactly that way about love
02:11:12.760 | and about pistachio ice cream also.
02:11:14.840 | I can understand that there is even more nuance
02:11:18.720 | to the whole thing, right?
02:11:20.520 | At this very visceral level, you can fall in love
02:11:23.680 | just as much as someone who knows nothing about neuroscience
02:11:26.680 | but you can also appreciate this even greater beauty in it.
02:11:32.800 | Isn't it remarkable that it came about
02:11:35.640 | from this completely lifeless universe,
02:11:43.120 | just a hot blob of plasma expanding?
02:11:43.120 | And then over the eons, gradually,
02:11:46.200 | first the strong nuclear force decided
02:11:48.480 | to combine quarks together into nuclei,
02:11:50.960 | and then the electric force bound in electrons
02:11:53.080 | and made atoms, and then they clustered from gravity,
02:11:55.280 | and you got planets and stars and this and that,
02:11:57.760 | and then natural selection came along,
02:12:00.080 | and the genes had their little thing,
02:12:01.840 | and you started getting what went from seeming
02:12:04.680 | like a completely pointless universe
02:12:06.280 | that was just trying to increase entropy
02:12:08.080 | and approach heat death into something
02:12:10.200 | that looked more goal-oriented.
02:12:11.760 | Isn't that kind of beautiful?
02:12:13.280 | And then this goal-orientedness through evolution
02:12:15.800 | got ever more sophisticated where you got ever more,
02:12:18.760 | and then you started getting this thing
02:12:20.160 | which is kind of like DeepMind's MuZero on steroids,
02:12:25.160 | the ultimate self-play is not what DeepMind's AI
02:12:29.440 | does against itself to get better at Go.
02:12:32.120 | It's what all these little quark blobs did
02:12:34.480 | against each other in the game of survival of the fittest.
02:12:38.960 | Now, when you had really dumb bacteria living
02:12:42.320 | in a simple environment, there wasn't much incentive
02:12:45.200 | to get intelligent, but then life made the environment
02:12:49.520 | more complex, and then there was more incentive
02:12:52.080 | to get even smarter, and that gave the other organisms
02:12:56.000 | more of an incentive to also get smarter,
02:12:57.560 | and then here we are now, just like MuZero learned
02:13:02.560 | to become world master at Go and chess
02:13:05.560 | just by playing against itself.
02:13:08.600 | All the quarks here on our planet and electrons
02:13:11.360 | have created giraffes and elephants and humans and love.
02:13:16.360 | I just find that really beautiful, and to me,
02:13:21.200 | that just adds to the enjoyment of love.
02:13:24.240 | It doesn't subtract anything.
02:13:25.720 | Do you feel a little more cheerful now?
02:13:27.400 | - I feel way better.
02:13:29.000 | That was incredible.
02:13:30.680 | So this self-play of quarks, taking back
02:13:34.600 | to the beginning of our conversation a little bit,
02:13:37.080 | there's so many exciting possibilities
02:13:39.560 | about artificial intelligence,
02:13:40.840 | understanding the basic laws of physics.
02:13:44.260 | Do you think AI will help us unlock,
02:13:47.420 | there's been quite a bit of excitement
02:13:49.280 | throughout the history of physics of coming up
02:13:51.720 | with more and more general, simple laws
02:13:55.880 | that explain the nature of our reality,
02:13:58.440 | and then the ultimate of that would be a theory
02:14:01.120 | of everything that combines everything together.
02:14:03.680 | Do you think it's possible that, well, one, we humans,
02:14:07.440 | but perhaps AI systems will figure out a theory of physics
02:14:12.440 | that unifies all the laws of physics?
02:14:16.200 | - Yeah, I think it's absolutely possible.
02:14:19.920 | I think it's very clear that we're gonna see
02:14:22.820 | a great boost to science.
02:14:24.960 | We're already seeing a boost, actually,
02:14:26.720 | from machine learning helping science.
02:14:28.760 | AlphaFold was an example,
02:14:30.520 | cracking the decades-old protein folding problem.
02:14:33.340 | So, and gradually, yeah, unless we go extinct
02:14:38.160 | by doing something dumb like we discussed,
02:14:39.720 | I think it's very likely that our understanding
02:14:44.720 | of physics will become so good
02:14:48.080 | that our technology will no longer be limited
02:14:53.080 | by human intelligence, but instead be limited
02:14:56.280 | by the laws of physics.
02:14:57.440 | So our tech today is limited
02:15:00.120 | by what we've been able to invent, right?
02:15:02.160 | I think as AI progresses, it'll just be limited
02:15:05.840 | by the speed of light and other physical limits,
02:15:09.280 | which will mean it's gonna be just dramatically
02:15:13.120 | beyond where we are now.
02:15:15.320 | - Do you think it's a fundamentally mathematical pursuit
02:15:18.600 | of trying to understand the laws
02:15:22.120 | that govern our universe from a mathematical perspective?
02:15:25.760 | It's almost like if it's AI,
02:15:28.000 | it's exploring the space of theorems
02:15:31.640 | and those kinds of things.
02:15:33.520 | Or are there some other more computational ideas,
02:15:38.520 | more sort of empirical ideas?
02:15:41.280 | - They're both, I would say.
02:15:43.120 | It's really interesting to look out at the landscape
02:15:45.920 | of everything we call science today.
02:15:48.000 | So here you come now with this big new hammer.
02:15:50.200 | It says machine learning on it,
02:15:51.480 | and that's, you know, where are there some nails
02:15:53.400 | that you can help with here that you can hammer?
02:15:56.640 | Ultimately, if machine learning gets to the point
02:16:00.160 | that it can do everything better than us,
02:16:02.840 | it will be able to help across the whole space of science.
02:16:06.040 | But maybe we can anchor it by starting a little bit
02:16:08.160 | right now near term and see how we kind of move forward.
02:16:11.680 | So like right now, first of all,
02:16:14.880 | you have a lot of big data science, right?
02:16:17.400 | Where, for example, with telescopes,
02:16:19.400 | we are able to collect way more data every hour
02:16:24.120 | than a grad student can just pore over
02:16:26.720 | like in the old times, right?
02:16:28.760 | And machine learning is already being used very effectively,
02:16:31.040 | even at MIT, right?
02:16:32.120 | To find planets around other stars,
02:16:34.680 | to detect exciting new signatures
02:16:36.560 | of new particle physics in the sky,
02:16:38.760 | to detect the ripples in the fabric of space-time
02:16:42.960 | that we call gravitational waves
02:16:44.640 | caused by enormous black holes crashing into each other
02:16:47.720 | halfway across our observable universe.
02:16:52.680 | Machine learning is running and taking data right now,
02:16:52.680 | doing all these things,
02:16:53.800 | and it's really helping all these experimental fields.
02:16:57.560 | There is a separate front of physics, computational physics,
02:17:03.240 | which is getting an enormous boost also.
02:17:05.680 | So we had to do all our computations by hand, right?
02:17:09.520 | People would have these giant books
02:17:11.240 | with tables of logarithms, and oh my God,
02:17:15.440 | it pains me to even think how long
02:17:17.880 | it would have taken to do simple stuff.
02:17:19.920 | Then we started to get little calculators and computers
02:17:23.600 | that could do some basic math for us.
02:17:26.560 | Now, what we're starting to see is
02:17:28.880 | kind of a shift from GOFAI (good old-fashioned) computational physics
02:17:35.640 | to neural network computational physics.
02:17:40.040 | What I mean by that is most computational physics
02:17:44.560 | would be done by humans programming in
02:17:48.520 | the intelligence of how to do the computation
02:17:50.240 | into the computer.
02:17:51.200 | Just as when Garry Kasparov got his posterior kicked
02:17:55.440 | by IBM's Deep Blue in chess,
02:17:56.920 | humans had programmed in exactly how to play chess.
02:17:59.920 | Intelligence came from the humans.
02:18:01.200 | It wasn't learned, right?
02:18:02.440 | MuZero can beat not only Kasparov in chess,
02:18:08.520 | but also Stockfish,
02:18:09.880 | which is the best sort of GOFAI chess program.
02:18:13.560 | By learning, and we're seeing more of that now,
02:18:17.640 | that shift beginning to happen in physics.
02:18:19.400 | Let me give you an example.
02:18:21.520 | So lattice QCD is an area of physics
02:18:25.080 | whose goal is basically to take the periodic table
02:18:28.320 | and just compute the whole thing from first principles.
02:18:31.080 | This is not the search for theory of everything.
02:18:34.840 | We already know the theory that's supposed to produce
02:18:38.320 | this output, the periodic table,
02:18:40.680 | which atoms are stable, how heavy they are,
02:18:43.320 | all that good stuff, their spectral lines.
02:18:45.440 | It's a theory, lattice QCD,
02:18:48.720 | you can put it on your t-shirt.
02:18:50.600 | Our colleague, Frank Wilczek,
02:18:51.760 | got the Nobel prize for working on it.
02:18:53.720 | But the math is just too hard for us to solve.
02:18:57.160 | We have not been able to start with these equations
02:18:59.240 | and solve them to the extent that we can predict, oh yeah.
02:19:02.080 | And then there is carbon,
02:19:03.960 | and this is what the spectrum of the carbon atom looks like.
02:19:07.640 | But awesome people are building
02:19:10.000 | these super computer simulations
02:19:12.080 | where you just put in these equations
02:19:15.000 | and you make a big cubic lattice of space,
02:19:20.000 | or actually it's a very small lattice
02:19:22.120 | because you're going down to the subatomic scale,
02:19:25.680 | and you try to solve it.
02:19:26.920 | But it's just so computationally expensive
02:19:29.000 | that we still haven't been able to calculate things
02:19:31.840 | as accurately as we measure them in many cases.
02:19:35.000 | And now machine learning is really revolutionizing this.
02:19:37.560 | So my colleague, Phiala Shanahan at MIT, for example,
02:19:40.080 | she's been using this really cool machine learning technique
02:19:44.640 | called normalizing flows,
02:19:47.600 | where she's realized she can actually
02:19:49.360 | speed up the calculation dramatically
02:19:52.240 | by having the AI learn how to do things faster.
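To give a flavor of what this looks like in practice, here is a heavily simplified sketch of the general normalizing-flow idea: a learned, invertible map turns easy-to-sample noise into field configurations, and the change-of-variables formula gives the model's log-density so it can be trained against an action. Everything here, the single coupling layer, the toy action, and the training setup, is an illustrative assumption, not Shanahan's actual lattice QCD pipeline.

```python
# Minimal sketch of a flow-based sampler (illustrative only, not lattice QCD).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Transform half of the variables, conditioned on the other half."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.Tanh(),
            nn.Linear(64, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        x2 = z2 * torch.exp(s) + t              # invertible affine transform
        log_det = s.sum(dim=1)                  # log|det Jacobian|
        return torch.cat([z1, x2], dim=1), log_det

def toy_action(phi):
    """Stand-in for a lattice action S(phi); the real one is QCD-specific."""
    return 0.5 * (phi ** 2).sum(dim=1) + 0.1 * (phi ** 4).sum(dim=1)

dim = 8
flow = AffineCoupling(dim)
base = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

for step in range(200):                         # train by reverse KL to exp(-S)
    z = base.sample((256,))
    phi, log_det = flow(z)
    log_q = base.log_prob(z).sum(dim=1) - log_det   # model log-density of phi
    loss = (log_q + toy_action(phi)).mean()         # KL(q || exp(-S)/Z), up to a constant
    opt.zero_grad(); loss.backward(); opt.step()
```

Once trained, drawing a new configuration is just one cheap forward pass through the flow, which is where the speed-up over traditional sampling comes from.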
02:19:54.680 | Another area like this where we suck up
02:20:00.720 | an enormous amount of super computer time to do physics
02:20:05.080 | is black hole collisions.
02:20:07.200 | So now that we've done the sexy stuff
02:20:08.640 | of detecting a bunch of these
02:20:10.480 | with LIGO and other experiments,
02:20:11.720 | we want to be able to know what we're seeing.
02:20:15.120 | And so it's a very simple conceptual problem.
02:20:18.160 | It's the two-body problem.
02:20:19.560 | Newton solved it for classical gravity hundreds of years ago,
02:20:24.720 | but the two-body problem is still not fully solved.
02:20:27.800 | - For black holes.
02:20:29.200 | - Yes, in Einstein's gravity,
02:20:30.760 | because the two things won't just orbit each other forever anymore;
02:20:33.560 | they give off gravitational waves,
02:20:36.040 | and eventually they crash into each other.
02:20:37.800 | And the game, what you want to do
02:20:39.440 | is you want to figure out, okay,
02:20:41.520 | what kind of wave comes out
02:20:43.480 | as a function of the masses of the two black holes,
02:20:46.320 | as a function of how they're spinning
02:20:48.120 | relative to each other, et cetera.
02:20:50.720 | And that is so hard.
02:20:52.080 | It can take months of super computer time
02:20:54.200 | and massive numbers of cores to do it.
02:20:56.160 | Wouldn't it be great if you can use machine learning
02:21:00.400 | to greatly speed that up, right?
02:21:04.760 | Now you can use the expensive old GOFAI calculation
02:21:09.360 | as the truth, and then see if machine learning
02:21:11.920 | can figure out a smarter, faster way
02:21:13.600 | of getting the right answer.
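A hedged sketch of that workflow: a few expensive runs act as ground truth, and a cheap learned map from binary parameters to waveforms stands in for rerunning the simulator. The "simulator", features, and fitting method below are toy placeholders, not numerical relativity.

```python
# Toy surrogate model: learn (masses, spin) -> waveform from "expensive" runs.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 128)

def expensive_simulation(m1, m2, spin):
    """Stand-in for a months-long supercomputer run (a toy two-tone signal)."""
    return (m1 * m2) ** 0.6 * np.sin(60 * t) + spin * np.cos(60 * t)

def features(m1, m2, spin):
    """Simple polynomial features of the binary parameters."""
    return np.array([1.0, m1, m2, spin, m1 * m2, m1**2, m2**2])

params = rng.uniform([0.5, 0.5, -0.5], [2.0, 2.0, 0.5], size=(200, 3))
waves = np.stack([expensive_simulation(*p) for p in params])   # the "truth"
Phi = np.stack([features(*p) for p in params])

W, *_ = np.linalg.lstsq(Phi, waves, rcond=None)   # fit the fast surrogate

def surrogate(m1, m2, spin):
    """Near-instant prediction in place of an expensive rerun."""
    return features(m1, m2, spin) @ W

err = np.abs(surrogate(1.0, 1.5, 0.1) - expensive_simulation(1.0, 1.5, 0.1)).max()
print(f"toy surrogate max error: {err:.3f}")
```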
02:21:15.000 | Yet another area of computational physics.
02:21:20.000 | These are probably the big three
02:21:22.280 | that suck up the most computer time,
02:21:24.240 | lattice QCD, black hole collisions,
02:21:27.160 | and cosmological simulations,
02:21:29.600 | where you take not a subatomic thing
02:21:32.320 | and try to figure out the mass of the proton,
02:21:34.440 | but you take something that's enormous
02:21:37.720 | and try to look at how all the galaxies get formed in there.
02:21:41.360 | There again, there are a lot of very cool ideas right now
02:21:44.760 | about how you can use machine learning
02:21:46.440 | to do this sort of stuff better.
02:21:48.040 | The difference between this and the big data
02:21:51.600 | is you kind of make the data yourself, right?
02:21:54.600 | So, and then finally,
02:21:58.480 | we're looking over the physics landscape
02:22:00.240 | and seeing what can we hammer with machine learning, right?
02:22:02.160 | So we talked about experimental data, big data,
02:22:05.520 | discovering cool stuff that we humans
02:22:07.880 | then look more closely at.
02:22:09.520 | Then we talked about taking the expensive computations
02:22:13.440 | we're doing now and figuring out how to do them
02:22:15.520 | much faster and better with AI.
02:22:18.560 | And finally, let's go really theoretical.
02:22:20.880 | So things like discovering equations,
02:22:24.000 | having deep fundamental insights.
02:22:28.800 | This is something closest to what I've been doing
02:22:33.000 | in my group.
02:22:33.840 | We talked earlier about the whole AI Feynman project
02:22:35.880 | where if you just have some data,
02:22:37.880 | how do you automatically discover equations
02:22:39.800 | that seem to describe this well
02:22:42.160 | that you can then go back as a human
02:22:44.080 | and work with and test and explore?
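A minimal sketch of that idea, far simpler than the actual AI Feynman pipeline: fit each formula from a small candidate library to the measurements and keep whichever fits best. The synthetic data, the candidate set, and the single-constant fitting are all illustrative assumptions.

```python
# Toy equation discovery: pick the symbolic form that best explains the data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 5.0, size=200)
y = 9.8 * x**2 / 2 + rng.normal(scale=0.01, size=200)   # "measurements" of d = g t^2 / 2

candidates = {
    "a*x":      lambda x, a: a * x,
    "a*x**2":   lambda x, a: a * x**2,
    "a*sin(x)": lambda x, a: a * np.sin(x),
    "a/x":      lambda x, a: a / x,
}

best = None
for name, f in candidates.items():
    basis = f(x, 1.0)                      # fit the constant a by least squares
    a = (basis @ y) / (basis @ basis)
    err = np.mean((f(x, a) - y) ** 2)
    if best is None or err < best[2]:
        best = (name, a, err)

print(best)   # expect roughly ("a*x**2", 4.9, ~1e-4): it recovers d = g t^2 / 2
```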
02:22:46.600 | And you asked a really good question also
02:22:51.160 | about if this is sort of a search problem in some sense.
02:22:54.880 | That's very deep actually what you said, because it is.
02:22:57.720 | Suppose I ask you to prove some mathematical theorem.
02:23:01.040 | What is a proof in math?
02:23:03.680 | It's just a long string of steps,
02:23:05.520 | logical steps that you can write out with symbols.
02:23:08.720 | And once you find it, it's very easy to write a program
02:23:10.960 | to check whether it's a valid proof or not.
02:23:13.760 | So why is it so hard to prove it then?
02:23:16.880 | Well, because there are ridiculously many possible
02:23:19.760 | candidate proofs you could write down, right?
02:23:22.400 | If the proof contains 10,000 symbols,
02:23:26.240 | even if there are only 10 options
02:23:28.560 | for what each symbol could be,
02:23:29.960 | that's 10 to the power of 10,000 possible proofs,
02:23:34.200 | which is way more than there are atoms in our universe.
02:23:36.800 | So you could say it's trivial to prove these things.
02:23:39.200 | You just write a computer program to generate all strings,
02:23:42.000 | and then check, is this a valid proof?
02:23:43.560 | Eh, no.
02:23:44.920 | Is this a valid proof?
02:23:46.000 | Eh, no.
02:23:46.840 | And then you just keep doing this forever.
02:23:51.020 | But there are a lot of,
02:23:53.160 | but it is fundamentally a search problem.
02:23:55.120 | You just want to search the space of all strings of symbols
02:23:59.920 | to find the one, find one that is the proof, right?
02:24:02.960 | And there's a whole area of machine learning called search.
02:24:08.840 | How do you search through some giant space
02:24:10.600 | to find the needle in the haystack?
02:24:12.360 | It's easier in cases where there's a clear measure of good,
02:24:17.200 | like you're not just right or wrong,
02:24:18.840 | but this is better and this is worse,
02:24:20.680 | so you can maybe get some hints
02:24:21.840 | as to which direction to go in.
02:24:23.840 | That's why, as we talked about, neural networks work so well.
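As a toy illustration of that "easy to check, hard to find" asymmetry, here is a brute-force search where a trivial arithmetic check stands in for a proof checker. The symbol set and the verifier are illustrative assumptions, not a real theorem prover.

```python
# Checking any one candidate is cheap; finding a good one by brute force blows up.
import itertools

SYMBOLS = "1234567890+*"            # 12 options per position

def verifier(candidate, target=24):
    """Cheap to check a single candidate 'proof'."""
    try:
        return eval(candidate) == target       # e.g. "3*8" -> 24
    except Exception:
        return False                           # malformed strings are rejected

def brute_force(length, target=24):
    """Exhaustive search: 12**length candidates, hopeless for long 'proofs'."""
    for chars in itertools.product(SYMBOLS, repeat=length):
        candidate = "".join(chars)
        if verifier(candidate, target):
            return candidate
    return None

print(brute_force(3))   # fine at length 3; length 20 is already ~10**21 strings
```

A learned heuristic for "this partial candidate looks promising" is exactly what would turn this blind enumeration into a guided search.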
02:24:27.040 | - I mean, that's such a human thing
02:24:30.720 | of that moment of genius,
02:24:32.320 | of figuring out the intuition of good, essentially.
02:24:37.320 | I mean, we thought that that was--
02:24:38.800 | - Or is it?
02:24:40.160 | - Maybe it's not, right?
02:24:41.360 | We thought that about chess, right?
02:24:42.760 | - Exactly.
02:24:43.880 | - That the ability to see like 10, 15,
02:24:46.880 | sometimes 20 steps ahead was not a calculation
02:24:50.720 | that humans were performing.
02:24:51.800 | It was some kind of weird intuition
02:24:53.760 | about different patterns, about board positions,
02:24:57.320 | about the relative positions.
02:24:58.760 | - Exactly.
02:24:59.600 | - Somehow stitching stuff together.
02:25:01.680 | And a lot of it is just like intuition.
02:25:03.960 | But then you have, like, AlphaZero, I guess,
02:25:06.320 | being the first one that did the self-play.
02:25:10.680 | It just came up with this.
02:25:12.200 | It was able to learn through self-play mechanism,
02:25:14.600 | this kind of intuition.
02:25:15.800 | - Exactly.
02:25:16.840 | - But just like you said, it's so fascinating to think
02:25:20.000 | within the space of totally new ideas,
02:25:24.640 | can that be done in developing theorems?
02:25:28.960 | - We know it can be done by neural networks
02:25:30.800 | 'cause we did it with the neural networks
02:25:32.280 | in the cranium of the great mathematicians of humanity.
02:25:36.240 | And I'm so glad you brought up AlphaZero
02:25:38.600 | 'cause that's the counter example.
02:25:39.960 | It turned out we were flattering ourselves
02:25:41.840 | when we said intuition is something different.
02:25:45.000 | that only humans can do it,
02:25:46.520 | that it's not information processing.
02:25:49.240 | It used to seem that way, but
02:25:52.320 | again, it's really instructive, I think,
02:25:56.200 | to compare the chess computer, Deep Blue,
02:25:58.480 | that beat Kasparov, with AlphaZero
02:26:02.040 | that beat Lee Sedol at Go.
02:26:04.280 | Because for Deep Blue, there was no intuition.
02:26:08.640 | Well, there was some: humans had programmed in some intuition.
02:26:12.000 | After humans had played a lot of games,
02:26:13.600 | they told the computer, you know,
02:26:15.040 | count the pawn as one point, the bishop as three points,
02:26:18.720 | the rook as five points and so on.
02:26:20.320 | You add it all up and then you add some extra points
02:26:22.480 | for past pawns and subtract if the opponent has it
02:26:25.480 | and blah, blah, blah, blah.
02:26:27.340 | And then what Deep Blue did was just search.
02:26:31.600 | Just very brute force, tried many, many moves ahead,
02:26:35.000 | all these combinations in a pruned tree search.
02:26:37.440 | And it could think much faster than Kasparov
02:26:40.480 | and it won, right?
02:26:41.520 | And that, I think, inflated our egos
02:26:45.480 | in a way it shouldn't have
02:26:46.600 | 'cause people started to say, yeah, yeah,
02:26:48.800 | it's just brute force search but it has no intuition.
02:26:51.400 | AlphaZero really popped our bubble there
02:26:57.320 | because what AlphaZero does,
02:27:00.880 | yes, it does also do some of that tree search,
02:27:03.900 | but it also has this intuition module
02:27:06.560 | which in geek speak is called a value function
02:27:09.560 | where it just looks at the board
02:27:11.120 | and comes up with a number for how good is that position.
02:27:15.160 | The difference was no human told it
02:27:17.960 | how good the position is.
02:27:19.280 | It just learned it.
02:27:21.080 | And MuZero is the coolest or scariest of all,
02:27:26.820 | depending on your mood,
02:27:28.200 | because the same basic AI system
02:27:32.040 | will learn what the good board position is
02:27:35.320 | regardless of whether it's chess or Go or Shogi
02:27:38.640 | or Pac-Man or Lady Pac-Man or Breakout or Space Invaders
02:27:42.920 | or any number of other games.
02:27:45.000 | You don't tell it anything
02:27:45.840 | and it gets this intuition after a while for what's good.
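To make the contrast concrete, here is a tiny, purely illustrative sketch: a Deep Blue-style evaluation that humans write down by hand, next to an AlphaZero-style value function fit from outcomes. The piece encoding, features, and training data are toy placeholders, not anything from the real engines.

```python
# Hand-coded evaluation (Deep Blue style) vs. a learned value function (AlphaZero style).
import numpy as np

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def handcoded_eval(piece_counts):
    """Humans supply the scoring rule: material count, uppercase = white."""
    return sum(PIECE_VALUES[p] * (piece_counts.get(p, 0)
                                  - piece_counts.get(p.lower(), 0))
               for p in PIECE_VALUES)

# Learned alternative: fit the scoring rule from (position features, outcome) pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))              # toy position features
true_w = rng.normal(size=10)
y = np.tanh(X @ true_w)                     # toy "game outcomes" in [-1, 1]
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # a linear model stands in for the value net

def learned_value(features):
    return float(np.clip(features @ w, -1, 1))

print(handcoded_eval({"P": 8, "p": 7, "Q": 1, "q": 1}))   # +1 pawn for white
print(learned_value(X[0]))                                # learned estimate for a position
```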
02:27:49.760 | So this is very hopeful for science, I think,
02:27:52.760 | because if it can get intuition
02:27:55.240 | for what's a good position there,
02:27:57.280 | maybe it can also get intuition
02:27:58.880 | for what are some good directions to go
02:28:00.620 | if you're trying to prove something.
02:28:02.480 | One of the most fun things in my science career
02:28:06.400 | is when I've been able to prove some theorem about something
02:28:08.640 | and it's very heavily intuition guided, of course.
02:28:12.200 | I don't sit and try all random strings.
02:28:14.280 | I have a hunch that this reminds me a little bit
02:28:17.720 | about this other proof I've seen for this thing.
02:28:20.000 | So maybe I first, what if I try this?
02:28:22.600 | Nah, that didn't work out.
02:28:23.960 | But this reminds me, actually,
02:28:25.920 | the way this failed reminds me of that.
02:28:28.600 | So combining the intuition
02:28:30.520 | with all these brute force capabilities,
02:28:34.880 | I think it's gonna be able to help physics too.
02:28:38.600 | - Do you think there'll be a day when an AI system
02:28:42.960 | being the primary contributor,
02:28:45.120 | let's say 90% plus, wins a Nobel Prize in physics?
02:28:48.260 | Obviously, they'll give it to the humans
02:28:52.000 | 'cause we humans don't like to give prizes to machines.
02:28:54.840 | It'll give it to the humans behind the system.
02:28:57.600 | You could argue that AI has already been involved
02:28:59.960 | in some Nobel Prizes, probably,
02:29:01.600 | maybe some to the black holes and stuff like that.
02:29:03.600 | - Yeah, we don't like giving prizes to other life forms.
02:29:07.200 | If someone wins a horse racing contest,
02:29:09.760 | they don't give the prize to horse either.
02:29:11.360 | - It's true.
02:29:13.160 | But do you think that we might be able to see
02:29:16.040 | something like that in our lifetimes when AI?
02:29:19.240 | So the first system, I would say,
02:29:21.880 | that makes us think about a Nobel Prize seriously
02:29:25.400 | is AlphaFold, which is making us think about
02:29:28.760 | a Nobel Prize in medicine or physiology,
02:29:31.960 | perhaps for discoveries that are a direct result
02:29:34.080 | of something that's discovered by AlphaFold.
02:29:36.640 | Do you think in physics,
02:29:38.760 | we might be able to see that in our lifetimes?
02:29:41.520 | - I think what's probably gonna happen
02:29:43.520 | is more of a blurring of the distinctions.
02:29:46.880 | So today, if somebody uses a computer
02:29:51.880 | to do a computation that gives them the Nobel Prize,
02:29:54.920 | nobody's gonna dream of giving the prize to the computer.
02:29:57.160 | They're gonna be like, "That was just a tool."
02:29:59.000 | I think for these things, also,
02:30:02.120 | people are just gonna, for a long time,
02:30:04.000 | view the computer as a tool.
02:30:06.080 | But what's gonna change is the ubiquity of machine learning.
02:30:11.320 | I think at some point in my lifetime,
02:30:16.160 | finding a human physicist
02:30:19.880 | who knows nothing about machine learning
02:30:22.240 | is gonna be almost as hard as it is today
02:30:26.760 | finding a human physicist who
02:30:29.160 | says, "Oh, I don't know anything about computers,"
02:30:29.160 | or, "I don't use math."
02:30:30.880 | That would just be a ridiculous concept.
02:30:32.880 | - You see, but the thing is,
02:30:35.400 | there is a magic moment, though, like with AlphaZero,
02:30:40.080 | when the system surprises us in a way
02:30:42.960 | where the best people in the world
02:30:45.640 | truly learn something from the system
02:30:48.960 | in a way where you feel like it's another entity.
02:30:52.480 | Like the way people, the way Magnus Carlsen,
02:30:54.920 | the way certain people are looking at the work of AlphaZero,
02:30:58.080 | it's like it truly is no longer a tool
02:31:03.080 | in the sense that it doesn't feel like a tool.
02:31:06.680 | It feels like some other entity.
02:31:08.920 | So there is a magic difference where you're like,
02:31:12.480 | if an AI system is able to come up with an insight
02:31:17.320 | that surprises everybody in some major way
02:31:22.320 | that's a phase shift in our understanding
02:31:25.960 | of some particular science
02:31:27.760 | or some particular aspect of physics,
02:31:30.040 | I feel like that is no longer a tool.
02:31:32.640 | And then you can start to say
02:31:34.840 | that it perhaps deserves the prize.
02:31:38.720 | So for sure, the more important
02:31:40.680 | and the more fundamental transformation
02:31:43.120 | of the 21st century science is exactly what you're saying,
02:31:46.640 | which is probably everybody
02:31:48.800 | will be doing machine learning to some degree.
02:31:51.560 | Like if you want to be successful
02:31:54.760 | at unlocking the mysteries of science,
02:31:57.560 | you should be doing machine learning.
02:31:58.800 | But it's just exciting to think about
02:32:01.440 | whether there'll be one that comes along
02:32:03.080 | that's super surprising and they'll make us question
02:32:08.240 | who the real inventors are in this world.
02:32:10.360 | - Yeah, yeah.
02:33:12.200 | I think the question isn't if it's gonna happen,
02:32:15.440 | but when.
02:32:16.720 | But it's important, honestly, in my mind,
02:32:19.440 | the time when that happens is also more or less
02:32:22.280 | the same time when we get artificial general intelligence.
02:32:25.600 | And then we have a lot bigger things to worry about
02:32:28.200 | than whether we should get the Nobel Prize or not, right?
02:32:31.680 | Because when you have machines
02:32:35.040 | that can outperform our best scientists
02:32:37.960 | at science, they can probably outperform us
02:32:41.040 | at a lot of other stuff as well,
02:32:43.600 | which can at a minimum make them
02:32:46.440 | incredibly powerful agents in the world.
02:32:49.440 | And I think it's a mistake
02:32:52.760 | to think we only have to start worrying
02:32:54.600 | about loss of control when machines get to AGI
02:32:58.960 | across the board, when they can do everything, all our jobs.
02:33:02.360 | Long before that,
02:33:05.160 | they'll be hugely influential.
02:33:07.920 | We talked at length about how the hacking of our minds
02:33:12.600 | with algorithms trying to get us glued to our screens,
02:33:17.600 | has already had a big impact on society.
02:33:22.400 | That was an incredibly dumb algorithm
02:33:24.120 | in the grand scheme of things, right?
02:33:25.840 | The supervised machine learning,
02:33:27.880 | yet it had huge impact.
02:33:29.560 | So I just don't want us to be lulled
02:33:32.120 | into a false sense of security
02:33:33.320 | and think there won't be any societal impact
02:33:35.600 | until things reach human level,
02:33:37.080 | 'cause it's happening already.
02:33:38.320 | And I was just thinking the other week,
02:33:40.640 | when I see some scaremonger going,
02:33:44.920 | oh, the robots are coming,
02:33:47.120 | the implication is always that they're coming to kill us.
02:33:50.000 | And maybe you should have worried about that
02:33:52.400 | if you were in Nagorno-Karabakh
02:33:54.760 | during the recent war there.
02:33:55.760 | But more seriously,
02:33:57.760 | the robots are coming right now,
02:34:01.480 | but they're mainly not coming to kill us.
02:34:03.200 | They're coming to hack us.
02:34:04.480 | They're coming to hack our minds
02:34:08.240 | into buying things that maybe we didn't need,
02:34:11.320 | to vote for people who may not have
02:34:13.240 | our best interest in mind.
02:34:15.400 | And it's kind of humbling, I think,
02:34:17.640 | actually, as a human being,
02:34:19.640 | to admit that it turns out that our minds
02:34:22.080 | are actually much more hackable than we thought.
02:34:24.800 | And the ultimate insult is that we are actually
02:34:27.080 | getting hacked by the machine learning algorithms
02:34:30.440 | that are in some objective sense,
02:34:31.600 | much dumber than us.
02:34:34.000 | But maybe we shouldn't be so surprised
02:34:35.800 | because how do you feel about the cute puppies?
02:34:40.600 | - Love them.
02:34:41.640 | - So you would probably argue that
02:34:44.400 | in some across the board measure,
02:34:46.200 | you're more intelligent than they are,
02:34:47.720 | but boy, are our cute puppies good at hacking us, right?
02:34:51.160 | They move into our house,
02:34:52.520 | persuade us to feed them and do all these things.
02:34:54.640 | What do they ever do for us?
02:34:56.640 | - Yeah.
02:34:57.480 | - Other than being cute and making us feel good, right?
02:35:00.600 | So if puppies can hack us,
02:35:03.120 | maybe we shouldn't be so surprised
02:35:04.960 | if pretty dumb machine learning algorithms can hack us too.
02:35:09.080 | - Not to speak of cats, which is another level.
02:35:11.760 | And I think we should,
02:35:13.440 | to counter your previous point about there,
02:35:15.680 | let us not think about evil creatures in this world.
02:35:18.080 | We can all agree that cats are as close
02:35:20.520 | to objective evil as we can get.
02:35:23.000 | But that's just me saying that.
02:35:24.440 | Okay, so you--
02:35:25.280 | - Have you seen the cartoon?
02:35:27.360 | I think it's maybe The Onion.
02:35:30.720 | Where this incredibly cute kitten,
02:35:33.720 | and underneath it just says
02:35:34.560 | something about how it
02:35:37.280 | thinks about murder all day.
02:35:38.960 | - Exactly.
02:35:41.560 | That's accurate.
02:35:43.080 | You've mentioned offline that there might be a link
02:35:45.200 | between post-biological AGI and SETI.
02:35:47.960 | So last time we talked,
02:35:51.280 | you've talked about this intuition
02:35:54.920 | that we humans might be quite unique
02:35:59.280 | in our galactic neighborhood.
02:36:02.320 | Perhaps our galaxy,
02:36:03.680 | perhaps the entirety of the observable universe,
02:36:06.360 | we might be the only intelligent civilization here.
02:36:10.680 | And you argue pretty well for that thought.
02:36:17.720 | So I have a few little questions around this.
02:36:21.240 | One, the scientific question.
02:36:24.680 | In which way would you be,
02:36:29.240 | if you were wrong in that intuition,
02:36:33.120 | in which way do you think you would be surprised?
02:36:36.680 | Like, why were you wrong,
02:36:38.520 | if we find out that you ended up being wrong?
02:36:41.600 | Like, in which dimension?
02:36:43.880 | So like, is it because we can't see them?
02:36:48.420 | Is it because the nature of their intelligence
02:36:51.320 | or the nature of their life is totally different
02:36:54.760 | than we can possibly imagine?
02:36:56.760 | Is it because the,
02:37:00.680 | I mean, something about the great filters
02:37:02.640 | and surviving them?
02:37:04.480 | Or maybe because we're being protected from signals?
02:37:08.800 | All those explanations for why we haven't heard
02:37:13.800 | a big, loud, like red light that says we're here.
02:37:20.400 | - Yeah.
02:37:21.720 | So there are actually two separate things there
02:37:23.560 | that I could be wrong about,
02:37:24.720 | two separate claims that I made, right?
02:37:27.620 | One of them is, I made the claim that
02:37:32.240 | I think most civilizations,
02:37:35.500 | when you're going from simple bacteria-like things
02:37:41.800 | to space colonizing civilizations,
02:37:47.840 | they spend only a very, very tiny fraction
02:37:50.840 | of their life being where we are.
02:37:55.200 | That I could be wrong about.
02:37:57.280 | The other one I could be wrong about
02:37:58.740 | is a quite different statement that I think
02:38:00.800 | that actually I'm guessing
02:38:02.520 | that we are the only civilization
02:38:04.680 | in our observable universe
02:38:06.120 | from which light has reached us so far
02:38:08.240 | that's actually gotten far enough to invent telescopes.
02:38:12.320 | So let's talk about maybe both of them in turn
02:38:14.000 | 'cause they really are different.
02:38:14.980 | The first one, if you look at the N equals one,
02:38:19.860 | the data point we have on this planet,
02:38:22.100 | so we spent four and a half billion years
02:38:25.900 | futzing around on this planet with life, right?
02:38:28.260 | And most of it was pretty lame stuff
02:38:32.100 | from an intelligence perspective.
02:38:33.700 | Bacteria, and then
02:38:38.160 | things gradually accelerated, right?
02:38:41.300 | Then the dinosaurs spent over 100 million years
02:38:43.620 | stomping around here without even inventing smartphones.
02:38:46.980 | And then very recently,
02:38:49.780 | we've only spent 400 years going from Newton to us, right?
02:38:55.340 | In terms of technology.
02:38:56.500 | And look what we've done even,
02:38:59.260 | when I was a little kid, there was no internet even.
02:39:02.620 | So I think it's pretty likely for in this case
02:39:07.080 | of this planet, right?
02:39:08.180 | That we're either gonna really get our act together
02:39:12.180 | and start spreading life into space this century,
02:39:15.100 | and doing all sorts of great things,
02:39:16.460 | or we're gonna wipe ourselves out.
02:39:18.060 | It's a little hard.
02:39:19.900 | I could be wrong in the sense that maybe
02:39:23.500 | what happened on this Earth is very atypical.
02:39:25.780 | And for some reason, what's more common on other planets
02:39:28.540 | is that they spend an enormously long time
02:39:31.440 | futzing around with the ham radio and things,
02:39:33.720 | but they just never really take it to the next level
02:39:36.220 | for reasons I haven't understood.
02:39:38.380 | I'm humble and open to that.
02:39:40.200 | But I would bet at least 10 to one
02:39:42.860 | that our situation is more typical,
02:39:45.140 | because the whole thing with Moore's law
02:39:46.780 | and accelerating technology,
02:39:48.220 | it's pretty obvious why it's happening.
02:39:50.180 | Everything that grows exponentially,
02:39:52.940 | we call it an explosion,
02:39:54.120 | whether it's a population explosion or a nuclear explosion,
02:39:56.660 | it's always caused by the same thing.
02:39:58.020 | It's that the next step triggers a step after that.
02:40:01.500 | So today's technology enables tomorrow's technology,
02:40:06.500 | and that enables the next level.
02:40:09.100 | And because the technology's always better,
02:40:13.820 | of course, the steps can come faster and faster.
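Put as a one-line model: if the growth rate of capability is proportional to the capability you already have, you get an exponential, which is the "each step triggers the next" point in equation form. The rate and starting level below are arbitrary illustrative numbers.

```python
# dx/dt = k * x  gives  x(t) = x0 * exp(k * t); a crude numerical check.
import math

x, k, dt, steps = 1.0, 0.5, 0.01, 1000   # integrate for t = steps * dt = 10 time units
for _ in range(steps):
    x += k * x * dt                      # growth proportional to current level
print(x, math.exp(k * steps * dt))       # Euler estimate (~146.6) vs exact e^(k t) (~148.4)
```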
02:40:17.020 | On the other question that I might be wrong about,
02:40:19.200 | that's the much more controversial one, I think.
02:40:22.380 | But before we close out on this thing about,
02:40:24.960 | if the first one, if it's true that most civilizations
02:40:28.360 | spend only a very short amount of their total time
02:40:32.060 | in the stage, say, between inventing
02:40:37.660 | telescopes or mastering electricity
02:40:40.820 | and doing space travel,
02:40:43.420 | if that's actually generally true,
02:40:46.220 | then that should apply also elsewhere out there.
02:40:49.020 | So we should be very, very surprised
02:40:52.940 | if we find some random civilization
02:40:55.540 | and we happen to catch them exactly
02:40:56.980 | in that very, very short stage.
02:40:58.820 | It's much more likely that we find
02:41:00.460 | this planet full of bacteria,
02:41:03.020 | or that we find some civilization
02:41:05.580 | that's already post-biological and has done
02:41:08.740 | some really cool galactic construction projects
02:41:11.860 | in their galaxy.
02:41:13.340 | - Would we be able to recognize them, do you think?
02:41:15.200 | Is it possible that we just can't?
02:41:17.460 | I mean, this post-biological world,
02:41:20.060 | could it be just existing in some other dimension?
02:41:24.140 | Could it be just all a virtual reality game
02:41:26.300 | for them or something, I don't know,
02:41:28.500 | that it changes completely where we won't be able to detect?
02:41:32.900 | - We have to be, honestly, very humble about this.
02:41:35.100 | I think I said earlier,
02:41:37.020 | the number one principle of being a scientist
02:41:39.780 | is you have to be humble and willing to acknowledge
02:41:42.220 | that everything we think, guess, might be totally wrong.
02:41:45.060 | Of course, you can imagine some civilization
02:41:46.940 | where they all decide to become Buddhists
02:41:48.660 | and very inward-looking and just move
02:41:51.060 | into their little virtual reality
02:41:52.380 | and not disturb the flora and fauna around them
02:41:55.100 | and we might not notice them.
02:41:58.100 | But this is a numbers game, right?
02:41:59.960 | If you have millions of civilizations out there
02:42:02.260 | or billions of them,
02:42:03.700 | all it takes is one with a more ambitious mentality
02:42:08.100 | that decides, hey, we are gonna go out
02:42:10.260 | and settle a bunch of other solar systems
02:42:15.260 | and maybe galaxies,
02:42:17.580 | and then it doesn't matter
02:42:18.460 | if they're a bunch of quiet Buddhists.
02:42:19.620 | We're still gonna notice that expansionist one, right?
02:42:23.020 | And it seems like quite the stretch to assume that,
02:42:26.580 | now, we know even in our own galaxy
02:42:28.140 | that there are probably a billion or more planets
02:42:33.260 | that are pretty Earth-like
02:42:34.540 | and many of them were formed over a billion years
02:42:37.860 | before ours, so it had a big head start.
02:42:40.820 | So if you actually assume also that life happens
02:42:45.060 | kind of automatically on an Earth-like planet,
02:42:47.540 | I think it's quite the stretch to then go and say,
02:42:52.260 | okay, so there are another billion civilizations out there
02:42:55.460 | that also have our level of tech
02:42:57.020 | and they all decided to become Buddhists
02:42:59.460 | and not a single one decided to go Hitler on the galaxy
02:43:03.020 | and say, we need to go out and colonize
02:43:05.420 | or not a single one decided for more benevolent reasons
02:43:08.980 | to go out and get more resources.
02:43:10.660 | That seems like a bit of a stretch, frankly.
02:43:13.980 | And this leads into the second thing
02:43:16.700 | you challenged me to be, that I might be wrong about,
02:43:18.700 | how rare or common is life?
02:43:21.180 | So Francis Drake, when he wrote down the Drake equation,
02:43:25.300 | multiplied together a huge number of factors
02:43:27.700 | and said, we don't know any of them,
02:43:29.500 | so we know even less about what you get
02:43:31.660 | when you multiply together the whole product.
02:43:33.940 | Since then, a lot of those factors
02:43:37.420 | have become much better known.
02:43:39.100 | One of his big uncertainties was,
02:43:41.060 | how common is it that a solar system even has a planet?
02:43:44.580 | Well, now we know it's very common.
02:43:46.500 | - Earth-like planets, we know we have better--
02:43:48.540 | - They're a dime a dozen, there are many, many of them,
02:43:50.620 | even in our galaxy.
02:43:52.300 | At the same time, we have, thanks to,
02:43:55.220 | I'm a big supporter of the SETI project and its cousins,
02:43:59.060 | and I think we should keep doing this,
02:44:00.740 | and we've learned a lot.
02:44:02.580 | We've learned that so far,
02:44:04.020 | all we have is still unconvincing hints, nothing more.
02:44:08.260 | And there are certainly many scenarios
02:44:10.540 | where it would be dead obvious.
02:44:12.580 | If there were 100 million other human-like civilizations
02:44:18.260 | in our galaxy, it would not be that hard
02:44:20.380 | to notice some of them with today's technology,
02:44:22.860 | and we haven't.
02:44:23.700 | So what we can say is, well, okay,
02:44:29.640 | we can rule out that there is a human-level civilization
02:44:32.320 | on the moon, and in fact, in many nearby solar systems,
02:44:36.000 | whereas we cannot rule out, of course,
02:44:39.320 | that there is something like Earth sitting in a galaxy
02:44:43.360 | five billion light years away.
02:44:44.860 | But we've ruled out a lot,
02:44:48.200 | and that's already kind of shocking,
02:44:50.240 | given that there are all these planets there.
02:44:52.120 | So where are they?
02:44:53.320 | Where are they all?
02:44:54.160 | That's the classic Fermi paradox.
02:44:56.500 | - Yeah.
02:44:58.840 | So my argument, which might very well be wrong,
02:45:01.120 | it's very simple, really, it just goes like this.
02:45:03.240 | Okay, we have no clue about this.
02:45:04.880 | It could be the probability of getting life
02:45:09.640 | on a random planet, it could be 10 to the minus one,
02:45:13.080 | a priori, or 10 to the minus 10,
02:45:15.400 | 10 to the minus 20, 10 to the minus 30, 10 to the minus 40.
02:45:19.180 | Basically, every order of magnitude
02:45:20.560 | is about equally likely.
02:45:21.920 | When you then do the math,
02:45:24.680 | and ask how close is our nearest neighbor,
02:45:27.460 | it's again equally likely that it's 10 to the 10 meters away,
02:45:30.560 | 10 to the 20 meters away, 10 to the 30 meters away.
02:45:33.480 | We have some nerdy ways of talking about this
02:45:35.680 | with Bayesian statistics and a uniform log prior,
02:45:38.120 | but that's irrelevant.
02:45:39.400 | This is the simple basic argument.
02:45:42.080 | And now comes the data, so we can say,
02:45:43.760 | okay, there are all these orders of magnitude.
02:45:46.880 | 10 to the 26 meters away,
02:45:49.300 | there's the edge of our observable universe.
02:45:52.000 | If it's farther than that,
02:45:52.880 | light hasn't even reached us yet.
02:45:54.900 | If it's less than 10 to the 16 meters away,
02:45:58.080 | well, it's within Earth's solar neighborhood,
02:46:02.360 | closer than the nearest stars.
02:46:03.860 | We can definitely rule that out.
02:46:05.460 | So I think about it like this.
02:46:08.560 | A priori, before we looked with telescopes,
02:46:10.920 | it could be 10 to the 10 meters, 10 to the 20,
02:46:14.360 | 10 to the 30, 10 to the 40, 10 to the 50,
02:46:15.720 | 10 to the blah, blah, blah,
02:46:16.560 | equally likely anywhere here.
02:46:18.080 | And now we've ruled out this chunk.
02:46:22.760 | - Yeah.
02:46:24.060 | Most of it is outside.
02:46:25.460 | - And here is the edge of our observable universe already.
02:46:28.900 | So I'm certainly not saying I don't think
02:46:30.580 | there's any life elsewhere in space.
02:46:32.460 | If space is infinite,
02:46:33.700 | then you're basically 100% guaranteed that there is.
02:46:36.740 | But the probability that there is life,
02:46:39.220 | that the nearest neighbor,
02:46:42.300 | it happens to be in this little region
02:46:43.800 | between where we would have seen it already
02:46:47.100 | and where we will never see it,
02:46:48.740 | is actually significantly less than one, I think.
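A rough back-of-the-envelope version of that argument, with a log-uniform prior on the distance to our nearest neighbor; the prior range and the "we would already have noticed" cutoff are illustrative choices, not measured numbers.

```python
# Log-uniform prior: every order of magnitude of distance equally likely.
prior_lo, prior_hi = 0, 100      # prior support: 10^0 .. 10^100 meters (assumption)
seen_already = 21                # assume we'd have spotted anything within ~10^21 m
observable_edge = 26             # ~10^26 m, edge of the observable universe

band = observable_edge - seen_already            # the "detectable someday" window
p_in_band = band / (prior_hi - prior_lo)
print(f"P(nearest neighbor between 10^{seen_already} and 10^{observable_edge} m) "
      f"= {p_in_band:.2f}")      # 0.05 with these assumed cutoffs
```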
02:46:52.240 | And I think there's a moral lesson from this,
02:46:54.280 | which is really important,
02:46:55.840 | which is to be good stewards of this planet
02:47:00.120 | and this shot we've had.
02:47:01.440 | It can be very dangerous to say,
02:47:03.760 | oh, it's fine if we nuke our planet or ruin the climate
02:47:07.680 | or mess it up with unaligned AI,
02:47:10.280 | because I know there is this nice Star Trek fleet out there.
02:47:15.160 | They're gonna swoop in and take over where we failed.
02:47:18.040 | Just like it wasn't the big deal
02:47:19.840 | that the Easter Islanders wiped themselves out.
02:47:23.040 | That's a dangerous way of lulling yourself
02:47:25.180 | into a false sense of security.
02:47:26.640 | If it's actually the case that it might be up to us
02:47:32.020 | and only us, the whole future of intelligent life
02:47:35.000 | in our observable universe,
02:47:36.360 | then I think it's both,
02:47:40.440 | it really puts a lot of responsibility on our shoulders.
02:47:43.120 | - Inspiring, it's a little bit terrifying,
02:47:45.240 | but it's also inspiring.
02:47:46.480 | - But it's empowering, I think, most of all,
02:47:48.240 | because the biggest problem today is,
02:47:50.240 | I see this even when I teach,
02:47:51.960 | so many people feel that it doesn't matter
02:47:55.440 | what they do or we do, we feel disempowered.
02:47:58.800 | Oh, it makes no difference.
02:48:00.160 | This is about as far from that as you can come.
02:48:05.080 | But we realize that what we do
02:48:06.920 | on our little spinning ball here in our lifetime
02:48:12.220 | could make the difference
02:48:13.880 | for the entire future of life in our universe.
02:48:17.120 | How empowering is that?
02:48:18.720 | - Yeah, survival of consciousness.
02:48:20.280 | I mean, the other, a very similar kind of empowering aspect
02:48:25.280 | of the Drake equation is,
02:48:27.680 | say there is a huge number of intelligent civilizations
02:48:31.120 | that spring up everywhere,
02:48:32.920 | but because of the L in the Drake equation,
02:48:34.800 | which is the lifetime of a civilization,
02:48:38.000 | maybe many of them hit a wall.
02:48:39.880 | And just like you said, it's clear that,
02:48:43.360 | for us, the great filter,
02:48:45.920 | the one possible great filter seems to be coming
02:48:49.040 | in the next 100 years.
02:48:51.240 | So it's also empowering to say,
02:48:53.720 | okay, well, we have a chance to make it through,
02:48:58.720 | I mean, the way great filters work
02:49:00.120 | is they just get most of them.
02:49:02.080 | - Exactly.
02:49:02.920 | Nick Bostrom has articulated this really beautifully too.
02:49:06.080 | Every time yet another search for life on Mars
02:49:09.480 | comes back negative or something, I'm like, yes, yes!
02:49:14.760 | Our odds of us surviving this just got better.
02:49:17.880 | You already made the argument in broad brush there, right?
02:49:21.000 | But just to unpack it, right?
02:49:22.600 | The point is, we already know
02:49:24.560 | there is a crap ton of planets out there that are Earth-like
02:49:29.680 | and we also know that most of them do not seem to have
02:49:33.600 | anything like our kind of life on them.
02:49:35.120 | So what went wrong?
02:49:37.280 | There's clearly at least one step along the evolutionary path,
02:49:39.520 | at least one filter roadblock, in going from no life
02:49:43.840 | to spacefaring life.
02:49:45.640 | And where is it?
02:49:48.200 | Is it in front of us or is it behind us, right?
02:49:50.760 | If there's no filter behind us
02:49:54.080 | and we keep finding all sorts of little mice on Mars
02:49:59.080 | and whatever, right?
02:50:01.920 | That's actually very depressing
02:50:03.120 | because that makes it much more likely
02:50:04.480 | that the filter is in front of us.
02:50:06.280 | And that what actually is going on
02:50:08.080 | is like the ultimate dark joke
02:50:11.120 | that whenever a civilization invents
02:50:14.400 | sufficiently powerful tech,
02:50:15.640 | it just starts the clock, and then after a little while
02:50:18.160 | it goes poof for one reason or other and wipes itself out.
02:50:21.840 | Now wouldn't that be like utterly depressing
02:50:24.320 | if we're actually doomed?
02:50:26.120 | Whereas if it turns out that there is a great filter
02:50:30.800 | early on that for whatever reason
02:50:32.880 | seems to be really hard to get to the stage
02:50:34.840 | of sexually reproducing organisms
02:50:39.120 | or even the first ribosome or whatever, right?
02:50:43.280 | Or maybe you have lots of planets with dinosaurs and cows
02:50:47.120 | but for some reason they tend to get stuck there
02:50:48.840 | and never invent smartphones.
02:50:50.800 | All of those are huge boosts for our own odds
02:50:55.120 | because been there, done that.
02:50:58.800 | It doesn't matter how hard or unlikely it was
02:51:01.680 | that we got past that roadblock because we already did.
02:51:04.760 | And then that makes it likely
02:51:07.520 | that the future is in our own hands.
02:51:09.680 | We're not doomed.
02:51:11.440 | So that's why I think the fact that life is rare
02:51:16.440 | in the universe, it's not just something
02:51:19.000 | that there is some evidence for
02:51:20.560 | but also something we should actually hope for.
02:51:24.560 | - So that's the end, the mortality,
02:51:29.880 | the death of human civilization
02:51:31.520 | that we've been discussing in life,
02:51:33.120 | maybe prospering beyond any kind of great filter.
02:51:36.680 | Do you think about your own death?
02:51:39.400 | Does it make you sad that you may not witness some of the,
02:51:44.400 | you lead a research group on working
02:51:47.880 | some of the biggest questions in the universe actually,
02:51:51.080 | both on the physics and the AI side.
02:51:53.720 | Does it make you sad that you may not be able
02:51:55.560 | to see some of these exciting things come to fruition
02:51:58.800 | that we've been talking about?
02:52:00.640 | - Of course, of course it sucks,
02:52:02.640 | the fact that I'm gonna die.
02:52:04.840 | I remember once when I was much younger,
02:52:07.200 | my dad made this remark that life is fundamentally tragic.
02:52:10.840 | And I'm like, what are you talking about, daddy?
02:52:13.120 | And then many years later, I felt,
02:52:15.640 | now I feel I totally understand what he means.
02:52:17.360 | You know, we grow up, we're little kids
02:52:19.040 | and everything is infinite and it's so cool.
02:52:21.960 | And then suddenly we find out that actually, you know,
02:52:24.760 | you've only got so much time, and
02:52:27.480 | you're gonna get game over at some point.
02:52:30.360 | So of course it's something that's sad.
02:52:35.360 | - Are you afraid?
02:52:37.280 | - No, not in the sense that I think anything terrible
02:52:46.040 | is gonna happen after I die or anything like that.
02:52:48.280 | No, I think it's really gonna be game over,
02:52:51.000 | but it's more that it makes me very acutely aware
02:52:56.000 | of what a wonderful gift this is
02:52:57.960 | that I get to be alive right now.
02:53:00.240 | And it's a steady reminder to just live life to the fullest
02:53:04.720 | and really enjoy it because it is finite, you know.
02:53:08.080 | And I think actually, and we all get the regular reminders
02:53:12.200 | when someone near and dear to us dies,
02:53:14.360 | that one day it's gonna be our turn.
02:53:19.360 | It adds this kind of focus.
02:53:21.560 | I wonder what it would feel like actually
02:53:23.760 | to be an immortal being, if they might even enjoy
02:53:27.040 | some of the wonderful things of life a little bit less,
02:53:29.480 | just because there isn't that--
02:53:33.480 | - Finiteness?
02:53:34.360 | - Yeah.
02:53:35.200 | - Do you think that could be a feature, not a bug,
02:53:38.080 | the fact that we beings are finite?
02:53:42.080 | Maybe there's lessons for engineering
02:53:44.360 | and artificial intelligence systems as well
02:53:47.000 | that are conscious.
02:53:48.440 | I do think it makes, is it possible
02:53:53.440 | that the reason the pistachio ice cream is delicious
02:53:57.000 | is the fact that you're going to die one day?
02:54:00.000 | And you will not have all the pistachio ice cream
02:54:03.800 | that you could eat because of that fact.
02:54:06.240 | - Well, let me say two things.
02:54:07.640 | First of all, it's actually quite profound
02:54:09.720 | what you're saying.
02:54:10.560 | I do think I appreciate the pistachio ice cream a lot more
02:54:12.840 | knowing that I will, there's only a finite number of times
02:54:16.000 | I get to enjoy that.
02:54:17.840 | And I can only remember a finite number of times
02:54:19.960 | in the past.
02:54:20.800 | And moreover, my life is not so long
02:54:25.180 | that it just starts to feel like things
02:54:26.360 | are repeating themselves in general.
02:54:28.160 | It's so new and fresh.
02:54:30.560 | I also think though that death is a little bit overrated
02:54:35.560 | in the sense that it comes from a sort of outdated view
02:54:41.400 | of physics and what life actually is.
02:54:45.640 | Because if you ask, okay, what is it that's going to die
02:54:49.120 | exactly, what am I really?
02:54:52.040 | When I say I feel sad about the idea of myself dying,
02:54:56.000 | am I really sad that this skin cell here is gonna die?
02:54:59.200 | Of course not, 'cause it's gonna die next week anyway
02:55:01.600 | and I'll grow a new one, right?
02:55:04.020 | And it's not any of my cells that I'm associating really
02:55:08.440 | with who I really am, nor is it any of my atoms
02:55:12.400 | or quarks or electrons.
02:55:14.040 | In fact, basically all of my atoms get replaced
02:55:19.400 | on a regular basis, right?
02:55:20.560 | So what is it that's really me
02:55:22.880 | from a more modern physics perspective?
02:55:24.320 | It's the information,
02:55:25.880 | the information processing. That's my memories,
02:55:30.880 | that's my values, my dreams, my passion, my love.
02:55:36.480 | That's what's really fundamentally me.
02:55:43.560 | And frankly, not all of that will die when my body dies.
02:55:48.560 | Like Richard Feynman, for example,
02:55:51.800 | his body died of cancer, you know?
02:55:55.120 | But many of his ideas that he felt made him very him
02:55:59.760 | actually live on.
02:56:01.440 | This is my own little personal tribute
02:56:03.200 | to Richard Feynman, right?
02:56:04.120 | I try to keep a little bit of him alive in myself.
02:56:07.520 | I've even quoted him today, right?
02:56:09.640 | - Yeah, he almost came alive for a brief moment
02:56:11.760 | in this conversation, yeah.
02:56:13.360 | - Yeah, and this honestly gives me some solace.
02:56:17.240 | You know, when I work as a teacher,
02:56:19.360 | I feel if I can actually share a bit of myself
02:56:25.360 | that my students feel is worthy enough to copy and adopt
02:56:30.360 | as a part of things that they know
02:56:32.240 | or they believe or aspire to,
02:56:35.680 | now I live on also a little bit in them, right?
02:56:38.280 | And so being a teacher,
02:56:44.280 | that's something also that contributes
02:56:51.760 | to making me a little teeny bit less mortal, right?
02:56:55.720 | Because at least not all of me is gonna die all at once,
02:56:58.680 | right?
02:56:59.560 | And I find that a beautiful tribute
02:57:01.240 | to people we respected.
02:57:02.920 | If we can remember them and carry in us
02:57:06.960 | the things that we felt was the most awesome about them,
02:57:12.240 | right, then they live on.
02:57:13.480 | And I'm getting a bit emotional over it,
02:57:17.240 | but it's a very beautiful idea you bring up there.
02:57:19.800 | I think we should stop this old-fashioned materialism
02:57:23.160 | and just equate who we are with our quarks and electrons.
02:57:28.160 | There's no scientific basis for that really.
02:57:31.360 | And it's also very uninspiring.
02:57:34.620 | Now, if you look a little bit towards the future, right,
02:57:40.520 | one thing which really sucks about humans dying
02:57:44.180 | is that even though some of their teachings
02:57:46.200 | and memories and stories and ethics and so on
02:57:49.920 | will be copied by those around them, hopefully,
02:57:52.960 | a lot of it can't be copied and just dies with them,
02:57:55.600 | with their brain, and that really sucks.
02:57:57.160 | That's the fundamental reason why we find it so tragic
02:58:00.900 | when someone goes from having all this information there
02:58:03.760 | to it all just being gone, ruined, right?
02:58:07.300 | With more post-biological intelligence,
02:58:11.680 | that's gonna shift a lot, right?
02:58:15.000 | The only reason it's so hard to make a backup of your brain
02:58:18.320 | in its entirety is exactly
02:58:19.600 | because it wasn't built for that, right?
02:58:21.840 | If you have a future machine intelligence,
02:58:25.760 | there's no reason for why it has to die at all
02:58:28.400 | if you want to copy it, whatever,
02:58:32.600 | into some other quark blob, right?
02:58:36.680 | You can copy not just some of it, but all of it, right?
02:58:39.520 | And so in that sense,
02:58:42.820 | you can get immortality because all the information
02:58:45.820 | can be copied out of any individual entity.
02:58:49.180 | And it's not just mortality that will change
02:58:51.500 | if we get more post-biological life.
02:58:54.100 | It's also with that, very much the whole individualism
02:58:59.100 | we have now, right?
02:59:01.220 | The reason that we make such a big difference
02:59:02.980 | between me and you is exactly because
02:59:06.220 | we're a little bit limited in how much we can copy.
02:59:08.100 | Like, I would just love to go like this
02:59:10.460 | and copy your Russian skills,
02:59:13.340 | your Russian speaking skills.
02:59:17.820 | Wouldn't it be awesome?
02:59:18.860 | But I can't.
02:59:20.500 | I have to actually work for years
02:59:22.020 | if I want to get better on it.
02:59:23.940 | But if we were robots--
02:59:27.980 | - Just copy and paste freely, then that completely
02:59:31.860 | washes away the sense of what immortality is.
02:59:35.180 | - And also individuality a little bit, right?
02:59:37.500 | We would start feeling much more,
02:59:39.340 | maybe we would feel much more collaborative with each other
02:59:43.540 | if we can just, hey, I'll give you my Russian,
02:59:45.660 | you can give me your Russian and I'll give you whatever,
02:59:48.740 | and suddenly you can speak Swedish.
02:59:50.220 | Maybe that's a less good trade for you,
02:59:52.060 | but whatever else you want from my brain, right?
02:59:54.660 | And there've been a lot of sci-fi stories
02:59:58.100 | about hive minds and so on where experiences
03:00:02.180 | can be more broadly shared.
03:00:05.540 | And I think, I don't pretend to know
03:00:10.220 | what it would feel like to be a super intelligent machine,
03:00:15.220 | but I'm quite confident that however it feels
03:00:20.460 | about mortality and individuality
03:00:22.460 | will be very, very different from how it is for us.
03:00:25.020 | - Well, for us, mortality and finiteness
03:00:30.540 | seems to be pretty important at this particular moment.
03:00:34.140 | And so all good things must come to an end,
03:00:37.460 | just like this conversation, Max.
03:00:39.180 | - I saw that coming.
03:00:40.660 | - Sorry, this is the world's worst transition.
03:00:44.660 | I could talk to you forever.
03:00:45.820 | It's such a huge honor that you spent time with me.
03:00:49.140 | - Honor is mine.
03:00:50.100 | - Thank you so much for getting me,
03:00:52.860 | essentially to start this podcast
03:00:54.580 | by doing the first conversation,
03:00:55.960 | making me realize I was falling in love
03:00:58.500 | with conversation itself.
03:01:01.140 | And thank you so much for inspiring
03:01:03.220 | so many people in the world with your books,
03:01:05.380 | with your research, with your talking,
03:01:07.740 | and with this ripple effect of friends,
03:01:12.740 | including Elon and everybody else that you inspire.
03:01:15.460 | So thank you so much for talking today.
03:01:18.140 | - Thank you.
03:01:18.980 | I feel so fortunate that you're doing this podcast
03:01:23.620 | and getting so many interesting voices out there
03:01:27.780 | into the ether and not just the five-second sound bites,
03:01:30.940 | but so many of the interviews I've watched you do.
03:01:33.060 | You really let people go in into depth
03:01:36.140 | in a way which we sorely need in this day and age.
03:01:38.380 | And that I got to be number one, I feel super honored.
03:01:41.740 | - Yeah, you started it.
03:01:43.500 | Thank you so much, Max.
03:01:44.660 | Thanks for listening to this conversation with Max Tegmark.
03:01:48.260 | And thank you to our sponsors,
03:01:50.220 | the Jordan Harbinger Show,
03:01:52.540 | Four Sigmatic Mushroom Coffee,
03:01:54.820 | BetterHelp Online Therapy, and ExpressVPN.
03:01:58.940 | So the choices, wisdom, caffeine, sanity,
03:02:03.020 | or privacy.
03:02:04.420 | Choose wisely, my friends.
03:02:05.820 | And if you wish, click the sponsor links below
03:02:08.740 | to get a discount and to support this podcast.
03:02:11.860 | And now let me leave you with some words from Max Tegmark.
03:02:15.100 | If consciousness is the way that information feels
03:02:18.860 | when it's processed in certain ways,
03:02:21.380 | then it must be substrate independent.
03:02:24.220 | It's only the structure of information processing
03:02:26.660 | that matters, not the structure of the matter
03:02:29.100 | doing the information processing.
03:02:31.900 | Thank you for listening and hope to see you next time.
03:02:34.980 | (upbeat music)