
Ray Kurzweil: Singularity, Superintelligence, and Immortality | Lex Fridman Podcast #321


Chapters

0:00 Introduction
1:06 Turing test
14:51 Brain–computer interfaces
26:31 Singularity
32:51 Virtual reality
35:31 Evolution of information processing
41:57 Automation
51:57 Nanotechnology
53:51 Nuclear war
55:57 Uploading minds
63:38 How to think
70:08 Digital afterlife
79:28 Intelligent alien life
82:18 Simulation hypothesis
86:31 Mortality
94:10 Meaning of life


00:00:00.000 | By the time we get to 2045,
00:00:02.800 | we'll be able to multiply our intelligence
00:00:05.320 | many millions fold,
00:00:07.680 | and it's just very hard to imagine what that will be like.
00:00:10.820 | - The following is a conversation with Ray Kurzweil,
00:00:16.840 | author, inventor, and futurist,
00:00:19.480 | who has an optimistic view of our future
00:00:22.280 | as a human civilization,
00:00:24.320 | predicting that exponentially improving technologies
00:00:27.280 | will take us to a point of a singularity,
00:00:29.880 | beyond which super intelligent artificial intelligence
00:00:33.480 | will transform our world in nearly unimaginable ways.
00:00:38.360 | 18 years ago, in the book "The Singularity is Near,"
00:00:41.280 | he predicted that the onset of the singularity
00:00:44.000 | will happen in the year 2045.
00:00:47.360 | He still holds to this prediction and estimate.
00:00:50.800 | In fact, he's working on a new book on this topic
00:00:53.440 | that will hopefully be out next year.
00:00:55.620 | This is the Lex Fridman Podcast.
00:00:58.320 | To support it, please check out our sponsors
00:01:00.360 | in the description.
00:01:01.640 | And now, dear friends, here's Ray Kurzweil.
00:01:05.380 | In your 2005 book titled "The Singularity is Near,"
00:01:10.960 | you predicted that the singularity will happen in 2045.
00:01:15.400 | So now, 18 years later,
00:01:17.640 | do you still estimate that the singularity will happen
00:01:20.880 | in 2045?
00:01:22.480 | And maybe first, what is the singularity,
00:01:24.960 | the technological singularity, and when will it happen?
00:01:27.760 | - Singularity is where computers really change our view
00:01:31.680 | of what's important and change who we are.
00:01:35.860 | But we're getting close to some salient things
00:01:39.580 | that will change who we are.
00:01:42.840 | A key thing is 2029,
00:01:45.700 | when computers will pass the Turing test.
00:01:49.080 | And there's also some controversy
00:01:51.560 | whether the Turing test is valid.
00:01:53.720 | I believe it is.
00:01:56.000 | Most people do believe that,
00:01:57.920 | but there's some controversy about that.
00:01:59.680 | But Stanford got very alarmed at my prediction about 2029.
00:02:04.680 | I made this in 1999 in my book.
00:02:10.520 | - The Age of Spiritual Machines.
00:02:12.120 | - Right.
00:02:12.960 | - And then you repeated the prediction in 2005.
00:02:15.560 | - In 2005.
00:02:16.600 | - Yeah.
00:02:17.520 | - So they held an international conference,
00:02:19.480 | you might have been aware of it,
00:02:20.760 | of AI experts in 1999 to assess this view.
00:02:26.600 | So people gave different predictions and they took a poll.
00:02:30.840 | It was really the first time that AI experts worldwide
00:02:34.240 | were polled on this prediction.
00:02:36.600 | And the average poll was 100 years.
00:02:39.980 | 20% believed it would never happen.
00:02:44.320 | And that was the view in 1999.
00:02:48.120 | 80% believed it would happen,
00:02:50.640 | but not within their lifetimes.
00:02:53.200 | There's been so many advances in AI
00:02:55.840 | that the poll of AI experts has come down over the years.
00:03:01.840 | So a year ago, something called Metaculus,
00:03:05.440 | which you may be aware of,
00:03:07.080 | assesses different types of experts on the future.
00:03:11.560 | They again assessed what AI experts then felt.
00:03:16.440 | And they were saying 2042.
00:03:18.940 | - For the Turing test.
00:03:20.480 | - For the Turing test.
00:03:22.400 | - Yeah, so it's coming down.
00:03:23.600 | - And I was still saying 2029.
00:03:26.360 | A few weeks ago, they again did another poll
00:03:30.240 | and it was 2030.
00:03:31.720 | So AI experts now basically agree with me.
00:03:37.960 | I haven't changed at all, I've stayed with 2029.
00:03:41.240 | And AI experts now agree with me,
00:03:44.560 | but they didn't agree at first.
00:03:46.880 | - So Alan Turing formulated the Turing test and--
00:03:50.980 | - Right, now what he said was very little about it.
00:03:54.480 | I mean, the 1950 paper
00:03:55.920 | where he had articulated the Turing test,
00:03:58.060 | there's like a few lines that talk about the Turing test.
00:04:04.440 | And it really wasn't very clear how to administer it.
00:04:11.840 | And he said if they did it in like 15 minutes,
00:04:16.520 | that would be sufficient,
00:04:17.680 | which I don't really think is the case.
00:04:20.600 | These large language models now,
00:04:22.940 | some people are convinced by it already.
00:04:25.540 | I mean, you can talk to it and have a conversation with it.
00:04:28.440 | You can actually talk to it for hours.
00:04:30.340 | So it requires a little more depth.
00:04:35.360 | There's some problems with large language models,
00:04:38.100 | which we can talk about.
00:04:39.600 | But some people are convinced by the Turing test.
00:04:46.460 | Now, if somebody passes the Turing test,
00:04:50.140 | what are the implications of that?
00:04:52.160 | Does that mean that they're sentient,
00:04:53.720 | that they're conscious or not?
00:04:56.000 | It's not necessarily clear what the implications are.
00:05:00.880 | Anyway, I believe 2029, that's six, seven years from now,
00:05:05.880 | we'll have something that passes the Turing test
00:05:10.360 | and a valid Turing test,
00:05:12.480 | meaning it goes for hours, not just a few minutes.
00:05:15.320 | - Can you speak to that a little bit?
00:05:16.600 | What is your formulation of the Turing test?
00:05:21.140 | You've proposed a very difficult version
00:05:23.180 | of the Turing test, so what does that look like?
00:05:25.420 | - Basically, it's just to assess it over several hours
00:05:28.580 | and also have a human judge that's fairly sophisticated
00:05:35.780 | on what computers can do and can't do.
00:05:39.220 | If you take somebody who's not that sophisticated
00:05:43.820 | or even an average engineer,
00:05:47.260 | they may not really assess various aspects of it.
00:05:52.100 | - So you really want the human to challenge the system.
00:05:55.720 | - Exactly, exactly.
00:05:57.100 | - On its ability to do things
00:05:58.540 | like common sense reasoning, perhaps.
00:06:00.820 | - That's actually a key problem
00:06:03.100 | with large language models.
00:06:04.700 | They don't do these kinds of tests
00:06:10.180 | that would involve assessing chains of reasoning.
00:06:15.060 | But you can lose track of that.
00:06:19.420 | If you talk to them, they actually can talk to you
00:06:21.460 | pretty well and you can be convinced by it.
00:06:24.860 | But it's somebody that would really convince you
00:06:27.380 | that it's a human, whatever that takes.
00:06:32.180 | Maybe it would take days or weeks,
00:06:34.800 | but it would really convince you that it's human.
00:06:39.560 | Large language models can appear that way.
00:06:44.560 | You can read conversations and they appear pretty good.
00:06:49.760 | There are some problems with it.
00:06:52.280 | It doesn't do math very well.
00:06:55.000 | You can ask, "How many legs do 10 elephants have?"
00:06:58.200 | And they'll tell you, "Well, okay, each elephant
00:07:00.600 | "has four legs and it's 10 elephants, so it's 40 legs."
00:07:03.720 | And you go, "Okay, that's pretty good.
00:07:05.800 | "How many legs do 11 elephants have?"
00:07:07.960 | And they don't seem to understand the question.
00:07:11.520 | - Do all humans understand that question?
00:07:14.160 | - No, that's the key thing.
00:07:15.880 | I mean, how advanced a human do you want it to be?
00:07:19.440 | But we do expect a human to be able
00:07:21.840 | to do multi-chain reasoning, to be able to take a few facts
00:07:26.320 | and put them together, not perfectly.
00:07:29.840 | And we see that in a lot of polls
00:07:32.800 | that people don't do that perfectly at all.
00:07:35.520 | But, so it's not very well-defined,
00:07:40.520 | but it's something where it really would convince you
00:07:44.280 | that it's a human.
00:07:45.600 | - Is your intuition that large language models
00:07:48.840 | will not be solely the kind of system
00:07:52.320 | that passes the Turing test in 2029?
00:07:55.560 | Do we need something else?
00:07:56.760 | - No, I think it will be a large language model,
00:07:58.680 | but they have to go beyond what they're doing now.
00:08:01.480 | I think we're getting there.
00:08:05.360 | And another key issue is if somebody actually passes
00:08:10.000 | the Turing test validly, I would believe they're conscious.
00:08:13.600 | And then not everybody would say that.
00:08:15.000 | So, okay, we can pass the Turing test,
00:08:17.400 | but we don't really believe that it's conscious.
00:08:20.040 | That's a whole nother issue.
00:08:21.480 | But if it really passes the Turing test,
00:08:24.880 | I would believe that it's conscious.
00:08:26.680 | But I don't believe that of large language models today.
00:08:32.760 | - If it appears to be conscious,
00:08:35.520 | that's as good as being conscious,
00:08:37.480 | at least for you, in some sense.
00:08:40.720 | - I mean, consciousness is not something that's scientific.
00:08:45.300 | I mean, I believe you're conscious,
00:08:48.860 | but it's really just a belief,
00:08:51.120 | and we believe that about other humans
00:08:52.840 | that at least appear to be conscious.
00:08:55.500 | When you go outside of shared human assumption,
00:09:01.720 | like are animals conscious?
00:09:03.640 | Some people believe they're not conscious.
00:09:06.200 | Some people believe they are conscious.
00:09:08.680 | And would a machine that acts just like a human
00:09:13.120 | be conscious?
00:09:14.520 | I mean, I believe it would be,
00:09:16.200 | but that's really a philosophical belief.
00:09:20.720 | It's not, you can't prove it.
00:09:22.700 | I can't take an entity and prove that it's conscious.
00:09:25.480 | There's nothing that you can do that would indicate that.
00:09:30.360 | - It's like saying a piece of art is beautiful.
00:09:32.780 | You can say it.
00:09:35.000 | Multiple people can experience a piece of art is beautiful,
00:09:38.200 | but you can't prove it.
00:09:41.320 | - But it's also an extremely important issue.
00:09:44.840 | I mean, imagine if you had something
00:09:47.040 | where nobody's conscious.
00:09:49.140 | The world may as well not exist.
00:09:58.000 | Some people, like say Marvin Minsky,
00:10:00.040 | said, well, consciousness is not logical.
00:10:05.920 | It's not scientific, and therefore we should dismiss it.
00:10:08.400 | And any talk about consciousness is just not to be believed.
00:10:13.400 | But when he actually engaged with somebody who was conscious,
00:10:19.520 | he actually acted as if they were conscious.
00:10:22.600 | He didn't ignore that.
00:10:24.240 | - He acted as if consciousness does matter.
00:10:26.880 | - Exactly, whereas he said it didn't matter.
00:10:30.480 | - Well, that's Marvin Minsky.
00:10:31.800 | - Yeah.
00:10:32.640 | - He's full of contradictions.
00:10:34.040 | - But that's true of a lot of people as well.
00:10:37.640 | - But to you, consciousness matters.
00:10:39.640 | - But to me, it's very important,
00:10:42.160 | but I would say it's not a scientific issue.
00:10:45.640 | It's a philosophical issue.
00:10:49.280 | And people have different views.
00:10:50.760 | Some people believe that anything
00:10:52.800 | that makes a decision is conscious.
00:10:54.520 | So your light switch is conscious.
00:10:56.800 | Its level of consciousness is low.
00:10:59.400 | It's not very interesting, but that's a consciousness.
00:11:03.400 | And anything, so a computer that makes
00:11:07.240 | a more interesting decision, still not at human levels,
00:11:10.440 | but it's also conscious and at a higher level
00:11:12.560 | than your light switch.
00:11:13.720 | So that's one view.
00:11:16.000 | There's many different views of what consciousness is.
00:11:20.120 | - So if a system passes the Turing test,
00:11:24.600 | it's not scientific, but in issues of philosophy,
00:11:29.600 | things like ethics start to enter the picture.
00:11:32.640 | Do you think there would be, we would start contending
00:11:37.360 | as a human species about the ethics
00:11:41.040 | of turning off such a machine?
00:11:42.860 | - Yeah, I mean, that's definitely come up.
00:11:46.480 | Hasn't come up in reality yet.
00:11:49.640 | - Yet.
00:11:50.560 | - But I'm talking about 2029.
00:11:52.400 | It's not that many years from now.
00:11:54.180 | So what are our obligations to it?
00:11:58.480 | It has a different, I mean, a computer that's conscious
00:12:03.480 | has a little bit different connotations than a human.
00:12:08.480 | We have a continuous consciousness.
00:12:15.580 | We're in an entity that does not last forever.
00:12:22.080 | Now, actually, a significant portion of humans still exist
00:12:27.080 | and are therefore still conscious,
00:12:29.240 | but anybody who is over a certain age doesn't exist anymore.
00:12:36.720 | That wouldn't be true of a computer program.
00:12:40.320 | You could completely turn it off
00:12:42.280 | and a copy of it could be stored and you could recreate it.
00:12:46.120 | And so it has a different type of validity.
00:12:51.160 | You could actually take it back in time.
00:12:52.880 | You could eliminate its memory and have it go over again.
00:12:55.800 | I mean, it has a different kind of connotation
00:12:59.760 | than humans do.
00:13:01.760 | - Well, perhaps it can do the same thing with humans.
00:13:04.360 | It's just that we don't know how to do that yet.
00:13:06.840 | It's possible that we figure out all of these things
00:13:09.360 | on the machine first,
00:13:10.740 | but that doesn't mean the machine isn't conscious.
00:13:15.440 | - I mean, if you look at the way people react,
00:13:17.600 | say C-3PO or other machines that are conscious in movies,
00:13:22.600 | they don't actually present how it's conscious,
00:13:26.720 | but we see that they are a machine
00:13:30.080 | and people will believe that they are conscious
00:13:33.240 | and they'll actually worry about it
00:13:34.600 | if they get into trouble and so on.
00:13:36.480 | - So 2029 is going to be the first year
00:13:40.820 | when a major thing happens.
00:13:43.440 | - Right.
00:13:44.280 | - And that will shake our civilization
00:13:46.480 | to start to consider the role of AI in this world.
00:13:50.240 | - Yes and no.
00:13:51.080 | I mean, this one guy at Google claimed
00:13:54.520 | that the machine was conscious.
00:13:58.400 | - But that's just one person.
00:14:00.120 | - Right.
00:14:00.960 | - When it starts to happen to scale.
00:14:03.040 | - Well, that's exactly right
00:14:04.520 | because most people have not taken that position.
00:14:07.720 | I don't take that position.
00:14:08.900 | I mean, I've used different things.
00:14:14.380 | I've seen people like this
00:14:17.220 | and they don't appear to me to be conscious.
00:14:20.500 | As we eliminate various problems
00:14:22.820 | of these large language models,
00:14:25.840 | more and more people will accept that they're conscious.
00:14:30.460 | So when we get to 2029,
00:14:32.140 | I think a large fraction of people
00:14:36.300 | will believe that they're conscious.
00:14:38.100 | So it's not gonna happen all at once.
00:14:42.420 | I believe it will actually happen gradually
00:14:44.380 | and it's already started to happen.
00:14:46.220 | - And so that takes us one step closer to the singularity.
00:14:52.300 | - Another step then is in the 2030s
00:14:55.540 | when we can actually connect our neocortex,
00:14:59.820 | which is where we do our thinking, to computers.
00:15:04.820 | And I mean, just as this actually gains a lot
00:15:09.300 | by being connected to computers
00:15:12.220 | that will amplify its abilities.
00:15:15.340 | I mean, if this did not have any connection,
00:15:17.380 | it would be pretty stupid.
00:15:19.620 | It could not answer any of your questions.
00:15:21.860 | - If you're just listening to this, by the way,
00:15:24.420 | Ray's holding up the all-powerful smartphone.
00:15:29.420 | - So we're gonna do that directly from our brains.
00:15:32.500 | I mean, these are pretty good.
00:15:35.060 | These already have amplified our intelligence.
00:15:37.740 | I'm already much smarter than I would otherwise be
00:15:40.020 | if I didn't have this.
00:15:42.280 | 'Cause I remember my first book,
00:15:44.160 | "The Age of Intelligent Machines,"
00:15:45.960 | there was no way to get information from computers.
00:15:52.040 | I actually would go to a library, find a book,
00:15:55.360 | find the page that had an information I wanted,
00:15:58.400 | and I'd go to the copier,
00:15:59.920 | and my most significant information tool
00:16:04.320 | was a roll of quarters where I could feed the copier.
00:16:08.440 | So we're already greatly advanced that we have these things.
00:16:13.280 | There's a few problems with it.
00:16:15.440 | First of all, I constantly put it down,
00:16:17.280 | and I don't remember where I put it.
00:16:19.680 | I've actually never lost it, but you have to find it,
00:16:24.680 | and then you have to turn it on.
00:16:26.080 | So there's a certain amount of steps.
00:16:28.160 | It would actually be quite useful
00:16:30.100 | if someone would just listen to your conversation
00:16:33.440 | and say, "Oh, that's so-and-so actress,"
00:16:38.440 | and tell you what you're talking about.
00:16:41.120 | - So going from active to passive,
00:16:43.120 | where it just permeates your whole life.
00:16:46.200 | - Yeah, exactly.
00:16:47.240 | - The way your brain does when you're awake.
00:16:49.520 | Your brain is always there.
00:16:51.200 | - Right.
00:16:52.020 | Now, that's something that could actually
00:16:53.720 | just about be done today,
00:16:55.800 | where you would listen to your conversation,
00:16:57.360 | understand what you're saying,
00:16:58.560 | understand what you're missing,
00:17:01.800 | and give you that information.
00:17:03.560 | But another step is to actually go inside your brain.
00:17:07.280 | And there are some prototypes
00:17:12.700 | where you can connect your brain.
00:17:15.240 | They actually don't have the amount of bandwidth
00:17:17.800 | that we need.
00:17:19.120 | They can work, but they work fairly slowly.
00:17:21.920 | So if it actually would connect to your neocortex,
00:17:26.100 | and the neocortex, which I describe
00:17:30.120 | in "How to Create a Mind,"
00:17:31.800 | the neocortex is actually,
00:17:34.800 | it has different levels.
00:17:38.200 | And as you go up the levels,
00:17:39.960 | it's kind of like a pyramid.
00:17:41.800 | The top level is fairly small.
00:17:44.340 | And that's the level where you wanna connect
00:17:46.520 | these brain extenders.
00:17:50.120 | So I believe that will happen in the 2030s.
00:17:58.120 | So just the way this is greatly amplified
00:18:01.600 | by being connected to the cloud,
00:18:03.520 | we can connect our own brain to the cloud,
00:18:07.480 | and just do what we can do by using this machine.
00:18:12.480 | - Do you think it would look like
00:18:15.720 | the brain-computer interface of Neuralink?
00:18:18.960 | So would it be--
00:18:19.800 | - Well, Neuralink's an attempt to do that.
00:18:22.480 | It doesn't have the bandwidth that we need.
00:18:24.920 | - Yet, right?
00:18:27.640 | - Right, but I think,
00:18:29.240 | I mean, they're gonna get permission for this,
00:18:31.980 | because there are a lot of people who absolutely need it,
00:18:34.960 | because they can't communicate.
00:18:36.680 | I know a couple of people like that,
00:18:38.400 | who have ideas, and they cannot move their muscles,
00:18:43.400 | and so on, they can't communicate.
00:18:45.820 | So for them, this would be very valuable.
00:18:51.140 | But we could all use it.
00:18:54.840 | Basically, it'd be,
00:18:56.600 | turn this into something that would be like we have a phone,
00:19:02.520 | but it would be in our minds,
00:19:05.120 | it would be kind of instantaneous.
00:19:07.360 | - And maybe communication between two people
00:19:09.440 | would not require this low-bandwidth mechanism of language.
00:19:14.080 | - Yes, exactly. - Of spoken word.
00:19:15.960 | - We don't know what that would be,
00:19:17.280 | although we do know that computers can share information
00:19:22.280 | like language instantly.
00:19:24.640 | They can share many, many books in a second,
00:19:28.840 | so we could do that as well.
00:19:31.160 | If you look at what our brain does,
00:19:34.200 | it actually can manipulate different parameters.
00:19:39.060 | So we talk about these large language models.
00:19:43.580 | I mean, I had written that
00:19:51.520 | it requires a certain amount of information
00:19:55.000 | in order to be effective,
00:19:57.620 | and that we would not see AI really being effective
00:20:01.920 | until it got to that level.
00:20:03.500 | And we had large language models
00:20:06.400 | that were like 10 billion bytes, didn't work very well.
00:20:09.600 | They finally got to 100 billion bytes,
00:20:11.680 | and now they work fairly well,
00:20:13.120 | and now we're going to a trillion bytes.
00:20:16.260 | If you say LaMDA has 100 billion bytes,
00:20:22.220 | what does that mean?
00:20:23.500 | Well, what if you had something that had one byte,
00:20:26.740 | one parameter?
00:20:28.900 | Maybe you want to tell whether or not
00:20:30.500 | something's an elephant or not,
00:20:33.940 | and so you put in something that would detect its trunk.
00:20:37.660 | If it has a trunk, it's an elephant.
00:20:39.140 | If it doesn't have a trunk, it's not an elephant.
00:20:41.700 | That would work fairly well.
00:20:44.420 | There's a few problems with it.
00:20:46.320 | Really wouldn't be able to tell what a trunk is,
00:20:49.700 | but anyway.
00:20:50.620 | - And maybe other things other than elephants have trunks.
00:20:54.140 | You might get really confused.
00:20:55.580 | - Yeah, exactly.
00:20:56.940 | - I'm not sure which animals have trunks,
00:20:58.820 | but you know, plus how do you define a trunk?
00:21:02.380 | But yeah, that's one parameter.
00:21:03.940 | You can do okay.
00:21:06.380 | - So these things have 100 billion parameters,
00:21:08.740 | so they're able to deal with very complex issues.
00:21:12.200 | - All kinds of trunks.
00:21:14.000 | - Human beings actually have a little bit more than that,
00:21:16.220 | but they're getting to the point
00:21:17.940 | where they can emulate humans.
00:21:20.860 | If we were able to connect this to our neocortex,
00:21:25.860 | we would basically add more of these abilities
00:21:32.580 | to make distinctions,
00:21:35.400 | and it could ultimately be much smarter
00:21:37.620 | and also be attached to information
00:21:39.680 | that we feel is reliable.
00:21:42.240 | So that's where we're headed.
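
The contrast between a one-parameter trunk detector and a model with 100 billion parameters can be made concrete with a toy sketch like the one below; the features and weights are invented for illustration and do not come from any real model.

```python
import math

# Toy contrast between the one-parameter rule described above and a
# many-parameter model. Feature names are made up for the example.

def one_parameter_detector(has_trunk: bool) -> bool:
    # The single "parameter" is the rule: trunk implies elephant.
    return has_trunk

def many_parameter_detector(features: dict, weights: dict) -> float:
    """Logistic score over many weighted features; more parameters, finer distinctions."""
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)
```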
00:21:45.260 | - So you think that there will be a merger in the '30s,
00:21:49.100 | an increasing amount of merging
00:21:50.860 | between the human brain and the AI brain.
00:21:55.860 | - Exactly.
00:21:57.720 | And the AI brain is really an emulation of human beings.
00:22:02.360 | I mean, that's why we're creating them,
00:22:04.580 | because human beings act the same way,
00:22:07.260 | and this is basically to amplify them.
00:22:09.660 | I mean, this amplifies our brain.
00:22:11.680 | It's a little bit clumsy to interact with,
00:22:15.620 | but it definitely, it's way beyond
00:22:18.980 | what we had 15 years ago.
00:22:21.940 | - But the implementation becomes different,
00:22:23.580 | just like a bird versus the airplane.
00:22:25.760 | Even though the AI brain is an emulation,
00:22:30.660 | it starts adding features we might not otherwise have,
00:22:34.380 | like ability to consume a huge amount
00:22:36.300 | of information quickly,
00:22:38.580 | like look up thousands of Wikipedia articles in one take.
00:22:43.100 | - Exactly.
00:22:44.260 | And we can get, for example,
00:22:46.060 | to issues like simulated biology,
00:22:48.140 | where it can simulate many different things at once.
00:22:53.140 | We already had one example of simulated biology,
00:22:59.620 | which is the Moderna vaccine,
00:23:01.500 | and that's gonna be now the way
00:23:07.100 | in which we create medications.
00:23:11.140 | But they were able to simulate what each example
00:23:15.020 | of an mRNA would do to a human being,
00:23:17.780 | and they were able to simulate that quite reliably.
00:23:21.380 | And we actually simulated billions
00:23:23.980 | of different mRNA sequences,
00:23:27.060 | and they found the ones that were the best,
00:23:29.020 | and they created the vaccine.
00:23:31.060 | And they did, and talk about doing it quickly,
00:23:34.100 | they did that in two days.
00:23:36.280 | Now, how long would a human being take
00:23:38.060 | to simulate billions of different mRNA sequences?
00:23:41.500 | I don't know that we could do it at all,
00:23:42.820 | but it would take many years.
00:23:45.780 | They did it in two days.
00:23:47.320 | And one of the reasons that people didn't like vaccines
00:23:52.820 | is because it was done too quickly,
00:23:55.460 | and it was done too fast.
00:23:57.000 | And they actually included the time it took
00:24:00.420 | to test it out, which was 10 months.
00:24:02.900 | So they figured, okay, it took 10 months to create this.
00:24:06.300 | Actually, it took us two days.
00:24:08.100 | And we also will be able to ultimately do the tests
00:24:11.860 | in a few days as well.
00:24:14.180 | - Oh, 'cause we can simulate how the body will respond to it.
00:24:16.620 | - Yeah. - More and more.
00:24:17.460 | - That's a little bit more complicated,
00:24:19.140 | 'cause the body has a lot of different elements,
00:24:22.940 | and we have to simulate all of that.
00:24:25.380 | But that's coming as well.
00:24:27.520 | So ultimately, we could create it in a few days,
00:24:30.240 | and then test it in a few days, and it would be done.
00:24:34.020 | And we can do that with every type of medical insufficiency
00:24:38.660 | that we have.
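
The vaccine story describes a generate-simulate-select loop: propose a huge number of candidate sequences, score each in simulation, and keep the best. The sketch below only illustrates that pattern; `simulate_immune_response` is a hypothetical placeholder, not Moderna's actual pipeline.

```python
import random

BASES = "ACGU"  # RNA nucleotides

def random_mrna(length: int = 30) -> str:
    """Generate one candidate sequence (toy scale; real sequences are far longer)."""
    return "".join(random.choice(BASES) for _ in range(length))

def screen(simulate_immune_response, n_candidates: int = 100_000, top_k: int = 5):
    """Score candidates with the supplied simulator and return the top_k sequences."""
    scored = ((simulate_immune_response(seq), seq)
              for seq in (random_mrna() for _ in range(n_candidates)))
    return sorted(scored, reverse=True)[:top_k]
```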
00:24:40.260 | - So curing all diseases.
00:24:42.700 | - Yes, yeah.
00:24:43.900 | - Improving certain functions of the body,
00:24:47.620 | supplements, drugs, for recreation, for health,
00:24:53.900 | for performance, for productivity, all that kind of stuff.
00:24:56.420 | - Well, that's where we're headed.
00:24:58.060 | 'Cause I mean, right now, we have a very inefficient way
00:25:00.620 | of creating these new medications.
00:25:03.500 | But we've already shown it.
00:25:06.140 | And the Moderna vaccine is actually the best
00:25:08.420 | of the vaccines we've had.
00:25:11.400 | And it literally took two days to create.
00:25:14.860 | And we'll get to the point where we can test it out
00:25:18.300 | also quickly.
00:25:20.140 | - Are you impressed by AlphaFold
00:25:22.260 | and the solution to the protein folding,
00:25:25.780 | which essentially is simulating, modeling
00:25:30.060 | this primitive building block of life, which is a protein,
00:25:34.340 | and its 3D shape?
00:25:36.100 | - It's pretty remarkable that they can actually predict
00:25:39.060 | what the 3D shape of these things are.
00:25:42.020 | But they did it with the same type of neural net,
00:25:46.180 | the one, for example, that plays Go.
00:25:51.180 | - So it's all the same.
00:25:52.740 | - It's all the same.
00:25:53.580 | - All the same approaches.
00:25:54.420 | - They took that same thing and just changed the rules
00:25:57.100 | to chess.
00:25:58.900 | And within a couple of days, it now played a master level
00:26:03.220 | of chess greater than any human being.
00:26:05.820 | And the same thing then worked for AlphaFold,
00:26:12.060 | which no human had done.
00:26:14.780 | I mean, human beings could do, the best humans
00:26:17.660 | could maybe do 15, 20% of figuring out
00:26:22.660 | what the shape would be.
00:26:25.820 | And after a few takes, it ultimately did just about 100%.
00:26:30.820 | - Do you still think the singularity will happen in 2045?
00:26:35.580 | And what does that look like?
00:26:39.020 | - You know, once we can amplify our brain with computers
00:26:45.100 | directly, which will happen in the 2030s,
00:26:48.100 | that's gonna keep growing.
00:26:49.780 | That's another whole theme, which is the exponential growth
00:26:52.780 | of computing power.
00:26:54.940 | - Yeah, so looking at price performance of computation
00:26:57.500 | from 1939 to 2021.
00:26:59.780 | - Right, so that starts with the very first computer
00:27:02.940 | actually created by a German during World War II.
00:27:05.620 | You might have thought that that might be significant,
00:27:09.420 | but actually the Germans didn't think computers
00:27:12.860 | were significant, and they completely rejected it.
00:27:16.660 | The second one is also the Zuse 2.
00:27:20.340 | - And by the way, we're looking at a plot
00:27:22.220 | with the X-axis being the year from 1935 to 2025,
00:27:27.220 | and on the Y-axis in log scale is computation per second
00:27:32.340 | per constant dollar, so dollar normalized to inflation.
00:27:36.840 | And it's growing linearly on the log scale,
00:27:40.220 | which means it's growing exponentially.
00:27:41.900 | - The third one was the British computer,
00:27:44.540 | which the Allies did take very seriously,
00:27:47.740 | and it cracked the German code and enabled the British
00:27:52.740 | to win the Battle of Britain, which otherwise
00:27:56.340 | absolutely would not have happened if they hadn't
00:27:58.300 | cracked the code using that computer.
00:28:00.780 | But that's an exponential graph, so a straight line
00:28:04.660 | on that graph is exponential growth.
00:28:07.300 | And you see 80 years of exponential growth.
00:28:11.620 | And I would say about every five years,
00:28:15.220 | and this happened shortly before the pandemic,
00:28:18.280 | people saying, well, they call it Moore's Law,
00:28:20.660 | which is not correct, because that's not all Intel.
00:28:25.540 | In fact, this started decades before Intel was even created.
00:28:29.700 | It wasn't with transistors formed into a grid.
00:28:34.140 | - So it's not just transistor count or transistor size.
00:28:37.260 | - Right, this started with relays,
00:28:40.740 | then went to vacuum tubes, then went
00:28:43.580 | to individual transistors, and then to integrated circuits.
00:28:48.580 | And integrated circuits actually starts
00:28:54.060 | like in the middle of this graph.
00:28:55.700 | And it has nothing to do with Intel.
00:28:58.780 | Intel actually was a key part of this,
00:29:02.940 | but a few years ago, they stopped making the fastest chips.
00:29:07.200 | But if you take the fastest chip of any technology
00:29:12.840 | in that year, you get this kind of graph.
00:29:16.660 | And it's definitely continuing for 80 years.
00:29:19.820 | - So you don't think Moore's Law, broadly defined, is dead?
00:29:23.900 | It's been declared dead multiple times
00:29:27.140 | throughout this process. - Right.
00:29:29.300 | I don't like the term Moore's Law,
00:29:31.420 | because it has nothing to do with Moore or with Intel.
00:29:34.780 | But yes, the exponential growth of computing is continuing
00:29:41.620 | and has never stopped. - From various sources.
00:29:44.000 | - I mean, it went through World War II,
00:29:45.880 | it went through global recessions.
00:29:49.160 | It's just continuing.
00:29:50.700 | And if you continue that out, along with software gains,
00:29:58.120 | which is a whole 'nother issue,
00:29:59.740 | and they really multiply, whatever you get
00:30:03.600 | from software gains, you multiply by the computer gains,
00:30:07.960 | you get faster and faster speed.
00:30:10.960 | These are actually the fastest computer models
00:30:14.360 | that have been created.
00:30:15.900 | And that actually expands roughly twice a year.
00:30:19.500 | Like every six months, it expands by two.
00:30:22.880 | - So we're looking at a plot from 2010 to 2022.
00:30:27.880 | On the x-axis is the publication date of the model,
00:30:31.440 | and perhaps sometimes the actual paper associated with it.
00:30:34.260 | And on the y-axis is training compute in FLOPs.
00:30:40.300 | And so basically this is looking at the increase
00:30:43.900 | in the, not transistors, but the computational power
00:30:48.900 | of neural networks.
00:30:51.540 | - Yes, the computational power that created these models.
00:30:55.160 | And that's doubled every six months.
00:30:57.620 | - Which is even faster than the transistor doubling.
00:31:00.420 | - Yeah.
00:31:01.260 | Actually, since it goes faster than the amount of cost,
00:31:06.920 | this has actually become a greater investment
00:31:10.880 | to create these.
00:31:12.300 | But at any rate, by the time we get to 2045,
00:31:16.660 | we'll be able to multiply our intelligence
00:31:19.160 | many millions fold.
00:31:21.520 | And it's just very hard to imagine what that will be like.
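
As a back-of-the-envelope check on what a fixed doubling time implies, the arithmetic below simply extrapolates the stated cadence; it is an illustration of compounding, not a forecast of its own.

```python
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Multiplier after `years` if capability doubles every `doubling_time_years`."""
    return 2.0 ** (years / doubling_time_years)

# Example: training compute doubling every six months for one decade.
print(f"{growth_factor(10, 0.5):,.0f}x")  # 1,048,576x, i.e. 2**20
```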
00:31:25.080 | - And that's the singularity, what we can't even imagine.
00:31:28.440 | - Right, that's why we call it the singularity.
00:31:30.520 | 'Cause the singularity in physics,
00:31:32.800 | something gets sucked into its singularity
00:31:35.160 | and you can't tell what's going on in there
00:31:37.800 | because no information can get out of it.
00:31:40.480 | There's various problems with that,
00:31:42.160 | but that's the idea.
00:31:44.120 | It's too much beyond what we can imagine.
00:31:49.000 | - Do you think it's possible we don't notice
00:31:51.160 | that what the singularity actually feels like
00:31:55.320 | is we just live through it
00:31:58.280 | with exponentially increasing cognitive capabilities
00:32:05.460 | and we almost, because everything's moving so quickly,
00:32:08.460 | aren't really able to introspect that our life has changed?
00:32:13.700 | - Yeah, but I mean, we will have that much greater capacity
00:32:17.460 | to understand things, so we should be able to look back.
00:32:20.740 | - Looking at history, understand history.
00:32:23.220 | - But we will need people, basically like you and me,
00:32:26.860 | to actually think about these things.
00:32:29.260 | - But we might be distracted by all the other sources
00:32:32.260 | of entertainment and fun because the exponential power
00:32:37.260 | of intellect is growing, but also the--
00:32:40.580 | - There'll be a lot of fun.
00:32:41.940 | - The amount of ways you can have--
00:32:46.380 | - I mean, we already have a lot of fun
00:32:47.700 | with computer games and so on
00:32:49.060 | that are really quite remarkable.
00:32:50.700 | - What do you think about the digital world,
00:32:54.980 | the metaverse, virtual reality?
00:32:57.860 | Will that have a component in this
00:32:59.340 | or will most of our advancement be in physical reality?
00:33:02.060 | - Well, that's a little bit like Second Life,
00:33:04.700 | although the Second Life actually didn't work very well
00:33:07.100 | because it couldn't actually handle too many people.
00:33:09.580 | And I don't think the metaverse has come into being.
00:33:14.420 | I think there will be something like that.
00:33:16.860 | It won't necessarily be from that one company.
00:33:21.300 | I mean, there's gonna be competitors,
00:33:23.700 | but yes, we're gonna live increasingly online,
00:33:26.100 | particularly if our brains are online.
00:33:29.180 | I mean, how could we not be online?
00:33:31.540 | - Do you think it's possible that given this merger
00:33:34.340 | with AI, most of our meaningful interactions
00:33:39.020 | will be in this virtual world?
00:33:42.660 | Most of our life, we fall in love, we make friends,
00:33:46.300 | we come up with ideas, we do collaborations, we have fun.
00:33:49.420 | - I actually know somebody who's marrying somebody
00:33:51.660 | that they never met.
00:33:52.940 | I think they just met her briefly before the wedding,
00:33:57.700 | but she actually fell in love with this other person
00:34:01.460 | never having met them.
00:34:04.140 | And I think the love is real.
00:34:10.300 | - That's a beautiful story, but do you think that story
00:34:13.140 | is one that might be experienced not just by
00:34:16.540 | hundreds of thousands of people,
00:34:18.460 | but instead by hundreds of millions of people?
00:34:22.220 | - I mean, it really gives you appreciation
00:34:23.860 | for these virtual ways of communicating.
00:34:27.220 | And if anybody can do it, then it's really not
00:34:31.140 | such a freak story.
00:34:33.640 | So I think more and more people will do that.
00:34:37.500 | - But that's turning our back on our entire history
00:34:40.460 | of evolution.
00:34:41.620 | In the old days, we used to fall in love by holding hands
00:34:45.620 | and sitting by the fire, that kind of stuff.
00:34:49.380 | Here, you're playing.
00:34:50.780 | - Actually, I have five patents on where you can hold hands,
00:34:54.740 | even if you're separated.
00:34:57.140 | - Great.
00:34:58.740 | So the touch, the sense, it's all just senses.
00:35:02.020 | It's all just--
00:35:02.860 | - Yeah, I mean, touch is, it's not just that you're touching
00:35:06.420 | someone or not, there's a whole way of doing it,
00:35:09.180 | and it's very subtle, but ultimately,
00:35:12.380 | we can emulate all of that.
00:35:15.280 | - Are you excited by that future?
00:35:19.900 | Do you worry about that future?
00:35:21.620 | - I have certain worries about the future, but not--
00:35:26.020 | - Not that.
00:35:26.860 | - Virtual touch.
00:35:27.680 | (both laughing)
00:35:29.780 | - Well, I agree with you.
00:35:31.580 | You described six stages in the evolution
00:35:34.260 | of information processing in the universe,
00:35:36.660 | as you started to describe.
00:35:39.500 | Can you maybe talk through some of those stages,
00:35:42.820 | from the physics and chemistry to DNA and brains,
00:35:46.380 | and then to the very end,
00:35:48.860 | to the very beautiful end of this process?
00:35:52.060 | - It actually gets more rapid.
00:35:54.180 | So physics and chemistry, that's how we started.
00:35:57.720 | - So from the very beginning of the universe--
00:36:02.180 | - We had lots of electrons and various things
00:36:04.780 | traveling around, and that took, actually,
00:36:09.060 | many billions of years, kind of jumping ahead here
00:36:13.780 | to kind of some of the last stages,
00:36:16.020 | where we have things like love and creativity.
00:36:19.300 | It's really quite remarkable that that happens.
00:36:21.820 | But finally, physics and chemistry created biology and DNA,
00:36:27.860 | and now you had actually one type of molecule
00:36:33.460 | that described the cutting edge of this process.
00:36:37.000 | And we go from physics and chemistry to biology.
00:36:43.580 | And finally, biology created brains.
00:36:47.120 | I mean, not everything that's created by biology
00:36:51.460 | has a brain, but eventually brains came along.
00:36:56.460 | - And all of this is happening faster and faster.
00:36:58.860 | - Yeah.
00:37:00.340 | It created increasingly complex organisms.
00:37:04.540 | Another key thing is actually not just brains,
00:37:08.460 | but our thumb.
00:37:11.040 | Because there's a lot of animals with brains
00:37:16.040 | even bigger than humans.
00:37:18.080 | Elephants have a bigger brain, whales have a bigger brain,
00:37:22.940 | but they've not created technology
00:37:26.080 | because they don't have a thumb.
00:37:28.680 | So that's one of the really key elements
00:37:32.280 | in the evolution of humans.
00:37:34.120 | - This physical manipulator device
00:37:37.920 | that's useful for puzzle solving in the physical reality.
00:37:41.360 | - So I could think, I could look at a tree and go,
00:37:43.680 | oh, I could actually trip that branch down
00:37:46.280 | and eliminate the leaves and carve a tip on it
00:37:49.920 | and it would create technology.
00:37:51.840 | And you can't do that if you don't have a thumb.
00:37:56.680 | - Yeah.
00:37:57.520 | - So thumbs then created technology,
00:38:04.520 | and technology also had a memory.
00:38:08.080 | And now those memories are competing
00:38:10.040 | with the scale and scope of human beings.
00:38:15.040 | And ultimately we'll go beyond it.
00:38:17.120 | And then we're gonna merge human technology
00:38:22.520 | with human intelligence
00:38:27.520 | and understand how human intelligence works,
00:38:31.000 | which I think we already do.
00:38:33.320 | And we're putting that into our human technology.
00:38:37.920 | - So create the technology inspired by our own intelligence
00:38:43.120 | and then that technology supersedes us
00:38:45.400 | in terms of its capabilities.
00:38:47.240 | And we ride along.
00:38:48.680 | Or do you ultimately see it as fundamentally--
00:38:50.520 | - And we ride along, but a lot of people don't see that.
00:38:52.840 | They say, well, you got humans and you got machines
00:38:56.240 | and there's no way we can ultimately compete with humans.
00:38:59.320 | And you can already see that.
00:39:02.240 | Lee Sedol, who's like the best Go player in the world,
00:39:07.080 | says he's not gonna play Go anymore.
00:39:09.200 | - Yeah.
00:39:10.080 | - Because playing Go for human,
00:39:12.960 | that was like the ultimate in intelligence
00:39:15.000 | 'cause no one else could do that.
00:39:16.680 | But now a machine can actually go way beyond him.
00:39:22.440 | And so he says, well, there's no point playing it anymore.
00:39:25.120 | - That may be more true for games than it is for life.
00:39:28.960 | I think there's a lot of benefit to working together
00:39:32.200 | with AI in regular life.
00:39:34.520 | So if you were to put a probability on it,
00:39:38.000 | is it more likely that we merge with AI or AI replaces us?
00:39:43.000 | - A lot of people just think computers come along
00:39:47.480 | and they compete with them.
00:39:48.400 | We can't really compete and that's the end of it.
00:39:50.960 | As opposed to them increasing our abilities.
00:39:57.280 | And if you look at most technology,
00:39:59.840 | it increases our abilities.
00:40:02.000 | I mean, look at the history of work.
00:40:06.360 | Look at what people did 100 years ago.
00:40:11.240 | Does any of that exist anymore?
00:40:13.120 | People, I mean, if you were to predict
00:40:16.680 | that all of these jobs would go away
00:40:19.560 | and would be done by machines,
00:40:21.120 | people would say, well, that's gonna be,
00:40:22.880 | no one's gonna have jobs
00:40:24.160 | and it's gonna be massive unemployment.
00:40:29.600 | But I show in this book that's coming out,
00:40:32.640 | the amount of people that are working,
00:40:36.840 | even as a percentage of the population, has gone way up.
00:40:41.680 | - We're looking at the X-axis year from 1774 to 2024
00:40:46.240 | and on the Y-axis, personal income per capita
00:40:49.600 | in constant dollars and it's growing super linearly.
00:40:52.800 | I mean, it's-- - Yeah, 2021,
00:40:54.440 | constant dollars and it's gone way up.
00:40:58.080 | That's not what you were to predict
00:40:59.920 | given that we would predict
00:41:01.960 | that all these jobs would go away.
00:41:03.800 | But the reason it's gone up
00:41:06.520 | is because we've basically enhanced our own capabilities
00:41:09.880 | by using these machines
00:41:11.280 | as opposed to them just competing with us.
00:41:13.400 | That's a key way in which we're gonna be able
00:41:16.280 | to become far smarter than we are now
00:41:18.640 | by increasing the number of different parameters
00:41:23.200 | we can consider in making a decision.
00:41:26.480 | - I was very fortunate, I am very fortunate
00:41:28.600 | to be able to get a glimpse preview of your upcoming book,
00:41:33.600 | Singularity's Nearer.
00:41:37.280 | "The Singularity is Nearer."
00:41:41.880 | the increasing exponential growth of technology,
00:41:44.720 | one of the themes is that things are getting better
00:41:48.440 | in all aspects of life.
00:41:50.760 | And you talk just about this.
00:41:53.680 | So one of the things you're saying is with jobs.
00:41:55.600 | So let me just ask about that.
00:41:57.800 | There is a big concern that automation,
00:42:01.040 | especially powerful AI, will get rid of jobs.
00:42:06.040 | There are people who lose jobs.
00:42:07.880 | And as you were saying, the sense is
00:42:10.960 | throughout the history of the 20th century,
00:42:14.000 | automation did not do that ultimately.
00:42:16.680 | And so the question is, will this time be different?
00:42:20.600 | - Right, that is the question.
00:42:22.560 | Will this time be different?
00:42:24.480 | And it really has to do with how quickly we can merge
00:42:27.400 | with this type of intelligence.
00:42:29.080 | Whether Lambda or GPT-3 is out there,
00:42:34.920 | and maybe it's overcome some of its key problems,
00:42:38.640 | and we really haven't enhanced human intelligence,
00:42:43.480 | that might be a negative scenario.
00:42:45.640 | But I mean, that's why we create technologies,
00:42:53.140 | to enhance ourselves.
00:42:54.640 | And I believe we will be enhanced.
00:42:58.800 | We're not just gonna sit here with 300 million
00:43:03.800 | modules in our neocortex.
00:43:09.040 | We're gonna be able to go beyond that.
00:43:10.940 | Because that's useful, but we can multiply that by 10,
00:43:19.640 | 100, 1,000, a million.
00:43:22.280 | And you might think, well, what's the point of doing that?
00:43:28.700 | It's like asking somebody that's never heard music,
00:43:33.900 | well, what's the value of music?
00:43:36.580 | I mean, you can't appreciate it until you've created it.
00:43:39.920 | - There's some worry that there'll be a wealth disparity.
00:43:46.860 | - Class or wealth disparity, only the rich people will be,
00:43:51.560 | basically, the rich people will first have access
00:43:54.280 | to this kind of thing, and then, because of the ability
00:43:56.940 | to merge, they will get richer
00:44:00.800 | exponentially faster.
00:44:02.720 | - And I say that's just like cell phones.
00:44:05.180 | I mean, there's like four billion cell phones
00:44:08.120 | in the world today.
00:44:10.360 | In fact, when cell phones first came out,
00:44:13.380 | you had to be fairly wealthy.
00:44:14.880 | They weren't very inexpensive.
00:44:17.540 | So you had to have some wealth in order to afford them.
00:44:20.180 | - Yeah, there were these big, sexy phones.
00:44:22.760 | - And they didn't work very well.
00:44:24.060 | They did almost nothing.
00:44:26.480 | So you can only afford these things if you're wealthy
00:44:31.260 | at a point where they really don't work very well.
00:44:34.060 | - So achieving scale and making it inexpensive
00:44:39.860 | is part of making the thing work well.
00:44:42.220 | - Exactly.
00:44:43.560 | So these are not totally cheap, but they're pretty cheap.
00:44:46.960 | I mean, you can get them for a few hundred dollars.
00:44:52.140 | - Especially given the kind of things it provides for you.
00:44:55.400 | There's a lot of people in the third world
00:44:57.100 | that have very little, but they have a smartphone.
00:45:00.400 | - Yeah, absolutely.
00:45:01.980 | - And the same will be true with AI.
00:45:03.840 | - I mean, I see homeless people have their own cell phones.
00:45:07.640 | - Yeah, so your sense is any kind of advanced technology
00:45:12.120 | will take the same trajectory.
00:45:13.760 | - Right, it ultimately becomes cheap and will be affordable.
00:45:17.740 | I probably would not be the first person
00:45:21.040 | to put something in my brain to connect to computers
00:45:26.040 | 'cause I think it will have limitations.
00:45:30.240 | But once it's really perfected,
00:45:33.160 | at that point it'll be pretty inexpensive.
00:45:36.440 | I think it'll be pretty affordable.
00:45:39.640 | - So in which other ways, as you outline your book,
00:45:43.080 | is life getting better?
00:45:44.480 | 'Cause I think--
00:45:45.320 | - Well, I have 50 charts in there
00:45:49.200 | where everything is getting better.
00:45:51.760 | - I think there's a kind of cynicism about,
00:45:54.400 | like even if you look at extreme poverty, for example.
00:45:58.000 | - For example, this is actually a poll
00:46:00.880 | taken on extreme poverty, and the people were asked,
00:46:05.520 | has poverty gotten better or worse?
00:46:08.320 | And the options are increased by 50%,
00:46:11.120 | increased by 25%, remain the same,
00:46:13.920 | decreased by 25%, decreased by 50%.
00:46:16.720 | If you're watching this or listening to this,
00:46:18.720 | try to vote for yourself.
00:46:21.440 | - 70% thought it had gotten worse,
00:46:24.160 | and that's the general impression.
00:46:27.080 | 88% thought it had gotten worse, it remained the same.
00:46:31.560 | Only 1% thought it decreased by 50%,
00:46:35.680 | and that is the answer.
00:46:37.560 | It actually decreased by 50%.
00:46:39.480 | - So only 1% of people got the right optimistic estimate
00:46:43.640 | of how poverty is--
00:46:45.200 | - Right, and this is the reality,
00:46:47.480 | and it's true of almost everything you look at.
00:46:51.080 | You don't wanna go back 100 years or 50 years.
00:46:54.760 | Things were quite miserable then,
00:46:56.920 | but we tend not to remember that.
00:47:01.000 | - So literacy rate increasing over the past few centuries
00:47:05.320 | across all the different nations,
00:47:07.920 | nearly to 100% across many of the nations in the world.
00:47:11.880 | - It's gone way up,
00:47:12.800 | average years of education have gone way up.
00:47:15.640 | Life expectancy is also increasing.
00:47:18.560 | Life expectancy was 48 in 1900.
00:47:23.560 | - And it's over 80 now.
00:47:26.400 | - And it's gonna continue to go up,
00:47:28.160 | particularly as we get into more advanced stages
00:47:30.920 | of simulated biology.
00:47:33.400 | - For life expectancy, these trends are the same
00:47:35.600 | for at birth, age one, age five, age 10,
00:47:37.960 | so it's not just the infant mortality.
00:47:40.360 | - And I have 50 more graphs in the book
00:47:42.680 | about all kinds of things.
00:47:44.600 | Even spread of democracy,
00:47:48.360 | which might bring up some sort of controversial issues,
00:47:52.560 | it still has gone way up.
00:47:55.160 | - Well, that one is gone way up,
00:47:57.280 | but that one is a bumpy road, right?
00:47:59.560 | - Exactly, and somebody might represent democracy,
00:48:03.240 | and go backwards, but we basically had no democracies
00:48:08.240 | before the creation of the United States,
00:48:11.000 | which was a little over two centuries ago,
00:48:13.880 | which in the scale of human history isn't that long.
00:48:16.520 | - Do you think superintelligence systems
00:48:19.040 | will help with democracy?
00:48:22.580 | So what is democracy?
00:48:25.040 | Democracy is giving a voice to the populace,
00:48:30.640 | and having their ideas, having their beliefs,
00:48:33.720 | having their views represented.
00:48:38.200 | - Well, I hope so.
00:48:39.460 | I mean, we've seen social networks
00:48:44.120 | can spread conspiracy theories,
00:48:47.760 | which have been quite negative,
00:48:51.360 | being, for example, being against any kind of stuff
00:48:55.540 | that would help your health.
00:48:58.360 | - So those kinds of ideas have,
00:49:01.520 | on social media, what you notice
00:49:05.120 | is they increase engagement,
00:49:07.160 | so dramatic division increases engagement.
00:49:10.360 | Do you worry about AI systems
00:49:12.200 | that will learn to maximize that division?
00:49:15.160 | - I mean, I do have some concerns about this,
00:49:20.360 | and I have a chapter in the book
00:49:24.020 | about the perils of advanced AI.
00:49:27.940 | Spreading misinformation on social networks is one of them,
00:49:34.080 | but there are many others.
00:49:36.780 | - What's the one that worries you the most,
00:49:38.940 | that we should think about to try to avoid?
00:49:42.900 | - Well, it's hard to choose.
00:49:49.020 | (Lex laughing)
00:49:50.800 | We do have the nuclear power
00:49:52.520 | that evolved when I was a child, I remember,
00:49:57.680 | and we would actually do these drills
00:50:01.520 | against a nuclear war.
00:50:03.560 | We'd get under our desks and put our hands behind our heads
00:50:07.640 | to protect us from a nuclear war.
00:50:10.120 | Seemed to work, we're still around, so.
00:50:13.300 | - You're protected.
00:50:17.080 | - But that's still a concern,
00:50:20.080 | and there are key dangerous situations
00:50:22.840 | that can take place in biology.
00:50:26.220 | Someone could create a virus that's very,
00:50:32.120 | I mean, we have viruses that are hard to spread,
00:50:35.920 | and they can be very dangerous,
00:50:42.800 | and we have viruses that are easy to spread,
00:50:46.160 | but they're not so dangerous.
00:50:48.300 | Somebody could create something
00:50:51.580 | that would be very easy to spread and very dangerous,
00:50:55.580 | and be very hard to stop,
00:50:57.420 | and it could be something that would spread
00:51:02.060 | without people noticing, 'cause people could get it,
00:51:04.620 | they'd have no symptoms, and then everybody would get it,
00:51:08.340 | and then symptoms would occur maybe a month later.
00:51:11.820 | So I mean, and that actually doesn't occur normally,
00:51:17.820 | because if we were to have a problem with that,
00:51:22.820 | we wouldn't exist.
00:51:26.900 | So the fact that humans exist means
00:51:29.020 | that we don't have viruses that can spread easily
00:51:33.180 | and kill us, because otherwise we wouldn't exist.
00:51:37.540 | - Yeah, viruses don't wanna do that.
00:51:39.060 | They want to spread and keep the host alive somewhat.
00:51:44.100 | So you can describe various dangers with biology.
00:51:47.240 | Also nanotechnology, which we actually haven't experienced
00:51:53.500 | yet, but there are people that are creating nanotechnology,
00:51:56.020 | and I described that in the book.
00:51:57.940 | - Now you're excited by the possibilities
00:51:59.900 | of nanotechnology, of nanobots,
00:52:02.520 | of being able to do things inside our body,
00:52:04.900 | inside our mind, that's going to help.
00:52:07.540 | What's exciting, what's terrifying about nanobots?
00:52:10.900 | - What's exciting is that that's a way
00:52:12.660 | to communicate with our neocortex,
00:52:16.040 | because each neocortex is pretty small,
00:52:19.020 | and you need a small entity that can actually get in there
00:52:22.380 | and establish a communication channel.
00:52:25.460 | And that's gonna really be necessary
00:52:28.180 | to connect our brains to AI within ourselves,
00:52:33.180 | because otherwise it would be hard for us
00:52:37.180 | to compete with it.
00:52:38.780 | - In a high-bandwidth way.
00:52:40.300 | - Yeah, yeah.
00:52:41.820 | And that's key, actually, 'cause a lot of the things
00:52:45.740 | like Neuralink are really not high-bandwidth yet.
00:52:49.020 | - So nanobots is the way you achieve high-bandwidth.
00:52:52.700 | How much intelligence would those nanobots have?
00:52:55.900 | - Yeah, they don't need a lot.
00:52:58.300 | Just enough to basically establish
00:53:00.900 | a communication channel to one nanobot.
00:53:04.460 | - So it's primarily about communication
00:53:07.300 | between external computing devices
00:53:09.900 | and our biological thinking machine.
00:53:14.120 | What worries you about nanobots?
00:53:17.060 | Is it similar to with the viruses?
00:53:19.820 | - Well, I mean, there's the gray goo challenge.
00:53:22.740 | - Yes.
00:53:23.580 | - If you have a nanobot that wanted to create
00:53:29.900 | any kind of entity and repeat itself,
00:53:37.520 | and was able to operate in a natural environment,
00:53:41.520 | it could turn everything into that entity
00:53:45.240 | and basically destroy all biological life.
00:53:50.240 | - So you mentioned nuclear weapons.
00:53:54.640 | - Yeah.
00:53:55.460 | - I'd love to hear your opinion about the 21st century
00:54:01.840 | and whether you think we might destroy ourselves.
00:54:05.320 | And maybe your opinion, if it has changed
00:54:08.840 | by looking at what's going on in Ukraine,
00:54:11.760 | that we could have a hot war with nuclear powers involved
00:54:16.760 | and the tensions building and the seeming forgetting
00:54:23.320 | of how terrifying and destructive nuclear weapons are.
00:54:27.460 | Do you think humans might destroy ourselves
00:54:32.940 | in the 21st century, and if we do, how?
00:54:36.240 | And how do we avoid it?
00:54:37.520 | - I don't think that's gonna happen,
00:54:41.080 | despite the terrors of that war.
00:54:45.200 | It is a possibility, but I mean, I don't--
00:54:50.200 | - It's unlikely in your mind.
00:54:52.680 | - Yeah, even with the tensions we've had
00:54:55.360 | with this one nuclear power plant that's been taken over,
00:55:02.340 | it's very tense, but I don't actually see
00:55:06.920 | a lot of people worrying that that's gonna happen.
00:55:10.200 | I think we'll avoid that.
00:55:11.920 | We had two nuclear bombs go off in '45,
00:55:15.960 | so now we're 77 years later.
00:55:20.880 | - Yeah, we're doing pretty good.
00:55:22.400 | - We've never had another one go off through anger.
00:55:27.040 | - But people forget.
00:55:28.720 | People forget the lessons of history.
00:55:31.060 | - Well, yeah, I am worried about it.
00:55:33.600 | I mean, that is definitely a challenge.
00:55:37.480 | - But you believe that we'll make it out,
00:55:40.660 | and ultimately, superintelligent AI will help us
00:55:43.680 | make it out, as opposed to destroy us.
00:55:47.720 | - I think so, but we do have to be mindful of these dangers.
00:55:52.440 | And there are other dangers besides nuclear weapons.
00:55:56.400 | - So to get back to merging with AI,
00:56:01.100 | will we be able to upload our mind in a computer
00:56:03.780 | in a way where we might even transcend
00:56:09.420 | the constraints of our bodies?
00:56:11.660 | So copy our mind into a computer and leave the body behind?
00:56:15.320 | - Let me describe one thing I've already done with my father.
00:56:21.100 | - That's a great story.
00:56:22.280 | - So we created a technology.
00:56:25.420 | This is public, came out, I think, six years ago,
00:56:30.180 | where you could ask any question,
00:56:33.780 | and the released product, which I think
00:56:35.740 | is still on the market, would read 200,000 books,
00:56:40.740 | and then find the one sentence in 200,000 books
00:56:45.940 | that best answered your question.
00:56:48.220 | And it's actually quite interesting.
00:56:51.180 | You can ask all kinds of questions,
00:56:52.740 | and you get the best answer in 200,000 books.
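As an illustration of the general idea (not the actual system he describes), sentence-level retrieval can be sketched with off-the-shelf sentence embeddings; the model name and the tiny three-sentence corpus below are assumptions for the sketch:

```python
# Minimal sketch of "find the sentence that best answers a question"
# via sentence embeddings. Illustrative only, not the described product.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model

sentences = [
    "Brahms combined classical forms with deep romantic expression.",
    "The choir should be seated in four rows facing the conductor.",
    "Music education builds discipline and joy in equal measure.",
]
corpus_emb = model.encode(sentences, convert_to_tensor=True)

question = "Who is the most interesting composer?"
query_emb = model.encode(question, convert_to_tensor=True)

# Cosine similarity between the question and every candidate sentence.
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(sentences[best], float(scores[best]))
```

Scaling the same idea to the sentences of 200,000 books is mostly an indexing problem, e.g. an approximate nearest-neighbor index over precomputed embeddings.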
00:56:56.220 | But I was also able to take it and not go through
00:57:02.260 | 200,000 books, but go through a book that I put together,
00:57:07.060 | which is basically everything my father had written.
00:57:09.700 | So everything he had written, I had gathered,
00:57:14.640 | and we created a book, everything that
00:57:18.700 | Frederick Kurzweil had written.
00:57:20.220 | Now, I didn't think this actually would work that well,
00:57:23.340 | because stuff he had written was stuff about how to lay out,
00:57:28.340 | I mean, he directed choral groups and music groups,
00:57:35.860 | and he would be laying out how the people should,
00:57:44.140 | where they should sit, and how to fund this,
00:57:49.620 | and all kinds of things that really didn't seem
00:57:54.020 | that interesting.
00:57:55.100 | And yet, when you ask a question, it would go through it,
00:58:00.780 | and it would actually give you a very good answer.
00:58:04.700 | So I said, "Well, who's the most interesting composer?"
00:58:07.820 | And he said, "Well, definitely Brahms."
00:58:09.500 | And he would go on about how Brahms was fabulous,
00:58:13.200 | and talk about the importance of music education.
00:58:17.980 | - So you could have essentially a question and answer,
00:58:21.100 | a conversation with him.
00:58:21.940 | - You could have a conversation with him,
00:58:22.980 | which was actually more interesting than talking to him,
00:58:25.900 | because if you talked to him, he'd be concerned about
00:58:29.060 | how they're gonna lay out this property
00:58:31.660 | to give a choral group.
00:58:34.180 | - He'd be concerned about the day-to-day
00:58:36.020 | versus the big questions.
00:58:37.300 | - Exactly, yeah.
00:58:39.020 | - And you did ask about the meaning of life,
00:58:41.580 | and he answered, "Love."
00:58:43.260 | - Yeah.
00:58:44.100 | - Do you miss him?
00:58:48.360 | - Yes, I do.
00:58:50.280 | You know, you get used to missing somebody after 52 years,
00:58:57.920 | and I didn't really have intelligent conversations with him
00:59:02.680 | until later in life.
00:59:04.480 | In the last few years, he was sick,
00:59:08.800 | which meant he was home a lot,
00:59:10.120 | and I was actually able to talk to him
00:59:11.880 | about different things like music and other things.
00:59:16.880 | So I miss that very much.
00:59:19.840 | - What did you learn about life from your father?
00:59:22.280 | What part of him is with you now?
00:59:27.840 | - He was devoted to music,
00:59:31.560 | and when he would create something to music,
00:59:33.880 | it put him in a different world.
00:59:35.480 | Otherwise, he was very shy,
00:59:39.700 | and if people got together,
00:59:43.800 | he tended not to interact with people,
00:59:47.000 | just because of his shyness.
00:59:48.560 | But when he created music,
00:59:51.320 | that, he was like a different person.
00:59:54.440 | - Do you have that in you?
00:59:56.560 | That kind of light that shines?
00:59:59.840 | - I mean, I got involved with technology at like age five.
01:00:04.840 | - And you fell in love with it
01:00:07.640 | in the same way he did with music?
01:00:09.360 | - Yeah, yeah.
01:00:11.320 | I remember, this actually happened with my grandmother.
01:00:15.880 | She had a manual typewriter,
01:00:20.080 | and she wrote a book, "One Life is Not Enough,"
01:00:23.040 | which is actually a good title for a book I might write.
01:00:26.280 | And it was about a school she had created.
01:00:30.280 | Well, actually, her mother created it.
01:00:33.800 | So my mother's mother's mother created the school in 1868,
01:00:38.320 | and it was the first school in Europe
01:00:40.620 | that provided higher education for girls.
01:00:42.640 | It went through 14th grade.
01:00:44.420 | If you were a girl,
01:00:47.020 | and you were lucky enough to get an education at all,
01:00:50.720 | it would go through like ninth grade,
01:00:52.920 | and many people didn't have any education as a girl.
01:00:55.800 | This went through 14th grade.
01:00:58.340 | Her mother created it, she took it over,
01:01:03.520 | and the book was about the history of the school
01:01:09.440 | and her involvement with it.
01:01:11.120 | When she presented it to me,
01:01:14.000 | I was not so interested in the story of the school,
01:01:19.000 | but I was totally amazed with this manual typewriter.
01:01:25.240 | I mean, here was something
01:01:26.360 | you could put a blank piece of paper into,
01:01:29.440 | and you could turn it into something
01:01:31.080 | that looked like it came from a book.
01:01:33.760 | And you could actually type on it,
01:01:35.040 | and it looked like it came from a book.
01:01:36.440 | It was just amazing to me.
01:01:39.120 | And I could see actually how it worked.
01:01:41.740 | And I was also interested in magic.
01:01:44.820 | But in magic, if somebody actually knows how it works,
01:01:50.460 | the magic goes away.
01:01:52.560 | The magic doesn't stay there
01:01:53.800 | if you actually understand how it works.
01:01:56.640 | But here was technology.
01:01:57.880 | I didn't have that word when I was five or six.
01:02:01.040 | - And the magic was still there for you?
01:02:02.760 | - The magic was still there,
01:02:04.160 | even if you knew how it worked.
01:02:05.760 | So I became totally interested in this,
01:02:08.800 | and then went around, collected little pieces
01:02:12.600 | of mechanical objects from bicycles, from broken radios.
01:02:17.600 | I would go through the neighborhood.
01:02:19.400 | This was an era where you would allow five or six-year-olds
01:02:23.760 | to run through the neighborhood and do this.
01:02:26.360 | We don't do that anymore.
01:02:27.760 | But I didn't know how to put them together.
01:02:30.680 | And I said, "If I could just figure out
01:02:32.200 | "how to put these things together,
01:02:34.320 | "I could solve any problem."
01:02:37.360 | And I actually remember talking to these very old girls,
01:02:41.640 | I think they were 10,
01:02:42.760 | and telling them, "If I could just figure this out,
01:02:48.240 | "we could fly, we could do anything."
01:02:50.080 | And they said, "Well, you have quite an imagination."
01:02:53.640 | And then when I was in third grade,
01:03:00.760 | so I was like eight,
01:03:02.840 | I created a virtual reality theater
01:03:05.880 | where people could come on stage
01:03:07.760 | and they could move their arms.
01:03:09.920 | And all of it was controlled through one control box.
01:03:13.560 | It was all done with mechanical technology.
01:03:15.840 | And it was a big hit in my third grade class.
01:03:19.820 | And then I went on to do things
01:03:23.040 | in junior high school science fairs,
01:03:24.960 | and high school science fairs,
01:03:27.680 | where I won the Westinghouse Science Talent Search.
01:03:30.760 | So I mean, I became committed to technology
01:03:33.960 | when I was five or six years old.
01:03:37.480 | - You've talked about how you use lucid dreaming
01:03:42.400 | to think, to come up with ideas as a source of creativity.
01:03:45.920 | Could you maybe talk through that?
01:03:49.360 | Maybe the process of how to,
01:03:52.040 | you've invented a lot of things.
01:03:54.080 | You've came up and thought through
01:03:55.600 | some very interesting ideas.
01:03:57.180 | What advice would you give,
01:03:59.520 | or can you speak to the process of thinking,
01:04:03.400 | of how to think, how to think creatively?
01:04:07.080 | - Well, I mean, sometimes I will think through in a dream
01:04:10.440 | and try to interpret that.
01:04:12.320 | But I think the key issue that I would tell younger people
01:04:17.320 | is to put yourself in the position
01:04:25.080 | that what you're trying to create already exists.
01:04:30.660 | And then you're explaining--
01:04:32.940 | - How it works.
01:04:35.780 | - Exactly.
01:04:38.220 | - That's really interesting.
01:04:39.220 | You paint a world that you would like to exist,
01:04:42.780 | you think it exists, and reverse engineer that.
01:04:45.980 | - And then you actually imagine
01:04:46.820 | you're giving a speech about how you created this.
01:04:50.140 | Well, you'd have to then work backwards
01:04:51.780 | as to how you would create it in order to make it work.
01:04:56.780 | - That's brilliant.
01:04:58.140 | And that requires some imagination, too,
01:05:01.420 | some first principles thinking.
01:05:03.160 | You have to visualize that world.
01:05:06.060 | That's really interesting.
01:05:07.760 | - And generally, when I talk about things
01:05:10.640 | we're trying to invent, I would use the present tense
01:05:13.220 | as if it already exists.
01:05:14.720 | Not just to give myself that confidence,
01:05:18.300 | but everybody else who's working on it.
01:05:20.300 | We just have to kind of do all the steps
01:05:26.660 | in order to make it actual.
01:05:31.060 | - How much of a good idea is about timing?
01:05:33.500 | How much is it about your genius
01:05:37.060 | versus that its time has come?
01:05:40.020 | - Timing's very important.
01:05:42.940 | I mean, that's really why I got into futurism.
01:05:45.860 | I wasn't inherently a futurist.
01:05:50.820 | That was not really my goal.
01:05:54.300 | It's really to figure out when things are feasible.
01:05:57.380 | We see that now with large-scale models.
01:06:00.760 | The very large-scale models like GPT-3
01:06:06.380 | started two years ago.
01:06:08.180 | Four years ago, it wasn't feasible.
01:06:11.140 | In fact, they did create GPT-2, which didn't work.
01:06:16.140 | So it required a certain amount of timing
01:06:22.340 | having to do with this exponential growth
01:06:24.180 | of computing power.
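The timing logic he describes reduces to simple exponential arithmetic: if price-performance doubles on a roughly fixed cadence, you can estimate when a capability that is some factor out of reach becomes affordable. A minimal sketch with made-up numbers (both the gap and the doubling cadence are assumptions):

```python
import math

# Illustrative assumptions only.
compute_gap = 1000.0         # capability needs ~1000x today's affordable compute
doubling_time_years = 1.5    # assumed price-performance doubling cadence

years_until_feasible = math.log2(compute_gap) * doubling_time_years
print(f"feasible in roughly {years_until_feasible:.0f} years")  # ~15 with these numbers
```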
01:06:26.280 | - So futurism, in some sense, is a study of timing,
01:06:31.220 | trying to understand how the world will evolve
01:06:34.340 | and when will the capacity for certain ideas emerge.
01:06:38.260 | - And that's become a thing in itself,
01:06:39.980 | then, to try to time things in the future.
01:06:42.540 | But really, its original purpose was to time my products.
01:06:50.180 | I mean, I did OCR in the 1970s,
01:06:53.840 | because OCR doesn't require a lot of computation.
01:07:00.460 | - Optical character recognition.
01:07:02.780 | - Yeah, so we were able to do that in the '70s,
01:07:06.540 | and I waited 'til the '80s to address speech recognition,
01:07:10.980 | since that requires more computation.
01:07:14.420 | - So you were thinking through timing
01:07:15.980 | when you were developing those things.
01:07:17.460 | - Yeah. - Has its time come?
01:07:19.820 | - Yeah.
01:07:21.340 | - And that's how you've developed that brain power
01:07:24.300 | to start to think in a futurist sense.
01:07:26.700 | When, how will the world look in 2045,
01:07:30.980 | and work backwards, and how it gets there.
01:07:33.580 | - But that has become a thing in itself,
01:07:35.300 | because looking at what things will be like in the future
01:07:40.300 | reflects such dramatic changes in how humans will live.
01:07:48.660 | So that was worth communicating also.
01:07:51.260 | - So you developed that muscle of predicting the future,
01:07:56.260 | and then apply it broadly, and start to discuss
01:07:59.860 | how it changes the world of technology,
01:08:02.280 | how it changes the world of human life on Earth.
01:08:06.820 | In Danielle, one of your books,
01:08:09.020 | you write about someone who has the courage
01:08:11.620 | to question assumptions that limit human imagination
01:08:15.060 | to solve problems.
01:08:16.660 | Can you also give advice on how each of us
01:08:20.260 | can have this kind of courage?
01:08:22.820 | - Well, it's good that you picked that quote,
01:08:24.580 | because I think that does symbolize what Danielle is about.
01:08:27.540 | - Courage.
01:08:28.820 | So how can each of us have that courage
01:08:30.820 | to question assumptions?
01:08:32.840 | - I mean, we see that when people can go beyond
01:08:38.660 | the current realm and create something that's new.
01:08:43.660 | I mean, take Uber, for example.
01:08:45.580 | Before that existed, you never thought
01:08:48.100 | that that would be feasible,
01:08:49.980 | and it did require changes in the way people work.
01:08:53.200 | - Is there practical advice you give in the book
01:08:57.860 | about what each of us can do to be a Danielle?
01:09:02.040 | - Well, she looks at the situation and tries to imagine
01:09:08.100 | how she can overcome various obstacles.
01:09:15.820 | And then she goes for it,
01:09:17.940 | and she's a very good communicator,
01:09:19.700 | so she can communicate these ideas to other people.
01:09:24.700 | - And there's practical advice of learning to program
01:09:27.620 | and recording your life and things of this nature.
01:09:32.020 | Become a physicist.
01:09:33.260 | So you list a bunch of different suggestions
01:09:36.900 | of how to throw yourself into this world.
01:09:39.140 | - Yeah, I mean, it's kind of an idea
01:09:42.220 | how young people can actually change the world
01:09:46.180 | by learning all of these different skills.
01:09:51.180 | - And at the core of that is the belief
01:09:54.740 | that you can change the world,
01:09:56.780 | that your mind, your body can change the world.
01:10:00.500 | - Yeah, that's right.
01:10:02.780 | - And not letting anyone else tell you otherwise.
01:10:05.180 | - That's very good, exactly.
01:10:08.940 | When we upload, the story you told about your dad
01:10:13.540 | and having a conversation with him,
01:10:15.340 | we're talking about uploading your mind to the computer.
01:10:19.980 | Do you think we'll have a future
01:10:23.140 | with something you call afterlife?
01:10:25.620 | We'll have avatars that mimic increasingly better and better
01:10:29.820 | our behavior, our appearance, all that kind of stuff.
01:10:33.500 | Even when those people are perhaps no longer with us.
01:10:36.780 | - Yes, I mean, we need some information about them.
01:10:41.780 | I mean, I think about my father.
01:10:44.540 | I have what he wrote.
01:10:47.060 | Now, he didn't have a word processor,
01:10:50.460 | so he didn't actually write that much.
01:10:53.700 | And our memories of him aren't perfect.
01:10:56.020 | So how do you even know if you've created
01:10:59.340 | something that's satisfactory?
01:11:00.860 | Now, you could do a Frederick Kurzweil Turing test.
01:11:04.900 | It seems like Frederick Kurzweil to me.
01:11:07.060 | But the people who remember him, like me,
01:11:10.300 | don't have a perfect memory.
01:11:14.380 | - Is there such a thing as a perfect memory?
01:11:16.300 | Maybe the whole point is for him
01:11:21.300 | to make you feel a certain way.
01:11:25.540 | - Yeah, well, I think that would be the goal.
01:11:28.340 | - And that's the connection we have with loved ones.
01:11:30.340 | It's not really based on very strict definition of truth.
01:11:35.060 | It's more about the experiences we share.
01:11:37.500 | And they get morphed through memory.
01:11:39.820 | But ultimately, they make us smile.
01:11:41.740 | - I think we definitely can do that.
01:11:44.380 | And that would be very worthwhile.
01:11:46.740 | - So do you think we'll have a world of replicants,
01:11:49.900 | of copies, there'll be a bunch of Ray Kurzweils.
01:11:53.740 | Like I could hang out with one.
01:11:55.260 | I can download it for five bucks
01:11:58.140 | and have a best friend, Ray.
01:12:00.020 | And you, the original copy, wouldn't even know about it.
01:12:04.820 | Is that, do you think that world is,
01:12:10.020 | first of all, do you think that world is feasible?
01:12:13.340 | And do you think there's ethical challenges there?
01:12:16.260 | Like how would you feel about me hanging out
01:12:18.020 | with Ray Kurzweil and you not knowing about it?
01:12:20.420 | - Doesn't strike me as a problem.
01:12:28.060 | - Which you, the original?
01:12:30.220 | - Would that cause a problem for you?
01:12:34.140 | - No, I would really very much enjoy it.
01:12:37.460 | - No, not just hanging out with me,
01:12:38.740 | but if somebody hanging out with you, a replicant of you.
01:12:43.740 | - Well, I think I would start, it sounds exciting,
01:12:46.780 | but then what if they start doing better than me
01:12:50.620 | and take over my friend group?
01:12:55.020 | And then, because they may be an imperfect copy
01:13:00.020 | or they may be more social or all these kinds of things.
01:13:05.380 | And then I become like the old version
01:13:07.660 | that's not nearly as exciting.
01:13:10.260 | Maybe they're a copy of the best version of me
01:13:12.380 | on a good day.
01:13:13.220 | - Yeah, but if you hang out with a replicant of me
01:13:16.900 | and that turned out to be successful,
01:13:20.220 | I'd feel proud of that person 'cause it was based on me.
01:13:24.940 | - So, but it is a kind of death of this version of you.
01:13:29.940 | - Well, not necessarily.
01:13:33.940 | I mean, you can still be alive, right?
01:13:36.340 | - But, and you would be proud, okay.
01:13:38.580 | So, it's like having kids and you're proud
01:13:40.260 | that they've done even more than you were able to do.
01:13:42.700 | - Yeah, exactly.
01:13:43.540 | It does bring up new issues,
01:13:50.060 | but it seems like an opportunity.
01:13:54.880 | - Well, that replicant should probably
01:13:56.920 | have the same rights as you do.
01:13:58.560 | - Well, that gets into a whole issue
01:14:02.720 | because when a replicant occurs,
01:14:07.400 | they're not necessarily gonna have your rights.
01:14:10.320 | And if a replicant occurs to somebody who's already dead,
01:14:13.320 | do they have all the obligations
01:14:17.880 | that the original person had?
01:14:21.160 | Do they have all the agreements that they had?
01:14:25.780 | - I think you're gonna have to have laws that say, yes.
01:14:30.140 | There has to be, if you wanna create a replicant,
01:14:33.180 | they have to have all the same rights as human rights.
01:14:35.660 | - Well, you don't know.
01:14:37.020 | Someone can create a replicant and say,
01:14:38.260 | "Well, it's a replicant,
01:14:39.100 | but I didn't bother getting their rights."
01:14:41.700 | - But that would be illegal, I mean.
01:14:43.700 | Like, if you do that,
01:14:44.700 | you have to do that in the black market.
01:14:46.660 | If you wanna get an official replicant--
01:14:49.460 | - Okay, it's not so easy.
01:14:50.980 | Suppose you created multiple replicants,
01:14:53.700 | the original rights may be for one person
01:15:00.380 | and not for a whole group of people.
01:15:03.220 | - Sure.
01:15:05.500 | (laughs)
01:15:07.580 | So there has to be at least one,
01:15:10.620 | and then all the other ones kind of share the rights.
01:15:13.300 | Yeah, I just don't think that's very difficult
01:15:17.340 | to conceive for us humans, the idea that this country--
01:15:20.420 | - Well, you create a replicant that has certain,
01:15:23.700 | I mean, I've talked to people about this,
01:15:26.820 | including my wife, who would like to get back her father.
01:15:30.580 | And she doesn't worry about who has rights to what.
01:15:36.620 | She would have somebody that she could visit with
01:15:40.460 | and give her some satisfaction.
01:15:42.500 | And she wouldn't care about any of these other rights.
01:15:49.300 | - What does your wife think about multiple Ray Kurzweils?
01:15:51.900 | Have you had that discussion?
01:15:54.460 | - I haven't addressed that with her.
01:15:56.260 | (laughs)
01:15:58.260 | - I think ultimately that's an important question,
01:16:00.660 | loved ones, how they feel about,
01:16:03.580 | there's something about love--
01:16:05.060 | - Well, that's the key thing, right?
01:16:06.420 | If the loved one's rejected,
01:16:07.980 | it's not gonna work very well.
01:16:09.980 | So the loved ones really are the key determinant,
01:16:15.940 | whether or not this works or not.
01:16:19.220 | - But there's also ethical rules.
01:16:21.900 | We have to contend with the idea,
01:16:24.140 | and we have to contend with that idea with AI.
01:16:27.860 | - But what's gonna motivate it is,
01:16:30.260 | I mean, I talk to people who really miss people who are gone
01:16:34.620 | and they would love to get something back,
01:16:37.700 | even if it isn't perfect.
01:16:39.500 | And that's what's gonna motivate this.
01:16:47.140 | And that person lives on in some form.
01:16:51.180 | And the more data we have,
01:16:52.860 | the more we're able to reconstruct that person
01:16:56.060 | and allow them to live on.
01:16:57.500 | - And eventually, as we go forward,
01:17:01.420 | we're gonna have more and more of this data
01:17:03.140 | because we're gonna have nanobots
01:17:06.340 | that are inside our neocortex
01:17:08.340 | and we're gonna collect a lot of data.
01:17:10.240 | In fact, anything that's data is always collected.
01:17:15.820 | - There is something a little bit sad,
01:17:18.660 | which is becoming, or maybe it's hopeful,
01:17:23.180 | which is more and more common these days,
01:17:26.800 | which when a person passes away,
01:17:28.360 | you'll have their Twitter account,
01:17:30.060 | and you have the last tweet they tweeted,
01:17:34.060 | like something they-
01:17:34.900 | - And you can recreate them now
01:17:36.500 | with large language models and so on.
01:17:38.300 | I mean, you can create somebody that's just like them
01:17:40.820 | and can actually continue to communicate.
01:17:45.020 | - I think that's really exciting
01:17:46.420 | because I think in some sense,
01:17:49.340 | like if I were to die today,
01:17:51.700 | in some sense I would continue on if I continued tweeting.
01:17:55.200 | I tweet, therefore I am.
01:17:57.600 | - Yeah, well, I mean, that's one of the advantages
01:18:02.020 | of a replicant, that I can recreate
01:18:05.620 | the communications of that person.
01:18:08.280 | - Do you hope, do you think,
01:18:13.180 | do you hope humans will become a multi-planetary species?
01:18:17.380 | You've talked about the phases, the six epochs,
01:18:20.000 | and one of them is reaching out into the stars in part.
01:18:23.580 | - Yes, but the kind of attempts we're making now
01:18:28.180 | to go to other planetary objects
01:18:33.020 | doesn't excite me that much
01:18:36.500 | 'cause it's not really advancing anything.
01:18:38.780 | - It's not efficient enough?
01:18:41.100 | - Yeah, and we're also sending out human beings,
01:18:45.180 | which is a very inefficient way
01:18:50.420 | to explore these other objects.
01:18:52.620 | What I'm really talking about
01:18:55.100 | in the sixth epoch, the universe wakes up,
01:18:59.240 | it's where we can spread our superintelligence
01:19:03.060 | throughout the universe.
01:19:05.380 | And that doesn't mean sending
01:19:07.340 | very soft, squishy creatures like humans.
01:19:10.200 | - Yeah, the universe wakes up.
01:19:13.840 | - I mean, we would send intelligent masses of nanobots
01:19:18.840 | which can then go out and colonize
01:19:22.120 | these other parts of the universe.
01:19:27.840 | - Do you think there's intelligent alien civilizations
01:19:31.600 | out there that our bots might meet?
01:19:34.140 | - My hunch is no.
01:19:38.780 | Most people say yes, absolutely.
01:19:40.720 | I mean, and-- - Universe is too big.
01:19:43.480 | - And they'll cite the Drake equation.
01:19:46.160 | And I think in "Singularity is Near,"
01:19:50.880 | I have two analyses of the Drake equation,
01:19:56.260 | both with very reasonable assumptions.
01:19:58.980 | And one gives you thousands of advanced civilizations
01:20:05.040 | in each galaxy.
01:20:07.520 | And another one gives you one civilization.
01:20:11.860 | And we know of one.
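The Drake equation itself is just a product of factors, which is why "reasonable" assumptions can land anywhere from thousands of civilizations per galaxy to effectively one. A minimal sketch with two made-up parameter sets (illustrative only, not the two analyses from the book):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L  (communicating civilizations per galaxy)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Two illustrative parameter sets (assumptions for this sketch only).
optimistic  = drake(R_star=3, f_p=0.9, n_e=1.0, f_l=0.5, f_i=0.5, f_c=0.5, L=10_000)
pessimistic = drake(R_star=1, f_p=0.5, n_e=0.1, f_l=0.05, f_i=0.01, f_c=0.1, L=500)

print(f"optimistic:  ~{optimistic:,.0f} civilizations")   # thousands
print(f"pessimistic: ~{pessimistic:,.4f} civilizations")  # well under one
```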
01:20:13.740 | A lot of the analyses are forgetting
01:20:16.620 | the exponential growth of computation.
01:20:20.200 | Because we've gone from where the fastest way
01:20:24.900 | I could send a message to somebody was with a pony,
01:20:28.540 | which was what, like a century and a half ago?
01:20:33.380 | - Yeah.
01:20:36.220 | The advanced civilization we have today,
01:20:37.880 | and if you accept what I've said,
01:20:40.920 | go forward a few decades,
01:20:42.760 | you're gonna have an absolutely fantastic amount
01:20:44.960 | of civilization compared to a pony,
01:20:47.920 | and that's in a couple hundred years.
01:20:50.440 | - Yeah, the speed and the scale of information transfer
01:20:53.360 | is just growing exponentially, in a blink of an eye.
01:20:57.560 | - Now think about these other civilizations.
01:21:01.720 | They're gonna be spread out across cosmic time.
01:21:06.020 | So if something is ahead of us or behind us,
01:21:10.180 | it could be ahead of us or behind us
01:21:11.780 | by maybe millions of years, which isn't that much.
01:21:16.460 | I mean, the world is billions of years old,
01:21:21.460 | 14 billion or something.
01:21:23.980 | So even a thousand years,
01:21:27.540 | if two or three hundred years is enough to go from a pony
01:21:30.860 | to a fantastic amount of civilization, we would see that.
01:21:35.100 | So of other civilizations that have occurred,
01:21:39.760 | okay, some might be behind us,
01:21:42.020 | but some might be ahead of us.
01:21:44.000 | If they're ahead of us, they're ahead of us
01:21:45.840 | by thousands, millions of years,
01:21:49.600 | and they would be so far beyond us,
01:21:51.800 | they would be doing galaxy-wide engineering.
01:21:55.280 | But we don't see anything doing galaxy-wide engineering.
01:22:00.120 | So either they don't exist, or this very universe
01:22:05.120 | is a construction of an alien species.
01:22:08.400 | We're living inside a video game.
01:22:10.940 | - Well, that's another explanation that, yes,
01:22:14.920 | you've got some teenage kids in another civilization.
01:22:19.180 | - Do you find compelling the simulation hypothesis
01:22:22.320 | as a thought experiment, that we're living in a simulation?
01:22:25.680 | - The universe is computational,
01:22:29.440 | so we are an example in a computational world.
01:22:34.440 | Therefore, it is a simulation.
01:22:39.160 | It doesn't necessarily mean an experiment
01:22:41.080 | by some high school kid in another world,
01:22:44.860 | but it nonetheless is taking place in a computational world,
01:22:49.860 | and everything that's going on
01:22:51.640 | is basically a form of computation.
01:22:58.120 | So you really have to define what you mean
01:23:00.680 | by this whole world being a simulation.
01:23:05.680 | - Well, then it's the teenager that makes the video game.
01:23:11.400 | You know, us humans with our current limited
01:23:14.720 | cognitive capability have strived to understand ourselves,
01:23:19.720 | and we have created religions.
01:23:23.880 | We think of God.
01:23:26.560 | Whatever that is, do you think God exists?
01:23:31.560 | And if so, who is God?
01:23:34.520 | - I alluded to this before.
01:23:37.720 | We started out with lots of particles going around,
01:23:42.840 | and there's nothing that represents love and creativity.
01:23:47.840 | And somehow we've gotten into a world
01:23:55.000 | where love actually exists,
01:23:56.880 | and that has to do actually with consciousness,
01:24:00.000 | because you can't have love without consciousness.
01:24:03.160 | So to me, that's God,
01:24:04.960 | the fact that we have something where love,
01:24:09.200 | where you can be devoted to someone else
01:24:11.240 | and really feel the love, that's God.
01:24:15.240 | And if you look at the Old Testament,
01:24:22.160 | it was actually created by several
01:24:25.160 | different authors.
01:24:29.200 | And I think they've identified three of them.
01:24:32.480 | One of them dealt with God as a person
01:24:39.080 | that you can make deals with, and he gets angry,
01:24:42.440 | and he wreaks vengeance on various people.
01:24:47.440 | But two of them actually talk about God
01:24:50.440 | as a symbol of love and peace and harmony and so forth.
01:24:55.440 | That's how they describe God.
01:24:59.980 | So that's my view of God,
01:25:03.640 | not as a person in the sky that you can make deals with.
01:25:08.120 | - It's whatever the magic is that goes from basic elements
01:25:13.200 | to things like consciousness and love.
01:25:15.960 | Do you think, one of the things I find
01:25:19.200 | extremely beautiful and powerful is cellular automata,
01:25:22.240 | which you also touch on.
01:25:23.600 | Do you think whatever the heck happens in cellular automata
01:25:27.720 | where interesting, complicated objects emerge,
01:25:30.480 | God is in there too?
01:25:33.480 | The emergence of love in this seemingly primitive universe?
01:25:38.480 | - Well, the goal of creating a replicant is that
01:25:43.120 | they would love you and you would love them.
01:25:49.080 | There wouldn't be much point of doing it
01:25:50.840 | if that didn't happen.
01:25:52.720 | - But all of it, I guess what I'm saying
01:25:54.880 | about cellular automata is it's primitive building blocks,
01:25:59.280 | and they somehow create beautiful things.
01:26:03.440 | Is there some deep truth to that
01:26:06.280 | about how our universe works?
01:26:07.960 | Is the emergence from simple rules,
01:26:11.160 | beautiful, complex objects can emerge?
01:26:14.080 | Is that the thing that made us?
01:26:16.680 | - Yeah.
01:26:17.520 | We went through all the six phases of reality.
01:26:21.600 | - That's a good way to look at it.
01:26:23.680 | It does make some point to the whole value
01:26:27.360 | of having a universe.
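For the cellular-automata point, a minimal sketch of an elementary automaton shows complexity emerging from a trivial rule; Rule 110 is a standard example, and the width and step count here are arbitrary choices:

```python
# Elementary cellular automaton: each cell's next state depends only on itself
# and its two neighbors, via an 8-entry lookup table (here Rule 110, a standard
# example of complex behavior arising from a trivial rule).
RULE = 110
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1  # single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Each cell's next state is read from an eight-entry lookup table, yet the printed pattern quickly develops structure that no single rule entry hints at.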
01:26:29.840 | - Do you think about your own mortality?
01:26:34.080 | Are you afraid of it?
01:26:35.120 | - Yes, but I keep going back to my idea
01:26:41.320 | of being able to expand human life quickly enough
01:26:47.320 | in advance of our getting there,
01:26:50.120 | longevity, escape velocity,
01:26:53.080 | which we're not quite at yet,
01:26:57.640 | but I think we're actually pretty close,
01:27:01.480 | particularly with, for example, doing simulated biology.
01:27:05.260 | I think we can probably get there within,
01:27:08.880 | say, by the end of this decade, and that's my goal.
01:27:12.800 | - Do you hope to achieve the longevity, escape velocity?
01:27:16.400 | Do you hope to achieve immortality?
01:27:18.520 | - Well, immortality is hard to say.
01:27:22.960 | I can't really come on your program saying, I've done it.
01:27:26.080 | I've achieved immortality, because it's never forever.
01:27:30.300 | - A long time, a long time of living well.
01:27:35.280 | - But we'd like to actually advance human life expectancy,
01:27:38.880 | advance my life expectancy more than a year
01:27:42.480 | every year, and I think we can get there within,
01:27:45.840 | by the end of this decade.
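Longevity escape velocity is a rate comparison: if remaining life expectancy grows by more than one year per calendar year, it never runs out. A minimal sketch with illustrative numbers (the starting expectancy and annual gains are assumptions, not predictions):

```python
def remaining_years(start_expectancy, annual_gain, horizon=60):
    """Track remaining life expectancy year by year under a fixed annual gain."""
    remaining = start_expectancy
    trajectory = []
    for _ in range(horizon):
        trajectory.append(remaining)
        remaining = remaining - 1 + annual_gain  # one year passes, science adds some back
        if remaining <= 0:
            break
    return trajectory

# Illustrative only: below escape velocity (0.3 years gained per year)
# versus above it (1.2 years gained per year).
print(len(remaining_years(20, 0.3)))   # expectancy runs out after a few decades
print(remaining_years(20, 1.2)[:5])    # remaining expectancy keeps climbing
```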
01:27:47.820 | - How do you think we do it?
01:27:49.440 | So there's practical things in "Transcend,
01:27:53.600 | "The Nine Steps to Living Well Forever," your book.
01:27:56.200 | You describe just that.
01:27:58.360 | There's practical things like health,
01:28:00.520 | exercise, all those things.
01:28:02.320 | - Yeah, I mean, we live in a body that doesn't last forever.
01:28:07.320 | There's no reason why it can't, though,
01:28:11.160 | and we're discovering things, I think, that will extend it.
01:28:14.160 | But you do have to deal with, I mean,
01:28:19.640 | I've got various issues.
01:28:22.080 | Went to Mexico 40 years ago, developed salmonella.
01:28:28.240 | It created pancreatitis, which gave me
01:28:33.080 | a strange form of diabetes.
01:28:37.400 | It's not type one diabetes, 'cause that's
01:28:41.800 | an autoimmune disorder that destroys your pancreas.
01:28:44.880 | I don't have that.
01:28:45.920 | But it's also not type two diabetes,
01:28:48.720 | 'cause type two diabetes, your pancreas works fine,
01:28:51.800 | but your cells don't absorb the insulin well.
01:28:55.760 | I don't have that either.
01:28:57.060 | The pancreatitis I had partially damaged my pancreas,
01:29:04.560 | but it was a one-time thing, it didn't continue,
01:29:07.420 | and I've learned now how to control it.
01:29:11.640 | But so that's just something I had to do
01:29:13.800 | in order to continue to exist.
01:29:18.560 | - Since your particular biological system,
01:29:20.440 | you had to figure out a few hacks,
01:29:22.560 | and the idea is that science would be able
01:29:24.920 | to do that much better, actually.
01:29:26.440 | - Yeah, so I mean, I do spend a lot of time
01:29:29.360 | just tinkering with my own body to keep it going.
01:29:34.240 | So I do think I'll last 'til the end of this decade,
01:29:37.720 | and I think we'll achieve longevity, escape velocity.
01:29:41.680 | I think that we'll start with people
01:29:43.540 | who are very diligent about this.
01:29:46.200 | Eventually it'll become sort of routine
01:29:48.840 | that people will be able to do it.
01:29:51.400 | So if you're talking about kids today,
01:29:54.320 | or even people in their 20s or 30s,
01:29:56.680 | that's really not a very serious problem.
01:30:01.280 | I have had some discussions with relatives
01:30:05.080 | who are like almost 100,
01:30:07.900 | and saying, well, we're working on it
01:30:11.280 | as quickly as possible,
01:30:12.440 | but I don't know if that's gonna work.
01:30:14.780 | - Is there a case, this is a difficult question,
01:30:18.400 | but is there a case to be made against living forever
01:30:23.400 | that a finite life, that mortality is a feature, not a bug,
01:30:29.640 | that living a shorter life, that dying, makes ice cream
01:30:34.640 | taste delicious, makes life intensely beautiful,
01:30:40.240 | more than it otherwise may be?
01:30:42.080 | - Most people believe that way,
01:30:44.020 | except if you present a death of anybody
01:30:47.860 | they care about or love,
01:30:51.480 | they find that extremely depressing.
01:30:55.320 | And I know people who feel that way 20, 30,
01:30:59.620 | 40 years later, they still want them back.
01:31:03.860 | So I mean, death is not something to celebrate,
01:31:10.640 | but we've lived in a world where people just accept this.
01:31:15.980 | Well, life is short, you see it all the time on TV,
01:31:18.340 | oh, life's short, you have to take advantage of it.
01:31:21.300 | And nobody accepts the fact
01:31:22.700 | that you could actually go beyond normal lifetimes.
01:31:27.940 | But anytime we talk about death or a death of a person,
01:31:31.580 | even one death is a terrible tragedy.
01:31:35.420 | If you have somebody that lives to 100 years old,
01:31:39.020 | we still love them in return.
01:31:42.860 | And there's no limitation to that.
01:31:47.620 | In fact, these kinds of trends are gonna provide
01:31:52.000 | greater and greater opportunity for everybody,
01:31:54.700 | even if we have more people.
01:31:57.140 | - So let me ask about an alien species
01:32:00.300 | or a super intelligent AI 500 years from now
01:32:03.060 | that will look back and remember Ray Kurzweil version zero.
01:32:08.060 | Before the replicants spread,
01:32:13.140 | how do you hope they remember you?
01:32:17.020 | In a "Hitchhiker's Guide to the Galaxy" summary
01:32:20.460 | of Ray Kurzweil, what do you hope your legacy is?
01:32:23.160 | - Well, I mean, I do hope to be around, so that's--
01:32:26.780 | - Some version of you, yes.
01:32:28.060 | Do you think you'll be the same person around?
01:32:32.140 | - I mean, am I the same person I was when I was 20 or 10?
01:32:37.020 | - You would be the same person in that same way,
01:32:39.780 | but yes, we're different.
01:32:41.560 | All we have of that, all you have of that person
01:32:46.900 | is your memories, which are probably distorted in some way.
01:32:51.900 | Maybe you just remember the good parts.
01:32:55.860 | Depending on your psyche, you might focus on the bad parts,
01:32:59.740 | might focus on the good parts.
01:33:01.260 | - Right, but I mean, I'd still have a relationship
01:33:06.500 | to the way I was when I was younger.
01:33:10.820 | - How will you and the other super intelligent AIs
01:33:14.220 | remember you of today from 500 years ago?
01:33:17.740 | What do you hope to be remembered by, this version of you,
01:33:22.860 | before the singularity?
01:33:25.660 | - Well, I think it's expressed well in my books,
01:33:28.220 | trying to create some new realities that people will accept.
01:33:32.620 | I mean, that's something that gives me great pleasure
01:33:36.060 | and greater insight into what makes humans valuable.
01:33:45.340 | I'm not the only person who's tempted to comment on that.
01:33:55.740 | - And optimism that permeates your work.
01:34:00.700 | Optimism about the future.
01:34:02.320 | It's ultimately that optimism paves the way
01:34:05.300 | for building a better future.
01:34:06.740 | - I agree with that.
01:34:09.100 | - So you asked your dad about the meaning of life
01:34:15.100 | and he said, "Love." Let me ask you the same question.
01:34:19.180 | What's the meaning of life?
01:34:21.120 | Why are we here?
01:34:22.840 | This beautiful journey that we're on in phase four,
01:34:27.420 | reaching for phase five of this evolution
01:34:32.460 | of information processing, why?
01:34:35.500 | - Well, I think I'd give the same answer as my father.
01:34:38.340 | Because if there were no love
01:34:43.860 | and we didn't care about anybody,
01:34:45.660 | there'd be no point existing.
01:34:48.300 | - Love is the meaning of life.
01:34:51.460 | The AI version of your dad had a good point.
01:34:54.260 | Well, I think that's a beautiful way to end it.
01:34:57.840 | Ray, thank you for your work.
01:34:59.240 | Thank you for being who you are.
01:35:01.120 | Thank you for dreaming about a beautiful future
01:35:03.440 | and creating it along the way.
01:35:06.480 | And thank you so much for spending
01:35:09.340 | a really valuable time with me today.
01:35:10.900 | This was awesome.
01:35:12.220 | - It was my pleasure and you have some great insights
01:35:16.320 | both into me and into humanity as well.
01:35:19.040 | So I appreciate that.
01:35:21.420 | - Thanks for listening to this conversation
01:35:22.940 | with Ray Kurzweil.
01:35:24.340 | To support this podcast,
01:35:25.580 | please check out our sponsors in the description.
01:35:28.380 | And now let me leave you with some words
01:35:30.420 | from Isaac Asimov.
01:35:31.900 | "It is change, continuous change,
01:35:35.800 | "inevitable change,
01:35:37.580 | "that is the dominant factor in society today.
01:35:41.000 | "No sensible decision could be made any longer
01:35:43.780 | "without taking into account not only the world as it is,
01:35:47.260 | "but the world as it will be.
01:35:49.500 | "This in turn means that our statesmen,
01:35:52.500 | "our businessmen, our every man
01:35:55.300 | "must take on a science fictional way of thinking."
01:35:58.540 | Thank you for listening and hope to see you next time.
01:36:02.720 | (upbeat music)
01:36:05.300 | (upbeat music)
01:36:07.880 | [BLANK_AUDIO]