Ray Kurzweil: Singularity, Superintelligence, and Immortality | Lex Fridman Podcast #321


Chapters

0:00 Introduction
1:06 Turing test
14:51 Brain–computer interfaces
26:31 Singularity
32:51 Virtual reality
35:31 Evolution of information processing
41:57 Automation
51:57 Nanotechnology
53:51 Nuclear war
55:57 Uploading minds
1:03:38 How to think
1:10:08 Digital afterlife
1:19:28 Intelligent alien life
1:22:18 Simulation hypothesis
1:26:31 Mortality
1:34:10 Meaning of life

Transcript

By the time we get to 2045, we'll be able to multiply our intelligence many millions fold, and it's just very hard to imagine what that will be like. - The following is a conversation with Ray Kurzweil, author, inventor, and futurist, who has an optimistic view of our future as a human civilization, predicting that exponentially improving technologies will take us to a point of a singularity, beyond which super intelligent artificial intelligence will transform our world in nearly unimaginable ways.

18 years ago, in the book "The Singularity Is Near," he predicted that the onset of the singularity will happen in the year 2045. He still holds to this prediction and estimate. In fact, he's working on a new book on this topic that will hopefully be out next year. This is the Lex Fridman Podcast.

To support it, please check out our sponsors in the description. And now, dear friends, here's Ray Kurzweil. In your 2005 book titled "The Singularity Is Near," you predicted that the singularity will happen in 2045. So now, 18 years later, do you still estimate that the singularity will happen in 2045?

And maybe first, what is the singularity, the technological singularity, and when will it happen? - Singularity is where computers really change our view of what's important and change who we are. But we're getting close to some salient things that will change who we are. A key thing is 2029, when computers will pass the Turing test.

And there's also some controversy whether the Turing test is valid. I believe it is. Most people do believe that, but there's some controversy about that. But Stanford got very alarmed at my prediction about 2029. I made this in 1999 in my book. - "The Age of Spiritual Machines." - Right.

- And then you repeated the prediction in 2005. - In 2005. - Yeah. - So they held an international conference, you might have been aware of it, of AI experts in 1999 to assess this view. So people gave different predictions and they took a poll. It was really the first time that AI experts worldwide were polled on this prediction.

And the average poll was 100 years. 20% believed it would never happen. And that was the view in 1999. 80% believed it would happen, but not within their lifetimes. There's been so many advances in AI that the poll of AI experts has come down over the years. So a year ago, something called Metaculus, which you may be aware of, assesses different types of experts on the future.

They again assessed what AI experts then felt. And they were saying 2042. - For the Turing test. - For the Turing test. - Yeah, so it's coming down. - And I was still saying 2029. A few weeks ago, they again did another poll and it was 2030. So AI experts now basically agree with me.

I haven't changed at all, I've stayed with 2029. And AI experts now agree with me, but they didn't agree at first. - So Alan Turing formulated the Turing test and-- - Right, now what he said was very little about it. I mean, the 1950 paper where he had articulated the Turing test, there's like a few lines that talk about the Turing test.

And it really wasn't very clear how to administer it. And he said if they did it in like 15 minutes, that would be sufficient, which I don't really think is the case. These large language models now, some people are convinced by it already. I mean, you can talk to it and have a conversation with it.

You can actually talk to it for hours. So it requires a little more depth. There's some problems with large language models, which we can talk about. But some people are convinced by the Turing test. Now, if somebody passes the Turing test, what are the implications of that? Does that mean that they're sentient, that they're conscious or not?

It's not necessarily clear what the implications are. Anyway, I believe 2029, that's six, seven years from now, we'll have something that passes the Turing test and a valid Turing test, meaning it goes for hours, not just a few minutes. - Can you speak to that a little bit? What is your formulation of the Turing test?

You've proposed a very difficult version of the Turing test, so what does that look like? - Basically, it's just to assess it over several hours and also have a human judge that's fairly sophisticated on what computers can do and can't do. If you take somebody who's not that sophisticated or even an average engineer, they may not really assess various aspects of it.

- So you really want the human to challenge the system. - Exactly, exactly. - On its ability to do things like common sense reasoning, perhaps. - That's actually a key problem with large language models. They don't do these kinds of tests that would involve assessing chains of reasoning. But you can lose track of that.

If you talk to them, they actually can talk to you pretty well and you can be convinced by it. But it's somebody that would really convince you that it's a human, whatever that takes. Maybe it would take days or weeks, but it would really convince you that it's human.

Large language models can appear that way. You can read conversations and they appear pretty good. There are some problems with it. It doesn't do math very well. You can ask, "How many legs do 10 elephants have?" And they'll tell you, "Well, okay, each elephant has four legs and it's 10 elephants, so it's 40 legs." And you go, "Okay, that's pretty good."

"How many legs do 11 elephants have?" And they don't seem to understand the question. - Do all humans understand that question? - No, that's the key thing. I mean, how advanced a human do you want it to be? But we do expect a human to be able to do multi-chain reasoning, to be able to take a few facts and put them together, not perfectly.

And we see that in a lot of polls that people don't do that perfectly at all. But, so it's not very well-defined, but it's something where it really would convince you that it's a human. - Is your intuition that large language models will not be solely the kind of system that passes the Turing test in 2029?

Do we need something else? - No, I think it will be a large language model, but they have to go beyond what they're doing now. I think we're getting there. And another key issue is if somebody actually passes the Turing test validly, I would believe they're conscious. And then not everybody would say that.

So, okay, we can pass the Turing test, but we don't really believe that it's conscious. That's a whole nother issue. But if it really passes the Turing test, I would believe that it's conscious. But I don't believe that of large language models today. - If it appears to be conscious, that's as good as being conscious, at least for you, in some sense.

- I mean, consciousness is not something that's scientific. I mean, I believe you're conscious, but it's really just a belief, and we believe that about other humans that at least appear to be conscious. When you go outside of shared human assumption, like are animals conscious? Some people believe they're not conscious.

Some people believe they are conscious. And would a machine that acts just like a human be conscious? I mean, I believe it would be, but that's really a philosophical belief. It's not, you can't prove it. I can't take an entity and prove that it's conscious. There's nothing that you can do that would indicate that.

- It's like saying a piece of art is beautiful. You can say it. Multiple people can experience a piece of art as beautiful, but you can't prove it. - But it's also an extremely important issue. I mean, imagine if you had something where nobody's conscious. The world may as well not exist.

Some people, like say Marvin Minsky, said, well, consciousness is not logical. It's not scientific, and therefore we should dismiss it. And any talk about consciousness is just not to be believed. But when he actually engaged with somebody who was conscious, he actually acted as if they were conscious. He didn't ignore that.

- He acted as if consciousness does matter. - Exactly, whereas he said it didn't matter. - Well, that's Marvin Minsky. - Yeah. - He's full of contradictions. - But that's true of a lot of people as well. - But to you, consciousness matters. - But to me, it's very important, but I would say it's not a scientific issue.

It's a philosophical issue. And people have different views. Some people believe that anything that makes a decision is conscious. So your light switch is conscious. Its level of consciousness is low. It's not very interesting, but that's a consciousness. And anything, so a computer that makes a more interesting decision, still not at human levels, but it's also conscious and at a higher level than your light switch.

So that's one view. There's many different views of what consciousness is. - So if a system passes the Turing test, it's not scientific, but in issues of philosophy, things like ethics start to enter the picture. Do you think there would be, we would start contending as a human species about the ethics of turning off such a machine?

- Yeah, I mean, that's definitely come up. Hasn't come up in reality yet. - Yet. - But I'm talking about 2029. It's not that many years from now. So what are our obligations to it? It has a different, I mean, a computer that's conscious has a little bit different connotations than a human.

We have a continuous consciousness. We're in an entity that does not last forever. Now, actually, a significant portion of humans still exist and are therefore still conscious, but anybody who is over a certain age doesn't exist anymore. That wouldn't be true of a computer program. You could completely turn it off and a copy of it could be stored and you could recreate it.

And so it has a different type of validity. You could actually take it back in time. You could eliminate its memory and have it go over again. I mean, it has a different kind of connotation than humans do. - Well, perhaps it can do the same thing with humans.

It's just that we don't know how to do that yet. It's possible that we figure out all of these things on the machine first, but that doesn't mean the machine isn't conscious. - I mean, if you look at the way people react to, say, C-3PO or other machines that are conscious in movies, they don't actually present how it's conscious, but we see that they are a machine and people will believe that they are conscious and they'll actually worry about it if they get into trouble and so on.

- So 2029 is going to be the first year when a major thing happens. - Right. - And that will shake our civilization to start to consider the role of AI in this world. - Yes and no. I mean, this one guy at Google claimed that the machine was conscious.

- But that's just one person. - Right. - When it starts to happen at scale. - Well, that's exactly right because most people have not taken that position. I don't take that position. I mean, I've used different things like this, and they don't appear to me to be conscious.

As we eliminate various problems of these large language models, more and more people will accept that they're conscious. So when we get to 2029, I think a large fraction of people will believe that they're conscious. So it's not gonna happen all at once. I believe it will actually happen gradually and it's already started to happen.

- And so that takes us one step closer to the singularity. - Another step then is in the 2030s, when we can actually connect our neocortex, which is where we do our thinking, to computers. And I mean, just as this actually gains a lot from being connected to computers, that will amplify its abilities.

I mean, if this did not have any connection, it would be pretty stupid. It could not answer any of your questions. - If you're just listening to this, by the way, Ray's holding up the all-powerful smartphone. - So we're gonna do that directly from our brains. I mean, these are pretty good.

These already have amplified our intelligence. I'm already much smarter than I would otherwise be if I didn't have this. 'Cause I remember, for my first book, "The Age of Intelligent Machines," there was no way to get information from computers. I actually would go to a library, find a book, find the page that had the information I wanted, and I'd go to the copier, and my most significant information tool was a roll of quarters where I could feed the copier.

So we're already greatly advanced that we have these things. There's a few problems with it. First of all, I constantly put it down, and I don't remember where I put it. I've actually never lost it, but you have to find it, and then you have to turn it on.

So there's a certain amount of steps. It would actually be quite useful if someone would just listen to your conversation and say, "Oh, that's so-and-so actress," and tell you what you're talking about. - So going from active to passive, where it just permeates your whole life. - Yeah, exactly.

- The way your brain does when you're awake. Your brain is always there. - Right. Now, that's something that could actually just about be done today, where it would listen to your conversation, understand what you're saying, understand what you're missing, and give you that information. But another step is to actually go inside your brain.

And there are some prototypes where you can connect your brain. They actually don't have the amount of bandwidth that we need. They can work, but they work fairly slowly. So it would actually connect to your neocortex. And the neocortex, which I describe in "How to Create a Mind," actually has different levels.

And as you go up the levels, it's kind of like a pyramid. The top level is fairly small. And that's the level where you wanna connect these brain extenders. So I believe that will happen in the 2030s. So just the way this is greatly amplified by being connected to the cloud, we can connect our own brain to the cloud, and just do what we can do by using this machine.

- Do you think it would look like the brain-computer interface of Neuralink? So would it be-- - Well, Neuralink's an attempt to do that. It doesn't have the bandwidth that we need. - Yet, right? - Right, but I think, I mean, they're gonna get permission for this, because there are a lot of people who absolutely need it, because they can't communicate.

I know a couple of people like that, who have ideas, and they cannot move their muscles, and so on, they can't communicate. So for them, this would be very valuable. But we could all use it. Basically, it'd be, turn this into something that would be like we have a phone, but it would be in our minds, it would be kind of instantaneous.

- And maybe communication between two people would not require this low-bandwidth mechanism of language. - Yes, exactly. - Of spoken word. - We don't know what that would be, although we do know that computers can share information like language instantly. They can share many, many books in a second, so we could do that as well.
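As a rough illustration of the bandwidth gap being discussed here, a back-of-the-envelope sketch in Python; the book size, link speed, and spoken-language information rate below are assumed round numbers for illustration, not figures from the conversation:

```python
# Back-of-the-envelope only, with assumed round numbers: how much faster machines
# can share "language" than humans can speak it.
BOOK_BYTES = 1_000_000           # ~1 MB of plain text per book (assumption)
LINK_BYTES_PER_SEC = 1.25e9      # a 10 Gbit/s network link (assumption)
SPEECH_BITS_PER_SEC = 40         # rough order of magnitude for spoken language

books_per_second = LINK_BYTES_PER_SEC / BOOK_BYTES
speedup_vs_speech = (LINK_BYTES_PER_SEC * 8) / SPEECH_BITS_PER_SEC

print(f"{books_per_second:,.0f} books per second over the link")
print(f"~{speedup_vs_speech:,.0f}x the information rate of speech")
```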

If you look at what our brain does, it actually can manipulate different parameters. So we talk about these large language models. I mean, I had written that it requires a certain amount of information in order to be effective, and that we would not see AI really being effective until it got to that level.

And we had large language models that were like 10 billion bytes, didn't work very well. They finally got to 100 billion bytes, and now they work fairly well, and now we're going to a trillion bytes. If you say LaMDA has 100 billion bytes, what does that mean? Well, what if you had something that had one byte, one parameter?

Maybe you want to tell whether or not something's an elephant, and so you put in something that would detect its trunk. If it has a trunk, it's an elephant. If it doesn't have a trunk, it's not an elephant. That would work fairly well. There's a few problems with it.

Really wouldn't be able to tell what a trunk is, but anyway. - And maybe other things other than elephants have trunks. You might get really confused. - Yeah, exactly. - I'm not sure which animals have trunks, but you know, plus how do you define a trunk? But yeah, that's one parameter.

You can do okay. - So these things have 100 billion parameters, so they're able to deal with very complex issues. - All kinds of trunks. - Human beings actually have a little bit more than that, but they're getting to the point where they can emulate humans. If we were able to connect this to our neocortex, we would basically add more of these abilities to make distinctions, and it could ultimately be much smarter and also be attached to information that we feel is reliable.
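To make the "one parameter" example above concrete, here is a minimal Python sketch of the trunk-based elephant detector described in the conversation; the feature name and the animal examples are hypothetical, and the point is only that a single hand-picked parameter fails exactly where Kurzweil says it does:

```python
# A one-parameter "model": the entire decision is a single hand-picked feature.
def is_elephant(has_trunk: bool) -> bool:
    # If it has a trunk, call it an elephant; if not, don't.
    return has_trunk

print(is_elephant(True))    # elephant -> True
print(is_elephant(False))   # horse    -> False
print(is_elephant(True))    # tapir    -> True (wrong: other animals have trunks)
# And nothing here defines what a "trunk" even is, which is the other problem
# raised above; models with billions of parameters have to learn that, too.
```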

So that's where we're headed. - So you think that there will be a merger in the '30s, an increasing amount of merging between the human brain and the AI brain. - Exactly. And the AI brain is really an emulation of human beings. I mean, that's why we're creating them, because human beings act the same way, and this is basically to amplify them.

I mean, this amplifies our brain. It's a little bit clumsy to interact with, but it definitely, it's way beyond what we had 15 years ago. - But the implementation becomes different, just like a bird versus the airplane. Even though the AI brain is an emulation, it starts adding features we might not otherwise have, like ability to consume a huge amount of information quickly, like look up thousands of Wikipedia articles in one take.

- Exactly. And we can get, for example, to issues like simulated biology, where it can simulate many different things at once. We already had one example of simulated biology, which is the Moderna vaccine, and that's gonna be now the way in which we create medications. But they were able to simulate what each example of an mRNA would do to a human being, and they were able to simulate that quite reliably.

And they actually simulated billions of different mRNA sequences, and they found the ones that were the best, and they created the vaccine. And they did, and talk about doing it quickly, they did that in two days. Now, how long would a human being take to simulate billions of different mRNA sequences?

I don't know that we could do it at all, but it would take many years. They did it in two days. And one of the reasons that people didn't like vaccines is because it was done too quickly, and it was done too fast. And they actually included the time it took to test it out, which was 10 months.

So they figured, okay, it took 10 months to create this. Actually, it took us two days. And we also will be able to ultimately do the tests in a few days as well. - Oh, 'cause we can simulate how the body will respond to it. - Yeah. - More and more.

- That's a little bit more complicated, 'cause the body has a lot of different elements, and we have to simulate all of that. But that's coming as well. So ultimately, we could create it in a few days, and then test it in a few days, and it would be done.

And we can do that with every type of medical insufficiency that we have. - So curing all diseases. - Yes, yeah. - Improving certain functions of the body, supplements, drugs, for recreation, for health, for performance, for productivity, all that kind of stuff. - Well, that's where we're headed. 'Cause I mean, right now, we have a very inefficient way of creating these new medications.

But we've already shown it. And the Moderna vaccine is actually the best of the vaccines we've had. And it literally took two days to create. And we'll get to the point where we can test it out also quickly. - Are you impressed by AlphaFold and the solution to the protein folding problem, which essentially is simulating, modeling this primitive building block of life, which is a protein, and its 3D shape?

- It's pretty remarkable that they can actually predict what the 3D shape of these things are. But they did it with the same type of neural net, the one, for example, that played Go. - So it's all the same. - It's all the same. - All the same approaches.

- They took that same thing and just changed the rules to chess. And within a couple of days, it played chess at a level greater than any human being. And the same thing then worked for AlphaFold, which no human had done. I mean, the best humans could maybe do 15, 20% of figuring out what the shape would be.

And after a few takes, it ultimately did just about 100%. - Do you still think the singularity will happen in 2045? And what does that look like? - You know, once we can amplify our brain with computers directly, which will happen in the 2030s, that's gonna keep growing. That's another whole theme, which is the exponential growth of computing power.

- Yeah, so looking at price performance of computation from 1939 to 2021. - Right, so that starts with the very first computer actually created by a German during World War II. You might have thought that that might be significant, but actually the Germans didn't think computers were significant, and they completely rejected it.

The second one is also the Zuse 2. - And by the way, we're looking at a plot with the X-axis being the year from 1935 to 2025, and on the Y-axis, in log scale, is computations per second per constant dollar, so dollars normalized for inflation. And it's growing linearly on the log scale, which means it's growing exponentially.

- The third one was the British computer, which the Allies did take very seriously, and it cracked the German code and enabled the British to win the Battle of Britain, which otherwise absolutely would not have happened if they hadn't cracked the code using that computer. But that's an exponential graph, so a straight line on that graph is exponential growth.
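A minimal sketch, with synthetic data rather than the chart's actual values, of the point just made: a straight line on a log-scale plot means exponential growth, and its slope gives the doubling time.

```python
import numpy as np

years = np.arange(1939, 2022)
# Synthetic price-performance series that doubles every 1.5 years (an
# illustrative assumption, not the chart's real data).
ops_per_dollar = 1e-3 * 2 ** ((years - 1939) / 1.5)

# On a log2 scale the series is a straight line; fit it and read off the slope.
slope, intercept = np.polyfit(years, np.log2(ops_per_dollar), 1)
print(f"doublings per year: {slope:.2f}")
print(f"doubling time: {1 / slope:.2f} years")   # recovers ~1.5
```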

And you see 80 years of exponential growth. And I would say about every five years, and this happened shortly before the pandemic, people say this is coming to an end. They call it Moore's Law, which is not correct, because it's not all Intel. In fact, this started decades before Intel was even created.

It wasn't with transistors formed into a grid. - So it's not just transistor count or transistor size. - Right, this started with relays, then went to vacuum tubes, then went to individual transistors, and then to integrated circuits. And integrated circuits actually starts like in the middle of this graph.

And it has nothing to do with Intel. Intel actually was a key part of this, but a few years ago, they stopped making the fastest chips. But if you take the fastest chip of any technology in that year, you get this kind of graph. And it's definitely continuing for 80 years.

- So you don't think Moore's Law, broadly defined, is dead? It's been declared dead multiple times throughout this process. - Right. I don't like the term Moore's Law, because it has nothing to do with Moore or with Intel. But yes, the exponential growth of computing is continuing and has never stopped.

- From various sources. - I mean, it went through World War II, it went through global recessions. It's just continuing. And if you continue that out, along with software gains, which is a whole 'nother issue, and they really multiply, whatever you get from software gains, you multiply by the computer gains, you get faster and faster speed.

These are actually the largest computer models that have been created. And that actually doubles roughly twice a year. Like every six months, it expands by two. - So we're looking at a plot from 2010 to 2022. On the x-axis is the publication date of the model, and perhaps sometimes the actual paper associated with it.

And on the y-axis is training compute in FLOPs. And so basically this is looking at the increase in, not transistors, but the computational power of neural networks. - Yes, the computational power that created these models. And that's doubled every six months. - Which is even faster than the transistor doubling.

- Yeah. Actually, since it grows faster than costs come down, it has actually become a greater investment to create these. But at any rate, by the time we get to 2045, we'll be able to multiply our intelligence many millions fold. And it's just very hard to imagine what that will be like.
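Purely as illustrative arithmetic (not Kurzweil's actual model), here is what a fixed doubling schedule compounds to over the time frames mentioned above; the yearly-doubling comparison is an assumption added for contrast:

```python
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total multiplier after `years` if the quantity doubles every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

# Training compute doubling every six months, as in the chart being discussed:
print(f"{growth_factor(10, 0.5):,.0f}x over 10 years")      # ~1,048,576x

# A slower, yearly doubling (an assumption, for comparison) still compounds to
# millions-fold over the decades between now and 2045:
print(f"{growth_factor(23, 1.0):,.0f}x from 2022 to 2045")   # ~8,388,608x
```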

- And that's the singularity, what we can't even imagine. - Right, that's why we call it the singularity. 'Cause the singularity in physics, something gets sucked into its singularity and you can't tell what's going on in there because no information can get out of it. There's various problems with that, but that's the idea.

It's too much beyond what we can imagine. - Do you think it's possible we don't notice that what the singularity actually feels like is we just live through it with exponentially increasing cognitive capabilities and we almost, because everything's moving so quickly, aren't really able to introspect that our life has changed?

- Yeah, but I mean, we will have that much greater capacity to understand things, so we should be able to look back. - Looking at history, understand history. - But we will need people, basically like you and me, to actually think about these things. - But we might be distracted by all the other sources of entertainment and fun because the exponential power of intellect is growing, but also the-- - There'll be a lot of fun.

- The amount of ways you can have-- - I mean, we already have a lot of fun with computer games and so on that are really quite remarkable. - What do you think about the digital world, the metaverse, virtual reality? Will that have a component in this or will most of our advancement be in physical reality?

- Well, that's a little bit like Second Life, although Second Life actually didn't work very well because it couldn't actually handle too many people. And I don't think the metaverse has come into being. I think there will be something like that. It won't necessarily be from that one company.

I mean, there's gonna be competitors, but yes, we're gonna live increasingly online, particularly if our brains are online. I mean, how could we not be online? - Do you think it's possible that given this merger with AI, most of our meaningful interactions will be in this virtual world? Most of our life, we fall in love, we make friends, we come up with ideas, we do collaborations, we have fun.

- I actually know somebody who's marrying somebody that they never met. I think they just met her briefly before the wedding, but she actually fell in love with this other person never having met them. And I think the love is real. - That's a beautiful story, but do you think that story is one that might be experienced not just by hundreds of thousands of people, but instead by hundreds of millions of people?

- I mean, it really gives you appreciation for these virtual ways of communicating. And if anybody can do it, then it's really not such a freak story. So I think more and more people will do that. - But that's turning our back on our entire history of evolution. In the old days, we used to fall in love by holding hands and sitting by the fire, that kind of stuff.

Here, you're playing. - Actually, I have five patents on how you can hold hands, even if you're separated. - Great. So the touch, the sense, it's all just senses. It's all just-- - Yeah, I mean, touch is, it's not just that you're touching someone or not, there's a whole way of doing it, and it's very subtle, but ultimately, we can emulate all of that.

- Are you excited by that future? Do you worry about that future? - I have certain worries about the future, but not-- - Not that. - Virtual touch. (both laughing) - Well, I agree with you. You described six stages in the evolution of information processing in the universe, as you started to describe.

Can you maybe talk through some of those stages, from the physics and chemistry to DNA and brains, and then to the very end, to the very beautiful end of this process? - It actually gets more rapid. So physics and chemistry, that's how we started. - So from the very beginning of the universe-- - We had lots of electrons and various things traveling around, and that took, actually, many billions of years, kind of jumping ahead here to kind of some of the last stages, where we have things like love and creativity.

It's really quite remarkable that that happens. But finally, physics and chemistry created biology and DNA, and now you had actually one type of molecule that described the cutting edge of this process. And we go from physics and chemistry to biology. And finally, biology created brains. I mean, not everything that's created by biology has a brain, but eventually brains came along.

- And all of this is happening faster and faster. - Yeah. It created increasingly complex organisms. Another key thing is actually not just brains, but our thumb. Because there's a lot of animals with brains even bigger than humans. Elephants have a bigger brain, whales have a bigger brain, but they've not created technology because they don't have a thumb.

So that's one of the really key elements in the evolution of humans. - This physical manipulator device that's useful for puzzle solving in the physical reality. - So I could think, I could look at a tree and go, oh, I could actually trip that branch down and eliminate the leaves and carve a tip on it and it would create technology.

And you can't do that if you don't have a thumb. - Yeah. - So thumbs then created technology, and technology also had a memory. And now those memories are competing with the scale and scope of human beings. And ultimately we'll go beyond it. And then we're gonna merge human technology with human intelligence and understand how human intelligence works, which I think we already do.

And we're putting that into our human technology. - So create the technology inspired by our own intelligence and then that technology supersedes us in terms of its capabilities. And we ride along. Or do you ultimately see it as fundamentally-- - And we ride along, but a lot of people don't see that.

They say, well, you got humans and you got machines and there's no way we can ultimately compete with machines. And you can already see that. Lee Sedol, who's like the best Go player in the world, says he's not gonna play Go anymore. - Yeah. - Because playing Go, for a human, that was like the ultimate in intelligence 'cause no one else could do that.

But now a machine can actually go way beyond him. And so he says, well, there's no point playing it anymore. - That may be more true for games than it is for life. I think there's a lot of benefit to working together with AI in regular life. So if you were to put a probability on it, is it more likely that we merge with AI or AI replaces us?

- A lot of people just think computers come along and they compete with them. We can't really compete and that's the end of it. As opposed to them increasing our abilities. And if you look at most technology, it increases our abilities. I mean, look at the history of work.

Look at what people did 100 years ago. Does any of that exist anymore? People, I mean, if you were to predict that all of these jobs would go away and would be done by machines, people would say, well, that's gonna be, no one's gonna have jobs and it's gonna be massive unemployment.

But I show in this book that's coming out, the amount of people that are working, even as a percentage of the population, has gone way up. - We're looking at the X-axis year from 1774 to 2024 and on the Y-axis, personal income per capita in constant dollars and it's growing super linearly.

I mean, it's-- - Yeah, 2021 constant dollars, and it's gone way up. That's not what you would have predicted given that all of these jobs would go away. But the reason it's gone up is because we've basically enhanced our own capabilities by using these machines as opposed to them just competing with us.

That's a key way in which we're gonna be able to become far smarter than we are now by increasing the number of different parameters we can consider in making a decision. - I am very fortunate to be able to get a glimpse preview of your upcoming book, "The Singularity Is Nearer."

And one of the themes outside of just discussing the increasing exponential growth of technology, one of the themes is that things are getting better in all aspects of life. And you talk just about this. So one of the things you're saying is with jobs. So let me just ask about that.

There is a big concern that automation, especially powerful AI, will get rid of jobs. There are people who lose jobs. And as you were saying, the sense is throughout the history of the 20th century, automation did not do that ultimately. And so the question is, will this time be different?

- Right, that is the question. Will this time be different? And it really has to do with how quickly we can merge with this type of intelligence. If LaMDA or GPT-3 is out there, and maybe it's overcome some of its key problems, but we really haven't enhanced human intelligence, that might be a negative scenario.

But I mean, that's why we create technologies, to enhance ourselves. And I believe we will be enhanced. We're not just gonna sit here with 300 million modules in our neocortex. We're gonna be able to go beyond that. Because that's useful, but we can multiply that by 10, 100, 1,000, a million.

And you might think, well, what's the point of doing that? It's like asking somebody that's never heard music, well, what's the value of music? I mean, you can't appreciate it until you've created it. - There's some worry that there'll be a wealth disparity. - Class or wealth disparity. - Basically, only the rich people will first have access to this kind of thing, and then, because of this kind of thing, because of the ability to merge, they will get richer exponentially faster.

- And I say that's just like cell phones. I mean, there's like four billion cell phones in the world today. In fact, when cell phones first came out, you had to be fairly wealthy. They weren't very inexpensive. So you had to have some wealth in order to afford them.

- Yeah, there were these big, sexy phones. - And they didn't work very well. They did almost nothing. So you can only afford these things if you're wealthy at a point where they really don't work very well. - So achieving scale and making it inexpensive is part of making the thing work well.

- Exactly. So these are not totally cheap, but they're pretty cheap. I mean, you can get them for a few hundred dollars. - Especially given the kind of things it provides for you. There's a lot of people in the third world that have very little, but they have a smartphone.

- Yeah, absolutely. - And the same will be true with AI. - I mean, I see homeless people have their own cell phones. - Yeah, so your sense is any kind of advanced technology will take the same trajectory. - Right, it ultimately becomes cheap and will be affordable. I probably would not be the first person to put something in my brain to connect to computers 'cause I think it will have limitations.

But once it's really perfected, at that point it'll be pretty inexpensive. I think it'll be pretty affordable. - So in which other ways, as you outline your book, is life getting better? 'Cause I think-- - Well, I have 50 charts in there where everything is getting better. - I think there's a kind of cynicism about, like even if you look at extreme poverty, for example.

- For example, this is actually a poll taken on extreme poverty, and the people were asked, has poverty gotten better or worse? And the options are increased by 50%, increased by 25%, remain the same, decreased by 25%, decreased by 50%. If you're watching this or listening to this, try to vote for yourself.

- 70% thought it had gotten worse, and that's the general impression. 88% thought it had gotten worse or remained the same. Only 1% thought it had decreased by 50%, and that is the answer. It actually decreased by 50%. - So only 1% of people got the right optimistic estimate of how poverty is-- - Right, and this is the reality, and it's true of almost everything you look at.

You don't wanna go back 100 years or 50 years. Things were quite miserable then, but we tend not to remember that. - So literacy rate increasing over the past few centuries across all the different nations, nearly to 100% across many of the nations in the world. - It's gone way up, average years of education have gone way up.

Life expectancy is also increasing. Life expectancy was 48 in 1900. - And it's over 80 now. - And it's gonna continue to go up, particularly as we get into more advanced stages of simulated biology. - For life expectancy, these trends are the same for at birth, age one, age five, age 10, so it's not just the infant mortality.

- And I have 50 more graphs in the book about all kinds of things. Even the spread of democracy, which might bring up some sort of controversial issues, it still has gone way up. - Well, that one has gone way up, but that one is a bumpy road, right? - Exactly, and somebody might claim to represent democracy and go backwards, but we basically had no democracies before the creation of the United States, which was a little over two centuries ago, which in the scale of human history isn't that long.

- Do you think superintelligence systems will help with democracy? So what is democracy? Democracy is giving a voice to the populace, and having their ideas, having their beliefs, having their views represented. - Well, I hope so. I mean, we've seen social networks can spread conspiracy theories, which have been quite negative, for example, being against any kind of stuff that would help your health.

- So with those kinds of ideas, on social media, what you notice is that they increase engagement, so dramatic division increases engagement. Do you worry about AI systems that will learn to maximize that division? - I mean, I do have some concerns about this, and I have a chapter in the book about the perils of advanced AI.

Spreading misinformation on social networks is one of them, but there are many others. - What's the one that worries you the most, that we should think about to try to avoid? - Well, it's hard to choose. (Lex laughing) We do have the nuclear power that evolved when I was a child, I remember, and we would actually do these drills against a nuclear war.

We'd get under our desks and put our hands behind our heads to protect us from a nuclear war. Seemed to work, we're still around, so. - You're protected. - But that's still a concern, and there are key dangerous situations that can take place in biology. Someone could create a virus that's very, I mean, we have viruses that are hard to spread, and they can be very dangerous, and we have viruses that are easy to spread, but they're not so dangerous.

Somebody could create something that would be very easy to spread and very dangerous, and be very hard to stop, and it could be something that would spread without people noticing, 'cause people could get it, they'd have no symptoms, and then everybody would get it, and then symptoms would occur maybe a month later.

So I mean, and that actually doesn't occur normally, because if we were to have a problem with that, we wouldn't exist. So the fact that humans exist means that we don't have viruses that can spread easily and kill us, because otherwise we wouldn't exist. - Yeah, viruses don't wanna do that.

They want to spread and keep the host alive somewhat. So you can describe various dangers with biology. Also nanotechnology, which we actually haven't experienced yet, but there are people that are creating nanotechnology, and I described that in the book. - Now you're excited by the possibilities of nanotechnology, of nanobots, of being able to do things inside our body, inside our mind, that's going to help.

What's exciting, what's terrifying about nanobots? - What's exciting is that that's a way to communicate with our neocortex, because each neocortex is pretty small, and you need a small entity that can actually get in there and establish a communication channel. And that's gonna really be necessary to connect our brains to AI within ourselves, because otherwise it would be hard for us to compete with it.

- In a high-bandwidth way. - Yeah, yeah. And that's key, actually, 'cause a lot of the things like Neuralink are really not high-bandwidth yet. - So nanobots is the way you achieve high-bandwidth. How much intelligence would those nanobots have? - Yeah, they don't need a lot. Just enough to basically establish a communication channel to one nanobot.

- So it's primarily about communication between external computing devices and our biological thinking machine. What worries you about nanobots? Is it similar to the viruses? - Well, I mean, there's the gray goo challenge. - Yes. - If you have a nanobot that wanted to create any kind of entity and repeat itself, and was able to operate in a natural environment, it could turn everything into that entity and basically destroy all biological life.

- So you mentioned nuclear weapons. - Yeah. - I'd love to hear your opinion about the 21st century and whether you think we might destroy ourselves. And maybe your opinion, if it has changed by looking at what's going on in Ukraine, that we could have a hot war with nuclear powers involved and the tensions building and the seeming forgetting of how terrifying and destructive nuclear weapons are.

Do you think humans might destroy ourselves in the 21st century, and if we do, how? And how do we avoid it? - I don't think that's gonna happen, despite the terrors of that war. It is a possibility, but I mean, I don't-- - It's unlikely in your mind. - Yeah, even with the tensions we've had with this one nuclear power plant that's been taken over, it's very tense, but I don't actually see a lot of people worrying that that's gonna happen.

I think we'll avoid that. We had two nuclear bombs go off in '45, so now we're 77 years later. - Yeah, we're doing pretty good. - We've never had another one go off through anger. - But people forget. People forget the lessons of history. - Well, yeah, I am worried about it.

I mean, that is definitely a challenge. - But you believe that we'll make it out, and ultimately, superintelligent AI will help us make it out, as opposed to destroy us. - I think so, but we do have to be mindful of these dangers. And there are other dangers besides nuclear weapons.

- So to get back to merging with AI, will we be able to upload our mind in a computer in a way where we might even transcend the constraints of our bodies? So copy our mind into a computer and leave the body behind? - Let me describe one thing I've already done with my father.

- That's a great story. - So we created a technology. This is public, it came out, I think, six years ago, where you could ask any question, and the released product, which I think is still on the market, would read 200,000 books, and then find the one sentence in 200,000 books that best answered your question.

And it's actually quite interesting. You can ask all kinds of questions, and you get the best answer in 200,000 books. But I was also able to take it and not go through 200,000 books, but go through a book that I put together, which is basically everything my father had written.

So everything he had written, I had gathered, and we created a book, everything that Frederick Kurzweil had written. Now, I didn't think this actually would work that well, because stuff he had written was stuff about how to lay out, I mean, he directed choral groups and music groups, and he would be laying out how the people should, where they should sit, and how to fund this, and all kinds of things that really didn't seem that interesting.

And yet, when you ask a question, it would go through it, and it would actually give you a very good answer. So I said, "Well, who's the most interesting composer?" And he said, "Well, definitely Brahms." And he would go on about how Brahms was fabulous, and talk about the importance of music education.
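As a toy sketch of the kind of retrieval system described above (ask a question, get back the single best-matching sentence from a corpus), using plain TF-IDF similarity as a stand-in; the corpus lines below are invented placeholders, and nothing here claims to be the actual product's method:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented placeholder sentences standing in for the gathered writings.
sentences = [
    "Brahms is certainly the most interesting of the composers.",
    "Music education matters enormously for young people.",
    "The choral group should be seated in three rows facing the conductor.",
]

def best_answer(question: str) -> str:
    """Return the corpus sentence most similar to the question."""
    vectorizer = TfidfVectorizer()
    corpus_vectors = vectorizer.fit_transform(sentences)
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, corpus_vectors)[0]
    return sentences[scores.argmax()]

print(best_answer("Who is the most interesting composer?"))
```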

- So you could have essentially a question and answer, a conversation with him. - You could have a conversation with him, which was actually more interesting than talking to him, because if you talked to him, he'd be concerned about how they're gonna lay out this property for a choral group.

- He'd be concerned about the day-to-day versus the big questions. - Exactly, yeah. - And you did ask about the meaning of life, and he answered, "Love." - Yeah. - Do you miss him? - Yes, I do. You know, you get used to missing somebody after 52 years, and I didn't really have intelligent conversations with him until later in life.

In the last few years, he was sick, which meant he was home a lot, and I was actually able to talk to him about different things like music and other things. So I miss that very much. - What did you learn about life from your father? What part of him is with you now?

- He was devoted to music, and when he would create something to music, it put him in a different world. Otherwise, he was very shy, and if people got together, he tended not to interact with people, just because of his shyness. But when he created music, that, he was like a different person.

- Do you have that in you? That kind of light that shines? - I mean, I got involved with technology at like age five. - And you fell in love with it in the same way he did with music? - Yeah, yeah. I remember, this actually happened with my grandmother.

She had a manual typewriter, and she wrote a book, "One Life is Not Enough," which is actually a good title for a book I might write. And it was about a school she had created. Well, actually, her mother created it. So my mother's mother's mother created the school in 1868, and it was the first school in Europe that provided higher education for girls.

It went through 14th grade. If you were a girl, and you were lucky enough to get an education at all, it would go through like ninth grade, and many people didn't have any education as a girl. This went through 14th grade. Her mother created it, she took it over, and the book was about the history of the school and her involvement with it.

When she presented it to me, I was not so interested in the story of the school, but I was totally amazed with this manual typewriter. I mean, here was something you could put a blank piece of paper into, and you could turn it into something that looked like it came from a book.

And you could actually type on it, and it looked like it came from a book. It was just amazing to me. And I could see actually how it worked. And I was also interested in magic. But in magic, if somebody actually knows how it works, the magic goes away.

The magic doesn't stay there if you actually understand how it works. But here was technology. I didn't have that word when I was five or six. - And the magic was still there for you? - The magic was still there, even if you knew how it worked. So I became totally interested in this, and then went around, collected little pieces of mechanical objects from bicycles, from broken radios.

I would go through the neighborhood. This was an era where you would allow five or six-year-olds to run through the neighborhood and do this. We don't do that anymore. But I didn't know how to put them together. And I said, "If I could just figure out how to put these things together, I could solve any problem." And I actually remember talking to these older girls, I think they were 10, and telling them, "If I could just figure this out, we could fly, we could do anything." And they said, "Well, you have quite an imagination." And then when I was in third grade, so I was like eight, I created a virtual reality theater where people could come on stage and they could move their arms.

And all of it was controlled through one control box. It was all done with mechanical technology. And it was a big hit in my third grade class. And then I went on to do things in junior high school science fairs, and high school science fairs, where I won the Westinghouse Science Talent Search.

So I mean, I became committed to technology when I was five or six years old. - You've talked about how you use lucid dreaming to think, to come up with ideas as a source of creativity. Could you maybe talk through that? Maybe the process of how to, you've invented a lot of things.

You've come up with and thought through some very interesting ideas. What advice would you give, or can you speak to the process of thinking, of how to think, how to think creatively? - Well, I mean, sometimes I will think through in a dream and try to interpret that. But I think the key issue that I would tell younger people is to put yourself in the position that what you're trying to create already exists.

And then you're explaining-- - How it works. - Exactly. - That's really interesting. You paint a world that you would like to exist, you think it exists, and reverse engineer that. - And then you actually imagine you're giving a speech about how you created this. Well, you'd have to then work backwards as to how you would create it in order to make it work.

- That's brilliant. And that requires some imagination, too, some first principles thinking. You have to visualize that world. That's really interesting. - And generally, when I talk about things we're trying to invent, I would use the present tense as if it already exists. Not just to give myself that confidence, but everybody else who's working on it.

We just have to kind of do all the steps in order to make it actual. - How much of a good idea is about timing? How much is it about your genius versus that its time has come? - Timing's very important. I mean, that's really why I got into futurism.

I wasn't inherently a futurist. That was not really my goal. It's really to figure out when things are feasible. We see that now with large-scale models. The very large-scale models like GPT-3, it started two years ago. Four years ago, it wasn't feasible. In fact, they did create GPT-2, which didn't work.

So it required a certain amount of timing having to do with this exponential growth of computing power. - So futurism, in some sense, is a study of timing, trying to understand how the world will evolve and when will the capacity for certain ideas emerge. - And that's become a thing in itself, then, to try to time things in the future.

But really, its original purpose was to time my products. I mean, I did OCR in the 1970s, because OCR doesn't require a lot of computation. - Optical character recognition. - Yeah, so we were able to do that in the '70s, and I waited 'til the '80s to address speech recognition, since that requires more computation.

- So you were thinking through timing when you were developing those things. - Yeah. - Has its time come? - Yeah. - And that's how you've developed that brain power to start to think in a futurist sense. When, how will the world look like in 2045 and work backwards, and how it gets there.

- But that has become a thing in itself, because looking at what things will be like in the future reflects such dramatic changes in how humans will live. So that was worth communicating also. - So you developed that muscle of predicting the future, and then apply it broadly, and start to discuss how it changes the world of technology, how it changes the world of human life on Earth.

In "Danielle," one of your books, you write about someone who has the courage to question assumptions that limit human imagination to solve problems. Can you also give advice on how each of us can have this kind of courage? - Well, it's good that you picked that quote, because I think that does symbolize what Danielle is about.

- Courage. So how can each of us have that courage to question assumptions? - I mean, we see that when people can go beyond the current realm and create something that's new. I mean, take Uber, for example. Before that existed, you never thought that that would be feasible, and it did require changes in the way people work.

- Is there practical advice you give in the book about what each of us can do to be a Danielle? - Well, she looks at the situation and tries to imagine how she can overcome various obstacles. And then she goes for it, and she's a very good communicator, so she can communicate these ideas to other people.

- And there's practical advice of learning to program and recording your life and things of this nature. Become a physicist. So you list a bunch of different suggestions of how to throw yourself into this world. - Yeah, I mean, it's kind of an idea how young people can actually change the world by learning all of these different skills.

- And at the core of that is the belief that you can change the world, that your mind, your body can change the world. - Yeah, that's right. - And not letting anyone else tell you otherwise. - That's very good, exactly. - When we upload, the story you told about your dad and having a conversation with him, we're talking about uploading your mind to the computer.

Do you think we'll have a future with something you call afterlife? We'll have avatars that mimic increasingly better and better our behavior, our appearance, all that kind of stuff, even when those people are perhaps no longer with us. - Yes, I mean, we need some information about them. I mean, I think about my father.

I have what he wrote. Now, he didn't have a word processor, so he didn't actually write that much. And our memories of him aren't perfect. So how do you even know if you've created something that's satisfactory? Now, you could do a Frederick Kurzweil Turing test. It seems like Frederick Kurzweil to me.

But the people who remember him, like me, don't have a perfect memory. - Is there such a thing as a perfect memory? Maybe the whole point is for him to make you feel a certain way. - Yeah, well, I think that would be the goal. - And that's the connection we have with loved ones.

It's not really based on very strict definition of truth. It's more about the experiences we share. And they get morphed through memory. But ultimately, they make us smile. - I think we definitely can do that. And that would be very worthwhile. - So do you think we'll have a world of replicants, of copies, there'll be a bunch of Ray Kurzweils.

Like I could hang out with one. I can download it for five bucks and have a best friend, Ray. And you, the original copy, wouldn't even know about it. Is that, do you think that world is, first of all, do you think that world is feasible? And do you think there's ethical challenges there?

Like how would you feel about me hanging out with Ray Kurzweil and you not knowing about it? - Doesn't strike me as a problem. - Which you, the original? - Would that cause a problem for you? - No, I would really very much enjoy it. - No, not just hanging out with me, but if somebody were hanging out with you, a replicant of you.

- Well, I think I would start, it sounds exciting, but then what if they start doing better than me and take over my friend group? And then, because they may be an imperfect copy or they may be more social or all these kinds of things. And then I become like the old version that's not nearly as exciting.

Maybe they're a copy of the best version of me on a good day. - Yeah, but if you hang out with a replicant of me and that turned out to be successful, I'd feel proud of that person 'cause it was based on me. - So, but it is a kind of death of this version of you.

- Well, not necessarily. I mean, you can still be alive, right? - But, and you would be proud, okay. So, it's like having kids and you're proud that they've done even more than you were able to do. - Yeah, exactly. It does bring up new issues, but it seems like an opportunity.

- Well, that replicant should probably have the same rights as you do. - Well, that gets into a whole issue because when a replicant occurs, they're not necessarily gonna have your rights. And if a replicant occurs to somebody who's already dead, do they have all the obligations that the original person had?

Do they have all the agreements that they had? So. - I think you're gonna have to have laws that say, yes. There has to be, if you wanna create a replicant, they have to have all the same rights as human rights. - Well, you don't know. Someone can create a replicant and say, "Well, it's a replicant, but I didn't bother getting their rights." - But that would be illegal, I mean.

Like, if you do that, you have to do that on the black market. If you wanna get an official replicant-- - Okay, it's not so easy. Suppose you created multiple replicants; the original rights may be for one person and not for a whole group of people. - Sure. (laughs) So there has to be at least one, and then all the other ones kind of share the rights.

Yeah, I just don't think that's very difficult for us humans to conceive, the idea that this country-- - Well, you create a replicant that has certain, I mean, I've talked to people about this, including my wife, who would like to get back her father. And she doesn't worry about who has rights to what.

She would have somebody that she could visit with, and that would give her some satisfaction. And she wouldn't care about any of these other rights. - What does your wife think about multiple Ray Kurzweils? Have you had that discussion? - I haven't addressed that with her. (laughs) - I think ultimately that's an important question, how loved ones feel about it. There's something about love-- - Well, that's the key thing, right?

If the loved ones reject it, it's not gonna work very well. So the loved ones really are the key determinant of whether or not this works. - But there are also ethical rules we have to contend with, and we have to contend with that idea with AI. - But what's gonna motivate it is, I mean, I talk to people who really miss people who are gone, and they would love to get something back, even if it isn't perfect.

And that's what's gonna motivate this. And that person lives on in some form. And the more data we have, the more we're able to reconstruct that person and allow them to live on. - And eventually, as we go forward, we're gonna have more and more of this data because we're gonna have nanobots that are inside our neocortex and we're gonna collect a lot of data.

In fact, anything that's data is always collected. - There is something a little bit sad, or maybe it's hopeful, which is becoming more and more common these days: when a person passes away, you'll have their Twitter account, and you have the last tweet they tweeted, like something they- - And you can recreate them now with large language models and so on.

I mean, you can create somebody that's just like them and can actually continue to communicate. - I think that's really exciting because, in some sense, if I were to die today, I would continue on if I continued tweeting. I tweet, therefore I am.

- Yeah, well, I mean, that's one of the advantages of a replicant, that I can recreate the communications of that person. - Do you hope humans will become a multi-planetary species? You've talked about the phases, the six epochs, and one of them is, in part, reaching out into the stars.

- Yes, but the kind of attempts we're making now to go to other planetary objects doesn't excite me that much 'cause it's not really advancing anything. - It's not efficient enough? - Yeah, and we're also sending human beings out there, which is a very inefficient way to explore these other objects.

What I'm really talking about in the sixth epoch, the universe wakes up, is where we can spread our superintelligence throughout the universe. And that doesn't mean sending very soft, squishy creatures like humans. - Yeah, the universe wakes up. - I mean, we would send intelligent masses of nanobots, which can then go out and colonize these other parts of the universe.

- Do you think there are intelligent alien civilizations out there that our bots might meet? - My hunch is no. Most people say yes, absolutely. I mean, and-- - The universe is too big. - And they'll cite the Drake equation. And I think in "Singularity is Near," I have two analyses of the Drake equation, both with very reasonable assumptions.

And one gives you thousands of advanced civilizations in each galaxy. And another one gives you one civilization. And we know of one. A lot of the analyses are forgetting the exponential growth of computation. Because we've gone from where the fastest way I could send a message to somebody was with a pony, which was what, like a century and a half ago?
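(For reference, here is the standard form of the Drake equation being cited; the specific parameter values behind Kurzweil's two analyses aren't given in the conversation, so the symbols below are just the conventional ones.)

$$N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L$$

Here $R_{*}$ is the rate of star formation in the galaxy, $f_{p}$ the fraction of stars with planets, $n_{e}$ the average number of potentially habitable planets per such star, $f_{l}$, $f_{i}$, and $f_{c}$ the fractions of those on which life, intelligence, and detectable communication arise, and $L$ how long a civilization remains detectable. Plausible-looking choices for these factors can swing $N$ from thousands of civilizations per galaxy down to roughly one, which is the spread described above.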

- Yeah. - We have an advanced civilization today, and if you accept what I've said and go forward a few decades, you're gonna have an absolutely fantastic amount of civilization compared to a pony, and that's in a couple hundred years. - Yeah, the speed and the scale of information transfer is just growing exponentially, in a blink of an eye.

- Now think about these other civilizations. They're gonna be spread out across cosmic time. So if something is ahead of us or behind us, it could be ahead of us or behind us by maybe millions of years, which isn't that much. I mean, the universe is billions of years old, 14 billion or something.

So even a thousand years matters: if two or three hundred years is enough to go from a pony to a fantastic amount of civilization, we would see that. So of the other civilizations that have occurred, okay, some might be behind us, but some might be ahead of us. If they're ahead of us, they're ahead of us by thousands or millions of years, and they would be so far beyond us, they would be doing galaxy-wide engineering.

But we don't see anything doing galaxy-wide engineering. So either they don't exist, or this very universe is a construction of an alien species. We're living inside a video game. - Well, that's another explanation: that, yes, you've got some teenage kids in another civilization. - Do you find the simulation hypothesis compelling as a thought experiment, the idea that we're living in a simulation?

- The universe is computational, so we are an example in a computational world. Therefore, it is a simulation. It doesn't necessarily mean an experiment by some high school kid in another world, but it nonetheless is taking place in a computational world, and everything that's going on is basically a form of computation.

So you really have to define what you mean by this whole world being a simulation. - Well, then it's the teenager that makes the video game. You know, we humans, with our current limited cognitive capability, have strived to understand ourselves, and we have created religions. We think of God.

Whatever that is, do you think God exists? And if so, who is God? - I alluded to this before. We started out with lots of particles going around, and there's nothing that represents love and creativity. And somehow we've gotten into a world where love actually exists, and that has to do actually with consciousness, because you can't have love without consciousness.

So to me, that's God, the fact that we have something where love, where you can be devoted to someone else and really feel the love, that's God. And if you look at the Old Testament, it was actually created by several different rabbinic authors. And I think they've identified three of them.

One of them dealt with God as a person that you can make deals with, and he gets angry, and he wreaks vengeance on various people. But two of them actually talk about God as a symbol of love and peace and harmony and so forth. That's how they describe God.

So that's my view of God, not as a person in the sky that you can make deals with. - It's whatever the magic is that goes from basic elements to things like consciousness and love. One of the things I find extremely beautiful and powerful is cellular automata, which you also touch on.

Do you think that whatever the heck happens in cellular automata, where interesting, complicated objects emerge, God is in there too? The emergence of love in this seemingly primitive universe? - Well, the point of creating a replicant is that they would love you and you would love them. There wouldn't be much point in doing it if that didn't happen.

- But all of it, I guess what I'm saying about cellular automata is that they're primitive building blocks, and they somehow create beautiful things. Is there some deep truth to that about how our universe works, that from simple rules, beautiful, complex objects can emerge? Is that the thing that made us?
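(As an aside, and not part of the conversation: a minimal sketch in Python of the kind of emergence being described. Rule 110 is a standard elementary cellular automaton whose trivial local rule produces intricate structure; the width, step count, and initial condition below are purely illustrative choices.)

```python
# A minimal sketch of an elementary cellular automaton, Rule 110,
# illustrating the point above: a trivial local update rule applied
# repeatedly produces surprisingly intricate structure.

RULE = 110  # Wolfram's Rule 110, known for complex behavior

def step(cells):
    """Apply the Rule 110 update to one row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((RULE >> neighborhood) & 1)              # look up new state
    return out

width, steps = 64, 32
row = [0] * width
row[width // 2] = 1  # start from a single live cell in the middle
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```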

- Yeah. We went through all six phases of reality. - That's a good way to look at it. It does give some point to the whole value of having a universe. - Do you think about your own mortality? Are you afraid of it? - Yes, but I keep going back to my idea of being able to extend human life quickly enough in advance of our getting there, longevity escape velocity, which we're not quite at yet, but I think we're actually pretty close, particularly with, for example, doing simulated biology.

I think we can probably get there by, say, the end of this decade, and that's my goal. - Do you hope to achieve longevity escape velocity? Do you hope to achieve immortality? - Well, immortality is hard to say. I can't really come on your program saying, I've done it.

I've achieved immortality, because it's never forever. - A long time, a long time of living well. - But we'd like to actually advance human life expectancy, advance my life expectancy, by more than a year every year, and I think we can get there by the end of this decade.
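(As a side note, and not Kurzweil's own model: a minimal sketch of the arithmetic behind "longevity escape velocity." The idea is that if medical progress adds more than one year of remaining life expectancy per calendar year, remaining expectancy never runs out. The function and all numbers below are illustrative assumptions.)

```python
# A minimal sketch of the "longevity escape velocity" arithmetic.
# All values here are illustrative assumptions, not real projections.

def remaining_expectancy(initial_years: float, annual_gain: float, horizon: int):
    """Track remaining life expectancy year by year.

    initial_years: assumed remaining life expectancy today
    annual_gain:   years of expectancy added by medical progress per calendar year
    horizon:       number of calendar years to simulate
    """
    remaining = initial_years
    trajectory = []
    for _ in range(horizon):
        remaining = remaining - 1 + annual_gain  # age one year, gain annual_gain
        trajectory.append(remaining)
        if remaining <= 0:
            break
    return trajectory

# Below escape velocity (0.3 years gained per year): expectancy eventually runs out.
print(remaining_expectancy(initial_years=15, annual_gain=0.3, horizon=40)[-1])
# Above escape velocity (1.2 years gained per year): expectancy keeps growing.
print(remaining_expectancy(initial_years=15, annual_gain=1.2, horizon=40)[-1])
```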

- How do you think we do it? So there's practical things in "Transcend: Nine Steps to Living Well Forever," your book. You describe just that. There's practical things like health, exercise, all those things. - Yeah, I mean, we live in a body that doesn't last forever. There's no reason why it can't, though, and we're discovering things, I think, that will extend it.

But you do have to deal with, I mean, I've got various issues. I went to Mexico 40 years ago and developed salmonella. It created pancreatitis, which gave me a strange form of diabetes. It's not type one diabetes, 'cause that's an autoimmune disorder that destroys your pancreas. I don't have that. But it's also not type two diabetes, 'cause with type two diabetes, your pancreas works fine, but your cells don't absorb the insulin well.

I don't have that either. The pancreatitis I had partially damaged my pancreas, but it was a one-time thing, it didn't continue, and I've learned now how to control it. So that's just something I had to do in order to continue to exist. - So for your particular biological system, you had to figure out a few hacks, and the idea is that science would be able to do that much better, actually.

- Yeah, so I mean, I do spend a lot of time just tinkering with my own body to keep it going. So I do think I'll last 'til the end of this decade, and I think we'll achieve longevity, escape velocity. I think that we'll start with people who are very diligent about this.

Eventually it'll become sort of routine that people will be able to do it. So if you're talking about kids today, or even people in their 20s or 30s, that's really not a very serious problem. I have had some discussions with relatives who are like almost 100, and saying, well, we're working on it as quickly as possible, but I don't know if that's gonna work.

- This is a difficult question, but is there a case to be made against living forever, that a finite life, that mortality, is a feature, not a bug? That living a shorter life, that dying, makes ice cream taste delicious, makes life intensely beautiful, more than it otherwise may be?

- Most people believe that way, except if you present the death of anybody they care about or love, they find that extremely depressing. And I know people who feel that way 20, 30, 40 years later; they still want them back. So I mean, death is not something to celebrate, but we've lived in a world where people just accept this.

Well, life is short; you see it all the time on TV: oh, life's short, you have to take advantage of it. And nobody accepts the fact that you could actually go beyond normal lifetimes. But anytime we talk about death, or the death of a person, even one death is a terrible tragedy.

If you have somebody that lives to 100 years old, we still love them in return. And there's no limitation to that. In fact, these kinds of trends are gonna provide greater and greater opportunity for everybody, even if we have more people. - So let me ask about an alien species or a super intelligent AI 500 years from now that will look back and remember Ray Kurzweil version zero.

Before the replicants spread, how do you hope they remember you? In a "Hitchhiker's Guide to the Galaxy" summary of Ray Kurzweil, what do you hope your legacy is? - Well, I mean, I do hope to be around, so that's-- - Some version of you, yes. Do you think you'll still be the same person?

- I mean, am I the same person I was when I was 20 or 10? - You would be the same person in that same way, but yes, we're different. All we have of that, all you have of that person is your memories, which are probably distorted in some way.

Maybe you just remember the good parts. Depending on your psyche, you might focus on the bad parts, or might focus on the good parts. - Right, but I mean, I'd still have a relationship to the way I was when I was younger. - How will you and the other super intelligent AIs, 500 years from now, remember the you of today?

What do you hope to be remembered by, this version of you, before the singularity? - Well, I think it's expressed well in my books, trying to create some new realities that people will accept. I mean, that's something that gives me great pleasure and greater insight into what makes humans valuable.

I'm not the only person who's tempted to comment on that. - And the optimism that permeates your work, optimism about the future. Ultimately, that optimism paves the way for building a better future. - I agree with that. - So you asked your dad about the meaning of life and he said, "Love." Let me ask you the same question:

"What's the meaning of life? "Why are we here? "This beautiful journey that we're on in phase four, "reaching for phase five of this evolution "and information processing, why?" - Well, I think I'd give the same answer as my father. Because if there were no love and we didn't care about anybody, there'd be no point existing.

- Love is the meaning of life. The AI version of your dad had a good point. Well, I think that's a beautiful way to end it. Ray, thank you for your work. Thank you for being who you are. Thank you for dreaming about a beautiful future and creating it along the way.

And thank you so much for spending a really valuable time with me today. This was awesome. - It was my pleasure and you have some great insights both into me and into humanity as well. So I appreciate that. - Thanks for listening to this conversation with Ray Kurzweil. To support this podcast, please check out our sponsors in the description.

And now let me leave you with some words from Isaac Asimov. "It is change, continuous change, inevitable change, that is the dominant factor in society today. No sensible decision could be made any longer without taking into account not only the world as it is, but the world as it will be.

"This in turn means that our statesmen, "our businessmen, our every man "must take on a science fictional way of thinking." Thank you for listening and hope to see you next time. (upbeat music) (upbeat music)