
Martin Rees: Black Holes, Alien Life, Dark Matter, and the Big Bang | Lex Fridman Podcast #305


Chapters

0:00 Introduction
0:52 Understanding the universe
9:22 Human limitations and AI
18:18 Dark matter
25:38 Vast universe
33:19 Alien life
47:12 Space exploration
71:35 Future technology
82:57 Newton and Einstein
86:59 Black holes
100:40 Cosmological threats
127:44 Advice for young people
130:14 Mortality

Whisper Transcript

00:00:00.000 | no reason to think that the ocean ends just beyond your horizon. And likewise, there's no reason to
00:00:05.680 | think that the aftermath of our Big Bang ends just at the boundary of what we can see. Indeed,
00:00:12.880 | there are quite strong arguments that it probably goes on about 100 times further.
00:00:17.360 | They may even go on so much further that all combinatorial possibilities are replicated. And there's
00:00:24.880 | another set of people like us sitting in a room like this. - The following is a conversation
00:00:31.920 | with Lord Martin Rees, Emeritus Professor of Cosmology and Astrophysics at Cambridge University
00:00:37.440 | and co-founder of the Centre for the Study of Existential Risk. This is the Lex Fridman
00:00:43.840 | Podcast. To support it, please check out our sponsors in the description. And now, dear friends,
00:00:49.600 | here's Martin Rees. In your 2020 Scientific American article, you write that,
00:00:56.000 | quote, "Today we know that the universe is far bigger and stranger than anyone suspected."
00:01:02.160 | So what do you think are the strangest, maybe the most beautiful, or maybe even the most terrifying
00:01:07.920 | things lurking out there in the cosmos? - Well, of course, we're still groping for any
00:01:13.600 | detailed understanding of the remote parts of the universe. But of course, what we've learned in the
00:01:19.680 | last few decades is really two things. First, we've understood that the universe had an origin
00:01:25.760 | about 13.8 billion years ago in a so-called Big Bang, a hot, dense state, whose very beginnings
00:01:33.040 | are still shrouded in mystery. And also, we've learned more about the extreme things in it,
00:01:38.400 | black holes, neutron stars, explosions of various kinds. And one of the most potentially exciting
00:01:45.520 | discoveries in the last 20 years, mainly the last 10, has been the realization that most of the
00:01:52.800 | stars in the sky are orbited by retinues of planets, just as the Sun is orbited by the Earth
00:01:59.760 | and the other familiar planets. And this, of course, makes the night sky far more interesting.
00:02:04.720 | What you see up there aren't just points of light, but they're planetary systems. And that raises a
00:02:10.000 | question, could there be life out there? And so, that is an exciting problem for the 21st century.
00:02:15.600 | - So, when you see all those lights out there, you immediately imagine all the planetary worlds that
00:02:21.760 | are around them, and they potentially have all kinds of different lives, living organisms,
00:02:29.360 | life forms, or different histories. - Well, that we don't know at all. We know
00:02:32.640 | that these planets are there. We know that they have masses and orbits rather like the planets
00:02:40.240 | of our solar system, but we don't know at all if there's any life on any of them. I mean, it's
00:02:45.440 | entirely logically possible that life is unique to this Earth. It doesn't exist anywhere. On the
00:02:50.480 | other hand, it could be that the origin of life is something which happens routinely, given
00:02:56.080 | conditions like the young Earth, in which case there could be literally billions of places in
00:03:00.800 | our galaxy where some sort of biosphere has evolved. And settling where the truth lies
00:03:07.680 | between those two extremes is a challenge for the coming decades.
00:03:11.760 | - So, certainly we're either lucky to be here or very, very, very lucky to be here.
00:03:18.720 | - I guess that's the difference. - That's the difference. Where do you fall,
00:03:23.600 | your own estimate, your own guess on this question? Are we alone in the universe, do you think?
00:03:29.200 | - I think it would be foolish to give any firm estimate because we just don't know.
00:03:33.520 | That's just an example of how we are depending on greater observations. And also, incidentally,
00:03:40.880 | in the case of life, we've got to take account of the fact that, as I always say to my scientific
00:03:47.120 | colleagues, biology is a much harder subject than physics. Most of the universe that we know about
00:03:54.480 | could be understood by physics, but we've got to remember that even the smallest living organism,
00:04:00.240 | an insect, is far more complicated with layer on layer complexity than the most complicated
00:04:08.960 | star or galaxy. - You know, that's the funny thing
00:04:11.680 | about physics and biology. The dream of physicists in the 20th century and maybe this century is to
00:04:19.120 | discover the theory of everything. And there's a sense that once you discover that theory,
00:04:27.760 | you will understand everything. If we unlock the mysteries of how the universe works,
00:04:32.160 | would we be able to understand how life emerges from that fabric of the universe that we understand?
00:04:39.600 | - I think the phrase theory of everything is very misleading because it's used to describe a theory
00:04:46.400 | which unifies the three forces of microphysics, the strong, electromagnetic, and weak interactions, with
00:04:54.000 | gravity. So, it's an important step forward for particle physicists. But the lack of such a theory
00:05:01.200 | doesn't hold up any other scientists. Anyone doing biology or most of physics is not held up at all
00:05:07.680 | through not understanding sub-nuclear physics. They're held up because they're dealing with
00:05:11.680 | things that are very complicated. And that's especially true of anything biological. So,
00:05:16.880 | what's holding up biologists is not a lack of the so-called theory of everything. It's the
00:05:22.240 | inability to understand things which are very complicated.
00:05:25.760 | - What do you think we'll understand first? How the universe works or how the human body works
00:05:31.680 | deeply, like from a fundamental deep level? - Well, I think, and perhaps we can come back
00:05:37.680 | to it later, that there are only limited prospects of ever being able to understand with unaided human
00:05:45.920 | brains the most fundamental theories linking together all the forces of nature. I think that
00:05:52.160 | may be a limitation of the human brains. But I also think that we can, perhaps aided by computer
00:06:00.000 | simulations, understand a bit more of the complexity of nature. But even understanding
00:06:07.520 | a simple organism from the atom up is very, very difficult. And I think extreme reductionists
00:06:14.800 | have a very misleading perception. They tend to think that, in a sense, we are all solutions
00:06:21.760 | to the equation, et cetera. But that isn't the way we'll ever understand anything. It may be true
00:06:29.600 | that we are reductionists in the sense that we believe that that's the case. We don't believe
00:06:34.800 | in any special life force in living things. But nonetheless, no one thinks that we can
00:06:39.840 | understand a living thing by solving Schrödinger's equation. To take an example which isn't as
00:06:45.360 | complicated, lots of people study the flow of fluids like water. Why waves break, why flows
00:06:51.920 | go turbulent, things like that. This is a serious branch of applied mathematics and engineering.
00:06:57.760 | And in doing this, you have concepts of viscosity, turbulence, and things like that.
00:07:03.360 | Now, you can understand quite a lot about how water behaves and how waves break in terms of
00:07:10.800 | those concepts. But the fact that any breaking wave is a solution of Schrödinger's equation for
00:07:18.960 | 10 to the 30 particles, even if you could solve that, which you clearly can't, would not give
00:07:24.960 | you any insight. So, the important thing is that every science has its own irreducible concepts
00:07:30.560 | in which you get the best explanation. So, it may be in chemistry, things like valence. In
00:07:38.880 | biology, the concepts in cell biology. And in ethology, there are concepts like imprinting,
00:07:48.160 | etc. And in psychology, there are other concepts. So, in a sense, the sciences are like a tall
00:07:56.160 | building where you have basic physics, the most fundamental, then the rest of physics, then
00:08:02.480 | chemistry, then cell biology, etc. all the way up to the, I guess, economists in the penthouse and
00:08:08.800 | all that. We have that. And that's true in a sense, but it's not true that it's like a building in
00:08:16.960 | that it's made unstable by an unstable base. Because if you're a chemist, biologist, or an
00:08:24.320 | economist, you're facing challenging problems. But they're not made any worse by uncertainty
00:08:30.960 | about sub-nuclear physics. >> And at every level, just because you understand the rules of the game
00:08:37.840 | or have some understanding of the rules of the game doesn't mean you know what kind of beautiful
00:08:44.880 | things that game creates. >> Right. So, if you're interested in
00:08:48.800 | birds and how they fly, then things like imprinting the baby on the mother and all that and
00:08:58.000 | things like that are what you need to understand. You couldn't even in principle solve,
00:09:05.760 | from the fundamental equations, how an albatross wanders for thousands of miles to the Southern Ocean
00:09:10.560 | and comes back and then coughs up food for its young. That's something we can understand in a
00:09:15.680 | sense and predict the behavior, but it's not because we can solve it on the atomic scale.
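As a rough check on the "10 to the 30 particles" figure quoted a moment earlier for a breaking wave, here is a back-of-envelope sketch; the wave volume is an assumed, illustrative number and not anything from the conversation.

```python
# Rough check of the "10^30 particles" figure for a breaking wave.
# The ~30 m^3 wave volume is an assumed, illustrative number.
AVOGADRO = 6.022e23           # molecules per mole
MOLAR_MASS_WATER = 0.018      # kg per mole
DENSITY_WATER = 1000.0        # kg per cubic metre

molecules_per_m3 = DENSITY_WATER / MOLAR_MASS_WATER * AVOGADRO
wave_volume_m3 = 30.0

n_molecules = molecules_per_m3 * wave_volume_m3
print(f"~{n_molecules:.0e} water molecules")   # roughly 1e30
```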
00:09:21.360 | >> You mentioned that there might be some fundamental limitation to the human brain
00:09:25.840 | that limits our ability to understand some aspect of how the universe works. That's really
00:09:32.160 | interesting. That's sad, actually. To the degree it's true, it's sad. So, what do you mean by that?
00:09:39.680 | >> I would simply say that just as a monkey can't understand quantum theory or even Newtonian
00:09:46.080 | physics, there's no particular reason why the human brain should evolve to be well-matched
00:09:52.480 | to understanding the deepest aspects of reality. And I suspect that there may be aspects that we
00:09:59.200 | are not even aware of and couldn't really fully comprehend. But as an intermediate step towards
00:10:04.960 | that, one thing which I think is a very interesting possibility is the extent to which AI can help us.
00:10:10.560 | I think if you take the example of so-called theories of everything, one of which is string
00:10:17.120 | theory, string theory involves very complicated geometry and structures in 10 dimensions.
00:10:25.760 | And it's certainly, in my view, on the cards that the physics of 10 dimensions,
00:10:35.040 | very complicated geometry, may be too hard for a human being to work through,
00:10:41.680 | but could be worked through by an AI with the advantage of the huge processing power,
00:10:50.160 | which enables them to learn world-championship-level chess within a few hours just by watching games.
00:10:55.600 | So, there's every reason to expect that these machines could help us to solve these problems.
00:11:02.800 | And of course, if that's the way we came to understand whether string theory was right,
00:11:07.680 | it should be in a sense frustrating because you wouldn't get the sort of aha insight,
00:11:13.120 | which is the greatest satisfaction from doing science. But on the other hand, if a machine
00:11:19.040 | churns away at 10 dimensional geometry, figuring out all the possible origamis wound up in extra
00:11:26.560 | dimensions, if it comes out at the end, spews out the correct mass of the electron, the fact that
00:11:32.960 | there are three kinds of neutrinos, something like that, you would know that there was some truth in
00:11:36.960 | the theory. And so, we may have a theory which we come to trust because it does predict things that
00:11:43.200 | we can observe and check, but we may never really understand the full workings of it to the extent
00:11:49.760 | that we do more or less understand how most phenomena can be explained in a fundamental
00:11:57.600 | way. Of course, in the case of quantum theory, many people would say, understandably, there's
00:12:02.000 | still some mystery; we don't quite understand why it works. But there could be deeper mysteries
00:12:06.400 | when we get to these unified theories, where there's a big gap between what a computer can
00:12:12.400 | print out for us at the end and what we can actually grasp and think through in our heads.
00:12:17.360 | - Yeah, it's interesting that the idea that there could be things a computer could tell us that is
00:12:22.400 | true, and maybe it can even help us understand why it's true a little bit. But ultimately,
00:12:30.000 | it's still a long journey to really deeply understand the whys of it.
00:12:34.400 | - Yes, and that's a limitation of our brain.
00:12:37.200 | - We can try to sneak up to it in different ways, given the limitations of our brain. Have you,
00:12:43.120 | I've gotten a chance to spend the day at DeepMind, talk to Demis Hassabis. His big dream is to apply
00:12:49.440 | AI to the questions of science, certainly to the questions of physics. Have you gotten a chance
00:12:54.960 | to interact with him? - Well, I know him quite well.
00:12:56.880 | He's one of my heroes, certainly. And I remember- - I'm sure he would say the same.
00:13:02.080 | - And I remember the first time I met him, he said that he was like me. He wanted to
00:13:06.720 | understand the universe, but he thought the best thing to do was to try and develop AI.
00:13:11.920 | And then with the help of AI, he'd stand more chance of understanding the universe.
00:13:15.360 | - Yeah. - And I think he's right about that.
00:13:17.360 | And of course, although we're familiar with the way his computers play Go and chess,
00:13:26.640 | he's already made contributions to science through understanding protein folding better
00:13:33.040 | than the best human chemists. And so already he's on the path to showing ways in which computers
00:13:40.240 | have the power to learn and do things by being able to analyze enormous samples in a short time
00:13:46.160 | to do better than humans. And so I think he would resonate with what I've just said,
00:13:51.680 | that it may be that in these other fundamental questions, the computers will play a crucial role.
00:13:57.360 | - Yeah, and they're also doing quantum mechanical simulation of electrons. They're doing
00:14:03.680 | control of high temperature plasmas, fusion reactors.
00:14:07.680 | - Yes, that's a new thing, which is very interesting. They can suppress the instabilities
00:14:11.520 | in these tokamaks better than any other way. Yeah.
00:14:14.800 | - And it's just the march of progress by AIs in science is making big strides. Do you think an AI
00:14:25.200 | system will win a Nobel Prize in this century? What do you think? Does that make you
00:14:32.560 | sad? - If I can digress and put in a plug for my
00:14:35.920 | next book, it has a chapter saying why Nobel Prizes do more harm than good. So on a quite
00:14:42.480 | separate subject, I think Nobel Prizes do a great deal of damage to the perception of the way
00:14:48.080 | science is done. Of course, if you ask who or what deserves the credit for any scientific discovery,
00:14:53.760 | it may be often someone who has an idea, a team of people who run a big experiment, et cetera.
00:14:59.600 | And of course, it's the quality of the equipment, which is crucial. And certainly in the subjects I
00:15:07.520 | do in astronomy, the huge advances we've had come not from us being more intelligent than
00:15:15.200 | Aristotle was, but through us having far, far better data from powerful telescopes on the
00:15:22.240 | ground and in space. And also, incidentally, we've benefited hugely in astronomy from
00:15:27.840 | computer simulations. Because if you are a subatomic physicist, then of course, you crash
00:15:36.160 | together the particles in a big accelerator like the one at CERN and see what happens.
00:15:41.840 | But I can't crash together two galaxies or two stars and see what happens. But in the
00:15:48.080 | virtual world of a computer, one can do simulations like that. And the power of computers is such that
00:15:56.000 | these simulations can yield phenomena and insights which we wouldn't have guessed beforehand. And
00:16:05.520 | the way we can feel we're making progress in trying to understand some of these phenomena,
00:16:10.640 | why galaxies have the size and shape they do and all that, is because we can do simulations
00:16:16.480 | tweaking different initial conditions and seeing which gives the best fit to what we actually
00:16:21.920 | observe. And so, that's a way in which we've made progress in using computers. And incidentally,
00:16:28.800 | we also now need to analyze data because one thinks of astronomy as being traditionally
00:16:34.240 | a rather data-poor subject. But the European satellite called Gaia has just put online the
00:16:42.000 | speeds and colors and properties of nearly two billion stars in the Milky Way, which we do
00:16:48.880 | fantastic analyses of. And that, of course, could not be done at all without the
00:16:53.520 | number-crunching capacity of computers. LB: And the new methods of machine learning
00:16:57.680 | actually love raw data, the kind that astronomy provides, organized, structured raw data.
00:17:04.640 | MR: Yeah. Well, indeed, because the reason they really have a benefit over us
00:17:08.240 | is that they can learn and think so much faster. That's how they can learn to play chess and go.
00:17:14.880 | That's how they can learn to diagnose lung cancer better than a radiologist, because they can look
00:17:19.920 | at 100,000 scans in a few days, whereas no human radiologist sees that many in a lifetime.
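For the Gaia catalogue mentioned a moment earlier, the kind of number-crunching Rees describes typically starts with a query against the public archive. A minimal sketch, assuming the astroquery package and the Gaia DR3 table and column names; this is an illustration, not part of the conversation:

```python
# Minimal sketch: pull a small slice of the Gaia catalogue via astroquery's
# TAP interface. Table and column names are assumed from the public DR3 release.
from astroquery.gaia import Gaia

job = Gaia.launch_job(
    "SELECT TOP 10 source_id, ra, dec, parallax, pmra, pmdec, phot_g_mean_mag "
    "FROM gaiadr3.gaia_source "
    "WHERE parallax > 10"      # nearby stars: parallax > 10 mas, i.e. within ~100 pc
)
stars = job.get_results()       # returns an astropy Table
print(stars)
```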
00:17:27.520 | LB: Well, there's still magic to the human intelligence, to the intuition,
00:17:33.040 | to the common sense reasoning. MR: Well, we hope so.
00:17:36.320 | LB: For now. Well, what is the new book that you mentioned?
00:17:40.080 | MR: The book I mentioned is called If Science Is to Save Us. It's coming out in September,
00:17:44.960 | and it's on the big challenges of science, climate, dealing with biosafety and dealing
00:17:56.480 | with cyber safety. And also, it's got chapters on the way science is organized in universities
00:18:03.840 | and academies, et cetera, and the ethics of science and the education. And the limits, yes.
00:18:12.560 | LB: Well, let me actually just stroll around the beautiful and the strange of the universe.
00:18:18.080 | Over 20 years ago, you hypothesized that we would solve the mystery of dark matter by now.
00:18:24.800 | So, unfortunately, we didn't quite yet. First, what is dark matter, and why has it been
00:18:33.680 | so tough to figure out? MR: Well, I mean, we learned that
00:18:37.040 | galaxies and other large-scale structures which are moving around but prevented from flying apart
00:18:44.640 | by gravity would be flying apart if they only contained the stuff we see, if everything in
00:18:53.600 | them was shining. And to understand how galaxies formed and why they do remain confined to the same
00:19:02.400 | size, one has to infer that there's about five times as much stuff producing gravitational forces
00:19:09.680 | than the total amount of stuff in the gas and stars that we see. And that stuff is called
00:19:15.600 | dark matter. That's the leading name. It's not dark, it's just transparent, et cetera.
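The "five times as much stuff" inference can be illustrated with a toy calculation: compare the mass needed to keep a galaxy's outer material in orbit with the mass seen in stars and gas. The numbers below are rough, assumed Milky-Way-like values; a single galaxy gives a ratio of order ten rather than the cosmic average of about five, which comes from combining many such measurements with other evidence:

```python
# Toy version of the dark-matter inference: dynamical mass needed to bind a
# galaxy versus its luminous mass. Round, assumed numbers for a Milky-Way-like
# galaxy; order-of-magnitude only.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30              # kg
KPC = 3.086e19                # m

v = 220e3                     # roughly flat rotation speed, m/s
R = 50 * KPC                  # radius well outside the visible disc

dynamical_mass = v**2 * R / G # mass required for circular orbits at speed v and radius R
luminous_mass = 6e10 * M_SUN  # rough stellar-plus-gas mass

print(f"dynamical mass ~ {dynamical_mass / M_SUN:.1e} solar masses")
print(f"luminous mass  ~ {luminous_mass / M_SUN:.1e} solar masses")
print(f"ratio ~ {dynamical_mass / luminous_mass:.0f}")
```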
00:19:21.200 | And the most likely interpretation is that it's a swarm of microscopic particles which have no
00:19:31.440 | electric charge and very small cross-sections for hitting each other or hitting anything else.
00:19:36.800 | So, they swarm around, and we can detect their collective effects. And when we do computer
00:19:42.080 | simulations of how galaxies form and evolve and how they emerge from the Big Bang, then we get a
00:19:49.360 | nice consistent picture if we put in five times as much mass in the form of these mysterious dark
00:19:59.040 | particles. And for instance, it works better if we think they're non-interacting particles than if
00:20:04.800 | you think they're a gas which would have shockwaves and things. So, we know something about the
00:20:08.560 | properties of these, but we don't know what they are. And the disappointment compared to my guess
00:20:14.560 | 20 years ago is that particles answering this description have not yet been found. It was
00:20:20.480 | thought that the big accelerator, the Large Hadron Collider at CERN, which is the world's biggest,
00:20:26.080 | might have found a new class of particles which would have been the obvious candidates,
00:20:30.640 | and it hasn't. And some people say, "Well, dark matter can't be there," etc. But what I would
00:20:37.840 | argue is that there's a huge amount of parameter space that hasn't been explored. There are other
00:20:44.560 | kinds of particles called axions, which behave slightly differently, which are a good candidate.
00:20:49.680 | And there's a factor of 10 powers of 10 between the heaviest particles that could be created by
00:20:58.000 | the Large Hadron Collider and the heaviest particles which on theoretical grounds could exist
00:21:03.760 | without turning into black holes. So, there's a huge amount of possible particles which could
00:21:09.760 | be out there as remnants of the Big Bang, but which we wouldn't be able to detect so easily.
00:21:15.120 | So, the fact that we've got new constraints on what the dark matter could be doesn't diminish my
00:21:22.000 | belief that it's there in the form of particles because we've only explored a small fraction of
00:21:26.720 | parameter space. LB: So, there's this search. You're
00:21:29.920 | literally, pun unintended, searching in the dark here in this giant parameter space of
00:21:38.640 | possible particles. You're searching for… I mean, there could be all kinds of particles.
00:21:43.440 | CR: There could be, and there's some which may be very, very hard to detect. But I think we can hope
00:21:48.480 | for some new theoretical ideas because one point which perhaps you'd like to discuss more is about
00:21:55.840 | the very early stage of the Big Bang. And the situation now is that we have an outline picture
00:22:02.880 | for how the universe has evolved from the time when it was expanding in just a nanosecond
00:22:10.640 | up to the present. And we could do that because after a nanosecond, the physics of the material
00:22:18.160 | is in the same range that we can test in the lab. After a nanosecond, the particles move around like
00:22:26.160 | those in the Large Hadron Collider. If you wait for one second, they're rather like in the centers
00:22:31.440 | of the hottest stars, and nuclear reactions produce hydrogen and helium, etc., which fit the data. So,
00:22:37.600 | we can with confidence extrapolate back to when the universe was a nanosecond old. Indeed,
00:22:42.240 | I think we can do it with as much confidence as anything a geologist tells you about the early
00:22:46.720 | history of the Earth. And that's huge progress in the last 50 years. But any progress puts in
00:22:53.040 | sharper focus new mysteries. And of course, the new mysteries in this context are why is the
00:23:00.960 | universe expanding the way it is? Why does it contain this mixture of atoms and dark matter
00:23:06.960 | and radiation? And why does it have the properties which allow galaxies to form, being fairly smooth,
00:23:13.760 | but not completely smooth? And the answer to those questions are generally believed to lie
00:23:19.520 | in a much, much earlier stage of the universe when conditions were much more extreme,
00:23:24.960 | and therefore far beyond the stage where we had the foothold in experiments. Very theoretical.
00:23:30.480 | And so, we don't have a convincing theory. We just have ideas. Until we have something like
00:23:36.160 | string theory or some other clues to the ultra-early universe, that's going to remain
00:23:42.000 | speculative. So, there's a big gap. And to say how big the gap is, if we take the observable universe
00:23:49.120 | out to a bit more than 10 billion light years, then when the universe was a nanosecond old,
00:23:56.640 | that would have been squeezed down to the size of our solar system or compressed into that volume.
00:24:02.480 | But the times we're talking about when the key properties of the universe were first imprinted
00:24:08.800 | were times when that entire universe was squeezed down to the size of a tennis ball,
00:24:15.120 | or baseball if you prefer, and it emerged from something microscopic. So, it's a huge extrapolation.
00:24:21.600 | And it's not surprising that since it's so far from our experimental range of detectability,
00:24:27.680 | we are still groping for ideas.
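The "squeezed down to the size of our solar system" claim can be checked with a rough scaling sketch: in the radiation era, temperature falls roughly as one over the square root of time (about 1 MeV at one second), and sizes scale as one over temperature. The present-day radius below is the "bit more than 10 billion light years" from the conversation; everything else is order-of-magnitude and assumed:

```python
# Rough scaling check: size of today's observable universe at t = 1 nanosecond.
# Assumes radiation-era scaling T ~ 1 MeV / sqrt(t in seconds) and lengths ~ 1/T.
K_B_MEV_PER_K = 8.617e-11       # Boltzmann constant in MeV per kelvin
LIGHT_YEAR_M = 9.461e15         # metres per light year
AU_M = 1.496e11                 # metres per astronomical unit

t = 1e-9                        # one nanosecond
T_then_MeV = 1.0 / t**0.5       # ~3e4 MeV, i.e. tens of GeV (LHC-ish energies)
T_now_MeV = 2.725 * K_B_MEV_PER_K

shrink = T_then_MeV / T_now_MeV # factor by which lengths were smaller then

radius_now_m = 14e9 * LIGHT_YEAR_M   # "a bit more than 10 billion light years"
radius_then_m = radius_now_m / shrink

print(f"shrink factor ~ {shrink:.1e}")
print(f"radius at 1 ns ~ {radius_then_m / AU_M:.0f} AU, i.e. solar-system scale")
```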
00:24:30.320 | LR: But you think first theory will reach into that place, and then experiment will
00:24:37.840 | perhaps one day catch up? Maybe simulation?
00:24:40.880 | RH: Well, I think in a sense it's a combination. I think what we hope for is that there'll be a
00:24:47.360 | theory which applies to the early universe, but which also has consequences which we can test in
00:24:54.880 | our present-day universe like discovering why neutrinos exist or things like that. And that's
00:25:03.120 | the thing which, as I mentioned, we may perhaps need a bit of AI to help us to calculate. But I
00:25:09.120 | think the hope would be that we will have a theory which applies to the very, very extreme early
00:25:16.880 | stages of the universe, but which gains credibility and gains confidence, because it also manages to
00:25:22.560 | account for otherwise unexplained features of the low-energy world, in what people call the
00:25:27.840 | standard model of particle physics, where there are lots of undetermined numbers. So, it may
00:25:33.040 | help with that.
00:25:34.240 | LR: So, we're dancing between physics and philosophy a little bit, but what do you think
00:25:41.760 | happened before the Big Bang? So, this feels like something that's out of the reach of science.
00:25:47.760 | RH: It's out of the reach of present science, because science developed and as the frontiers
00:25:53.200 | advance, then new problems come into focus that couldn't have been postulated before. I mean,
00:25:58.800 | if I think of my own career, when I was a student, the evidence for the Big Bang was pretty weak,
00:26:05.840 | whereas now it's extremely strong. But we are now thinking about the reason why the universe is the
00:26:12.640 | way it is and all that. So, I would put all these things we've just mentioned in the category of
00:26:20.000 | speculative science, and I don't see a bifurcation between that and philosophy. But of course,
00:26:26.720 | to answer your question, if we do want to understand the very early universe,
00:26:32.080 | then we've got to realize that it may involve even more counterintuitive concepts than quantum
00:26:38.480 | theory does, because it's a condition even further away from everyday world than quantum theory is.
00:26:46.080 | And remember, our brains evolved and haven't changed much since our ancestors roamed the
00:26:53.920 | African savanna and looked at the everyday world. And it's rather amazing that we've been able to
00:26:59.440 | make some sense of the quantum micro world and of the cosmos. But there may be some things which are
00:27:05.440 | beyond us. And certainly, as we implied, there are things that we don't yet understand at all.
00:27:10.400 | And of course, one concept we might have to jettison is the idea of three dimensions of space
00:27:16.320 | and time just ticking away. There are lots of ideas. I mean, I think Stephen Hawking had an
00:27:23.760 | idea that talking about what happens before the Big Bang, it's like asking what happens if you go
00:27:30.560 | north from the North Pole. It somehow closes off. That's just one idea. I don't like that idea,
00:27:36.640 | but that's a possible one. And so, we just don't know what happened at the very beginning of the
00:27:44.400 | Big Bang. Were there many Big Bangs rather than one, etc? And those are issues which
00:27:52.320 | we may be able to get some foothold on from some new theory. But even then, we won't be able to
00:28:03.600 | directly test the theories. But I think it's a heresy to think you have to be able to test every
00:28:10.480 | prediction of a theory. And I'll give you another example. We take seriously what Einstein's theory
00:28:16.720 | says about the inside of black holes, even though we can't observe them, because that theory has
00:28:22.960 | been vindicated in many other places--in cosmology, in black holes, gravitational waves, and all those
00:28:28.880 | things. Likewise, if we had a theory which explains some things about the early history of our Big
00:28:36.720 | Bang and the present universe, then we would take seriously the inference if it predicted many Big
00:28:44.560 | Bangs, not one, even though we can't observe the other ones. So, the example is that we can
00:28:49.360 | take seriously a prediction if it's the consequence of a theory that we believe on other grounds.
00:28:56.160 | We don't need to be able to detect another Big Bang in order to take it seriously.
00:29:01.360 | LR: It may not be a proof, but it's a good indication that this is the direction where
00:29:08.400 | the truth lies. RL: Yeah, if the theory
00:29:10.720 | has gained confidence in other ways. LR: Yes. Where do you sense? Do you think
00:29:15.120 | there's other universes besides our own? RL: There are sort of well-defined theories
00:29:20.640 | which make assumptions about the physics at the relevant time. And this time, incidentally,
00:29:26.800 | is 10 to the power minus 36 seconds or earlier than that. So, this tiny sliver of time. And
00:29:34.720 | there are some theories, a famous one due to Andrei Linde, the Russian cosmologist now at Stanford,
00:29:42.160 | called eternal inflation, which did predict an eternal production of new Big Bangs, as it were.
00:29:49.680 | And that's based on specific assumptions about the physics. But those assumptions, of course,
00:29:55.680 | are just hypotheses which aren't vindicated. But there are other theories which only predict one
00:30:02.000 | Big Bang. So, I think we should be open-minded and not dogmatic about these options until we do
00:30:09.280 | understand the relevant physics. But there are these different scenarios of very different ideas
00:30:14.640 | about this. But I think all of them have the feature that physical reality is a lot more
00:30:21.680 | extensive than what we can see through our telescope. I think even most conservative
00:30:25.760 | astronomers would say that because we can see out with our telescopes to a sort of horizon,
00:30:32.240 | which is about, depending on how you measure it, maybe 15 billion light years away or something
00:30:39.280 | like that. But that horizon of our observations is no more physical reality than the horizon
00:30:46.800 | around you if you're in the ocean and looking out at your horizon. There's no reason to think that
00:30:54.160 | the ocean ends just beyond your horizon. And likewise, there's no reason to think that the
00:30:59.600 | aftermath of our Big Bang ends just at the boundary of what we can see. Indeed, there are quite strong
00:31:06.400 | arguments that it probably goes on about 100 times further. It may even go on so much further
00:31:12.640 | that all combinatorial possibilities are replicated. And there's another set of people like us sitting
00:31:19.840 | in a room like this. CB: Every possible combination of--
00:31:22.240 | RL: Yeah, that could happen. That's not logically impossible. But I think many people would accept
00:31:29.600 | that it does go on and contain probably a million times as much stuff as what we can see within a
00:31:37.040 | horizon. The reason for that incidentally is that if we look as far as we can in one direction and
00:31:43.040 | in the opposite direction, then the conditions don't differ by more than one part in 100,000.
00:31:49.120 | So, that means that if we're part of some finite structure, the gradient across the part we can see
00:31:54.960 | is very small. And so, that suggests that it probably does go on a lot further. And the best
00:32:00.640 | estimates say it must go on at least 20 times further.
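Rees's "at least 20 times further" comes from the smoothness-gradient argument he sketches; a closely related standard way to land on a similar number uses the measured flatness of space, where any curvature radius must exceed the Hubble radius divided by the square root of the curvature parameter. The bound used below is an assumed round value of roughly current data, not a figure from the conversation:

```python
# A related flatness argument for how far space extends beyond our horizon.
# If space is curved, its curvature radius is R_curv = (c/H0) / sqrt(|Omega_k|);
# |Omega_k| < ~0.002 is an assumed, roughly current observational bound.
import math

C = 2.998e8                    # speed of light, m/s
H0 = 70e3 / 3.086e22           # ~70 km/s/Mpc converted to 1/s
OMEGA_K_MAX = 0.002            # assumed upper limit on spatial curvature

hubble_radius = C / H0
curvature_radius = hubble_radius / math.sqrt(OMEGA_K_MAX)

print(f"curvature radius > ~{curvature_radius / hubble_radius:.0f} Hubble radii")
```

Since the data are consistent with exactly zero curvature, this is only a lower limit; the true extent could be vastly larger, as the conversation goes on to say.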
00:32:02.800 | CB: Is that exciting or terrifying to you? Just the spans of it all, the wide,
00:32:10.720 | everything that lies beyond the horizon? That example doesn't even hold for Earth. It goes
00:32:17.760 | way, way farther. And on top of that, just to take your metaphor further on the ocean,
00:32:23.280 | while we're on top of this ocean, not only can we not see beyond the horizon,
00:32:28.640 | we also don't know much about the depth of the ocean, nor the actual mechanism of observation
00:32:36.320 | that's in our head. RL: Yes. No, I take
00:32:39.120 | all those points you make. Yes, yes. But I think even the solar system is
00:32:46.800 | pretty vast by human standards. And so, I don't think the perception of this utterly vast cosmos
00:32:55.120 | need have any deeper impact on us than just realizing that we are very small on the scale
00:33:02.960 | of the external world. CB: Yeah. It's humbling though. It's
00:33:08.560 | humbling, and depending where your ego is, it's humbling.
00:33:13.200 | RL: Well, if you start off very unhumble indeed, it may make a difference. But for most of us,
00:33:17.760 | I don't think it makes much difference. And well, there's a more general question, of course, about
00:33:24.320 | whether the human race as such is something which is very special, or if on the other hand,
00:33:33.680 | it's just one of many such species elsewhere in the universe, or indeed existing at different
00:33:42.160 | times in our universe. CB: To me, it feels almost obvious that
00:33:48.400 | the universe should be full of alien life, perhaps dead alien civilizations, but just
00:33:54.800 | the vastness of space. And it just feels wrong to think of Earth as somehow special. It sure as heck
00:34:04.800 | doesn't look that special. The more we learn, the less special it seems.
00:34:09.680 | RL: Well, I mean, I don't agree with that as far as life is concerned, because remember that we
00:34:16.720 | don't understand how life began here on Earth. We don't understand, although we know there was
00:34:22.320 | an evolution of simple life to complex life, we don't understand what caused the transition
00:34:28.240 | between complex chemistry and the first replicating, metabolizing entity we call alive.
00:34:34.880 | CB: Yes. RL: That's a mystery. And
00:34:37.120 | serious physicists and chemists are now thinking about it, but we don't know. So, we therefore
00:34:44.960 | can't say, "Was it a rare fluke which would not have happened anywhere else, or was it something
00:34:52.720 | which involves a process that would have happened in any other planet where conditions were like
00:34:58.640 | they were on the young Earth?" So, we can't say that now. I think many of us would indeed bet
00:35:06.080 | that probably some kind of life exists elsewhere. But even if you accept that, then there are many
00:35:14.000 | contingencies going from simple life to present-day life. And some biologists like Stephen
00:35:21.040 | Jay Gould thought that if you reran evolution, you'd end up with something quite different,
00:35:26.160 | and maybe not with an intelligent species. So, the contingencies in evolution may
00:35:31.600 | militate against the emergence of intelligence, even if life gets started in lots of places. So,
00:35:38.240 | I think these are still completely open questions. And that's why it's such an exciting time now
00:35:42.960 | that we are starting to be able to address these. I mean, I mentioned the fact that the origin of
00:35:47.920 | life is a question that we may be able to understand, and serious people are working on it.
00:35:54.320 | It's usually put in the sort of too difficult box. Everyone knew it was important, but they didn't
00:35:58.240 | know how to tackle it or what experiments to do. But it's not like that now. And that's partly
00:36:03.600 | because of clever experiments, but I think most importantly, because we are aware that we can look
00:36:10.080 | for life in other places, other places in our solar system, and of course, on the exoplanets
00:36:16.320 | around other stars. And within 10 or 20 years, I think two things could happen, which will be
00:36:23.120 | really, really important. We might, with the next big telescope, be able to image some of the
00:36:30.960 | Earth-like planets around other stars. >> Image, like get a picture?
00:36:36.000 | >> Well, actually, let me caveat that. It'd take 50 years to get a resolved image,
00:36:40.640 | but we could actually detect the light. Because now, of course, these exoplanets are detected by
00:36:47.120 | their effects on the parent star. They either cause their parent star to dim slightly when they
00:36:52.800 | transit across in front of it, and so we see the dips, or their gravitational pull makes the star
00:36:58.880 | wobble a bit. So, most of the 5,000-plus planets that have been found around other stars, they've
00:37:05.440 | been found indirectly by their effect in one of those two ways on the parent star.
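The two indirect signals Rees describes can be put into numbers with the Earth-Sun system as a worked example; the figures below are rough textbook values, not from the conversation:

```python
# The two indirect exoplanet signals, sized for an Earth-Sun analogue.

# 1. Transit: the fractional dimming equals the ratio of the two discs' areas.
R_EARTH = 6.371e6              # m
R_SUN = 6.957e8                # m
transit_depth = (R_EARTH / R_SUN) ** 2
print(f"transit dip ~ {transit_depth:.1e}  (~{transit_depth * 1e6:.0f} parts per million)")

# 2. Wobble: star and planet orbit their common centre of mass, so the star's
#    reflex speed is the planet's orbital speed scaled down by the mass ratio.
M_EARTH = 5.972e24             # kg
M_SUN = 1.989e30               # kg
v_planet = 29.8e3              # Earth's orbital speed, m/s
v_star = v_planet * M_EARTH / M_SUN
print(f"stellar wobble ~ {v_star:.2f} m/s")   # under 0.1 m/s, which is why Earth twins are hard to find
```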
00:37:09.360 | >> You could still do a pretty good job of estimating size, all those kinds of things.
00:37:13.440 | >> Size and the mass, you can estimate. But detecting the actual light from one of these
00:37:22.160 | exoplanets hasn't really been done yet, except in one or two very, very bright, big planets.
00:37:27.600 | >> So, maybe like James Webb Telescope would be--
00:37:29.520 | >> Well, James Webb may do this, but even better will be the European ground-based telescope,
00:37:36.080 | called unimaginatively the Extremely Large Telescope, which has a 39-meter diameter mirror.
00:37:41.600 | 39 meters, a mosaic of 800 sheets of glass. And that will collect enough light from one of these
00:37:49.920 | exoplanets around a nearby star to be able to separate out its light from that of the star,
00:37:57.760 | which is millions of times brighter, and get the spectrum of the planet and see if it's got oxygen
00:38:03.840 | or chlorophyll and things in it. So, that will come. James Webb may make some steps there. But
00:38:11.840 | I think we can look forward to learning quite a bit in the next 20 years. Because I like to say,
00:38:18.240 | supposing that aliens are looking at the solar system, then they'd see the Sun as an ordinary
00:38:24.320 | star. They'd see the Earth as, in Carl Sagan's nice phrase, a pale blue dot, lying very close
00:38:31.040 | in the sky to its star, our Sun, and much, much, much fainter. But if they could observe that dot,
00:38:38.480 | they could learn quite a bit. They could perhaps get the spectrum of the light and find the
00:38:44.320 | atmosphere. They'd find the shade of blue is slightly different, depending on whether the
00:38:49.200 | Pacific Ocean or landmass of Asia was facing them. So, they could infer the length of the day
00:38:54.800 | and the oceans and continents, and maybe something about the seasons and the climate.
00:38:59.840 | And that's the kind of calculation and inference we might be able to draw within the next 10 or 20
00:39:08.160 | years about other exoplanets. And evidence of some sort of biosphere on one of them would,
00:39:16.240 | of course, be crucial. And it would rule out the still logical possibility that life is unique.
00:39:22.080 | But there's another way in which this may happen in the next 20 years. People think there could
00:39:25.840 | be something swimming under the ice of Europa and Enceladus, and probes are being sent to maybe not
00:39:33.600 | quite go under the ice, but detect the spray coming out to see if there's evidence for
00:39:38.480 | organics in that. And if we found any evidence for an origin of life that happened in either of
00:39:47.120 | those places, that would immediately be important. Because if life has originated twice independently
00:39:54.640 | in one planetary system, the solar system, that would tell us straight away it wasn't a rare
00:39:59.280 | accident and must have happened billions of times in the galaxy. At the moment, we can't rule out
00:40:06.000 | it being unique. And incidentally, if we found life on Mars, then that would still be ambiguous
00:40:11.600 | because people have realized that this early life could have got from Mars to Earth or vice versa
00:40:18.080 | on meteorites. So, if you found life on Mars, then some skeptics could still say,
00:40:23.760 | if it's a single origin. But I think- - But Europa's far enough-
00:40:27.440 | - That's far enough away. - Statistically because-
00:40:29.280 | - So, that's why that would be especially- - It's always the skeptics that ruin a good party.
00:40:35.360 | - But we need them, of course. - We need them at the party. We need
00:40:39.680 | some skeptics at the party. But boy, would that be so exciting to find life on one of the moons.
00:40:46.800 | - Yeah, yeah. - Because it means that life is everywhere.
00:40:50.320 | That'll just be any kind of vegetation or life. The question of the aliens or science fiction
00:40:57.280 | is a different matter. - Intelligent aliens. Yeah,
00:41:00.400 | but if you have a good indication that there's life elsewhere in the solar system,
00:41:06.080 | that means life is everywhere. - Yep.
00:41:10.640 | - And that's, I don't know if that's terrifying or what that is because if life is everywhere,
00:41:17.200 | why is intelligent life not everywhere? Why, I mean, you've talked about that most likely
00:41:22.720 | alien civilizations, if they are out there, they would likely be far ahead of us.
00:41:29.040 | The ones that would actually communicate with us.
00:41:32.640 | - Yes. - And that,
00:41:34.480 | again, one of those things that is both exciting and terrifying. You've mentioned that they're
00:41:41.440 | likely not to be of biological nature. - Well, I think that's important. Of
00:41:46.960 | course, again, it's a speculation, but in speculating about intelligent life. And I
00:41:53.040 | take the search seriously. In fact, I chair the committee for the program that the Russian-American investor,
00:42:00.160 | Yuri Milner, supports to look for intelligent life. He's putting $10 million a year into better
00:42:07.360 | equipment and getting time on telescopes to do this. And so I think it's worthwhile, even though
00:42:12.640 | I don't hold my breath for success. It's very exciting. But that does lead me to wonder what
00:42:19.120 | might be detected. And I think, well, we don't know. We've got to be open-minded about anything.
00:42:25.360 | We've no idea what it could be. And so any anomalous objects or even some strange,
00:42:30.080 | shiny objects in the solar system or anything, we've got to keep our eyes open for. But I think
00:42:36.480 | if we ask what about a planet like the Earth where evolution had taken more or less the same track,
00:42:44.960 | then as you say, it wouldn't be synchronized. If it had lagged behind, then of course, it would not
00:42:51.920 | have got to advanced life. But it may have had a head start. It may have formed on a planet around
00:42:59.600 | an older star. But then let's ask what we would see. It's taken nearly 4 billion years from the
00:43:06.560 | first life to us, and we've now got this technological civilization which could make
00:43:13.520 | itself detectable to any aliens out there. But I think most people would say that this
00:43:24.080 | civilization of flesh and blood creatures in a collective civilization may not last more than
00:43:30.000 | a few hundred years more. I think that some people would say it will kill itself off.
00:43:37.600 | But I'm more optimistic. And I would say that what we're going to have in future is
00:43:45.360 | no longer the slow Darwinian selection, but we're going to have what I call secular
00:43:51.920 | intelligent design, which will be humans designing their progeny to be better adapted to where they
00:44:01.520 | are. And if they go to Mars or somewhere, they're badly adapted, and they want to adapt a lot.
00:44:09.360 | And so, they will adapt. But there may be some limits to what could be done with flesh and blood.
00:44:15.840 | And so, they may become largely electronic, download their brains and be electronic entities.
00:44:23.920 | And if they're electronic, then what's important is that they're near immortal.
00:44:29.120 | And also, they won't necessarily want to be on a planet with an atmosphere or gravity. They may
00:44:36.240 | go off into the blue yonder. And if they're near immortal, they won't be daunted by interstellar
00:44:41.200 | travel taking a long time. And so, if we looked at what would happen on the Earth in the next
00:44:51.280 | millions of years, then there may be these electronic entities which have been sent out
00:44:58.320 | and are now far away from the Earth, but still burping away in some fashion to be detected.
00:45:05.200 | And so, this therefore leads me to think that if there was another planet which had evolved like
00:45:13.920 | the Earth and was ahead of us, it wouldn't be synchronized. So, we wouldn't see a flesh and
00:45:19.680 | blood civilization, but we would see these electronic progeny as it were. And then this
00:45:26.240 | raises another question because there's the famous argument against there being lots of aliens out
00:45:34.240 | there which is that they would come and invade us and eat us or something like that. That's a common
00:45:40.320 | idea which Fermi is attributed to have been the first to say. And I think there's an escape clause
00:45:48.880 | to that because these entities would be, say, they evolved by secular intelligent design,
00:45:57.200 | designed by their predecessors and then designed by us. And whereas Darwinian selection requires
00:46:04.960 | two things, it requires aggression and intelligence. This future intelligent design
00:46:12.880 | may favor intelligence because that's what they were designed for, but it may not favor aggression.
00:46:18.560 | And so, these future entities, they may be sitting there thinking deep thoughts,
00:46:26.720 | and not being at all expansionist. So, they could be out there.
00:46:30.960 | >> Yeah.
00:46:31.760 | >> And we can't refute their existence in the way the Fermi paradox is supposed to
00:46:37.520 | refute their existence because these would not be aggressive or expansionist.
00:46:41.600 | >> Well, maybe evolution requires competition, not aggression. And I wonder if competition can
00:46:47.200 | take forms that are non-expansionary. So, you can still have fun competing in the space of ideas,
00:46:53.760 | which maybe primarily-
00:46:56.160 | >> The Dory philosophers, perhaps, yeah.
00:46:57.840 | >> In a way, right. It's an intellectual exercise versus a sort of violent exercise.
00:47:07.040 | So, what does this civilization on Mars look like? So, do you think we would more and more,
00:47:14.960 | you know, maybe start with some genetic modification and then move to basically cyborgs,
00:47:20.560 | increasing integration of electronic systems, computational systems into our bodies and
00:47:26.000 | brains?
00:47:26.400 | >> This is a theme of my other new book out this year, which is called The End of Astronauts.
00:47:32.560 | >> The End of Astronauts.
00:47:33.680 | >> And it's co-written with my old friend and colleague from Berkeley, Don Goldsmith. And
00:47:40.560 | it's really about the role of human spaceflight versus sort of robotic spaceflight. And just to
00:47:48.160 | summarize what it says, it argues that the practical case for sending humans into space
00:47:55.680 | is getting weaker all the time as robots get better and more capable. Robots 50 years ago
00:48:03.280 | couldn't do anything very much, but now they could assemble big structures in space or on the Moon,
00:48:10.320 | and they could probably do exploration. Well, present ones on Mars can't actually do the
00:48:19.600 | geology, but future AI will be able to do the geology and already they can dig on Mars. And so,
00:48:26.720 | if you want to do exploration of Mars and of course, even more of Enceladus or Europa,
00:48:34.000 | where you could never send humans, we depend on robots. And they're far, far cheaper because to
00:48:39.200 | send a human to Mars requires feeding them for 200 days on the journey there and bringing them
00:48:45.520 | back. And neither of those are necessary for robots. So, the practical case for humans is
00:48:50.720 | getting very, very weak. And if humans go, it's only as an adventure, really. And so, the line
00:48:56.720 | in our book is that human spaceflight should not be pursued by NASA or public funding agencies
00:49:08.080 | because it has no practical purpose, but also because it's especially expensive if they do it
00:49:15.680 | because they would have to be risk-averse in launching civilians into space. I can illustrate
00:49:23.600 | that by noting that the shuttle was launched 135 times, and it had two spectacular failures,
00:49:32.560 | which each killed the seven people in the crew. And it had been mistakenly presented as safe
00:49:40.160 | for civilians. And there was a woman schoolteacher killed in one of them. It was a big
00:49:44.720 | national trauma, and they tried to make it safer still. But if you launch into space,
00:49:51.680 | just the kind of people prepared to accept that sort of risk, and of course, test pilots and
00:49:57.360 | people who go hang gliding and go to the South Pole, etc., are prepared to accept a 2% risk at
00:50:04.320 | least for a big challenge, then of course, you do it more cheaply. And that's why I think
00:50:11.280 | human spaceflight should be left to the billionaires and their sponsors because
00:50:18.320 | then the taxpayers aren't paying, and they can launch simply those people who are prepared
00:50:24.960 | to accept high risks. Space adventure, not space tourism. And we should cheer them on.
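The risk arithmetic behind this point, using only the launch and failure counts quoted above:

```python
# Shuttle record quoted above: 135 launches, 2 fatal failures.
launches = 135
fatal_failures = 2

failure_rate = fatal_failures / launches
print(f"empirical failure rate ~ {failure_rate:.1%} per flight")   # ~1.5%, close to the ~2% adventurers accept
```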
00:50:32.000 | And as regards where they would go, then low Earth orbit, I suspect, can be done quite cheaply
00:50:41.200 | in the future. But going to Mars, which is very, very expensive and dangerous for humans,
00:50:48.080 | the only people who would go would be these adventurers, maybe on a one-way trip like some
00:50:55.760 | of the early polar explorers and Magellan and people like that, and we would cheer them on.
00:51:00.800 | And I expect and I very much hope that by the end of the century, there will be a small community
00:51:08.800 | of such people on Mars living very uncomfortably, far less comfortably than at the South Pole or
00:51:16.960 | the bottom of the ocean or the top of Everest. But they will be there, and they won't have a
00:51:22.800 | ticket, but they'll be there. Incidentally, I think it's a dangerous illusion to think, as
00:51:30.320 | Elon Musk has said, that we can have mass emigration from the Earth to Mars to escape
00:51:39.120 | the Earth's problems. It's a dangerous illusion because it's far easier to deal with climate
00:51:45.440 | change on Earth than to terraform Mars to make it properly habitable to humans. So, there's no
00:51:52.160 | planet B for ordinary risk-averse people. But for these crazy adventurers, then you can imagine that
00:51:58.080 | they would be trying to live on Mars as great pioneers. And by the end of the century,
00:52:04.640 | then there will be huge advances compared to the present in two things. First, in understanding
00:52:11.280 | genetics so as to genetically redesign one's offspring. And secondly, to use cyborg techniques
00:52:19.280 | to implant something in our brain or indeed think about downloading, etc. And those techniques will,
00:52:26.560 | one hopes, be heavily regulated on Earth on prudential and ethical grounds. And of course,
00:52:33.760 | we are pretty well adapted to the Earth, so we don't have the incentive to do these things in
00:52:37.760 | the way they will there. So, our argument is that it'll be those crazy pioneers on Mars
00:52:46.720 | using all these scientific advances, which will be controlled here away from the regulators,
00:52:53.680 | they will transition into a new post-human species. And so, if they do that and if they
00:53:01.440 | transition into something which is electronic eventually, because there may be some
00:53:07.200 | limits to the capacity of flesh and blood brains anyways, then those electronic entities
00:53:13.280 | may not want to stay on the planet like Mars. They may want to go away. And so,
00:53:18.160 | they'll be the precursors of the future evolution of life and intelligence coming from the Earth.
00:53:26.160 | And of course, there's one point which perhaps astronomers are more aware of than most people.
00:53:31.200 | Most people are aware that we are the outcome of 4 billion years of evolution.
00:53:37.280 | Most of them nonetheless probably think that we humans are somehow the culmination,
00:53:42.640 | the top of the tree. But no astronomers can believe that because astronomers know
00:53:49.600 | that the Earth is 4.5 billion years old. The Sun has been shining for that length of time,
00:53:57.600 | but the Sun has got 6 billion years more to go before it flares up and engulfs the inner planets.
00:54:02.880 | So, the Sun is less than halfway through its life, and the expanding universe goes on far longer
00:54:09.120 | still, maybe forever. And I like to quote Woody Allen who said, "Eternity is very long, especially
00:54:14.240 | towards the end." So, we shouldn't think of ourselves as maybe even the halfway stage in the
00:54:21.200 | emergence of cosmic complexity. And so, these entities who are post-cursors, they will go
00:54:29.520 | beyond the solar system. And of course, even if there's nothing else out there already,
00:54:33.360 | then they could populate the rest of the galaxy.
00:54:38.000 | >> And maybe eventually meet the others who are out there expanding as well.
00:54:42.160 | >> Yeah.
00:54:42.640 | >> Expanding, populating.
00:54:43.920 | >> Yes, yes.
00:54:44.320 | >> With expanded capacity for life and intelligence, all those kinds of things.
00:54:50.320 | >> Well, they might, but again, all bets are off because I can't conceive what they'd be like.
00:54:58.880 | They won't be green men and women with eyes on stalks. Maybe something quite different.
00:55:04.720 | We just don't know. But there's an interesting question actually, which comes up when I've
00:55:10.160 | sometimes spoken to audiences about this topic, but the question of consciousness and self-awareness.
00:55:15.360 | Because going back to philosophical questions, whether an electronic robot would be a zombie,
00:55:24.800 | or would it be conscious and self-aware? And I think there's no way of answering this empirically.
00:55:30.720 | And some people think that consciousness and self-awareness is an emergent property in any
00:55:38.320 | sufficiently complicated networks that they would be. Others say, "Well, maybe it's something
00:55:42.640 | special to the flesh and blood that we're made of. We don't know." And in a sense, this may not matter
00:55:49.680 | to the way things behave because they could be zombies and still behave as though they were
00:55:55.520 | intelligent. But I remember after one of my talks, someone came up and said, "Wouldn't it be sad
00:56:02.720 | if these future entities, which were the main intelligence in the universe,
00:56:08.720 | had no self-awareness?" So, there was nothing which could appreciate the wonder and mystery
00:56:14.960 | of the universe and the beauty of the universe in the way that we do. And so, it does perhaps
00:56:21.680 | affect one's perspective of whether you welcome or deplore this possible future scenario,
00:56:28.800 | depending on whether you think the future post-human entities are conscious and have
00:56:34.800 | an aesthetic sense, or whether they're just zombies. - And of course, you have to be humble
00:56:41.280 | to realize that self-awareness may not be the highest form of being, that humans have a very
00:56:50.240 | strong ego and a very strong sense of identity, like personal identity connected to this particular
00:56:57.280 | brain. It's not so obvious to me that that is somehow the highest achievement of a life form,
00:57:07.200 | that maybe this kind of... - You think something collective would be...
00:57:10.160 | - It's possible that... Well, I think from an alien perspective, when you look at Earth,
00:57:16.160 | it's not so obvious to me that individual humans are the atoms of intelligence. It could be the
00:57:24.080 | entire organism together, the collective intelligence. And so, we humans think of
00:57:28.320 | ourselves as individuals. We dress up, we wear ties and suits, and we give each other prizes.
00:57:33.280 | But in reality, the intelligence, the things we create that are beautiful, emerges from our
00:57:39.920 | interaction with each other. And that may be where the intelligence is, ideas jumping from one person
00:57:45.520 | to another over generations. - Yes, but we have experiences where we
00:57:49.840 | can appreciate beauty and wonder and all that. And a zombie may not have those experiences.
00:57:58.400 | - Yeah, or it may have a very different... You have a very black and white, harsh description
00:58:05.040 | of zom... Like a philosophical zombie that could be just a very different way to experience.
00:58:10.080 | And in terms of the explorers that colonize Mars,
00:58:18.800 | I mean, there are several things I want to mention. One,
00:58:26.640 | just at a high level: to me, that's one of the most inspiring things humans can do,
00:58:30.640 | is reach out into the unknown. That's in the space of ideas, in the space of science,
00:58:36.320 | but also the explorers. - Yes, no, I agree with that.
00:58:38.800 | - And that inspires people here on Earth more. I mean, it did in their... When going to the moon
00:58:46.320 | or going out to space in the 20th century, that inspired a generation of scientists.
00:58:51.040 | I think that also could be used to inspire a generation of new scientists in the 21st century
00:58:58.640 | by reaching out towards Mars. So in that sense, I think what Elon Musk and others are doing is
00:59:04.800 | actually quite inspiring. - No, I agree.
00:59:07.040 | - It's not a recreational thing. It actually has a deep humanitarian purpose of really
00:59:13.520 | inspiring the world. And then on the other one, to push back on your thought,
00:59:19.680 | I don't think Elon says we want to escape Earth's problems. It's more that we should allocate some
00:59:28.960 | small percentage of resources to have a backup plan. And because you yourself have spoken about
00:59:37.440 | and written about all the ways we clever humans could destroy ourselves. And I'm not sure...
00:59:45.360 | It does seem, when you look at the long arc of human history, it seems almost obvious that we
00:59:53.200 | need to become a multi-planetary species over a period if we are to survive many centuries.
00:59:59.200 | It seems that as we get cleverer and cleverer with the ways we can destroy ourselves,
01:00:06.080 | Earth is gonna become less and less safe. So in that sense, this is one of the things
01:00:15.920 | people talk about climate change, and that we need to respond to climate change,
01:00:20.000 | and that's a long-term investment we need to make. But it's not really long-term,
01:00:24.480 | it's a span of decades. I think what Elon is doing is a really long-term investment.
01:00:29.920 | We should be working on multi-planetary colonization now if we are to have it ready
01:00:36.000 | five centuries from now. And so taking those early steps. And then also, something happens.
01:00:44.000 | When you go into the unknown and do this really difficult thing, you discover something very new.
01:00:51.280 | You discover something about robotics, or materials engineering, or nutrition, or neuroscience,
01:00:57.120 | or human relations, or political systems that actually work well with humans. You discover
01:01:01.760 | all those things. And so it's a worthy effort to go out there and try to become cyborgs.
01:01:08.880 | - Yeah, no, I agree with that. I think the only different point I'd make is that
01:01:14.480 | this is gonna be very expensive if it's done in a risk-averse way. And that's why I think we
01:01:22.880 | should be grateful to the billionaires if they're going to sort of foster these opportunities for
01:01:31.200 | thrill-seeking risk-takers who we can all admire.
01:01:35.280 | - Yeah, by the way, I should push back on the billionaires, 'cause there's sometimes a negative
01:01:39.360 | connotation to the word billionaire. It's not about billionaires, it's about companies versus governments,
01:01:43.440 | because governments are the real billionaires and trillionaires. It's not the wealth, it's the
01:01:49.360 | capitalist imperative, which I think deserves a lot more praise than people are giving it.
01:02:02.000 | I'm troubled by the sort of criticism like it's billionaires playing with toys
01:02:05.920 | for their own pleasure. I think what some of these companies like SpaceX and Blue Origin are doing
01:02:14.240 | is some of the most inspiring engineering and even scientific work ever done in human history.
01:02:22.160 | - No, no, I agree. I think the people who've made the greatest wealth are people who've really been
01:02:28.160 | mega-benefactors. I mean, I think, you know...
01:02:30.080 | - Some of them, some of them.
01:02:32.320 | - Yeah, yes, some of them. But those who've founded Google and all that, and even Amazon,
01:02:38.880 | they're benefactors. They're in quite a different category, in my view, from those
01:02:44.960 | who just shuffle around money or crypto coins and things like that, who are...
01:02:52.320 | - Now you're really talking trash.
01:02:54.720 | - Yes, but I think if they use their money in these ways, that's fine. But I think it's true
01:03:02.160 | that far more money is owned by us collectively as taxpayers, but I think the fact is that in a
01:03:07.440 | democracy, there'd be big resistance to exposing human beings to very high risks if, in a sense,
01:03:16.400 | we share responsibility for it. And that's the reason I think it'd be done much more cheaply by
01:03:23.200 | these private funders.
01:03:24.240 | - That's an interesting hypothesis, but I have to push back. I don't know if it's obvious
01:03:29.600 | why NASA spent so much money and took such a long time to develop the things it was doing
01:03:38.240 | before Elon Musk came along, because I would love to live in a world where governments
01:03:44.240 | actually use taxpayer money to get some of the best engineers and scientists in the world
01:03:49.200 | and actually work across governments, Russia, China, United States, European Union, together
01:03:55.120 | to do some of these big projects. It's strange that Elon is able to do this much cheaper,
01:04:01.040 | much faster. It could have to do with risk aversion, you're right.
01:04:05.760 | - I think it's partly that he had the whole assembly within this one building,
01:04:15.120 | as it were, rather than depending on a supply chain. But I think it's also that he had a Silicon
01:04:21.680 | Valley culture and had younger people, whereas the big aerospace companies, Boeing and Lockheed
01:04:27.440 | Martin, they had people who were left over from the Apollo program in some cases. And so they
01:04:33.360 | weren't quite so lively. And indeed, quite apart from the controversial issues of the future of
01:04:39.680 | human space flight, in terms of the next generation of big rockets, then the one that Musk is going to
01:04:48.000 | launch for the first time this year, the huge one, is going to be far, far cheaper than the one that
01:04:55.600 | NASA has been working on at the same time. And that's because it will have a reusable first stage.
01:05:03.280 | And it's going to be great. It can launch over 100 tons into Earth orbit. And it's going to
01:05:10.480 | make it feasible to do things that I used to think were crazy, like having solar energy from space.
01:05:16.400 | That's no longer so crazy. If you can do that. And also for science, because its nose cone could
01:05:26.000 | contain within it something as big as the entire unfurled James Webb telescope mirror.
01:05:33.040 | And therefore, you can have a big telescope much more cheaply if you can launch it all in one
01:05:38.400 | piece. And so it's going to be hugely beneficial to science and to any practical use of space
01:05:45.040 | to have these cheaper rockets that are far more completely reusable than the ones NASA had. So
01:05:50.720 | I think Musk's done a tremendous service to space exploration and the whole space technology through
01:05:58.000 | these rockets, certainly. - Plus it's some big, sexy rocket. It's just great engineering.
01:06:04.640 | - Of course, yeah. - It's like looking at a beautiful big bridge
01:06:07.600 | that humans are capable, us descendants of apes are capable to do something so majestic.
01:06:12.800 | - Yes. And also the way they land, coming down on the barge, that's amazing.
01:06:16.560 | - It's both controls engineering, it's increasing sort of intelligence in these rockets, but also
01:06:23.680 | great propulsion engineering, materials, entrepreneurship. And it just inspires,
01:06:29.920 | it just inspires so many people. - No, I'm entirely with you on that.
01:06:33.440 | - So would it be exciting to you to see a human being step foot on Mars in your lifetime?
01:06:40.720 | - Yes, I think it's unlikely in my lifetime since I'm so ancient, but I think this century it's going
01:06:47.680 | to happen. And I think that that will indeed be exciting. And I hope there will be a small
01:06:53.120 | community by the end of the century. But as I say, I think they may go with one-way tickets or
01:07:00.080 | accepting the risk of no return. So they've got to be people like that. And I still think it's
01:07:08.160 | going to be hard to persuade the public to send people when you say straight out that they may
01:07:14.960 | never come back. But of course, the Apollo astronauts, they took a high risk. And in fact,
01:07:21.360 | in my previous book, I quote the speech that had been written for Nixon to read out if Neil
01:07:29.760 | Armstrong got stuck on the moon. It was written by one of his advisors, a very eloquent
01:07:38.000 | speech about how they had come to a noble end, et cetera. But of course, there was a genuine risk
01:07:46.560 | at that time. But that may have been accepted, but clearly the crashes of the shuttle were not
01:07:58.000 | acceptable to the American public, even when they were told that this was only a 2% risk,
01:08:03.040 | given how often they launched it. And so that's what leads me to think that it's got to be left to
01:08:11.600 | the kind of people who are prepared to take these risks. And I think of American adventurers,
01:08:18.720 | a guy called Steve Fossett, who was an aviator and did all kinds of crazy things, you know,
01:08:24.400 | and then the guy who fell supersonically with a parachute from very high altitude. All these
01:08:31.600 | people, we all cheer them on. They extend the bounds of humanity. But I don't think the public
01:08:38.560 | would be so happy to fund them. - I mean, I disagree with that. I think
01:08:42.880 | if we change the narrative, we should change the story. - You think so?
01:08:45.680 | - I think there's a lot of people, because the public is happy to fund
01:08:52.720 | folks in other domains that take bold, giant risks. First of all, military, for example.
01:08:58.080 | - Oh, in the military, obviously, yes. - I think this is, in the name of science,
01:09:03.440 | especially if it's sold correctly, I sure as hell would go up there with a risk. I would take a 40%
01:09:09.280 | chance of death for something that's... - I would. I might wait till I'm even older than I am
01:09:15.760 | now. But then I would go. - I guess what I'm trying to
01:09:19.280 | communicate is there's a lot of people on Earth. That's the nice feature. And I'm sure there's
01:09:24.160 | going to be a significant percentage or some percentage of people that take on the risk for
01:09:29.360 | the adventure. I particularly love that that kind of risky adventure, when taken on, inspires people. And
01:09:39.280 | just the ripple effect it has across the generation, especially among the young minds,
01:09:43.200 | is perhaps immeasurable. But you're thinking that sending humans to space should be something we do less and
01:09:54.800 | less. That the work of space exploration
01:10:02.480 | should be done primarily by robots. - Well, I think it can be done much more
01:10:06.320 | cheaply, obviously, on Mars. And no one's thinking of sending humans to Enceladus or Europa,
01:10:13.040 | the outer planets. And the point is we'll have much better robots because, let's take an example,
01:10:21.840 | you see these pictures of the moons of Saturn and the picture of Pluto and the comet taken by
01:10:32.160 | probes. And Cassini spent 13 years going around Saturn and its moons after a seven-year journey. And those
01:10:39.920 | are all based on 1990s technology. And if you think of how smartphones have advanced in 20
01:10:45.120 | years since then, just think how much better one could do instrumenting some very small,
01:10:50.400 | sophisticated probe. You could send dozens of them to explore the outer planets. And that's
01:10:55.680 | the way to do that because no one thinks you could send humans that far. But I would apply
01:11:01.440 | the same argument to Mars. And if you want to assemble big structures like, for instance,
01:11:08.240 | radio astronomers would like to have a big radio telescope on the far side of the Moon.
01:11:12.320 | So, it's away from the Earth's background artificial radio waves. And that could be
01:11:19.920 | done by assembling using robots without people. So, on the Moon and on Mars, I think
01:11:29.520 | everything that's useful can be done by machines much more cheaply than by humans.
01:11:34.400 | - Do you know the movie 2001, A Space Odyssey?
01:11:38.800 | - Of course, yes. You must be too young to have seen that when it came out, obviously.
01:11:43.840 | - Yeah, but still...
01:11:44.720 | - I remember seeing it when it came out.
01:11:45.920 | - You saw it when it came out?
01:11:48.000 | - Yeah, yeah, 50 years ago.
01:11:50.320 | - 60, when was it? 60...
01:11:52.640 | - It was...
01:11:53.520 | - In the 60s.
01:11:54.800 | - Yeah, that's right. Still a classic.
01:11:57.120 | - It's still probably, to me, the greatest AI movie ever made.
01:12:03.120 | - Yes, yes, I agree.
01:12:04.480 | - One of the great space movies ever made.
01:12:06.720 | - Yes, yes.
01:12:07.040 | - So, well, let me ask you a philosophical question since we're talking about robots
01:12:12.240 | exploring space. Do you think HAL 9000 is good or bad? So, for people who haven't watched,
01:12:20.800 | this computer system makes a decision to basically prioritize the mission that the ship is on over the
01:12:31.200 | humans that are part of the mission. Do you think HAL is good or evil?
01:12:37.440 | - If you ask me, probably in that context, it was probably good, but I think you're raising
01:12:43.040 | what is, of course, very much an active issue in everyday life about the extent to which we should
01:12:50.640 | entrust any important decision to a machine. And there again, I'm very worried because I think
01:12:58.640 | if you are recommended for an operation or not given parole from prison or even denied
01:13:07.360 | credit by your bank, you feel you should be entitled to an explanation. It's not enough
01:13:13.760 | to be told that the machine has a more reliable record on the whole than humans have of making
01:13:20.400 | these decisions. You think you should be given reasons you could understand. And that's why I
01:13:25.440 | think the present societal trend to take humans out of the loop and leave us subject to decisions
01:13:36.320 | that we can't contest is a very dangerous one. I think we've got to be very careful of the extent
01:13:42.800 | to which AI, which can handle lots of information, actually makes the decisions without oversight.
01:13:48.400 | And I think we can use them as a supplement. But to take the case of radiology and cancer,
01:13:58.560 | I mean, it's true that the radiologist hasn't seen as many X-rays of cancerous lungs as the machine.
01:14:08.000 | So the machine could certainly help, but you want the human to make the final decision. And I think
01:14:12.400 | that's true in most of these instances. But if we turn a bit to the short term concerns with robotics,
01:14:20.320 | I think the big worry, of course, is the effect it has on people's self-respect and the labor market.
01:14:28.480 | And I think my solution would be that we should arrange to tax more heavily the big international
01:14:38.080 | conglomerates which use the robots and use that tax to fund decently paid, dignified posts
01:14:50.480 | of the kind where being a human being is important. Above all, carers for old people,
01:14:57.200 | teaching assistants for the young, guards in public parks, and things like that. And if the people
01:15:02.240 | who are now working in mind-numbing jobs in Amazon warehouses or in telephone call centers
01:15:10.560 | have their jobs automated, but are then given jobs where being a human is an asset,
01:15:18.400 | then that's a plus-plus situation. And so that's the way I think that we should benefit from these
01:15:26.640 | technologies. Take over the mind-numbing jobs and use machines to make them more efficient, but
01:15:34.800 | enable the people so displaced to do jobs where we do want a human being. I mean, most people,
01:15:43.120 | when they're old, the rich people, if they have the choice, they want human carers and all that,
01:15:49.760 | don't they? They may want robots to help with some things, empty the bedpans and things like that,
01:15:55.600 | but they want real people. And certainly in this country, and I think even worse in America,
01:16:02.160 | the care of old people is completely inadequate. And it needs just more human beings to help them
01:16:09.760 | cope with everyday life and look after them when they're sick. And so that seems to me the
01:16:16.080 | way in which the money raised in tax from these big companies should be deployed.
01:16:20.960 | - So that's in the short term, but if you actually just look from where we are today to the
01:16:26.560 | long-term future in a hundred years, it does seem that there is some significant chance
01:16:33.360 | that the human species is coming to an end in its pure biological form. There's going to be
01:16:41.920 | greater and greater integration through genetic modification and cyborg types of creatures.
01:16:48.720 | And so you have to think, all right, well, we're going to have to get from here to there.
01:16:52.880 | And that process is going to be painful. And that, you know, how there's so many different
01:16:59.680 | trajectories that take us from one place to another. It does seem that we need to deeply
01:17:04.800 | respect humanness and humanity, basic human rights, human welfare, like happiness and
01:17:13.520 | all that kind of stuff. - No, absolutely. And that's why
01:17:16.640 | I think we ought to try and slow down the application of these human enhancement techniques
01:17:21.840 | and cyborg techniques for humans for just that reason. I mean, that's why I want to leave it to the
01:17:28.080 | people on Mars. Let them do it, but just for that reason.
01:17:32.240 | - But they're people too, okay? People on Mars are people too. I tend to, you know.
01:17:36.880 | - But they are very poorly adapted to where they are. That's why they need
01:17:42.080 | this modification, whereas we're adapted to the earth quite well. So we don't need
01:17:47.520 | these modifications. We're happy to be humans living in the environment where our ancestors
01:17:53.200 | lived. So we don't have the same motive. So I think there's a difference, but I agree,
01:17:57.840 | we don't want drastic changes probably in our lifestyle. And that indeed is a worry
01:18:04.240 | because some things are changing so fast. But I think I'd like to inject a note of caution.
01:18:10.640 | If you think of the way progress in one technology goes, it goes in a sort of spurt. It goes up very
01:18:17.840 | fast and then it levels off. Let me give you two examples. Well, the one we've had already,
01:18:24.400 | a human space flight at the time of the Apollo program, which was only 12 years after Sputnik 1.
01:18:31.840 | I was alive then, and I thought it would only be 10 or 20 years further before there were footprints
01:18:38.400 | on Mars. But as we know for reasons we could all understand, that was and still remains the high
01:18:45.520 | point of human space exploration. That's because it was funded for reasons of superpower rivalry
01:18:53.600 | at huge public expense. But let me give you another case, civil aviation. If you think of
01:19:01.680 | the change between 1919, when Alcock and Brown made the first transatlantic flight, and 1969,
01:19:09.360 | the first flight of the jumbo jet. It was a big change. But it's more than 50 years since 1969,
01:19:15.760 | and we still have jumbo jets more or less the same. So that's an example of something which
01:19:19.360 | developed fast and then leveled off. And to take another analogy, we've had huge developments in
01:19:26.720 | mobile phones, but I suspect the iPhone 24 may not be too different from the iPhone 13.
01:19:32.960 | They develop, but then they saturate, and then maybe some new innovation takes over
01:19:41.920 | in stimulating economic growth. - Yeah, so it's that we have to be cautious about being too
01:19:48.400 | optimistic, and we have to be cautious about being too cynical. I think that is the--
01:19:54.000 | - Well, optimistic is begging the question. I mean, do we want this very rapid change?
01:19:58.640 | - Right. So first of all, there's some degree to which technological advancement is something,
01:20:04.720 | is a force that can't be stopped. And so the question is about directing it versus stopping it.
01:20:10.080 | - Well, it can be sort of stopped or slowed. Take human space flight. There could have been
01:20:15.360 | footprints on Mars if America had gone on spending 4% of the federal budget on the project after
01:20:23.280 | Apollo. - But the reason--
01:20:25.120 | - So there were very good reasons, and we could have had supersonic flight, but
01:20:32.080 | Concorde came and went during the 50 years during which we had the--
01:20:35.600 | - But the reason it didn't progress is not because we realize it's not good for human society. The
01:20:42.240 | reason it didn't progress is because it couldn't make, sort of from a capitalist perspective,
01:20:48.960 | it couldn't make, there was no short-term or long-term way for it to make money. So for--
01:20:54.720 | - But that's the same as saying it's not good for society.
01:20:58.080 | - I don't think everything that makes money is good for society, and everything that doesn't
01:21:04.800 | make money is bad for society, right? That's a difficult thing we're always contending with
01:21:11.920 | when we look at social networks. It's not obvious, even though they make a tremendous amount of money,
01:21:17.280 | that they're good for society, especially how they're currently implemented with advertisement
01:21:21.920 | and engagement maximization. So that's the constant struggle of--
01:21:26.320 | - Oh, you know, I agree with you, and many innovations are damaging. Yes, yes.
01:21:30.720 | - Well-- - But I would have thought that
01:21:34.640 | supersonic flight was something that would benefit only a tiny elite, at huge expense,
01:21:41.200 | and with environmental damage. That's obviously something we're very glad not to have,
01:21:45.200 | in my opinion. - Yeah, but perhaps there was a way to do it
01:21:49.680 | where it could benefit the general populace. If you were to think about airplanes, wouldn't you
01:21:54.320 | think that in the early days, airplanes would have been seen as something that can surely only benefit
01:22:00.000 | 1% at most of the population, as opposed to a much larger percentage? There's another aspect
01:22:08.160 | of capitalist system that's able to drive down costs once you get the thing kind of going.
01:22:12.720 | So we get together, maybe with taxpayer money, and get the thing going at first,
01:22:18.240 | and once it gets going, companies step up and drive down the cost and actually make it so that
01:22:23.840 | blue-collar folks can actually start using the stuff.
01:22:26.480 | - Yeah, sometimes that does happen. That's good. - Yeah, so that's, again, the double-edged sword
01:22:32.960 | of human civilization, that some technology hurts us, some benefits us, and we don't know ahead of
01:22:40.080 | time. We can just do our best. - Yes, and there's a gap between what could be done and what
01:22:44.800 | we can actually decide to do. - Yes.
01:22:49.440 | - In principle, you could push forward some developments faster than we do.
01:22:54.640 | - Let me ask you, in your book on the future prospects for humanity,
01:22:59.840 | you imagine a time machine that allows you to send a tweet-length message to scientists in the past,
01:23:06.000 | like to Newton. - Yes.
01:23:08.080 | - What tweet would you send? That's an interesting thought experiment. What message would you send
01:23:13.200 | to Newton about what we know today? - Well, I think he'd love to know that
01:23:17.840 | there were planets around other stars. He'd like to know that--
01:23:22.160 | - That would really blow his mind. - He'd like to know that everything was made
01:23:25.120 | of atoms. He'd like to know that if he looked a bit more carefully through his prisms
01:23:32.320 | and looked at light not just from the sun but from some flames, he might get the idea that
01:23:39.920 | different substances emitted light of different colors, and he might have been twigged to
01:23:46.640 | discover some things that had to wait 200 or 300 years. We could have given him those clues, I think.
01:23:52.960 | - It's fascinating to think, to look back at how little he understood,
01:24:00.800 | people at that time understood about our world. - Mm-hmm, yes.
01:24:05.040 | - And how much we've-- - And certainly about the cosmos
01:24:07.280 | because of course-- - About the cosmos, yes.
01:24:08.720 | - Well, if we think about astronomy, then until about 1850, astronomy was a matter of
01:24:17.440 | the positions of the stars and how the planets moved around, et cetera. Of course, that goes
01:24:24.960 | back a long way, but Newton understood why the planets moved around in ellipses. But he didn't
01:24:31.120 | understand why the solar system was all in a plane, what we call the ecliptic. And he didn't
01:24:44.000 | understand, nor did anyone till the mid-19th century, what the stars are made of. I mean,
01:24:44.000 | they were thought to be made of some fifth essence, not earth, air, fire, and water like
01:24:47.760 | everything else. And it was only after 1850 when people did use prisms more precisely to get
01:24:55.920 | spectra that they realized that the Sun was made of the same stuff as the Earth,
01:25:01.680 | and indeed the stars were. And it wasn't till 1930 that people knew about nuclear energy and
01:25:09.680 | knew what kept the Sun shining for so long. So it was quite late that some of these key ideas
01:25:16.160 | came in, which would have completely transformed Newton's views and of course, the entire scale of
01:25:21.360 | the galaxy and the rest of the universe. Some of these came later.
01:25:27.360 | - Just imagine what he would have thought about the Big Bang or even just general relativity,
01:25:30.880 | just gravity, just him and Einstein talking for a couple of weeks.
01:25:36.320 | Would he be able to make sense of space-time and the curvature of space-time?
01:25:43.040 | - Well, I think given a quick course, I mean, if one looks back, he was really a unique intellect
01:25:51.280 | in a way. And he said that he thought better than everyone else by thinking on things continually
01:26:00.640 | and thinking very deep thoughts. And so, he was an utterly remarkable intellect, obviously.
01:26:06.480 | But of course, scientists aren't all like that. I think one thing that interests me having spent a
01:26:10.960 | life among scientists is what a variety of mindsets and mental styles they have.
01:26:16.640 | Well, just to contrast Newton and Darwin, Darwin said, and he's probably correct, that he thought
01:26:30.560 | he just had as much sort of common sense and reasoning power as the average lawyer. And that's
01:26:38.400 | probably true because his ability was to sort of collect data and think through things deeply.
01:26:43.920 | That's a quite different kind of thinking from what was involved in Newton or someone
01:26:49.920 | doing abstract mathematics. - I think in the 20th century,
01:26:54.560 | the coolest, well, there's the theory, but from an astronomy perspective, black holes
01:27:03.600 | are one of the most fascinating entities to have emerged through theory and through experiment.
01:27:11.200 | - Obviously, I agree. It's an amazing
01:27:13.680 | story that, well, of course, what's interesting is Einstein's reaction because, of course, although,
01:27:21.520 | as you know, we now accept this as one of the most remarkable predictions of Einstein's theory,
01:27:25.760 | he never took it seriously, or even believed it. Although it was a consequence of
01:27:33.200 | a solution of his equations, which someone discovered just a year after his theory,
01:27:37.600 | Schwarzschild. But he never took it seriously, though others did. But then, of course,
01:27:43.280 | well, this is something that I've been involved in, actually finding evidence for black holes.
01:27:49.680 | And that's come in the last 50 years. And so, now there's pretty compelling evidence that they exist
01:27:55.200 | as the remnants of stars or big ones in the center of galaxies. And we understand
01:28:02.000 | what's going on. We have ideas vaguely on how they form. And, of course,
01:28:08.400 | gravitational waves have been detected. And that's an amazing piece of technology.
01:28:14.080 | - LIGO is one of the most incredible engineering efforts of all time.
01:28:18.080 | - Yes, and that's an example where the engineers deserve most of the credit because the precision
01:28:22.960 | is, as they said, it's like measuring the thickness of a hair at the distance of Alpha
01:28:28.240 | Centauri. - Yeah, it's incredible.
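A back-of-envelope check of that analogy, and of the strain figure quoted in the next line (a sketch with rounded values of my own, not taken from the conversation):

```python
# Hair-at-Alpha-Centauri analogy: a ~0.1 mm hair seen from ~4.3 light years away.
hair_thickness_m = 1e-4          # ~0.1 mm, an assumed typical hair width
alpha_centauri_m = 4.1e16        # ~4.3 light years expressed in metres
print(hair_thickness_m / alpha_centauri_m)   # ~2e-21, i.e. of order 10^-21

# The same strain applied to a LIGO arm of roughly 4 km.
strain = 1e-21                   # fractional length change LIGO can sense
arm_length_m = 4e3               # approximate length of each LIGO arm
print(strain * arm_length_m)     # ~4e-18 m, far smaller than a proton
```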
01:28:30.080 | - Ten to the minus 21. - So, maybe actually, if we step back,
01:28:32.960 | what are black holes? What do we humans understand about black holes? And what's still unknown?
01:28:39.680 | - Einstein's theory, extended by people like Roger Penrose, tells us that black holes are,
01:28:47.120 | in a sense, rather simple things, basically, because they are solutions of Einstein's equations.
01:28:54.880 | And the thing that was shown in the 1960s by Roger Penrose, in particular, and by a few other people,
01:29:02.080 | was that a black hole, when it forms and settles down, is defined just by two quantities,
01:29:09.040 | its mass and its spin. So, they're actually very standardized objects. It's amazing that
01:29:14.320 | objects as standardized as that can be so big and can lurk in the solar system. And so,
01:29:21.680 | that's the situation for a ready-formed black hole. But the way they form, obviously, is
01:29:27.680 | very messy and complicated. And one of the things that I've worked on a lot is what the phenomena
01:29:36.720 | are which are best attributed to black holes and what may lead to them and all that.
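To put rough numbers on how compact these standardized objects are (a hedged sketch of my own, not from the conversation; the 4-million-solar-mass case anticipates the Milky Way's central black hole discussed later):

```python
# Schwarzschild radius r_s = 2*G*M/c^2: the horizon size of a non-spinning black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius_m(mass_kg):
    """Horizon radius in metres for a non-spinning black hole of the given mass."""
    return 2.0 * G * mass_kg / C**2

for label, mass in [("10 solar masses (stellar remnant)", 10 * M_SUN),
                    ("4 million solar masses (galactic-centre scale)", 4e6 * M_SUN)]:
    print(f"{label}: r_s ~ {schwarzschild_radius_m(mass):.2e} m")
# ~3e4 m (about 30 km) for the stellar case;
# ~1.2e10 m (roughly 8% of the Earth-Sun distance) for the galactic-centre case.
```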
01:29:41.600 | - Can you explain that? So, what are the different phenomena that lead to a black hole?
01:29:47.360 | Let's talk about it. This is so cool. - Okay, well…
01:29:50.080 | - This is so cool. - Okay, well, I mean, I think one thing
01:29:54.480 | that only became understood really in the 1950s, I suppose, and beyond was how stars evolve
01:30:03.920 | differently depending on how heavy they are. The Sun burns hydrogen to helium, and then when it's
01:30:10.640 | run out of that, it contracts to be a white dwarf. And we know how long that will take. It'll take
01:30:15.520 | about 10 billion years altogether for its lifetime. But big stars burn up their fuel more quickly
01:30:22.480 | and more interestingly because when they've turned hydrogen to helium,
01:30:25.680 | they then get even hotter so they can fuse helium into carbon and go up the periodic table. And then
01:30:32.720 | they eventually explode when they have an energy crisis, and they blow out that process material,
01:30:38.400 | which as a digression is crucially important because all the atoms inside our bodies
01:30:44.880 | were synthesized inside a star, a star that lived and died more than 5 billion years ago
01:30:50.640 | before our solar system formed. And so, we each have inside us atoms made in thousands of different
01:30:55.920 | stars all over the Milky Way. And that's an amazing idea. My predecessor, Fred Hoyle, in 1946,
01:31:01.920 | was the first person to suggest that idea, and that's been borne out as a wonderful idea.
01:31:06.480 | So, that's how massive stars explode. And they leave behind something which is very exotic and
01:31:15.280 | of two kinds. One possibility is a neutron star, and these were first discovered in 1967, '68.
01:31:22.320 | These are stars a bit heavier than the sun, which are compressed to an amazing density. So,
01:31:29.600 | the whole mass of more than the sun's mass is in something about 10 miles across.
01:31:36.080 | So, they're extraordinarily dense, very exotic physics. And they've been studied in immense
01:31:46.880 | detail, and they've been real laboratories because the good thing about astronomy, apart from
01:31:51.360 | exploring what's out there, is to use the fact that the cosmos has provided us with a lab
01:31:56.480 | with far more extreme conditions than we could ever simulate. And so, we learn lots of basic
01:32:01.040 | physics from looking at these objects. And that's been true of neutron stars. But for black holes,
01:32:06.320 | that's even more true because the bigger stars, when they collapse, they leave something behind
01:32:14.160 | in the center which is too big to be a stable white dwarf or neutron star, and it becomes a black
01:32:18.720 | hole. And we know that there are lots of black holes weighing about 10 or up to 50 times as much
01:32:25.200 | as the sun, which are the remnants of stars. They were detected first 50 years ago when a black hole
01:32:33.920 | was orbiting around another star and grabbing material from the other star which swirled into
01:32:40.240 | it and gave us x-rays. So, the x-ray astronomers found these objects orbiting around an ordinary
01:32:49.040 | star and emitting x-ray radiation very intensely, varying on a very short timescale. So, something
01:32:56.240 | very small and dense was giving that radiation. That was the first evidence for black holes.
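The reasoning in that last step (rapid variability implies a small source) can be sketched with a light-crossing-time estimate; the millisecond-scale timescale here is purely illustrative, not a figure from the conversation:

```python
# If the X-ray brightness changes coherently over a time dt, the emitting region
# can't be much larger than c * dt, the distance light travels in that time.
C = 3.0e8          # speed of light, m/s
dt_s = 0.01        # assume variability on ~10 millisecond timescales (illustrative)

max_size_km = C * dt_s / 1e3
print(f"Emitting region no bigger than ~{max_size_km:.0f} km")  # ~3000 km
# Far smaller than any ordinary star, pointing to a compact object.
```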
01:33:00.880 | But then the other thing that's happened was realizing that there was a different class of
01:33:05.840 | monster black holes in the centers of galaxies. And these are responsible for what's called quasars,
01:33:13.120 | which is when something in the center of a galaxy is grabbing some fuel and outshines all the 100
01:33:21.120 | billion stars or so in the rest of the galaxy. - A giant beam of light.
01:33:26.080 | - And in many cases, it's a beam of... - That's got to be the most epic thing
01:33:32.080 | the universe produces is quasars. - Well, it's a debate of what's most epic,
01:33:37.440 | but quasars maybe or maybe gamma ray bursts or something, but they are remarkable and they were
01:33:43.760 | a mystery for a long time. And they're one of the things I worked on in my younger days.
01:33:48.640 | - So, even though they're so bright, they're still a mystery. And you can only see them...
01:33:53.760 | - I think they're less of a mystery now. I think we do understand basically what's going on.
01:33:57.040 | - How were quasars discovered? - Well, they were discovered when
01:34:02.400 | astronomers found things that looked like stars, in that they were small enough to be point-like,
01:34:07.840 | not resolved by a telescope, but outshone an entire galaxy.
01:34:12.960 | - Yeah, that's suspicious. - Yes, but then they realized that
01:34:19.200 | they were objects which we now know are black holes, and those black holes were capturing gas,
01:34:30.880 | and that gas was getting very hot, but it was producing far more energy than all the stars
01:34:37.680 | added together. And it was the energy of the black hole that was lighting up all the gas in the
01:34:45.200 | galaxy. So, you've got a spectrum of it there. So, this was something which was realized from
01:34:51.120 | the 1970s onwards. And as you say, the other thing we've learned is that they often do produce these
01:34:57.760 | jets squirting out, which could be detected in all wave bands. So, there's now a standard picture.
01:35:05.040 | - So, there's a giant black hole generating jets of light in the center of most galaxies.
01:35:10.560 | - Yes, that's right. - Do we have a sense if every galaxy
01:35:14.480 | has one of these big boys, big black holes? - Well, most galaxies have big black holes.
01:35:20.640 | They vary in size. The one in our galactic center... - Do we know much about ours?
01:35:25.120 | - We do, yes. We know it weighs about as much as 4 million suns, which is less than some,
01:35:34.560 | which run to several billion suns in other galaxies. But we know this: the one in our galactic center
01:35:40.560 | isn't very bright or conspicuous, and that's because not much is falling into it at the moment.
01:35:46.640 | If a black hole's isolated, then of course, it doesn't radiate. All that radiates is gas
01:35:53.680 | swirling into it, which is very hot or has magnetic fields.
01:35:57.360 | - It's only radiating the thing it's murdering or consuming, however you put it.
01:36:01.760 | - Yeah, that's right. And so, it's thought that our galaxy may have been brighter sometime in
01:36:07.600 | the past, and that's when the black hole formed or grew. But now it's not catching very much gas,
01:36:17.520 | and so, it's rather faint. It's only detected indirectly and by fairly weak radio emission.
01:36:24.640 | And so, I think the answer to your question is that we suspect that most galaxies have a black
01:36:32.080 | hole in them. So, that means at some stage in their lives, or maybe one or more stages,
01:36:37.920 | they went through a phase of being like a quasar where that black hole captured gas and became
01:36:43.840 | very, very bright. But for the rest of their lives, the black holes are fairly quiescent
01:36:49.280 | because there's not much gas falling into them. - And so, this universe of ours is sprinkled with
01:36:56.000 | a bunch of galaxies and giant black holes with a very large number of stars orbiting these black
01:37:06.240 | holes and then planets orbiting. Likely, it seems like planets orbiting almost every one of those
01:37:13.520 | stars and just this beautiful universe of ours. So, what happens when galaxies collide, when these
01:37:22.080 | two big black holes collide? - Well, what would happen is that,
01:37:29.040 | well, and I should say that this is going to happen near us one day, but not for 4 billion
01:37:35.200 | years because the Andromeda Galaxy, which is the biggest galaxy near to us, which is about
01:37:41.760 | 3 million light years away, which is a big disk galaxy with a black hole in its hub, rather like
01:37:47.760 | our Milky Way. And that's falling towards us because they're both in a common gravitational
01:37:55.680 | potential well. And that will collide with our galaxy in about 4 billion years.
01:38:00.800 | - But maybe it'll be less a collision and more of a dance because it'll be like a swirling situation.
01:38:07.280 | - Well, it's a swirling, but eventually, they'll be a merger. They'll go through each other and
01:38:12.000 | then merge. In fact, there are nice movies made of this, computer simulations, and it'll go through.
01:38:19.680 | And then there's a black hole in the center of Andromeda and our galaxy. And the black holes will
01:38:30.160 | settle towards the center. Then they will orbit around each other very fast, and then they will
01:38:37.200 | eventually merge. And that'll produce a big burst of gravitational waves, a very big burst.
01:38:43.200 | - That an alien civilization with a LIGO-like detector will be able to detect.
01:38:47.360 | - Yes. Well, in fact, we can detect these at lower frequencies than the
01:38:53.680 | waves that have been detected by LIGO. So, there's a space interferometer which can detect these.
01:39:00.640 | It's about one cycle per hour rather than about 100 cycles per second. That's the one that will
01:39:06.880 | detect it. But thinking back to what will happen in 4 billion years to any of our descendants,
01:39:15.920 | they'll be okay because the two disk galaxies will merge. It'll end up as a sort of amorphous
01:39:22.960 | elliptical galaxy. But the stars won't be much closer together than they are now. It'll still be
01:39:30.320 | just twice as many stars in a structure almost as big. And so, the chance of another star colliding
01:39:37.760 | with our sun would still be very small. - Yeah, because there's actually a lot of
01:39:42.400 | space between stars and planets. - Indeed. Yes, the chance of a star
01:39:46.080 | getting close enough to affect our solar system's orbit is small, and it won't change that very much.
01:39:51.440 | So, you can be reassured. - That would be a heck of a starry sky,
01:39:55.840 | though. What would that look like? - Well, it won't make much difference
01:39:58.640 | even to that, actually. It'll just be... - Wouldn't that look kind of beautiful
01:40:02.960 | when you're swirling? Oh, because it's swirling so slow.
01:40:05.520 | - Yeah, but they're far away. So, it'll be twice as many stars in the sky.
01:40:09.920 | - Yeah, but the pattern changes. - Yeah, the pattern will change a bit,
01:40:14.560 | and there won't be the Milky Way, because the Milky Way across the sky is because we are looking in
01:40:20.320 | the disk of our galaxy. And you lose that because the disk will be sort of disrupted. And it'll be
01:40:28.480 | a more sort of spherical distribution. And of course, many galaxies are like that. And that's
01:40:33.200 | probably because they have been through mergers of this kind.
01:40:36.000 | - If we survive four billion years, we would likely be able to survive beyond that.
01:40:40.160 | - Oh, yeah. - What's the other thing on the horizon for
01:40:44.480 | humans in terms of the sun burning out, all those kinds of interesting cosmological threats to our
01:40:53.440 | civilization? - Well, I think on the cosmological time
01:40:57.200 | scale, because it won't be humans, because even if evolution's no faster than Darwinian,
01:41:03.840 | and I would argue it will be faster than Darwinian in the future, then we're thinking about six
01:41:09.920 | billion years before the sun dies. So, any entities watching the death of the sun, if they're still
01:41:16.240 | around, they'll be as different from us as we are from slime mold or something, and far more
01:41:22.160 | different still if they become electronic. So, on that time scale, we just can't predict anything.
01:41:27.360 | But I think going back to the human time scale, then we've talked about whether there'll be people
01:41:37.360 | on Mars by the end of the century. And even in these long perspectives, then indeed, this century is
01:41:44.240 | very special, because it may see the transition from purely flesh-and-blood entities to those
01:41:50.640 | which are sort of cyborgs. And that'll be an important transition in biology and complexity
01:41:58.240 | in this century. But of course, the other importance, and this has been the theme of a
01:42:02.640 | couple of my older books, is that this is the first century when one species, namely our species,
01:42:09.600 | has the future of the planet in its hands. And that's because of two types of concerns.
01:42:17.760 | One is that there are more of us, we're more demanding of energy and resources,
01:42:23.520 | and therefore we are for the first time changing the whole planet through climate change,
01:42:31.920 | loss of biodiversity, and all those issues. This has never happened in the past, because there have
01:42:36.080 | never been so many humans with so much power. So, this is an effect that's obviously
01:42:42.400 | high on everyone's agenda now, and rightly so, because we've got to ensure that we leave to
01:42:47.680 | future generations a heritage that isn't eroded or damaged. And so, that's one class of threats.
01:42:55.840 | But there's another thing that worries me, perhaps more than many people seem to worry,
01:43:02.800 | and that's the threat of misuse of technology. And so, this is particularly because technologies
01:43:12.480 | empower even small groups of malevolent people, or indeed, even careless people, to create
01:43:20.960 | some effect which could cascade globally. And to take an example, a dangerous pathogen or pandemic,
01:43:31.920 | my worst nightmare is that there could be some small group that can engineer a virus to make
01:43:42.480 | it more virulent or more transmissible than a natural virus. This is so-called gain-of-function
01:43:47.600 | experiments, which were done on the flu virus 10 years ago and can be done for others.
01:43:52.560 | And of course, we now know from COVID-19 that our world is so interconnected that a disaster in one
01:44:02.800 | part of the world can't be confined to that part and will spread globally. So, it's possible that
01:44:07.760 | a few dissidents with expertise in biotech could create a global catastrophe of that kind. And also,
01:44:15.360 | I think we need to worry about very large-scale disruption by cyber attacks. In fact,
01:44:22.480 | I quote in one of my books a 2012 report from the American Pentagon about the possibility of a
01:44:32.320 | state-level cyber attack on the electricity grid in the eastern United States, which it said could
01:44:38.720 | happen. And it says at the end of this chapter that this would merit a nuclear response.
01:44:44.720 | This is a pretty scary possibility. That was 10 years ago. And I think now, what would have needed
01:44:51.600 | a state actor then could be done perhaps by a small group empowered by AI. And so, there's
01:44:57.520 | obviously been an arms race between the cyber criminals and the cyber security people. Not
01:45:06.080 | clear which side is winning. But the main point is that as we become more dependent on more
01:45:12.320 | integrated systems, then we get more vulnerable. And so, we have the knowledge. Then the misuse of
01:45:22.160 | that knowledge becomes more and more of a threat. And I'd say bio and cyber are the two biggest
01:45:29.520 | concerns. And if we depend too much on AI and complex systems, then just breakdowns. It may
01:45:38.080 | be that they break down. And even if it's an innocent breakdown, then it may be pretty hard
01:45:43.440 | to mend it. And just think how much worse the pandemic would have been if we'd lost the internet
01:45:48.880 | in the middle of it. We'd be dependent more than ever for communication and everything else on
01:45:54.160 | the internet and Zooms and all that. And if that had broken down, that would have made
01:46:00.640 | things far worse. And those are the kinds of threats that we, I think, need to be more
01:46:06.480 | energized and politicians need to be more energized to minimize. And one of the things I've been doing
01:46:12.640 | in the last year, through being a member of part of our parliament, is to sort of help
01:46:19.200 | instigate a committee to think more about better preparedness for extreme technological risks and
01:46:25.200 | things like that. So, it's a big concern in my mind that we've got to make sure that we
01:46:32.160 | can benefit from these advances, but safely, because the stakes are getting higher. The
01:46:41.120 | benefits are getting greater, as we know, huge benefits from computers, but also huge downsides
01:46:46.880 | as well. - And one of the things this war in Ukraine has shown, one of the most terrifying
01:46:54.000 | things outside of the humanitarian crisis, is that at least for me, I realized
01:47:00.480 | that the human capacity to initiate nuclear war is greater than I thought. I thought the lessons
01:47:12.800 | of the past have been learned. It seems that we hang on the brink of nuclear war with this conflict
01:47:20.160 | like every single day, with just one mistake or bad actor or the actual leaders of the particular
01:47:31.440 | nations launching a nuclear strike and all hell breaks loose. So, then add into that picture
01:47:38.400 | cyber attacks and so on that can lead to confusion and chaos, and then out of that confusion
01:47:47.600 | calculations are made such that a nuclear launch is... A nuclear weapon is launched and then you're
01:47:57.360 | talking about... I mean, I don't... Direct, probably 60-70% of humans on earth are dead instantly,
01:48:06.560 | and then the rest... I mean, it's basically 99% of the human population is wiped out in the period
01:48:14.320 | of five years. - Well, it may not be that bad, but it will be a devastation for civilization,
01:48:18.640 | of course. And of course, you're quite right that this could happen very quickly because of
01:48:34.560 | information coming in, and there's hardly enough time for collected and careful human thought.
01:48:34.560 | And there have been recorded cases of false alarms, several where there have been suspected
01:48:42.880 | attacks from the other side, and fortunately, they've been realized to be false alarms soon
01:48:48.320 | enough. But this could happen. And there's a new class of threats, actually, which in our center
01:48:53.760 | in Cambridge, people are thinking about, which is that the command and control system of the nuclear
01:49:03.280 | weapons and the submarine fleets and all that is now more automated and could be subject to cyber
01:49:10.640 | attacks. And that's a new threat which didn't exist 30 years ago. And so I think, indeed,
01:49:18.160 | we're in a sort of scary world, I think. And it's because things happen faster,
01:49:26.240 | and human beings aren't in such direct and immediate control because so much is
01:49:31.440 | delegated to machines. And also because the world is so much more interconnected
01:49:37.360 | that some local event can cascade globally in a way it never could in the past, and much faster.
01:49:45.040 | Yeah, it's a double-edged sword because the interconnectedness brings
01:49:50.480 | a higher quality of life across a lot of metrics.
01:49:57.360 | Yeah, it can do. But of course, there again, I mean, if you think of supply chains where we get
01:50:02.720 | stuff from around the world, and one lesson we've learned is that there's a trade-off between
01:50:07.520 | resilience and efficiency: it's resilient to have an inventory and stock and to depend on
01:50:16.400 | local supplies, whereas it's more efficient to have long supply chains, but the risk there is that
01:50:24.000 | a break in one link in one chain can screw up car production. This has already happened in the
01:50:30.640 | pandemic. So there's a trade-off. And there are examples. I mean, for instance, the other thing
01:50:35.840 | we learned was that it may be efficient to have 95% of your hospital intensive care beds occupied
01:50:43.040 | all the time, which has been the UK situation, whereas to do what the Germans do and always keep
01:50:48.000 | 20% of them free for an emergency is really a sensible precaution. And so I think we've probably
01:50:55.680 | learned a lot of lessons from COVID-19, and they would include rebalancing the trade-off between
01:51:02.880 | resilience and efficiency. - Boy, the fact that COVID-19,
01:51:09.200 | a pandemic that could have been a lot, a lot worse, brought the world to its knees anyway.
01:51:14.480 | - It could be far worse in terms of its fatality rate or something like that.
01:51:18.720 | - Fatality rate, yeah. - Of course, yeah.
01:51:20.800 | So the fact that that, I mean, it revealed so many flaws in our human institutions.
01:51:26.000 | - Yeah, yeah. And I think I'm rather pessimistic because
01:51:30.800 | I do worry about the bad actor, the small group who can produce catastrophe. And
01:51:41.680 | if you imagine someone with access to the kind of equipment that's available in university labs or
01:51:48.400 | industrial labs, and they could create some dangerous pathogen, then even one such person
01:51:55.520 | is too many. And how can we stop that? Because it's true that you can have regulations. I mean,
01:52:02.720 | academies are having meetings, et cetera, about how to regulate these new biological experiments,
01:52:09.280 | et cetera, make them safe. But even if you have all these regulations, then enforcing regulations
01:52:16.480 | is pretty hopeless. We can't enforce the tax laws globally. We can't enforce the drug laws globally.
01:52:22.960 | And so similarly, we can't readily enforce the laws against people doing these dangerous
01:52:29.520 | experiments, even if all the governments say they should be prohibited. And so my line on this is
01:52:36.080 | that all nations are going to face a big trade-off between three things we value, freedom, security,
01:52:45.600 | and privacy. And I think different nations will make that choice differently. The Chinese
01:52:55.040 | would give up privacy and have more, certainly more security, if not more liberty. But I think
01:53:02.480 | in our countries, I think we're going to have to give up more privacy in the same way.
01:53:11.120 | - That's a really interesting trade-off. But there's also something about human nature here,
01:53:17.360 | where I personally believe that all humans are capable of good and evil. And there's some aspect
01:53:23.840 | to which we can fight this by encouraging people, incentivizing people towards the better angels of
01:53:34.080 | their nature. So in order for a small group of people to create, to engineer deadly pathogens,
01:53:40.880 | you have to have people who, whatever trajectory took them there in life,
01:53:48.320 | want to do that kind of thing. And if we can aggressively work on a world that sort of sees
01:53:57.200 | the beauty in everybody and encourages the flourishing of everybody in terms of mental
01:54:03.840 | health, in terms of meaning, in terms of all those kinds of things, that's one way to fight
01:54:09.200 | the development of weapons that can lead to atrocities. - Yes, and I completely agree with
01:54:16.000 | that and to reduce the reason why people feel embittered. But of course, we've got a long way
01:54:22.240 | to go to do that because if you look at the present world, nearly everyone in Africa has
01:54:29.520 | reason to feel embittered because their economic development is lagging behind most of the rest of
01:54:36.880 | the world. And the prospects of getting out of the poverty trap is rather bleak, especially if the
01:54:42.880 | population grows. Because, for instance, they can't develop like the East Asian Tigers through cheap
01:54:48.720 | manufacturing because robots are taking that over. So they naturally feel embittered by the inequality.
01:54:56.880 | And of course, what we need to have is some sort of mega version of the Marshall Plan that
01:55:02.800 | helped Europe in the post-World War II era to enable Africa to develop. That would be
01:55:08.880 | not just an altruistic thing for Europe to do, but in our interest, because otherwise,
01:55:16.400 | those in Africa will feel massively disaffected. And indeed, it's a manifestation of the excessive
01:55:23.120 | inequalities, the fact that the 2,000 richest people in the world have enough money to double
01:55:28.560 | the income of the bottom billion. And that's an indictment of the ethics of the world. And this is
01:55:35.600 | where my friend Steven Pinker and I have had some contact. We wrote joint articles on bio threats
01:55:43.440 | and all that. But he writes these books being very optimistic about quoting figures about how
01:55:51.200 | life expectancy has gone up, infant mortality has gone down, literacy has gone up, and all those
01:55:57.760 | things. And he's quite right about that. And so he says the world's getting better.
01:56:02.000 | - Do you disagree with your friend, though, Pinker?
01:56:05.280 | - Well, I mean, I agree with those facts, okay? But I think he misses out part of the
01:56:11.840 | picture, because there's a new class of threats, which hang over us now, which didn't hang over us
01:56:19.680 | in the past. And I would also question whether we have collectively improved our ethics at all,
01:56:25.920 | because let's think back to the Middle Ages. It's true that, as Pinker says, the average person was
01:56:32.320 | in a more miserable state than they are today on average. For all the reasons he quantifies,
01:56:38.960 | that's fine. But in the Middle Ages, there wasn't very much that could have been done to improve
01:56:45.680 | people's lot in life because of lack of knowledge and lack of science, etc.
01:56:51.680 | So the gap between the way the world was, which is pretty miserable, and the way the world could
01:56:57.440 | have been, which wasn't all that much better, was fairly narrow. Whereas now, the gap between the
01:57:03.920 | way the world is and the way the world could be is far, far wider. And therefore, I think we are
01:57:10.000 | ethically more at fault in allowing this gap to get wider than it was in medieval times. And so I
01:57:19.760 | would very much question and dispute the idea that we are ethically in advance of our predecessors.
01:57:27.040 | - That's a lot of interesting hypotheses in there. It's a fascinating question how much
01:57:34.560 | the size of that gap between the way the world is and the way the world could be is a reflection
01:57:40.400 | of our ethics, or maybe sometimes it's just a reflection of a very large number of people.
01:57:45.360 | Maybe it's a technical challenge too. It's not just...
01:57:49.920 | - Well, of our political systems.
01:57:51.600 | - Political systems. Like how many... And we're trying to figure this thing out. In the 20th
01:57:56.640 | century, we tried this thing that sounded really good on paper, collective communism type of things.
01:58:03.840 | And it turned out, at least the way that was done there,
01:58:07.440 | that it leads to atrocities and the suffering and the murder of tens of millions of people. Okay,
01:58:13.760 | so that didn't work. Let's try democracy. And that seems to have a lot of flaws, but it seems to be
01:58:19.680 | the best thing we got so far. So we're trying to figure this out as our technologies become more
01:58:24.080 | and more powerful, have the capacity to do a lot of good to the world, but also unfortunately have
01:58:29.360 | the capacity to destroy the entirety of the human civilization.
01:58:33.200 | - And I think it's social media generally, which makes it harder to get a sort of moderate
01:58:41.040 | consensus because in the old days when people got their news filtered through responsible
01:58:47.120 | journalists in this country, the BBC and the main newspapers, et cetera, they would muffle the
01:58:52.880 | crazy extremes. Whereas now, of course, they're on the internet and if you click on them, you get
01:58:59.360 | something still more extreme. And so I think we are seeing a sort of dangerous polarization,
01:59:05.120 | which I think is going to make all countries harder to govern. And that's something which
01:59:08.560 | I'm pessimistic about. - So to push back, it is true that
01:59:12.960 | brilliant people like you highlighting the limitations of social media is making them
01:59:17.200 | realize the stakes and the failings of social media companies, but at the same time, they're
01:59:23.520 | revealing the division. It's not like they're creating it, they're revealing it in part.
01:59:28.320 | And so that puts a lot of, that puts the responsibility into the hands of social media
01:59:35.920 | and the opportunity in the hands of social media to alleviate some of that division. So it could,
01:59:41.680 | in the long arc of human history, result in bringing some of those divisions and the anger
01:59:48.720 | and the hatred to the surface so that we can talk about it. As opposed to disproportionately
01:59:54.640 | promoting it, actually just surfacing it so we can get over it.
01:59:57.680 | - Well, you're assuming that the fat cats are more public spirited than the politicians,
02:00:02.560 | and I'm not sure about that. - I think there's a lot of money to be made
02:00:06.640 | in being publicly spirited. I think there's a lot of money to be made in increasing the
02:00:12.960 | amount of love in the world, despite the sort of public perception that all the social media
02:00:19.920 | companies' heads are interested in doing is making money. I think that may be true,
02:00:27.200 | but I just personally believe people being happy is a hell of a good business model.
02:00:34.240 | And so making as many people happy, helping them flourish in a long-term way,
02:00:40.480 | that's a good way to make money. - Well, I think on the other hand, guilt and shame
02:00:45.280 | are good motives to make you behave better in future. That's my experience.
02:00:49.760 | - Okay, so let's put it together. Maybe from the political perspective,
02:00:53.600 | that certainly is the case. But it does make sense now that we can destroy ourselves with nuclear weapons,
02:00:59.680 | with engineered pandemics and so on, that the aliens would show up. That if I,
02:01:08.800 | you know, had a leadership position, maybe as a scientist or otherwise in an alien civilization,
02:01:17.600 | and I came upon Earth, I would try to watch from a distance, to not interfere.
02:01:27.280 | And I would start interfering when these life forms start having the capacity
02:01:34.960 | to be destructive. And so, I mean, it is an interesting question when people talk about
02:01:40.480 | UFO sightings and all those kinds of things that at least-- - These are benign aliens you're
02:01:45.680 | thinking of, we're going to-- - Benign, yes. I mean, benign, almost curious,
02:01:54.240 | and, as with all curiosity, partially selfish, trying to observe: is there something
02:01:59.200 | interesting about this particular evolutionary system? Because I'm sure even to aliens,
02:02:05.520 | Earth is a curiosity. - Yeah. Well, it's in this very special stage, you know what I mean?
02:02:11.360 | - Special, perhaps a very short-- - This century is very special among the 45
02:02:15.520 | million centuries the Earth has already experienced. So it is a very special time when they should
02:02:20.160 | be especially interested. But I think going back to the politics, the other problem is getting
02:02:27.440 | people who have short-term concerns to care about the long-term. By the long-term, I now mean just
02:02:34.560 | looking 30 years or so ahead. I know people who've been scientific advisors to governments
02:02:41.200 | and things, and they may make these points, but of course, they don't have much traction because,
02:02:46.560 | as we know very well, any politician has an urgent agenda of very worrying things to deal with. And so
02:02:52.800 | they aren't going to prioritize these issues which are longer term and less immediate and
02:03:01.040 | don't just concern their constituents, they concern distant parts of the world. And so,
02:03:06.800 | I think what we have to do is to enlist charismatic individuals to convert the public,
02:03:16.960 | because if the politicians know the public care about something, climate change as an example,
02:03:23.840 | then they will make decisions which take cognizance of that. And I think for that to happen,
02:03:33.920 | then we do need some public individuals who are respected by everyone and who have a high profile.
02:03:42.560 | And in the climate context, I would say that I've mentioned four very disparate people who've had
02:03:49.920 | such a big effect in the last few years. One is Pope Francis, the other is David Attenborough,
02:03:55.280 | the other is Bill Gates, and the other is Greta Thunberg. And those four people have certainly
02:04:00.240 | caused a big shift in public opinion and even changed the rhetoric of business, although
02:04:08.000 | how deep that is, I don't know. But politicians can't let these issues drop down off the agenda
02:04:17.360 | if there's a public clamor, and it needs people like that to keep the public clamor going.
02:04:22.960 | - To push back a little bit, so those four are very interesting and I have deep respect for them.
02:04:28.240 | They have, except David Attenborough; I mean, everybody loves
02:04:33.840 | him, I can't say anything there. But you know, with Bill Gates and Greta, that also creates a
02:04:40.320 | lot of division. And this is a big problem, so it's not just charisma. I put that responsibility
02:04:46.880 | actually on the scientific community. - The Pope does too, yeah.
02:04:49.920 | - Yep, and the politicians, so we need the charismatic leaders, and they're rare.
02:04:58.880 | - Yeah, yeah. - When you look at human history,
02:05:01.040 | those are the ones that make a difference. Those are the ones that don't divide, they inspire
02:05:09.120 | the populace to think long-term. The JFK "we'll go to the moon in this decade, not because it's
02:05:18.080 | easy, but because it is hard." There's no discussion about short-term political gains or any of that
02:05:28.080 | kind of stuff in the vision of going to the moon, or going to Mars, or taking on gigantic
02:05:34.640 | projects, or taking on world hunger, or taking on climate change, or the education system,
02:05:41.760 | all those things that require long-term, significant investment, that requires--
02:05:47.360 | - But it's hard to find those people. And incidentally, I think another problem,
02:05:51.200 | which is a downside of social media, is that, of the younger people I know,
02:05:57.680 | the number who would contemplate a political career has gone down because of the pressures
02:06:05.360 | on them and their family from social media. It's a hell of a job now. And so I think we are
02:06:12.640 | all losers because the quality of people who choose that path is really dropping,
02:06:21.760 | as we see by the quality of those who are in these competitions.
02:06:25.360 | - That said, I think the silver lining there is the quality of the competition actually is
02:06:32.080 | inspiring, because it shows to you that there's a dire need of leaders, which I think would be
02:06:39.520 | inspiring to young people to step into the fold. I mean, great leaders are not afraid of a little
02:06:44.480 | bit of fire on social media. So if you have a 20-year-old kid now, a 25-year-old kid,
02:06:51.680 | they're seeing how the world responded to the pandemic, seeing the geopolitical division
02:06:57.680 | over the war in Ukraine, seeing the brewing war between the West and China. We need great leaders,
02:07:04.160 | and there's a hunger for them, and the time will come when they step up. I believe that. But also
02:07:12.240 | to add to your list of four, he doesn't get enough credit. I've been defending him in this
02:07:16.960 | conversation. Elon Musk, in terms of the fight in climate change, but he also has led to a lot
02:07:23.520 | of division, but we need more David Attenboroughs. - Yeah, no, no, I mean, I'm a fan. I mean,
02:07:31.360 | I've heard him described as a 21st century Brunel for his innovation, and that's true, but
02:07:36.400 | whether he's an ethical inspiration, I don't know. - Yeah, he has a lot of fun on Twitter. Well,
02:07:44.560 | let me ask you to put on your wise sage hat. What advice would you give to young people today?
02:07:52.880 | Maybe they're teenagers in high school, maybe early college. What advice would you give
02:08:01.360 | them on how to have a career or a life they can be proud of? - Yes, well, I'd be very diffident, really,
02:08:07.920 | about offering any wisdom, but I think they should realize that
02:08:16.320 | the choices they make at that time are important. And from experience I've had with many friends,
02:08:29.440 | many people don't realize that opportunities are open until it's too late. They somehow think that
02:08:35.600 | some opportunities are only open to a few privileged people, and they don't even try,
02:08:39.600 | when in fact they could succeed. But if I focus on people working in some profession I know about,
02:08:49.360 | like science, I would say pick an area to work in where new things are happening,
02:08:57.280 | where you can do something that the old guys never had a chance to think about.
02:09:02.320 | Don't go into a field that's fairly stagnant, because then there won't be much to do,
02:09:08.480 | or you'll be trying to tackle the problems that the old guys got stuck on. And so I think in
02:09:12.800 | science I can give people good advice that they should pick a subject where there are exciting
02:09:20.240 | new developments, and also, of course, something which suits their style, because even within
02:09:25.760 | science, which is just one profession, there's a big range of style between the sort of solitary
02:09:30.560 | thinker, the person who does field work, the person who works in a big team, et cetera,
02:09:34.960 | and whether you like computing or mathematical thought, et cetera. So pick some subject that
02:09:41.520 | suits your style and where things are happening fast. And be prepared to be flexible, that's what
02:09:47.920 | I'd say really. - Keep your eyes open for the opportunity throughout, like you said. Go to a
02:09:52.560 | new field, go to a field where new cool stuff is happening, and just keep your eyes open.
02:09:57.200 | - Yes, that's platitudinous, but I think most of us, and I include myself in this,
02:10:01.840 | didn't realize these sort of things until too late. - Yeah, I think this applies way beyond science.
02:10:08.960 | What do you make of this finiteness of our life? Do you think about death? Do you think about
02:10:16.880 | mortality? Do you think about your mortality? And are you afraid of death?
02:10:20.960 | - Well, I mean, I'm not afraid because I think I'm lucky. I feel lucky to have lived as long as I
02:10:26.640 | have and to have been fairly lucky in my life in many respects compared to most people. So I feel
02:10:35.920 | very fortunate. This reminds me of this current emphasis on living much longer,
02:10:45.120 | these so-called Altos Labs, which have been set up by billionaires. There's one in
02:10:52.080 | San Francisco, one in La Jolla, I think, and one in Cambridge. And they're funded by these guys who,
02:11:02.720 | when young, wanted to be rich. And now they're rich, they want to be young again.
02:11:06.800 | They won't find that quite so easy. And do we want this? I don't know. If there was some elite
02:11:13.280 | that was able to live much longer than others, that would be a really fundamental kind of
02:11:17.520 | inequality. And I think if it happened to everyone, then that might be an improvement. It's
02:11:26.000 | not so obvious. But for my part, I think to have lived as long as most people
02:11:35.920 | and had a fortunate life is all I can expect and a lot to be grateful for.
02:11:41.280 | - Those are all the patterns to use.
02:11:44.320 | - Well, I am incredibly honored that you sit down with me today. I thank you so much for a life
02:11:50.800 | of exploring some of the deepest mysteries of our universe and of our humanity and thinking
02:11:57.360 | about our future with existential risks that are before us. It's a huge honor,
02:12:02.960 | Martin, that you sit with me. And I really enjoyed it.
02:12:05.920 | - Well, thank you, Lex. I didn't think we could go on for as long as this. We could have gone on
02:12:10.240 | a lot longer, I think. - Exactly. Thank you so much.
02:12:13.280 | Thank you for listening to this conversation with Martin Rees. To support this podcast,
02:12:18.320 | please check out our sponsors in the description. And now let me leave you with some words from
02:12:22.960 | Martin Rees himself. "I'd like to widen people's awareness of the tremendous time span lying ahead
02:12:29.360 | for our planet and for life itself. Most educated people are aware that we're the outcome of nearly
02:12:36.720 | four billion years of Darwinian selection, but many tend to think that humans are somehow the
02:12:43.200 | culmination. Our sun, however, is less than halfway through its lifespan. It will not be humans who
02:12:51.040 | watch the sun's demise six billion years from now. Any creatures that then exist will be as
02:12:58.000 | different from us as we are from bacteria or amoeba." Thank you for listening. I hope to see
02:13:06.320 | you next time.