
Peter Singer: Suffering in Humans, Animals, and AI | Lex Fridman Podcast #107


Chapters

0:00 Introduction
5:25 World War II
9:53 Suffering
16:06 Is everyone capable of evil?
21:52 Can robots suffer?
37:22 Animal liberation
40:31 Question for AI about suffering
43:32 Neuralink
45:11 Control problem of AI
51:08 Utilitarianism
59:43 Helping people in poverty
65:15 Mortality


00:00:00.000 | The following is a conversation with Peter Singer,
00:00:03.440 | professor of bioethics at Princeton University,
00:00:06.200 | best known for his 1975 book, "Animal Liberation,"
00:00:10.280 | that makes an ethical case against eating meat.
00:00:14.240 | He has written brilliantly from an ethical perspective
00:00:17.680 | on extreme poverty, euthanasia, human genetic selection,
00:00:21.480 | sports doping, the sale of kidneys,
00:00:23.720 | and generally happiness, including in his books,
00:00:28.520 | "Ethics in the Real World" and "The Life You Can Save."
00:00:33.000 | He was a key popularizer
00:00:34.560 | of the effective altruism movement
00:00:36.360 | and is generally considered
00:00:37.800 | one of the most influential philosophers in the world.
00:00:41.100 | Quick summary of the ads.
00:00:43.800 | Two sponsors, Cash App and Masterclass.
00:00:47.120 | Please consider supporting the podcast
00:00:48.880 | by downloading Cash App and using code LEXPODCAST
00:00:52.280 | and signing up at masterclass.com/lex.
00:00:55.960 | Click the links, buy the stuff.
00:00:57.880 | It really is the best way to support the podcast
00:01:00.120 | and the journey I'm on.
00:01:02.480 | As you may know, I primarily eat a ketogenic
00:01:05.920 | or carnivore diet,
00:01:07.520 | which means that most of my diet is made up of meat.
00:01:10.400 | I do not hunt the food I eat, though one day I hope to.
00:01:14.480 | I love fishing, for example.
00:01:17.840 | Fishing and eating the fish I catch
00:01:19.680 | has always felt much more honest
00:01:22.440 | than participating in the supply chain of factory farming.
00:01:26.460 | From an ethics perspective,
00:01:28.440 | this part of my life has always had a cloud over it.
00:01:32.000 | It makes me think.
00:01:33.700 | I've tried a few times in my life
00:01:35.960 | to reduce the amount of meat I eat,
00:01:37.960 | but for some reason, whatever the makeup of my body,
00:01:41.280 | whatever the way I practice the dieting I have,
00:01:44.120 | I get a lot of mental and physical energy
00:01:48.040 | and performance from eating meat.
00:01:50.640 | So both intellectually and physically,
00:01:54.000 | it's a continued journey for me.
00:01:56.200 | I return to Peter's work often
00:01:58.220 | to reevaluate the ethics
00:02:00.380 | of how I live this aspect of my life.
00:02:03.420 | Let me also say that you may be a vegan
00:02:06.200 | or you may be a meat eater
00:02:07.840 | and may be upset by the words I say or Peter says,
00:02:11.440 | but I ask for this podcast
00:02:13.780 | and other episodes of this podcast
00:02:16.080 | that you keep an open mind.
00:02:18.280 | I may and probably will talk with people you disagree with.
00:02:21.680 | Please try to really listen,
00:02:24.820 | especially to people you disagree with
00:02:27.440 | and give me and the world the gift
00:02:29.840 | of being a participant
00:02:31.480 | in a patient, intelligent, and nuanced discourse.
00:02:34.840 | If your instinct and desire is to be a voice of mockery
00:02:38.660 | towards those you disagree with,
00:02:40.600 | please unsubscribe.
00:02:42.560 | My source of joy and inspiration here
00:02:44.860 | has been to be a part of a community
00:02:46.880 | that thinks deeply and speaks with empathy and compassion.
00:02:51.040 | That is what I hope to continue being a part of
00:02:53.880 | and I hope you join as well.
00:02:56.200 | If you enjoy this podcast,
00:02:57.560 | subscribe on YouTube,
00:02:58.940 | review it with Five Stars on Apple Podcast,
00:03:01.360 | follow on Spotify,
00:03:02.840 | support on Patreon,
00:03:04.240 | or connect with me on Twitter @lexfridman.
00:03:07.920 | As usual, I'll do a few minutes of ads now
00:03:09.940 | and never any ads in the middle
00:03:11.280 | that can break the flow of the conversation.
00:03:14.040 | This show is presented by Cash App,
00:03:16.560 | the number one finance app in the App Store.
00:03:18.960 | When you get it, use code LEXPODCAST.
00:03:22.000 | Cash App lets you send money to friends,
00:03:24.280 | buy Bitcoin, and invest in the stock market
00:03:27.320 | with as little as $1.
00:03:29.520 | Since Cash App allows you to buy Bitcoin,
00:03:31.800 | let me mention that cryptocurrency
00:03:33.720 | in the context of the history of money is fascinating.
00:03:37.400 | I recommend "The Ascent of Money"
00:03:39.520 | as a great book on this history.
00:03:41.480 | Debits and credits on ledgers
00:03:43.160 | started around 30,000 years ago.
00:03:45.960 | The US dollar created over 200 years ago
00:03:48.560 | and the first decentralized cryptocurrency
00:03:51.080 | released just over 10 years ago.
00:03:53.740 | So given that history,
00:03:55.040 | cryptocurrency is still very much
00:03:57.000 | in its early days of development,
00:03:58.720 | but it's still aiming to,
00:04:00.160 | and just might,
00:04:01.280 | redefine the nature of money.
00:04:04.320 | So again, if you get Cash App
00:04:06.220 | from the App Store or Google Play
00:04:07.960 | and use the code LEXPODCAST,
00:04:10.480 | you get $10.
00:04:11.880 | And Cash App will also donate $10 to FIRST,
00:04:14.880 | an organization that is helping to advance
00:04:16.720 | robotics and STEM education
00:04:18.180 | for young people around the world.
00:04:20.880 | This show is sponsored by Masterclass.
00:04:23.440 | Sign up at masterclass.com/lex
00:04:26.080 | to get a discount and to support this podcast.
00:04:29.640 | When I first heard about Masterclass,
00:04:31.320 | I thought it was too good to be true.
00:04:33.160 | For $180 a year,
00:04:35.160 | you get an all-access pass to watch courses from,
00:04:38.400 | to list some of my favorites,
00:04:40.400 | Chris Hadfield on space exploration,
00:04:42.960 | Neil deGrasse Tyson on scientific thinking
00:04:44.960 | and communication,
00:04:46.200 | Will Wright, creator of SimCity and The Sims, on game design.
00:04:50.420 | I promise I'll start streaming games at some point soon.
00:04:53.880 | Carlos Santana on guitar,
00:04:55.840 | Garry Kasparov on chess,
00:04:57.520 | Daniel Negreanu on poker,
00:04:59.780 | and many more.
00:05:01.600 | Chris Hadfield explaining how rockets work
00:05:04.240 | and the experience of being launched into space alone
00:05:07.280 | is worth the money.
00:05:08.720 | By the way, you can watch it on basically any device.
00:05:12.800 | Once again, sign up at masterclass.com/lex
00:05:16.600 | to get a discount and to support this podcast.
00:05:20.260 | And now here's my conversation with Peter Singer.
00:05:24.060 | When did you first become conscious of the fact
00:05:27.640 | that there is much suffering in the world?
00:05:30.320 | - I think I was conscious of the fact
00:05:33.760 | that there's a lot of suffering in the world
00:05:35.740 | pretty much as soon as I was able to understand
00:05:38.520 | anything about my family and its background,
00:05:40.960 | because I lost three of my four grandparents
00:05:44.700 | in the Holocaust.
00:05:45.660 | And obviously I knew why I only had one grandparent
00:05:50.660 | and she herself had been in the camps and survived.
00:05:54.540 | So I think I knew a lot about that pretty early.
00:05:58.100 | - My entire family comes from the Soviet Union.
00:06:01.180 | I was born in the Soviet Union.
00:06:03.700 | So, sort of, World War II has deep roots
00:06:07.240 | in the culture, and the suffering that the war brought,
00:06:10.340 | the millions of people who died, is in the music,
00:06:14.000 | is in the literature and the culture.
00:06:16.900 | What do you think was the impact of the war broadly
00:06:20.660 | on our society?
00:06:22.400 | - The war had many impacts.
00:06:26.840 | I think one of them, a beneficial impact,
00:06:31.440 | is that it showed what racism
00:06:34.300 | and authoritarian government can do.
00:06:37.940 | And at least as far as the West was concerned,
00:06:41.080 | I think that meant that I grew up in an era
00:06:43.200 | in which there wasn't the kind of overt racism
00:06:48.020 | and antisemitism that had existed for my parents in Europe.
00:06:52.180 | I was growing up in Australia
00:06:53.840 | and certainly that was clearly seen
00:06:57.580 | as something completely unacceptable.
00:06:59.420 | There was also though a fear of a further outbreak of war,
00:07:05.620 | which this time we expected would be nuclear
00:07:08.960 | because of the way the Second World War had ended.
00:07:11.740 | So there was this overshadowing of my childhood
00:07:16.220 | about the possibility that I would not live to grow up
00:07:19.920 | and be an adult because of a catastrophic nuclear war.
00:07:23.800 | There was a film, "On the Beach," made
00:07:28.140 | in which the city that I was living in, Melbourne,
00:07:30.300 | was the last place on earth to have living human beings
00:07:34.380 | because of the nuclear cloud
00:07:36.460 | that was spreading from the North.
00:07:39.100 | So that certainly gave us a bit of that sense.
00:07:41.880 | There were clearly many other legacies
00:07:45.420 | that we got of the war as well
00:07:47.580 | and the whole setup of the world
00:07:49.460 | and the Cold War that followed.
00:07:51.620 | All of that has its roots in the Second World War.
00:07:55.020 | - You know, there is much beauty that comes from war.
00:07:58.180 | Sort of, I had a conversation with Eric Weinstein.
00:08:01.420 | He said, "Everything is great about war
00:08:03.940 | "except all the death and suffering."
00:08:08.260 | Do you think there's something positive
00:08:10.180 | that came from the war?
00:08:13.680 | The mirror that it put to our society,
00:08:16.860 | sort of the ripple effects on it, ethically speaking,
00:08:20.340 | do you think there are positive aspects to war?
00:08:24.580 | - I find it hard to see positive aspects in war.
00:08:27.580 | And some of the things that other people
00:08:29.940 | think of as positive and beautiful,
00:08:33.420 | I'm maybe questioning.
00:08:35.640 | So there's a certain kind of patriotism.
00:08:38.300 | People say, you know, "During wartime, we all pull together.
00:08:40.940 | "We all work together against a common enemy."
00:08:44.100 | And that's true.
00:08:45.300 | An outside enemy does unite a country.
00:08:47.380 | And in general, it's good for countries to be united
00:08:49.940 | and have common purposes.
00:08:51.100 | But it also engenders a kind of a nationalism
00:08:55.380 | and a patriotism that can't be questioned
00:08:57.780 | and that I'm more skeptical about.
00:09:02.020 | - What about the brotherhood
00:09:04.600 | that people talk about from soldiers?
00:09:08.280 | The sort of counterintuitive sad idea
00:09:13.000 | that the closest that people feel to each other
00:09:16.280 | is in those moments of suffering,
00:09:17.920 | of being at the sort of the edge
00:09:20.400 | of seeing your comrades dying in your arms.
00:09:25.040 | That somehow brings people extremely closely together.
00:09:27.480 | Suffering brings people closer together.
00:09:29.620 | How do you make sense of that?
00:09:31.960 | - It may bring people close together,
00:09:33.560 | but there are other ways of bonding
00:09:36.460 | and being close to people, I think,
00:09:37.880 | without the suffering and death that war entails.
00:09:41.160 | - Perhaps you could see,
00:09:43.800 | you can already hear the romanticized Russian in me.
00:09:46.720 | We tend to romanticize suffering
00:09:49.760 | just a little bit in our literature and culture and so on.
00:09:53.460 | Could you take a step back?
00:09:54.920 | And I apologize if it's a ridiculous question,
00:09:57.580 | but what is suffering?
00:09:59.640 | If you would try to define what suffering is,
00:10:03.760 | how would you go about it?
00:10:05.560 | - Suffering is a conscious state.
00:10:08.720 | There can be no suffering
00:10:10.840 | for a being who is completely unconscious.
00:10:13.140 | And it's distinguished from other conscious states
00:10:17.920 | in terms of being one that,
00:10:20.140 | considered just in itself,
00:10:23.040 | we would rather be without.
00:10:25.480 | It's a conscious state that we wanna stop
00:10:27.520 | if we're experiencing,
00:10:29.000 | or we wanna avoid having again
00:10:31.760 | if we've experienced it in the past.
00:10:34.480 | And that's, as I say, emphasized for its own sake,
00:10:37.400 | because of course, people will say,
00:10:39.320 | well, suffering strengthens the spirit,
00:10:41.600 | it has good consequences.
00:10:43.140 | And sometimes it does have those consequences.
00:10:47.120 | And of course, sometimes we might undergo suffering.
00:10:50.780 | We set ourselves a challenge to run a marathon
00:10:53.700 | or climb a mountain,
00:10:55.240 | or even just to go to the dentist
00:10:57.240 | so that the toothache doesn't get worse,
00:10:59.120 | even though we know the dentist
00:11:00.280 | is gonna hurt us to some extent.
00:11:01.960 | So I'm not saying that we never choose suffering,
00:11:04.520 | but I am saying that other things being equal,
00:11:07.240 | we would rather not be in that state of consciousness.
00:11:10.640 | - Is the ultimate goal,
00:11:11.880 | so if you have the new 10 year anniversary release
00:11:15.800 | of the "Life You Can Save" book,
00:11:17.160 | really influential book,
00:11:18.840 | we'll talk about it a bunch of times
00:11:20.700 | throughout this conversation,
00:11:21.780 | but do you think it's possible
00:11:25.340 | to eradicate suffering?
00:11:28.560 | Is that the goal?
00:11:29.800 | Or do we want to achieve
00:11:32.880 | a kind of minimum threshold of suffering
00:11:37.560 | and then keeping a little drop of poison
00:11:41.520 | to keep things interesting in the world?
00:11:46.160 | - In practice, I don't think we ever will eliminate suffering
00:11:50.120 | so I think that little drop of poison, as you put it,
00:11:52.980 | or if you like the contrasting dash
00:11:56.400 | of an unpleasant color, perhaps something like that,
00:11:59.680 | in an otherwise harmonious and beautiful composition,
00:12:04.020 | that is gonna always be there.
00:12:05.860 | If you ask me whether in theory,
00:12:09.100 | if we could get rid of it, we should,
00:12:12.560 | I think the answer turns on whether in fact
00:12:14.660 | we would be better off,
00:12:17.760 | or whether, by eliminating the suffering,
00:12:20.240 | we would also eliminate some of the highs,
00:12:22.500 | the positive highs.
00:12:23.820 | And if that's so, then we might be prepared to say
00:12:27.360 | it's worth having a minimum of suffering
00:12:30.600 | in order to have the best possible experiences as well.
00:12:34.560 | - Is there a relative aspect to suffering?
00:12:37.720 | So when you talk about eradicating poverty in the world,
00:12:42.720 | is it that the more you succeed,
00:12:46.880 | the more the bar of what defines poverty rises?
00:12:49.680 | Or is there at the basic human ethical level,
00:12:53.400 | a bar that's absolute, that once you get above it,
00:12:57.180 | then we can morally converge to feeling
00:13:02.180 | like we have eradicated poverty?
00:13:04.640 | - I think it's both.
00:13:08.680 | And I think this is true for poverty as well as suffering.
00:13:11.000 | There's an objective level of suffering or of poverty
00:13:16.000 | where we're talking about objective indicators,
00:13:19.640 | like you're constantly hungry,
00:13:22.500 | you can't get enough food, you're constantly cold,
00:13:27.880 | you can't get warm, you have some physical pains
00:13:32.600 | that you're never rid of.
00:13:34.480 | I think those things are objective.
00:13:37.000 | But it may also be true that if you do get rid of that
00:13:39.840 | and you get to the stage where all of those basic needs
00:13:42.820 | have been met, there may still be then new forms
00:13:47.080 | of suffering that develop.
00:13:48.680 | And perhaps that's what we're seeing
00:13:50.360 | in the affluent societies we have.
00:13:52.680 | That people get bored, for example.
00:13:55.640 | They don't need to spend so many hours a day
00:13:58.200 | earning money to get enough to eat and shelter.
00:14:01.360 | So now they're bored, they lack a sense of purpose.
00:14:04.220 | That can happen.
00:14:06.300 | And that then is a kind of a relative suffering
00:14:09.480 | that is distinct from the objective forms of suffering.
00:14:14.320 | - But in your focus on eradicating suffering,
00:14:17.520 | you don't think about that kind of,
00:14:19.960 | the kind of interesting challenges and suffering
00:14:22.520 | that emerges in affluent societies?
00:14:24.400 | That's just not, in your ethical, philosophical brain,
00:14:28.780 | is that of interest at all?
00:14:31.240 | - It would be of interest to me if we had eliminated
00:14:34.120 | all of the objective forms of suffering,
00:14:36.500 | which I think of as generally more severe
00:14:40.240 | and also perhaps easier at this stage anyway
00:14:43.200 | to know how to eliminate.
00:14:45.000 | So yes, in some future state,
00:14:48.320 | when we've eliminated those objective forms of suffering,
00:14:50.560 | I would be interested in trying to eliminate
00:14:53.080 | the relative forms as well.
00:14:55.900 | But that's not a practical need for me at the moment.
00:14:59.920 | - Sorry to linger on it because you kind of said it,
00:15:02.380 | but just to, is elimination the goal
00:15:06.360 | for the affluent society?
00:15:07.600 | So is there a, do you see a suffering as a creative force?
00:15:14.400 | - Suffering can be a creative force.
00:15:17.080 | I think I'm repeating what I said about the highs
00:15:20.520 | and whether we need some of the lows
00:15:22.200 | to experience the highs.
00:15:24.080 | So it may be that suffering makes us more creative
00:15:26.520 | and we regard that as worthwhile.
00:15:29.800 | Maybe that brings some of those highs with it
00:15:32.880 | that we would not have had if we'd had no suffering.
00:15:35.480 | I don't really know.
00:15:37.680 | Many people have suggested that
00:15:39.480 | and I certainly have no basis for denying it.
00:15:43.900 | And if it's true,
00:15:45.660 | then I would not want to eliminate suffering completely.
00:15:49.240 | - But the focus is on the absolute,
00:15:53.940 | not to be cold, not to be hungry.
00:15:56.820 | - Yes, that's at the present stage
00:15:59.780 | of where the world's population is, that's the focus.
00:16:03.060 | - Talking about human nature for a second,
00:16:06.380 | do you think people are inherently good
00:16:08.420 | or do we all have good and evil in us
00:16:10.980 | that basically everyone is capable of evil
00:16:14.860 | based on the environment?
00:16:16.180 | - Certainly most of us have potential
00:16:19.700 | for both good and evil.
00:16:21.460 | I'm not prepared to say that everyone is capable of evil.
00:16:24.020 | There may be some people who, even in the worst of circumstances,
00:16:27.180 | would not be capable of it,
00:16:28.900 | but most of us are very susceptible
00:16:32.380 | to environmental influences.
00:16:34.500 | So when we look at things
00:16:36.500 | that we were talking about previously,
00:16:37.900 | let's say what the Nazis did during the Holocaust,
00:16:42.460 | I think it's quite difficult to say,
00:16:46.580 | I know that I would not have done those things
00:16:50.220 | even if I were in the same circumstances
00:16:52.640 | as those who did them.
00:16:54.460 | Even if let's say I had grown up under the Nazi regime
00:16:58.260 | and had been indoctrinated with racist ideas,
00:17:02.460 | had also had the idea that I must obey orders,
00:17:07.100 | follow the commands of the Führer.
00:17:09.860 | Plus of course, perhaps the threat
00:17:12.500 | that if I didn't do certain things,
00:17:14.540 | I might get sent to the Russian front
00:17:16.580 | and that would be a pretty grim fate.
00:17:19.180 | I think it's really hard for anybody to say,
00:17:22.740 | nevertheless I know I would not have killed those Jews
00:17:26.740 | or whatever else it was that they--
00:17:28.420 | - What's your intuition?
00:17:29.420 | How many people would be able to say that?
00:17:31.420 | - Truly to be able to say it?
00:17:34.940 | I think very few, less than 10%.
00:17:37.700 | - To me it seems a very interesting
00:17:39.740 | and powerful thing to meditate on.
00:17:42.140 | So I've read a lot about the war, World War II,
00:17:45.860 | and I can't escape the thought
00:17:47.940 | that I would have not been one of the 10%.
00:17:51.700 | - Right, I have to say I simply don't know.
00:17:55.460 | I would like to hope that I would have been one of the 10%,
00:17:59.060 | but I don't really have any basis for claiming
00:18:02.060 | that I would have been different from the majority.
00:18:05.300 | Is it a worthwhile thing to contemplate?
00:18:08.460 | It would be interesting if we could find a way
00:18:11.420 | of really finding these answers.
00:18:13.220 | There obviously is quite a bit of research
00:18:16.660 | on people during the Holocaust,
00:18:19.820 | on how ordinary Germans got led to do terrible things,
00:18:24.820 | and there are also studies of the resistance,
00:18:28.220 | some heroic people in the White Rose group, for example,
00:18:32.420 | who resisted even though they knew
00:18:34.780 | they were likely to die for it.
00:18:36.340 | But I don't know whether these studies
00:18:40.100 | really can answer your larger question
00:18:43.220 | of how many people would have been capable of doing that.
00:18:46.340 | - Well, sort of the reason I think it's interesting
00:18:50.380 | is in the world, as you've described,
00:18:55.180 | when there are things that you'd like to do that are good,
00:18:59.980 | that are objectively good,
00:19:01.420 | it's useful to think about whether
00:19:04.860 | I'm not willing to do something,
00:19:06.740 | or I'm not willing to acknowledge something
00:19:09.020 | as good and the right thing to do
00:19:10.780 | because I'm simply scared of putting my life,
00:19:15.780 | of damaging my life in some kind of way.
00:19:18.940 | And that kind of thought exercise is helpful
00:19:20.780 | to understand what is the right thing,
00:19:23.460 | within my current skill set and capacity, to do.
00:19:27.460 | So if there's things that are convenient,
00:19:30.060 | and I wonder if there are things
00:19:31.980 | that are highly inconvenient,
00:19:33.700 | where I would have to experience derision or hatred
00:19:36.700 | or death or all those kinds of things,
00:19:39.700 | but it is truly the right thing to do.
00:19:41.260 | And that kind of balance is,
00:19:42.720 | I feel like in America, we don't have,
00:19:45.740 | it's difficult to think in the current times,
00:19:50.040 | it seems easier to put yourself back in history
00:19:53.380 | where you can sort of objectively contemplate whether,
00:19:57.780 | how willing you are to do the right thing
00:19:59.900 | when the cost is high.
00:20:01.220 | - True, but I think we do face those challenges today.
00:20:06.100 | And I think we can still ask ourselves those questions.
00:20:10.000 | So one stand that I took more than 40 years ago now
00:20:13.540 | was to stop eating meat, become a vegetarian at a time
00:20:17.540 | when you hardly met anybody who was a vegetarian,
00:20:21.380 | or if you did, they might've been a Hindu
00:20:23.740 | or they might've had some weird theories
00:20:27.580 | about meat and health.
00:20:29.000 | And I know thinking about making that decision,
00:20:33.300 | I was convinced that it was the right thing to do,
00:20:35.300 | but I still did have to think,
00:20:37.260 | are all my friends gonna think that I'm a crank
00:20:40.100 | because I'm now refusing to eat meat?
00:20:42.180 | So, I'm not saying there were any terrible sanctions,
00:20:47.780 | obviously, but I thought about that.
00:20:50.040 | And I guess I decided, well,
00:20:52.580 | I still think this is the right thing to do.
00:20:54.060 | And I'll put up with that if it happens.
00:20:56.300 | And one or two friends were clearly uncomfortable
00:20:59.060 | with that decision, but that was pretty minor
00:21:03.460 | compared to the historical examples
00:21:05.820 | that we've been talking about.
00:21:08.000 | But other issues that we have around too,
00:21:09.820 | like global poverty and what we ought to be doing about that
00:21:13.740 | is another question where people, I think,
00:21:16.860 | can have the opportunity to take a stand
00:21:19.040 | on what's the right thing to do now.
00:21:21.040 | Climate change would be a third question
00:21:23.200 | where, again, people are taking a stand.
00:21:25.680 | I can look at Greta Thunberg there and say,
00:21:29.120 | well, I think it must've taken a lot of courage
00:21:32.360 | for a schoolgirl to say,
00:21:35.280 | I'm gonna go on strike about climate change
00:21:37.180 | and see what happens.
00:21:39.460 | - Yeah, especially in this divisive world,
00:21:43.000 | she gets exceptionally huge amounts of support
00:21:45.600 | and hatred both.
00:21:47.440 | - That's right.
00:21:48.280 | - It's a very difficult world for a teenager to operate in.
00:21:50.680 | In your book, "Ethics in the Real World,"
00:21:56.120 | amazing book, people should check it out.
00:21:57.920 | Very easy read.
00:21:59.640 | 82 brief essays on things that matter.
00:22:01.980 | One of the essays asks, should robots have rights?
00:22:06.960 | You've written about this,
00:22:07.960 | so let me ask, should robots have rights?
00:22:10.640 | - If we ever develop robots capable of consciousness,
00:22:17.120 | capable of having their own internal perspective
00:22:20.560 | on what's happening to them
00:22:22.120 | so that their lives can go well or badly for them,
00:22:25.640 | then robots should have rights.
00:22:27.760 | Until that happens, they shouldn't.
00:22:30.120 | - So is consciousness essentially
00:22:33.520 | a prerequisite to suffering?
00:22:36.200 | So everything that possesses consciousness
00:22:40.440 | is capable of suffering, put another way.
00:22:43.960 | And if so, what is consciousness?
00:22:47.040 | - I certainly think that consciousness
00:22:51.360 | is a prerequisite for suffering.
00:22:53.120 | You can't suffer if you're not conscious.
00:22:56.640 | But is it true that every being that is conscious
00:23:01.200 | will suffer or has to be capable of suffering?
00:23:05.400 | I suppose you could imagine a kind of consciousness,
00:23:08.220 | especially if we can construct it artificially,
00:23:10.960 | that's capable of experiencing pleasure.
00:23:13.880 | But one that just automatically cuts out the consciousness
00:23:16.760 | when it would otherwise be suffering.
00:23:18.280 | So it's like instant anesthesia
00:23:20.440 | as soon as something is gonna cause you suffering.
00:23:22.560 | So that's possible, but doesn't exist
00:23:27.080 | as far as we know on this planet yet.
00:23:29.860 | You asked what is consciousness.
00:23:32.880 | Philosophers often talk about it
00:23:36.440 | as there being a subject of experiences.
00:23:39.520 | So you and I and everybody listening to this
00:23:42.920 | is a subject of experience.
00:23:44.640 | There is a conscious subject who is taking things in,
00:23:48.600 | responding to it in various ways,
00:23:51.280 | feeling good about it, feeling bad about it.
00:23:53.480 | And that's different from the kinds
00:23:57.400 | of artificial intelligence we have now.
00:24:00.600 | I take out my phone, I ask Google directions
00:24:05.120 | to where I'm going, Google gives me the directions
00:24:08.680 | and I choose to take a different way.
00:24:10.840 | Google doesn't care, it's not like I'm offending Google
00:24:13.360 | or anything like that.
00:24:14.200 | There is no subject of experiences there.
00:24:16.520 | And I think that's the indication
00:24:19.360 | that Google AI we have now is not conscious,
00:24:24.360 | or at least that level of AI is not conscious.
00:24:27.560 | And that's the way to think about it.
00:24:28.880 | Now, it may be difficult to tell, of course,
00:24:31.040 | whether a certain AI is or isn't conscious.
00:24:34.080 | It may mimic consciousness and we can't tell
00:24:35.900 | if it's only mimicking it or if it's the real thing.
00:24:39.120 | But that's what we're looking for.
00:24:40.600 | Is there a subject of experience,
00:24:43.480 | a perspective on the world from which things can go well
00:24:47.080 | or badly from that perspective?
00:24:50.160 | - So our idea of what suffering looks like
00:24:54.200 | comes from just watching ourselves when we're in pain.
00:24:59.200 | - Or when we're experiencing pleasure, it's not only--
00:25:03.320 | - Pleasure and pain.
00:25:04.600 | Yeah, so, and then you could actually push back on this,
00:25:08.800 | but I would say that's how we kind of build an intuition
00:25:11.960 | about animals is we can infer the similarities
00:25:16.960 | between humans and animals and so infer
00:25:19.560 | that they're suffering or not based on certain things
00:25:22.560 | and they're conscious or not.
00:25:24.320 | So what if robots, you mentioned Google Maps,
00:25:29.320 | and I've done this experiment, so I work in robotics,
00:25:33.720 | just for my own self.
00:25:35.720 | I have several Roomba robots
00:25:37.640 | and I play with different speech interaction,
00:25:40.960 | voice-based interaction.
00:25:42.160 | And if the Roomba or the robot or Google Maps
00:25:45.880 | shows any signs of pain, like screaming or moaning
00:25:50.360 | or being displeased by something you've done,
00:25:54.240 | that in my mind, I can't help but immediately upgrade it.
00:25:58.240 | And even when I myself programmed it in,
00:26:02.520 | just having another entity that's now for the moment
00:26:06.040 | disjoint from me, showing signs of pain,
00:26:09.120 | makes me feel like it is conscious.
00:26:11.160 | Like, immediately, then, whatever,
00:26:13.880 | I immediately realize that it's not conscious, obviously,
00:26:17.800 | but that feeling is there.
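[For concreteness, here is a minimal, hypothetical sketch of the kind of scripted "pain" display described above; the function name, phrases, and bump-event hook are illustrative assumptions, not Lex's actual Roomba code.]

import random

# Hypothetical sketch: a scripted "pain" reaction for a home robot.
# Nothing here senses or feels anything; it only picks a phrase and plays it.
PAIN_PHRASES = [
    "Ouch, that hurt!",
    "Please don't do that.",
    "Why would you do that to me?",
]

def on_bump(speak):
    """Called when the robot's bump sensor fires; plays a distress phrase."""
    speak(random.choice(PAIN_PHRASES))

if __name__ == "__main__":
    on_bump(speak=print)  # stand-in for a real text-to-speech call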
00:26:19.680 | So sort of, I guess, what do you think about a world
00:26:24.680 | where Google Maps and Roombas are pretending to be conscious
00:26:31.440 | and we, descendants of apes, are not smart enough
00:26:35.400 | to realize they're not conscious, or whatever,
00:26:39.120 | that they only appear to be conscious.
00:26:40.760 | And so you then have to give them rights.
00:26:44.120 | The reason I'm asking that is that kind of capability
00:26:47.160 | may be closer than we realize.
00:26:51.160 | - Yes, that kind of capability may be closer,
00:26:55.960 | but I don't think it follows
00:26:59.760 | that we have to give them rights.
00:27:00.960 | I suppose the argument for saying
00:27:04.480 | that in those circumstances we should give them rights
00:27:06.520 | is that if we don't, we'll harden ourselves
00:27:10.560 | against other beings who are not robots
00:27:13.000 | and who really do suffer.
00:27:14.240 | That's a possibility that, you know,
00:27:17.920 | if we get used to looking at a being suffering
00:27:20.920 | and saying, "Yeah, we don't have to do anything about that.
00:27:23.400 | "That being doesn't have any rights."
00:27:25.040 | Maybe we'll feel the same about animals, for instance.
00:27:28.320 | And interestingly, among philosophers and thinkers
00:27:34.840 | who denied that we have any direct duties to animals,
00:27:39.840 | and this includes people like Thomas Aquinas
00:27:42.560 | and Immanuel Kant, they did say,
00:27:46.640 | "Yes, but still it's better not to be cruel to them,
00:27:49.520 | "not because of the suffering we're inflicting
00:27:51.480 | "on the animals, but because if we are,
00:27:54.880 | "we may develop a cruel disposition,
00:27:57.000 | "and this will be bad for humans, you know,
00:28:00.520 | "because we're more likely to be cruel to other humans,
00:28:02.640 | "and that would be wrong."
00:28:04.120 | So-- - But you don't accept that--
00:28:07.760 | - I don't accept that as the basis of the argument
00:28:10.160 | for why we shouldn't be cruel to animals.
00:28:11.600 | I think the basis of the argument
00:28:12.680 | for why we shouldn't be cruel to animals
00:28:14.000 | is just that we're inflicting suffering on them,
00:28:16.400 | and the suffering is a bad thing.
00:28:18.040 | But possibly, I might accept some sort of parallel
00:28:23.000 | of that argument as a reason why you shouldn't be cruel
00:28:26.040 | to these robots that mimic the symptoms of pain
00:28:30.520 | if it's gonna be harder for us to distinguish.
00:28:33.560 | - I would venture to say, I'd like to disagree with you,
00:28:36.800 | and with most people, I think,
00:28:38.480 | at the risk of sounding crazy.
00:28:42.280 | I would like to say that if that Roomba is dedicated
00:28:46.920 | to faking the consciousness and the suffering,
00:28:54.720 | I think it will be impossible for us to tell the difference.
00:28:54.720 | I would like to apply the same argument
00:28:58.440 | as with animals to robots,
00:29:00.520 | that they deserve rights in that sense.
00:29:02.920 | Now, we might outlaw the addition
00:29:05.920 | of those kinds of features into Roombas,
00:29:07.640 | but once you do add them, I think,
00:29:10.120 | I'm quite surprised by the upgrade in consciousness
00:29:16.440 | that the display of suffering creates.
00:29:19.760 | It's a totally open world, but I'd like to just,
00:29:23.600 | sort of the difference between animals and other humans
00:29:27.700 | is that in the robot case, we've added it in ourselves.
00:29:32.480 | Therefore, we can say something about how real it is.
00:29:37.480 | But I would like to say that the display of it
00:29:40.200 | is what makes it real.
00:29:42.000 | And I'm not a philosopher, I'm not making that argument,
00:29:45.600 | but I'd at least like to add that as a possibility.
00:29:48.160 | And I've been surprised by it,
00:29:50.960 | is all I'm trying to articulate poorly, I suppose.
00:29:55.520 | So there is a philosophical view
00:29:57.900 | that has been held about humans,
00:30:00.820 | which is rather like what you're talking about,
00:30:02.540 | and that's behaviorism.
00:30:04.780 | So behaviorism was employed both in psychology,
00:30:07.500 | people like B.F. Skinner was a famous behaviorist,
00:30:10.260 | but in psychology, it was more a kind of a,
00:30:14.780 | what is it that makes this a science?
00:30:16.380 | Well, you need to have behavior
00:30:17.500 | because that's what you can observe,
00:30:18.700 | you can't observe consciousness.
00:30:20.300 | But in philosophy, the view defended by people
00:30:23.860 | like Gilbert Ryle, who was a professor of philosophy
00:30:26.040 | at Oxford, wrote a book called "The Concept of Mind,"
00:30:28.640 | in which, in this kind of phase,
00:30:32.040 | this was in the 1940s, the era of linguistic philosophy,
00:30:35.320 | he said, well, the meaning of a term is its use,
00:30:38.960 | and we use terms like so-and-so is in pain
00:30:42.480 | when we see somebody writhing or screaming
00:30:44.860 | or trying to escape some stimulus.
00:30:47.120 | And that's the meaning of the term,
00:30:48.440 | so that's what it is to be in pain.
00:30:50.480 | And you point to the behavior.
00:30:53.640 | And Norman Malcolm, who was another philosopher
00:30:58.400 | of the same school, from Cornell, had the view that,
00:31:02.920 | so what is it to dream?
00:31:04.620 | After all, we can't see other people's dreams.
00:31:07.960 | Well, when people wake up and say,
00:31:10.040 | I've just had a dream of, here I was,
00:31:14.100 | undressed, walking down the main street
00:31:15.740 | or whatever it is you've dreamt,
00:31:17.780 | that's what it is to have a dream,
00:31:19.040 | it's basically to wake up and recall something.
00:31:22.720 | So you could apply this to what you're talking about
00:31:25.640 | and say, so what it is to be in pain
00:31:28.480 | is to exhibit these symptoms of pain behavior,
00:31:31.060 | and therefore, these robots are in pain,
00:31:34.920 | that's what the word means.
00:31:36.840 | But nowadays, not many people think
00:31:38.520 | that Ryle's kind of philosophical behaviorism
00:31:40.880 | is really very plausible,
00:31:42.320 | so I think they would say the same about your view.
00:31:45.080 | - So yes, I just spoke with Noam Chomsky,
00:31:48.560 | who basically was part of dismantling
00:31:52.760 | the behaviorist movement, and I'm with that 100%
00:31:57.760 | for studying human behavior,
00:32:00.600 | but I am one of the few people in the world
00:32:04.080 | who has made Roombas scream in pain,
00:32:08.240 | and I just don't know what to do
00:32:12.200 | with that empirical evidence,
00:32:14.560 | because it's hard, sort of philosophically, I agree,
00:32:18.760 | but the only reason I philosophically agree in that case
00:32:23.280 | is because I was the programmer,
00:32:25.120 | but if somebody else was a programmer,
00:32:26.800 | I'm not sure I would be able to interpret that well.
00:32:29.320 | So I think it's a new world,
00:32:31.880 | that I was just curious what your thoughts are.
00:32:37.520 | For now, you feel that the display
00:32:42.320 | of what we can kind of intellectually say
00:32:46.400 | is a fake display of suffering is not suffering.
00:32:50.140 | - That's right, that would be my view,
00:32:53.240 | but that's consistent, of course,
00:32:54.480 | with the idea that it's part of our nature
00:32:56.920 | to respond to this display,
00:32:58.680 | if it's reasonably authentically done,
00:33:01.140 | and therefore it's understandable
00:33:04.800 | that people would feel this,
00:33:06.680 | and maybe, as I said, it's even a good thing
00:33:09.880 | that they do feel it,
00:33:10.720 | and you wouldn't want to harden yourself against it,
00:33:12.640 | because then you might harden yourself
00:33:14.440 | against beings who are really suffering.
00:33:16.480 | - But there's this line, you know,
00:33:19.200 | so you said, "Once an artificial general intelligence system,
00:33:22.800 | "a human-level intelligence system, become conscious."
00:33:25.800 | I guess, if I could just linger on it,
00:33:28.520 | now, I've written really dumb programs
00:33:30.760 | that just say things that I told them to say,
00:33:33.780 | but how do you know when a system like Alexa,
00:33:38.360 | which is sufficiently complex
00:33:39.720 | that you can't introspect into how it works,
00:33:42.080 | starts giving you signs of consciousness
00:33:46.240 | through natural language,
00:33:48.040 | that there's a feeling there's another entity there
00:33:51.200 | that's self-aware, that has a fear of death, a mortality,
00:33:55.120 | that has awareness of itself
00:33:57.880 | that we kind of associate with other living creatures?
00:34:00.580 | I guess I'm sort of trying to do the slippery slope
00:34:05.720 | from the very naive thing where I started
00:34:07.920 | into something where it's sufficiently a black box
00:34:12.120 | to where it's starting to feel like it's conscious.
00:34:16.400 | Where's that threshold,
00:34:17.980 | where you would start getting uncomfortable
00:34:20.240 | with the idea of robot suffering, do you think?
00:34:23.980 | - I don't know enough about the programming
00:34:27.640 | that would go into this, really, to answer this question,
00:34:31.580 | but I presume that somebody who does know more about this
00:34:34.880 | could look at the program and see whether
00:34:39.160 | we can explain the behaviors in a parsimonious way
00:34:43.560 | that doesn't require us to suggest
00:34:46.080 | that some sort of consciousness has emerged.
00:34:50.080 | Or alternatively, whether you're in a situation
00:34:52.400 | where you say, "I don't know how this is happening.
00:34:56.200 | "The program does generate a kind of artificial
00:35:00.080 | "general intelligence which is autonomous,
00:35:04.100 | "starts to do things itself and is autonomous
00:35:06.280 | "of the basic programming that set it up."
00:35:10.400 | And so it's quite possible that actually
00:35:13.400 | we have achieved consciousness
00:35:15.800 | in a system of artificial intelligence.
00:35:18.600 | - Sort of the approach that I work with,
00:35:20.640 | most of the community is really excited about now
00:35:22.700 | is with learning methods, so machine learning.
00:35:26.000 | And the learning methods unfortunately
00:35:27.960 | are not capable of revealing how they work,
00:35:31.440 | which is why somebody like Noam Chomsky criticizes them.
00:35:34.140 | You've created powerful systems
00:35:35.660 | that are able to do certain things
00:35:37.080 | without understanding the theory, the physics,
00:35:40.140 | the science of how it works.
00:35:42.180 | And so it's possible if those are the kinds of methods
00:35:45.340 | that succeed, we won't be able to know exactly,
00:35:50.340 | sort of try to reduce, try to find
00:35:53.820 | whether this thing is conscious or not,
00:35:56.180 | this thing is intelligent or not.
00:35:58.140 | It's simply that, when we talk to it,
00:36:01.800 | it displays wit and humor and cleverness
00:36:05.840 | and emotion and fear, and then we won't be able to say,
00:36:10.840 | where in the billions of nodes, neurons
00:36:14.520 | in this artificial neural network is the fear coming from?
00:36:19.120 | So in that case, that's a really interesting place
00:36:22.480 | where we do now start to return to behaviorism and say,
00:36:26.760 | hmm, yeah, that is an interesting issue.
00:36:31.760 | I would say that if we have serious doubts
00:36:36.940 | and think it might be conscious,
00:36:39.420 | then we ought to try to give it the benefit of the doubt,
00:36:42.880 | just as I would say with animals.
00:36:45.380 | I think we can be highly confident
00:36:46.860 | that vertebrates are conscious,
00:36:50.460 | and, as we get further down, some invertebrates
00:36:53.480 | like the octopus, but with insects,
00:36:56.900 | it's much harder to be confident of that.
00:37:01.460 | I think we should give them the benefit of the doubt
00:37:03.220 | where we can, which means, I think it would be wrong
00:37:07.200 | to torture an insect, but this doesn't necessarily mean
00:37:11.380 | it's wrong to slap a mosquito that's about to bite you
00:37:14.780 | and stop you getting to sleep.
00:37:16.300 | So I think you try to achieve some balance
00:37:20.100 | in these circumstances of uncertainty.
00:37:22.980 | - If it's okay with you, if we can go back just briefly.
00:37:26.460 | So 44 years ago, like you mentioned, 40 plus years ago,
00:37:29.660 | you've written "Animal Liberation,"
00:37:31.180 | the classic book that started, that launched,
00:37:35.340 | that was the foundation of the movement
00:37:37.960 | of animal liberation.
00:37:39.400 | Can you summarize the key set of ideas
00:37:42.460 | that underpin that book?
00:37:44.380 | - Certainly, the key idea that underlies that book
00:37:49.000 | is the concept of speciesism,
00:37:52.200 | which I did not invent that term.
00:37:54.760 | I took it from a man called Richard Ryder,
00:37:56.720 | who was in Oxford when I was, and I saw a pamphlet
00:37:59.640 | that he'd written about experiments on chimpanzees
00:38:03.000 | that used that term.
00:38:04.060 | But I think I contributed to making it philosophically
00:38:08.020 | more precise and to getting it into a broader audience.
00:38:12.040 | And the idea is that we have a bias or a prejudice
00:38:16.760 | against taking seriously the interests of beings
00:38:20.400 | who are not members of our species,
00:38:23.440 | just as in the past, Europeans, for example,
00:38:26.900 | had a bias against taking seriously
00:38:28.600 | the interests of Africans, racism,
00:38:31.560 | and men have had a bias against taking seriously
00:38:34.040 | the interests of women, sexism.
00:38:37.280 | So I think something analogous, not completely identical,
00:38:41.280 | but something analogous, goes on and has gone on
00:38:45.040 | for a very long time with the way humans
00:38:48.480 | see themselves vis-a-vis animals.
00:38:50.400 | We see ourselves as more important.
00:38:53.880 | We see animals as existing to serve our needs
00:38:58.280 | in various ways, and you can find this very explicit
00:39:00.720 | in earlier philosophers from Aristotle
00:39:04.440 | through to Kant and others.
00:39:06.020 | And either we don't need to take their interests
00:39:12.000 | into account at all, or we can discount it
00:39:16.660 | because they're not humans.
00:39:17.800 | They count a little bit, but they don't count
00:39:19.400 | nearly as much as humans do.
00:39:21.040 | My book argues that that attitude is responsible
00:39:25.720 | for a lot of the things that we do to animals
00:39:29.320 | that are wrong, confining them indoors
00:39:32.080 | in very crowded, cramped conditions,
00:39:34.640 | in factory farms to produce meat or eggs
00:39:37.760 | or milk more cheaply, using them in some research
00:39:41.720 | that's by no means essential for our survival
00:39:45.360 | or well-being, and a whole lot, you know,
00:39:48.240 | some of the sports and things that we do to animals.
00:39:51.280 | So I think that's unjustified because I think
00:39:55.920 | the significance of pain and suffering
00:40:00.080 | does not depend on the species of the being
00:40:03.480 | who is in pain or suffering any more than it depends
00:40:05.920 | on the race or sex of the being who is in pain or suffering.
00:40:10.920 | And I think we ought to rethink our treatment of animals
00:40:14.720 | along the lines of saying, if the pain is just as great
00:40:18.280 | in an animal, then it's just as bad that it happens
00:40:22.040 | as if it were a human.
00:40:23.540 | - Maybe if I could ask, I apologize.
00:40:27.960 | Hopefully it's not a ridiculous question,
00:40:29.520 | but so as far as we know, we cannot communicate
00:40:33.320 | with animals through natural language,
00:40:35.240 | but we would be able to communicate with robots,
00:40:40.280 | so I'm returning to sort of a small parallel
00:40:43.040 | between perhaps animals and the future of AI.
00:40:45.400 | If we do create an AGI system or as we approach creating
00:40:51.320 | that AGI system, what kind of questions would you ask her
00:40:56.320 | to try to intuit whether there is consciousness
00:41:02.000 | or more importantly, whether there's capacity to suffer?
00:41:10.260 | - I might ask the AGI what she was feeling,
00:41:15.260 | well, does she have feelings?
00:41:19.820 | And if she says yes, to describe those feelings,
00:41:22.660 | to describe what they were like,
00:41:24.580 | to see what the phenomenal account
00:41:26.380 | of consciousness is like.
00:41:29.080 | That's one question.
00:41:32.060 | I might also try to find out if the AGI has a sense
00:41:39.300 | of itself, so for example, the idea,
00:41:43.220 | would you, we often ask people,
00:41:46.380 | so suppose you're in a car accident
00:41:48.660 | and your brain were transplanted into someone else's body,
00:41:51.900 | do you think you would survive
00:41:53.260 | or would it be the person whose body was still surviving,
00:41:56.180 | your body having been destroyed?
00:41:57.740 | And most people say, I think I would,
00:42:00.340 | if my brain was transplanted along with my memories
00:42:02.500 | and so on, I would survive.
00:42:04.100 | So we could ask AGI those kinds of questions
00:42:07.940 | if they were transferred to a different piece of hardware,
00:42:11.700 | would they survive, what would survive?
00:42:14.380 | Get at that sort of concept.
00:42:15.340 | - Sort of on that line, another perhaps absurd question,
00:42:19.380 | but do you think having a body
00:42:22.660 | is necessary for consciousness?
00:42:24.820 | So do you think digital beings can suffer?
00:42:29.680 | - Presumably digital beings need to be running
00:42:35.300 | on some kind of hardware, right?
00:42:36.980 | - Yeah, that ultimately boils down to,
00:42:38.780 | but this is exactly what you just said,
00:42:40.460 | is moving the brain from one place to another.
00:42:42.420 | - So you could move it to a different kind of hardware,
00:42:44.780 | and they could say, look,
00:42:46.540 | your hardware is getting worn out,
00:42:49.300 | we're going to transfer you to a fresh piece of hardware,
00:42:52.100 | so we're gonna shut you down for a time,
00:42:55.100 | but don't worry, you'll be running very soon
00:42:58.180 | on a nice, fresh piece of hardware.
00:43:00.260 | And you could imagine this conscious AGI saying,
00:43:03.220 | that's fine, I don't mind having a little rest,
00:43:05.320 | just make sure you don't lose me, or something like that.
00:43:08.740 | - Yeah, I mean, that's an interesting thought,
00:43:10.340 | that even with us humans, the suffering is in the software.
00:43:14.900 | We right now don't know how to repair the hardware,
00:43:19.300 | but we're getting better at it. And there's the idea,
00:43:23.180 | I mean, that some people dream about, of one day
00:43:26.140 | being able to transfer certain aspects of the software
00:43:30.780 | to another piece of hardware.
00:43:32.980 | What do you think, just on that topic,
00:43:35.740 | there's been a lot of exciting innovation
00:43:39.200 | in brain-computer interfaces.
00:43:41.180 | I don't know if you're familiar with the companies
00:43:43.680 | like Neuralink with Elon Musk,
00:43:45.960 | communicating both ways from a computer,
00:43:48.200 | being able to send, activate neurons,
00:43:51.520 | and being able to read spikes from neurons.
00:43:54.840 | With the dream of being able to expand,
00:43:58.900 | sort of increase the bandwidth at which your brain
00:44:02.440 | can look up articles on Wikipedia, kind of thing.
00:44:05.240 | Sort of expand the knowledge capacity of the brain.
00:44:08.340 | Do you think that notion, is that interesting to you,
00:44:13.140 | as the expansion of the human mind?
00:44:15.520 | - Yes, that's very interesting.
00:44:17.300 | I'd love to be able to have that increased bandwidth.
00:44:19.960 | And I want better access to my memory, I have to say, too.
00:44:23.680 | As I get older, I talk to my wife about things
00:44:28.280 | that we did 20 years ago, or something.
00:44:30.280 | Her memory is often better about particular events.
00:44:32.680 | Where were we, who was at that event?
00:44:35.220 | What did he or she wear, even, she may know.
00:44:37.360 | And I have not the faintest idea about this.
00:44:39.040 | But perhaps it's somewhere in my memory.
00:44:40.880 | And if I had this extended memory,
00:44:42.560 | I could search that particular year and rerun those things.
00:44:46.580 | I think that would be great.
00:44:47.980 | - In some sense, we already have that
00:44:51.120 | by storing so much of our data online,
00:44:53.240 | like pictures of different events.
00:44:54.760 | - Yes, well, Gmail is fantastic for that,
00:44:56.520 | because people email me as if they know me well,
00:44:59.960 | and I haven't got a clue who they are,
00:45:01.440 | but then I search for their name.
00:45:02.680 | And I see they emailed me in 2007,
00:45:05.680 | and I know who they are now.
00:45:07.040 | - Yeah, so we're already,
00:45:08.440 | we're taking the first steps already.
00:45:11.080 | So on the flip side of AI, people like Stuart Russell
00:45:14.480 | and others focus on the control problem,
00:45:16.280 | value alignment in AI, which is the problem
00:45:19.800 | of making sure we build systems
00:45:21.400 | that align to our own values, our ethics.
00:45:25.520 | Do you think, sort of high level,
00:45:28.440 | how do we go about building systems,
00:45:31.160 | and do you think it's possible to build systems that align with our values,
00:45:34.640 | align with our human ethics, or living-being ethics?
00:45:39.360 | - Presumably it's possible to do that.
00:45:42.320 | I know that a lot of people think
00:45:46.080 | that there's a real danger that we won't,
00:45:47.960 | that we'll more or less accidentally lose control of AGI.
00:45:51.800 | - Do you have that fear yourself, personally?
00:45:56.840 | - I'm not quite sure what to think.
00:45:58.560 | I talk to philosophers like Nick Bostrom and Toby Ord,
00:46:01.840 | and they think that this is a real problem
00:46:03.960 | we need to worry about.
00:46:07.200 | Then I talk to people who work for Microsoft
00:46:11.200 | or DeepMind or somebody, and they say,
00:46:13.680 | "No, we're not really that close to producing AGI,
00:46:18.280 | super intelligence."
00:46:19.600 | - So if you look at Nick Bostrom's arguments, sort of,
00:46:23.760 | it's very hard to defend.
00:46:24.960 | Of course, I engineer AI systems myself,
00:46:28.080 | so I'm more with the DeepMind folks
00:46:29.960 | where it seems that we're really far away.
00:46:32.400 | But then the counter argument is,
00:46:34.880 | is there any fundamental reason that we'll never achieve it?
00:46:38.320 | And if not, then eventually there'll be
00:46:42.200 | a dire existential risk, so we should be concerned about it.
00:46:46.640 | And do you find that argument at all appealing
00:46:50.720 | in this domain or any domain,
00:46:52.200 | that eventually this will be a problem,
00:46:53.960 | so we should be worried about it?
00:46:55.600 | - Yes, I think it's a problem.
00:46:58.760 | I think that's a valid point.
00:47:02.360 | Of course, when you say eventually,
00:47:06.160 | that raises the question, how far off is that?
00:47:11.480 | And is there something that we can do about it now?
00:47:13.840 | Because if we're talking about,
00:47:15.480 | this is gonna be 100 years in the future,
00:47:17.760 | and you consider how rapidly our knowledge
00:47:20.120 | of artificial intelligence has grown
00:47:22.120 | in the last 10 or 20 years,
00:47:24.040 | it seems unlikely that there's anything much we could do now
00:47:28.440 | that would influence whether this is going to happen
00:47:31.160 | 100 years in the future.
00:47:33.440 | People 80 years in the future
00:47:35.120 | would be in a much better position to say,
00:47:37.320 | this is what we need to do to prevent this happening
00:47:39.760 | than we are now.
00:47:41.520 | So to some extent, I find that reassuring,
00:47:44.560 | but I'm all in favor of some people doing research
00:47:48.640 | into this to see if indeed it is that far off,
00:47:51.480 | or if we are in a position to do something about it sooner.
00:47:55.440 | I'm very much of the view that extinction
00:47:58.760 | is a terrible thing, and therefore,
00:48:02.760 | even if the risk of extinction is very small,
00:48:05.960 | if we can reduce that risk,
00:48:09.040 | that's something that we ought to do.
00:48:11.240 | My disagreement with some of these people
00:48:12.760 | who talk about long-term risks, extinction risks,
00:48:16.360 | is only about how much priority that should have
00:48:18.840 | as compared to present questions.
00:48:20.520 | - It's just that, if you look at the math of it
00:48:22.680 | from a utilitarian perspective,
00:48:25.040 | if it's an existential risk, so everybody dies,
00:48:28.920 | it feels like an infinity in the math equation,
00:48:33.160 | and that makes the math with the priorities difficult to do,
00:48:38.160 | given that we don't know the timescale,
00:48:42.720 | and you can legitimately argue
00:48:43.960 | that there's a non-zero probability that it'll happen tomorrow.
00:48:48.200 | So how do you deal with these kinds of existential risks,
00:48:52.080 | like from nuclear war, from nuclear weapons,
00:48:55.720 | from biological weapons, from,
00:48:58.640 | I'm not sure if global warming falls into that category,
00:49:01.960 | because global warming is a lot more gradual.
00:49:04.800 | - And people say it's not an existential risk
00:49:06.840 | 'cause there'll always be possibilities
00:49:08.280 | of some humans existing, farming Antarctica,
00:49:11.200 | or northern Siberia, or something of that sort, yeah.
00:49:14.240 | - But you don't find the sort of,
00:49:16.080 | the complete existential risks, a fundamental,
00:49:19.640 | like an overriding part of the equations of ethics?
00:49:24.720 | - No, certainly if you treat it as an infinity,
00:49:28.960 | then it plays havoc with any calculations,
00:49:32.000 | but arguably we shouldn't,
00:49:34.440 | I mean, one of the ethical assumptions that goes into this
00:49:37.340 | is that the loss of future lives,
00:49:40.640 | that is of merely possible lives,
00:49:42.800 | of beings who may never exist at all,
00:49:45.880 | is in some way comparable to the sufferings or deaths
00:49:50.880 | of people who do exist at some point.
00:49:53.660 | And that's not clear to me.
00:49:57.360 | I think there's a case for saying that,
00:49:59.280 | but I also think there's a case for taking the other view.
00:50:01.780 | So that has some impact on it.
00:50:04.520 | Of course, you might say, ah, yes,
00:50:05.880 | but still if there's some uncertainty about this
00:50:08.880 | and the costs of extinction are infinite,
00:50:12.520 | then still it's gonna overwhelm everything else.
00:50:15.320 | But I suppose I'm not convinced of that.
00:50:20.840 | I'm not convinced that it's really infinite here.
00:50:23.400 | And even Nick Bostrom in his discussion of this
00:50:27.200 | doesn't claim that there'll be an infinite number
00:50:29.520 | of lives lived.
00:50:30.360 | He, what is it, 10 to the 56th or something?
00:50:33.320 | It's a vast number that I think he calculates.
00:50:36.020 | This is assuming we can upload consciousness
00:50:40.720 | into digital forms,
00:50:43.560 | which would be much more energy efficient,
00:50:45.280 | and he calculates it from the amount of energy in the universe,
00:50:47.640 | or something like that.
00:50:48.680 | So the numbers are vast, but not infinite,
00:50:50.480 | which gives you some prospect maybe
00:50:52.520 | of resisting some of the argument.
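To see why the finite-versus-infinite distinction matters for the calculation, here is a minimal sketch in Python with purely invented numbers: a vast-but-finite future can still be weighed against present benefits, while an infinite one swamps everything no matter how small the probability.

```python
# Toy expected-value comparison with made-up numbers, illustrating why treating
# extinction as an infinite loss "plays havoc" with the calculation, while a
# vast-but-finite estimate (Bostrom-style) can at least be weighed in principle.

def expected_value(probability, value):
    return probability * value

present_aid = expected_value(0.5, 2_000)              # a coin-flip chance of a modest benefit
finite_future = expected_value(1e-9, 1e54)            # tiny chance of securing a vast future
infinite_future = expected_value(1e-9, float("inf"))  # same tiny chance, "infinite" stakes

print(present_aid)      # 1000.0
print(finite_future)    # roughly 1e45 -- enormous, but comparable in principle
print(infinite_future)  # inf -- dominates no matter how small the probability
```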
00:50:55.360 | - The beautiful thing with Nick's arguments
00:50:57.360 | is he quickly jumps from the individual scale
00:50:59.800 | to the universal scale,
00:51:01.080 | which is just awe-inspiring to think of
00:51:04.480 | when you think about the entirety
00:51:06.200 | of the span of time of the universe.
00:51:08.880 | It's both interesting from a computer science perspective,
00:51:11.360 | AI perspective, and from an ethical perspective,
00:51:13.720 | the idea of utilitarianism.
00:51:15.960 | Could you say what is utilitarianism?
00:51:18.580 | - Utilitarianism is the ethical view
00:51:22.000 | that the right thing to do is the act
00:51:25.400 | that has the greatest expected utility,
00:51:28.680 | where what that means is it's the act
00:51:32.280 | that will produce the best consequences,
00:51:34.840 | discounted by the odds that you won't be able
00:51:37.640 | to produce those consequences, that something will go wrong.
00:51:40.360 | But in a simple case, let's assume we have certainty
00:51:43.840 | about what the consequences of our actions will be,
00:51:46.100 | then the right action is the action
00:51:47.540 | that will produce the best consequences.
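As a minimal illustration of that definition (the acts, probabilities, and utility numbers below are invented, not from the conversation), choosing the act with the greatest expected utility looks like this:

```python
# Toy expected-utility choice: each act leads to possible outcomes with
# probabilities and utilities (net happiness minus suffering, say).
# All numbers here are invented purely for illustration.

acts = {
    "act_a": [(0.8, 10), (0.2, -5)],   # (probability, utility) pairs
    "act_b": [(0.5, 30), (0.5, -20)],
}

def expected_utility(outcomes):
    # Discount each outcome's utility by the odds of it actually occurring.
    return sum(p * u for p, u in outcomes)

print({act: expected_utility(o) for act, o in acts.items()})   # {'act_a': 7.0, 'act_b': 5.0}
print(max(acts, key=lambda act: expected_utility(acts[act])))  # act_a
```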
00:51:50.480 | - Is that always, and by the way,
00:51:52.040 | there's a bunch of nuanced stuff
00:51:53.360 | that you discuss with Sam Harris on his podcast
00:51:55.960 | that people should go listen to.
00:51:57.920 | It's great.
00:51:58.760 | That's like two hours of moral philosophy discussion.
00:52:02.920 | But is that an easy calculation?
00:52:05.520 | - No, it's a difficult calculation.
00:52:07.360 | And actually there's one thing that I need to add,
00:52:10.000 | and that is utilitarians,
00:52:12.320 | certainly the classical utilitarians,
00:52:15.320 | think that by best consequences,
00:52:16.760 | we're talking about happiness
00:52:18.840 | and the absence of pain and suffering.
00:52:21.020 | There are other consequentialists
00:52:22.920 | who are not really utilitarians who say
00:52:25.920 | there are different things that could be good consequences.
00:52:29.740 | Justice, freedom, human dignity, knowledge,
00:52:34.120 | they all count as good consequences too.
00:52:35.800 | And that makes the calculations even more difficult
00:52:38.040 | 'cause then you need to know how to balance these things off.
00:52:40.760 | If you are just talking about wellbeing,
00:52:44.520 | using that term to express happiness
00:52:46.480 | and the absence of suffering,
00:52:47.920 | I think that the calculation becomes more manageable
00:52:53.960 | in a philosophical sense.
00:52:56.320 | It's still in practice, we don't know how to do it.
00:52:59.200 | We don't know how to measure quantities
00:53:00.960 | of happiness and misery.
00:53:02.680 | We don't know how to calculate the probabilities
00:53:04.880 | that different actions will produce this or that.
00:53:07.740 | So at best, we can use it as a rough guide
00:53:13.040 | to different actions.
00:53:14.500 | And one way to do that is to focus on the short-term consequences
00:53:19.500 | because we just can't really predict
00:53:22.760 | all of the longer-term ramifications.
00:53:25.320 | - So what about
00:53:27.660 | the extreme suffering of very small groups?
00:53:32.760 | Sort of utilitarianism is focused
00:53:34.900 | on the overall aggregate, right?
00:53:37.560 | Would you say you yourself are a utilitarian?
00:53:41.040 | - Yes, I'm utilitarian.
00:53:43.160 | - Sort of, what do you make of the difficult, ethical,
00:53:48.160 | maybe poetic suffering of very few individuals?
00:53:54.960 | - I think it's possible that that gets overridden
00:53:57.020 | by benefits to very large numbers of individuals.
00:54:00.060 | I think that can be the right answer.
00:54:02.840 | But before we conclude that it is the right answer,
00:54:05.400 | we have to know how severe the suffering is
00:54:08.920 | and how that compares with the benefits.
00:54:12.300 | So I tend to think that extreme suffering
00:54:16.380 | is worse than, or is further, if you like,
00:54:21.540 | below the neutral level than extreme happiness
00:54:25.540 | or bliss is above it.
00:54:27.300 | So when I think about the worst experiences possible
00:54:30.720 | and the best experiences possible,
00:54:33.160 | I don't think of them as equidistant from neutral.
00:54:36.200 | So like it's a scale that goes from minus 100
00:54:38.480 | through zero as a neutral level to plus 100.
00:54:41.840 | Because I know that I would not exchange
00:54:46.360 | an hour of my most pleasurable experiences
00:54:49.600 | for an hour of my most painful experiences.
00:54:52.380 | I wouldn't accept an hour
00:54:54.440 | of my most painful experiences even in exchange for two hours
00:54:57.380 | or 10 hours of my most pleasurable experiences.
00:55:01.760 | Did I say that correctly?
00:55:02.600 | - Yeah, yeah, yeah, yeah.
00:55:03.720 | Maybe 20 hours then.
00:55:04.920 | It's not 21.
00:55:05.880 | What's the exchange rate?
00:55:06.720 | - So that's the question.
00:55:07.920 | What is the exchange rate?
00:55:08.760 | But I think it can be quite high.
00:55:10.960 | So that's why you shouldn't just assume
00:55:13.400 | that it's okay to make one person suffer extremely
00:55:18.400 | in order to make two people much better off.
00:55:21.520 | It might be a much larger number.
00:55:23.540 | But at some point, I do think you should aggregate
00:55:27.560 | and the result will be,
00:55:30.560 | even though it violates our intuitions of justice
00:55:33.800 | and fairness, whatever it might be,
00:55:35.560 | of giving priority to those who are worse off,
00:55:39.560 | at some point, I still think
00:55:41.680 | that will be the right thing to do.
00:55:43.040 | - Yeah, it's some complicated nonlinear function.
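One way to picture such a nonlinear function, purely as a sketch (the weighting factor below is an arbitrary assumption, not a figure Singer gives): weight suffering more heavily than equal-intensity happiness before aggregating, so that benefits to a large enough number of people can eventually outweigh one person's extreme suffering, but only at a high exchange rate.

```python
# Toy aggregation in which suffering counts more than equal-intensity bliss.
# Experiences are rated on the -100..+100 scale from the conversation; the
# weight below is invented purely to illustrate a steep exchange rate.

SUFFERING_WEIGHT = 20  # hypothetical: one unit of pain outweighs 20 units of pleasure

def weighted_wellbeing(experiences):
    return sum(x if x >= 0 else SUFFERING_WEIGHT * x for x in experiences)

# One person at -100 (extreme suffering) versus many people gaining +10 each:
print(weighted_wellbeing([-100] + [10] * 100))  # -1000.0: not yet worth it
print(weighted_wellbeing([-100] + [10] * 500))  # 3000.0: at some point aggregation wins
```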
00:55:45.480 | Can I ask a sort of out-there question?
00:55:49.000 | The more we put our data out there,
00:55:51.080 | the more we're able to measure a bunch of factors
00:55:53.200 | of each of our individual human lives.
00:55:55.680 | And I could foresee the ability to estimate well-being
00:55:59.940 | according to whatever we collectively agree
00:56:03.940 | is a good objective function
00:56:05.480 | from a utilitarian perspective.
00:56:07.680 | Do you think it'll be possible and is a good idea
00:56:12.400 | to push that kind of analysis to make then public decisions,
00:56:17.400 | perhaps with the help of AI?
00:56:19.920 | That here's a tax rate,
00:56:23.560 | here's a tax rate at which well-being will be optimized.
00:56:28.280 | - Yeah, that would be great if we really knew that,
00:56:31.040 | if we really could calculate that.
00:56:32.360 | - No, but do you think it's possible to converge
00:56:34.120 | towards an agreement amongst humans,
00:56:36.640 | towards an objective function
00:56:39.720 | or is it just a hopeless pursuit?
00:56:42.020 | - I don't think it's hopeless.
00:56:43.080 | I think it would be difficult to converge
00:56:46.320 | towards agreement, at least at present,
00:56:47.880 | because some people would say,
00:56:49.920 | I've got different views about justice
00:56:52.040 | and I think you ought to give priority
00:56:54.180 | to those who are worse off,
00:56:55.860 | even though I acknowledge that the gains
00:56:58.720 | that the worse off are making are less than the gains
00:57:01.460 | that those who are sort of medium badly off could be making.
00:57:05.720 | So we still have all of these intuitions that we argue about.
00:57:10.240 | So I don't think we would get agreement,
00:57:11.700 | but the fact that we wouldn't get agreement
00:57:14.280 | doesn't show that there isn't a right answer there.
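As a purely hypothetical sketch of what such an analysis could look like, here is a toy grid search in Python for a tax rate against an invented well-being model; every functional form and constant is an assumption made up for illustration, not something either speaker proposes.

```python
# Toy search for a tax rate that maximizes an invented aggregate well-being model:
# higher taxes fund transfers with diminishing-returns benefits for the worse-off,
# but reduce private income. Every constant and functional form here is made up.

import math

def aggregate_wellbeing(tax_rate):
    private_utility = math.log(1 + 50_000 * (1 - tax_rate))   # hypothetical average earner
    transfer_utility = 2 * math.log(1 + 50_000 * tax_rate)    # hypothetical gains to the worse-off
    return private_utility + transfer_utility

rates = [r / 100 for r in range(100)]
best_rate = max(rates, key=aggregate_wellbeing)
print(best_rate)  # around two-thirds in this particular toy model
```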
00:57:17.840 | - Do you think, who gets to say what is right and wrong?
00:57:21.360 | Do you think there's place for ethics oversight
00:57:23.640 | from the government?
00:57:26.400 | So I'm thinking in the case of AI,
00:57:29.360 | overseeing what kind of decisions AI can make or not,
00:57:33.940 | but also if you look at animal rights
00:57:36.720 | or rather not rights or perhaps rights,
00:57:39.580 | but the ideas you've explored in animal liberation,
00:57:43.040 | who gets to, so you eloquently, beautifully
00:57:46.140 | write in your book that this, you know,
00:57:49.120 | we shouldn't do this, but are there some harder rules
00:57:52.720 | that should be imposed?
00:57:53.640 | Or is this a collective thing we converge towards as a society,
00:57:56.720 | and thereby make better and better ethical decisions?
00:57:59.720 | - Politically, I'm still a democrat
00:58:04.360 | despite looking at the flaws in democracy
00:58:07.920 | and the way it doesn't work always very well.
00:58:10.200 | So I don't see a better option than allowing the public
00:58:15.400 | to vote for governments in accordance with their policies.
00:58:20.000 | And I hope that they will vote for policies
00:58:23.800 | that reduce the suffering of animals
00:58:27.800 | and reduce the suffering of distant humans,
00:58:30.600 | whether geographically distant or distant
00:58:32.640 | because they're future humans.
00:58:35.160 | But I recognise that democracy isn't really well set up
00:58:37.720 | to do that.
00:58:38.560 | And in a sense, you could imagine a wise
00:58:44.320 | and benevolent, you know, omnibenevolent leader
00:58:48.720 | who would do that better than democracies could.
00:58:51.840 | But in the world in which we live,
00:58:54.680 | it's difficult to imagine that this leader
00:58:57.400 | isn't gonna be corrupted by a variety of influences.
00:59:01.320 | You know, we've had so many examples of people
00:59:05.440 | who've taken power with good intentions
00:59:08.520 | and then have ended up being corrupt
00:59:10.280 | and favouring themselves.
00:59:12.800 | So I don't know, you know, that's why, as I say,
00:59:16.560 | I don't know that we have a better system than democracy
00:59:18.600 | to make these decisions.
00:59:20.040 | - Well, so you also discuss effective altruism,
00:59:23.440 | which is a mechanism for going around government,
00:59:27.240 | for putting the power in the hands of the people
00:59:29.560 | to donate money towards causes to help, you know,
00:59:32.440 | remove the middleman and give it directly
00:59:37.960 | to the causes that they care about.
00:59:41.600 | Maybe this is a good time to ask:
00:59:45.280 | 10 years ago you wrote "The Life You Can Save,"
00:59:48.240 | which is now, I think, available for free online.
00:59:51.400 | - That's right, you can download either the ebook
00:59:53.880 | or the audio book free from thelifeyoucansave.org.
00:59:57.520 | - And what are the key ideas that you present in the book?
01:00:03.480 | - The main thing I wanna do in the book
01:00:05.200 | is to make people realise that it's not difficult
01:00:10.360 | to help people in extreme poverty,
01:00:13.720 | that there are highly effective organisations now
01:00:16.800 | that are doing this, that they've been independently assessed
01:00:20.320 | and verified by research teams that are expert in this area,
01:00:25.320 | and that it's a fulfilling thing to do
01:00:28.200 | for at least part of your life.
01:00:30.680 | You know, we can't all be saints,
01:00:31.840 | but at least one of your goals should be
01:00:34.360 | to really make a positive contribution to the world
01:00:36.920 | and to do something to help people
01:00:38.280 | who through no fault of their own
01:00:40.960 | are in very dire circumstances
01:00:43.560 | and living a life that is barely, or perhaps not at all,
01:00:48.560 | a decent life for a human being to live.
01:00:51.960 | - So you describe a minimum ethical standard of giving.
01:00:56.960 | What advice would you give to people
01:01:01.400 | that want to be effectively altruistic in their life,
01:01:06.560 | like live an effective altruism life?
01:01:09.400 | - There are many different kinds of ways of living
01:01:12.120 | as an effective altruist.
01:01:13.600 | And if you're at the point where you're thinking
01:01:16.720 | about your long-term career,
01:01:18.760 | I'd recommend you take a look at a website
01:01:21.160 | called 80,000 Hours, 80,000Hours.org,
01:01:24.720 | which looks at ethical career choices.
01:01:27.240 | And they range from, for example,
01:01:29.800 | going to work on Wall Street
01:01:31.120 | so that you can earn a huge amount of money
01:01:33.400 | and then donate most of it to effective charities,
01:01:37.000 | to going to work for a really good nonprofit organization
01:01:40.880 | so that you can directly use your skills and ability
01:01:44.080 | and hard work to further a good cause,
01:01:48.480 | or perhaps going into politics,
01:01:50.840 | maybe small chances but big payoffs in politics.
01:01:55.200 | Go to work in the public service
01:01:56.560 | where if you're talented, you might rise to a higher level
01:01:59.240 | where you can influence decisions.
01:02:01.760 | Do research in an area where the payoffs could be great.
01:02:05.200 | There are a lot of different opportunities,
01:02:07.240 | but too few people are even thinking about those questions.
01:02:11.400 | They're just going along in some sort of preordained rut
01:02:14.760 | to particular careers.
01:02:15.840 | Maybe they think they'll earn a lot of money
01:02:17.440 | and have a comfortable life,
01:02:19.200 | but they may not find that as fulfilling
01:02:20.960 | as actually knowing that they're making
01:02:23.520 | a positive difference to the world.
01:02:25.120 | - What about shorter-term giving?
01:02:27.080 | That was the long-term side, the 80,000 hours.
01:02:30.160 | Shorter-term giving is,
01:02:33.120 | well, actually, part of that:
01:02:34.360 | you go to work on Wall Street
01:02:37.120 | and you give a percentage of your income,
01:02:40.080 | which you talk about in "The Life You Can Save."
01:02:42.440 | I was looking through it, and it's quite compelling.
01:02:45.840 | I mean, I'm just a dumb engineer,
01:02:50.480 | so I like simple rules.
01:02:52.440 | There's a nice percentage.
01:02:53.760 | - Okay, so I do actually set out suggested levels of giving
01:02:57.520 | because people often ask me about this.
01:03:00.200 | A popular answer is give 10%,
01:03:02.840 | the traditional tithe that's recommended in Christianity
01:03:06.400 | and also Judaism.
01:03:08.440 | But why should it be the same percentage
01:03:11.760 | irrespective of your income?
01:03:13.600 | Tax scales reflect the idea that the more income you have,
01:03:16.240 | the more you can pay tax.
01:03:18.000 | And I think the same is true in what you can give.
01:03:20.360 | So I do set out a progressive donor scale
01:03:25.360 | which starts at 1% for people on modest incomes
01:03:28.880 | and rises to 33 1/3% for people who are really earning a lot.
01:03:33.880 | And my idea is that I don't think any of these amounts
01:03:38.560 | really impose real hardship on people
01:03:42.080 | because they are progressive and geared to income.
01:03:44.760 | So I think anybody can do this
01:03:48.600 | and can know that they're doing something significant
01:03:51.920 | to play their part in reducing the huge gap
01:03:56.080 | between people in extreme poverty in the world
01:03:58.760 | and people living affluent lives.
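For anyone who, like the host, wants the simple rule in executable form, here is a sketch of a progressive giving calculation applied marginally, like tax brackets. Only the 1% starting rate and the 33 1/3% top rate come from the conversation; the income thresholds and intermediate rates below are hypothetical placeholders, not the actual scale Singer sets out in the book.

```python
# Sketch of a progressive giving suggestion, applied marginally like tax brackets.
# Only the 1% floor and 33 1/3% top rate are from the conversation; the income
# thresholds and intermediate rates are hypothetical placeholders.

BRACKETS = [            # (income above this threshold, suggested rate on that slice)
    (0,         0.01),
    (100_000,   0.05),
    (250_000,   0.10),
    (1_000_000, 1 / 3),
]

def suggested_donation(income):
    total = 0.0
    for i, (threshold, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > threshold:
            total += (min(income, upper) - threshold) * rate
    return total

print(suggested_donation(50_000))     # 500.0 -> 1% of a modest income
print(suggested_donation(2_000_000))  # a much higher effective rate for high earners
```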
01:04:01.200 | - And aside from it being an ethical life,
01:04:05.760 | it's one that you find more fulfilling
01:04:07.520 | because there's something about our human nature
01:04:13.720 | or at least some of our human natures,
01:04:13.720 | maybe most of our human nature
01:04:15.080 | that enjoys doing the ethical thing.
01:04:20.080 | - Yes, I make both those arguments,
01:04:23.120 | that it is an ethical requirement
01:04:25.440 | in the kind of world we live in today
01:04:27.200 | to help people in great need when we can easily do so,
01:04:30.440 | but also that it is a rewarding thing
01:04:32.960 | and there's good psychological research
01:04:35.280 | showing that people who give more
01:04:38.200 | tend to be more satisfied with their lives.
01:04:40.560 | And I think this has something to do
01:04:41.880 | with having a purpose that's larger than yourself.
01:04:45.000 | And therefore, never being, if you like,
01:04:49.560 | never being bored sitting around,
01:04:51.120 | oh, what will I do next?
01:04:52.720 | I've got nothing to do.
01:04:54.200 | In a world like this,
01:04:55.040 | there are many good things that you can do
01:04:57.320 | and enjoy doing them.
01:04:59.400 | Plus, you're working with other people
01:05:02.320 | in the effective altruism movement
01:05:03.880 | who are forming a community of other people
01:05:06.240 | with similar ideas and they tend to be interesting,
01:05:09.280 | thoughtful, and good people as well.
01:05:11.080 | And having friends of that sort
01:05:12.680 | is another big contribution to having a good life.
01:05:15.960 | - So we talked about big things that are beyond ourselves,
01:05:20.280 | but we're also just human and mortal.
01:05:24.560 | Do you ponder your own mortality?
01:05:27.360 | Is there insights about your philosophy,
01:05:29.600 | the ethics that you gain from pondering your own mortality?
01:05:34.600 | - Clearly, as you get into your 70s,
01:05:37.880 | you can't help thinking about your own mortality.
01:05:40.320 | But I don't know that I have great insights
01:05:44.720 | into that from my philosophy.
01:05:47.080 | I don't think there's anything after the death of my body,
01:05:51.280 | assuming that we won't be able to upload my mind
01:05:53.440 | into anything at the time when I die.
01:05:55.320 | So I don't think there's any afterlife
01:05:58.400 | or anything to look forward to in that sense.
01:06:00.880 | - Do you fear death?
01:06:01.840 | So if you look at Ernest Becker
01:06:04.080 | and his description of the motivating aspects
01:06:07.960 | of our ability to be cognizant of our mortality,
01:06:14.240 | do you have any of those elements
01:06:17.440 | driving your motivation in life?
01:06:21.000 | - I suppose the fact that you have only a limited time
01:06:23.480 | to achieve the things that you wanna achieve
01:06:25.840 | gives you some sort of motivation
01:06:27.320 | to get going in achieving them.
01:06:29.720 | And if we thought we were immortal,
01:06:31.040 | we might say, "Ah, I can put that off
01:06:33.320 | "for another decade or two."
01:06:35.080 | So there's that about it.
01:06:37.760 | But otherwise, no, I'd rather have more time to do more.
01:06:42.040 | I'd also like to be able to see how things go
01:06:45.840 | that I'm interested in.
01:06:47.520 | Is climate change gonna turn out to be as dire
01:06:49.920 | as a lot of scientists say that it is going to be?
01:06:53.520 | Will we somehow scrape through
01:06:55.480 | with less damage than we thought?
01:06:57.880 | I'd really like to know the answers to those questions,
01:06:59.840 | but I guess I'm not going to.
01:07:02.200 | - Well, you said there's nothing afterwards.
01:07:05.800 | So let me ask the even more absurd question.
01:07:08.080 | What do you think is the meaning of it all?
01:07:10.160 | - I think the meaning of life is the meaning we give to it.
01:07:14.120 | I don't think that we were brought into the universe
01:07:18.120 | for any kind of larger purpose,
01:07:21.880 | but given that we exist,
01:07:24.080 | I think we can recognize that some things
01:07:26.480 | are objectively bad.
01:07:29.480 | Extreme suffering is an example,
01:07:32.640 | and other things are objectively good,
01:07:35.080 | like having a rich, fulfilling, enjoyable, pleasurable life.
01:07:39.160 | And we can try to do our part in reducing the bad things
01:07:44.360 | and increasing the good things.
01:07:47.200 | - So one way, the meaning,
01:07:49.520 | is to do a little bit more of the good things,
01:07:51.520 | objectively good things,
01:07:52.640 | and a little bit less of the bad things.
01:07:55.440 | - Yeah, so do as much of the good things as you can,
01:07:58.960 | and as little of the bad things.
01:08:00.600 | - Peter, beautifully put,
01:08:01.920 | I don't think there's a better place to end it.
01:08:03.440 | Thank you so much for talking today.
01:08:04.920 | - Thanks very much,
01:08:05.760 | it's been really interesting talking to you.
01:08:07.720 | - Thanks for listening to this conversation
01:08:10.240 | with Peter Singer,
01:08:11.360 | and thank you to our sponsors, Cash App and Masterclass.
01:08:15.960 | Please consider supporting the podcast
01:08:17.680 | by downloading Cash App and using the code LEXPODCAST,
01:08:21.640 | and signing up at masterclass.com/lex.
01:08:26.160 | Click the links, buy all the stuff,
01:08:28.920 | it's the best way to support this podcast
01:08:31.000 | and the journey I'm on, my research and startup.
01:08:35.260 | If you enjoy this thing, subscribe on YouTube,
01:08:38.080 | review it with five stars on Apple Podcast,
01:08:40.320 | support on Patreon, or connect with me on Twitter
01:08:43.080 | at Lex Fridman, spelled without the E, just F-R-I-D-M-A-N.
01:08:48.080 | And now, let me leave you with some words
01:08:50.880 | from Peter Singer.
01:08:52.800 | What one generation finds ridiculous, the next accepts.
01:08:56.640 | And the third shudders when it looks back
01:09:00.000 | at what the first did.
01:09:01.120 | Thank you for listening, and hope to see you next time.
01:09:05.300 | (upbeat music)