Peter Singer: Suffering in Humans, Animals, and AI | Lex Fridman Podcast #107
Chapters
0:00 Introduction
5:25 World War II
9:53 Suffering
16:06 Is everyone capable of evil?
21:52 Can robots suffer?
37:22 Animal liberation
40:31 Question for AI about suffering
43:32 Neuralink
45:11 Control problem of AI
51:08 Utilitarianism
59:43 Helping people in poverty
65:15 Mortality
00:00:00.000 |
The following is a conversation with Peter Singer, 00:00:03.440 |
professor of bioethics at Princeton University, 00:00:06.200 |
best known for his 1975 book, "Animal Liberation," 00:00:10.280 |
that makes an ethical case against eating meat. 00:00:14.240 |
He has written brilliantly from an ethical perspective 00:00:17.680 |
on extreme poverty, euthanasia, human genetic selection, 00:00:23.720 |
and generally happiness, including in his books, 00:00:28.520 |
"Ethics in the Real World" and "The Life You Can Save." 00:00:37.800 |
one of the most influential philosophers in the world. 00:00:48.880 |
by downloading Cash App and using code LEXPODCAST 00:00:57.880 |
It really is the best way to support the podcast 00:01:07.520 |
which means that most of my diet is made up of meat. 00:01:10.400 |
I do not hunt the food I eat, though one day I hope to. 00:01:22.440 |
than participating in the supply chain of factory farming. 00:01:28.440 |
this part of my life has always had a cloud over it. 00:01:37.960 |
but for some reason, whatever the makeup of my body, 00:01:41.280 |
whatever way I practice the diet I have, 00:02:07.840 |
and may be upset by the words I say or Peter says, 00:02:18.280 |
I may and probably will talk with people you disagree with. 00:02:31.480 |
in a patient, intelligent, and nuanced discourse. 00:02:34.840 |
If your instinct and desire is to be a voice of mockery 00:02:46.880 |
that thinks deeply and speaks with empathy and compassion. 00:02:51.040 |
That is what I hope to continue being a part of 00:03:33.720 |
in the context of the history of money is fascinating. 00:04:26.080 |
to get a discount and to support this podcast. 00:04:35.160 |
you get an all-access pass to watch courses from, 00:04:46.200 |
Will Wright, creator of SimCity and The Sims, on game design. 00:04:50.420 |
I promise I'll start streaming games at some point soon. 00:05:04.240 |
and the experience of being launched into space alone 00:05:08.720 |
By the way, you can watch it on basically any device. 00:05:16.600 |
to get a discount and to support this podcast. 00:05:20.260 |
And now here's my conversation with Peter Singer. 00:05:24.060 |
When did you first become conscious of the fact 00:05:35.740 |
pretty much as soon as I was able to understand 00:05:45.660 |
And obviously I knew why I only had one grandparent 00:05:50.660 |
and she herself had been in the camps and survived. 00:05:54.540 |
So I think I knew a lot about that pretty early. 00:05:58.100 |
- My entire family comes from the Soviet Union. 00:06:07.240 |
in the culture and the suffering that the war brought. 00:06:10.340 |
The millions of people who died is in the music, 00:06:16.900 |
What do you think was the impact of the war broadly 00:06:37.940 |
And at least as far as the West was concerned, 00:06:43.200 |
in which there wasn't the kind of overt racism 00:06:48.020 |
and antisemitism that had existed for my parents in Europe. 00:06:59.420 |
There was also though a fear of a further outbreak of war, 00:07:08.960 |
because of the way the Second World War had ended. 00:07:11.740 |
So there was this overshadowing of my childhood 00:07:16.220 |
about the possibility that I would not live to grow up 00:07:19.920 |
and be an adult because of a catastrophic nuclear war. 00:07:28.140 |
in which the city that I was living, Melbourne, 00:07:30.300 |
was the last place on earth to have living human beings 00:07:39.100 |
So that certainly gave us a bit of that sense. 00:07:51.620 |
All of that has its roots in the Second World War. 00:07:55.020 |
- You know, there is much beauty that comes from war. 00:07:58.180 |
Sort of, I had a conversation with Eric Weinstein. 00:08:16.860 |
sort of the ripple effects on it, ethically speaking, 00:08:20.340 |
do you think there are positive aspects to war? 00:08:24.580 |
- I find it hard to see positive aspects in war. 00:08:38.300 |
People say, you know, "During wartime, we all pull together. 00:08:40.940 |
"We all work together against a common enemy." 00:08:47.380 |
And in general, it's good for countries to be united 00:08:51.100 |
But it also engenders a kind of a nationalism 00:09:13.000 |
that the closest that people feel to each other 00:09:25.040 |
That somehow brings people extremely closely together. 00:09:37.880 |
without the suffering and death that war entails. 00:09:43.800 |
you can already hear the romanticized Russian in me. 00:09:49.760 |
just a little bit in our literature and culture and so on. 00:09:54.920 |
And I apologize if it's a ridiculous question, 00:09:59.640 |
If you would try to define what suffering is, 00:10:13.140 |
And it's distinguished from other conscious states 00:10:34.480 |
And that's, I say, emphasized for its own sake, 00:10:43.140 |
And sometimes it does have those consequences. 00:10:47.120 |
And of course, sometimes we might undergo suffering. 00:10:50.780 |
We set ourselves a challenge to run a marathon 00:11:01.960 |
So I'm not saying that we never choose suffering, 00:11:04.520 |
but I am saying that other things being equal, 00:11:07.240 |
we would rather not be in that state of consciousness. 00:11:11.880 |
so if you have the new 10 year anniversary release 00:11:46.160 |
- In practice, I don't think we ever will eliminate suffering 00:11:50.120 |
so I think that little drop of poison, as you put it, 00:11:56.400 |
of an unpleasant color, perhaps something like that, 00:11:59.680 |
in an otherwise harmonious and beautiful composition, 00:12:17.760 |
or whether in terms of by eliminating the suffering, 00:12:23.820 |
And if that's so, then we might be prepared to say 00:12:30.600 |
in order to have the best possible experiences as well. 00:12:37.720 |
So when you talk about eradicating poverty in the world, 00:12:46.880 |
the more the bar of what defines poverty rises? 00:12:49.680 |
Or is there at the basic human ethical level, 00:12:53.400 |
a bar that's absolute, that once you get above it, 00:13:08.680 |
And I think this is true for poverty as well as suffering. 00:13:11.000 |
There's an objective level of suffering or of poverty 00:13:16.000 |
where we're talking about objective indicators, 00:13:22.500 |
you can't get enough food, you're constantly cold, 00:13:27.880 |
you can't get warm, you have some physical pains 00:13:37.000 |
But it may also be true that if you do get rid of that 00:13:39.840 |
and you get to the stage where all of those basic needs 00:13:42.820 |
have been met, there may still be then new forms 00:13:58.200 |
earning money to get enough to eat and shelter. 00:14:01.360 |
So now they're bored, they lack a sense of purpose. 00:14:06.300 |
And that then is a kind of a relative suffering 00:14:09.480 |
that is distinct from the objective forms of suffering. 00:14:14.320 |
- But in your focus on eradicating suffering, 00:14:19.960 |
the kind of interesting challenges and suffering 00:14:24.400 |
That's just not, in your ethical, philosophical brain, 00:14:31.240 |
- It would be of interest to me if we had eliminated 00:14:48.320 |
when we've eliminated those objective forms of suffering, 00:14:55.900 |
But that's not a practical need for me at the moment. 00:14:59.920 |
- Sorry to linger on it because you kind of said it, 00:15:07.600 |
So, do you see suffering as a creative force? 00:15:17.080 |
I think I'll be repeating what I said about the highs 00:15:24.080 |
So it may be that suffering makes us more creative 00:15:29.800 |
Maybe that brings some of those highs with it 00:15:32.880 |
that we would not have had if we'd had no suffering. 00:15:39.480 |
and I certainly have no basis for denying it. 00:15:45.660 |
then I would not want to eliminate suffering completely. 00:15:59.780 |
of where the world's population is, that's the focus. 00:16:21.460 |
I'm not prepared to say that everyone is capable of evil. 00:16:24.020 |
That maybe some people who even in the worst of circumstances 00:16:37.900 |
let's say what the Nazis did during the Holocaust, 00:16:46.580 |
I know that I would not have done those things 00:16:54.460 |
Even if let's say I had grown up under the Nazi regime 00:16:58.260 |
and had been indoctrinated with racist ideas, 00:17:02.460 |
had also had the idea that I must obey orders, 00:17:22.740 |
nevertheless I know I would not have killed those Jews 00:17:42.140 |
So I've read a lot about the war, World War II, 00:17:55.460 |
I would like to hope that I would have been one of the 10%, 00:17:59.060 |
but I don't really have any basis for claiming 00:18:02.060 |
that I would have been different from the majority. 00:18:08.460 |
It would be interesting if we could find a way 00:18:19.820 |
on how ordinary Germans got led to do terrible things, 00:18:24.820 |
and there are also studies of the resistance, 00:18:28.220 |
some heroic people in the White Rose group, for example, 00:18:43.220 |
of how many people would have been capable of doing that. 00:18:46.340 |
- Well, sort of the reason I think it's interesting 00:18:55.180 |
when there are things that you'd like to do that are good, 00:19:10.780 |
because I'm simply scared of putting my life, 00:19:23.460 |
in my current skillset and the capacity to do. 00:19:33.700 |
where I would have to experience derision or hatred 00:19:45.740 |
it's difficult to think in the current times, 00:19:50.040 |
it seems easier to put yourself back in history 00:19:53.380 |
where you can sort of objectively contemplate whether, 00:20:01.220 |
- True, but I think we do face those challenges today. 00:20:06.100 |
And I think we can still ask ourselves those questions. 00:20:10.000 |
So one stand that I took more than 40 years ago now 00:20:13.540 |
was to stop eating meat, become a vegetarian at a time 00:20:17.540 |
when you hardly met anybody who was a vegetarian, 00:20:29.000 |
And I know thinking about making that decision, 00:20:33.300 |
I was convinced that it was the right thing to do, 00:20:37.260 |
are all my friends gonna think that I'm a crank 00:20:42.180 |
So, I'm not saying there were any terrible sanctions, 00:20:56.300 |
And one or two friends were clearly uncomfortable 00:20:59.060 |
with that decision, but that was pretty minor 00:21:09.820 |
like global poverty and what we ought to be doing about that 00:21:29.120 |
well, I think it must've taken a lot of courage 00:21:43.000 |
she gets exceptionally huge amounts of support 00:21:48.280 |
- It's a very difficult environment for a teenager to operate in. 00:22:01.980 |
One of the essays asks, should robots have rights? 00:22:10.640 |
- If we ever develop robots capable of consciousness, 00:22:17.120 |
capable of having their own internal perspective 00:22:22.120 |
so that their lives can go well or badly for them, 00:22:56.640 |
But is it true that every being that is conscious 00:23:01.200 |
will suffer or has to be capable of suffering? 00:23:05.400 |
I suppose you could imagine a kind of consciousness, 00:23:08.220 |
especially if we can construct it artificially, 00:23:13.880 |
But just automatically cuts out the consciousness 00:23:20.440 |
as soon as something is gonna cause you suffering. 00:23:44.640 |
There is a conscious subject who is taking things in, 00:24:05.120 |
to where I'm going, Google gives me the directions 00:24:10.840 |
Google doesn't care, it's not like I'm offending Google 00:24:24.360 |
or at least that level of AI is not conscious. 00:24:35.900 |
if it's only mimicking it or if it's the real thing. 00:24:43.480 |
a perspective on the world from which things can go well 00:24:54.200 |
comes from just watching ourselves when we're in pain. 00:24:59.200 |
- Or when we're experiencing pleasure, it's not only-- 00:25:04.600 |
Yeah, so, and then you could actually push back on this, 00:25:08.800 |
but I would say that's how we kind of build an intuition 00:25:11.960 |
about animals is we can infer the similarities 00:25:19.560 |
that they're suffering or not based on certain things 00:25:24.320 |
So what if robots, you mentioned Google Maps, 00:25:29.320 |
and I've done this experiment, so I work in robotics, 00:25:37.640 |
and I play with different speech interaction, 00:25:42.160 |
And if the Roomba or the robot or Google Maps 00:25:45.880 |
shows any signs of pain, like screaming or moaning 00:25:50.360 |
or being displeased by something you've done, 00:25:54.240 |
that in my mind, I can't help but immediately upgrade it. 00:26:02.520 |
just having another entity that's now for the moment 00:26:13.880 |
I immediately realize that it's not obviously, 00:26:19.680 |
So sort of, I guess, what do you think about a world 00:26:24.680 |
where Google Maps and Roombas are pretending to be conscious 00:26:31.440 |
and we, descendants of apes, are not smart enough 00:26:35.400 |
to realize they're not, or whatever, or that is conscious, 00:26:44.120 |
The reason I'm asking that is that kind of capability 00:26:51.160 |
- Yes, that kind of capability may be closer, 00:27:04.480 |
that in those circumstances we should give them rights 00:27:17.920 |
if we get used to looking at a being suffering 00:27:20.920 |
and saying, "Yeah, we don't have to do anything about that. 00:27:25.040 |
Maybe we'll feel the same about animals, for instance. 00:27:28.320 |
And interestingly, among philosophers and thinkers 00:27:34.840 |
who denied that we have any direct duties to animals, 00:27:46.640 |
"Yes, but still it's better not to be cruel to them, 00:27:49.520 |
"not because of the suffering we're inflicting 00:28:00.520 |
"because we're more likely to be cruel to other humans, 00:28:07.760 |
- I don't accept that as the basis of the argument 00:28:14.000 |
is just that we're inflicting suffering on them, 00:28:18.040 |
But possibly, I might accept some sort of parallel 00:28:23.000 |
of that argument as a reason why you shouldn't be cruel 00:28:26.040 |
to these robots that mimic the symptoms of pain 00:28:30.520 |
if it's gonna be harder for us to distinguish. 00:28:33.560 |
- I would venture to say, I'd like to disagree with you, 00:28:42.280 |
I would like to say that if that Roomba is dedicated 00:28:46.920 |
to faking the consciousness and the suffering, 00:29:10.120 |
I'm quite surprised by the upgrade in consciousness 00:29:19.760 |
It's a totally open world, but I'd like to just, 00:29:23.600 |
sort of the difference between animals and other humans 00:29:27.700 |
is that in the robot case, we've added it in ourselves. 00:29:32.480 |
Therefore, we can say something about how real it is. 00:29:37.480 |
But I would like to say that the display of it 00:29:42.000 |
And I'm not a philosopher, I'm not making that argument, 00:29:45.600 |
but I'd at least like to add that as a possibility. 00:29:50.960 |
is all I'm trying to articulate poorly, I suppose. 00:30:00.820 |
which is rather like what you're talking about, 00:30:04.780 |
So behaviorism was employed both in psychology, 00:30:07.500 |
people like B.F. Skinner, a famous behaviorist, 00:30:20.300 |
But in philosophy, the view defended by people 00:30:23.860 |
like Gilbert Ryle, who was a professor of philosophy 00:30:26.040 |
at Oxford, wrote a book called "The Concept of Mind," 00:30:35.320 |
he said, well, the meaning of a term is its use, 00:30:53.640 |
And Norman Malcolm, who was another philosopher 00:30:58.400 |
in that school, from Cornell, had the view that, 00:31:04.620 |
After all, we can't see other people's dreams. 00:31:19.040 |
it's basically to wake up and recall something. 00:31:22.720 |
So you could apply this to what you're talking about 00:31:28.480 |
is to exhibit these symptoms of pain behavior, 00:31:38.520 |
that Ryle's kind of philosophical behaviorism 00:31:42.320 |
so I think they would say the same about your view. 00:31:52.760 |
the behaviorist movement, and I'm with that 100% 00:32:14.560 |
because it's hard, sort of philosophically, I agree, 00:32:18.760 |
but the only reason I philosophically agree in that case 00:32:26.800 |
I'm not sure I would be able to interpret that well. 00:32:31.880 |
that I was just curious what your thoughts are. 00:32:46.400 |
a fake display of suffering is not suffering. 00:33:10.720 |
and you wouldn't want to harden yourself against it, 00:33:19.200 |
so you said, "Once an artificial general intelligence system, 00:33:22.800 |
"a human-level intelligence system, become conscious." 00:33:30.760 |
that just say things that I told them to say, 00:33:33.780 |
but how do you know when a system like Alexa, 00:33:48.040 |
that there's a feeling there's another entity there 00:33:51.200 |
that's self-aware, that has a fear of death, a mortality, 00:33:57.880 |
that we kind of associate with other living creatures? 00:34:00.580 |
I guess I'm sort of trying to do the slippery slope 00:34:07.920 |
into something where it's sufficiently a black box 00:34:12.120 |
to where it's starting to feel like it's conscious. 00:34:20.240 |
with the idea of robot suffering, do you think? 00:34:27.640 |
that would go into this, really, to answer this question, 00:34:31.580 |
but I presume that somebody who does know more about this 00:34:39.160 |
we can explain the behaviors in a parsimonious way 00:34:50.080 |
Or alternatively, whether you're in a situation 00:34:52.400 |
where you say, "I don't know how this is happening. 00:34:56.200 |
"The program does generate a kind of artificial 00:35:04.100 |
"starts to do things itself and is autonomous 00:35:20.640 |
most of the community is really excited about now 00:35:22.700 |
is with learning methods, so machine learning. 00:35:31.440 |
which is why somebody like Noam Chomsky criticizes them. 00:35:37.080 |
without understanding the theory, the physics, 00:35:42.180 |
And so it's possible if those are the kinds of methods 00:35:45.340 |
that succeed, we won't be able to know exactly, 00:36:05.840 |
and emotion and fear, and then we won't be able to say, 00:36:14.520 |
in this artificial neural network is the fear coming from? 00:36:19.120 |
So in that case, that's a really interesting place 00:36:22.480 |
where we do now start to return to behaviorism and say, 00:36:39.420 |
then we ought to try to give it the benefit of the doubt, 00:37:01.460 |
I think we should give them the benefit of the doubt 00:37:03.220 |
where we can, which means, I think it would be wrong 00:37:07.200 |
to torture an insect, but this doesn't necessarily mean 00:37:11.380 |
it's wrong to slap a mosquito that's about to bite you 00:37:22.980 |
- If it's okay with you, if we can go back just briefly. 00:37:26.460 |
So 44 years ago, like you mentioned, 40 plus years ago, 00:37:31.180 |
the classic book that started, that launched, 00:37:44.380 |
- Certainly, the key idea that underlies that book 00:37:56.720 |
who was in Oxford when I was, and I saw a pamphlet 00:37:59.640 |
that he'd written about experiments on chimpanzees 00:38:04.060 |
But I think I contributed to making it philosophically 00:38:08.020 |
more precise and to getting it into a broader audience. 00:38:12.040 |
And the idea is that we have a bias or a prejudice 00:38:16.760 |
against taking seriously the interests of beings 00:38:31.560 |
and men have had a bias against taking seriously 00:38:37.280 |
So I think something analogous, not completely identical, 00:38:41.280 |
but something analogous, goes on and has gone on 00:38:53.880 |
We see animals as existing to serve our needs 00:38:58.280 |
in various ways, and you can find this very explicit 00:39:06.020 |
And either we don't need to take their interests 00:39:17.800 |
They count a little bit, but they don't count 00:39:21.040 |
My book argues that that attitude is responsible 00:39:25.720 |
for a lot of the things that we do to animals 00:39:37.760 |
or milk more cheaply, using them in some research 00:39:41.720 |
that's by no means essential for our survival 00:39:48.240 |
some of the sports and things that we do to animals. 00:39:51.280 |
So I think that's unjustified because I think 00:40:03.480 |
who is in pain or suffering any more than it depends 00:40:05.920 |
on the race or sex of the being who is in pain or suffering. 00:40:10.920 |
And I think we ought to rethink our treatment of animals 00:40:14.720 |
along the lines of saying, if the pain is just as great 00:40:18.280 |
in an animal, then it's just as bad that it happens 00:40:29.520 |
but so as far as we know, we cannot communicate 00:40:35.240 |
but we would be able to communicate with robots, 00:40:43.040 |
between perhaps animals and the future of AI. 00:40:45.400 |
If we do create an AGI system or as we approach creating 00:40:51.320 |
that AGI system, what kind of questions would you ask her 00:40:56.320 |
to try to intuit whether there is consciousness 00:41:02.000 |
or more importantly, whether there's capacity to suffer? 00:41:19.820 |
And if she says yes, to describe those feelings, 00:41:32.060 |
I might also try to find out if the AGI has a sense 00:41:48.660 |
and your brain were transplanted into someone else's body, 00:41:53.260 |
or would it be the person whose body was still surviving, 00:42:00.340 |
if my brain was transplanted along with my memories 00:42:07.940 |
if they were transferred to a different piece of hardware, 00:42:15.340 |
- Sort of on that line, another perhaps absurd question, 00:42:29.680 |
- Presumably digital beings need to be running 00:42:40.460 |
is moving the brain from one place to another. 00:42:42.420 |
- So you could move it to a different kind of hardware, 00:42:49.300 |
we're going to transfer you to a fresh piece of hardware, 00:43:00.260 |
And you could imagine this conscious AGI saying, 00:43:03.220 |
that's fine, I don't mind having a little rest, 00:43:05.320 |
just make sure you don't lose me, or something like that. 00:43:08.740 |
- Yeah, I mean, that's an interesting thought, 00:43:10.340 |
that even with us humans, the suffering is in the software. 00:43:14.900 |
We right now don't know how to repair the hardware, 00:43:19.300 |
but we're getting better at it, and better in the idea. 00:43:23.180 |
I mean, a lot of, some people dream about one day 00:43:26.140 |
being able to transfer certain aspects of the software 00:43:41.180 |
I don't know if you're familiar with the companies 00:43:58.900 |
sort of increase the bandwidth at which your brain 00:44:02.440 |
can look up articles on Wikipedia, kind of thing. 00:44:05.240 |
Sort of expand the knowledge capacity of the brain. 00:44:08.340 |
Do you think that notion, is that interesting to you, 00:44:17.300 |
I'd love to be able to have that increased bandwidth. 00:44:19.960 |
And I want better access to my memory, I have to say, too. 00:44:23.680 |
As I get older, I talk to my wife about things 00:44:30.280 |
Her memory is often better about particular events. 00:44:42.560 |
I could search that particular year and rerun those things. 00:44:56.520 |
because people email me as if they know me well, 00:45:11.080 |
So on the flip side of AI, people like Stuart Russell 00:45:31.160 |
do you think it is possible that AI will align with our values, 00:45:34.640 |
align with our human ethics, or living being ethics? 00:45:47.960 |
that we'll more or less accidentally lose control of AGI. 00:45:51.800 |
- Do you have that fear yourself, personally? 00:45:58.560 |
I talk to philosophers like Nick Bostrom and Toby Ord, 00:46:13.680 |
"No, we're not really that close to producing AGI, 00:46:19.600 |
- So if you look at Nick Bostrom's sort of arguments, 00:46:24.960 |
So, of course, I myself engineer AI systems, 00:46:34.880 |
is there any fundamental reason that we'll never achieve it? 00:46:42.200 |
a dire existential risk, so we should be concerned about it. 00:46:46.640 |
And do you find that argument at all appealing 00:47:06.160 |
that raises the question, how far off is that? 00:47:11.480 |
And is there something that we can do about it now? 00:47:24.040 |
it seems unlikely that there's anything much we could do now 00:47:28.440 |
that would influence whether this is going to happen 00:47:37.320 |
this is what we need to do to prevent this happening 00:47:44.560 |
but I'm all in favor of some people doing research 00:47:48.640 |
into this to see if indeed it is that far off, 00:47:51.480 |
or if we are in a position to do something about it sooner. 00:48:02.760 |
even if the risk of extinction is very small, 00:48:12.760 |
who talk about long-term risks, extinction risks, 00:48:16.360 |
is only about how much priority that should have 00:48:20.520 |
- If you look at the math of it, 00:48:25.040 |
if it's an existential risk, so everybody dies, 00:48:28.920 |
it feels like an infinity in the math equation, 00:48:33.160 |
and that makes the math of the priorities difficult to do 00:48:43.960 |
that it's a non-zero probability that it'll happen tomorrow, 00:48:48.200 |
how do you deal with these kinds of existential risks, 00:48:58.640 |
I'm not sure if global warming falls into that category, 00:49:01.960 |
because global warming is a lot more gradual. 00:49:04.800 |
- And people say it's not an existential risk 00:49:11.200 |
or northern Siberia, or something of that sort, yeah. 00:49:16.080 |
the complete existential risks, a fundamental, 00:49:19.640 |
like an overriding part of the equations of ethics? 00:49:24.720 |
- No, certainly if you treat it as an infinity, 00:49:34.440 |
I mean, one of the ethical assumptions that goes into this 00:49:45.880 |
is in some way comparable to the sufferings or deaths 00:49:59.280 |
but I also think there's a case for taking the other view. 00:50:05.880 |
but still if there's some uncertainty about this 00:50:12.520 |
then still it's gonna overwhelm everything else. 00:50:20.840 |
I'm not convinced that it's really infinite here. 00:50:23.400 |
And even Nick Bostrom in his discussion of this 00:50:27.200 |
doesn't claim that there'll be an infinite number 00:50:33.320 |
It's a vast number that I think he calculates. 00:50:36.020 |
This is assuming we can upload consciousness onto these, 00:50:43.560 |
and therefore there'll be much more energy efficient, 00:50:45.280 |
but he calculates the amount of energy in the universe 00:50:57.360 |
is he quickly jumps from the individual scale 00:51:08.880 |
It's both interesting from a computer science perspective, 00:51:11.360 |
AI perspective, and from an ethical perspective, 00:51:34.840 |
discounted by the odds that you won't be able 00:51:37.640 |
to produce those consequences, that something will go wrong. 00:51:40.360 |
But in a simple case, let's assume we have certainty 00:51:43.840 |
about what the consequences of our actions will be, 00:51:53.360 |
that you talk with Sam Harris on this podcast 00:51:58.760 |
That's like two hours of moral philosophy discussion. 00:52:07.360 |
And actually there's one thing that I need to add, 00:52:25.920 |
there are different things that could be good consequences. 00:52:35.800 |
And that makes the calculations even more difficult 00:52:38.040 |
'cause then you need to know how to balance these things off. 00:52:47.920 |
I think that the calculation becomes more manageable 00:52:56.320 |
It's still in practice, we don't know how to do it. 00:53:02.680 |
We don't know how to calculate the probabilities 00:53:04.880 |
that different actions will produce this or that. 00:53:14.500 |
And one way we have to focus on the short-term consequences 00:53:27.660 |
what about the extreme suffering of very small groups? 00:53:37.560 |
How do you, would you say you yourself are utilitarian? 00:53:43.160 |
- Sort of, what do you make of the difficult, ethical, 00:53:48.160 |
maybe poetic suffering of very few individuals? 00:53:54.960 |
- I think it's possible that that gets overridden 00:53:57.020 |
by benefits to very large numbers of individuals. 00:54:02.840 |
But before we conclude that it is the right answer, 00:54:21.540 |
below the neutral level than extreme happiness 00:54:27.300 |
So when I think about the worst experiences possible 00:54:33.160 |
I don't think of them as equidistant from neutral. 00:54:36.200 |
So like it's a scale that goes from minus 100 00:54:54.440 |
of my most painful experiences even for two hours 00:55:13.400 |
that it's okay to make one person suffer extremely 00:55:23.540 |
But at some point, I do think you should aggregate 00:55:30.560 |
even though it violates our intuitions of justice 00:55:35.560 |
of giving priority to those who are worse off, 00:55:43.040 |
- Yeah, it's some complicated nonlinear function. 00:55:51.080 |
the more we're able to measure a bunch of factors 00:55:55.680 |
And I could foresee the ability to estimate well-being 00:56:07.680 |
Do you think it'll be possible and is a good idea 00:56:12.400 |
to push that kind of analysis to then make public decisions, 00:56:23.560 |
here's a tax rate at which well-being will be optimized. 00:56:28.280 |
- Yeah, that would be great if we really knew that, 00:56:32.360 |
- No, but do you think it's possible to converge 00:56:43.080 |
I think it would be difficult to get convergence 00:56:58.720 |
that the worse off are making are less than the gains 00:57:01.460 |
that those who are sort of medium badly off could be making. 00:57:05.720 |
So we still have all of these intuitions that we argue about. 00:57:14.280 |
doesn't show that there isn't a right answer there. 00:57:17.840 |
- Do you think, who gets to say what is right and wrong? 00:57:21.360 |
Do you think there's a place for ethics oversight 00:57:29.360 |
overseeing what kind of decisions AI can make or not, 00:57:39.580 |
but the ideas you've explored in animal liberation, 00:57:49.120 |
we shouldn't do this, but is there some harder rules 00:57:53.640 |
Or is this a collective thing we converge towards as a society 00:57:56.720 |
and thereby make better and better ethical decisions? 00:58:07.920 |
and the way it doesn't work always very well. 00:58:10.200 |
So I don't see a better option than allowing the public 00:58:15.400 |
to vote for governments in accordance with their policies. 00:58:35.160 |
But I recognise that democracy isn't really well set up 00:58:44.320 |
and benevolent, you know, omnibenevolent leader 00:58:48.720 |
who would do that better than democracies could. 00:58:57.400 |
isn't gonna be corrupted by a variety of influences. 00:59:01.320 |
You know, we've had so many examples of people 00:59:12.800 |
So I don't know, you know, that's why, as I say, 00:59:16.560 |
I don't know that we have a better system than democracy 00:59:20.040 |
- Well, so you also discuss effective altruism, 00:59:23.440 |
which is a mechanism for going around government, 00:59:27.240 |
for putting the power in the hands of the people 00:59:29.560 |
to donate money towards causes to help, you know, 00:59:45.280 |
you, 10 years ago, wrote "The Life You Can Save" 00:59:48.240 |
that's now, I think, available for free online. 00:59:51.400 |
- That's right, you can download either the ebook 00:59:53.880 |
or the audio book free from thelifeyoucansave.org. 00:59:57.520 |
- And what are the key ideas that you present in the book? 01:00:05.200 |
is to make people realise that it's not difficult 01:00:13.720 |
that there are highly effective organisations now 01:00:16.800 |
that are doing this, that they've been independently assessed 01:00:20.320 |
and verified by research teams that are expert in this area, 01:00:34.360 |
to really make a positive contribution to the world 01:00:43.560 |
and living a life that is barely, or perhaps not at all, 01:00:51.960 |
- So you describe a minimum ethical standard of giving. 01:01:01.400 |
that want to be effectively altruistic in their life, 01:01:09.400 |
- There are many different kinds of ways of living 01:01:13.600 |
And if you're at the point where you're thinking 01:01:33.400 |
and then donate most of it to effective charities, 01:01:37.000 |
to going to work for a really good nonprofit organization 01:01:40.880 |
so that you can directly use your skills and ability 01:01:50.840 |
maybe small chances but big payoffs in politics. 01:01:56.560 |
where if you're talented, you might rise to a higher level 01:02:01.760 |
Do research in an area where the payoffs could be great. 01:02:07.240 |
but too few people are even thinking about those questions. 01:02:11.400 |
They're just going along in some sort of preordained rut 01:02:37.120 |
if you would like to give a percentage of your income 01:02:40.080 |
that you talk about in "The Life You Can Save." 01:02:42.440 |
I mean, I was looking through, it's quite a compelling, 01:02:53.760 |
- Okay, so I do actually set out suggested levels of giving 01:03:02.840 |
the traditional tithe that's recommended in Christianity 01:03:13.600 |
Tax scales reflect the idea that the more income you have, 01:03:18.000 |
And I think the same is true in what you can give. 01:03:25.360 |
which starts at 1% for people on modest incomes 01:03:28.880 |
and rises to 33 1/3% for people who are really earning a lot. 01:03:33.880 |
And my idea is that I don't think any of these amounts 01:03:42.080 |
because they are progressive and geared to income. 01:03:48.600 |
and can know that they're doing something significant 01:03:56.080 |
between people in extreme poverty in the world 01:04:07.520 |
because there's something about our human nature 01:04:27.200 |
to help people in great need when we can easily do so, 01:04:41.880 |
with having a purpose that's larger than yourself. 01:05:06.240 |
with similar ideas and they tend to be interesting, 01:05:12.680 |
is another big contribution to having a good life. 01:05:15.960 |
- So we talked about big things that are beyond ourselves, 01:05:29.600 |
the ethics that you gain from pondering your own mortality? 01:05:37.880 |
you can't help thinking about your own mortality. 01:05:47.080 |
I don't think there's anything after the death of my body, 01:05:51.280 |
assuming that we won't be able to upload my mind 01:05:58.400 |
or anything to look forward to in that sense. 01:06:07.960 |
of our ability to be cognizant of our mortality, 01:06:21.000 |
- I suppose the fact that you have only a limited time 01:06:37.760 |
But otherwise, no, I'd rather have more time to do more. 01:06:42.040 |
I'd also like to be able to see how things go 01:06:47.520 |
Is climate change gonna turn out to be as dire 01:06:49.920 |
as a lot of scientists say that it is going to be? 01:06:57.880 |
I'd really like to know the answers to those questions, 01:07:10.160 |
- I think the meaning of life is the meaning we give to it. 01:07:14.120 |
I don't think that we were brought into the universe 01:07:35.080 |
like having a rich, fulfilling, enjoyable, pleasurable life. 01:07:39.160 |
And we can try to do our part in reducing the bad things 01:07:49.520 |
is to do a little bit more of the good things, 01:07:55.440 |
- Yeah, so do as much of the good things as you can, 01:08:01.920 |
I don't think there's a better place to end it. 01:08:11.360 |
and thank you to our sponsors, Cash App and Masterclass. 01:08:17.680 |
by downloading Cash App and using the code LEXPODCAST, 01:08:31.000 |
and the journey I'm on, my research and startup. 01:08:35.260 |
If you enjoy this thing, subscribe on YouTube, 01:08:40.320 |
support on Patreon, or connect with me on Twitter 01:08:43.080 |
at Lex Fridman, spelled without the E, just F-R-I-D-M-A-N. 01:08:52.800 |
What one generation finds ridiculous, the next accepts. 01:09:01.120 |
Thank you for listening, and hope to see you next time.