
Nick Bostrom: Superintelligence | AI Podcast Clips


Chapters

0:00 What is Superintelligence
2:10 AI existential risk
3:47 Near term vs long term
5:39 Killer app
8:17 Intelligence explosion
9:36 Humans are fallible
10:44 We lose our specialness
12:18 What would happen in a posthuman world
14:28 The scale of intelligence
16:43 Learning ability
19:44 Multiple Value Systems
21:25 First instinct

Transcript

00:00:00.000 | - Let's talk about superintelligence,
00:00:03.560 | at least for a little bit.
00:00:05.680 | And let's start at the basics.
00:00:07.280 | What to you is intelligence?
00:00:09.340 | - Yeah, not to get too stuck with the definitional question.
00:00:14.760 | I mean, the common sense understanding is,
00:00:17.400 | like the ability to solve complex problems,
00:00:19.800 | to learn from experience, to plan, to reason,
00:00:23.040 | some combination of things like that.
00:00:27.320 | - Is consciousness mixed up into that or no?
00:00:29.840 | Is consciousness mixed up into that or is it-
00:00:31.760 | - Well, I think it could be fairly intelligent,
00:00:34.880 | at least without being conscious probably.
00:00:38.000 | - So then what is superintelligence?
00:00:42.160 | - Yeah, that would be like something that was much more-
00:00:44.920 | - Of that.
00:00:46.320 | - Had much more general cognitive capacity
00:00:49.000 | than we humans have.
00:00:50.400 | So if we talk about general superintelligence,
00:00:53.920 | it would be a much faster learner, be able to reason
00:00:57.840 | much better, make plans that are more effective
00:01:00.320 | at achieving its goals, say in a wide range
00:01:03.080 | of complex, challenging environments.
00:01:05.720 | - In terms of, as we turn our eye to the idea
00:01:08.880 | of sort of existential threats from superintelligence,
00:01:12.740 | do you think superintelligence has to exist
00:01:16.240 | in the physical world or can it be digital only?
00:01:19.520 | Sort of, we think of our general intelligence as us humans,
00:01:23.960 | as an intelligence that's associated with the body
00:01:27.360 | that's able to interact with the world,
00:01:28.880 | that's able to affect the world directly, physically.
00:01:32.760 | - I mean, digital only is perfectly fine, I think.
00:01:34.960 | I mean, it's physical in the sense that obviously
00:01:37.720 | the computers and the memories are physical.
00:01:40.840 | - But its capability to affect the world sort of-
00:01:43.640 | - Could be very strong even if it has a limited
00:01:46.480 | set of actuators, if it can type text on the screen
00:01:51.360 | or something like that, that would be, I think, ample.
00:01:54.480 | - So in terms of the concerns of existential threat of AI,
00:01:59.480 | how can an AI system that's in the digital world
00:02:03.560 | pose an existential risk, sort of,
00:02:07.000 | and what are the attack vectors for a digital system?
00:02:10.520 | - Well, I mean, I guess maybe to take one step back,
00:02:12.920 | so I should emphasize that I also think
00:02:16.600 | there's this huge positive potential
00:02:18.880 | from machine intelligence, including superintelligence.
00:02:22.040 | And I wanna stress that because some of my writing
00:02:26.920 | has focused on what can go wrong.
00:02:29.440 | And when I wrote the book "Superintelligence,"
00:02:31.800 | at that point, I felt that there was a kind of neglect
00:02:36.800 | of what would happen if AI succeeds,
00:02:40.440 | and in particular, a need to get
00:02:42.280 | a more granular understanding of where the pitfalls are
00:02:45.000 | so we can avoid them.
00:02:46.200 | I think that since the book came out in 2014,
00:02:51.680 | there has been a much wider recognition of that,
00:02:54.640 | and a number of research groups
00:02:56.560 | are now actually working on developing,
00:02:58.800 | say, AI alignment techniques and so on and so forth.
00:03:01.400 | So I'd like, yeah, I think now it's important
00:03:05.440 | to make sure we bring back onto the table
00:03:10.040 | the upside as well.
00:03:11.000 | - And there's a little bit of a neglect now on the upside,
00:03:14.560 | which is, I mean, if you look at, I was talking to a friend,
00:03:17.560 | if you look at the amount of information that is available,
00:03:20.440 | or people talking, or people being excited
00:03:22.480 | about the positive possibilities of general intelligence,
00:03:25.720 | that's not, it's far outnumbered
00:03:29.160 | by the negative possibilities
00:03:31.520 | in terms of our public discourse.
00:03:34.000 | - Possibly, yeah.
00:03:35.640 | It's hard to measure it, but--
00:03:37.680 | - What are, can you linger on that for a little bit?
00:03:39.720 | What are some, to you, possible big positive impacts
00:03:44.440 | of general intelligence, superintelligence?
00:03:46.920 | - Well, I mean, superintelligence,
00:03:48.560 | because I tend to also wanna distinguish
00:03:51.560 | these two different contexts of thinking about AI
00:03:54.680 | and AI impacts, the kind of near-term and long-term,
00:03:57.960 | if you want, both of which I think are legitimate things
00:04:01.800 | to think about, and people should discuss both of them,
00:04:06.800 | but they are different, and they often get mixed up,
00:04:10.640 | and then you get confusion.
00:04:13.760 | Like, I think you get simultaneously,
00:04:15.160 | like maybe an overhyping of the near-term
00:04:17.200 | and an underhyping of the long-term,
00:04:18.880 | and so I think as long as we keep them apart,
00:04:20.920 | we can have, like, two good conversations,
00:04:23.840 | but, or we can mix them together
00:04:26.160 | and have one bad conversation.
00:04:27.280 | - Can you clarify just the two things we're talking about,
00:04:30.360 | the near-term and the long-term?
00:04:32.200 | What are the distinctions?
00:04:33.040 | - Well, it's a blurry distinction,
00:04:36.680 | but say the things I wrote about in this book,
00:04:38.880 | superintelligence, long-term,
00:04:41.840 | things people are worrying about today
00:04:45.680 | with, I don't know, algorithmic discrimination,
00:04:48.680 | or even things, self-driving cars and drones and stuff,
00:04:53.120 | more near-term.
00:04:54.120 | And then, of course, you could imagine some medium-term
00:04:58.880 | where they kind of overlap and one evolves into the other.
00:05:01.920 | But at any rate, I think both, yeah,
00:05:05.400 | the issues look kind of somewhat different
00:05:08.440 | depending on which of these contexts.
00:05:10.240 | - So I think it would be nice
00:05:12.640 | if we can talk about the long-term,
00:05:15.360 | and think about a positive impact
00:05:20.360 | or a better world because of the existence
00:05:24.280 | of the long-term superintelligence.
00:05:26.520 | Do you have views of such a world?
00:05:28.000 | - Yeah, I mean, I guess it's a little hard to articulate
00:05:30.960 | because it seems obvious that the world
00:05:33.320 | has a lot of problems as it currently stands.
00:05:36.560 | And it's hard to think of any one of those
00:05:41.120 | which it wouldn't be useful to have,
00:05:43.720 | like, a friendly aligned superintelligence working on.
00:05:48.720 | - So from health to the economic system
00:05:53.720 | to be able to sort of improve the investment
00:05:56.960 | and trade and foreign policy decisions,
00:05:59.120 | all that kind of stuff.
00:06:00.880 | - All that kind of stuff and a lot more.
00:06:03.120 | - I mean, what's the killer app?
00:06:06.760 | - Well, I don't think there is one.
00:06:08.320 | I think AI, especially artificial general intelligence,
00:06:12.960 | is really the ultimate general purpose technology.
00:06:16.440 | So it's not that there is this one problem,
00:06:18.360 | this one area where it will have a big impact,
00:06:20.760 | but if and when it succeeds,
00:06:23.520 | it will really apply across the board
00:06:26.320 | in all fields where human creativity and intelligence
00:06:29.960 | and problem solving is useful,
00:06:31.160 | which is pretty much all fields, right?
00:06:33.680 | The thing that it would do
00:06:36.680 | is give us a lot more control over nature.
00:06:39.480 | It wouldn't automatically solve the problems
00:06:41.680 | that arise from conflict between humans,
00:06:43.960 | fundamentally political problems.
00:06:46.840 | Some subset of those might go away
00:06:48.280 | if we just had more resources and cooler tech,
00:06:50.800 | but some subset would require coordination
00:06:54.800 | that is not automatically achieved
00:06:58.800 | just by having more technological capability.
00:07:01.600 | But anything that's not of that sort,
00:07:03.440 | I think you just get like an enormous boost
00:07:05.680 | with this kind of cognitive technology
00:07:09.600 | once it goes all the way.
00:07:11.480 | Now, again, that doesn't mean I'm like thinking,
00:07:13.920 | oh, people don't recognize what's possible
00:07:18.920 | with current technology
00:07:20.680 | and like sometimes things get overhyped,
00:07:22.760 | but I mean, those are perfectly consistent views to hold,
00:07:25.600 | the ultimate potential being enormous.
00:07:28.480 | And then it's a very different question
00:07:30.400 | of how far are we from that
00:07:31.880 | or what can we do with near-term technology?
00:07:34.040 | - Yeah, so what's your intuition
00:07:35.120 | about the idea of intelligence explosion?
00:07:37.840 | So there's this,
00:07:40.680 | you know, when you start to think about that leap
00:07:42.800 | from the near-term to the long-term,
00:07:44.880 | the natural inclination,
00:07:46.880 | like for me, sort of building machine learning systems today
00:07:49.720 | it seems like it's a lot of work
00:07:51.760 | to get to general intelligence,
00:07:53.640 | but there's some intuition of exponential growth,
00:07:55.880 | of exponential improvement, of intelligence explosion.
00:07:59.480 | Can you maybe try to elucidate,
00:08:04.200 | try to talk about what's your intuition
00:08:07.680 | about the possibility of an intelligence explosion,
00:08:11.560 | that it won't be this gradual, slow process,
00:08:13.920 | there might be a phase shift?
00:08:15.960 | - Yeah, I think it's,
00:08:19.680 | we don't know how explosive it will be.
00:08:22.080 | I think for what it's worth,
00:08:24.200 | seems fairly likely to me that at some point
00:08:27.960 | there will be some intelligence explosion,
00:08:29.960 | like some period of time
00:08:32.000 | where progress in AI becomes extremely rapid.
00:08:35.600 | Roughly in the area where you might say
00:08:39.080 | it's kind of human-ish equivalent
00:08:42.280 | in core cognitive faculties,
00:08:46.040 | though the concept of human equivalence,
00:08:48.600 | like it starts to break down when you look too closely at it
00:08:51.680 | and just how explosive does something have to be
00:08:54.400 | for it to be called an intelligence explosion?
00:08:57.640 | Like, does it have to be like overnight literally,
00:08:59.600 | or a few years?
00:09:01.040 | But overall, I guess,
00:09:04.680 | if you plotted the opinions of different people
00:09:08.160 | in the world, I guess I would put somewhat more
00:09:10.720 | probability on the intelligence explosion scenario
00:09:14.080 | than probably the average AI researcher would.
00:09:18.200 | - So, and then the other part of the intelligence explosion,
00:09:21.400 | or just, forget explosion, just progress,
00:09:24.600 | is once you achieve that gray area
00:09:27.040 | of human-level intelligence,
00:09:29.040 | is it obvious to you that we should be able
00:09:31.760 | to proceed beyond it to get to super intelligence?
00:09:35.760 | - Yeah, that seems, I mean, as much as any of these things
00:09:39.760 | can be obvious, given we've never had one,
00:09:43.680 | people have different views,
00:09:44.760 | smart people have different views,
00:09:46.040 | it's like there's some degree of uncertainty
00:09:49.280 | that always remains for any big, futuristic,
00:09:52.200 | philosophical, grand question
00:09:54.800 | that just we realize humans are fallible,
00:09:56.640 | especially about these things.
00:09:58.200 | But it does seem, as far as I'm judging things,
00:10:01.680 | based on my own impressions,
00:10:03.680 | that it seems very unlikely that that would be a ceiling
00:10:08.160 | at or near human cognitive capacity.
00:10:12.720 | - But, and that's such a, I don't know,
00:10:15.600 | that's such a special moment.
00:10:17.520 | It's both terrifying and exciting
00:10:20.120 | to create a system that's beyond our intelligence.
00:10:23.640 | So, maybe you can step back and say,
00:10:27.120 | like, how does that possibility make you feel?
00:10:30.360 | That we can create something,
00:10:33.240 | it feels like there's a line beyond which it steps,
00:10:37.000 | it'll be able to outsmart you,
00:10:39.760 | and therefore it feels like a step where we lose control.
00:10:44.240 | - Well, I don't think the latter follows,
00:10:48.200 | that is, you could imagine,
00:10:50.600 | and in fact, this is what a number of people
00:10:52.920 | are working towards, making sure that we could ultimately
00:10:56.880 | project higher levels of problem-solving ability
00:11:00.640 | while still making sure that they are aligned,
00:11:03.520 | like they are in the service of human values.
00:11:06.240 | I mean, so, losing control, I think,
00:11:11.280 | is not a given that that would happen.
00:11:15.040 | Now, you asked how it makes you feel.
00:11:16.760 | I mean, to some extent, I've lived with this for so long,
00:11:19.400 | since as long as I can remember,
00:11:22.560 | being an adult or even a teenager,
00:11:25.160 | it seemed to me obvious that at some point,
00:11:27.000 | AI will succeed.
00:11:28.440 | - And so, I actually misspoke, I didn't mean control.
00:11:33.440 | I meant, because the control problem is an interesting thing
00:11:36.680 | and I think the hope is, at least we should be able
00:11:40.320 | to maintain control over systems that are smarter than us,
00:11:44.040 | but we do lose our specialness.
00:11:48.380 | It's sort of, we'll lose our place
00:11:53.240 | as the smartest, coolest thing on Earth.
00:11:57.000 | And there's an ego involved with that,
00:12:00.600 | that humans aren't very good at dealing with.
00:12:04.480 | I mean, I value my intelligence as a human being.
00:12:08.480 | It seems like a big transformative step
00:12:11.040 | to realize there's something out there
00:12:13.320 | that's more intelligent.
00:12:14.600 | I mean, you don't see that as such a fundamental--
00:12:17.800 | - Yeah, I think, yes, a lot, I think it would be small.
00:12:21.920 | I mean, I think there are already a lot of things out there
00:12:25.120 | that are, I mean, certainly if you think the universe
00:12:27.320 | is big, there's gonna be other civilizations
00:12:29.240 | that already have super intelligences
00:12:31.680 | or that just naturally have brains the size of beach balls
00:12:35.480 | and are like completely leaving us in the dust.
00:12:39.300 | And we haven't come face to face with that.
00:12:42.280 | - We haven't come face to face,
00:12:43.480 | but I mean, that's an open question,
00:12:45.600 | what would happen in a kind of post-human world,
00:12:50.520 | like how much day to day would these super intelligences
00:12:55.520 | be involved in the lives of ordinary?
00:12:58.360 | I mean, you could imagine some scenario
00:13:01.360 | where it would be more like a background thing
00:13:03.020 | that would help protect against some things,
00:13:05.120 | but you wouldn't, like there wouldn't be this intrusive
00:13:08.480 | kind of like making you feel bad
00:13:10.800 | by, like, making clever jokes at your expense.
00:13:13.360 | Like there are all sorts of things
00:13:14.680 | that maybe in the human context
00:13:16.760 | we would feel awkward about.
00:13:19.400 | You don't wanna be the dumbest kid in your class,
00:13:21.360 | everybody picks on you.
00:13:22.200 | Like a lot of those things,
00:13:23.800 | maybe you need to abstract away from,
00:13:26.760 | if you're thinking about this context
00:13:28.220 | where we have infrastructure that is in some sense
00:13:30.720 | beyond any or all humans.
00:13:34.940 | I mean, it's a little bit like say the scientific community
00:13:38.200 | as a whole, if you think of that as a mind,
00:13:41.040 | it's a little bit of metaphor,
00:13:42.040 | but I mean, obviously it's gotta be like way more capacious
00:13:46.600 | than any individual.
00:13:48.200 | So in some sense, there is this mind-like thing
00:13:51.280 | already out there that's just vastly more intelligent
00:13:56.080 | than any individual is.
00:13:58.280 | And we think, okay, that's,
00:14:01.400 | you just accept that as a fact.
00:14:04.000 | - That's the basic fabric of our existence,
00:14:06.200 | that there's a superintelligence.
00:14:07.640 | - Yeah, you get used to a lot of things.
00:14:09.320 | - I mean, there's already Google and Twitter and Facebook,
00:14:12.600 | these recommender systems that are the basic fabric
00:14:17.640 | of our, I could see them becoming,
00:14:21.600 | I mean, do you think of the collective intelligence
00:14:23.720 | of these systems as already perhaps
00:14:25.920 | reaching super intelligence level?
00:14:27.720 | - Well, I mean, so here it comes to this,
00:14:30.600 | the concept of intelligence and the scale
00:14:33.120 | and what human level means.
00:14:35.960 | The kind of vagueness and indeterminacy of those concepts
00:14:41.560 | starts to dominate how you would answer that question.
00:14:47.600 | So like say the Google search engine
00:14:50.480 | has a very high capacity of a certain kind,
00:14:53.440 | like remembering and retrieving information,
00:14:56.760 | particularly like text or images
00:15:02.680 | where you have a kind of string, a word-string key,
00:15:07.680 | obviously superhuman at that,
00:15:09.120 | but there's a vast set of other things it can't even do at all,
00:15:14.120 | not just not do well.
00:15:17.200 | So you have these current AI systems
00:15:19.600 | that are superhuman in some limited domain
00:15:22.880 | and then like radically subhuman in all other domains.
00:15:27.880 | Same with, say, chess, or just a simple computer
00:15:31.040 | that can multiply really large numbers, right?
00:15:33.000 | So it's gonna have this like one spike of super intelligence
00:15:36.040 | and then a kind of a zero level of capability
00:15:38.880 | across all other cognitive fields.
00:15:40.920 | - Yeah, I don't necessarily think the generalness,
00:15:44.160 | I mean, I'm not so attached to it,
00:15:45.440 | but I could sort of, it's a gray area and it's a feeling,
00:15:49.160 | but to me sort of alpha zero
00:15:53.000 | but to me sort of AlphaZero
00:15:56.320 | much, much more intelligent than Deep Blue.
00:15:59.160 | And to say which domain,
00:16:01.680 | well, you could say, well, these are both just board game,
00:16:03.680 | they're both just able to play board games,
00:16:05.440 | who cares if they're gonna do better or not?
00:16:07.860 | But there's something about the learning, the self play--
00:16:10.440 | - The learning, yeah. - That makes it,
00:16:13.400 | crosses over into that land of intelligence
00:16:16.320 | that doesn't necessarily need to be general.
00:16:18.440 | In the same way, Google is much closer to Deep Blue currently
00:16:22.120 | in terms of its search engine--
00:16:23.800 | - Yeah. - Than it is to
00:16:24.920 | sort of AlphaZero.
00:16:26.600 | And the moment it becomes,
00:16:28.360 | the moment these recommender systems
00:16:30.040 | really become more like AlphaZero,
00:16:33.080 | being able to learn a lot
00:16:36.640 | without being heavily constrained by human interaction,
00:16:40.320 | that seems like a special moment in time.
00:16:43.400 | - I mean, certainly learning ability
00:16:46.400 | seems to be an important facet of general intelligence.
00:16:51.200 | - Right. - That you can take
00:16:52.480 | some new domain that you haven't seen before,
00:16:55.840 | and you weren't specifically pre-programmed for,
00:16:57.880 | and then figure out what's going on there,
00:17:00.040 | and eventually become really good at it.
00:17:02.160 | So that's something AlphaZero
00:17:05.000 | has much more of than Deep Blue had.
00:17:08.720 | And in fact, I mean, systems like AlphaZero can learn
00:17:12.640 | not just Go, but other games, and,
00:17:14.560 | in fact, probably beat Deep Blue in chess and so forth.
00:17:17.960 | Right? - Yeah,
00:17:18.800 | not just Deep Blue. - So you do see this--
00:17:19.640 | - Destroy Deep Blue. - This general,
00:17:20.760 | and so it matches the intuition.
00:17:22.400 | We feel it's more intelligent,
00:17:23.920 | and it also has more of this
00:17:25.280 | general purpose learning ability.
00:17:27.200 | And if we get systems
00:17:29.600 | that have even more general purpose learning ability,
00:17:31.560 | it might also trigger an even stronger intuition
00:17:33.520 | that they are actually starting to get smart.
00:17:36.760 | - So if you were to pick a future,
00:17:38.320 | what do you think a utopia looks like with AGI systems?
00:17:42.520 | Is it the Neuralink, brain-computer interface world,
00:17:48.160 | where we're kind of really closely interlinked
00:17:50.400 | with AI systems?
00:17:52.400 | Is it possibly where AGI systems replace us completely
00:17:56.840 | while maintaining the values and the consciousness?
00:18:01.840 | Is it something like it's a completely invisible fabric,
00:18:04.720 | like you mentioned, a society where it just aids
00:18:07.080 | in a lot of the stuff that we do,
00:18:09.240 | like curing diseases and so on?
00:18:10.840 | What is utopia if you get to pick?
00:18:13.120 | - Yeah, I mean, it's a good question,
00:18:14.680 | and a deep and difficult one.
00:18:17.800 | I'm quite interested in it.
00:18:19.080 | I don't have all the answers yet,
00:18:22.280 | but, or might never have,
00:18:23.800 | but I think there are some different observations
00:18:27.280 | one can make.
00:18:28.120 | One is if this scenario actually did come to pass,
00:18:32.400 | it would open up this vast space of possible modes of being.
00:18:37.400 | On one hand, material and resource constraints
00:18:42.520 | would just be expanded dramatically.
00:18:44.920 | So there would be a lot of, a big pie, let's say, right?
00:18:49.480 | Also, it would enable us to do things,
00:18:53.680 | including to ourselves,
00:18:57.560 | or, like, it would just open up
00:19:00.480 | this much larger design space
00:19:02.080 | and options space than we have ever had access to
00:19:06.120 | in human history.
00:19:07.120 | So I think two things follow from that.
00:19:10.000 | One is that we probably would need to make
00:19:13.120 | a fairly fundamental rethink of what ultimately we value,
00:19:18.120 | like think things through more from first principles.
00:19:20.640 | The context would be so different from the familiar
00:19:22.480 | that we couldn't just take what we've always been doing
00:19:25.000 | and then, like, oh, well, now we have this cleaning robot
00:19:28.280 | that cleans the dishes in the sink,
00:19:31.760 | and a few other small things.
00:19:33.000 | And like, I think we would have to go back
00:19:34.640 | to first principles.
00:19:35.760 | - So even from the individual level,
00:19:37.760 | go back to the first principles of what is the meaning
00:19:40.400 | of life, what is happiness, what is fulfillment?
00:19:42.880 | - Yeah.
00:19:44.160 | And then also connected to this large space of resources
00:19:48.560 | is that it would be possible.
00:19:51.960 | And I think something we should aim for is to do well
00:19:59.000 | by the lights of more than one value system.
00:20:02.720 | That is, we wouldn't have to choose only one value criterion
00:20:08.880 | and say, we're gonna do something that scores really high
00:20:17.520 | on the metric of say hedonism.
00:20:21.920 | And then it's like a zero by other criteria,
00:20:26.480 | like kind of wireheaded brains in a vat.
00:20:29.120 | And it's like a lot of pleasure, that's good.
00:20:31.680 | But then like no beauty, no achievement.
00:20:33.720 | Or pick it up.
00:20:36.320 | I think to some significant, not unlimited sense,
00:20:39.800 | but a significant sense, it would be possible to do very well
00:20:44.080 | by many criteria.
00:20:44.960 | Like maybe you could get like 98% of the best
00:20:49.960 | according to several criteria at the same time,
00:20:52.760 | given this great expansion of the option space.
00:20:57.760 | And so-
00:20:59.560 | - So having competing value systems, competing criteria
00:21:02.960 | sort of forever, just like our Democrat
00:21:07.960 | versus Republican, there always seem to be
00:21:10.400 | multiple parties that are useful for our progress in society,
00:21:14.320 | even though it might seem dysfunctional in the moment,
00:21:16.920 | but having the multiple value systems
00:21:20.000 | seems to be beneficial for, I guess, a balance of power.
00:21:25.000 | - So that's, yeah, not exactly what I have in mind,
00:21:27.960 | well, although it can be,
00:21:29.800 | maybe in an indirect way it is.
00:21:31.360 | But that if you had the chance to do something
00:21:36.760 | that scored well on several different metrics,
00:21:41.560 | our first instinct should be to do that
00:21:43.480 | rather than immediately leap to the thing,
00:21:46.840 | which ones of these value systems
00:21:48.360 | are we gonna screw over?
00:21:49.640 | Like I think our first instinct,
00:21:50.960 | let's first try to do very well by all of them.
00:21:53.240 | Then it might be that you can't get 100% of all,
00:21:55.880 | and you would have to then like have the hard conversation
00:21:58.720 | about which one will only get 97%.
00:22:00.680 | - There you go, there's my cynicism
00:22:02.080 | that all of existence is always a trade-off.
00:22:04.920 | But you say, maybe it's not such a bad trade-off.
00:22:07.560 | Let's first at least try-
00:22:08.800 | - Well, this would be a distinctive context
00:22:11.040 | in which at least some of the constraints would be removed.
00:22:16.040 | - I'll leave you there.
00:22:17.560 | - So there will probably still be trade-offs in the end.
00:22:19.160 | It's just that we should first make sure
00:22:20.680 | we at least take advantage of this abundance.
00:22:24.640 | So in terms of thinking about this,
00:22:27.320 | like, yeah, one should think,
00:22:29.440 | I think in this kind of frame of mind of generosity
00:22:34.440 | and inclusiveness to different value systems
00:22:39.000 | and see how far one can get there first.
00:22:42.280 | And I think one could do something
00:22:45.280 | that would be very good
00:22:46.480 | according to many different criteria.
00:22:49.760 | (laughs)