
Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65


Chapters

0:00 Introduction
3:03 World War Two Taught Us about Human Psychology
8:59 System One
16:38 Advances in Machine Learning
21:14 Neural Networks
22:20 Grounding to the Physical Space
23:45 Active Learning
42:08 The Properties of Happiness
65:38 The Focusing Illusion
73:19 Good Test for Intelligence for an Artificial Intelligence System
78:14 Words of Wisdom


00:00:00.000 | The following is a conversation with Daniel Kahneman,
00:00:03.200 | winner of the Nobel Prize in Economics
00:00:05.780 | for his integration of economic science
00:00:08.240 | with the psychology of human behavior,
00:00:10.080 | judgment and decision-making.
00:00:12.320 | He's the author of the popular book,
00:00:14.320 | "Thinking Fast and Slow,"
00:00:16.160 | that summarizes in an accessible way
00:00:18.720 | his research of several decades,
00:00:20.800 | often in collaboration with Amos Tversky
00:00:23.560 | on cognitive biases, prospect theory and happiness.
00:00:27.940 | The central thesis of this work
00:00:29.700 | is the dichotomy between two modes of thought.
00:00:32.500 | What he calls system one is fast,
00:00:34.900 | instinctive and emotional.
00:00:36.600 | System two is slower, more deliberative and more logical.
00:00:40.740 | The book delineates cognitive biases
00:00:43.180 | associated with each of these two types of thinking.
00:00:46.080 | His study of the human mind
00:00:48.780 | and its peculiar and fascinating limitations
00:00:51.560 | are both instructive and inspiring
00:00:54.080 | for those of us seeking to engineer intelligent systems.
00:00:57.800 | This is the Artificial Intelligence Podcast.
00:01:00.820 | If you enjoy it, subscribe on YouTube,
00:01:03.180 | give it five stars on Apple Podcast,
00:01:05.180 | follow on Spotify, support it on Patreon,
00:01:07.820 | or simply connect with me on Twitter,
00:01:09.940 | Lex Fridman, spelled F-R-I-D-M-A-N.
00:01:13.880 | I recently started doing ads
00:01:15.380 | at the end of the introduction.
00:01:16.940 | I'll do one or two minutes after introducing the episode
00:01:19.900 | and never any ads in the middle
00:01:21.420 | that can break the flow of the conversation.
00:01:23.720 | I hope that works for you
00:01:25.100 | and doesn't hurt the listening experience.
00:01:28.420 | This show is presented by Cash App,
00:01:30.520 | the number one finance app in the App Store.
00:01:33.020 | I personally use Cash App to send money to friends,
00:01:35.800 | but you can also use it to buy, sell
00:01:37.580 | and deposit Bitcoin in just seconds.
00:01:39.980 | Cash App also has a new investing feature.
00:01:42.860 | You can buy fractions of a stock, say $1 worth,
00:01:45.780 | no matter what the stock price is.
00:01:47.920 | Brokerage services are provided by Cash App Investing,
00:01:50.820 | a subsidiary of Square and member SIPC.
00:01:54.100 | I'm excited to be working with Cash App
00:01:56.400 | to support one of my favorite organizations called FIRST,
00:01:59.820 | best known for their FIRST robotics and Lego competitions.
00:02:03.340 | They educate and inspire hundreds of thousands of students
00:02:06.620 | in over 110 countries
00:02:08.460 | and have a perfect rating on Charity Navigator,
00:02:11.100 | which means that donated money
00:02:12.380 | is used to maximum effectiveness.
00:02:15.140 | When you get Cash App from the App Store or Google Play
00:02:17.860 | and use code LEXPODCAST, you'll get $10
00:02:21.740 | and Cash App will also donate $10 to FIRST,
00:02:24.660 | which again is an organization
00:02:26.500 | that I've personally seen inspire girls and boys
00:02:29.280 | to dream of engineering a better world.
00:02:32.500 | And now here's my conversation with Daniel Kahneman.
00:02:35.800 | You tell a story of an SS soldier early in the war,
00:02:40.020 | World War II, in Nazi occupied France and Paris,
00:02:44.340 | where you grew up.
00:02:46.000 | He picked you up and hugged you
00:02:47.820 | and showed you a picture of a boy,
00:02:51.320 | maybe not realizing that you were Jewish.
00:02:53.940 | - Not maybe, certainly not.
00:02:56.480 | - So I told you I'm from the Soviet Union
00:02:59.560 | that was significantly impacted by the war as well,
00:03:01.500 | and I'm Jewish as well.
00:03:03.560 | What do you think World War II taught us
00:03:05.900 | about human psychology broadly?
00:03:09.840 | - Well, I think the only big surprise
00:03:13.880 | is the extermination policy, genocide,
00:03:18.360 | by the German people.
00:03:20.660 | That's when you look back on it
00:03:23.080 | and I think that's a major surprise.
00:03:27.160 | - It's a surprise because--
00:03:28.400 | - It's a surprise that they could do it.
00:03:31.000 | It's a surprise that enough people
00:03:34.820 | willingly participated in that.
00:03:37.320 | This is a surprise.
00:03:39.060 | Now it's no longer a surprise,
00:03:41.600 | but it's changed many people's views,
00:03:45.280 | I think, about human beings.
00:03:48.920 | Certainly for me, the Eichmann trial
00:03:52.120 | and that teaches you something
00:03:54.400 | because it's very clear that if it could happen in Germany,
00:03:59.400 | it could happen anywhere.
00:04:01.800 | It's not that the Germans were special.
00:04:04.240 | This could happen anywhere.
00:04:05.600 | - So what do you think that is?
00:04:08.280 | Do you think we're all capable of evil?
00:04:11.760 | We're all capable of cruelty?
00:04:13.720 | - I don't think in those terms.
00:04:16.280 | I think that what is certainly possible
00:04:19.960 | is you can dehumanize people
00:04:23.200 | so that you treat them not as people anymore,
00:04:27.660 | but as animals.
00:04:28.960 | And the same way that you can slaughter animals
00:04:33.220 | without feeling much of anything,
00:04:35.320 | it can be the same.
00:04:38.120 | And when you feel that,
00:04:39.920 | I think the combination of dehumanizing the other side
00:04:46.920 | and having uncontrolled power over other people,
00:04:50.720 | I think that doesn't bring out
00:04:52.480 | the most generous aspect of human nature.
00:04:55.160 | So that Nazi soldier,
00:04:58.260 | he was a good man.
00:05:03.360 | And he was perfectly capable of killing a lot of people,
00:05:08.680 | and I'm sure he did.
00:05:10.200 | - But what did the Jewish people mean to Nazis?
00:05:15.280 | So what, the dismissal of Jewish people as worthy of--
00:05:20.280 | - Again, this is surprising that it was so extreme,
00:05:25.160 | but it's not one thing in human nature.
00:05:29.120 | I don't want to call it evil,
00:05:30.640 | but the distinction between the in-group and the out-group,
00:05:34.500 | that is very basic.
00:05:36.760 | So that's built in.
00:05:38.320 | The loyalty and affection towards in-group
00:05:42.240 | and the willingness to dehumanize the out-group,
00:05:45.520 | that is in human nature.
00:05:49.080 | And that's what I think,
00:05:51.800 | probably didn't need the Holocaust to teach us that,
00:05:56.520 | but the Holocaust is a very sharp lesson
00:06:00.000 | of what can happen to people and what people can do.
00:06:05.000 | - So the effect of the in-group and the out-group?
00:06:09.720 | - It's clear that those were people,
00:06:13.280 | you could shoot them, they were not human.
00:06:17.360 | There was no empathy, or very, very little empathy left.
00:06:22.360 | So occasionally, there might have been.
00:06:26.840 | And very quickly, by the way,
00:06:30.040 | the empathy disappeared, if there was initially.
00:06:34.600 | And the fact that everybody around you was doing it,
00:06:39.000 | that completely, the group doing it,
00:06:42.960 | and everybody shooting Jews,
00:06:45.880 | I think that makes it permissible.
00:06:50.880 | Now, how much, whether it could happen
00:06:56.200 | in every culture, or whether the Germans
00:07:01.000 | were just particularly efficient and disciplined,
00:07:04.920 | so they could get away with it?
00:07:08.560 | - It's a question.
00:07:09.400 | - It's an interesting question.
00:07:10.800 | - Are these artifacts of history, or is it human nature?
00:07:14.240 | - I think that's really human nature.
00:07:16.560 | You put some people in a position of power
00:07:20.400 | relative to other people,
00:07:22.480 | and then they become less human, they become different.
00:07:27.480 | - But in general, in war,
00:07:30.240 | outside of concentration camps in World War II,
00:07:33.720 | it seems that war brings out darker sides of human nature,
00:07:38.520 | but also the beautiful things about human nature.
00:07:41.160 | - Well, what it brings out is the loyalty among soldiers.
00:07:46.160 | I mean, it brings out the bonding.
00:07:51.120 | Male bonding, I think, is a very real thing that happens.
00:07:56.120 | And there is a certain thrill to friendship,
00:08:00.920 | and there is certainly a certain thrill
00:08:02.880 | to friendship under risk, and to shared risk.
00:08:06.560 | And so people have very profound emotions,
00:08:10.360 | up to the point where it gets so traumatic
00:08:13.480 | that little is left.
00:08:17.160 | - So let's talk about psychology a little bit.
00:08:22.520 | In your book, "Thinking Fast and Slow,"
00:08:24.760 | you describe two modes of thought,
00:08:27.560 | system one, the fast, instinctive, and emotional one,
00:08:32.360 | and system two, the slower, deliberate, logical one.
00:08:36.360 | At the risk of asking Darwin
00:08:37.920 | to discuss theory of evolution,
00:08:41.320 | can you describe distinguishing characteristics
00:08:45.520 | for people who have not read your book of the two systems?
00:08:49.840 | - Well, I mean, the word system is a bit misleading,
00:08:54.320 | but at the same time it's misleading, it's also very useful.
00:08:58.760 | But what I call system one,
00:09:01.560 | it's easier to think of it as a family of activities.
00:09:06.560 | And primarily the way I describe it is
00:09:10.040 | there are different ways for ideas to come to mind.
00:09:13.920 | And some ideas come to mind automatically,
00:09:18.000 | and the example, a standard example is two plus two,
00:09:22.000 | and then something happens to you.
00:09:24.280 | And in other cases, you've got to do something,
00:09:28.160 | you've got to work in order to produce the idea.
00:09:30.760 | And my example, I always give the same pair of numbers
00:09:34.320 | as 27 times 14, I think.
00:09:36.760 | - You have to perform some algorithm in your head,
00:09:39.240 | some steps. - Yes, and it takes time.
00:09:42.440 | It's a very different, nothing comes to mind
00:09:45.640 | except something comes to mind, which is the algorithm,
00:09:49.400 | I mean, that you've got to perform.
00:09:51.880 | And then it's work, and it engages short-term memory,
00:09:55.160 | and it engages executive function,
00:09:58.160 | and it makes you incapable of doing other things
00:10:00.880 | at the same time.
00:10:01.800 | So the main characteristic of system two
00:10:06.080 | is that there is mental effort involved,
00:10:08.240 | and there is a limited capacity for mental effort,
00:10:11.160 | whereas system one is effortless, essentially.
00:10:14.160 | That's the major distinction.
00:10:16.320 | - So you talk about there,
00:10:18.840 | it's really convenient to talk about two systems,
00:10:21.120 | but you also mentioned just now and in general
00:10:24.320 | that there is no distinct two systems in the brain,
00:10:29.200 | from a neurobiological, even from psychology perspective.
00:10:32.680 | But why does it seem to,
00:10:34.840 | from the experiments you've conducted,
00:10:38.280 | there does seem to be kind of emergent
00:10:43.280 | two modes of thinking.
00:10:46.040 | So at some point, these kinds of systems
00:10:50.200 | came into a brain architecture.
00:10:54.560 | Maybe mammals share it.
00:10:55.960 | Or do you not think of it at all in those terms
00:10:59.720 | that it's all a mush, and these two things just emerge?
00:11:02.320 | - You know, evolutionary theorizing about this
00:11:05.800 | is cheap and easy.
00:11:08.280 | So it's, the way I think about it
00:11:12.600 | is that it's very clear that animals
00:11:15.800 | have a perceptual system,
00:11:19.120 | and that includes an ability to understand the world,
00:11:22.920 | at least to the extent that they can predict.
00:11:25.720 | They can't explain anything,
00:11:27.200 | but they can anticipate what's going to happen.
00:11:31.160 | And that's a key form of understanding the world.
00:11:34.760 | And my crude idea is that, what I call system two,
00:11:39.760 | well, system two grew out of this.
00:11:45.280 | And, you know, there is language,
00:11:47.600 | and there is the capacity of manipulating ideas,
00:11:51.080 | and the capacity of imagining futures,
00:11:53.480 | and of imagining counterfactuals,
00:11:56.160 | things that haven't happened,
00:11:58.320 | and to do conditional thinking.
00:12:00.520 | And there are really a lot of abilities
00:12:03.840 | that, without language, and without the very large brain
00:12:08.440 | that we have compared to others, would be impossible.
00:12:11.720 | Now, system one is more like what the animals are,
00:12:16.720 | but system one also can talk.
00:12:20.800 | I mean, it has language.
00:12:22.320 | It understands language.
00:12:23.720 | Indeed, it speaks for us.
00:12:25.560 | I mean, you know, I'm not choosing every word
00:12:28.440 | as a deliberate process.
00:12:30.200 | The words, I have some idea, and then the words come out.
00:12:34.200 | And that's automatic and effortless.
00:12:37.240 | - And many of the experiments you've done
00:12:39.360 | is to show that, listen, system one exists,
00:12:42.400 | and it does speak for us,
00:12:43.640 | and we should be careful about the voice it provides.
00:12:48.280 | - Well, I mean, you know, we have to trust it
00:12:51.560 | because it's the speed at which it acts.
00:12:57.560 | System two, if we're dependent on system two for survival,
00:13:01.880 | we wouldn't survive very long because it's very slow.
00:13:05.280 | - Yeah, crossing the street.
00:13:06.560 | - Crossing the street.
00:13:07.520 | I mean, many things depend on there being automatic.
00:13:11.160 | One very important aspect of system one
00:13:13.600 | is that it's not instinctive.
00:13:16.000 | You use the word instinctive.
00:13:17.960 | It contains skills that clearly have been learned
00:13:22.080 | so that skilled behavior, like driving a car
00:13:25.800 | or speaking, in fact, skilled behavior has to be learned.
00:13:30.800 | And so it doesn't, you know,
00:13:33.680 | you don't come equipped with driving.
00:13:36.520 | You have to learn how to drive.
00:13:38.480 | And you have to go through a period
00:13:40.760 | where driving is not automatic before it becomes automatic.
00:13:47.080 | - Yeah, you construct, I mean,
00:13:48.360 | this is where you talk about heuristic and biases
00:13:50.400 | is you make it automatic.
00:13:53.900 | You create a pattern,
00:13:56.400 | and then system one essentially matches a new experience
00:14:00.280 | against a previously seen pattern.
00:14:02.320 | And when that match is not a good one,
00:14:04.440 | that's when all the mess happens,
00:14:06.880 | but most of the time it works,
00:14:08.960 | and so it's pretty--
00:14:09.920 | - Most of the time,
00:14:11.560 | the anticipation of what's going to happen next is correct.
00:14:14.800 | And most of the time,
00:14:17.760 | the plan about what you have to do is correct.
00:14:20.640 | And so most of the time, everything works just fine.
00:14:25.440 | What's interesting actually is that in some sense,
00:14:29.120 | system one is much better at what it does
00:14:32.560 | than system two is at what it does.
00:14:35.540 | That is, there is this quality of effortlessly solving
00:14:38.840 | enormously complicated problems,
00:14:41.120 | which clearly exists,
00:14:44.360 | so that a chess player, a very good chess player,
00:14:48.180 | all the moves that come to their mind are strong moves.
00:14:53.280 | So all the selection of strong moves happens
00:14:56.960 | unconsciously and automatically and very, very fast.
00:15:00.920 | And all that is in system one.
00:15:03.160 | So system two verifies.
00:15:07.440 | - So along this line of thinking,
00:15:09.600 | really what we are are machines
00:15:11.360 | that construct pretty effective system one.
00:15:14.640 | You could think of it that way.
00:15:17.160 | So we're not talking about humans,
00:15:19.600 | but if we think about building
00:15:21.160 | artificial intelligence systems, robots,
00:15:25.880 | do you think all the features and bugs
00:15:28.400 | that you have highlighted in human beings
00:15:31.740 | are useful for constructing AI systems?
00:15:34.520 | So both systems are useful for perhaps
00:15:37.400 | instilling in robots?
00:15:39.440 | - What is happening these days
00:15:42.680 | is that actually what is happening in deep learning
00:15:47.680 | is more like a system one product
00:15:52.360 | than like a system two product.
00:15:53.920 | I mean, deep learning matches patterns
00:15:57.280 | and anticipate what's going to happen,
00:15:59.400 | so it's highly predictive.
00:16:01.020 | What deep learning doesn't have,
00:16:06.280 | and many people think that this is the critical,
00:16:09.520 | it doesn't have the ability to reason,
00:16:12.600 | so there is no system two there.
00:16:15.720 | But I think very importantly,
00:16:18.360 | it doesn't have any causality
00:16:20.040 | or any way to represent meaning
00:16:22.880 | and to represent real interaction.
00:16:24.920 | So until that is solved,
00:16:28.480 | what can be accomplished is marvelous
00:16:33.320 | and very exciting, but limited.
00:16:35.720 | - That's actually really nice to think of
00:16:37.720 | current advances in machine learning
00:16:40.080 | is essentially system one advances.
00:16:42.560 | So how far can we get with just system one?
00:16:45.960 | If we think of deep learning
00:16:47.160 | and artificial intelligence systems--
00:16:48.720 | - I mean, you know, it's very clear
00:16:51.080 | that deep mind has already gone way beyond
00:16:54.400 | what people thought was possible.
00:16:56.560 | I think the thing that has impressed me most
00:17:00.240 | about the developments in AI is the speed.
00:17:03.920 | It's that things, at least in the context of deep learning,
00:17:08.000 | and maybe this is about to slow down,
00:17:10.960 | but things moved a lot faster than anticipated.
00:17:15.080 | The transition from solving chess to solving Go
00:17:20.000 | was, I mean, that's bewildering how quickly it went.
00:17:24.620 | The move from AlphaGo to AlphaZero
00:17:28.240 | is sort of bewildering the speed
00:17:30.920 | at which they accomplished that.
00:17:32.920 | Now, clearly, there are many problems
00:17:37.920 | that you can solve that way,
00:17:39.960 | but there are some problems
00:17:41.520 | for which you need something else.
00:17:44.040 | - Something like reasoning.
00:17:45.960 | - Well, reasoning and also, you know,
00:17:49.200 | one of the real mysteries,
00:17:51.520 | psychologist Gary Marcus, who is also a critic of AI,
00:17:56.040 | I mean, what he points out,
00:18:01.560 | and I think he has a point,
00:18:03.120 | is that humans learn quickly.
00:18:06.740 | Children don't need a million examples.
00:18:12.960 | They need two or three examples.
00:18:15.120 | So clearly, there is a fundamental difference.
00:18:18.320 | And what enables a machine to learn quickly,
00:18:23.320 | what you have to build into the machine,
00:18:26.960 | because it's clear that you have to build some expectations
00:18:30.360 | or something in the machine
00:18:32.240 | to make it ready to learn quickly,
00:18:34.120 | that at the moment seems to be unsolved.
00:18:39.440 | I'm pretty sure that DeepMind is working on it,
00:18:42.680 | but if they have solved it, I haven't heard yet.
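As a loose illustration of the "learning from two or three examples" point, here is a minimal nearest-prototype sketch in Python. The built-in assumption that nearby points share a label stands in for the "expectations" Kahneman says a machine would need; the data and labels are made up, and this is not a description of anything DeepMind or OpenAI has actually built.

```python
# A toy few-shot classifier: average the handful of examples per class into a
# prototype, then label a new point by its nearest prototype.
import numpy as np

def fit_prototypes(examples):
    """examples: dict mapping label -> list of feature vectors (two or three per label)."""
    return {label: np.mean(vectors, axis=0) for label, vectors in examples.items()}

def classify(prototypes, x):
    """Assign x to the label whose prototype is closest in Euclidean distance."""
    return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

# Three examples per class are enough for this toy rule to generalize.
examples = {
    "cat": [np.array([0.9, 0.1]), np.array([0.8, 0.2]), np.array([0.85, 0.15])],
    "dog": [np.array([0.1, 0.9]), np.array([0.2, 0.8]), np.array([0.15, 0.85])],
}
prototypes = fit_prototypes(examples)
print(classify(prototypes, np.array([0.7, 0.3])))  # -> "cat"
```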
00:18:47.680 | - They're trying to actually,
00:18:49.800 | them and OpenAI are trying to start
00:18:52.960 | to get to use neural networks to reason.
00:18:56.120 | So assemble knowledge.
00:18:58.200 | Of course, causality is, temporal causality
00:19:02.040 | is out of reach to most everybody.
00:19:05.480 | You mentioned the benefits of System 1
00:19:07.680 | is essentially that it's fast,
00:19:09.680 | allows us to function in the world.
00:19:11.120 | - Fast and skilled, yeah.
00:19:13.200 | - It's skilled.
00:19:14.040 | - And it has a model of the world.
00:19:16.480 | You know, in a sense, I mean,
00:19:17.680 | there was the earlier phase of AI
00:19:20.680 | attempted to model reasoning,
00:19:26.440 | and they were moderately successful,
00:19:28.400 | but reasoning by itself doesn't get you much.
00:19:32.220 | Deep learning has been much more successful
00:19:36.480 | in terms of what they can do.
00:19:38.920 | But now, it's an interesting question,
00:19:42.280 | whether it's approaching its limits.
00:19:44.080 | What do you think?
00:19:45.720 | - I think absolutely.
00:19:47.040 | So I just talked to Yann LeCun,
00:19:49.640 | you mentioned him. - I know him.
00:19:51.680 | - So he thinks that the limits,
00:19:54.760 | we're not going to hit the limits
00:19:56.280 | with neural networks,
00:19:58.040 | that ultimately this kind of System 1 pattern matching
00:20:00.680 | will start to look like System 2
00:20:04.480 | without significant transformation of the architecture.
00:20:09.920 | So I'm more with the majority of the people
00:20:12.840 | who think that yes, neural networks will hit a limit
00:20:16.200 | in their capability.
00:20:17.040 | - I mean, he, on the one hand,
00:20:18.880 | I have heard him tell Demis Hassabis essentially
00:20:23.160 | that what they have accomplished is not a big deal,
00:20:25.840 | that they have just touched,
00:20:27.880 | that basically they can't do unsupervised learning
00:20:32.160 | in an effective way.
00:20:34.800 | But you're telling me that he thinks
00:20:37.240 | that within the current architecture,
00:20:40.320 | you can do causality and reasoning?
00:20:42.480 | - So he's very much a pragmatist in a sense
00:20:45.800 | that's saying that we're very far away,
00:20:47.320 | that there's still, I think there's this idea
00:20:50.440 | that he says is we can only see
00:20:54.200 | one or two mountain peaks ahead
00:20:56.240 | and there might be either a few more after
00:20:58.720 | or thousands more after. - A lot.
00:21:00.480 | - Yeah, so that kind of idea.
00:21:02.040 | - I heard that metaphor.
00:21:03.920 | - Right, but nevertheless,
00:21:06.240 | he doesn't see the final answer
00:21:10.880 | looking fundamentally different from the one that we currently have.
00:21:15.240 | So neural networks being a huge part of that.
00:21:18.760 | - Yeah, I mean, that's very likely
00:21:21.360 | because pattern matching is so much of what's going on.
00:21:26.360 | - And you can think of neural networks
00:21:28.280 | as processing information sequentially.
00:21:30.760 | - Yeah, I mean, there is an important aspect to,
00:21:35.760 | for example, you get systems that translate
00:21:40.440 | and they do a very good job,
00:21:42.160 | but they really don't know what they're talking about.
00:21:44.960 | And for that, I'm really quite surprised.
00:21:49.640 | For that, you would need an AI that has sensation,
00:21:54.640 | an AI that is in touch with the world.
00:21:58.680 | - Yes, self-awareness,
00:22:00.600 | and maybe even something resembles consciousness
00:22:03.720 | kind of ideas.
00:22:04.800 | - Certainly awareness of what's going on
00:22:08.040 | so that the words have meaning
00:22:10.760 | or can get, are in touch with some perception
00:22:14.920 | or some action.
00:22:16.480 | - Yeah, so that's a big thing for Yann
00:22:18.800 | as what he refers to as grounding to the physical space.
00:22:23.800 | - So that's what we're talking about the same.
00:22:26.200 | - Yeah, so how you ground?
00:22:29.520 | - I mean, the grounding, without grounding,
00:22:32.560 | then you get a machine
00:22:34.480 | that doesn't know what it's talking about
00:22:36.440 | because it is talking about the world ultimately.
00:22:40.200 | - The question, the open question
00:22:41.440 | is what it means to ground.
00:22:42.840 | I mean, we're very human-centric in our thinking,
00:22:47.360 | but what does it mean for a machine
00:22:48.840 | to understand what it means to be in this world?
00:22:51.480 | Does it need to have a body?
00:22:54.760 | Does it need to have a finiteness like we humans have?
00:22:58.280 | All of these elements, it's a very, it's an open question.
00:23:02.320 | - You know, I'm not sure about having a body,
00:23:04.200 | but having a perceptual system,
00:23:06.040 | having a body would be very helpful too.
00:23:08.280 | I mean, if you think about human mimicking human,
00:23:13.240 | but having a perception, that seems to be essential
00:23:17.120 | so that you can build,
00:23:20.160 | you can accumulate knowledge about the world.
00:23:22.720 | So if you can imagine a human completely paralyzed
00:23:27.720 | and there is a lot that the human brain could learn,
00:23:31.760 | you know, with a paralyzed body.
00:23:33.520 | So if we got a machine that could do that,
00:23:37.520 | that would be a big deal.
00:23:39.400 | - And then the flip side of that,
00:23:41.640 | something you see in children
00:23:44.080 | and something in machine learning world
00:23:45.640 | is called active learning.
00:23:47.240 | Maybe it is also, is being able to play with the world.
00:23:51.400 | How important for developing system one or system two,
00:23:57.680 | do you think it is to play with the world?
00:24:00.000 | To be able to interact with the world?
00:24:00.840 | - Well, there's certainly a lot, a lot of what you learn
00:24:04.000 | as you learn to anticipate the outcomes of your actions.
00:24:08.800 | I mean, you can see that how babies learn it.
00:24:11.400 | You know, with their hands, how they learn, you know,
00:24:15.680 | to connect, you know, the movements of their hands
00:24:19.120 | with something that clearly is something
00:24:20.800 | that happens in the brain.
00:24:22.520 | And the ability of the brain to learn new patterns.
00:24:27.520 | So, you know, it's the kind of thing that you get
00:24:29.720 | with artificial limbs, that you connect it
00:24:33.200 | and then people learn to operate the artificial limb,
00:24:37.240 | you know, really impressively quickly, at least.
00:24:41.480 | From what I hear.
00:24:44.040 | So we have a system that is ready
00:24:46.360 | to learn the world through action.
00:24:48.120 | - At the risk of going into way too mysterious of land,
00:24:53.280 | what do you think it takes to build a system like that?
00:24:57.120 | Obviously we're very far from understanding
00:25:00.600 | how the brain works, but how difficult is it
00:25:05.600 | to build this mind of ours?
00:25:08.840 | - You know, I mean, I think that Yann LeCun's answer
00:25:11.640 | that we don't know how many mountains there are.
00:25:14.640 | I think that's a very good answer.
00:25:16.640 | I think that, you know, if you look at what Ray Kurzweil
00:25:21.200 | is saying, that strikes me as off the wall.
00:25:24.760 | But I think people are much more realistic than that,
00:25:29.400 | where actually Demis Hassabis is and Yann is.
00:25:33.560 | So the people are actually doing the work
00:25:37.040 | fairly realistic, I think.
00:25:39.720 | - To maybe phrase it another way,
00:25:41.560 | from a perspective not of building it,
00:25:43.760 | but from understanding it,
00:25:45.280 | how complicated are human beings in the following sense?
00:25:50.720 | You know, I work with autonomous vehicles and pedestrians.
00:25:54.720 | So we tried to model pedestrians.
00:25:57.360 | How difficult is it to model a human being,
00:26:01.040 | their perception of the world,
00:26:04.160 | the two systems they operate under,
00:26:06.200 | sufficiently to be able to predict
00:26:08.160 | whether the pedestrian's gonna cross the road or not?
00:26:11.120 | - I'm, you know, I'm fairly optimistic about that, actually,
00:26:14.720 | because what we're talking about
00:26:18.080 | is a huge amount of information that every vehicle has
00:26:23.080 | and that feeds into one system, into one gigantic system.
00:26:29.040 | And so anything that any vehicle learns
00:26:32.000 | becomes part of what the whole system knows.
00:26:34.920 | And with a system multiplier like that,
00:26:39.280 | there is a lot that you can do.
00:26:41.080 | So human beings are very complicated,
00:26:44.200 | but, and, you know, system is going to make mistakes,
00:26:48.120 | but human makes mistakes.
00:26:50.240 | I think that they'll be able to,
00:26:53.600 | I think they are able to anticipate pedestrians,
00:26:56.520 | otherwise a lot would happen.
00:26:58.600 | They're able to, you know,
00:27:01.920 | they're able to get into a roundabout
00:27:04.760 | and into traffic.
00:27:06.720 | So they must know both to expect or to anticipate
00:27:11.720 | how people will react when they're sneaking in.
00:27:16.080 | And there's a lot of learning that's involved in that.
00:27:19.480 | - Currently, the pedestrians are treated
00:27:22.440 | as things that cannot be hit,
00:27:26.560 | and they're not treated as agents
00:27:30.400 | with whom you interact in a game-theoretic way.
00:27:34.800 | So, I mean, it's not, it's a totally open problem,
00:27:38.760 | and every time somebody tries to solve it,
00:27:40.440 | it seems to be harder than we think.
00:27:43.080 | And nobody's really tried to seriously solve
00:27:45.400 | the problem of that dance,
00:27:47.400 | because I'm not sure if you've thought
00:27:49.280 | about the problem of pedestrians,
00:27:51.220 | but you're really putting your life
00:27:53.920 | in the hands of the driver.
00:27:55.800 | - You know, there is a dance,
00:27:57.120 | there's part of the dance that would be quite complicated.
00:28:00.840 | But for example, when I cross the street
00:28:03.400 | and there is a vehicle approaching,
00:28:05.320 | I look the driver in the eye,
00:28:07.520 | and I think many people do that.
00:28:09.720 | And that's a signal that I'm sending,
00:28:13.900 | and I would be sending that machine
00:28:15.680 | to an autonomous vehicle,
00:28:17.680 | and it had better understand it,
00:28:19.560 | because it means I'm crossing.
00:28:22.040 | - So, and there's another thing you do that actually,
00:28:25.480 | so I'll tell you what you do,
00:28:26.840 | 'cause I've watched hundreds of hours of video on this,
00:28:31.300 | is when you step in the street,
00:28:32.680 | you do that before you step in the street.
00:28:34.800 | And when you step in the street,
00:28:36.120 | you actually look away.
00:28:37.120 | - Look away.
00:28:37.960 | - Yeah.
00:28:39.000 | Now, what is that?
00:28:40.200 | What that's saying is,
00:28:43.280 | I mean, you're trusting that the car,
00:28:45.320 | who hasn't slowed down yet, will slow down.
00:28:48.360 | - Yeah, and you're telling him, I'm committed.
00:28:51.880 | I mean, this is like in a game of chicken.
00:28:53.760 | So, I'm committed, and if I'm committed, I'm looking away.
00:28:57.460 | So, there is, you just have to stop.
00:29:01.000 | - So, the question is whether a machine that observes that
00:29:04.000 | needs to understand mortality.
00:29:07.120 | - Here, I'm not sure that it's got to understand so much
00:29:12.120 | as it's got to anticipate.
00:29:14.840 | And here, but you're surprising me,
00:29:21.200 | because here, I would think that maybe you can anticipate
00:29:25.480 | without understanding,
00:29:27.160 | because I think this is clearly what's happening
00:29:30.400 | in playing Go or in playing chess.
00:29:32.840 | There's a lot of anticipation,
00:29:34.280 | and there is zero understanding.
00:29:36.360 | So, I thought that you didn't need a model of the human,
00:29:41.360 | and a model of the human mind to avoid hitting pedestrians.
00:29:49.080 | But you are suggesting that actually--
00:29:51.120 | - That you do, yeah.
00:29:52.200 | - You do.
00:29:53.520 | Then it's a lot harder, I thought.
00:29:57.560 | - And I have a follow-up question
00:29:59.020 | to see where your intuition lies,
00:30:01.360 | is it seems that almost every robot-human
00:30:04.200 | collaboration system is a lot harder than people realize.
00:30:08.960 | So, do you think it's possible for robots and humans
00:30:12.960 | to collaborate successfully?
00:30:14.760 | We talked a little bit about semi-autonomous vehicles,
00:30:19.840 | like in the Tesla Autopilot, but just in tasks in general.
00:30:24.120 | If you think, we talked about current neural networks
00:30:28.620 | being kind of system one,
00:30:30.300 | do you think those same systems can borrow humans
00:30:35.300 | for system two type tasks and collaborate successfully?
00:30:41.440 | - Well, I think that in any system where humans
00:30:46.660 | and the machine interact,
00:30:49.040 | the human will be superfluous within a fairly short time.
00:30:53.720 | That is, if the machine is advanced enough
00:30:56.280 | so that it can really help the human,
00:30:59.520 | then it may not need the human for a long time.
00:31:02.280 | Now, it would be very interesting if there are problems
00:31:07.280 | that for some reason the machine doesn't, cannot solve,
00:31:10.760 | but that people could solve,
00:31:12.840 | then you would have to build into the machine
00:31:14.800 | an ability to recognize that it is
00:31:18.440 | in that kind of problematic situation,
00:31:21.560 | and to call the human.
00:31:23.760 | That cannot be easy without understanding.
00:31:28.400 | That is, it must be very difficult
00:31:31.640 | to program a recognition that you are
00:31:35.760 | in a problematic situation
00:31:37.900 | without understanding the problem.
00:31:39.800 | - That's very true.
00:31:42.440 | In order to understand the full scope of situations
00:31:46.640 | that are problematic,
00:31:48.000 | you almost need to be smart enough
00:31:50.600 | to solve all those problems.
00:31:53.480 | - It's not clear to me how much
00:31:56.880 | the machine will need the human.
00:31:59.040 | I think the example of chess is very instructive.
00:32:02.620 | I mean, there was a time at which Kasparov was saying
00:32:05.400 | that human-machine combinations will beat everybody.
00:32:08.660 | Even Stockfish doesn't need people,
00:32:12.460 | and AlphaZero certainly doesn't need people.
00:32:16.040 | - The question is, just like you said,
00:32:18.080 | how many problems are like chess,
00:32:20.360 | and how many problems are the ones
00:32:22.160 | that are not like chess,
00:32:23.820 | where every problem probably in the end is like chess?
00:32:27.400 | The question is, how long is that transition period?
00:32:30.320 | - I mean, that's a question I would ask you
00:32:33.480 | in terms of, I mean, autonomous vehicle,
00:32:37.000 | just driving is probably a lot more complicated
00:32:39.640 | than Go to solve that.
00:32:41.040 | - Yes, and that's surprising.
00:32:43.080 | - Because it's open.
00:32:45.040 | No, I mean, that's not surprising to me
00:32:48.280 | because there is a hierarchical aspect to this,
00:32:53.280 | which is recognizing a situation,
00:32:59.080 | and then within the situation,
00:33:01.000 | bringing up the relevant knowledge.
00:33:04.080 | - Right.
00:33:04.920 | - And for that hierarchical type of system to work,
00:33:09.920 | you need a more complicated system than we currently have.
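A toy sketch of the hierarchical idea just described, assuming made-up situation labels and handlers: first recognize which kind of situation you are in, then bring up the knowledge relevant to that situation.

```python
# Purely illustrative two-stage structure -- not how any real driving stack works.

def recognize_situation(observation: dict) -> str:
    """Stage 1: coarse recognition of the kind of situation."""
    if observation.get("pedestrian_nearby"):
        return "yielding"
    if observation.get("entering_roundabout"):
        return "merging"
    return "cruising"

# Stage 2: each recognized situation pulls in its own specialized policy.
POLICIES = {
    "yielding": lambda obs: "slow down and watch the pedestrian",
    "merging":  lambda obs: "anticipate gaps and negotiate entry",
    "cruising": lambda obs: "maintain speed and lane",
}

def act(observation: dict) -> str:
    situation = recognize_situation(observation)      # recognize the situation
    return POLICIES[situation](observation)            # then apply relevant knowledge

print(act({"pedestrian_nearby": True}))  # -> "slow down and watch the pedestrian"
```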
00:33:16.360 | - A lot of people think, because as human beings,
00:33:19.160 | this is probably the cognitive biases,
00:33:22.640 | they think of driving as pretty simple
00:33:25.040 | because they think of their own experience.
00:33:28.440 | This is actually a big problem for AI researchers
00:33:32.040 | or people thinking about AI
00:33:33.920 | because they evaluate how hard a particular problem is
00:33:38.860 | based on very limited knowledge,
00:33:42.400 | based on how hard it is for them to do the task.
00:33:45.980 | And then they take for granted,
00:33:47.720 | maybe you can speak to that
00:33:49.240 | 'cause most people tell me driving is trivial.
00:33:53.920 | And humans, in fact, are terrible at driving
00:33:56.800 | is what people tell me.
00:33:58.080 | And I see humans,
00:33:59.840 | and humans are actually incredible at driving,
00:34:02.120 | and driving is really terribly difficult.
00:34:04.240 | - Yeah.
00:34:05.160 | - So is that just another element of the effects
00:34:08.600 | that you've described in your work on the psychology side?
00:34:15.120 | - No, I mean, I haven't really,
00:34:17.180 | I would say that my research has contributed nothing
00:34:21.340 | to understanding the ecology
00:34:23.940 | and to understanding the structure of situations
00:34:26.900 | and the complexity of problems.
00:34:28.860 | So all we know, it's very clear that Go,
00:34:35.300 | it's endlessly complicated, but it's very constrained.
00:34:40.820 | So, and in the real world, there are far fewer constraints
00:34:45.820 | and many more potential surprises.
00:34:50.040 | - So that's obvious
00:34:51.740 | because it's not always obvious to people, right?
00:34:54.200 | So when you think about-
00:34:55.640 | - Well, I mean, people thought that reasoning was hard
00:35:00.480 | and perceiving was easy,
00:35:02.720 | but they quickly learned that actually modeling vision
00:35:08.000 | was tremendously complicated,
00:35:10.020 | and modeling, even proving theorems
00:35:13.460 | was relatively straightforward.
00:35:15.900 | - To push back on that a little bit, on the quickly part,
00:35:19.620 | they haven't, it took several decades to learn that,
00:35:22.860 | and most people still haven't learned that.
00:35:25.240 | I mean, our intuition, of course, AI researchers have,
00:35:29.220 | but you drift a little bit outside the specific AI field,
00:35:33.780 | the intuition is still that perception is a solved task.
00:35:36.260 | - Yeah, I mean, that's true.
00:35:38.100 | Intuitions, the intuitions of the public
00:35:40.460 | haven't changed radically, and they are, as you said,
00:35:45.460 | they're evaluating the complexity of problems
00:35:48.460 | by how difficult it is for them to solve the problems.
00:35:52.500 | And that's got very little to do
00:35:55.460 | with the complexities of solving them in AI.
00:35:59.180 | - How do you think, from the perspective of AI researcher,
00:36:03.340 | do we deal with the intuitions of the public?
00:36:07.860 | So in trying to think, arguably,
00:36:11.580 | the combination of hype investment and the public intuition
00:36:16.100 | is what led to the AI winters.
00:36:18.780 | I'm sure that same could be applied to tech,
00:36:21.180 | or that the intuition of the public leads to media hype,
00:36:26.180 | leads to companies investing in the tech,
00:36:31.540 | and then the tech doesn't make the company's money,
00:36:34.700 | and then there's a crash.
00:36:36.620 | Is there a way to educate people,
00:36:38.700 | sort of to fight the, let's call it system one thinking?
00:36:42.840 | - In general, no.
00:36:45.620 | I think that's the simple answer.
00:36:49.000 | And it's going to take a long time
00:36:54.640 | before the understanding of what those systems can do
00:37:00.020 | becomes public knowledge.
00:37:03.480 | And then, and the expectations,
00:37:10.620 | you know, there are several aspects
00:37:12.460 | that are going to be very complicated.
00:37:14.300 | The fact that you have a device that cannot explain itself
00:37:25.060 | is a major, major difficulty.
00:37:28.140 | And we're already seeing that.
00:37:31.460 | I mean, this is really something that is happening.
00:37:34.220 | So it's happening in the judicial system.
00:37:37.540 | So you have systems that are clearly better
00:37:42.460 | at predicting parole violations than judges,
00:37:47.460 | but they can't explain their reasoning.
00:37:50.720 | And so people don't want to trust them.
00:37:55.820 | - We seem to, in system one even,
00:38:00.100 | use cues to make judgments about our environment.
00:38:05.100 | So this explainability point,
00:38:07.840 | do you think humans can explain stuff themselves?
00:38:10.860 | - No, but I mean,
00:38:13.740 | there is a very interesting aspect of that.
00:38:18.060 | Humans think they can explain themselves.
00:38:21.940 | So when you say something,
00:38:24.780 | and I ask you, why do you believe that?
00:38:27.100 | Then reasons will occur to you.
00:38:30.160 | But actually, my own belief is that in most cases,
00:38:34.900 | the reasons have very little to do
00:38:36.640 | with why you believe what you believe.
00:38:38.820 | So that the reasons are a story that comes to your mind
00:38:43.040 | when you need to explain yourself.
00:38:45.300 | But people traffic in those explanations.
00:38:50.300 | I mean, the human interaction depends
00:38:53.160 | on those shared fictions
00:38:55.020 | and the stories that people tell themselves.
00:38:58.540 | - You just made me actually realize,
00:39:00.380 | and we'll talk about stories in a second,
00:39:02.440 | that, not to be cynical about it,
00:39:06.940 | but perhaps there's a whole movement
00:39:09.380 | of people trying to do explainable AI.
00:39:12.240 | And really, we don't necessarily need to explain,
00:39:17.340 | AI doesn't need to explain itself.
00:39:19.300 | It just needs to tell a convincing story.
00:39:21.840 | - Yeah, absolutely.
00:39:23.460 | - The story doesn't necessarily need to reflect the truth.
00:39:28.460 | It just needs to be convincing.
00:39:31.620 | There's something to that.
00:39:32.740 | - You can say exactly the same thing
00:39:34.820 | in a way that sounds cynical or doesn't sound cynical.
00:39:38.460 | - Right, sure.
00:39:39.300 | - I mean, so, but the objective--
00:39:42.500 | - Brilliant.
00:39:43.340 | - Of having an explanation is to tell a story
00:39:47.020 | that will be acceptable to people.
00:39:51.220 | And for it to be acceptable and to be robustly acceptable,
00:39:55.700 | it has to have some elements of truth.
00:39:58.040 | But the objective is for people to accept it.
00:40:03.880 | - It's quite brilliant, actually.
00:40:06.960 | But so, on the stories that we tell,
00:40:11.380 | sorry to ask you the question
00:40:13.700 | that most people know the answer to,
00:40:15.420 | but you talk about two selves
00:40:19.100 | in terms of how life is lived.
00:40:21.180 | The experienced self and the remembering self.
00:40:24.700 | Can you describe the distinction between the two?
00:40:26.840 | - Well, sure.
00:40:27.680 | I mean, there is an aspect of life
00:40:32.020 | that occasionally, you know,
00:40:33.420 | most of the time we just live,
00:40:35.580 | and we have experiences, and they're better,
00:40:37.580 | and they're worse, and it goes on over time.
00:40:40.540 | And mostly we forget everything that happens,
00:40:43.340 | or we forget most of what happens.
00:40:45.660 | Then occasionally, you,
00:40:49.660 | when something ends or at different points,
00:40:52.980 | you evaluate the past, and you form a memory,
00:40:57.780 | and the memory is schematic.
00:40:59.740 | It's not that you can roll a film of an interaction.
00:41:03.460 | You construct, in effect, the elements of a story
00:41:07.100 | about an episode.
00:41:10.500 | So there is the experience,
00:41:14.220 | and there is the story that is created about the experience.
00:41:17.980 | And that's what I call the remembering.
00:41:20.420 | So I had the image of two selves.
00:41:23.220 | So there is a self that lives,
00:41:25.180 | and there is a self that evaluates life.
00:41:28.820 | Now, the paradox, and the deep paradox in that
00:41:32.220 | is that we have one system,
00:41:35.980 | or one self that does the living,
00:41:38.460 | but the other system, the remembering self,
00:41:42.400 | is all we get to keep.
00:41:44.180 | And basically, decision-making and everything that we do
00:41:49.180 | is governed by our memories,
00:41:51.940 | not by what actually happened.
00:41:54.420 | It's governed by the story that we told ourselves,
00:41:58.300 | or by the story that we're keeping.
00:42:00.420 | So that's the distinction.
00:42:04.060 | - I mean, there's a lot of brilliant ideas
00:42:05.900 | about the pursuit of happiness that come out of that.
00:42:08.940 | What are the properties of happiness
00:42:11.100 | which emerge from a remembering self?
00:42:14.020 | - There are properties of how we construct stories
00:42:17.860 | that are really important.
00:42:19.060 | So I studied a few,
00:42:23.020 | but a couple are really very striking.
00:42:28.020 | And one is that in stories, time doesn't matter.
00:42:32.940 | There's a sequence of events, or there are highlights,
00:42:37.220 | and how long it took.
00:42:41.960 | They lived happily ever after,
00:42:44.420 | and three years later, something.
00:42:47.140 | Time really doesn't matter.
00:42:49.340 | And in stories, events matter, but time doesn't.
00:42:54.340 | That leads to a very interesting set of problems,
00:43:00.180 | because time is all we got to live.
00:43:04.540 | I mean, time is the currency of life.
00:43:07.220 | And yet, time is not represented, basically,
00:43:11.060 | in evaluated memories.
00:43:13.440 | So that creates a lot of paradoxes that I've thought about.
00:43:18.440 | - Yeah, they're fascinating.
00:43:20.020 | But if you were to give advice
00:43:25.020 | on how one lives a happy life,
00:43:28.540 | based on such properties, what's the optimal?
00:43:31.820 | - You know, I gave up, I abandoned happiness research
00:43:36.700 | because I couldn't solve that problem.
00:43:38.900 | I couldn't see.
00:43:41.240 | And in the first place, it's very clear
00:43:44.460 | that if you do talk in terms of those two selves,
00:43:48.260 | then that what makes the remembering self happy
00:43:51.180 | and what makes experiencing self happy are different things.
00:43:55.040 | And I asked the question of,
00:43:58.980 | suppose you're planning a vacation,
00:44:01.300 | and you're just told that at the end of the vacation,
00:44:04.040 | you'll get an amnesic drug, so you remember nothing,
00:44:07.840 | and they'll also destroy all your photos,
00:44:10.060 | so there'll be nothing.
00:44:11.500 | Would you still go to the same vacation?
00:44:15.140 | And it turns out we go to vacations
00:44:21.860 | in large part to construct memories,
00:44:24.900 | not to have experiences, but to construct memories.
00:44:28.580 | And it turns out that the vacation
00:44:30.700 | that you would want for yourself,
00:44:32.460 | if you knew you will not remember,
00:44:35.140 | is probably not the same vacation
00:44:37.140 | that you will want for yourself if you will remember.
00:44:40.420 | So I have no solution to these problems,
00:44:44.700 | but clearly those are big issues, difficult issues.
00:44:48.940 | - You've talked about sort of how many minutes or hours
00:44:52.580 | you spend thinking about the vacation.
00:44:54.080 | It's an interesting way to think about it
00:44:56.020 | because that's how you really experience the vacation
00:44:59.480 | outside of being in it.
00:45:01.740 | But there's also a modern,
00:45:03.460 | I don't know if you think about this or interact with it,
00:45:06.220 | there's a modern way to magnify the remembering self,
00:45:11.220 | which is by posting on Instagram, on Twitter,
00:45:15.620 | on social networks.
00:45:17.300 | A lot of people live life for the picture that you take,
00:45:21.580 | that you post somewhere.
00:45:22.880 | And now thousands of people share it,
00:45:25.260 | and potentially millions,
00:45:27.340 | and then you can relive it even much more
00:45:29.120 | than just those minutes.
00:45:30.460 | Do you think about that magnification much?
00:45:34.220 | - You know, I'm too old for social networks.
00:45:37.140 | I've never seen Instagram,
00:45:40.460 | so I cannot really speak intelligently about those things.
00:45:45.300 | I'm just too old.
00:45:46.460 | - But it's interesting to watch
00:45:48.220 | the exact effects you've described.
00:45:49.060 | - I think it will make a very big difference.
00:45:51.900 | And it will also make a difference,
00:45:55.460 | and that I don't know whether,
00:45:57.340 | it's clear that in some ways
00:46:04.020 | the devices that serve us supplant function.
00:46:09.020 | So you don't have to remember phone numbers.
00:46:12.180 | You really don't have to know facts.
00:46:15.460 | I mean, the number of conversations I'm involved with
00:46:18.940 | where somebody says, "Well, let's look it up."
00:46:21.260 | So in a way, it's made conversations,
00:46:26.660 | well, it means that it's much less important to know things.
00:46:32.340 | No, it used to be very important to know things.
00:46:35.140 | This is changing.
00:46:36.620 | So the requirements that we have for ourselves
00:46:41.620 | and for other people are changing
00:46:47.620 | because of all those supports.
00:46:50.380 | And I have no idea what Instagram does, but it's--
00:46:55.100 | - Well, I'll tell you-- - I wish I knew.
00:46:56.580 | - I mean, I wish I could just have,
00:46:58.980 | my remembering self could enjoy this conversation,
00:47:01.820 | but I'll get to enjoy it even more by watching it.
00:47:06.180 | And then talking to others,
00:47:08.020 | it'll be about 100,000 people, as scary as this is to say,
00:47:11.820 | that will listen or watch this, right?
00:47:13.740 | It changes things.
00:47:15.980 | It changes the experience of the world.
00:47:18.780 | That you seek out experiences
00:47:20.380 | which could be shared in that way.
00:47:22.220 | And I haven't seen, it's the same effects that you described
00:47:27.140 | and I don't think the psychology of that magnification
00:47:30.540 | has been described yet 'cause it's a new world.
00:47:33.100 | - You know, the sharing,
00:47:34.740 | there was a time when people read books
00:47:39.940 | and you could assume that your friends
00:47:47.700 | had read the same books that you read.
00:47:50.340 | So there was--
00:47:52.540 | - Kind of invisible sharing theory.
00:47:54.380 | - There was a lot of sharing going on
00:47:56.900 | and there was a lot of assumed common knowledge.
00:48:00.660 | And that was built in.
00:48:02.660 | I mean, it was obvious that you had read the New York Times,
00:48:05.460 | it was obvious that you had read the reviews.
00:48:08.180 | I mean, so a lot was taken for granted that was shared.
00:48:13.140 | And when there were three television channels,
00:48:19.300 | it was obvious that you'd seen one of them,
00:48:22.100 | probably the same.
00:48:23.380 | So sharing--
00:48:27.780 | - Has always been there.
00:48:28.620 | - Was always there, it was just different.
00:48:31.220 | - At the risk of inviting mockery from you,
00:48:36.340 | let me say that I'm also a fan of Sartre and Camus
00:48:41.340 | and existentialist philosophers.
00:48:43.980 | And I'm joking, of course, about mockery,
00:48:46.940 | but from the perspective of the two selves,
00:48:50.660 | what do you think of the existentialist philosophy of life?
00:48:54.700 | So trying to really emphasize the experiencing self
00:48:59.180 | as the proper way to, or the best way to live life.
00:49:04.180 | - I don't know enough philosophy to answer that,
00:49:09.100 | but it's not, you know, the emphasis on experience
00:49:14.100 | is also the emphasis in Buddhism.
00:49:17.220 | - Yeah, right, that's right.
00:49:18.060 | - So that's, you just have got to experience things
00:49:23.460 | and not to evaluate, and not to pass judgment,
00:49:28.460 | and not to score, not to keep score.
00:49:33.020 | - When you look at the grand picture of experience,
00:49:36.780 | you think there's something to that,
00:49:38.660 | that one of the ways to achieve contentment
00:49:42.260 | and maybe even happiness is letting go
00:49:45.500 | of any of the things,
00:49:48.820 | any of the procedures of the remembering self.
00:49:52.260 | - Well, yeah, I mean, I think, you know,
00:49:54.620 | if one could imagine a life
00:49:56.500 | in which people don't score themselves,
00:49:59.660 | it feels as if that would be a better life,
00:50:04.380 | as if the self-scoring and, you know,
00:50:06.940 | how am I doing kind of question
00:50:10.220 | is not a very happy thing to have.
00:50:17.860 | But I got out of that field
00:50:20.420 | because I couldn't solve that problem.
00:50:22.860 | - Couldn't solve that.
00:50:23.700 | - And that was because my intuition was
00:50:27.220 | that the experiencing self, that's reality.
00:50:31.340 | But then it turns out that what people want for themselves
00:50:34.980 | is not experiences, they want memories,
00:50:37.220 | and they want a good story about their life.
00:50:39.740 | And so you cannot have a theory of happiness
00:50:42.860 | that doesn't correspond to what people want for themselves.
00:50:46.140 | And when I realized that this was where things were going,
00:50:51.140 | I really sort of left the field of research.
00:50:55.340 | - Do you think there's something instructive
00:50:57.100 | about this emphasis of reliving memories
00:51:00.540 | in building AI systems?
00:51:03.380 | So currently, artificial intelligence systems
00:51:05.940 | are more like experiencing self
00:51:10.620 | in that they react to the environment,
00:51:12.780 | there's some pattern formation like learning, so on,
00:51:17.060 | but you really don't construct memories
00:51:20.580 | except in reinforcement learning every once in a while
00:51:23.620 | that you replay over and over.
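For readers unfamiliar with the term, here is a minimal sketch of the experience-replay mechanism referred to above, where past transitions are stored and revisited during learning. The class and method names are illustrative assumptions, not any particular library's API.

```python
# A minimal replay buffer: lived transitions are stored and later "replayed
# over and over" by sampling random batches for further learning.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest memories are forgotten first

    def push(self, state, action, reward, next_state, done):
        """Store one lived transition -- roughly, one moment of experience."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Revisit a random batch of past transitions to learn from them again."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buffer = ReplayBuffer()
buffer.push(state=0, action=1, reward=0.5, next_state=1, done=False)
batch = buffer.sample(32)  # replayed during training, long after it was lived
```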
00:51:25.580 | - Yeah, but you know, that would, in principle,
00:51:28.260 | would not be--
00:51:30.100 | - Do you think that's useful?
00:51:31.380 | Do you think it's a feature or a bug of human beings
00:51:34.100 | that we look back?
00:51:37.260 | - Oh, I think that's definitely a feature.
00:51:40.260 | It's not a bug.
00:51:41.700 | I mean, you have to look back in order to look forward.
00:51:45.100 | So without looking back,
00:51:48.820 | you couldn't really intelligently look forward.
00:51:52.860 | - You're looking for the echoes of the same kind
00:51:54.860 | of experience in order to predict what the future holds?
00:51:58.020 | - Yeah.
00:51:59.260 | - Though Viktor Frankl, in his book,
00:52:02.220 | "Man's Search for Meaning," I'm not sure if you've read,
00:52:05.740 | describes his experience at the concentration camps
00:52:09.100 | during World War II as a way to describe that finding,
00:52:14.100 | identifying a purpose in life, a positive purpose in life,
00:52:18.460 | can save one from suffering.
00:52:20.100 | First of all, do you connect with the philosophy
00:52:23.500 | that he describes there?
00:52:25.100 | - Not really.
00:52:29.180 | So I can really see that somebody who has that feeling
00:52:36.900 | of purpose and meaning and so on,
00:52:39.060 | that that could sustain you.
00:52:41.260 | I, in general, don't have that feeling.
00:52:46.220 | And I'm pretty sure that if I were in a concentration camp,
00:52:49.980 | I'd give up and die.
00:52:51.740 | So he talks, he is a survivor.
00:52:56.780 | And he survived with that.
00:52:59.500 | And I'm not sure how essential to survival this sense is.
00:53:05.500 | But I do know when I think about myself
00:53:08.660 | that I would have given up.
00:53:10.060 | At, oh, this isn't going anywhere.
00:53:14.260 | And there is a sort of character
00:53:17.540 | that manages to survive in conditions like that.
00:53:22.540 | And then because they survive, they tell stories,
00:53:25.980 | and it sounds as if they survived
00:53:27.780 | because of what they were doing.
00:53:29.780 | We have no idea.
00:53:31.100 | They survived because the kind of people that they are,
00:53:34.180 | and they are the kind of people who survives
00:53:36.100 | and would tell themselves stories of a particular kind.
00:53:39.260 | So I'm not.
00:53:40.300 | - So you don't think seeking purpose
00:53:43.940 | is a significant driver in our behavior?
00:53:46.460 | - I mean, it's a very interesting question
00:53:50.060 | because when you ask people whether it's very important
00:53:53.380 | to have meaning in their life,
00:53:54.460 | they say, "Oh, yes, that's the most important thing."
00:53:57.220 | But when you ask people, "What kind of a day did you have?"
00:54:01.460 | And, you know, "What were the experiences
00:54:05.460 | "that you remember?"
00:54:06.580 | You don't get much meaning.
00:54:08.900 | You get social experiences.
00:54:11.420 | Then, and some people say that, for example,
00:54:16.420 | in, you know, taking care of children,
00:54:22.940 | the fact that they are your children
00:54:25.100 | and you're taking care of them makes a very big difference.
00:54:30.180 | I think that's entirely true,
00:54:32.260 | but it's more because of a story
00:54:37.580 | that we're telling ourselves,
00:54:38.940 | which is a very different story
00:54:40.660 | when we're taking care of our children
00:54:42.380 | or when we're taking care of other things.
00:54:44.780 | - Jumping around a little bit,
00:54:46.820 | in doing a lot of experiments, let me ask a question.
00:54:49.700 | Most of the work I do, for example, is in the real world,
00:54:54.300 | but most of the clean, good science that you can do
00:54:58.100 | is in the lab.
00:54:59.540 | So that distinction,
00:55:01.020 | do you think we can understand the fundamentals
00:55:05.220 | of human behavior through controlled experiments in the lab?
00:55:10.220 | If we talk about pupil diameter, for example,
00:55:15.060 | it's much easier to do
00:55:17.780 | when you can control lighting conditions, right?
00:55:19.860 | - Yeah, of course.
00:55:21.660 | - So when we look at driving,
00:55:24.340 | lighting variation destroys almost completely
00:55:27.700 | your ability to use pupil diameter.
00:55:30.500 | But in the lab, as I mentioned,
00:55:34.020 | semi-autonomous or autonomous vehicles,
00:55:36.340 | in driving simulators,
00:55:37.500 | we don't capture true, honest human behavior
00:55:42.500 | in that particular domain.
00:55:46.140 | So what's your intuition?
00:55:48.220 | How much of human behavior can we study
00:55:50.340 | in this controlled environment of the lab?
00:55:53.700 | - A lot, but you'd have to verify it.
00:55:57.020 | That your conclusions are basically limited
00:56:01.220 | to the situation, to the experimental situation.
00:56:05.220 | Then you have to jump the big inductive leap
00:56:08.540 | to the real world.
00:56:09.740 | So, and that's the flair.
00:56:13.940 | That's where the difference, I think,
00:56:16.620 | between the good psychologist and others that are mediocre
00:56:21.620 | is in the sense that your experiment
00:56:25.860 | captures something that's important
00:56:28.660 | and something that's real.
00:56:30.820 | And others are just running experiments.
00:56:33.500 | - So what is that?
00:56:34.340 | Like the birth of an idea to its development in your mind
00:56:38.300 | to something that leads to an experiment.
00:56:40.420 | Is that similar to maybe like what Einstein
00:56:43.980 | or a good physicist does? Is it your intuition?
00:56:46.860 | You basically use your intuition to build up.
00:56:49.340 | - Yeah, but I mean, it's very skilled intuition.
00:56:53.020 | - Right, absolutely.
00:56:53.860 | - I mean, I just had that experience, actually.
00:56:55.620 | I had an idea that turns out to be a very good idea
00:56:59.100 | a couple of days ago.
00:57:01.460 | And you have a sense of that building up.
00:57:06.020 | So I'm working with a collaborator
00:57:08.460 | and he essentially was saying,
00:57:10.460 | "What are you doing?
00:57:12.580 | "What's going on?"
00:57:14.140 | And I was really, I couldn't exactly explain it,
00:57:18.580 | but I knew this is going somewhere.
00:57:20.860 | But I've been around that game for a very long time.
00:57:23.940 | And so I can, you develop that anticipation
00:57:28.300 | that yes, this is worth following up.
00:57:31.740 | - There's something here.
00:57:32.820 | - That's part of the skill.
00:57:34.580 | - Is that something you can reduce to words
00:57:37.100 | in describing a process in the form of advice to others?
00:57:42.100 | - No.
00:57:44.060 | - Follow your heart, essentially?
00:57:45.980 | - I mean, it's like trying to explain
00:57:49.140 | what it's like to drive.
00:57:50.740 | It's not, you've got to break it apart
00:57:53.100 | and it's not--
00:57:54.020 | - And then you lose.
00:57:55.020 | - And then you lose the experience.
00:57:56.780 | - You mentioned collaboration.
00:57:59.420 | You've written about your collaboration
00:58:01.340 | with Amos Tversky,
00:58:04.820 | that, this is you writing,
00:58:06.620 | "The 12 or 13 years in which most of our work was joint
00:58:10.260 | "were years of interpersonal and intellectual bliss.
00:58:14.940 | "Everything was interesting, almost everything was funny.
00:58:17.740 | "And there was a current joy of seeing an idea take shape.
00:58:21.140 | "So many times in those years,
00:58:22.820 | "we shared the magical experience
00:58:25.100 | "of one of us saying something,
00:58:26.820 | "which the other one would understand more deeply
00:58:28.780 | "than the speaker had done.
00:58:30.780 | "Contrary to the old laws of information theory,
00:58:33.940 | "it was common for us to find
00:58:36.100 | "that more information was received than had been sent.
00:58:39.940 | "I have almost never had the experience with anyone else.
00:58:43.420 | "If you have not had it,
00:58:44.580 | "you don't know how marvelous collaboration can be."
00:58:48.980 | - So let me ask perhaps a silly question.
00:58:53.180 | How does one find and create such a collaboration?
00:58:58.740 | That may be asking, like, how does one find love, but--
00:59:01.180 | - Yeah, you have to be lucky.
00:59:04.860 | And I think you have to have the character for that,
00:59:10.140 | because I've had many collaborations.
00:59:13.140 | I mean, none were as exciting as with Amos,
00:59:16.220 | but I've had, and I'm having, just very.
00:59:20.900 | So it's a skill.
00:59:23.100 | I think I'm good at it.
00:59:24.420 | Not everybody's good at it,
00:59:27.500 | and then it's the luck of finding people
00:59:30.420 | who are also good at it.
00:59:32.020 | - Is there advice in a form for a young scientist
00:59:35.100 | who also seeks to violate this law of information theory?
00:59:41.100 | (silence)
00:59:43.260 | - I really think it's so much luck is involved.
00:59:52.420 | And those really serious collaborations,
00:59:57.420 | at least in my experience, are a very personal experience.
01:00:04.300 | And I have to like the person I'm working with.
01:00:08.100 | Otherwise, I mean, there is that kind of collaboration,
01:00:12.260 | which is like an exchange, a commercial exchange
01:00:16.220 | of I'm giving this, you give me that.
01:00:19.140 | But the real ones are interpersonal.
01:00:23.540 | They're between people who like each other
01:00:26.020 | and who like making each other think
01:00:28.940 | and who like the way that the other person responds
01:00:31.620 | to your thoughts.
01:00:32.700 | You have to be lucky.
01:00:36.380 | - Yeah, I mean, but I already noticed,
01:00:39.300 | even just me showing up here,
01:00:41.180 | you've quickly started digging in
01:00:44.060 | on a particular problem I'm working on,
01:00:46.220 | and already new information started to emerge.
01:00:49.780 | Is that a process, just a process of curiosity,
01:00:53.220 | of talking to people about problems and seeing?
01:00:56.500 | - I'm curious about anything to do with AI and robotics,
01:00:59.740 | and so, and I knew you were dealing with that,
01:01:03.740 | so I was curious.
01:01:05.140 | - Just follow your curiosity.
01:01:06.820 | Jumping around on the psychology front,
01:01:09.860 | the dramatic sounding terminology of replication crisis,
01:01:14.860 | but really just the effect that, at times,
01:01:20.900 | studies do not,
01:01:26.980 | are not fully generalizable.
01:01:28.580 | They don't--
01:01:29.420 | - You are being polite.
01:01:30.420 | It's worse than that.
01:01:32.900 | - Is it? So I'm actually not fully familiar
01:01:36.060 | with how bad it is, right?
01:01:38.020 | So what do you think is the source?
01:01:40.420 | Where do you think--
01:01:41.460 | - I think I know what's going on, actually.
01:01:44.260 | I mean, I have a theory about what's going on.
01:01:47.500 | And what's going on is that there is, first of all,
01:01:52.500 | a very important distinction
01:01:55.140 | between two types of experiments.
01:01:57.580 | And one type is within subject.
01:01:59.980 | So it's the same person has two experimental conditions.
01:02:04.980 | And the other type is between subjects,
01:02:08.420 | where some people are this condition,
01:02:10.060 | other people are that condition.
01:02:11.580 | They're different worlds.
01:02:13.420 | And between subject experiments are much harder to predict
01:02:18.420 | and much harder to anticipate.
01:02:21.860 | And the reason, and they're also more expensive
01:02:26.860 | because you need more people,
01:02:28.660 | and it's just, so between subject experiments
01:02:33.100 | is where the problem is.
01:02:34.620 | It's not so much in within subject experiments,
01:02:38.860 | it's really between.
01:02:40.220 | And there is a very good reason
01:02:42.420 | why the intuitions of researchers
01:02:46.580 | about between subject experiments are wrong.
01:02:50.420 | And that's because when you are a researcher,
01:02:54.100 | you are in a within subject situation.
01:02:56.780 | That is, you are imagining the two conditions
01:03:00.420 | and you see the causality and you feel it.
01:03:04.300 | But in the between subject condition,
01:03:06.700 | they don't, they live in one condition
01:03:10.980 | and the other one is just nowhere.
01:03:13.460 | So our intuitions are very weak
01:03:18.180 | about between subject experiments.
01:03:20.380 | And that, I think, is something that people haven't realized.
01:03:25.300 | And in addition, because of that,
01:03:29.820 | we have no idea about the power of manipulations,
01:03:34.780 | of experimental manipulations,
01:03:36.500 | because the same manipulation is much more powerful
01:03:41.020 | when you are in the two conditions
01:03:44.260 | than when you live in only one condition.
01:03:46.860 | And so the experimenters have very poor intuitions
01:03:50.740 | about between subject experiments.
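[A small simulation makes this point concrete. The effect size, noise levels, and sample size below are assumptions for illustration, not figures from the conversation: the same weak effect is detected far more often within subjects, where stable individual differences cancel out of the paired comparison, than between subjects, where they remain in the error term.]

```python
# Simulated power of within-subject vs. between-subject designs for the same
# weak effect. All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect = 40, 0.3              # subjects per condition, true effect size
person_sd, noise_sd = 1.0, 0.5   # stable individual differences vs. trial noise
reps = 2000
hits_within = hits_between = 0

for _ in range(reps):
    # Within-subject: each person sees both conditions, so their stable
    # baseline cancels out in the paired difference.
    person = rng.normal(0.0, person_sd, n)
    a = person + rng.normal(0.0, noise_sd, n)
    b = person + effect + rng.normal(0.0, noise_sd, n)
    hits_within += stats.ttest_rel(a, b).pvalue < 0.05

    # Between-subject: different people per condition, so individual
    # differences stay in the error term and swamp the weak effect.
    a = rng.normal(0.0, person_sd, n) + rng.normal(0.0, noise_sd, n)
    b = rng.normal(0.0, person_sd, n) + effect + rng.normal(0.0, noise_sd, n)
    hits_between += stats.ttest_ind(a, b).pvalue < 0.05

print(f"power, within-subject:  {hits_within / reps:.2f}")
print(f"power, between-subject: {hits_between / reps:.2f}")
```

[With these assumed numbers the within-subject design detects the effect several times more often, which is the gap in researchers' intuitions being described here.]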
01:03:53.100 | And there is something else,
01:03:56.620 | which is very important, I think,
01:03:58.860 | which is that almost all psychological hypotheses are true.
01:04:03.860 | That is, in the sense that, you know,
01:04:07.260 | directionally, if you have a hypothesis
01:04:11.020 | that A really causes B,
01:04:13.460 | that it's not true that A causes the opposite of B.
01:04:18.380 | Maybe A just has very little effect,
01:04:20.860 | but hypotheses are true, mostly.
01:04:24.700 | Except mostly, they're very weak.
01:04:27.420 | They're much weaker than you think
01:04:29.860 | when you are having images of.
01:04:34.300 | So the reason I'm excited about that
01:04:36.940 | is that I recently heard about some friends of mine
01:04:41.940 | who, they essentially funded 53 studies
01:04:49.900 | of behavioral change by 20 different teams of people
01:04:54.900 | with a very precise objective
01:04:57.380 | of changing the number of times that people go to the gym.
01:05:01.380 | And the success rate was zero.
01:05:09.940 | Not one of the 53 studies worked.
01:05:13.420 | Now, what's interesting about that
01:05:15.460 | is those are the best people in the field,
01:05:18.260 | and they have no idea what's going on.
01:05:21.460 | So they're not calibrated.
01:05:23.460 | They think that it's going to be powerful
01:05:25.500 | because they can imagine it,
01:05:27.060 | but actually it's just weak
01:05:29.460 | because you are focusing on your manipulation,
01:05:34.420 | and it feels powerful to you.
01:05:36.980 | There's a thing that I've written about
01:05:38.540 | that's called the focusing illusion.
01:05:40.780 | That is that when you think about something,
01:05:43.420 | it looks very important,
01:05:46.420 | more important than it really is.
01:05:48.220 | - More important than it really is,
01:05:49.380 | but if you don't see that effect, the 53 studies,
01:05:53.500 | doesn't that mean you just report that?
01:05:56.860 | So what's, I guess, the solution to that?
01:05:59.140 | - Well, I mean, the solution is for people
01:06:03.460 | to trust their intuitions less
01:06:06.140 | or to try out their intuitions before.
01:06:10.660 | I mean, experiments have to be pre-registered,
01:06:13.860 | and by the time you run an experiment,
01:06:16.500 | you have to be committed to it,
01:06:18.700 | and you have to run the experiment seriously enough
01:06:22.260 | and in public, and so this is happening.
01:06:26.420 | The interesting thing is what happens before
01:06:31.420 | and how do people prepare themselves
01:06:35.140 | and how they run pilot experiments.
01:06:37.540 | It's going to change the way psychology is done,
01:06:40.180 | and it's already happening.
01:06:41.980 | - Do you have a hope for,
01:06:44.100 | this might connect to the study sample size.
01:06:49.100 | - Yeah.
01:06:50.220 | - Do you have a hope for the internet or digitalization?
01:06:52.900 | - Well, I mean, you know, this is really happening.
01:06:54.660 | MTurk, everybody's running experiments on MTurk,
01:06:59.660 | and it's very cheap and very effective, so.
01:07:04.580 | - Do you think that changes psychology, essentially?
01:07:06.900 | Because you're thinking, you can now run 10,000 subjects.
01:07:09.460 | - Eventually, it will.
01:07:11.460 | I mean, I can't put my finger on how exactly,
01:07:16.460 | but that's been true in psychology.
01:07:20.900 | Whenever an important new method came in,
01:07:24.260 | it changes the field, so, and MTurk is really a method
01:07:29.260 | because it makes it very much easier
01:07:32.620 | to do something, to do some things.
01:07:35.420 | - Is there, undergrad students will ask me,
01:07:39.220 | how big a neural network should be
01:07:40.900 | for a particular problem?
01:07:42.380 | So let me ask you an equivalent question.
01:07:45.460 | How big, how many subjects does a study need
01:07:51.620 | for it to have a conclusive result?
01:07:53.900 | - Well, it depends on the strength of the effect.
01:07:57.140 | So if you're studying visual perception,
01:08:00.460 | or the perception of color,
01:08:02.140 | many of the classic results in visual,
01:08:07.140 | in color perception were done on three or four people,
01:08:09.940 | and I think one of them was colorblind,
01:08:11.940 | but, or partly colorblind.
01:08:14.380 | But on vision, you know, it's highly reliable.
01:08:19.380 | You don't need many people or a lot of replications
01:08:24.900 | for some types of neurological experiments.
01:08:29.540 | When you're studying weaker phenomena,
01:08:35.660 | and especially when you're studying them between subjects,
01:08:39.180 | then you need a lot more subjects
01:08:41.180 | than people have been running.
01:08:42.900 | And that is, that's one of the things
01:08:46.340 | that are happening in psychology now,
01:08:48.540 | is that the power, the statistical power of experiments
01:08:51.780 | is increasing rapidly.
01:08:54.100 | - Does the between subject, as the number of subjects
01:08:57.620 | goes to infinity approach?
01:08:59.500 | - Well, I mean, you know,
01:09:01.420 | goes to infinity is exaggerated,
01:09:03.460 | but people, the standard number of subjects
01:09:07.220 | for an experiment in psychology was 30 or 40.
01:09:11.900 | And for a weak effect, that's simply not enough.
01:09:16.460 | And you may need a couple of hundred.
01:09:21.140 | I mean, it's that sort of order of magnitude.
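[For a rough sense of the arithmetic behind "a couple of hundred", here is a standard power calculation for a two-group, between-subject comparison. The effect sizes are illustrative assumptions, not figures from the conversation.]

```python
# Subjects needed per group for 80% power at alpha = 0.05 in a two-sample
# t-test, across a few assumed standardized effect sizes (Cohen's d).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.3, 0.5):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: ~{n:.0f} subjects per group")
```

[A weak effect around d = 0.3 comes out to roughly 175 subjects per group, far beyond the traditional 30 or 40 and in line with "a couple of hundred".]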
01:09:26.140 | - What are the major disagreements in theories and effects
01:09:33.180 | that you've observed throughout your career
01:09:36.940 | that still stand today?
01:09:38.620 | You've worked on several fields.
01:09:40.380 | - Yeah.
01:09:41.220 | - But what still is out there as major disagreements
01:09:44.860 | that pops into your mind?
01:09:46.100 | - I've had one extreme experience of controversy
01:09:52.380 | with somebody who really doesn't like the work
01:09:56.100 | that Amos Tversky and I did,
01:09:58.260 | and he's been after us for 30 years or more, at least.
01:10:02.220 | - Do you wanna talk about it?
01:10:04.220 | - Well, I mean, his name is Gerd Gigerenzer.
01:10:06.340 | He's a well-known German psychologist.
01:10:09.580 | And that's the one controversy I have,
01:10:12.420 | which has been unpleasant,
01:10:16.580 | and no, I don't particularly want to talk about it.
01:10:20.900 | - But is there open questions, even in your own mind?
01:10:23.980 | Every once in a while, you know,
01:10:26.300 | we talked about semi-autonomous vehicles.
01:10:29.380 | In my own mind, I see what the data says,
01:10:31.540 | but I also am constantly torn.
01:10:34.780 | Do you have things where your studies have found something,
01:10:38.140 | but you're also intellectually torn about what it means,
01:10:41.580 | and there's maybe disagreements within your own mind
01:10:46.180 | about particular things?
01:10:47.580 | - I mean, you know, one of the things that are interesting
01:10:50.700 | is how difficult it is for people to change their mind.
01:10:54.580 | Essentially, you know, once they are committed,
01:10:59.580 | people just don't change their mind
01:11:01.700 | about anything that matters.
01:11:03.380 | And that is surprisingly, but it's true about scientists.
01:11:07.620 | So the controversy that I described,
01:11:10.420 | you know, that's been going on like 30 years,
01:11:13.260 | and it's never going to be resolved.
01:11:15.220 | And you build a system, and you live within that system,
01:11:20.300 | and other systems of ideas look foreign to you,
01:11:24.060 | and there is very little contact,
01:11:28.900 | and very little mutual influence.
01:11:31.300 | That happens a fair amount.
01:11:33.300 | - Do you have a hopeful advice or message on that?
01:11:38.300 | Thinking about science, thinking about politics,
01:11:42.420 | thinking about things that have impact on this world,
01:11:47.020 | how can we change our mind?
01:11:48.620 | - I think that, I mean, on things that matter,
01:11:54.340 | which are political or religious,
01:11:57.940 | people just don't change their mind.
01:12:02.420 | And by and large,
01:12:03.820 | there's very little that you can do about it.
01:12:06.420 | What does happen is that if leaders change their minds,
01:12:13.340 | so for example, the public, the American public,
01:12:18.300 | doesn't really believe in climate change,
01:12:20.820 | doesn't take it very seriously.
01:12:23.340 | But if some religious leaders decided
01:12:26.820 | this is a major threat to humanity,
01:12:29.620 | that would have a big effect.
01:12:31.060 | So that we have the opinions that we have,
01:12:35.380 | not because we know why we have them,
01:12:37.620 | but because we trust some people,
01:12:39.740 | and we don't trust other people.
01:12:41.980 | And so it's much less about evidence
01:12:46.820 | than it is about stories.
01:12:49.020 | - So the way, one way to change your mind
01:12:51.780 | isn't at the individual level,
01:12:53.860 | is that the leaders of the communities you look up to,
01:12:56.860 | the stories change,
01:12:58.180 | and therefore your mind changes with them.
01:13:01.100 | So there's a guy named Alan Turing,
01:13:04.140 | came up with a Turing test.
01:13:05.580 | - Yeah.
01:13:06.420 | - What do you think is a good test of intelligence?
01:13:11.580 | Perhaps we're drifting in a topic
01:13:15.260 | that we're maybe philosophizing about,
01:13:19.100 | but what do you think is a good test for intelligence
01:13:21.100 | for an artificial intelligence system?
01:13:23.860 | - Well, the standard definition of, you know,
01:13:27.980 | of artificial general intelligence
01:13:30.740 | is that it can do anything that people can do,
01:13:33.740 | and it can do them better.
01:13:35.340 | - Yes.
01:13:36.180 | - What we are seeing is that in many domains,
01:13:39.460 | you have domain-specific,
01:13:41.860 | and, you know, devices or programs or software,
01:13:48.220 | and they beat people easily in specified way.
01:13:53.060 | What we are very far from is that general ability,
01:13:57.180 | a general purpose intelligence.
01:13:59.740 | So we,
01:14:00.580 | in machine learning,
01:14:04.940 | people are approaching something more general.
01:14:07.500 | I mean, for AlphaZero,
01:14:09.420 | it was much more general than AlphaGo,
01:14:14.100 | but it's still extraordinarily narrow and specific
01:14:20.140 | in what it can do.
01:14:21.740 | - So a test-- - So we're quite far
01:14:23.220 | from something that can, in every domain,
01:14:27.820 | think like a human except better.
01:14:30.660 | - The aspect of the Turing test that has been criticized
01:14:34.020 | is natural language conversation,
01:14:36.300 | that it's too simplistic,
01:14:38.140 | that it's easy to quote-unquote pass
01:14:40.580 | under the constraints specified.
01:14:43.460 | What aspect of conversation would impress you
01:14:45.860 | if you heard it?
01:14:46.940 | Is it humor?
01:14:48.300 | Is it,
01:14:51.220 | what would impress the heck out of you
01:14:53.740 | if you saw it in conversation?
01:14:55.660 | - Yeah, I mean, certainly wit would,
01:14:57.580 | you know, wit would be impressive,
01:14:59.740 | and humor would be more impressive
01:15:05.540 | than just factual conversation,
01:15:08.100 | which I think is easy.
01:15:11.100 | And allusions would be interesting,
01:15:14.380 | and metaphors would be interesting.
01:15:19.060 | I mean, but new metaphors,
01:15:20.780 | not practiced metaphors.
01:15:23.780 | So there is a lot that would be sort of impressive,
01:15:28.060 | that is completely natural in conversation,
01:15:32.220 | but that you really wouldn't expect.
01:15:34.700 | - Does the possibility of creating
01:15:37.060 | a human-level intelligence
01:15:38.540 | or superhuman-level intelligence system
01:15:41.780 | excite you, scare you?
01:15:44.340 | - Well, I mean-- - How does it make you feel?
01:15:47.500 | - I find the whole thing fascinating.
01:15:50.580 | Absolutely fascinating.
01:15:51.900 | - So exciting.
01:15:52.740 | - I think, and exciting.
01:15:54.140 | It's also terrifying, you know,
01:15:56.380 | but I'm not going to be around to see it.
01:16:01.380 | And so I'm curious about what is happening now.
01:16:06.020 | But I also know that predictions about it are silly.
01:16:10.860 | (laughing)
01:16:12.020 | We really have no idea what it will look like
01:16:14.860 | 30 years from now, no idea.
01:16:18.340 | - Speaking of silly, bordering on the profound,
01:16:22.460 | let me ask the question of,
01:16:24.620 | in your view, what is the meaning of it all?
01:16:28.660 | The meaning of life?
01:16:30.580 | These descendant of great apes that we are,
01:16:33.940 | why, what drives us as a civilization,
01:16:37.060 | as a human being, as a force behind
01:16:40.380 | everything that you've observed and studied?
01:16:42.580 | Is there any answer, or is it all just a beautiful mess?
01:16:48.740 | - There is no answer that I can understand.
01:16:53.180 | And I'm not actively looking for one.
01:16:59.140 | - Do you think an answer exists?
01:17:02.100 | - No.
01:17:03.060 | There is no answer that we can understand.
01:17:05.860 | I'm not qualified to speak about what we cannot understand,
01:17:09.060 | but there is, I know that we cannot understand reality.
01:17:16.980 | I mean, there are a lot of things that we can do.
01:17:19.020 | I mean, you know, gravity waves,
01:17:21.620 | I mean, that's a big moment for humanity.
01:17:24.260 | And when you imagine that ape, you know,
01:17:27.100 | being able to go back to the Big Bang,
01:17:30.900 | - But the why is bigger than us.
01:17:37.620 | - The why is hopeless, really.
01:17:40.820 | - Danny, thank you so much.
01:17:41.860 | It was an honor.
01:17:42.700 | Thank you for speaking today.
01:17:43.540 | - Thank you.
01:17:45.020 | - Thanks for listening to this conversation.
01:17:47.140 | And thank you to our presenting sponsor, Cash App.
01:17:50.340 | Download it, use code LexPodcast.
01:17:52.980 | You'll get $10 and $10 will go to FIRST,
01:17:55.780 | a STEM education nonprofit that inspires hundreds
01:17:58.580 | of thousands of young minds to become future leaders
01:18:01.740 | and innovators.
01:18:03.140 | If you enjoy this podcast, subscribe on YouTube,
01:18:05.980 | give it five stars on Apple Podcast, follow on Spotify,
01:18:09.260 | support on Patreon, or simply connect with me on Twitter.
01:18:13.700 | And now let me leave you with some words of wisdom
01:18:16.300 | from Daniel Kahneman.
01:18:17.580 | Intelligence is not only the ability to reason,
01:18:21.820 | it is also the ability to find relevant material
01:18:24.820 | in memory and to deploy attention when needed.
01:18:27.980 | Thank you for listening and hope to see you next time.
01:18:31.780 | (upbeat music)