Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65
Chapters
0:00
3:03 World War Two Taught Us about Human Psychology
8:59 System One
16:38 Advances in Machine Learning
21:14 Neural Networks
22:20 Grounding to the Physical Space
23:45 Active Learning
42:08 The Properties of Happiness
65:38 The Focusing Illusion
73:19 Good Test for Intelligence for an Artificial Intelligence System
78:14 Words of Wisdom
The following is a conversation with Daniel Kahneman, 00:00:23.560 |
on cognitive biases, prospect theory and happiness. 00:00:29.700 |
is the dichotomy between two modes of thought. 00:00:36.600 |
System two is slower, more deliberative and more logical. 00:00:43.180 |
associated with each of these two types of thinking. 00:00:54.080 |
for those of us seeking to engineer intelligence systems. 00:01:16.940 |
I'll do one or two minutes after introducing the episode 00:01:33.020 |
I personally use Cash App to send money to friends, 00:01:42.860 |
You can buy fractions of a stock, say $1 worth, 00:01:47.920 |
Brokerage services are provided by Cash App Investing, 00:01:56.400 |
to support one of my favorite organizations called FIRST, 00:01:59.820 |
best known for their FIRST robotics and Lego competitions. 00:02:03.340 |
They educate and inspire hundreds of thousands of students 00:02:08.460 |
and have a perfect rating on Charity Navigator, 00:02:15.140 |
When you get Cash App from the App Store or Google Play 00:02:26.500 |
that I've personally seen inspire girls and boys 00:02:32.500 |
And now here's my conversation with Daniel Kahneman. 00:02:35.800 |
You tell a story of an SS soldier early in the war, 00:02:40.020 |
World War II, in Nazi occupied France and Paris, 00:02:59.560 |
that was significantly impacted by the war as well, 00:03:54.400 |
because it's very clear that if it could happen in Germany, 00:04:23.200 |
so that you treat them not as people anymore, 00:04:28.960 |
And the same way that you can slaughter animals 00:04:39.920 |
I think the combination of dehumanizing the other side 00:04:46.920 |
and having uncontrolled power over other people, 00:05:03.360 |
And he was perfectly capable of killing a lot of people, 00:05:10.200 |
- But what did the Jewish people mean to Nazis? 00:05:15.280 |
So what, the dismissal of Jews as worthy of-- 00:05:20.280 |
- Again, this is surprising that it was so extreme, 00:05:30.640 |
but the distinction between the in-group and the out-group, 00:05:42.240 |
and the willingness to dehumanize the out-group, 00:05:51.800 |
probably didn't need the Holocaust to teach us that, 00:06:00.000 |
of what can happen to people and what people can do. 00:06:05.000 |
- So the effect of the in-group and the out-group? 00:06:17.360 |
There was no empathy, or very, very little empathy left. 00:06:30.040 |
the empathy disappeared, if there was initially. 00:06:34.600 |
And the fact that everybody around you was doing it, 00:07:01.000 |
were just particularly efficient and disciplined, 00:07:10.800 |
- Are these artifacts of history, or is it human nature? 00:07:22.480 |
and then they become less human, they become different. 00:07:30.240 |
outside of concentration camps in World War II, 00:07:33.720 |
it seems that war brings out darker sides of human nature, 00:07:38.520 |
but also the beautiful things about human nature. 00:07:41.160 |
- Well, what it brings out is the loyalty among soldiers. 00:07:51.120 |
Male bonding, I think, is a very real thing that happens. 00:08:02.880 |
to friendship under risk, and to shared risk. 00:08:17.160 |
- So let's talk about psychology a little bit. 00:08:27.560 |
system one, the fast, instinctive, and emotional one, 00:08:32.360 |
and system two, the slower, deliberate, logical one. 00:08:41.320 |
can you describe distinguishing characteristics 00:08:45.520 |
for people who have not read your book of the two systems? 00:08:49.840 |
- Well, I mean, the word system is a bit misleading, 00:08:54.320 |
but at the same time it's misleading, it's also very useful. 00:09:01.560 |
it's easier to think of it as a family of activities. 00:09:10.040 |
there are different ways for ideas to come to mind. 00:09:18.000 |
and the example, a standard example is two plus two, 00:09:24.280 |
And in other cases, you've got to do something, 00:09:28.160 |
you've got to work in order to produce the idea. 00:09:30.760 |
And my example, I always give the same pair of numbers 00:09:36.760 |
- You have to perform some algorithm in your head, 00:09:45.640 |
except something comes to mind, which is the algorithm, 00:09:51.880 |
And then it's work, and it engages short-term memory, 00:09:58.160 |
and it makes you incapable of doing other things 00:10:08.240 |
and there is a limited capacity for mental effort, 00:10:11.160 |
whereas system one is effortless, essentially. 00:10:18.840 |
it's really convenient to talk about two systems, 00:10:21.120 |
but you also mentioned just now and in general 00:10:24.320 |
that there is no distinct two systems in the brain, 00:10:29.200 |
from a neurobiological, even from a psychological perspective. 00:10:55.960 |
Or do you not think of it at all in those terms 00:10:59.720 |
that it's all a mush, and these two things just emerge? 00:11:02.320 |
- You know, evolutionary theorizing about this 00:11:19.120 |
and that includes an ability to understand the world, 00:11:22.920 |
at least to the extent that they can predict. 00:11:27.200 |
but they can anticipate what's going to happen. 00:11:31.160 |
And that's a key form of understanding the world. 00:11:34.760 |
And my crude idea is that, what I call system two, 00:11:47.600 |
and there is the capacity of manipulating ideas, 00:12:03.840 |
that, without language, and without the very large brain 00:12:08.440 |
that we have compared to others, would be impossible. 00:12:11.720 |
Now, system one is more like what the animals are, 00:12:25.560 |
I mean, you know, I'm not choosing every word 00:12:30.200 |
The words, I have some idea, and then the words come out. 00:12:43.640 |
and we should be careful about the voice it provides. 00:12:48.280 |
- Well, I mean, you know, we have to trust it 00:12:57.560 |
System two, if we're dependent on system two for survival, 00:13:01.880 |
we wouldn't survive very long because it's very slow. 00:13:07.520 |
I mean, many things depend on their being automatic. 00:13:17.960 |
It contains skills that clearly have been learned 00:13:25.800 |
or speaking, in fact, skilled behavior has to be learned. 00:13:40.760 |
where driving is not automatic before it becomes automatic. 00:13:48.360 |
this is where you talk about heuristic and biases 00:13:56.400 |
and then system one essentially matches a new experience 00:14:11.560 |
the anticipation of what's going to happen next is correct. 00:14:17.760 |
the plan about what you have to do is correct. 00:14:20.640 |
And so most of the time, everything works just fine. 00:14:25.440 |
What's interesting actually is that in some sense, 00:14:35.540 |
That is, there is this quality of effortlessly solving 00:14:44.360 |
so that a chess player, a very good chess player, 00:14:48.180 |
all the moves that come to their mind are strong moves. 00:14:56.960 |
unconsciously and automatically and very, very fast. 00:15:42.680 |
is that actually what is happening in deep learning 00:16:06.280 |
and many people think that this is the critical, 00:17:03.920 |
It's that things, at least in the context of deep learning, 00:17:10.960 |
but things moved a lot faster than anticipated. 00:17:15.080 |
The transition from solving chess to solving Go 00:17:20.000 |
was, I mean, that's bewildering how quickly it went. 00:17:51.520 |
psychologist Gary Marcus, who is also a critic of AI, 00:18:15.120 |
So clearly, there is a fundamental difference. 00:18:26.960 |
because it's clear that you have to build some expectations 00:18:39.440 |
I'm pretty sure that DeepMind is working on it, 00:18:42.680 |
but if they have solved it, I haven't heard yet. 00:19:28.400 |
but reasoning by itself doesn't get you much. 00:19:58.040 |
that ultimately this kind of System 1 pattern matching 00:20:04.480 |
without significant transformation of the architecture. 00:20:12.840 |
who think that yes, neural networks will hit a limit 00:20:18.880 |
I have heard him tell Demis Hassabis essentially 00:20:23.160 |
that what they have accomplished is not a big deal, 00:20:27.880 |
that basically they can't do unsupervised learning 00:20:47.320 |
that there's still, I think there's this idea 00:21:10.880 |
not fundamentally looking like one that we currently have. 00:21:15.240 |
So neural networks being a huge part of that. 00:21:21.360 |
because pattern matching is so much of what's going on. 00:21:30.760 |
- Yeah, I mean, there is an important aspect to, 00:21:42.160 |
but they really don't know what they're talking about. 00:21:49.640 |
For that, you would need an AI that has sensation, 00:22:00.600 |
and maybe even something that resembles consciousness 00:22:10.760 |
or can get, are in touch with some perception 00:22:18.800 |
as what he refers to as grounding to the physical space. 00:22:23.800 |
- So that's what we're talking about the same. 00:22:36.440 |
because it is talking about the world ultimately. 00:22:42.840 |
I mean, we're very human-centric in our thinking, 00:22:48.840 |
to understand what it means to be in this world? 00:22:54.760 |
Does it need to have a finiteness like we humans have? 00:22:58.280 |
All of these elements, it's a very, it's an open question. 00:23:02.320 |
- You know, I'm not sure about having a body, 00:23:08.280 |
I mean, if you think about human mimicking human, 00:23:13.240 |
but having a perception, that seems to be essential 00:23:20.160 |
you can accumulate knowledge about the world. 00:23:22.720 |
So if you can imagine a human completely paralyzed 00:23:27.720 |
and there is a lot that the human brain could learn, 00:23:47.240 |
Maybe it is also, is being able to play with the world. 00:23:51.400 |
How important for developing system one or system two, 00:24:00.840 |
- Well, there's certainly a lot, a lot of what you learn 00:24:04.000 |
as you learn to anticipate the outcomes of your actions. 00:24:08.800 |
I mean, you can see that how babies learn it. 00:24:11.400 |
You know, with their hands, how they learn, you know, 00:24:15.680 |
to connect, you know, the movements of their hands 00:24:22.520 |
And the ability of the brain to learn new patterns. 00:24:27.520 |
So, you know, it's the kind of thing that you get 00:24:33.200 |
and then people learn to operate the artificial limb, 00:24:37.240 |
you know, really impressively quickly, at least. 00:24:48.120 |
- At the risk of going into way too mysterious of land, 00:24:53.280 |
what do you think it takes to build a system like that? 00:25:08.840 |
- You know, I mean, I think that Yann LeCun's answer 00:25:11.640 |
that we don't know how many mountains there are. 00:25:16.640 |
I think that, you know, if you look at what Ray Kurzweil 00:25:24.760 |
But I think people are much more realistic than that, 00:25:45.280 |
how complicated are human beings in the following sense? 00:25:50.720 |
You know, I work with autonomous vehicles and pedestrians. 00:26:08.160 |
whether the pedestrian's gonna cross the road or not? 00:26:11.120 |
- I'm, you know, I'm fairly optimistic about that, actually, 00:26:18.080 |
is a huge amount of information that every vehicle has 00:26:23.080 |
and that feeds into one system, into one gigantic system. 00:26:44.200 |
but, and, you know, system is going to make mistakes, 00:26:53.600 |
I think they are able to anticipate pedestrians, 00:27:06.720 |
So they must know both to expect or to anticipate 00:27:11.720 |
how people will react when they're sneaking in. 00:27:16.080 |
And there's a lot of learning that's involved in that. 00:27:30.400 |
with whom you interact in a game-theoretic way. 00:27:34.800 |
So, I mean, it's not, it's a totally open problem, 00:27:57.120 |
there's part of the dance that would be quite complicated. 00:28:22.040 |
- So, and there's another thing you do that actually, 00:28:26.840 |
'cause I've watched hundreds of hours of video on this, 00:28:48.360 |
- Yeah, and you're telling him, I'm committed. 00:28:53.760 |
So, I'm committed, and if I'm committed, I'm looking away. 00:29:01.000 |
- So, the question is whether a machine that observes that 00:29:07.120 |
- Here, I'm not sure that it's got to understand so much 00:29:21.200 |
because here, I would think that maybe you can anticipate 00:29:27.160 |
because I think this is clearly what's happening 00:29:36.360 |
So, I thought that you didn't need a model of the human, 00:29:41.360 |
and a model of the human mind to avoid hitting pedestrians. 00:30:04.200 |
collaboration system is a lot harder than people realize. 00:30:08.960 |
So, do you think it's possible for robots and humans 00:30:14.760 |
We talked a little bit about semi-autonomous vehicles, 00:30:19.840 |
like in the Tesla, Autopilot, but just in tasks in general. 00:30:24.120 |
If you think, we talked about current neural networks 00:30:30.300 |
do you think those same systems can borrow humans 00:30:35.300 |
for system two type tasks and collaborate successfully? 00:30:41.440 |
- Well, I think that in any system where humans 00:30:49.040 |
the human will be superfluous within a fairly short time. 00:30:59.520 |
then it may not need the human for a long time. 00:31:02.280 |
Now, it would be very interesting if there are problems 00:31:07.280 |
that for some reason the machine doesn't, cannot solve, 00:31:12.840 |
then you would have to build into the machine 00:31:42.440 |
In order to understand the full scope of situations 00:31:59.040 |
I think the example of chess is very instructive. 00:32:02.620 |
I mean, there was a time at which Kasparov was saying 00:32:05.400 |
that human-machine combinations will beat everybody. 00:32:12.460 |
and AlphaZero certainly doesn't need people. 00:32:23.820 |
where every problem probably in the end is like chess? 00:32:27.400 |
The question is, how long is that transition period? 00:32:37.000 |
just driving is probably a lot more complicated 00:32:48.280 |
because there is a hierarchical aspect to this, 00:33:04.920 |
- And for that hierarchical type of system to work, 00:33:09.920 |
you need a more complicated system than we currently have. 00:33:16.360 |
- A lot of people think, because as human beings, 00:33:28.440 |
This is actually a big problem for AI researchers 00:33:33.920 |
because they evaluate how hard a particular problem is 00:33:42.400 |
based on how hard it is for them to do the task. 00:33:49.240 |
'cause most people tell me driving is trivial. 00:33:59.840 |
and humans are actually incredible at driving, 00:34:05.160 |
- So is that just another element of the effects 00:34:08.600 |
that you've described in your work on the psychology side? 00:34:17.180 |
I would say that my research has contributed nothing 00:34:23.940 |
and to understanding the structure of situations 00:34:35.300 |
it's endlessly complicated, but it's very constrained. 00:34:40.820 |
So, and in the real world, there are far fewer constraints 00:34:51.740 |
because it's not always obvious to people, right? 00:34:55.640 |
- Well, I mean, people thought that reasoning was hard 00:35:02.720 |
but they quickly learned that actually modeling vision 00:35:15.900 |
- To push back on that a little bit, on the quickly part, 00:35:19.620 |
they haven't, it took several decades to learn that, 00:35:25.240 |
I mean, our intuition, of course, AI researchers have, 00:35:29.220 |
but you drift a little bit outside the specific AI field, 00:35:33.780 |
the intuition is still that perception is a solved task. 00:35:40.460 |
haven't changed radically, and they are, as you said, 00:35:45.460 |
they're evaluating the complexity of problems 00:35:48.460 |
by how difficult it is for them to solve the problems. 00:35:59.180 |
- How do you think, from the perspective of AI researcher, 00:36:03.340 |
do we deal with the intuitions of the public? 00:36:11.580 |
the combination of hype investment and the public intuition 00:36:21.180 |
or that the intuition of the public leads to media hype, 00:36:31.540 |
and then the tech doesn't make the company's money, 00:36:38.700 |
sort of to fight the, let's call it system one thinking? 00:36:54.640 |
before the understanding of what those systems can do 00:37:14.300 |
The fact that you have a device that cannot explain itself 00:37:31.460 |
I mean, this is really something that is happening. 00:38:00.100 |
use cues to make judgments about our environment. 00:38:07.840 |
do you think humans can explain stuff themselves? 00:38:30.160 |
But actually, my own belief is that in most cases, 00:38:38.820 |
So that the reasons are a story that comes to your mind 00:39:12.240 |
And really, we don't necessarily need to explain, 00:39:23.460 |
- The story doesn't necessarily need to reflect the truth. 00:39:34.820 |
in a way that sounds cynical or doesn't sound cynical. 00:39:43.340 |
- Of having an explanation is to tell a story 00:39:51.220 |
And for it to be acceptable and to be robustly acceptable, 00:39:58.040 |
But the objective is for people to accept it. 00:40:21.180 |
The experienced self and the remembering self. 00:40:24.700 |
Can you describe the distinction between the two? 00:40:40.540 |
And mostly we forget everything that happens, 00:40:52.980 |
you evaluate the past, and you form a memory, 00:40:59.740 |
It's not that you can roll a film of an interaction. 00:41:03.460 |
You construct, in effect, the elements of a story 00:41:14.220 |
and there is the story that is created about the experience. 00:41:28.820 |
Now, the paradox, and the deep paradox in that 00:41:44.180 |
And basically, decision-making and everything that we do 00:41:54.420 |
It's governed by the story that we told ourselves, 00:42:05.900 |
about the pursuit of happiness that come out of that. 00:42:14.020 |
- There are properties of how we construct stories 00:42:28.020 |
And one is that in stories, time doesn't matter. 00:42:32.940 |
There's a sequence of events, or there are highlights, 00:42:49.340 |
And in stories, events matter, but time doesn't. 00:42:54.340 |
That leads to a very interesting set of problems, 00:43:13.440 |
So that creates a lot of paradoxes that I've thought about. 00:43:28.540 |
based on such properties, what's the optimal? 00:43:31.820 |
- You know, I gave up, I abandoned happiness research 00:43:44.460 |
that if you do talk in terms of those two selves, 00:43:48.260 |
then what makes the remembering self happy 00:43:51.180 |
and what makes the experiencing self happy are different things. 00:44:01.300 |
and you're just told that at the end of the vacation, 00:44:04.040 |
you'll get an amnesic drug, so you remember nothing, 00:44:24.900 |
not to have experiences, but to construct memories. 00:44:37.140 |
that you will want for yourself if you will remember. 00:44:44.700 |
but clearly those are big issues, difficult issues. 00:44:48.940 |
- You've talked about sort of how many minutes or hours 00:44:56.020 |
because that's how you really experience the vacation 00:45:03.460 |
I don't know if you think about this or interact with it, 00:45:06.220 |
there's a modern way to magnify the remembering self, 00:45:11.220 |
which is by posting on Instagram, on Twitter, 00:45:17.300 |
A lot of people live life for the picture that you take, 00:45:40.460 |
so I cannot really speak intelligently about those things. 00:45:49.060 |
- I think it will make a very big difference. 00:46:15.460 |
I mean, the number of conversations I'm involved with 00:46:18.940 |
where somebody says, "Well, let's look it up." 00:46:26.660 |
well, it means that it's much less important to know things. 00:46:32.340 |
No, it used to be very important to know things. 00:46:36.620 |
So the requirements that we have for ourselves 00:46:50.380 |
And I have no idea what Instagram does, but it's-- 00:46:58.980 |
my remembering self could enjoy this conversation, 00:47:01.820 |
but I'll get to enjoy it even more by watching it. 00:47:08.020 |
it'll be about 100,000 people, as scary as this is to say, 00:47:08.020 |
And I haven't seen, it's the same effects that you described 00:47:27.140 |
and I don't think the psychology of that magnification 00:47:30.540 |
has been described yet 'cause it's a new world. 00:47:56.900 |
and there was a lot of assumed common knowledge. 00:48:02.660 |
I mean, it was obvious that you had read the New York Times, 00:48:05.460 |
it was obvious that you had read the reviews. 00:48:08.180 |
I mean, so a lot was taken for granted that was shared. 00:48:13.140 |
And when there were three television channels, 00:48:36.340 |
let me say that I'm also a fan of Sartre and Camus 00:48:50.660 |
what do you think of the existentialist philosophy of life? 00:48:54.700 |
So trying to really emphasize the experiencing self 00:48:59.180 |
as the proper way to, or the best way to live life. 00:49:04.180 |
- I don't know enough philosophy to answer that, 00:49:09.100 |
but it's not, you know, the emphasis on experience 00:49:18.060 |
- So that's, you just have got to experience things 00:49:23.460 |
and not to evaluate, and not to pass judgment, 00:49:33.020 |
- When you look at the grand picture of experience, 00:49:48.820 |
any of the procedures of the remembering self. 00:50:31.340 |
But then it turns out that what people want for themselves 00:50:42.860 |
that doesn't correspond to what people want for themselves. 00:50:46.140 |
And when I realized that this was where things were going, 00:51:03.380 |
So currently, artificial intelligence systems 00:51:12.780 |
there's some pattern formation like learning, so on, 00:51:20.580 |
except in reinforcement learning every once in a while 00:51:25.580 |
- Yeah, but you know, that would, in principle, 00:51:31.380 |
Do you think it's a feature or a bug of human beings 00:51:41.700 |
I mean, you have to look back in order to look forward. 00:51:48.820 |
you couldn't really intelligently look forward. 00:51:52.860 |
- You're looking for the echoes of the same kind 00:51:54.860 |
of experience in order to predict what the future holds? 00:52:02.220 |
"Man's Search for Meaning," I'm not sure if you've read, 00:52:05.740 |
describes his experience at the concentration camps 00:52:09.100 |
during World War II as a way to describe that finding, 00:52:14.100 |
identifying a purpose in life, a positive purpose in life, 00:52:20.100 |
First of all, do you connect with the philosophy 00:52:29.180 |
So I can really see that somebody who has that feeling 00:52:46.220 |
And I'm pretty sure that if I were in a concentration camp, 00:52:59.500 |
And I'm not sure how essential to survival this sense is. 00:53:17.540 |
that manages to survive in conditions like that. 00:53:22.540 |
And then because they survive, they tell stories, 00:53:31.100 |
They survived because the kind of people that they are, 00:53:36.100 |
and would tell themselves stories of a particular kind. 00:53:50.060 |
because when you ask people whether it's very important 00:53:54.460 |
they say, "Oh, yes, that's the most important thing." 00:53:57.220 |
But when you ask people, "What kind of a day did you have?" 00:54:16.420 |
in childcare, you know, in taking care of children, 00:54:16.420 |
and you're taking care of them makes a very big difference. 00:54:46.820 |
in doing a lot of experiments, let me ask a question. 00:54:49.700 |
Most of the work I do, for example, is in the real world, 00:54:54.300 |
but most of the clean, good science that you can do 00:55:01.020 |
do you think we can understand the fundamentals 00:55:05.220 |
of human behavior through controlled experiments in the lab? 00:55:10.220 |
If we talk about pupil diameter, for example, 00:55:17.780 |
when you can control lighting conditions, right? 00:55:24.340 |
lighting variation destroys almost completely 00:56:01.220 |
to the situation, to the experimental situation. 00:56:16.620 |
between the good psychologist and others that are mediocre 00:56:34.340 |
Like the birth of an idea to its development in your mind 00:56:46.860 |
You basically use your intuition to build up. 00:56:49.340 |
- Yeah, but I mean, it's very skilled intuition. 00:56:53.860 |
- I mean, I just had that experience, actually. 00:56:55.620 |
I had an idea that turns out to be a very good idea 00:57:14.140 |
And I was really, I couldn't exactly explain it, 00:57:20.860 |
But I've been around that game for a very long time. 00:57:37.100 |
in describing a process in the form of advice to others? 00:58:06.620 |
"The 12 or 13 years in which most of our work was joint 00:58:10.260 |
"were years of interpersonal and intellectual bliss. 00:58:14.940 |
"Everything was interesting, almost everything was funny. 00:58:17.740 |
"And there was a current joy of seeing an idea take shape. 00:58:26.820 |
"which the other one would understand more deeply 00:58:30.780 |
"Contrary to the old laws of information theory, 00:58:36.100 |
"that more information was received than had been sent. 00:58:39.940 |
"I have almost never had the experience with anyone else. 00:58:44.580 |
"you don't know how marvelous collaboration can be." 00:58:53.180 |
How does one find and create such a collaboration? 00:58:58.740 |
That may be asking, like, how does one find love, but-- 00:59:04.860 |
And I think you have to have the character for that, 00:59:32.020 |
- Is there advice in a form for a young scientist 00:59:35.100 |
who also seeks to violate this law of information theory? 00:59:43.260 |
- I really think it's so much luck is involved. 00:59:57.420 |
at least in my experience, are a very personal experience. 01:00:04.300 |
And I have to like the person I'm working with. 01:00:08.100 |
Otherwise, I mean, there is that kind of collaboration, 01:00:12.260 |
which is like an exchange, a commercial exchange 01:00:28.940 |
and who like the way that the other person responds 01:00:46.220 |
and already new information started to emerge. 01:00:49.780 |
Is that a process, just a process of curiosity, 01:00:53.220 |
of talking to people about problems and seeing? 01:00:56.500 |
- I'm curious about anything to do with AI and robotics, 01:00:59.740 |
and so, and I knew you were dealing with that, 01:01:09.860 |
the dramatic sounding terminology of replication crisis, 01:01:44.260 |
I mean, I have a theory about what's going on. 01:01:47.500 |
And what's going on is that there is, first of all, 01:01:59.980 |
So it's the same person has two experimental conditions. 01:02:13.420 |
And between subject experiments are much harder to predict 01:02:21.860 |
And the reason, and they're also more expensive 01:02:28.660 |
and it's just, so between subject experiments 01:02:34.620 |
It's not so much in within subject experiments, 01:02:50.420 |
And that's because when you are a researcher, 01:02:56.780 |
That is, you are imagining the two conditions 01:03:20.380 |
And that, I think, is something that people haven't realized. 01:03:29.820 |
we have no idea about the power of manipulations, 01:03:36.500 |
because the same manipulation is much more powerful 01:03:46.860 |
And so the experimenters have very poor intuitions 01:03:58.860 |
which is that almost all psychological hypotheses are true. 01:04:13.460 |
that it's not true that A causes the opposite of B. 01:04:36.940 |
is that I recently heard about some friends of mine 01:04:49.900 |
of behavioral change by 20 different teams of people 01:04:57.380 |
of changing the number of times that people go to the gym. 01:05:29.460 |
because you are focusing on your manipulation, 01:05:49.380 |
but if you don't see that effect, the 53 studies, 01:06:10.660 |
I mean, experiments have to be pre-registered, 01:06:18.700 |
and you have to run the experiment seriously enough 01:06:37.540 |
It's going to change the way psychology is done, 01:06:50.220 |
- Do you have a hope for the internet or digitalization? 01:06:52.900 |
- Well, I mean, you know, this is really happening. 01:06:54.660 |
MTurk, everybody's running experiments on MTurk, 01:07:04.580 |
- Do you think that changes psychology, essentially? 01:07:06.900 |
Because you're thinking, you can now run 10,000 subjects. 01:07:11.460 |
I mean, I can't put my finger on how exactly, 01:07:24.260 |
it changes the feel, so, and MTurk is really a method 01:07:53.900 |
- Well, it depends on the strength of the effect. 01:08:07.140 |
in color perception were done on three or four people, 01:08:14.380 |
But on vision, you know, it's highly reliable. 01:08:35.660 |
and especially when you're studying them between subjects, 01:08:48.540 |
is that the power, the statistical power of experiments 01:08:54.100 |
- Does the between subject, as the number of subjects 01:09:07.220 |
for an experiment in psychology were 30 or 40. 01:09:11.900 |
And for a weak effect, that's simply not enough. 01:09:21.140 |
I mean, it's that sort of order of magnitude. 01:09:26.140 |
- What are the major disagreements in theories and effects 01:09:41.220 |
- But what still is out there as major disagreements 01:09:46.100 |
- I've had one extreme experience of controversy 01:09:52.380 |
with somebody who really doesn't like the work 01:09:58.260 |
and he's been after us for 30 years or more, at least. 01:10:04.220 |
- Well, I mean, his name is Gerd Gigerenzer. 01:10:16.580 |
and no, I don't particularly want to talk about it. 01:10:20.900 |
- But is there open questions, even in your own mind? 01:10:34.780 |
Do you have things where your studies have found something, 01:10:38.140 |
but you're also intellectually torn about what it means, 01:10:41.580 |
and there's maybe disagreements within your own mind 01:10:47.580 |
- I mean, you know, one of the things that are interesting 01:10:50.700 |
is how difficult it is for people to change their mind. 01:10:54.580 |
Essentially, you know, once they are committed, 01:11:03.380 |
And that is surprisingly, but it's true about scientists. 01:11:10.420 |
you know, that's been going on like 30 years, 01:11:15.220 |
And you build a system, and you live within that system, 01:11:20.300 |
and other systems of ideas look foreign to you, 01:11:33.300 |
- Do you have a hopeful advice or message on that? 01:11:38.300 |
Thinking about science, thinking about politics, 01:11:42.420 |
thinking about things that have impact on this world, 01:11:48.620 |
- I think that, I mean, on things that matter, 01:12:03.820 |
and there's very little that you can do about it. 01:12:06.420 |
What does happen is that if leaders change their minds, 01:12:13.340 |
so for example, the public, the American public, 01:12:53.860 |
is that the leaders of the communities you look up to, 01:13:06.420 |
- What do you think is a good test of intelligence? 01:13:19.100 |
but what do you think is a good test for intelligence 01:13:23.860 |
- Well, the standard definition of, you know, 01:13:30.740 |
is that it can do anything that people can do, 01:13:36.180 |
- What we are seeing is that in many domains, 01:13:41.860 |
and, you know, devices or programs or software, 01:13:48.220 |
and they beat people easily in specified ways. 01:13:53.060 |
What we are very far from is that general ability, 01:14:04.940 |
people are approaching something more general. 01:14:14.100 |
but it's still extraordinarily narrow and specific 01:14:30.660 |
- What aspects of the Turing test have been criticized 01:14:43.460 |
What aspect of conversation would impress you 01:15:23.780 |
So there is a lot that would be sort of impressive, 01:15:44.340 |
- Well, I mean-- - How does it make you feel? 01:16:01.380 |
And so I'm curious about what is happening now. 01:16:06.020 |
But I also know that predictions about it are silly. 01:16:12.020 |
We really have no idea what it will look like 01:16:18.340 |
- Speaking of silly, bordering on the profound, 01:16:42.580 |
Is there any answer, or is it all just a beautiful mess? 01:17:05.860 |
I'm not qualified to speak about what we cannot understand, 01:17:09.060 |
but there is, I know that we cannot understand reality. 01:17:16.980 |
I mean, there are a lot of things that we can do. 01:17:47.140 |
And thank you to our presenting sponsor, Cash App. 01:17:55.780 |
a STEM education nonprofit that inspires hundreds 01:17:58.580 |
of thousands of young minds to become future leaders 01:18:03.140 |
If you enjoy this podcast, subscribe on YouTube, 01:18:05.980 |
give it five stars on Apple Podcast, follow on Spotify, 01:18:09.260 |
support on Patreon, or simply connect with me on Twitter. 01:18:13.700 |
And now let me leave you with some words of wisdom 01:18:17.580 |
Intelligence is not only the ability to reason, 01:18:21.820 |
it is also the ability to find relevant material 01:18:24.820 |
in memory and to deploy attention when needed. 01:18:27.980 |
Thank you for listening and hope to see you next time.