David Ferrucci: What is Intelligence? | AI Podcast Clips
Chapters
0:00 What is intelligence
1:15 Understanding the world
2:05 Picking a goal
4:00 Alien Intelligence
6:10 Proof
7:54 Social constructs
9:56 We are bound together
10:48 How hard is that
13:50 Optimistic notion
14:35 Emotional manipulation
- So let me ask, you've kind of alluded to it,
being able to reason your way through solving that problem?
- So I think of intelligence primarily in two ways.
Whether it's to predict the answer of a question,
or to say, look, I'm looking at all the market dynamics
and I'm gonna tell you what's gonna happen next,
and you're gonna predict what they're gonna do next
- So in a highly dynamic environment full of uncertainty,
what's gonna happen next accurately and consistently?
You need to have an understanding of the way the world works
in order to be able to unroll it into the future, right?
and this is very much what--
- What's a function?
and you tell me what the output variable is that matters,
I'm gonna sit there and be able to predict it.
so that I can get it right more often than not, I'm smart.
If I can do that with less data and less training time,
If I can figure out what's even worth predicting,
Picking a goal is sort of an interesting thing.
We talk about humans, and humans are pre-programmed
to survive, so that's sort of their primary driving goal.
So it's not just figuring out that you need to run away,
but we survive in a social context as an example.
So understanding the subtleties of social dynamics
becomes something that's important for surviving,
with complex sets of variables, complex constraints.
That doesn't really require anything specific
But then when we say, well, do we understand each other?
In other words, would you perceive me as intelligent
So now I can predict, but I can't really articulate
and I can't get you to understand what I'm doing
the right pattern-matching machinery that I did.
but I'm sort of an alien intelligence relative to you.
- You're intelligent, but nobody knows about it.
- So you're saying, let's sort of separate the two things.
- Well, it's not impressing you that I'm intelligent.
- When you can't, I say, wow, you're right all the time,
But then what happens is, if I say, how are you doing that?
I may say, well, you're doing something weird,
Because now you're in this weird place
where for you to be recognized as intelligent
And then, as we start to understand each other,
my ability to relate to you starts to change.
So now you're not an alien intelligence anymore.
And so I think when we look at animals, for example,
animals can do things we can't quite comprehend.
They can't put what they're going through in our terms.
and they're not really worth necessarily what we're worth.
We don't treat them the same way as a result of that.
But it's hard, because who knows what's going on.
explaining the reasoning that went into the prediction
If we look at humans, look at political debates
and discourse on Twitter, it's mostly just telling stories.
So your task is, sorry, your task is not to tell
- Yeah, there have been several proofs out there
where mathematicians would study for a long time
until the community of mathematicians decided that it did.
is that ultimately, this notion of understanding,
us understanding something, is ultimately a social concept.
In other words, I have to convince enough people
I did this in a way that other people can understand
and replicate and that it makes sense to them.
So human intelligence is bound together in that way.
- Do you think the general question of intelligence
So if we ask questions of an artificial intelligence system,
the answer will ultimately be a socially constructed--
- I think, so I think, I'm making two statements.
in this super objective way that says, here's this data.
Learn this function, and then if you get it right,
It could be solving a problem we can't otherwise solve
Can we relate to the process that you're going through?
whether you're a machine or another human, frankly,
how it is that you're arriving at that answer
or a judge of people to decide whether or not
And by the way, that happens with humans as well.
You're sitting down with your staff, for example,
and you ask for suggestions about what to do next,
and someone says, "Oh, I think you should buy,"
or "I think you should launch the product today or tomorrow,"
whatever the decision may be, and you ask why,
and the person says, "I just have a good feeling about it."
"Can you explain to me why I should believe this?"
- And that explanation may have nothing to do
And that's why I'm saying we're bound together.
Our intelligences are bound together in that sense.
And if, for example, you're giving me an explanation,
and being objective and following logical paths
and sort of computing probabilities across those paths,
So I think we'll talk quite a bit about the first
on a specific objective metric benchmark performing well.
But being able to explain the steps, the reasoning,
- The thing that's hard for humans, as you know,
So, sorry, so how hard is that problem for computers?
and we say we wanna design computers to do that,
and what judgments we use to learn that well.
if you look at the entire enterprise of science,
science is supposed to be about objective reason, right?
So we think about, gee, who's the most intelligent person
Do we think about the savants who can close their eyes
And my point is, how do you train someone to do that?
What's the process of training people to do that well?
to get other people to understand our thinking
we can persuade them through emotional means,
we even try to do it as artists, in many forms,
We go through a fairly significant training process
But it's hard, and for humans, it takes a lot of work.
which is being able to explain something through reason.
But if you look at algorithms that recommend things
you know, their goal is to convince you to buy things
is showing you things that you really do need
But it could also be through emotional manipulation.
The algorithm that describes why a certain reason,
how hard is it to do it through emotional manipulation?
really showing in a clear way why something is good.
in the reasoning aspect and the emotional manipulation?
but more objectively, it's essentially saying,
I mean, it kind of gives you more of that stuff.
- Yeah, I mean, I'm not saying it's right or wrong.
because the objective is to get you to click on it
- I guess this seems to be very useful for convincing,
I think there's a more optimistic view of that, too.
And these algorithms are saying, look, that's up to you.
You may have an unhealthy addiction to this stuff,
or you may have a reasoned and thoughtful explanation
and the algorithms are saying, hey, that's whatever.
Could be a bad reason, could be a good reason.
And I think that's, it's not good or bad.
which is saying, you seem to be interested in this,
And I think we're seeing this not just in buying stuff,
I'm just saying, I'm gonna show you other stuff
So one, the bar of performance is extremely high,
and yet we also ask them, in the case of social media,
to help find the better angels of our nature,
So what do you think about the role of AI there?
We're not building, the system's not building a theory
that is consumable and understandable by other humans
And so on one hand, to say, oh, AI is doing this.
And it's interesting to think about why it's harder.
In other words, understandings of what's important
What's sensible, what's not sensible, what's good,
what's bad, what's moral, what's valuable, what isn't?
So when I see you clicking on a bunch of stuff,
and I look at these simple features, the raw features,
or what the category is, and stuff like that.
That's very different than kind of getting in there
The stuff you're reading, like why are you reading it?
What assumptions are you bringing to the table?
Does it lead you to thoughtful, good conclusions?
Again, there's interpretation and judgment involved
Because you have to start getting at the meaning
You have to get at how humans interpret the content
is not just some kind of deep, timeless, semantic thing
So, again, even meaning is a social construct.
how most people would understand this kind of statement.
If I show you a painting, it's a bunch of colors on a canvas,
And it may mean different things to different people
As we try to get more rigorous with our communication,
So we go from abstract art to precise mathematics,
precise engineering drawings and things like that.
I wanna narrow that space of possible interpretations
and I think that's why this becomes really hard.
lots of different ways at many, many different levels.
But when I wanna align our understanding of that,
that's actually not directly in the artifact.
Now I have to say, well, how are you interpreting
And what about the colors and what do they mean to you?
What perspective are you bringing to the table?
What are your prior experiences with those artifacts?
What are your fundamental assumptions and values?
well, if this is the case, then I would conclude this.
If that's the case, then I would conclude that.
So your reasoning processes and how they work,
all those things now come together into the interpretation.
- And yet humans are able to intuit some of that
We have the shared experience and we have similar brains.
We have similar, what we like to call, prior models
think of it as a wide collection of interrelated variables
But as humans, we have a lot of shared experience.
how are biological and computer information systems
Well, one is humans come with a lot of pre-programmed stuff,