David Ferrucci: AI Understanding the World Through Shared Knowledge Frameworks | AI Podcast Clips
00:00:04.380 | if we can maybe escape the hardware question,
00:00:12.880 | the history, the many centuries of wars and so on
00:00:27.480 | Can you speak to how hard is it to encode that knowledge
00:00:30.960 | systematically in a way that could be used by a computer?
00:00:34.420 | - So I think it is possible to learn for a machine,
00:00:37.960 | to program a machine to acquire that knowledge
00:00:43.080 | In other words, a similar interpretive foundation
00:00:50.720 | - So in other words, we view the world in a particular way.
00:01:17.200 | they have goals, goals are largely built around survival
00:01:23.000 | their fundamental economics around scarcity of resources.
00:01:32.360 | because you brought up like historical events,
00:01:35.280 | they start interpreting situations like that.
00:01:37.160 | They apply a lot of this fundamental framework
00:01:46.680 | How much power or influence did they have over the other?
00:01:48.720 | Like this fundamental substrate, if you will,
00:01:54.440 | So I think it is possible to imbue a computer
00:01:58.600 | with that stuff that humans like take for granted
00:02:02.320 | when they go and sit down and try to interpret things.
00:02:14.480 | are then able to interpret it with regard to that framework.
00:02:34.000 | Now you can find humans that come and interpret events
00:02:37.960 | because they're like using a different framework.
00:02:44.160 | where they decided humans were really just batteries.
00:02:48.080 | And that's how they interpreted the value of humans
00:02:53.300 | So, but I think that, you know, for the most part,
00:03:05.800 | It comes from, again, the fact that we're similar beings
00:03:16.680 | - So how much knowledge is there, do you think?
00:03:21.280 | - There's a tremendous amount of detailed knowledge
00:03:24.600 | There are, you know, you can imagine, you know,
00:03:27.880 | effectively infinite number of unique situations
00:03:39.240 | that you need for interpreting them, I don't think.
00:03:43.160 | - You think the frameworks are more important
00:03:50.880 | is they give you now the ability to interpret and reason,
00:04:15.080 | or it almost requires playing around with the world
00:04:19.360 | Just being able to sort of manipulate objects,
00:04:27.800 | in robotics or AI, it seems to be like an onion.
00:04:47.360 | Do they have to be learned through experience?
00:04:53.040 | sort of the physics, the basic physics around us,
00:04:59.840 | Yeah, I think there's a combination of things going on.
00:05:06.320 | I think there is fundamental pattern matching,
00:05:21.760 | You may learn very quickly that when you let something go,
00:05:42.400 | - But that seems to be, that's exactly what I mean.
00:06:09.080 | It seems like you have to have a lot of different knowledge
00:06:13.040 | to be able to integrate that into the framework,
00:06:22.040 | and start to reason about sociopolitical discourse.
00:06:30.280 | and the high level reasoning decision making.
00:06:34.240 | I guess my question is, how hard is this problem?
00:06:44.680 | is take on a problem that's much more constrained
00:07:28.160 | first of all, it's about getting machines to learn.
00:07:33.040 | And I think we're already in a place that we understand,
00:07:36.080 | for example, how machines can learn in various ways.
00:07:40.280 | Right now, our learning stuff is sort of primitive
00:07:58.040 | all the data in the world with the frameworks
00:08:00.640 | that are inherent or underlying our understanding.
00:08:07.840 | So if we wanna be able to reason over the data
00:08:15.440 | or at least we need to program the computer to acquire,
00:08:19.280 | to have access to and acquire, learn the frameworks as well
00:08:30.100 | I think we can start, I think machine learning,
00:08:40.600 | Will they relate them necessarily to gravity?
00:08:43.920 | Not unless they can also acquire those theories as well
00:08:52.600 | and connect it back to the theoretical knowledge.
00:08:55.080 | I think if we think in terms of these class of architectures
00:08:58.880 | that are designed to both learn the specifics,
00:09:02.720 | find the patterns, but also acquire the frameworks
00:09:08.000 | if we think in terms of robust architectures like this,
00:09:11.400 | I think there is a path toward getting there.
00:09:15.080 | - In terms of encoding architectures like that,
00:09:17.880 | do you think systems that are able to do this
00:09:20.880 | will look like neural networks or representing,
00:09:28.640 | with the expert systems, so more like graphs,
00:09:38.220 | where the challenge was the automated acquisition
00:09:48.980 | - Yeah, so I mean, I think asking the question
00:09:51.060 | do they look like neural networks is a bit of a red herring.
00:09:52.960 | I mean, I think that they will certainly do inductive
00:09:58.420 | And I've already experimented with architectures
00:10:04.380 | and neural networks to learn certain classes of knowledge,
00:10:08.980 | in order for it to make good inductive guesses,
00:10:13.220 | but then ultimately to try to take those learnings
00:10:16.940 | and marry them, in other words, connect them to frameworks
00:10:25.340 | So for example, at Elemental Cognition, we do both.
00:10:30.380 | But both those things, but also have a learning method
00:10:33.340 | for acquiring the frameworks themselves and saying,
00:10:38.940 | I need to interpret it in the form of these frameworks
00:10:42.540 | So there is a fundamental knowledge representation,
00:11:00.860 | - Yeah, so it seems like the idea of frameworks
00:11:04.220 | requires some kind of collaboration with humans.
00:11:13.580 | Only for the express purpose that you're designing
00:11:18.580 | an intelligence that can ultimately communicate with humans
00:11:24.220 | in the terms of frameworks that help them understand things.
00:11:31.100 | you can independently create a machine learning system,
00:11:36.100 | an intelligence that I might call an alien intelligence
00:11:40.180 | that does a better job than you with some things,
00:11:45.220 | That doesn't mean it might be better than you at the thing.
00:11:48.420 | It might be that you cannot comprehend the framework
00:11:56.980 | - But you're more interested in a case where you can.
00:12:07.600 | I want machines to be able to ultimately communicate
00:12:13.100 | I want them to be able to acquire and communicate,
00:12:28.500 | whether it be in language or whether it be in images
00:12:37.120 | to induce the generalizations from those patterns,
00:12:42.900 | to connect them to frameworks, interpretations, if you will,
00:12:48.380 | Of course, the machine is gonna have the strength
00:12:53.120 | but it has the more rigorous reasoning abilities,
00:12:58.740 | so it'll be an interesting complementary relationship
00:13:04.900 | - Do you think that ultimately needs explainability
00:13:21.980 | and the human is responsible for their own life
00:13:45.520 | it has a failure, somehow the failure's communicated,
00:13:50.360 | the human is now filling in the mistake, if you will,
00:14:14.320 | "I know that the next word might be this or that,
00:14:29.820 | the next time it's reading to try to understand something.
00:14:36.420 | I mean, I remember when my daughter was in first grade
00:14:39.140 | and she had a reading assignment about electricity.
00:14:47.220 | "And electricity is produced by water flowing over turbines,"
00:14:58.160 | "created and produced are kind of synonyms in this case.
00:15:02.220 | "and I can copy by water flowing over turbines,
00:15:09.240 | "water flowing over turbines and what electricity even is.
00:15:12.000 | "I mean, I can get the answer right by matching the text,
00:15:15.620 | "but I don't have any framework for understanding
00:15:19.560 | - And a framework really is, I mean, it's a set of,
00:15:25.860 | that you bring to the table in interpreting stuff
00:15:32.140 | that there's a shared understanding of what they are.
00:15:35.460 | - Shared, yeah, it's the social, the us humans.
00:15:39.500 | Do you have a sense that humans on Earth in general
00:15:43.780 | share a set of, like how many frameworks are there?
00:15:48.220 | - I mean, it depends on how you bound them, right?
00:15:55.900 | I think the way I think about it is kind of in a layer.
00:15:59.300 | I think of the architecture as being layered in that
00:16:05.260 | that allow you the foundation to build frameworks.
00:16:14.740 | I mean, one of the most compelling ways of thinking
00:16:17.020 | about this is reasoning by analogy where I can say,
00:16:19.500 | oh, wow, I've learned something very similar.
00:16:26.940 | but if it's like basketball in the sense that the goal's
00:16:30.340 | like the hoop and I have to get the ball in the hoop
00:16:32.700 | and I have guards and I have this and I have that,
00:16:35.200 | like where are the similarities and where are
00:16:48.060 | and then, you know, Democrats and Republicans.
00:16:57.260 | - Right, I mean, I think we're talking about political
00:16:58.780 | and social ways of interpreting the world around them.
00:17:01.540 | And I think these frameworks are still largely,
00:17:04.500 | I think they differ in maybe what some fundamental
00:17:15.820 | The implications of different fundamental values
00:17:18.220 | or fundamental assumptions in those frameworks
00:17:30.100 | I just followed where my assumptions took me.
00:17:33.300 | - Yeah, the process itself will look similar,
00:17:35.100 | but that's a fascinating idea that frameworks
00:17:39.440 | really help carve how a statement will be interpreted.
00:17:44.440 | I mean, having a Democrat and a Republican framework
00:17:55.880 | will be totally different from an AI perspective
00:17:59.280 | - What we would want out of the AI is to be able to tell you
00:18:05.400 | one set of assumptions is gonna lead you here,
00:18:07.200 | another set of assumptions is gonna lead you there.
00:18:25.000 | that there's one way to really understand a statement,
00:18:32.640 | - Well, there's lots of different interpretations
00:18:35.100 | and the broader the content, the richer it is.
00:18:40.100 | And so, you and I can have very different experiences
00:18:49.120 | And if we're committed to understanding each other,
00:18:53.020 | we start, and that's the other important point,
00:18:56.960 | if we're committed to understanding each other,
00:18:59.440 | we start decomposing and breaking down our interpretation
00:19:13.880 | But that requires a commitment to breaking down
00:19:17.260 | that interpretation in terms of that framework
00:19:24.440 | as really complementing and helping human intelligence
00:19:27.680 | to overcome some of its biases and its predisposition
00:19:43.040 | and someone labeled this as a Democratic point of view
00:19:47.060 | And if the machine can help us break that argument down
00:19:59.200 | - We're gonna have to sit and think about that as fast.