Noam Chomsky: Language, Cognition, and Deep Learning | Lex Fridman Podcast #53
Chapters
0:00 Introduction
4:00 If an alien species visited Earth, would we be able to find a common language or protocol of communication?
5:45 The structure of language
7:18 The roots of language
9:45 Limits of human cognition
13:05 The mysticism of the Neo-scholastics
16:45 Expanding our cognitive capacity
20:05 Linear vs. remote
22:05 Deep learning
25:35 The structure dependence case
30:05 The origins of modern linguistics
00:00:00.000 |
The following is a conversation with Noam Chomsky. 00:00:03.800 |
He's truly one of the great minds of our time 00:00:13.400 |
and recently also joined the University of Arizona 00:00:18.600 |
But it was at MIT about four and a half years ago 00:00:24.760 |
I remember getting into an elevator at Stata Center, 00:00:29.480 |
looking up and realizing it was just me and Noam Chomsky 00:00:35.520 |
Just me and one of the seminal figures of linguistics, 00:00:40.040 |
and political thought in the past century, if not ever. 00:00:46.640 |
life is made up of funny little defining moments 00:00:49.240 |
that you never forget for reasons that may be too poetic 00:00:57.360 |
Noam has been an inspiration to me and millions of others. 00:01:00.960 |
It was truly an honor for me to sit down with him in Arizona. 00:01:14.520 |
the recording button was pressed, stopping the recording. 00:01:17.360 |
So I have good audio of both of us, but no video of Noam. 00:01:30.520 |
Most people just listen to this audio version 00:01:32.480 |
for the podcast as opposed to watching it on YouTube. 00:01:39.040 |
I hope you understand and still enjoy this conversation 00:01:51.180 |
it was humbling and something I'm deeply grateful for. 00:01:55.560 |
As some of you know, this podcast is a side project for me 00:01:59.640 |
where my main journey and dream is to build AI systems 00:02:15.400 |
And I hope you feel that even when I screw things up. 00:02:18.600 |
I recently started doing ads at the end of the introduction. 00:02:22.880 |
I'll do one or two minutes after introducing the episode 00:02:54.280 |
I personally use Cash App to send money to friends, 00:03:04.200 |
You can buy fractions of a stock, say $1 worth, 00:03:09.240 |
Brokerage services are provided by Cash App Investing, 00:03:17.680 |
to support one of my favorite organizations, called FIRST, 00:03:20.920 |
best known for their FIRST Robotics and LEGO competitions. 00:03:24.360 |
They educate and inspire hundreds of thousands of students 00:03:29.640 |
and have a perfect rating on Charity Navigator, 00:03:42.840 |
you'll get $10, and Cash App will also donate $10 to FIRST, 00:03:50.760 |
and boys to dream of engineering a better world. 00:03:54.560 |
And now, here's my conversation with Noam Chomsky. 00:03:58.940 |
I apologize for the absurd philosophical question, 00:04:07.200 |
do you think we would be able to find a common language 00:04:13.720 |
- There are arguments to the effect that we could. 00:04:35.120 |
Turing machines, just free to see what would happen. 00:04:58.880 |
if some alien species developed higher intelligence, 00:05:07.480 |
They would at least have what the simplest computer would do. 00:05:12.480 |
And in fact, he didn't know that at the time, 00:05:20.800 |
are based on operations which yield something 00:05:25.240 |
like arithmetic in the limiting case and the minimal case. 00:05:29.400 |
So it's conceivable that a mode of communication 00:05:34.080 |
could be established based on the core properties 00:05:38.600 |
of human language and the core properties of arithmetic, 00:05:50.840 |
of language as an internal system inside our mind 00:06:03.800 |
- It's a simple fact that there's something about you, 00:06:23.000 |
of the infinite number of expressions of your language. 00:06:32.960 |
it's in specific configurations of your brain. 00:06:36.500 |
And that's essentially like the internal structure 00:06:40.640 |
of your laptop, whatever programs it has are in there. 00:06:45.000 |
Now, one of the things you can do with language, 00:06:50.080 |
is use it to externalize what's in your head. 00:07:02.680 |
Well, the set of things that we're externalizing 00:07:19.020 |
- So how deep do the roots of language go in our brain? 00:07:22.240 |
Our mind, is it yet another feature like vision, 00:07:28.520 |
from which everything else springs in the human mind? 00:07:33.880 |
There's something about our genetic endowment 00:07:44.760 |
And there's something in our genetic endowment 00:07:47.440 |
that determines that we have a human language faculty. 00:07:51.500 |
No other organism has anything remotely similar. 00:08:03.640 |
to the early scientific revolution, at least, 00:08:06.120 |
that holds that language is the core of human life. 00:08:14.640 |
It's the mode for constructing thoughts and expressing them. 00:08:22.800 |
And it's got fundamental creative capacities. 00:08:27.240 |
It's free, independent, unbounded, and so on. 00:08:47.520 |
and not-so-great achievements of the species. 00:08:53.620 |
Do you think that's deeply linked with language? 00:08:56.240 |
Do you think the way we, the internal language system 00:09:04.160 |
- It is undoubtedly the mechanism by which we reason. 00:09:09.360 |
there are undoubtedly other faculties involved in reasoning. 00:09:20.940 |
to pursue certain lines of endeavor and inquiry 00:09:25.480 |
and to decide what makes sense and doesn't make sense 00:09:44.880 |
- The idea of capacity, our biology, evolution, 00:09:49.360 |
you've talked about it defining, essentially, 00:09:55.200 |
Can you try to define what limit and scope are? 00:10:09.640 |
It's commonly believed, most scientists believe, 00:10:21.800 |
If we're biological organisms, which are not angels, 00:10:26.160 |
then our capacities ought to have scope and limits, 00:10:48.800 |
and therefore become a rich, complex organism. 00:10:53.480 |
But if you look at that same genetic endowment, 00:10:56.260 |
it prevents you from developing in other directions. 00:11:12.000 |
So the very endowment that confers richness and complexity 00:11:29.680 |
therefore they should have the same properties. 00:11:50.360 |
you would just be some kind of a random amoeboid creature 00:12:00.280 |
and I think we even have some evidence as to what they are. 00:12:06.700 |
in the history of science at the time of Newton. 00:12:13.920 |
modern science developed on a fundamental assumption 00:12:29.300 |
the kinds of artifacts that were being developed 00:12:39.360 |
the world is just a more complex variant of this. 00:12:57.680 |
just dismissed this as returning to the mysticism 00:13:20.340 |
That was the very criterion of intelligibility 00:13:37.560 |
Finally, after a long struggle, took a long time, 00:13:41.240 |
scientists just accepted this as common sense. 00:13:57.000 |
So for example, David Hume, in his encomium to Newton, 00:14:02.000 |
whom he called the greatest thinker ever and so on, 00:14:05.560 |
wrote that he unveiled many of the secrets of nature, 00:14:13.320 |
of the mechanical philosophy, mechanical science, 00:14:17.520 |
he left us with, he showed that there are mysteries 00:14:26.760 |
It abandoned the mysteries it can't solve, 00:14:49.720 |
We cannot attain the goal of understanding the world, 00:14:58.440 |
This mechanical philosophy, Galileo to Newton, 00:15:05.320 |
that that's our instinctive conception of how things work. 00:15:18.680 |
they kind of invent something that must be invisible 00:15:22.080 |
that's in between them that's making them move. 00:15:55.640 |
You know, three lines not coming quite together, 00:16:09.360 |
It's now been shown that it goes way beyond that, 00:16:20.880 |
what people actually see is a rigid object in motion, 00:16:26.920 |
We all know that from a television set, basically. 00:16:35.920 |
- I think it does, but it's a very contested view. 00:16:48.600 |
So I just spent a day at a company called Neuralink, 00:16:59.580 |
So they try to do thousands of readings in the brain, 00:17:08.520 |
Do you think their dream is to expand the capacity 00:17:22.440 |
Do you think our cognitive capacity might be expanded, 00:17:26.240 |
our linguistic capacity, our ability to reason 00:17:29.340 |
might be expanded by adding a machine into the picture? 00:17:35.600 |
but a sense that was known thousands of years ago. 00:17:47.960 |
It's not that totally new things could be understood. 00:18:15.040 |
and present it in a form so that we could follow it. 00:18:19.920 |
- But you don't think there's something greater than bees 00:18:24.400 |
that we can map, and then all of a sudden discover 00:18:28.440 |
something, be able to understand a quantum world, 00:18:36.000 |
- Students at MIT study and understand quantum mechanics. 00:18:40.500 |
- But they always reduce it to the infant, the physical. 00:18:48.240 |
that may be another area where there's just a limit 00:18:54.480 |
but the world that it describes doesn't make any sense. 00:19:07.500 |
One of the reasons why Einstein was always very skeptical 00:19:23.040 |
- He has something in common with infants, in that way. 00:19:32.680 |
what are the most beautiful or fascinating aspects 00:19:44.160 |
- Well, I think the deepest property of language 00:19:52.880 |
is what is sometimes called structure dependence. 00:20:05.640 |
the guy who fixed the car carefully packed his tools. 00:20:21.060 |
carefully the guy who fixed the car packed his tools. 00:20:25.860 |
Then it's carefully packed, not carefully fixed. 00:20:29.380 |
And in fact, you do that even if it makes no sense. 00:20:38.160 |
You have to interpret it as carefully he's tall, 00:21:11.940 |
If you look at the actual structure of the sentence, 00:21:27.980 |
But notice that what's linear is 100% of what you hear. 00:22:13.160 |
- Let me ask you about a field of machine learning, 00:22:18.940 |
There's been a lot of progress in 00:22:22.060 |
neural-network-based machine learning in the recent decade. 00:22:26.380 |
Of course neural network research goes back many decades. 00:22:29.940 |
What do you think are the limits of deep learning, 00:22:44.900 |
that are taking place, and those are pretty opaque. 00:22:50.260 |
about what can be done and what can't be done. 00:22:59.180 |
what deep learning is doing is taking huge numbers 00:23:18.220 |
Engineering in the sense of just trying to build something 00:23:21.380 |
that's useful, or science in the sense that it's trying 00:23:24.800 |
to understand something about elements of the world. 00:23:39.380 |
So on engineering grounds it's kind of worth having, 00:23:44.880 |
Does it tell you anything about human language? 00:24:11.820 |
the right description of every sentence in the corpus. 00:24:21.540 |
Each sentence that you produce is an experiment 00:24:29.640 |
So most of the stuff in the corpus is grammatical sentences. 00:24:40.140 |
which are carried out for no reason whatsoever 00:24:46.500 |
Like if you're, say, a chemistry PhD student, 00:25:09.140 |
Doesn't care about coverage of millions of experiments. 00:25:12.980 |
So it just begins by being very remote from science, 00:25:21.660 |
say, a Google parser, is how well does it do, 00:25:25.180 |
or some parser, how well does it do on a corpus? 00:25:28.340 |
But there's another question that's never asked. 00:25:36.100 |
So for example, take the structure dependence case 00:25:39.700 |
Suppose there was a language in which you used 00:25:43.420 |
linear proximity as the mode of interpretation. 00:25:48.420 |
Deep learning would work very easily on that. 00:25:51.740 |
In fact, much more easily than an actual language. 00:25:57.600 |
From a scientific point of view, it's a failure. 00:26:07.740 |
on things that violate the structure of the system. 00:26:17.220 |
- So yes, so neural networks are kind of approximators 00:26:20.660 |
that look, there are echoes of the behaviorist debates, 00:26:27.600 |
Many of the people in deep learning say they've been vindicated. 00:26:32.820 |
Terry Sejnowski, for example, in his recent book says, 00:26:43.780 |
actually fundamentally different when the data set is huge. 00:26:55.420 |
that interesting complex structure of language 00:26:58.820 |
with neural networks that will somehow help us 00:27:04.500 |
I mean, you find patterns that you hadn't noticed, 00:27:09.780 |
In fact, it's very much like a kind of linguistics 00:27:13.620 |
that's done, what's called corpus linguistics. 00:27:38.540 |
So you have to try to see what you can find out 00:27:45.060 |
Actually, paleoanthropology is very much like that. 00:27:53.540 |
So you're kind of forced just to take what data's around 00:28:01.460 |
- So let me venture into another whole body of work 00:28:07.440 |
You've said that evil in society arises from institutions, 00:28:19.620 |
or do most have the capacity for intentional evil 00:28:27.220 |
- I wouldn't say that they don't arise from our nature. 00:28:34.060 |
And the fact that we have certain institutions, not others, 00:28:38.140 |
is one mode in which human nature has expressed itself. 00:28:56.980 |
the who conquered whom and that sort of thing. 00:29:03.860 |
in the sense that they're essential to our nature. 00:29:21.780 |
It's a particular fact about a moment of modern history. 00:29:26.260 |
Others have argued that the roots of classical liberalism 00:29:36.100 |
instinct to be free of domination by illegitimate authority 00:29:47.540 |
We just know that human nature can accommodate both kinds. 00:30:10.220 |
- What about, so you have put forward into the world 00:30:34.140 |
like say, even the observation of structure dependence 00:30:44.460 |
But the major things just seem like common sense. 00:30:52.300 |
take your question about external and internal language, 00:30:58.140 |
almost entirely language is regarded as an external object, 00:31:06.260 |
It just seemed obvious that that can't be true. 00:31:20.280 |
that's just an observation, what's transparent. 00:31:24.140 |
You might say it's kind of like the 17th century, 00:31:29.140 |
the beginnings of modern science, 17th century. 00:31:40.420 |
So it seems obvious that a heavy ball of lead 00:31:54.420 |
Carried out experiments, actually thought experiments, 00:32:03.700 |
And out of things like that, observations of that kind, 00:32:16.920 |
Seems obvious, until you start thinking about it. 00:32:23.900 |
And I think the beginnings of modern linguistics, 00:32:30.060 |
just being willing to be puzzled about phenomena 00:32:33.620 |
that looked, from some point of view, obvious. 00:32:41.340 |
almost official doctrine of structural linguistics 00:32:58.900 |
In fact, there were similar views among biologists 00:33:16.980 |
on what could be an organism or what could be a language. 00:33:20.600 |
But these are, that's just the nature of inquiry. 00:33:29.380 |
So one of the peculiar things about us human beings 00:33:46.800 |
I wondered, I didn't care much about my own mortality, 00:34:01.580 |
- Did you ever find an answer to that question? 00:34:07.860 |
It's kind of like Woody Allen in one of his films, 00:34:10.380 |
you may recall, he starts, he goes to a shrink 00:34:17.540 |
He says, "I just learned that the universe is expanding. 00:34:27.260 |
what do you think is the meaning of our existence here, 00:34:32.260 |
our life on Earth, our brief little moment in time? 00:34:35.860 |
- That's something we answer by our own activities. 00:34:50.560 |
not meaning in the sense that chair means this, 00:34:54.440 |
but the significance of your life is something you create. 00:35:05.940 |
Thanks for listening to this conversation with Noam Chomsky 00:35:08.660 |
and thank you to our presenting sponsor, Cash App. 00:35:17.940 |
a STEM education nonprofit that inspires hundreds 00:35:25.980 |
If you enjoy this podcast, subscribe on YouTube, 00:35:30.600 |
support on Patreon or connect with me on Twitter. 00:35:34.260 |
Thank you for listening and hope to see you next time.