
Noam Chomsky: Language, Cognition, and Deep Learning | Lex Fridman Podcast #53


Chapters

0:00 Introduction
4:00 If an alien species visited Earth, would we be able to find a common language or protocol of communication?
5:45 The structure of language
7:18 The roots of language
9:45 Limits of human cognition
13:05 The mysticism of the neo-scholastics
16:45 Expanding our cognitive capacity
20:05 Linear vs remote
22:05 Deep learning
25:35 The structure dependence case
30:05 The origins of modern linguistics

Whisper Transcript

00:00:00.000 | The following is a conversation with Noam Chomsky.
00:00:03.800 | He's truly one of the great minds of our time
00:00:06.760 | and is one of the most cited scholars
00:00:08.400 | in the history of our civilization.
00:00:10.800 | He has spent over 60 years at MIT
00:00:13.400 | and recently also joined the University of Arizona
00:00:16.300 | where we met for this conversation.
00:00:18.600 | But it was at MIT about four and a half years ago
00:00:21.760 | when I first met Noam.
00:00:23.440 | My first few days there,
00:00:24.760 | I remember getting into an elevator at Stata Center,
00:00:27.400 | pressing the button for whatever floor,
00:00:29.480 | looking up and realizing it was just me and Noam Chomsky
00:00:33.600 | riding the elevator.
00:00:35.520 | Just me and one of the seminal figures of linguistics,
00:00:38.440 | cognitive science, philosophy,
00:00:40.040 | and political thought in the past century, if not ever.
00:00:43.960 | I tell that silly story because I think
00:00:46.640 | life is made up of funny little defining moments
00:00:49.240 | that you never forget for reasons that may be too poetic
00:00:53.200 | to try and explain.
00:00:54.920 | That was one of mine.
00:00:57.360 | Noam has been an inspiration to me and millions of others.
00:01:00.960 | It was truly an honor for me to sit down with him in Arizona.
00:01:04.640 | I traveled there just for this conversation.
00:01:07.520 | And in a rare heartbreaking moment,
00:01:10.160 | after everything was set up and tested,
00:01:12.720 | the camera was moved and accidentally
00:01:14.520 | the recording button was pressed, stopping the recording.
00:01:17.360 | So I have good audio of both of us, but no video of Noam.
00:01:22.180 | Just the video of me and my sleep deprived,
00:01:25.160 | but excited face that I get to keep
00:01:28.000 | as a reminder of my failures.
00:01:30.520 | Most people just listen to this audio version
00:01:32.480 | of the podcast as opposed to watching it on YouTube.
00:01:35.800 | But still, it's heartbreaking for me.
00:01:39.040 | I hope you understand and still enjoy this conversation
00:01:41.760 | as much as I did.
00:01:43.160 | The depth of intellect that Noam showed
00:01:45.360 | and his willingness to truly listen to me,
00:01:48.360 | a silly looking Russian in a suit,
00:01:51.180 | it was humbling and something I'm deeply grateful for.
00:01:55.560 | As some of you know, this podcast is a side project for me
00:01:59.640 | where my main journey and dream is to build AI systems
00:02:03.600 | that do some good for the world.
00:02:05.480 | This latter effort takes up most of my time,
00:02:07.840 | but for the moment has been mostly private.
00:02:10.560 | But the former, the podcast,
00:02:12.820 | is something I put my heart and soul into.
00:02:15.400 | And I hope you feel that even when I screw things up.
00:02:18.600 | I recently started doing ads at the end of the introduction.
00:02:22.880 | I'll do one or two minutes after introducing the episode
00:02:25.680 | and never any ads in the middle
00:02:27.440 | that break the flow of the conversation.
00:02:29.760 | I hope that works for you
00:02:31.220 | and doesn't hurt the listening experience.
00:02:34.000 | This is the Artificial Intelligence Podcast.
00:02:37.240 | If you enjoy it, subscribe on YouTube,
00:02:39.840 | give it five stars on Apple Podcast,
00:02:41.880 | support it on Patreon,
00:02:43.280 | or simply connect with me on Twitter,
00:02:45.440 | Lex Fridman, spelled F-R-I-D-M-A-N.
00:02:49.480 | This show is presented by Cash App,
00:02:51.760 | the number one finance app in the App Store.
00:02:54.280 | I personally use Cash App to send money to friends,
00:02:56.840 | but you can also use it to buy, sell,
00:02:58.800 | and deposit Bitcoin in just seconds.
00:03:01.360 | Cash App also has a new investing feature.
00:03:04.200 | You can buy fractions of a stock, say $1 worth,
00:03:07.160 | no matter what the stock price is.
00:03:09.240 | Brokerage services are provided by Cash App Investing,
00:03:12.000 | a subsidiary of Square, a member of SIPC.
00:03:15.580 | I'm excited to be working with Cash App
00:03:17.680 | to support one of my favorite organizations called FIRST,
00:03:20.920 | best known for their FIRST Robotics and LEGO competitions.
00:03:24.360 | They educate and inspire hundreds of thousands of students
00:03:27.720 | in over 110 countries,
00:03:29.640 | and have a perfect rating on Charity Navigator,
00:03:31.820 | which means the donated money
00:03:33.840 | is used to maximum effectiveness.
00:03:36.560 | When you get Cash App from the App Store,
00:03:38.680 | Google Play, and use code LEXPODCAST,
00:03:42.840 | you'll get $10, and Cash App will also donate $10 to FIRST,
00:03:47.140 | which again, is an organization
00:03:48.800 | that I've personally seen inspire girls
00:03:50.760 | and boys to dream of engineering a better world.
00:03:54.560 | And now, here's my conversation with Noam Chomsky.
00:03:58.940 | I apologize for the absurd philosophical question,
00:04:04.160 | but if an alien species were to visit Earth,
00:04:07.200 | do you think we would be able to find a common language
00:04:10.960 | or protocol of communication with them?
00:04:13.720 | - There are arguments to the effect that we could.
00:04:18.360 | In fact, one of them was Marvin Minsky's.
00:04:22.480 | Back about 20 or 30 years ago,
00:04:24.760 | he performed a brief experiment
00:04:29.040 | with a student of his, Dan Bobrow.
00:04:31.000 | They essentially ran the simplest possible
00:04:35.120 | Turing machines, just free to see what would happen.
00:04:39.600 | And most of them crashed,
00:04:42.600 | either got into an infinite loop or stopped.
00:04:47.840 | The few that persisted essentially gave
00:04:52.240 | something like arithmetic.
00:04:54.920 | And his conclusion from that was that
00:04:58.880 | if some alien species developed higher intelligence,
00:05:05.000 | they would at least have arithmetic.
00:05:07.480 | They would at least have what the simplest computer would do.
00:05:12.480 | And in fact, he didn't know that at the time,
00:05:16.080 | but the core principles of natural language
00:05:20.800 | are based on operations which yield something
00:05:25.240 | like arithmetic in the limiting case and the minimal case.
00:05:29.400 | So it's conceivable that a mode of communication
00:05:34.080 | could be established based on the core properties
00:05:38.600 | of human language and the core properties of arithmetic,
00:05:41.480 | which maybe are universally shared.
00:05:44.960 | So it's conceivable.
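
To make that thought experiment concrete: one way to reproduce its flavor is to enumerate the simplest possible machines, let each one run freely for a while, and see what survives. The sketch below does this in Python. It is not Minsky and Bobrow's actual setup (no details of their runs are given here); the two-state, two-symbol machines, the step budget, the loop test, and the "counting" threshold are all assumptions chosen only to illustrate the pattern described: most tiny machines halt or fall into a loop, and the few that keep going tend to lay down something like unary counting, the rudiments of arithmetic.

```python
from collections import Counter
from itertools import product

STATES = (0, 1)      # two working states; -1 is an explicit halt state
SYMBOLS = (0, 1)
MOVES = (-1, +1)     # move the head left or right
HALT = -1

def all_machines():
    """Every transition table for a 2-state, 2-symbol machine (halting allowed)."""
    actions = list(product(SYMBOLS, MOVES, STATES + (HALT,)))  # (write, move, next state)
    keys = list(product(STATES, SYMBOLS))                      # (current state, symbol read)
    for choice in product(actions, repeat=len(keys)):
        yield dict(zip(keys, choice))

def run(table, max_steps=200):
    """Run one machine on a blank tape and report roughly what it did."""
    tape, state, head, seen = {}, 0, 0, set()
    for _ in range(max_steps):
        if state == HALT:
            return "halt"
        write, move, nxt = table[(state, tape.get(head, 0))]
        tape[head] = write
        head, state = head + move, nxt
        cfg = (state, head, tuple(sorted(tape.items())))
        if len(tape) <= 3 and cfg in seen:     # crude detection of a tight loop
            return "loop"
        seen.add(cfg)
    # Never halted within the budget: did it lay down a growing pattern of 1s,
    # i.e. something like counting in unary ("at least arithmetic")?
    return "grows" if sum(tape.values()) > 20 else "wanders"

counts = Counter(run(t) for t in all_machines())
print(counts)   # the great majority halt or loop; a small residue keeps "counting"
```

Running it prints a tally over the four outcomes; the exact numbers depend on the arbitrary thresholds above, but the overall shape, mostly crashes and loops with a small arithmetic-like residue, is the point of the anecdote.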
00:05:46.720 | - What is the structure of that language,
00:05:50.840 | of language as an internal system inside our mind
00:05:55.200 | versus an external system as it's expressed?
00:05:58.120 | - It's not an alternative.
00:06:00.920 | It's two different concepts of language.
00:06:02.960 | - Different.
00:06:03.800 | - It's a simple fact that there's something about you,
00:06:07.320 | a trait of yours, part of the organism, you,
00:06:11.680 | that determines that you're talking English
00:06:14.640 | and not Tagalog, let's say.
00:06:17.000 | So there is an inner system.
00:06:18.660 | It determines the sound and meaning
00:06:23.000 | of the infinite number of expressions of your language.
00:06:27.160 | It's localized.
00:06:28.760 | It's not in your foot, obviously.
00:06:30.400 | It's in your brain.
00:06:31.780 | If you look more closely,
00:06:32.960 | it's in specific configurations of your brain.
00:06:36.500 | And that's essentially like the internal structure
00:06:40.640 | of your laptop, whatever programs it has are in there.
00:06:45.000 | Now, one of the things you can do with language,
00:06:47.880 | it's a marginal thing, in fact,
00:06:50.080 | is use it to externalize what's in your head.
00:06:54.840 | Actually, most of your use of language
00:06:56.680 | is thought, internal thought.
00:06:58.860 | But you can do what you and I are now doing.
00:07:00.960 | We can externalize it.
00:07:02.680 | Well, the set of things that we're externalizing
00:07:05.680 | are an external system.
00:07:07.720 | They're noises in the atmosphere.
00:07:11.200 | And you can call that language
00:07:12.960 | in some other sense of the word,
00:07:14.400 | but it's not a set of alternatives.
00:07:16.840 | These are just different concepts.
00:07:19.020 | - So how deep do the roots of language go in our brain?
00:07:22.240 | Our mind, is it yet another feature like vision,
00:07:26.820 | or is it something more fundamental
00:07:28.520 | from which everything else springs in the human mind?
00:07:31.560 | - Well, in a way, it's like vision.
00:07:33.880 | There's something about our genetic endowment
00:07:38.620 | that determines that we have a mammalian
00:07:41.640 | rather than an insect visual system.
00:07:44.760 | And there's something in our genetic endowment
00:07:47.440 | that determines that we have a human language faculty.
00:07:51.500 | No other organism has anything remotely similar.
00:07:55.240 | So in that sense, it's internal.
00:07:58.320 | Now, there is a long tradition,
00:07:59.760 | which I think is valid, going back centuries
00:08:03.640 | to the early scientific revolution, at least,
00:08:06.120 | that holds that language is the core of human life.
00:08:11.120 | The core of human cognitive nature.
00:08:13.640 | It's the source.
00:08:14.640 | It's the mode for constructing thoughts and expressing them.
00:08:19.640 | That is what forms thought.
00:08:22.800 | And it's got fundamental creative capacities.
00:08:27.240 | It's free, independent, unbounded, and so on.
00:08:31.320 | And undoubtedly, I think, the basis
00:08:34.880 | for our creative capacities
00:08:38.480 | and the other remarkable human capacities
00:08:43.480 | that lead to the unique achievements
00:08:47.520 | and not-so-great achievements of the species.
00:08:51.320 | - The capacity to think and reason.
00:08:53.620 | Do you think that's deeply linked with language?
00:08:56.240 | Do you think the way we, the internal language system
00:08:59.880 | is essentially the mechanism
00:09:01.600 | by which we also reason internally?
00:09:04.160 | - It is undoubtedly the mechanism by which we reason.
00:09:06.960 | There may also be other,
00:09:09.360 | there are undoubtedly other faculties involved in reasoning.
00:09:13.240 | We have a kind of scientific faculty.
00:09:17.560 | Nobody knows what it is.
00:09:18.880 | But whatever it is that enables us
00:09:20.940 | to pursue certain lines of endeavor and inquiry
00:09:25.480 | and to decide what makes sense and doesn't make sense
00:09:29.800 | and to achieve a certain degree
00:09:32.240 | of understanding of the world,
00:09:33.640 | that uses language but goes beyond it.
00:09:37.440 | Just as using our capacity for arithmetic
00:09:42.000 | is not the same as having the capacity.
00:09:44.880 | - The idea of capacity, our biology, evolution,
00:09:49.360 | you've talked about it defining, essentially,
00:09:51.560 | our capacity, our limit, and our scope.
00:09:55.200 | Can you try to define what limit and scope are?
00:09:58.840 | And the bigger question,
00:10:01.160 | do you think it's possible to find
00:10:03.640 | the limit of human cognition?
00:10:06.220 | - Well, that's an interesting question.
00:10:09.640 | It's commonly believed, most scientists believe,
00:10:13.080 | that human intelligence can answer
00:10:16.400 | any question in principle.
00:10:19.340 | I think that's a very strange belief.
00:10:21.800 | If we're biological organisms, which are not angels,
00:10:26.160 | then we, our capacities ought to have scope and limits,
00:10:33.200 | which are interrelated.
00:10:34.960 | - Can you define those two terms?
00:10:36.640 | - Well, let's take a concrete example.
00:10:40.780 | Your genetic endowment determines
00:10:44.080 | that you can have a mammalian visual system,
00:10:47.000 | arms and legs and so on,
00:10:48.800 | and therefore become a rich, complex organism.
00:10:53.480 | But if you look at that same genetic endowment,
00:10:56.260 | it prevents you from developing in other directions.
00:11:00.020 | There's no kind of experience
00:11:01.760 | which would lead the embryo
00:11:05.800 | to develop an insect visual system,
00:11:08.800 | or to develop wings instead of arms.
00:11:12.000 | So the very endowment that confers richness and complexity
00:11:17.000 | also sets bounds on what can be attained.
00:11:23.480 | Now, I assume that our cognitive capacities
00:11:27.480 | are part of the organic world,
00:11:29.680 | therefore they should have the same properties.
00:11:32.280 | If they had no built-in capacity
00:11:35.720 | to develop a rich and complex structure,
00:11:39.200 | we would understand nothing.
00:11:41.920 | Just as if your genetic endowment
00:11:46.080 | did not compel you to develop arms and legs,
00:11:50.360 | you would just be some kind of a random amoeboid creature
00:11:54.040 | with no structure at all.
00:11:56.120 | So I think it's plausible to assume
00:11:58.360 | that there are limits,
00:12:00.280 | and I think we even have some evidence as to what they are.
00:12:03.720 | So for example, there's a classic moment
00:12:06.700 | in the history of science at the time of Newton.
00:12:10.700 | From Galileo to Newton,
00:12:13.920 | modern science developed on a fundamental assumption
00:12:17.960 | which Newton also accepted,
00:12:20.160 | namely that the world, the entire universe,
00:12:24.120 | is a mechanical object.
00:12:26.380 | And by mechanical, they meant something like
00:12:29.300 | the kinds of artifacts that were being developed
00:12:31.680 | by skilled artisans all over Europe,
00:12:34.300 | gears, levers, and so on.
00:12:37.160 | And their belief was, well,
00:12:39.360 | the world is just a more complex variant of this.
00:12:42.060 | Newton, to his astonishment and distress,
00:12:47.980 | proved that there are no machines,
00:12:51.000 | that there's interaction without contact.
00:12:54.340 | His contemporaries like Leibniz and Huygens
00:12:57.680 | just dismissed this as returning to the mysticism
00:13:02.600 | of the neo-scholastics.
00:13:04.000 | And Newton agreed.
00:13:05.880 | He said it is totally absurd.
00:13:08.280 | No person of any scientific intelligence
00:13:11.160 | could ever accept this for a moment.
00:13:13.800 | In fact, he spent the rest of his life
00:13:15.320 | trying to get around it somehow,
00:13:17.760 | as did many other scientists.
00:13:20.340 | That was the very criterion of intelligibility
00:13:24.080 | for say, Galileo or Newton.
00:13:26.540 | A theory did not produce an intelligible world
00:13:31.280 | unless you could duplicate it in a machine.
00:13:34.060 | He showed you can't.
00:13:35.120 | There aren't any machines.
00:13:37.560 | Finally, after a long struggle, took a long time,
00:13:41.240 | scientists just accepted this as common sense.
00:13:45.240 | But that's a significant moment.
00:13:47.400 | That means they abandoned the search
00:13:49.360 | for an intelligible world.
00:13:51.780 | And the great philosophers of the time
00:13:54.800 | understood that very well.
00:13:57.000 | So for example, David Hume, in his encomium to Newton,
00:14:02.000 | whom he called the greatest thinker ever and so on,
00:14:05.560 | wrote that he unveiled many of the secrets of nature,
00:14:10.560 | but by showing the imperfections
00:14:13.320 | of the mechanical philosophy, mechanical science,
00:14:17.520 | he showed that there are mysteries
00:14:21.240 | which ever will remain.
00:14:23.560 | And science just changed its goals.
00:14:26.760 | It abandoned the mysteries; if it can't solve them,
00:14:29.800 | it puts them aside.
00:14:31.480 | We only look for intelligible theories.
00:14:34.760 | Newton's theories were intelligible.
00:14:36.720 | It's just what they described wasn't.
00:14:39.120 | Well, Locke said the same thing.
00:14:42.820 | I think they're basically right.
00:14:44.840 | And if so, that showed something
00:14:47.040 | about the limits of human cognition.
00:14:49.720 | We cannot attain the goal of understanding the world,
00:14:54.720 | of finding an intelligible world.
00:14:58.440 | This mechanical philosophy, Galileo to Newton,
00:15:02.520 | there's a good case that can be made
00:15:05.320 | that that's our instinctive conception of how things work.
00:15:10.320 | So if say infants are tested with things,
00:15:16.280 | if this moves and then this moves,
00:15:18.680 | they kind of invent something that must be invisible
00:15:22.080 | that's in between them that's making them move.
00:15:24.960 | - Yeah, we like physical contact.
00:15:26.560 | Something about our brain seeks--
00:15:28.960 | - Makes us want a world like that,
00:15:31.560 | just like it wants a world
00:15:32.960 | that has regular geometric figures.
00:15:36.640 | So for example, Descartes pointed this out,
00:15:38.940 | that if you have an infant
00:15:41.840 | who's never seen a triangle before,
00:15:45.160 | and you draw a triangle,
00:15:47.680 | the infant will see a distorted triangle,
00:15:52.280 | not whatever crazy figure it actually is.
00:15:55.640 | You know, three lines not coming quite together,
00:15:58.440 | one of them a little bit curved and so on.
00:16:00.360 | We just impose a conception of the world
00:16:04.560 | in terms of perfect geometric objects.
00:16:09.360 | It's now been shown that it goes way beyond that,
00:16:12.180 | that if you show on a tachistoscope,
00:16:15.440 | let's say a couple of lights shining,
00:16:18.560 | you do it three or four times in a row,
00:16:20.880 | what people actually see is a rigid object in motion,
00:16:25.240 | not whatever's there.
00:16:26.920 | We all know that from a television set, basically.
00:16:31.840 | - So that gives us hints of potential limits
00:16:34.640 | to our cognition.
00:16:35.920 | - I think it does, but it's a very contested view.
00:16:39.400 | If you do a poll among scientists,
00:16:42.280 | most would say that's impossible, we can understand anything.
00:16:45.440 | - Let me ask and give me a chance with this.
00:16:48.600 | So I just spent a day at a company called Neuralink,
00:16:52.520 | and what they do is try to design
00:16:56.360 | what's called the brain-computer interface.
00:16:59.580 | So they try to do thousands of readings in the brain,
00:17:03.280 | be able to read what the neurons are firing,
00:17:05.580 | and then stimulate back, so two-way.
00:17:08.520 | Do you think their dream is to expand the capacity
00:17:12.760 | of the brain to attain information,
00:17:16.640 | sort of increase the bandwidth
00:17:18.160 | at which we can search Google kind of thing?
00:17:22.440 | Do you think our cognitive capacity might be expanded,
00:17:26.240 | our linguistic capacity, our ability to reason
00:17:29.340 | might be expanded by adding a machine into the picture?
00:17:33.160 | - Can be expanded in a certain sense,
00:17:35.600 | but a sense that was known thousands of years ago.
00:17:39.880 | A book expands your cognitive capacity.
00:17:43.720 | Okay, so this could expand it too.
00:17:46.040 | - But it's not a fundamental expansion.
00:17:47.960 | It's not totally new things could be understood.
00:17:50.960 | - Well, nothing that goes beyond
00:17:53.080 | our native cognitive capacities.
00:17:56.480 | Just like you can't turn the visual system
00:17:58.640 | into an insect system.
00:18:00.680 | - Well, I mean, the thought is,
00:18:04.240 | the thought is perhaps you can't directly,
00:18:06.840 | but you can map.
00:18:08.400 | - You could, but we already,
00:18:10.080 | we know that without this experiment.
00:18:12.400 | You could map what a bee sees,
00:18:15.040 | and present it in a form so that we could follow it.
00:18:17.960 | In fact, every bee scientist does that.
00:18:19.920 | - But you don't think there's something greater than bees
00:18:24.400 | that we can map, and then all of a sudden discover
00:18:28.440 | something, be able to understand a quantum world,
00:18:32.840 | quantum mechanics, be able to start
00:18:34.800 | to make sense of it.
00:18:36.000 | - Students at MIT study and understand quantum mechanics.
00:18:40.500 | - But they always reduce it to the infant, the physical.
00:18:45.120 | I mean, they don't really understand.
00:18:46.880 | - Oh, you don't, there's things,
00:18:48.240 | that may be another area where there's just a limit
00:18:51.360 | to understanding.
00:18:52.720 | We understand the theories,
00:18:54.480 | but the world that it describes doesn't make any sense.
00:18:58.440 | So, you know, the experiment,
00:19:00.360 | Schrodinger's cat, for example,
00:19:02.400 | can understand the theory,
00:19:03.720 | but as Schrodinger pointed out,
00:19:05.760 | it's an unintelligible world.
00:19:07.500 | One of the reasons why Einstein was always very skeptical
00:19:13.240 | about quantum theory,
00:19:14.420 | he described himself as a classical realist,
00:19:19.280 | one who insists on intelligibility.
00:19:23.040 | - He has something in common with infants, in that way.
00:19:27.440 | So, back to linguistics.
00:19:30.000 | If you could humor me,
00:19:32.680 | what are the most beautiful or fascinating aspects
00:19:35.300 | of language, or ideas in linguistics,
00:19:37.720 | or cognitive science that you've seen
00:19:39.560 | in a lifetime of studying language
00:19:42.080 | and studying the human mind?
00:19:44.160 | - Well, I think the deepest property of language
00:19:49.160 | and puzzling property that's been discovered
00:19:52.880 | is what is sometimes called structure dependence.
00:19:57.180 | We now understand it pretty well,
00:19:59.580 | but it was puzzling for a long time.
00:20:01.940 | I'll give you a concrete example.
00:20:03.600 | So, suppose you say,
00:20:05.640 | the guy who fixed the car carefully packed his tools.
00:20:11.860 | It's ambiguous.
00:20:13.100 | He could fix the car carefully,
00:20:15.400 | or carefully pack his tools.
00:20:17.940 | Suppose you put carefully in front,
00:20:21.060 | carefully the guy who fixed the car packed his tools.
00:20:25.860 | Then it's carefully packed, not carefully fixed.
00:20:29.380 | And in fact, you do that even if it makes no sense.
00:20:32.320 | So, suppose you say, carefully,
00:20:34.660 | the guy who fixed the car is tall.
00:20:38.160 | You have to interpret it as carefully he's tall,
00:20:41.900 | even though that doesn't make any sense.
00:20:44.300 | And notice that that's a very puzzling fact,
00:20:47.220 | because you're relating carefully,
00:20:50.100 | not to the linearly closest verb,
00:20:53.660 | but to the linearly more remote verb.
00:20:57.340 | Linear closeness is an easy computation,
00:21:02.340 | but here you're doing a much more,
00:21:03.780 | what looks like a more complex computation.
00:21:06.820 | You're doing something that's taking you
00:21:09.540 | essentially to the more remote thing.
00:21:11.940 | If you look at the actual structure of the sentence,
00:21:17.820 | where the phrases are and so on,
00:21:20.180 | turns out you're picking out
00:21:21.540 | the structurally closest thing,
00:21:24.220 | but the linearly more remote thing.
00:21:27.980 | But notice that what's linear is 100% of what you hear.
00:21:32.500 | You never hear structure; you can't.
00:21:35.160 | So what you're doing is,
00:21:37.260 | and incidentally this is universal,
00:21:39.220 | all constructions, all languages.
00:21:42.180 | And what we're compelled to do is carry out
00:21:45.540 | what looks like the more complex computation
00:21:48.680 | on material that we never hear,
00:21:52.260 | and we ignore 100% of what we hear
00:21:55.380 | and the simplest computation.
00:21:57.540 | By now there's even a neural basis for this
00:22:00.700 | that's somewhat understood.
00:22:02.860 | And there's good theories by now
00:22:04.380 | that explain why it's true.
00:22:06.620 | That's a deep insight into the surprising
00:22:10.580 | nature of language with many consequences.
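
Here is a small sketch of the structure-dependence point, assuming a hand-drawn constituency tree for the example sentence (the bracketing is only illustrative, not the output of any particular grammar or parser). The "linear" rule picks the verb nearest to "carefully" in the word string and returns "fixed"; the "structural" rule looks only at the tree, skips the relative clause buried inside the subject, and returns "packed", the reading speakers actually assign.

```python
# Toy tree for "carefully the guy who fixed the car packed his tools":
# (label, children...) for phrases, bare strings for words.
SENT = ("S",
        ("ADV", "carefully"),
        ("NP",
         ("NP", ("DET", "the"), ("N", "guy")),
         ("RC", ("REL", "who"),
          ("VP", ("V", "fixed"), ("NP", ("DET", "the"), ("N", "car"))))),
        ("VP", ("V", "packed"), ("NP", ("DET", "his"), ("N", "tools"))))

def words(node):
    """Flatten a tree back into its word string."""
    if isinstance(node, str):
        return [node]
    return [w for child in node[1:] for w in words(child)]

def subtrees(node):
    """Yield every phrase node in the tree."""
    if not isinstance(node, str):
        yield node
        for child in node[1:]:
            yield from subtrees(child)

def linear_attachment(tree, adverb):
    """Attach the adverb to the verb closest to it in the word string."""
    ws = words(tree)
    verbs = [words(n)[0] for n in subtrees(tree) if n[0] == "V"]
    i = ws.index(adverb)
    return min(verbs, key=lambda v: abs(ws.index(v) - i))

def structural_attachment(tree, adverb):
    """Attach the adverb to the verb of the clause it modifies: here, the
    main-clause VP, skipping the VP inside the subject NP (the adverb
    argument is kept only for a parallel signature)."""
    for child in tree[1:]:
        if not isinstance(child, str) and child[0] == "VP":
            for sub in child[1:]:
                if not isinstance(sub, str) and sub[0] == "V":
                    return words(sub)[0]

print(linear_attachment(SENT, "carefully"))      # fixed  (linearly nearest, wrong reading)
print(structural_attachment(SENT, "carefully"))  # packed (structurally nearest, right reading)
```

The structural rule never consults word order at all, which is the sense in which speakers compute over structure they never hear.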
00:22:13.160 | - Let me ask you about a field of machine learning,
00:22:17.540 | deep learning.
00:22:18.940 | There's been a lot of progress in
00:22:22.060 | neural network based machine learning in the recent decade.
00:22:26.380 | Of course neural network research goes back many decades.
00:22:29.940 | What do you think are the limits of deep learning,
00:22:35.580 | of neural network based machine learning?
00:22:38.480 | - Well to give a real answer to that,
00:22:41.140 | you'd have to understand the exact processes
00:22:44.900 | that are taking place, and those are pretty opaque.
00:22:47.940 | So it's pretty hard to prove a theorem
00:22:50.260 | about what can be done and what can't be done.
00:22:54.060 | But I think it's reasonably clear.
00:22:56.820 | I mean putting technicalities aside,
00:22:59.180 | what deep learning is doing is taking huge numbers
00:23:03.980 | of examples and finding some patterns.
00:23:07.740 | Okay, that could be interesting.
00:23:10.340 | In some areas it is, but we have to ask here
00:23:13.460 | a certain question.
00:23:15.060 | Is it engineering or is it science?
00:23:18.220 | Engineering in the sense of just trying to build something
00:23:21.380 | that's useful, or science in the sense that it's trying
00:23:24.800 | to understand something about elements of the world.
00:23:28.740 | So take a Google parser.
00:23:31.800 | We can ask that question.
00:23:33.860 | Is it useful?
00:23:35.140 | Yeah, it's pretty useful.
00:23:36.420 | You know, I use a Google translator.
00:23:39.380 | So on engineering grounds it's kind of worth having,
00:23:43.420 | like a bulldozer.
00:23:44.880 | Does it tell you anything about human language?
00:23:48.920 | Zero.
00:23:49.760 | Nothing.
00:23:51.780 | And in fact, it's very striking.
00:23:54.140 | It's from the very beginning,
00:23:56.780 | it's just totally remote from science.
00:24:00.300 | So what is a Google parser doing?
00:24:02.580 | It's taking an enormous text,
00:24:05.140 | let's say the Wall Street Journal corpus,
00:24:07.700 | and asking how close can we come to getting
00:24:11.820 | the right description of every sentence in the corpus.
00:24:16.380 | Well, every sentence in the corpus
00:24:18.500 | is essentially an experiment.
00:24:21.540 | Each sentence that you produce is an experiment
00:24:24.740 | which is, am I a grammatical sentence?
00:24:27.700 | The answer is usually yes.
00:24:29.640 | So most of the stuff in the corpus is grammatical sentences.
00:24:33.220 | But now ask yourself, is there any science
00:24:36.820 | which takes random experiments
00:24:40.140 | which are carried out for no reason whatsoever
00:24:43.700 | and tries to find out something from them?
00:24:46.500 | Like if you're, say, a chemistry PhD student,
00:24:49.640 | you wanna get a thesis, can you say,
00:24:51.180 | well, I'm just gonna do a lot of,
00:24:53.380 | mix a lot of things together, no purpose,
00:24:56.460 | just maybe I'll find something.
00:24:59.620 | You'd be laughed out of the department.
00:25:01.580 | Science tries to find critical experiments,
00:25:06.200 | ones that answer some theoretical question.
00:25:09.140 | Doesn't care about coverage of millions of experiments.
00:25:12.980 | So it just begins by being very remote from science,
00:25:16.220 | and it continues like that.
00:25:18.260 | So the usual question that's asked about,
00:25:21.660 | say, a Google parser, is how well does it do,
00:25:25.180 | or some parser, how well does it do on a corpus?
00:25:28.340 | But there's another question that's never asked.
00:25:31.140 | How well does it do on something
00:25:32.940 | that violates all the rules of language?
00:25:36.100 | So for example, take the structure dependence case
00:25:38.740 | that I mentioned.
00:25:39.700 | Suppose there was a language in which you used
00:25:43.420 | linear proximity as the mode of interpretation.
00:25:48.420 | Deep learning would work very easily on that.
00:25:51.740 | In fact, much more easily than an actual language.
00:25:54.780 | Is that a success?
00:25:55.940 | No, that's a failure.
00:25:57.600 | From a scientific point of view, it's a failure.
00:26:00.460 | It shows that we're not discovering
00:26:03.500 | the nature of the system at all,
00:26:05.820 | 'cause it does just as well or even better
00:26:07.740 | on things that violate the structure of the system.
00:26:10.900 | And it goes on from there.
00:26:12.740 | It's not an argument against doing it.
00:26:14.820 | It is useful to have devices like this.
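
A tiny illustration of the evaluation point, using made-up sentences rather than anything from an actual corpus: a "parser" that always attaches a fronted adverb to the linearly nearest verb scores perfectly on ordinary sentences, where the nearest verb happens to be the main verb, yet fails every structure-dependence probe, where the two diverge. High coverage on a corpus therefore cannot distinguish it from the real, structure-dependent rule; only the critical cases can.

```python
# Each item: (word list, index of the adverb, index of the verb speakers attach it to).
ordinary = [
    ("carefully he packed his tools".split(), 0, 2),
    ("quickly the boy ate the apple".split(), 0, 3),
]
probes = [
    ("carefully the guy who fixed the car packed his tools".split(), 0, 7),
    ("quietly the man who sang the song left the room".split(), 0, 7),
]

VERBS = {"packed", "ate", "fixed", "sang", "left"}   # toy lexicon for this sketch

def linear_parser(words, adv_idx):
    """Attach the adverb to the nearest verb in the word string."""
    verb_positions = [i for i, w in enumerate(words) if w in VERBS]
    return min(verb_positions, key=lambda i: abs(i - adv_idx))

def accuracy(dataset):
    hits = sum(linear_parser(w, a) == gold for w, a, gold in dataset)
    return hits / len(dataset)

print("coverage on ordinary sentences:", accuracy(ordinary))  # 1.0
print("accuracy on structure probes:  ", accuracy(probes))    # 0.0
```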
00:26:17.220 | - So yes, so neural networks are kind of approximators
00:26:20.660 | that look, there's echoes of the behavioral debates,
00:26:24.220 | right, behaviorism.
00:26:26.140 | - More than echoes.
00:26:27.600 | Many of the people in deep learning say they've vindicated it.
00:26:31.680 | (Lex laughs)
00:26:32.820 | Terry Sejnowski, for example, in his recent book says
00:26:36.300 | this vindicates Skinnerian behaviorism.
00:26:39.540 | It doesn't have anything to do with it.
00:26:41.460 | - Yes, but I think there's something
00:26:43.780 | actually fundamentally different when the data set is huge.
00:26:48.300 | But your point is extremely well taken.
00:26:51.180 | But do you think we can learn, approximate
00:26:55.420 | that interesting complex structure of language
00:26:58.820 | with neural networks that will somehow help us
00:27:01.340 | understand the science?
00:27:02.800 | - It's possible.
00:27:04.500 | I mean, you find patterns that you hadn't noticed,
00:27:07.300 | let's say, could be.
00:27:09.780 | In fact, it's very much like a kind of linguistics
00:27:13.620 | that's done, what's called corpus linguistics.
00:27:18.100 | When you, suppose you have some language
00:27:21.100 | where all the speakers have died out,
00:27:23.460 | but you have records.
00:27:25.140 | So you just look at the records
00:27:28.100 | and see what you can figure out from that.
00:27:30.620 | It's much better to have actual speakers
00:27:33.700 | where you can do critical experiments.
00:27:36.100 | But if they're all dead, you can't do them.
00:27:38.540 | So you have to try to see what you can find out
00:27:40.800 | from just looking at the data that's around.
00:27:43.900 | You can learn things.
00:27:45.060 | Actually, paleoanthropology is very much like that.
00:27:48.420 | You can't do a critical experiment
00:27:50.620 | on what happened two million years ago.
00:27:53.540 | So you're kind of forced just to take what data's around
00:27:56.580 | and see what you can figure out from it.
00:27:59.260 | Okay, it's a serious study.
00:28:01.460 | - So let me venture into another whole body of work
00:28:05.620 | and philosophical question.
00:28:07.440 | You've said that evil in society arises from institutions,
00:28:13.100 | not inherently from our nature.
00:28:15.580 | Do you think most human beings are good,
00:28:17.840 | they have good intent,
00:28:19.620 | or do most have the capacity for intentional evil
00:28:22.900 | that depends on their upbringing,
00:28:24.660 | depends on their environment, on context?
00:28:27.220 | - I wouldn't say that they don't arise from our nature.
00:28:30.980 | Anything we do arises from our nature.
00:28:34.060 | And the fact that we have certain institutions, not others,
00:28:38.140 | is one mode in which human nature has expressed itself.
00:28:43.140 | But as far as we know,
00:28:45.460 | human nature could yield
00:28:47.260 | many different kinds of institutions.
00:28:50.260 | The particular ones that have developed
00:28:53.180 | have to do with historical contingency,
00:28:56.980 | the who conquered whom and that sort of thing.
00:28:59.280 | They're not rooted in our nature
00:29:03.860 | in the sense that they're essential to our nature.
00:29:06.780 | So it's commonly argued these days
00:29:10.180 | that something like market systems
00:29:12.940 | is just part of our nature.
00:29:15.600 | But we know from a huge amount of evidence
00:29:18.700 | that that's not true.
00:29:19.540 | There's all kinds of other structures.
00:29:21.780 | It's a particular fact about a moment of modern history.
00:29:26.260 | Others have argued that the roots of classical liberalism
00:29:30.780 | actually argue that what's sometimes called
00:29:34.460 | an instinct for freedom,
00:29:36.100 | the instinct to be free of domination by illegitimate authority
00:29:42.060 | is the core of our nature.
00:29:43.700 | That would be the opposite of this.
00:29:45.660 | And we don't know.
00:29:47.540 | We just know that human nature can accommodate both kinds.
00:29:50.740 | - If you look back at your life,
00:29:54.940 | is there a moment in your intellectual life
00:29:58.140 | or life in general that jumps from memory
00:30:00.220 | that brought you happiness,
00:30:02.140 | that you would love to relive again?
00:30:04.040 | - Sure.
00:30:06.500 | Falling in love, having children.
00:30:10.220 | - What about, so you have put forward into the world
00:30:13.900 | a lot of incredible ideas in linguistics,
00:30:17.700 | in cognitive science.
00:30:20.440 | In terms of ideas that just excited you
00:30:24.420 | when they first came to you,
00:30:26.100 | that you would love to relive those moments.
00:30:28.980 | - Well, I mean, when you make a discovery
00:30:32.180 | about something that's exciting,
00:30:34.140 | like say, even the observation of structure dependence
00:30:39.140 | and on from that, the explanation for it.
00:30:44.460 | But the major things just seem like common sense.
00:30:49.460 | So if you go back to,
00:30:52.300 | take your question about external and internal language,
00:30:55.820 | you go back to say the 1950s,
00:30:58.140 | language was almost entirely regarded as an external object,
00:31:03.900 | something outside the mind.
00:31:06.260 | It just seemed obvious that that can't be true.
00:31:09.380 | Like I said, there's something about you
00:31:13.220 | that determines you're talking English,
00:31:15.360 | not Swahili or something.
00:31:18.660 | But that's not really a discovery,
00:31:20.280 | that's just an observation of what's transparent.
00:31:24.140 | You might say it's kind of like the 17th century,
00:31:29.140 | the beginnings of modern science.
00:31:33.620 | They came from being willing to be puzzled
00:31:37.980 | about things that seemed obvious.
00:31:40.420 | So it seems obvious that a heavy ball of lead
00:31:44.260 | will fall faster than a light ball of lead.
00:31:47.620 | But Galileo was not impressed
00:31:50.380 | by the fact that it seemed obvious.
00:31:52.720 | So he wanted to know if it's true.
00:31:54.420 | Carried out experiments, actually thought experiments,
00:31:59.160 | never actually carried them out,
00:32:01.200 | which showed that it can't be true.
00:32:03.700 | And out of things like that, observations of that kind,
00:32:09.520 | why does a ball fall to the ground
00:32:14.520 | instead of rising, let's say?
00:32:16.920 | Seems obvious, until you start thinking about it.
00:32:20.060 | Because why does steam rise, let's say?
00:32:23.900 | And I think the beginnings of modern linguistics,
00:32:27.260 | roughly in the 50s, are kind of like that,
00:32:30.060 | just being willing to be puzzled about phenomena
00:32:33.620 | that looked, from some point of view, obvious.
00:32:38.020 | And for example, a kind of doctrine,
00:32:41.340 | almost official doctrine of structural linguistics
00:32:44.960 | in the 50s was that languages can differ
00:32:49.620 | from one another in arbitrary ways,
00:32:52.660 | and each one has to be studied on its own
00:32:56.460 | without any presuppositions.
00:32:58.900 | In fact, there were similar views among biologists
00:33:02.380 | about the nature of organisms,
00:33:04.460 | that they're so different
00:33:06.780 | when you look at them that almost anything
00:33:09.100 | could be an organism.
00:33:10.960 | Well, in both domains, it's been learned
00:33:13.140 | that that's very far from true.
00:33:15.500 | There are very narrow constraints
00:33:16.980 | on what could be an organism or what could be a language.
00:33:20.600 | But that's just the nature of inquiry.
00:33:26.580 | - Science in general, yeah, inquiry.
00:33:29.380 | So one of the peculiar things about us human beings
00:33:33.400 | is our mortality.
00:33:35.260 | Ernest Becker explored it in general.
00:33:38.180 | Do you ponder the value of mortality?
00:33:40.420 | Do you think about your own mortality?
00:33:42.380 | - I used to when I was about 12 years old.
00:33:46.800 | I wondered, I didn't care much about my own mortality,
00:33:51.900 | but I was worried about the fact
00:33:53.580 | that if my consciousness disappeared,
00:33:57.680 | would the entire universe disappear?
00:34:00.260 | That was frightening.
00:34:01.580 | - Did you ever find an answer to that question?
00:34:03.740 | - No, nobody's ever found an answer,
00:34:05.900 | but I stopped being bothered by it.
00:34:07.860 | It's kind of like Woody Allen in one of his films,
00:34:10.380 | you may recall, he starts, he goes to a shrink
00:34:14.140 | when he's a child and the shrink asks him,
00:34:16.500 | "What's your problem?"
00:34:17.540 | He says, "I just learned that the universe is expanding.
00:34:21.620 | "I can't handle that."
00:34:22.820 | - And then another absurd question is,
00:34:27.260 | what do you think is the meaning of our existence here,
00:34:32.260 | our life on Earth, our brief little moment in time?
00:34:35.860 | - That's something we answer by our own activities.
00:34:39.520 | There's no general answer.
00:34:42.360 | We determine what the meaning of it is.
00:34:44.620 | - The action determines the meaning.
00:34:48.720 | - Meaning in the sense of significance,
00:34:50.560 | not meaning in the sense that chair means this,
00:34:54.440 | but the significance of your life is something you create.
00:34:58.820 | - Noam, thank you so much for talking today.
00:35:02.520 | It was a huge honor.
00:35:04.160 | Thank you so much.
00:35:05.940 | Thanks for listening to this conversation with Noam Chomsky
00:35:08.660 | and thank you to our presenting sponsor, Cash App.
00:35:11.940 | Download it, use code LEXPODCAST,
00:35:14.760 | you'll get $10 and $10 will go to FIRST,
00:35:17.940 | a STEM education nonprofit that inspires hundreds
00:35:20.900 | of thousands of young minds to learn
00:35:23.180 | and to dream of engineering our future.
00:35:25.980 | If you enjoy this podcast, subscribe on YouTube,
00:35:28.620 | give it five stars on Apple Podcast,
00:35:30.600 | support on Patreon or connect with me on Twitter.
00:35:34.260 | Thank you for listening and hope to see you next time.
00:35:37.220 | (upbeat music)
00:35:39.800 | (upbeat music)