
Cognition Is a Function of the Environment | Matt Botvinick and Lex Fridman


Whisper Transcript

00:00:00.000 | You know, if you take an introductory computer science course and they are introducing you
00:00:06.900 | to the notion of Turing machines, one way of articulating what the significance of a
00:00:15.920 | Turing machine is, is that it's a machine emulator.
00:00:21.100 | It can emulate any other machine.
00:00:25.680 | And that way of looking at a Turing machine really sticks
00:00:34.640 | with me.
00:00:35.640 | I think of humans as maybe sharing in some of that character.
00:00:43.040 | We're capacity limited, we're not Turing machines, obviously, but we have the ability to adopt
00:00:48.160 | behaviors that are very much unlike anything we've done before, but there's some basic
00:00:54.600 | mechanism that's implemented in our brain that allows us to run software.
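The "machine emulator" framing above can be sketched in a few lines of code: one fixed simulator loop whose behavior is determined entirely by a transition table passed in as data, so the same loop can "run" any machine you describe. This is a loose illustration of the idea, not anything from the conversation; `run_tm` and the tiny unary-increment machine are made up for the example.

```python
# A minimal Turing machine simulator: one fixed program that "emulates"
# whatever machine its transition table describes (a loose sketch of the
# universal-machine idea; the example machine below is invented).

def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """table: {(state, symbol): (new_state, write_symbol, move)},
    with move in {-1, 0, +1}. Runs until the state 'halt' is reached."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = table[(state, symbol)]
        head += move
    # Read the written cells back out in order, dropping blanks at the ends.
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: append a '1' to a block of 1s (unary increment).
increment = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt", "1", 0),    # write a 1 at the first blank, halt
}

print(run_tm(increment, "111"))  # prints 1111
```

The simulator itself never changes; swapping in a different transition table makes the same loop behave as a different machine, which is the sense in which one machine emulates another.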
00:01:00.400 | But just on that point, you mentioned Turing machines. Nevertheless, in your view,
00:01:04.680 | our brains are fundamentally just computational devices?
00:01:07.720 | Is that what you're getting at?
00:01:08.720 | The line you drew there was a little bit unclear.
00:01:14.400 | Is there any magic in there or is it just basic computation?
00:01:18.560 | I'm happy to think of it as just basic computation, but mind you, I won't be satisfied until somebody
00:01:24.560 | explains to me what the basic computations are that are leading to the full richness
00:01:30.360 | of human cognition.
00:01:32.800 | It's not going to be enough for me to understand what the computations are that allow people
00:01:37.360 | to do arithmetic or play chess.
00:01:40.040 | I want the whole thing.
00:01:44.640 | And a small tangent, because you kind of mentioned the coronavirus: there's group behavior.
00:01:51.880 | Is there something interesting to your search of understanding the human mind where behavior
00:01:58.640 | of large groups or just behavior of groups is interesting?
00:02:01.920 | You know, seeing that as a collective mind, as a collective intelligence, perhaps seeing
00:02:05.880 | the groups of people as a single intelligent organism, especially looking at the reinforcement
00:02:10.920 | learning work you've done recently.
00:02:13.280 | Well, I mean, I have the honor of working with a lot of incredibly smart
00:02:20.960 | people and I wouldn't want to take any credit for leading the way on the multi-agent work
00:02:26.680 | that's come out of my group or DeepMind lately, but I do find it fascinating.
00:02:32.120 | And I mean, I think it can't be debated that human behavior arises within communities.
00:02:44.080 | That just seems to me self-evident.
00:02:47.000 | To me it is self-evident, but it seems to be a profound aspect of whatever
00:02:53.320 | created us. If you look at 2001: A Space Odyssey, when the apes touch
00:02:58.800 | the monolith, that's the magical moment.
00:03:01.320 | I think Yuval Harari argues that the ability of large numbers of humans to hold an
00:03:07.760 | idea, to converge towards an idea together, like you said, shaking hands versus bumping elbows,
00:03:12.440 | somehow converging without even being in a room
00:03:18.240 | together, just this kind of distributed convergence towards an idea over a particular
00:03:23.480 | period of time, seems to be fundamental to just about every aspect of our cognition and our
00:03:30.320 | intelligence. We'll talk about reward, but it seems like we don't really
00:03:36.160 | have a clear objective function under which we operate, yet we all kind of converge towards
00:03:40.920 | one somehow.
00:03:42.440 | And that to me has always been a mystery, one that I think is somehow productive for also understanding
00:03:49.280 | AI systems.
00:03:51.800 | But I guess that's the next step.
00:03:54.520 | The first step is try to understand the mind.
00:03:56.520 | Well, I don't know.
00:03:57.680 | I mean, I think there's something to the argument that that kind of strictly
00:04:04.720 | bottom-up approach is wrongheaded.
00:04:08.000 | In other words, there are basic phenomena, basic aspects of
00:04:13.840 | human intelligence, that can only be understood in the context of groups.
00:04:21.200 | I'm perfectly open to that.
00:04:22.560 | I've never been particularly convinced by the notion that we should consider intelligence
00:04:30.160 | to inhere at the level of communities.
00:04:33.520 | I don't know why.
00:04:35.440 | I'm sort of stuck on the notion that the basic unit that we want to understand is individual
00:04:40.320 | humans.
00:04:41.320 | And if we have to understand that in the context of other humans, fine.
00:04:46.800 | But for me, intelligence is something that I stubbornly define as
00:04:54.880 | an aspect of an individual human.
00:04:56.880 | That's just my view, I don't know.
00:04:57.880 | I'm with you, but that could be the reductionist dream of a scientist because you can understand
00:05:02.680 | a single human.
00:05:04.460 | It also is very possible that intelligence can only arise when there are multiple intelligences.
00:05:11.080 | If that's true, it's a sad thing, because it's very difficult
00:05:16.960 | to study.
00:05:18.000 | But if it were just one human, that one human, that Homo sapiens, would not become
00:05:23.400 | that intelligent.
00:05:24.400 | That's a possibility.
00:05:25.400 | I'm with you.
00:05:28.060 | One thing I will say along these lines is that I think a serious effort to understand
00:05:39.980 | human intelligence, and maybe to build a human-like intelligence, needs to pay just as much
00:05:49.220 | attention to the structure of the environment as to the structure of the cognizing system,
00:05:58.020 | whether it's a brain or an AI system.
00:06:01.360 | That's one thing I took away actually from my early studies with the pioneers of neural
00:06:07.100 | network research, people like Jay McClelland and John Cohen.
00:06:12.100 | The structure of cognition is really only partly a function of the architecture
00:06:21.740 | of the brain and the learning algorithms that it implements.
00:06:25.180 | What really shapes it is the interaction of those things with the structure of the world
00:06:32.340 | in which those things are embedded.
00:06:34.740 | And that's made most clear in reinforcement learning,
00:06:38.740 | where, with a simulated environment, you can only learn as much as you can simulate.
00:06:43.900 | And that's what DeepMind made very clear with the other aspect of the environment,
00:06:49.140 | which is the self-play mechanism, the competitive behavior with the other agent, where the
00:06:55.460 | other agent essentially becomes the environment.
00:06:58.140 | And I mean, one of the most exciting ideas in AI is the self-play mechanism that's
00:07:03.820 | able to learn successfully.
00:07:06.020 | So there you go.
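The self-play idea described here, where the opponent effectively is the environment, can be sketched with a toy game. This is a hypothetical minimal example, not DeepMind's actual setup: two agents learn rock-paper-scissors by best-responding to each other's empirical play (so-called fictitious play), and from each agent's point of view the "environment" is nothing but the other agent's evolving strategy.

```python
# Toy self-play sketch (an invented minimal example, not DeepMind's method):
# each agent best-responds to the other's empirical action frequencies, so
# each one's learning problem is defined entirely by the other's behavior.

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
LOSES_TO = {v: k for k, v in BEATS.items()}  # what beats each action

def best_response(opponent_counts):
    """Pick the action with the highest expected payoff (+1 win, -1 loss,
    0 tie) against the opponent's empirical distribution of past moves."""
    def payoff(a):
        return opponent_counts[BEATS[a]] - opponent_counts[LOSES_TO[a]]
    return max(ACTIONS, key=payoff)  # ties broken by ACTIONS order

def self_play(rounds=10_000):
    # Each agent keeps counts of the other's moves (with a prior of 1 each).
    counts_a = {a: 1 for a in ACTIONS}  # what agent B has played so far
    counts_b = {a: 1 for a in ACTIONS}  # what agent A has played so far
    for _ in range(rounds):
        move_a = best_response(counts_a)  # A's "environment" is B's history
        move_b = best_response(counts_b)  # B's "environment" is A's history
        counts_a[move_b] += 1
        counts_b[move_a] += 1
    total = sum(counts_a.values())
    return {a: counts_a[a] / total for a in ACTIONS}

freqs = self_play()
# In this zero-sum game the empirical frequencies drift toward the mixed
# Nash equilibrium: roughly a third rock, a third paper, a third scissors.
```

The point of the sketch is that neither agent has a fixed target: as one improves, the other's "environment" shifts, which is exactly what makes self-play an open-ended source of training signal.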
00:07:07.020 | - There's a thing where competition is essential for learning, at least in that context.
00:07:12.540 | - Yeah.
00:07:13.540 | - So I think that's really exciting.
00:07:14.540 | - Yeah.
00:07:15.540 | - So that's great.
00:07:15.540 | - Thank you.