
Cognition Is a Function of the Environment | Matt Botvinick and Lex Fridman


Transcript

You know, if you take an introductory computer science course and they're introducing you to the notion of Turing machines, one way of articulating the significance of a Turing machine is that it's a machine emulator: it can emulate any other machine. And that way of looking at a Turing machine really sticks with me.

I think of humans as maybe sharing in some of that character. We're capacity-limited, we're not Turing machines, obviously, but we have the ability to adopt behaviors that are very much unlike anything we've done before, because there's some basic mechanism implemented in our brain that allows us to run software.
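To make the "machine emulator" idea concrete, here's a minimal sketch in Python. It's purely illustrative, not anything from the conversation: the `run` function is a fixed simulator, and a particular machine is just data, a transition table, so the same simulator can emulate any machine you hand it. The example machine (a unary incrementer) is an assumption chosen for brevity.

```python
# Minimal Turing machine simulator: the simulator is fixed, the machine is data.
# The example machine below (a unary incrementer) is purely illustrative.

def run(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a machine given as a dict: (state, symbol) -> (new_state, write, move)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine that appends a '1' to a unary number (n -> n + 1).
increment = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt", "1", "R"),   # write one more 1, then halt
}

print(run(increment, "111"))  # -> "1111"
```

Feeding a different transition table to the same `run` function is the whole point: the simulator never changes, only the "software" does.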

But just on that point, you mentioned the Turing machine. Nevertheless, in your view our brains are fundamentally just computational devices? Is that what you're getting at? It was a little unclear, this line you drew. Is there any magic in there, or is it just basic computation?

I'm happy to think of it as just basic computation, but mind you, I won't be satisfied until somebody explains to me what the basic computations are that lead to the full richness of human cognition. It's not going to be enough for me to understand what the computations are that allow people to do arithmetic or play chess.

I want the whole thing. And a small tangent, because you kind of mentioned the coronavirus: there's group behavior. Is there something interesting in your search to understand the human mind where the behavior of large groups, or just the behavior of groups, is interesting? Seeing that as a collective mind, as a collective intelligence, perhaps seeing groups of people as a single intelligent organism, especially looking at the reinforcement learning work you've done recently.

Well, yeah, I mean, I have the honor of working with a lot of incredibly smart people, and I wouldn't want to take any credit for leading the way on the multi-agent work that's come out of my group or DeepMind lately, but I do find it fascinating. And I think it can't be debated: human behavior arises within communities.

That just seems to me self-evident. To me it is self-evident, but it also seems to be a profound aspect of what created us. If you look at 2001: A Space Odyssey, when the apes touch the monolith, that's the magical moment. I think Yuval Harari argues that the ability of large numbers of humans to hold an idea, to converge toward an idea together (like you said, shaking hands versus bumping elbows), to somehow converge without even being in a room together, this kind of distributed convergence toward an idea over a particular period of time, seems fundamental to every aspect of our cognition, of our intelligence. We'll talk about reward, but it seems like we don't really have a clear objective function under which we operate, yet we all somehow converge toward one.

And that to me has always been a mystery that I think is somehow also productive for understanding AI systems. But I guess that's the next step; the first step is to try to understand the mind. Well, I don't know. I mean, I think there's something to the argument that that kind of strictly bottom-up approach is wrongheaded.

In other words, there are basic phenomena, basic aspects of human intelligence, that can only be understood in the context of groups. I'm perfectly open to that. I've never been particularly convinced by the notion that we should consider intelligence to inhere at the level of communities.

I don't know why. I'm sort of stuck on the notion that the basic unit we want to understand is the individual human. And if we have to understand that in the context of other humans, fine. But for me, intelligence is, I stubbornly define it as, an aspect of an individual human.

That's just my, I don't know. I'm with you, but that could be the reductionist dream of a scientist, because you can understand a single human. It's also very possible that intelligence can only arise when there are multiple intelligences interacting. It's a sad thing if that's true, because it's very difficult to study.

But if it were just one human, that one human would not be Homo sapiens, would not become that intelligent. That's a possibility. I'm with you. One thing I will say along these lines is that I think a serious effort to understand human intelligence, and maybe to build a human-like intelligence, needs to pay just as much attention to the structure of the environment as to the structure of the cognizing system, whether it's a brain or an AI system.

That's one thing I took away from my early studies with the pioneers of neural network research, people like Jay McClelland and John Cohen. The structure of cognition is only partly a function of the architecture of the brain and the learning algorithms it implements. What really shapes it is the interaction of those things with the structure of the world in which they're embedded.
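A toy way to see that point, a hedged sketch under assumed details rather than anything Botvinick described: the identical learner, same architecture and same learning rule, ends up with a different policy purely because the environment's reward structure differs. The chain environment and all parameter choices here are illustrative assumptions.

```python
import random

def q_learn(reward, n_states=5, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D chain; actions: 0 = step left, 1 = step right."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = n_states // 2                          # start in the middle
        for _ in range(20):
            if random.random() < eps:
                a = random.randrange(2)            # explore
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # exploit
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            Q[s][a] += alpha * (reward[s2] + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return ["<" if q[0] > q[1] else ">" for q in Q]  # greedy action per state

random.seed(0)
print(q_learn(reward=[1, 0, 0, 0, 0]))  # reward at the left end  -> policy tends to point left
print(q_learn(reward=[0, 0, 0, 0, 1]))  # reward at the right end -> policy tends to point right
```

Nothing about the agent changes between the two runs; the different "cognition" that emerges is entirely a function of the world it's embedded in.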

That's made most clear in reinforcement learning, where, with a simulated environment, you can only learn as much as you can simulate. And that's what DeepMind made very clear with the other aspect of the environment: the self-play mechanism, the competitive behavior of another agent, where the other agent essentially becomes the environment.

And that's, I mean, one of the most exciting ideas in AI: the self-play mechanism, through which systems are able to learn successfully. So there you go. - There's a thing where competition is essential for learning, at least in that context. - Yeah. - So I think that's really exciting. - Yeah.
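As a hedged illustration of "the other agent becomes the environment", here is a toy self-play loop, emphatically not DeepMind's actual setup: two copies of one simple no-regret learner play rock-paper-scissors against each other, so the reward signal each copy sees is generated entirely by the other's evolving strategy. The `HedgeAgent` class and its parameters are assumptions made for the sketch.

```python
import math, random

BEATS = {0: 2, 1: 0, 2: 1}  # 0=rock beats 2=scissors, 1=paper beats 0=rock, 2=scissors beats 1=paper

class HedgeAgent:
    """Multiplicative-weights (no-regret) learner over rock/paper/scissors."""
    def __init__(self, lr=0.05):
        self.w = [1.0, 1.0, 1.0]
        self.lr = lr

    def act(self):
        return random.choices([0, 1, 2], weights=self.w)[0]

    def update(self, opponent_move):
        # Score every action we could have played against the opponent's move.
        for a in range(3):
            if BEATS[a] == opponent_move:
                payoff = 1.0
            elif BEATS[opponent_move] == a:
                payoff = -1.0
            else:
                payoff = 0.0
            self.w[a] *= math.exp(self.lr * payoff)
        s = sum(self.w)                  # renormalize to keep weights bounded
        self.w = [w / s for w in self.w]

random.seed(0)
a, b = HedgeAgent(), HedgeAgent()        # two copies of the same learner
counts = [0, 0, 0]
for _ in range(20000):
    ma, mb = a.act(), b.act()
    counts[ma] += 1
    a.update(mb)                         # a's reward depends only on b's play...
    b.update(ma)                         # ...and vice versa: each is the other's environment
print([round(c / sum(counts), 2) for c in counts])  # time-average play approaches ~1/3 each
```

In this zero-sum game the time-averaged play of no-regret learners is known to approach the uniform mixed strategy, which is the sense in which competition alone supplies the curriculum: neither agent ever sees a fixed environment, only the other's adapting policy.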

- So that's great. - Thank you. - Thank you. - Thank you.