Gary Marcus: Nature vs Nurture is a False Dichotomy | AI Podcast Clips


Chapters

0:00 Intro
0:19 Innate Knowledge
1:49 The Birth of the Mind
3:11 Disjoint systems
4:35 Evolution
5:31 Libraries
6:58 Evolution is cumulative
7:45 Biology
8:43 Biomimicry

Transcript

You've talked about this, you've written about it, you've thought about it, nature versus nurture. So what innate knowledge do you think we're born with and what do we learn along the way in those early months and years? - Can I just say how much I like that question? You phrased it just right and almost nobody ever does, which is what is the innate knowledge and what's learned along the way?

So many people dichotomize it and think it's nature versus nurture, when it obviously has to be nature and nurture; they have to work together. You can't learn the stuff along the way unless you have some innate stuff. But just because you have the innate stuff doesn't mean you don't learn anything.

And so many people get that wrong, including in the field. Like people think if I work in machine learning, the learning side, I must not be allowed to work on the innate side or that will be cheating. People have said that to me and it's just absurd. So thank you.

- But you could break that apart more. I've talked to folks who study the development of the brain, the growth of the brain in the first few days and first few months in the womb, all of that. Is that innate? So that process of development, from a stem cell to the growth of the central nervous system and so on, to the information that's encoded through the long arc of evolution.

So all of that comes into play, and it's unclear. It's not just whether it's a dichotomy or not; it's a question of where most of the knowledge is encoded. So what's your intuition about the innate knowledge, the power of it, what's contained in it, what can we learn from it? - One of my earlier books was actually trying to understand the biology of this.

The book was called "The Birth of the Mind." How is it that genes even build innate knowledge? And from the perspective of the conversation we're having today, there are actually two questions. One is what innate knowledge, or mechanisms, or what have you, people or other animals might be endowed with.

I always like showing this video of a baby ibex climbing down a mountain. That baby ibex, a few hours after its birth, knows how to climb down a mountain. That means that it knows not consciously something about its own body and physics and 3D geometry and all of this kind of stuff.

So there's one question about what biology gives its creatures and what has evolved in our brains. How is that represented in our brains? That's the question I thought about in the book "The Birth of the Mind." And then there's a question of what AI should have. And they don't have to be the same.

But I would say that it's a pretty interesting set of things that we are equipped with that allows us to do a lot of interesting things. So I would argue or guess based on my reading of the developmental psychology literature, which I've also participated in, that children are born with a notion of space, time, other agents, places, and also this kind of mental algebra that I was describing before.

And notions of causation, if I didn't just say that. So at least those kinds of things. They're like frameworks for learning the other things. - Are they disjoint in your view, or is it all somehow connected? You've talked a lot about language. Is it all connected in some language-like mesh of understanding concepts altogether?

- I don't think we know for people how they're represented and machines just don't really do this yet. So I think it's an interesting open question both for science and for engineering. Some of it has to be at least interrelated in the way that the interfaces of a software package have to be able to talk to one another.

So the systems that represent space and time can't be totally disjoint because a lot of the things that we reason about are the relations between space and time and cause. So I put this on and I have expectations about what's going to happen with the bottle cap on top of the bottle and those span space and time.

If the cap is over here, I get a different outcome. If the timing is different, if I put this here, after I move that, then I get a different outcome that relates to causality. So obviously these mechanisms, whatever they are, can certainly communicate with each other. - So I think evolution had a significant role to play in the development of this whole collage, right?

How efficient do you think evolution is? - Oh, it's terribly inefficient except that-- - Okay, well, can we do better? - Well, let's come to that in a second. It's inefficient except that once it gets a good idea, it runs with it. So it took, I guess, roughly a billion years to evolve the vertebrate brain plan.

And once that vertebrate brain plan evolved, it spread everywhere. So fish have it and dogs have it and we have it. We have adaptations of it and specializations of it. And the same thing with the primate brain plan: monkeys have it and apes have it and we have it.

So there are additional innovations, like color vision, and those spread really rapidly. So it takes evolution a long time to get a good idea (I'm being anthropomorphic and not literal here). But once it has that idea, so to speak, which cashes out into some set of genes in the genome, those genes spread very rapidly.

And they're like subroutines, or libraries, I guess, is the word people might use nowadays or be more familiar with. They're libraries that get used over and over again. So once you have the library for building something with multiple digits, you can use it for a hand, but you can also use it for a foot.

You just kind of reuse the library with slightly different parameters. Evolution does a lot of that, which means that the speed over time picks up. So evolution can happen faster because you have bigger and bigger libraries. And what I think has happened in attempts at evolutionary computation is that people start with libraries that are very, very minimal, like almost nothing.
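
A minimal Python sketch of this library-reuse idea (the function name and parameters here are invented for illustration, not taken from the conversation): one shared, parameterized subroutine builds different appendages.

```python
# Hypothetical illustration of evolution's "library reuse": one shared
# subroutine, called with slightly different parameters, builds both a
# hand and a foot.

def build_appendage(n_digits: int, digit_length: float) -> list[float]:
    """Shared 'library' routine: returns the length of each digit."""
    # Each successive digit is slightly shorter than the one before it.
    return [round(digit_length * (1 - 0.1 * i), 2) for i in range(n_digits)]

hand = build_appendage(n_digits=5, digit_length=8.0)  # fingers
foot = build_appendage(n_digits=5, digit_length=4.0)  # toes

print(hand)  # [8.0, 7.2, 6.4, 5.6, 4.8]
print(foot)  # [4.0, 3.6, 3.2, 2.8, 2.4]
```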

And then progress is slow, and it's hard for someone to get a good PhD thesis out of it, and they give up. If we had richer libraries to begin with, if you were evolving from systems that had an innate structure to begin with, then things might speed up. - More and more PhD students. If the evolutionary process indeed, in a meta way, runs away with good ideas, you need to have a lot of ideas, a pool of ideas, in order for it to discover one that it can run away with.

And PhD students representing individual ideas as well. - Yeah, I mean, you could throw a billion PhD students at it. - Yeah, the monkeys at typewriters producing Shakespeare, yeah. - Well, I mean, those aren't cumulative, right? That's just random. And part of the point that I'm making is that evolution is cumulative.

So if you have a billion monkeys working independently, you don't really get anywhere. But if you have a billion monkeys and, I think Dawkins made this point originally, or probably other people did, but Dawkins made it very nicely in either The Selfish Gene or The Blind Watchmaker, some sort of fitness function that can drive you towards something, then you get somewhere. I guess that's Dawkins' point.
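
The contrast here, random monkeys versus selection that accumulates partial progress, is essentially Dawkins' "weasel" demonstration from The Blind Watchmaker. A minimal Python sketch of that cumulative-selection idea (not code from either speaker):

```python
import random
import string

# Dawkins-style "weasel" demonstration: cumulative selection under a
# fitness function reaches the target vastly faster than random typing.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate: str) -> int:
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    # Copy the parent, flipping each character with small probability.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while parent != TARGET:
    # Keep the best of parent plus offspring, so progress is never thrown
    # away; this is what makes the search cumulative rather than random.
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
    generations += 1

print(f"Reached the target in {generations} generations.")
```

Cumulative selection typically reaches the 28-character target within a few hundred generations, whereas purely random strings (the "billion monkeys") would need on the order of 27^28 attempts.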

And my point, which is a variation on that, is that if the evolution is cumulative (that's the related point), then you can start going faster. - Do you think something like the process of evolution is required to build intelligent systems? So if we-- - Not logically. So all the stuff that evolution did, a good engineer might be able to do.

So for example, evolution made quadrupeds, which distribute the load across a horizontal surface. A good engineer could come up with that idea. I mean, sometimes good engineers come up with ideas by looking at biology. There's lots of ways to get your ideas. Probably what I'm suggesting is we should look at biology a lot more.

We should look at the biology of thought and understanding, and the biology by which creatures intuitively reason about physics or other agents. How do dogs reason about people? They're actually pretty good at it. If we could understand "dognition," as we joked at my college, and how it was implemented, that might help us with our AI.

- So do you think it's possible that the kind of timescale that evolution took is the kind of timescale that will be needed to build intelligent systems, or can we significantly accelerate that process inside a computer? - I mean, I think the way that we accelerate that process is we borrow from biology.

Not slavishly, but I think we look at how biology has solved problems and ask: does that inspire any engineering solutions here? We try to mimic biological systems and thereby get a shortcut. - Yeah, I mean, there's a field called biomimicry, and people do that for material science all the time.

We should be doing the analog of that for AI, and the analog of that for AI is to look at cognitive science, or the cognitive sciences: psychology, maybe neuroscience, linguistics, and so forth. Look to those for insight. - Thank you. - Thank you.