
Yann LeCun: Sophia and Does AI Need a Body? | AI Podcast Clips


Transcript

(gentle music) - You've criticized the art project that is Sophia the Robot. And what that project essentially does is use our natural inclination to anthropomorphize things that look human and ascribe more to them than is actually there. Do you think that could be used by AI systems, like in the movie "Her"? So, do you think a body is needed to create a feeling of intelligence?

- Well, if Sophia were just an art piece, I would have no problem with it, but it's presented as something else. - Let me add to that comment real quick. If the creators of Sophia could change something about their marketing or behavior in general, what would it be? What's-- - Oh, just about everything.

(laughing) - I mean, don't you think... here's a tough question. So I agree with you: the general public feels that Sophia can do way more than she actually can. - That's right. - And the people who created Sophia are not honestly and publicly communicating, trying to educate the public.

- Right. - But here's a tough question. Don't you think the same thing happens with scientists in industry and research, that they take advantage of the same misunderstanding in the public when they create AI companies or publish stuff? - Some companies, yes. I mean, there's no desire to delude.

There's no desire to kind of over-claim what something has done. Right? You publish a paper on AI that has this result on ImageNet; it's pretty clear. I mean, it's not even interesting anymore. But I don't think there's much of that; I mean, the reviewers are generally not very forgiving of unsupported claims of this type.

But there are certainly quite a few startups that have had a huge amount of hype around this that I find extremely damaging, and I've been calling it out when I've seen it. So yeah, to go back to your original question about the necessity of embodiment: I don't think embodiment is necessary.

I think grounding is necessary. So I don't think we're gonna get machines that really understand language without some level of grounding in the real world. And it's not clear to me that language is a high-enough-bandwidth medium to communicate how the real world works. I think for this-- - Can you talk about what grounding means?

- So grounding means that... so there is this classic problem of common-sense reasoning, you know, the Winograd schema, right? So I tell you the trophy doesn't fit in the suitcase because it's too big, or the trophy doesn't fit in the suitcase because it's too small. And the "it" in the first case refers to the trophy, and in the second case to the suitcase.

And the reason you can figure this out is because you know what a trophy and a suitcase are, you know one is supposed to fit in the other, you know the notion of size, and a big object doesn't fit in a small object unless it's a TARDIS, you know, things like that, right?
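(As an aside from the conversation, here is a minimal Python sketch of the Winograd example just described. The names and logic are hypothetical, not anything discussed in the episode; the point is only that the referent of "it" flips with a single adjective, and resolving it correctly requires encoding the world knowledge that only the inserted object can be "too big" and only the container "too small".)

```python
# Minimal sketch (not from the conversation) of the Winograd schema pair above.
# The resolver hard-codes the relevant world knowledge about fitting.

def resolve_it(insertee: str, container: str, adjective: str) -> str:
    """Resolve 'it' in: '<insertee> doesn't fit in <container> because it's too <adjective>.'"""
    if adjective == "big":
        return insertee    # a too-big object explains the failure to fit
    if adjective == "small":
        return container   # a too-small container explains the failure to fit
    raise ValueError("expected 'big' or 'small'")

print(resolve_it("trophy", "suitcase", "big"))    # -> trophy
print(resolve_it("trophy", "suitcase", "small"))  # -> suitcase
```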

So you have this knowledge of how the world works, of geometry and things like that. I don't believe you can learn everything about the world by just being told in language how the world works. I think you need some low-level perception of the world, you know, be it visual, touch, whatever, but some higher-bandwidth perception of the world.

- So by reading all the world's text, you still may not have enough information. - That's right. There are a lot of things that just will never appear in text and that you can't really infer. So I think common sense will emerge from, you know, certainly a lot of language interaction, but also from watching videos, or perhaps even interacting in virtual environments, and possibly, you know, robots interacting in the real world.

But I don't actually believe that this last one is absolutely necessary, but I do think there's a need for some grounding. - But the final product doesn't necessarily need to be embodied, you're saying. It just needs to have an awareness, a grounding. - Right, but it needs to know how the world works to, you know, not be frustrating to talk to.

- And you talked about emotions being important. That's a whole other topic. - Well, so, you know, I talked about this: the basal ganglia as the thing that calculates your level of discontentment, and then there is this other module that sort of tries to predict whether you're going to be content or not.

That's the source of some emotion. So fear, for example, is an anticipation of bad things that can happen to you, right? You have this inkling that there is some chance that something really bad is going to happen to you, and that creates fear. When you know for sure that something bad is going to happen to you, you kind of give up, right?

It's not there anymore. It's uncertainty that creates fear. So the punchline is, we're not going to have autonomous intelligence without emotions. - Whatever the heck emotions are. So you mentioned very practical things like fear, but there's a lot of other mess around it. - But they are kind of the results of drives.
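(Another aside, not from the conversation and not LeCun's actual architecture: a toy sketch of the idea that one module scores present discontentment, another predicts costs of anticipated futures, and a fear-like signal shows up when the predicted outcome is bad but still uncertain, vanishing once the bad outcome is certain. All function names here are hypothetical.)

```python
# Toy sketch: fear as anticipated, uncertain badness.

def intrinsic_cost(state: float) -> float:
    # Hypothetical: larger state value = worse outcome = more discontentment.
    return max(0.0, state)

def fear_signal(possible_futures: list[float]) -> float:
    # Predict the cost of each imagined future, then weight the expected
    # badness by how uncertain it is (its spread across futures).
    costs = [intrinsic_cost(s) for s in possible_futures]
    mean = sum(costs) / len(costs)
    variance = sum((c - mean) ** 2 for c in costs) / len(costs)
    return mean * variance ** 0.5

# Uncertain bad outcome: some futures are bad, some are fine -> nonzero "fear".
print(fear_signal([0.0, 0.0, 5.0, 5.0]))
# Certain bad outcome: every future equally bad -> zero spread -> no "fear".
print(fear_signal([5.0, 5.0, 5.0, 5.0]))
```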

- Yeah, there's deeper biological stuff going on, and I've talked to a few folks on this. There's fascinating stuff that ultimately connects to our brain.