
Yann LeCun: Human-Level Artificial Intelligence | AI Podcast Clips


Transcript

- What do you think it takes to build a system with human-level intelligence? You talked about the AI system in the movie "Her" being way out of our current reach. This might be outdated as well, but-- - It's still way out of reach. - It's still way out of reach.

What would it take to build "Her," do you think? - So I can tell you the first two obstacles that we have to clear, but I don't know how many obstacles there are after this. So the image I usually use is that there is a bunch of mountains that we have to climb and we can see the first one, but we don't know if there are 50 mountains behind it or not.

And this might be a good sort of metaphor for why AI researchers in the past have been overly optimistic about the progress of AI. You know, for example, Newell and Simon, right, wrote the General Problem Solver, and they called it a general problem solver. - General problem solver. - Okay, and of course, the first thing you realize is that all the problems you want to solve are exponential, and so you can't actually use it for anything useful.

But, you know. - Yeah, so yeah, all you see is the first peak. So what are the first couple of peaks for "Her"? - So the first peak, which is precisely what I'm working on, is self-supervised learning. How do we get machines to learn models of the world by observation, kind of like babies and like young animals?

So we've been working with cognitive scientists. So this Emmanuel Dupoux, who is at FAIR in Paris half-time, is also a researcher at a French university. And he has this chart that shows at how many months of life baby humans can learn different concepts. And you can measure this in various ways.

So things like distinguishing animate objects from inanimate objects: you can tell the difference at age two, three months. Whether an object is going to stay stable or is going to fall, you can tell at about four months. You know, there are various things like this. And then things like gravity, the fact that objects are not supposed to float in the air but are supposed to fall: you learn this around the age of eight or nine months.

If you look at a lot of, you know, eight-month-old babies, you give them a bunch of toys on their high chair. First thing they do is they throw them on the ground and they look at them. It's because, you know, they're actively learning about gravity. - Gravity, yeah.

- Okay, so they're not trying to annoy you, but they, you know, they need to do the experiment, right? So, you know, how do we get machines to learn like babies? Mostly by observation with a little bit of interaction and learning those models of the world, because I think that's really a crucial piece of an intelligent autonomous system.

So if you think about the architecture of an intelligent autonomous system, it needs to have a predictive model of the world. So something that says, here is the state of the world at time T, here is the state of the world at time T plus one if I take this action. And it's not a single answer, it can be a-- - Yeah, it can be a distribution, yeah.

- Yeah, well, but we don't know how to represent distributions in high-dimensional continuous spaces, so it's gotta be something weaker than that, okay? But with some representation of uncertainty. If you have that, then you can do what optimal control theorists call model predictive control, which means that you can run your model with a hypothesis for a sequence of actions and then see the result.

Now, what you need, the other thing you need, is some sort of objective that you want to optimize. Am I reaching the goal of grabbing this object? Am I minimizing energy? Am I whatever, right? So there is some sort of objective that you have to minimize. And so in your head, if you have this model, you can figure out the sequence of actions that will optimize your objective.
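A minimal sketch of the model-predictive-control loop described here, in Python. Everything in it is assumed for illustration: the toy `world_model`, the `cost` function standing in for the objective, and the random-shooting search are not from the conversation, just one simple way to run the model with a hypothesized sequence of actions and see the result.

```python
import random

# Illustrative stand-ins: a toy one-dimensional "world model" and a cost
# function measuring distance to a made-up goal. Lower cost = better.
def world_model(state, action):
    return state + action          # predicted next state if we take this action

def cost(state, goal=10.0):
    return abs(goal - state)       # the objective we want to minimize

def model_predictive_control(state, horizon=5, n_candidates=100):
    """Random-shooting flavor of model predictive control: imagine many
    candidate action sequences with the world model, score each rollout
    with the objective, and execute only the first action of the best one."""
    best_first_action, best_total = None, float("inf")
    for _ in range(n_candidates):
        candidate = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s, total = state, 0.0
        for a in candidate:        # run the model forward "in your head"
            s = world_model(s, a)
            total += cost(s)
        if total < best_total:
            best_total, best_first_action = total, candidate[0]
    return best_first_action

print(model_predictive_control(state=0.0))
```

Real systems would replace the random search with gradient-based or learned planners, but the loop is the same shape: imagine, score against the objective, pick, act.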

That objective is something that ultimately is rooted in your basal ganglia, at least in the human brain, that's what it is. Basal ganglia computes your level of contentment or miscontentment, I don't know if that's a word. Unhappiness, okay? - Yeah, yeah. - Discontentment. - Discontentment, maybe. - And so your entire behavior is driven towards kind of minimizing that objective, which is maximizing your contentment, computed by your basal ganglia.

And what you have is an objective function, which is basically a predictor of what your basal ganglia is gonna tell you. So you're not gonna put your hand on fire because you know it's gonna burn and you're gonna get hurt. And you're predicting this because of your model of the world and your sort of predictor of this objective, right?

So you have those three components, four components really. You have the hardwired contentment objective computer, if you want, calculator. And then you have the three components: one is the objective predictor, which basically predicts your level of contentment; one is the model of the world; and there's a third module I didn't mention, which is the module that will figure out the best course of action to optimize an objective given your model.

Okay? - Yeah. - Call this a policy, a policy network, or something like that, right? Now, you need those three components to act autonomously, intelligently. And you can be stupid in three different ways. You can be stupid because your model of the world is wrong. You can be stupid because your objective is not aligned with what you actually want to achieve.

Okay? In humans, that would be a psychopath. - Right. - And then the third thing, the third way you can be stupid is that you have the right model, you have the right objective, but you're unable to figure out a course of action to optimize your objective given your model.
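To make the three modules, and the three corresponding ways to be stupid, concrete, here is a small illustrative Python skeleton. All the names and the toy dynamics are assumptions introduced for this sketch, not anything specified in the conversation: `world_model` predicts the next state, `critic` plays the role of the learned objective predictor, and `plan` is the third module that searches for a course of action.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    # Illustrative names only; in a fuller agent the critic would be trained
    # to predict a hardwired intrinsic cost (the "basal ganglia" calculator).
    world_model: Callable[[float, float], float]  # (state, action) -> predicted next state
    critic: Callable[[float], float]              # state -> predicted discontentment (lower is better)
    actions: List[float]                          # small discrete action set to search over

    def plan(self, state: float, horizon: int = 3) -> float:
        """The third module: search for actions that minimize the predicted
        objective under the world model, and return the first action.

        The three 'ways to be stupid' map onto the three pieces: a wrong
        world_model, a misaligned critic, or a plan() search too weak to
        find good actions even when the other two are right."""
        first_action = None
        for step in range(horizon):
            # Greedily pick the action whose imagined next state the critic scores lowest.
            best = min(self.actions,
                       key=lambda a: self.critic(self.world_model(state, a)))
            if step == 0:
                first_action = best
            state = self.world_model(state, best)  # imagine taking it and continue
        return first_action

# Toy usage with made-up one-dimensional dynamics and a made-up goal of reaching 5.0.
agent = Agent(world_model=lambda s, a: s + a,
              critic=lambda s: abs(5.0 - s),
              actions=[-1.0, 0.0, 1.0])
print(agent.plan(0.0))   # prints 1.0: a step toward the goal
```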
