Yann LeCun: Human-Level Artificial Intelligence | AI Podcast Clips
00:00:00.000 |
- What do you think it takes to build a system with human-level intelligence? 00:00:05.380 |
You talked about the AI system in the movie "Her" 00:00:24.160 |
but I don't know how many obstacles there are after this. 00:00:27.920 |
there is a bunch of mountains that we have to climb 00:00:39.640 |
and we can only see the first one, which is maybe why people in the past have been overly optimistic about the results of AI. 00:00:43.280 |
You know, for example, Newell and Simon, right, who wrote the General Problem Solver. 00:00:53.560 |
- Okay, and of course, the first thing you realize 00:00:55.840 |
is that all the problems you want to solve are exponential 00:00:57.640 |
and so you can't actually use it for anything useful. 00:01:01.360 |
- Yeah, so yeah, all you see is the first peak. 00:01:03.560 |
So what are the first couple of peaks for "Her"? 00:01:11.080 |
How do we get machines to learn models of the world 00:01:13.560 |
by observation, kind of like babies and like young animals? 00:01:17.120 |
So we've been working with cognitive scientists. 00:01:25.080 |
So this Emmanuel Dupoux, who is at FAIR in Paris, 00:01:28.080 |
half-time, is also a researcher at a French university. 00:01:46.960 |
So things like distinguishing animate objects from inanimate objects. 00:01:54.240 |
You can tell the difference at age two, three months. 00:02:04.760 |
You know, there are various things like this. 00:02:08.280 |
the fact that objects are not supposed to float in the air, 00:02:11.880 |
you learn this around the age of eight or nine months. 00:02:14.420 |
If you look at a lot of, you know, eight-month-old babies, 00:02:17.160 |
you give them a bunch of toys on their high chair. 00:02:20.360 |
First thing they do is they throw them on the ground. 00:02:23.040 |
It's because, you know, they're learning about gravity, 00:02:31.000 |
but they, you know, they need to do the experiment, right? 00:02:33.960 |
So, you know, how do we get machines to learn like babies? 00:02:37.880 |
Mostly by observation with a little bit of interaction 00:02:42.520 |
because I think that's really a crucial piece of an intelligent autonomous system: 00:02:50.800 |
it needs to have a predictive model of the world. 00:02:52.640 |
So something that says, here is the state of the world at time T, 00:02:55.360 |
here is the state of the world at time T plus one if I take this action. 00:03:02.560 |
- Yeah, well, but we don't know how to represent 00:03:04.520 |
distributions in high-dimensional continuous spaces, 00:03:06.160 |
so it's gotta be something weaker than that, okay? 00:03:20.200 |
Then you can run your model with a hypothesis for a sequence of actions and then see the result. 00:03:24.560 |
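(As a rough sketch of the idea being described here: a learned world model is just a function that maps the state at time T and a candidate action to a predicted state at time T plus one, and you can roll it forward over a hypothesized sequence of actions. The Python below is an illustrative assumption, not anything from FAIR's actual systems; `world_model`, `state`, and `actions` are hypothetical placeholders.)

```python
# Illustrative sketch only: roll a learned world model forward over a
# hypothesized sequence of actions and return the predicted trajectory.
def rollout(world_model, state, actions):
    trajectory = [state]
    for action in actions:
        # "here is the state of the world at time T plus one if I take this action"
        state = world_model(state, action)
        trajectory.append(state)
    return trajectory
```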
The other thing you need is some sort of objective that you want to optimize. 00:03:27.340 |
Am I reaching the goal of grabbing this object? 00:03:46.920 |
That objective is ultimately rooted in the basal ganglia, at least in the human brain, that's what it is. 00:03:48.520 |
The basal ganglia computes your level of contentment 00:03:52.160 |
or miscontentment, I don't know if that's a word. 00:03:59.720 |
- And so your entire behavior is driven towards optimizing that objective. 00:04:25.040 |
And you're predicting this because of your model of the world 00:04:27.360 |
and your sort of predictor of this objective, right? 00:04:48.000 |
One of these components is the objective predictor, which basically predicts your level of contentment. 00:04:58.520 |
Another is the model of the world, and the third is a module that figures out the best course of action to optimize an objective. 00:05:15.240 |
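(To make those three pieces concrete, here is a hedged sketch in the same spirit, reusing the `rollout` function above: an objective predictor scores predicted states, and a simple random-shooting search stands in for the module that figures out the course of action. All names and the search strategy are illustrative assumptions, not a description of LeCun's actual proposal.)

```python
def predicted_discontentment(objective_predictor, trajectory):
    # Sum of predicted "discontentment" over the predicted states (lower is
    # better); objective_predictor plays the role of a learned predictor of
    # what the basal ganglia would report.
    return sum(objective_predictor(state) for state in trajectory)

def plan(world_model, objective_predictor, state, sample_actions,
         horizon=10, num_candidates=100):
    """Pick the candidate action sequence whose predicted outcome
    optimizes the objective, by running the world model forward."""
    best_actions, best_score = None, float("inf")
    for _ in range(num_candidates):
        actions = sample_actions(horizon)                  # hypothesize a sequence of actions
        trajectory = rollout(world_model, state, actions)  # run the model and see the result
        score = predicted_discontentment(objective_predictor, trajectory)
        if score < best_score:
            best_actions, best_score = actions, score
    return best_actions
```

(Random shooting is just the simplest possible stand-in for that third module; the point is only that the world model, the objective predictor, and the search over actions are separate pieces that can each go wrong on their own.)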
And you can be stupid in three different ways. 00:05:17.400 |
You can be stupid because your model of the world is wrong. 00:05:20.680 |
You can be stupid because your objective is not aligned with what you actually want to achieve. 00:05:37.640 |
And you can be stupid because you have the right model and the right objective, but you're unable to figure out a course of action to optimize your objective.