Yann LeCun: Benchmarks for Human-Level Intelligence | AI Podcast Clips
Chapters
0:00 Don't get fooled
0:52 Toy problems
2:45 Interactive environments
5:13 Specialization
6:53 Boolean Functions
00:00:12.120 | "to have a solution to artificial general intelligence,
00:00:18.460 | "or who claim to have figured out how the brain works."
00:00:27.360 | - Yeah, this is a little dated, by the way. (laughs)
00:00:47.640 | and the practical testing, the practical application
00:00:54.040 | Like, for example, it could be a toy dataset,
00:01:01.520 | as some sort of standard kind of benchmark, if you want.
00:01:08.520 | people, Jason Weston, Antoine Bordes, and a few others
00:01:11.880 | proposed the bAbI tasks, which were kind of a toy problem
00:01:15.380 | to test the ability of machines to reason, actually,
00:01:18.560 | to access working memory and things like this.
00:01:21.180 | And it was very useful, even though it wasn't a real task.
00:01:27.880 | So, you know, toy problems can be very useful.
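A bAbI task pairs a short story with a question whose answer requires tracking state across the sentences. Below is a minimal Python sketch of what one such sample looks like; the story wording and the scoring helper are illustrative stand-ins, not material from the released dataset.

    # A minimal, illustrative bAbI-style sample; the wording is hypothetical,
    # not copied from the actual dataset.
    sample = {
        "story": [
            "Mary went to the kitchen.",
            "John moved to the garden.",
            "Mary picked up the milk.",
        ],
        "question": "Where is the milk?",
        "answer": "kitchen",  # answering requires tracking Mary's location
    }

    def is_correct(predicted: str) -> bool:
        # Scoring is exact match on the single-word answer.
        return predicted.strip().lower() == sample["answer"]

    print(is_correct("Kitchen"))  # True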
00:01:30.280 | It's just that I was really struck by the fact that
00:01:33.800 | a lot of people, particularly a lot of people
00:01:35.400 | with money to invest, would be fooled by people telling them,
00:01:38.640 | oh, we have, you know, the algorithm of the cortex
00:01:44.460 | So there's a lot of people who try to take advantage
00:01:56.080 | that the new ideas, the ideas that push the field forward
00:02:02.880 | or it may be very difficult to establish a benchmark.
00:02:06.800 | Establishing benchmarks is part of the process.
00:02:19.160 | to just every kind of information you can pull off
00:02:29.200 | what kind of stuff, what kind of benchmarks do you see
00:02:32.960 | that start creeping on to more something like intelligence,
00:02:37.880 | like reasoning, like, maybe you don't like the term,
00:02:47.120 | interactive environments in which you can train
00:02:54.460 | the classical paradigm of supervised learning
00:03:14.360 | the order in which you see them shouldn't matter,
00:03:21.840 | which is the case, for example, in robotics, right?
00:03:37.240 | so that creates also a dependency between samples, right?
00:03:45.220 | is gonna be probably in the same building, most likely.
00:03:52.200 | of this training set, test set hypothesis break.
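A small NumPy sketch of the dependency point: a robot's sensor stream is temporally correlated, so consecutive samples are nothing like independent draws, which is what the classical training-set/test-set setup assumes. The random walk below is only a stand-in for real sensor data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a robot's sensor stream: a random walk, so each sample
    # is strongly tied to the previous one (not i.i.d.).
    stream = np.cumsum(rng.normal(size=10_000))

    def lag1_corr(x: np.ndarray) -> float:
        # Correlation between each sample and the next one.
        return float(np.corrcoef(x[:-1], x[1:])[0, 1])

    print("temporal order:", lag1_corr(stream))                     # close to 1.0
    print("after shuffling:", lag1_corr(rng.permutation(stream)))   # close to 0.0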
00:04:00.640 | So people are setting up artificial environments
00:04:10.100 | and can interact with objects and things like this.
00:04:23.080 | and you have games, you know, things like that.
00:04:34.200 | because it implies that human intelligence is general,
00:04:40.040 | and human intelligence is nothing like general,
00:04:45.960 | we like to think of ourselves as having general intelligence,
00:05:17.320 | it's more like a quasi-mathematical demonstration.
00:05:27.720 | It's one million nerve fibers, your optical nerve.
00:05:34.880 | So the input to your visual cortex is one million bits.
00:05:38.300 | Now, they're connected to your brain in a particular way,
00:05:46.200 | that are kind of a little bit like a convolutional net,
00:05:59.960 | and I put a device that makes a random perturbation,
00:06:08.840 | is a fixed but random permutation of all the pixels.
00:06:13.400 | There's no way in hell that your visual cortex,
00:06:26.960 | - No, because now two pixels that are nearby in the world
00:06:29.880 | will end up in very different places in your visual cortex.
00:06:33.480 | And your neurons there have no connections with each other
00:06:39.280 | the hardware is built in many ways to support?
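A minimal NumPy sketch of the thought experiment: the same fixed but random permutation is applied to every image, so two pixels that were neighbors in the world land at unrelated positions, and an architecture built on local connectivity, like a convolutional net, has no locality left to exploit. The 64x64 image size is an illustrative assumption, not the million optic-nerve fibers.

    import numpy as np

    rng = np.random.default_rng(0)
    H = W = 64                       # illustrative image size
    perm = rng.permutation(H * W)    # one fixed but random permutation of all pixels

    def scramble(img: np.ndarray) -> np.ndarray:
        # The same permutation is applied to every image, forever.
        return img.reshape(-1)[perm].reshape(H, W)

    # Two pixels that are neighbors in the world...
    a, b = 10 * W + 10, 10 * W + 11
    # ...land wherever the permutation happens to send them.
    new_a = int(np.where(perm == a)[0][0])
    new_b = int(np.where(perm == b)[0][0])
    print("neighbors map to", divmod(new_a, W), divmod(new_b, W))

    img = rng.random((H, W))
    scrambled = scramble(img)
    assert scrambled[divmod(new_a, W)] == img[10, 10]  # the pixel moved, intact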
00:06:46.840 | - Yeah, but it's still pretty damn impressive.
00:06:58.280 | So let's imagine you want to train your visual system
00:07:02.520 | to recognize particular patterns of those one million bits.
00:07:38.800 | can actually be computed by your visual cortex?
00:07:41.520 | And the answer is a tiny, tiny, tiny, tiny, tiny, tiny sliver
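The counting behind that "tiny sliver", written out; the synapse figure is a rough order-of-magnitude assumption, not a number from the conversation.

    \[ \#\{\, f : \{0,1\}^{N} \to \{0,1\} \,\} = 2^{2^{N}}, \qquad N = 10^{6}. \]

Writing down the truth table of even one arbitrary such function already takes $2^{10^{6}}$ bits, while a brain with on the order of $10^{14}$ synapses can realize only a vanishing fraction of them.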
00:07:51.560 | - But, okay, that's an argument against the word general.
00:07:58.120 | I think there's a, I agree with your intuition,
00:08:17.640 | that are outside of our comprehension, right?
00:08:58.400 | When you reduce the volume, the temperature goes up,
00:09:01.640 | the pressure goes up, things like that, right?
00:09:06.440 | Those are the things you can know about that system.
00:09:20.980 | And what you don't know about it is the entropy,
00:09:28.240 | The energy contained in that thing is what we call heat.
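The gas example in its usual textbook form; these are the standard relations, not formulas quoted in the conversation.

    \[ PV = nRT, \qquad S = k_{B} \ln \Omega . \]

The first ties together the few macroscopic quantities you can know about the gas (pressure, volume, temperature); the second measures, through the number of microstates $\Omega$ consistent with them, everything about the individual molecules that you don't know.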
00:09:49.640 | And you're right, that's a nice way to put it.
00:09:51.560 | We're general to all the things we can imagine,
00:09:54.420 | which is a very tiny subset of all things that are possible.
00:10:09.800 | except for all the ones that you can actually write down.
00:10:17.080 | But so we can just call it artificial intelligence.
00:10:35.840 | and it's difficult to define what human intelligence is.
00:10:47.480 | Okay, damn impressive demonstration of intelligence,