What is Intelligence? - François Chollet and Lex Fridman | AI Podcast Clips


Whisper Transcript

00:00:00.000 | (gentle music)
00:00:02.580 | - Can you try to define intelligence?
00:00:11.360 | Like what does it mean to be more or less intelligent?
00:00:16.200 | Is it completely coupled to a particular problem
00:00:19.200 | or is there something a little bit more universal?
00:00:21.920 | - Yeah, I do believe all intelligence
00:00:23.640 | is specialized intelligence.
00:00:25.280 | Even human intelligence has some degree of generality.
00:00:28.440 | Well, all intelligent systems have some degree of generality
00:00:31.560 | but they're always specialized in one category of problems.
00:00:35.640 | So the human intelligence is specialized
00:00:38.120 | in the human experience.
00:00:39.800 | And that shows at various levels.
00:00:41.800 | That shows in some prior knowledge
00:00:45.560 | that's innate that we have at birth.
00:00:48.260 | Knowledge about things like agents, goal-driven behavior,
00:00:53.260 | visual priors about what makes an object,
00:00:56.640 | priors about time and so on.
00:00:59.720 | That shows also in the way we learn.
00:01:01.520 | For instance, it's very, very easy for us
00:01:03.360 | to pick up language.
00:01:04.780 | It's very, very easy for us to learn certain things
00:01:08.240 | because we are basically hard-coded to learn them.
00:01:11.120 | And we are specialized in solving certain kinds of problems
00:01:14.480 | and we are quite useless
00:01:15.880 | when it comes to other kinds of problems.
00:01:17.600 | For instance, we are not really designed
00:01:22.320 | to handle very long-term problems.
00:01:24.980 | We have no capability of seeing the very long-term.
00:01:28.240 | We don't have very much working memory.
00:01:33.060 | - So how do you think about long-term?
00:01:36.240 | Do you think long-term planning,
00:01:37.560 | we're talking about scale of years, millennia,
00:01:44.300 | what do you mean by long-term, that we're not very good at?
00:01:44.300 | - Well, human intelligence is specialized
00:01:45.920 | in the human experience.
00:01:46.920 | And human experience is very short.
00:01:48.800 | Like one lifetime is short.
00:01:50.420 | Even within one lifetime,
00:01:52.080 | we have a very hard time envisioning things
00:01:56.220 | on a scale of years.
00:01:57.380 | Like it's very difficult to project yourself
00:01:59.420 | at a scale of five years, at a scale of 10 years and so on.
00:02:02.340 | - Right.
00:02:03.180 | - We can solve only fairly narrowly scoped problems.
00:02:06.180 | So when it comes to solving bigger problems,
00:02:08.500 | larger scale problems,
00:02:09.940 | we are not actually doing it on an individual level.
00:02:12.540 | So it's not actually our brain doing it.
00:02:15.500 | We have this thing called civilization, right?
00:02:19.260 | Which is itself a sort of problem solving system,
00:02:22.840 | a sort of artificial intelligence system, right?
00:02:26.240 | And it's not running on one brain,
00:02:28.320 | it's running on a network of brains.
00:02:30.320 | In fact, it's running on much more
00:02:31.840 | than a network of brains.
00:02:32.960 | It's running on a lot of infrastructure,
00:02:36.320 | like books and computers and the internet
00:02:39.280 | and human institutions and so on.
00:02:42.000 | And that is capable of handling problems
00:02:46.440 | on a much greater scale than any individual human.
00:02:49.960 | If you look at computer science, for instance,
00:02:53.820 | that's an institution that solves problems
00:02:56.080 | and it is superhuman, right?
00:02:58.780 | It operates on a greater scale,
00:03:00.400 | it can solve much bigger problems
00:03:03.120 | than an individual human could.
00:03:05.320 | And science itself, science as a system,
00:03:07.600 | as an institution is a kind of artificially intelligent
00:03:11.320 | problem solving algorithm that is superhuman.
00:03:15.600 | - Yeah, at least computer science
00:03:19.000 | is like a theorem prover.
00:03:20.600 | At a scale of thousands,
00:03:23.960 | maybe hundreds of thousands of human beings.
00:03:26.640 | At that scale, what do you think is an intelligent agent?
00:03:30.900 | So there's us humans at the individual level,
00:03:34.520 | there are millions, maybe billions of bacteria on our skin.
00:03:38.600 | That's at the smaller scale.
00:03:42.640 | You can even go to the particle level
00:03:45.400 | as systems that behave, you can say intelligently
00:03:49.760 | in some ways.
00:03:50.600 | And then you can look at Earth as a single organism,
00:03:54.080 | you can look at our galaxy
00:03:55.440 | and even the universe as a single organism.
00:03:57.600 | How do you think about scale
00:04:00.880 | and defining intelligent systems?
00:04:02.520 | And we're here at Google,
00:04:04.320 | there are millions of devices doing computation
00:04:08.080 | in a distributed way.
00:04:09.640 | How do you think about intelligence versus scale?
00:04:12.120 | - You can always characterize anything as a system.
00:04:15.640 | I think people who talk about things
00:04:19.840 | like intelligence explosion tend to focus on one agent,
00:04:23.640 | which is basically one brain,
00:04:25.040 | like one brain considered in isolation,
00:04:27.240 | like a brain in a jar that's controlling a body
00:04:29.440 | in a very top-to-bottom kind of fashion.
00:04:32.520 | And that body is pursuing goals in an environment.
00:04:35.720 | So it's a very hierarchical view.
00:04:36.960 | You have the brain at the top of the pyramid,
00:04:39.120 | then you have the body just plainly receiving orders
00:04:42.240 | and then the body is manipulating objects
00:04:43.880 | in the environment and so on.
00:04:45.160 | So everything is subordinate to this one thing,
00:04:49.160 | this epicenter, which is the brain.
00:04:50.920 | But in real life,
00:04:52.200 | intelligent agents don't really work like this.
00:04:55.480 | There is no strong delimitation
00:04:57.160 | between the brain and the body to start with.
00:04:59.640 | You have to look not just at the brain,
00:05:01.240 | but at the nervous system.
00:05:02.760 | But then the nervous system and the body
00:05:05.080 | aren't really two separate entities.
00:05:07.000 | So you have to look at an entire animal as one agent,
00:05:10.240 | but then you start realizing as you observe an animal
00:05:13.280 | over any length of time,
00:05:16.480 | that a lot of the intelligence of an animal
00:05:19.440 | is actually externalized.
00:05:20.880 | That's especially true for humans.
00:05:22.520 | A lot of our intelligence is externalized.
00:05:25.160 | When you write down some notes,
00:05:26.640 | that is externalized intelligence.
00:05:28.240 | When you write a computer program,
00:05:30.240 | you are externalizing cognition.
00:05:32.280 | So it's externalized in books,
00:05:33.600 | it's externalized in computers,
00:05:35.960 | it's externalized in the internet, in other humans.
00:05:42.160 | So there is no hard delimitation
00:05:47.160 | of what makes an intelligent agent.
00:05:49.400 | It's all about context.
00:05:50.680 | - Okay, but AlphaGo is better at Go
00:05:55.560 | than the best human player.
00:05:57.000 | There's levels of skill here.
00:06:02.480 | Do you think there's such a concept
00:06:07.000 | as intelligence explosion in a specific task?
00:06:10.880 | And then, well, yeah.
00:06:13.480 | Do you think it's possible to have a category of tasks
00:06:16.240 | on which you do have something
00:06:18.200 | like an exponential growth of ability
00:06:21.160 | to solve that particular problem?
00:06:23.560 | - I think if you consider a specific vertical,
00:06:26.480 | it's probably possible to some extent.
00:06:31.440 | I also don't think we have to speculate about it
00:06:34.480 | because we have real world examples
00:06:38.440 | of recursively self-improving intelligent systems.
00:06:42.160 | So for instance, science is a problem solving system,
00:06:47.040 | a knowledge generation system,
00:06:48.720 | like a system that experiences the world in some sense
00:06:52.360 | and then gradually understands it and can act on it.
00:06:56.280 | And that system is superhuman
00:06:58.240 | and it is clearly recursively self-improving
00:07:01.720 | because science feeds into technology.
00:07:03.680 | Technology can be used to build better tools,
00:07:06.320 | better computers, better instrumentation and so on,
00:07:09.000 | which in turn can make science faster.
00:07:11.960 | So science is probably the closest thing we have today
00:07:16.680 | to a recursively self-improving superhuman AI.
00:07:20.880 | And you can just observe,
00:07:22.600 | is scientific progress today exploding,
00:07:26.440 | which itself is an interesting question.
00:07:28.920 | And you can use that as a basis to try to understand
00:07:31.680 | what will happen with a superhuman AI
00:07:34.000 | that has science-like behavior.
00:07:36.440 | Thank you.
00:07:37.280 | (upbeat music)