
What is Intelligence? - François Chollet and Lex Fridman | AI Podcast Clips


Transcript

(gentle music) - Can you try to define intelligence? Like what does it mean to be more or less intelligent? Is it completely coupled to a particular problem or is there something a little bit more universal? - Yeah, I do believe all intelligence is specialized intelligence. Even human intelligence has some degree of generality.

Well, all intelligent systems have some degree of generality, but they're always specialized in one category of problems. So human intelligence is specialized in the human experience. And that shows at various levels. That shows in some prior knowledge that's innate, that we have at birth: knowledge about things like agents, goal-driven behavior, visual priors about what makes an object, priors about time and so on.

That shows also in the way we learn. For instance, it's very, very easy for us to pick up language. It's very, very easy for us to learn certain things because we are basically hard-coded to learn them. And we are specialized in solving certain kinds of problems, and we are quite useless when it comes to other kinds of problems.

For instance, we are not really designed to handle very long-term problems. We have no capability of seeing the very long term. We don't have very much working memory. - So how do you think about long-term? With long-term planning, are we talking about a scale of years, millennia? What do you mean by the long-term that we're not very good at?

- Well, human intelligence is specialized in the human experience. And human experience is very short. Like one lifetime is short. Even within one lifetime, we have a very hard time envisioning things on a scale of years. Like it's very difficult to project yourself at a scale of five years, at a scale of 10 years and so on.

- Right. - We can solve only fairly narrowly scoped problems. So when it comes to solving bigger problems, larger scale problems, we are not actually doing it on an individual level. So it's not actually our brain doing it. We have this thing called civilization, right? Which is itself a sort of problem solving system, a sort of artificial intelligence system, right?

And it's not running on one brain, it's running on a network of brains. In fact, it's running on much more than a network of brains. It's running on a lot of infrastructure, like books and computers and the internet and human institutions and so on. And that is capable of handling problems on a much greater scale than any individual human.

If you look at computer science, for instance, that's an institution that solves problems and it is superhuman, right? It operates on a greater scale, it can solve much bigger problems than an individual human could. And science itself, science as a system, as an institution is a kind of artificially intelligent problem solving algorithm that is superhuman.

- Yeah, at least computer science is like a theorem prover, at a scale of thousands, maybe hundreds of thousands, of human beings. At that scale, what do you think is an intelligent agent? So there's us humans at the individual level; there are millions, maybe billions, of bacteria on our skin.

That's at the smaller scale. You can even go to the particle level, to systems that behave, you could say, intelligently in some ways. And then you can look at Earth as a single organism; you can look at our galaxy and even the universe as a single organism.

How do you think about scale in defining intelligent systems? And we're here at Google; there are millions of devices doing computation in a distributed way. How do you think about intelligence versus scale? - You can always characterize anything as a system. I think people who talk about things like intelligence explosion tend to focus on one agent, which is basically one brain, like one brain considered in isolation, like a brain in a jar that's controlling a body in a very top-to-bottom kind of fashion.

And that body is pursuing goals in an environment. So it's a very hierarchical view. You have the brain at the top of the pyramid, then you have the body just passively receiving orders, and then the body is manipulating objects in the environment and so on. So everything is subordinate to this one thing, this epicenter, which is the brain.

But in real life, intelligent agents don't really work like this. There is no strong delimitation between the brain and the body to start with. You have to look not just at the brain, but at the nervous system. But then the nervous system and the body are not really two separate entities.

So you have to look at an entire animal as one agent, but then you start realizing as you observe an animal over any length of time, that a lot of the intelligence of an animal is actually externalized. That's especially true for humans. A lot of our intelligence is externalized.

When you write down some notes, that is externalized intelligence. When you write a computer program, you are externalizing cognition. So intelligence is externalized in books, it's externalized in computers, it's externalized in the internet, in other humans. It's externalized in language and so on. So there is no hard delimitation of what makes an intelligent agent.

It's all about context. - Okay, but AlphaGo is better at Go than the best human player. There are levels of skill here. Do you think there's such a concept as an intelligence explosion in a specific task? Do you think it's possible to have a category of tasks on which you do see something like exponential growth in the ability to solve that particular problem?

- I think if you consider a specific vertical, it's probably possible to some extent. I also don't think we have to speculate about it, because we have real-world examples of recursively self-improving intelligent systems. So for instance, science is a problem-solving system, a knowledge-generation system: a system that experiences the world in some sense, then gradually understands it and can act on it.

And that system is superhuman and it is clearly recursively self-improving because science feeds into technology. Technology can be used to build better tools, better computers, better instrumentation and so on, which in turn can make science faster. So science is probably the closest thing we have today to a recursively self-improving superhuman AI.
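[Editor's note: to make the feedback loop Chollet describes concrete, here is a minimal toy sketch, not anything from the conversation itself. It assumes a simple model in which knowledge production improves tooling and better tooling multiplies the rate of knowledge production; the function name and parameter values are hypothetical choices for illustration.]

```python
# Toy model of a recursively self-improving system, in the spirit of the
# science -> technology -> faster science loop described above.
# The parameter values are illustrative assumptions, not empirical estimates.

def simulate(steps: int, research_rate: float = 1.0, feedback: float = 0.05):
    """Accumulate knowledge; each step, the knowledge produced feeds back
    into better tooling, which multiplies the research rate."""
    knowledge = 0.0
    history = []
    for _ in range(steps):
        knowledge += research_rate        # science produces knowledge
        research_rate *= 1.0 + feedback   # technology makes science faster
        history.append(knowledge)
    return history

if __name__ == "__main__":
    # With feedback > 0 the rate compounds, so knowledge grows exponentially;
    # with feedback = 0 it grows only linearly.
    for step, k in enumerate(simulate(50), start=1):
        if step % 10 == 0:
            print(f"step {step:3d}: knowledge = {k:9.1f}")
```

Whether the real system's feedback coefficient is large enough to produce visible exponential growth is exactly the empirical question raised next.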

And you can just observe: is scientific progress today exploding? Which is itself an interesting question. And you can use that as a basis to try to understand what will happen with a superhuman AI that has science-like behavior. Thank you. (upbeat music)