
Daniel Kahneman: Deep Learning (System 1 and System 2) | AI Podcast Clips


Chapters

0:00 Intro
1:20 System 1 advances
2:28 Humans learn quickly
3:32 Benefits of System 1
5:26 Current Architecture
6:10 Neural Networks
7:00 Grounding
7:24 What is Grounding
8:23 Active Learning
9:32 Building a System
10:23 Human Perception
12:03 Pedestrians
13:44 Understanding mortality

Transcript

So we're not talking about humans, but if we think about building artificial intelligence systems, robots, do you think all the features and bugs that you have highlighted in human beings are useful for constructing AI systems? So both systems are useful for perhaps instilling in robots? What is happening these days in deep learning is more like a System 1 product than like a System 2 product.

I mean, deep learning matches patterns and anticipates what's going to happen, so it's highly predictive. But what deep learning doesn't have, and many people think this is critical, is the ability to reason, so there is no System 2 there. But I think very importantly, it doesn't have any causality or any way to represent meaning and to represent real interaction.
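A minimal sketch, not from the conversation itself, of the distinction being drawn here: a learned lookup of correlations can predict what usually goes together, while only an explicit causal rule can answer an interventional "what if" question. All names and data below are invented for illustration.

```python
# Illustrative toy only: "System 1"-style pattern matching vs.
# "System 2"-style explicit causal reasoning. Hypothetical example.

from dataclasses import dataclass

@dataclass
class Scene:
    wet_ground: bool
    sprinkler_on: bool
    raining: bool

# "System 1": associations memorized from past co-occurrences.
# It predicts, but it has no notion of cause and effect.
learned_association = {
    (True, True): "sprinkler",   # wet ground and sprinkler were seen together
    (True, False): "rain",
}

def system1_guess(scene: Scene) -> str:
    return learned_association.get((scene.wet_ground, scene.sprinkler_on), "unknown")

# "System 2": an explicit causal rule. It can answer interventions
# ("what if we turn the sprinkler off?"), which pure association cannot.
def system2_explain(scene: Scene) -> str:
    if scene.raining:
        return "rain caused the wet ground"
    if scene.sprinkler_on:
        return "the sprinkler caused the wet ground"
    return "no known cause for the wet ground"

scene = Scene(wet_ground=True, sprinkler_on=True, raining=False)
print(system1_guess(scene))     # "sprinkler" -- a recalled correlation
print(system2_explain(scene))   # an explanation in terms of cause and effect
```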

So until that is solved, what can be accomplished is marvelous and very exciting, but limited. That's actually a really nice way to think of current advances in machine learning, as essentially System 1 advances. So how far can we get with just System 1, if we think of deep learning and artificial intelligence systems?

It's very clear that DeepMind has already gone way beyond what people thought was possible. I think the thing that has impressed me most about the developments in AI is the speed. It's that things, at least in the context of deep learning, and maybe this is about to slow down, have moved a lot faster than anticipated.

The transition from solving chess to solving Go, it's bewildering how quickly that went. The move from AlphaGo to AlphaZero is sort of bewildering, the speed at which they accomplished that. Now, clearly, there are many problems that you can solve that way, but there are some problems for which you need something else.

Something like reasoning. Well, reasoning and also, you know, one of the real mysteries, psychologist Gary Marcus, who is also a critic of AI, I mean, what he points out, and I think he has a point, is that humans learn quickly. Humans don't need a million examples, they need two or three examples.

So clearly, there is a fundamental difference. And what enables a machine to learn quickly, what you have to build into the machine, because it's clear that you have to build some expectations or something in the machine to make it ready to learn quickly, that at the moment seems to be unsolved.

I'm pretty sure that DeepMind is working on it, but if they have solved it, I haven't heard yet. They're actually trying to, they and OpenAI are trying to start to use neural networks to reason, to assemble knowledge. Of course, causality, temporal causality, is out of reach for most everybody.
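Kahneman's point a moment ago, that humans learn from two or three examples while a machine needs built-in expectations to do the same, can be made concrete with a toy sketch. Here a fixed feature map stands in for those built-in expectations; everything below is invented for illustration and is not anything DeepMind or OpenAI has published.

```python
# Toy few-shot learner: assuming a fixed "prior" feature map (standing in
# for expectations built into the machine), a new class can be picked up
# from two or three labelled examples. All numbers are made up.

import numpy as np

def prior_features(x: np.ndarray) -> np.ndarray:
    # Hypothetical fixed embedding; in practice this would be a
    # pre-trained network encoding prior knowledge about the world.
    return np.tanh(x)

def fit_few_shot(examples: dict) -> dict:
    # One centroid per class, computed from only a handful of examples.
    return {label: np.mean([prior_features(x) for x in xs], axis=0)
            for label, xs in examples.items()}

def predict(centroids: dict, x: np.ndarray) -> str:
    z = prior_features(x)
    return min(centroids, key=lambda label: np.linalg.norm(z - centroids[label]))

# Two or three examples per class are enough for this kind of learner,
# because the heavy lifting is done by the prior representation.
support = {
    "cat": [np.array([1.0, 0.2]), np.array([0.9, 0.1])],
    "dog": [np.array([-0.8, 0.4]), np.array([-1.1, 0.3]), np.array([-0.9, 0.5])],
}
centroids = fit_few_shot(support)
print(predict(centroids, np.array([0.95, 0.15])))  # expected: "cat"
```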

You mentioned that the benefit of System 1 is essentially that it's fast, it allows us to function in the world. Fast and skilled, yeah. It's skilled. And it has a model of the world. You know, in a sense, I mean, there was an earlier phase of AI that attempted to model reasoning, and they were moderately successful, but, you know, reasoning by itself doesn't get you much.

Deep learning has been much more successful in terms of, you know, what it can do. But now, it's an interesting question whether it's approaching its limits. What do you think? I think absolutely. So I just talked to Yann LeCun, whom you mentioned, you know, and he thinks that we're not going to hit the limits with neural networks, that ultimately this kind of System 1 pattern matching will start to look like System 2 without significant transformation of the architecture.

So I'm more with the majority of the people who think that yes, neural networks will hit a limit in their capability. He, on the one hand, I have heard him tell Demis Hassabis essentially that, you know, what they have accomplished is not a big deal, that they have barely touched it, that basically, you know, they can't do unsupervised learning in an effective way.

But you're telling me that he thinks that within the current architecture, you can do causality and reasoning? So he's very much a pragmatist, in the sense that he's saying we're very far away, that there's still, I think there's this idea he expresses, that we can only see one or two mountain peaks ahead, and there might be either a few more after that or thousands more.

So that kind of idea. I heard that metaphor. Right. But nevertheless, he doesn't see the final answer as fundamentally different from the one we currently have, so neural networks would be a huge part of that. Yeah. I mean, that's very likely, because pattern matching is so much of what's going on.

But. And you can think of neural networks as processing information sequentially. Yeah, I mean, you know, there is an important aspect to, for example, you get systems that translate and they do a very good job, but they really don't know what they're talking about. And for that, I'm really quite surprised.

For that, you would need an AI that has sensation, an AI that is in touch with the world. Yes, self-awareness, and maybe even something that resembles consciousness, those kinds of ideas. Certainly awareness of, you know, awareness of what's going on, so that the words have meaning, or are in touch with some perception or some action.

Yeah. So that's a big thing for Yann, and it's what he refers to as grounding to the physical space. So that's what we're talking about, the same thing. Yeah. So how do you ground? I mean, the grounding, without grounding, then you get a machine that doesn't know what it's talking about, because it is talking about the world ultimately.

The question, the open question is what it means to ground. I mean, we're very human centric in our thinking, but what does it mean for a machine to understand what it means to be in this world? Does it need to have a body? Does it need to have a finiteness like we humans have?

All of these elements, it's a very open question. You know, I'm not sure about having a body, but having a perceptual system, having a body would be very helpful too. I mean, if you think about mimicking a human. But having perception, that seems to be essential, so that you can build, you can accumulate knowledge about the world.

However, you can imagine a human completely paralyzed, and there is a lot that the human brain could learn, you know, with a paralyzed body. So if we got a machine that could do that, that would be a big deal. And then the flip side of that, something you see in children and something that in the machine learning world is called active learning.

Maybe it also is being able to play with the world. How important, for developing System 1 or System 2, do you think it is to play with the world, to be able to interact with it? Well, certainly a lot of what you learn is learning to anticipate the outcomes of your actions.

I mean, you can see how babies learn it, you know, with their hands, how they learn, you know, to connect the movements of their hands with something that clearly happens in the brain, and the ability of the brain to learn new patterns.

So, you know, it's the kind of thing that you get with artificial limbs that you connect, and then people learn to operate the artificial limb, you know, really impressively quickly, at least from what I hear. So we have a system that is ready to learn the world through action.
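A toy sketch of learning the world through action, in the spirit of what is being described here: the agent anticipates the outcome of its own action, acts, compares its prediction with what actually happened, and adjusts its expectations. The environment and the learning rule below are invented for illustration.

```python
# Minimal sketch of "learning by acting": the agent keeps a forward model
# of what each action does and updates it from its own prediction errors.
# The hidden dynamics and learning rate are hypothetical toys.

import random

true_effect = {"push": 2.0, "pull": -1.0}   # hidden dynamics of the "world"
model = {"push": 0.0, "pull": 0.0}          # agent's initial expectations
learning_rate = 0.5

position = 0.0
for step in range(20):
    action = random.choice(list(model))
    predicted = position + model[action]    # anticipate the outcome
    position += true_effect[action]         # act and observe what really happened
    error = position - predicted            # surprise: actual minus anticipated
    model[action] += learning_rate * error  # adjust expectations accordingly

# The learned effects approach the true ones, e.g. {"push": 2.0, "pull": -1.0}.
print({action: round(effect, 2) for action, effect in model.items()})
```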

At the risk of going into way too mysterious a land, what do you think it takes to build a system like that? Obviously we're very far from understanding how the brain works, but how difficult is it to build this mind of ours? You know, I mean, I think that Yann LeCun's answer, that we don't know how many mountains there are.

I think that's a very good answer. I think that, you know, if you look at what Ray Kurzweil is saying, that strikes me as off the wall, but I think people are much more realistic than that. Actually, Demis Hassabis is, and Yann is, and so the people actually doing the work are fairly realistic, I think.

To maybe phrase it another way, from the perspective not of building it but of understanding it, how complicated are human beings, in the following sense? You know, I work with autonomous vehicles and pedestrians, so we try to model pedestrians. How difficult is it to model a human being, their perception of the world, the two systems they operate under, sufficiently well to be able to predict whether the pedestrian is going to cross the road or not?

I'm, you know, I'm fairly optimistic about that, actually, because what we're talking about is a huge amount of information that every vehicle has, and that feeds into one system, into one gigantic system. And so anything that any vehicle learns becomes part of what the whole system knows. And with a system multiplier like that, there is a lot that you can do.

So human beings are very complicated, but, and, you know, the system is going to make mistakes, but humans make mistakes. I think that they'll be able to, I think they are able to anticipate pedestrians, otherwise a lot would happen. They're able to, you know, they're able to get into a roundabout and into traffic, so they must be able to expect or to anticipate how people will react when they're sneaking in.

And there's a lot of learning that's involved in that. Currently, pedestrians are treated as things that cannot be hit, and they're not treated as agents with whom you interact in a game-theoretic way. So, I mean, it's a totally open problem, and every time somebody tries to solve it, it seems to be harder than we think.

And nobody's really tried to seriously solve the problem of that dance. I'm not sure if you've thought about the problem of pedestrians, but you're really putting your life in the hands of the driver. You know, there is a dance, and part of the dance would be quite complicated. But, for example, when I cross the street and there is a vehicle approaching, I look the driver in the eye, and I think many people do that.

And you know, that's a signal that I'm sending, and I would be sending that to a machine, to an autonomous vehicle, and it had better understand it, because it means I'm crossing. So, and there's another thing you do, actually. I'll tell you what you do, because I've watched hundreds of hours of video on this: you do that before you step into the street, and when you step into the street, you actually look away.

Look away. Yeah. Now, what is that? What that's saying is, I mean, you're trusting that the car, which hasn't slowed down yet, will slow down. Yeah. And you're telling him, I'm committed. I mean, this is like in a game of chicken. So I'm committed, and if I'm committed, I'm looking away.

So there is, you just have to stop. So the question is whether a machine that observes that needs to understand mortality. Here, I'm not sure that it's got to understand so much as it's got to anticipate. And here, you know, you're surprising me, because here I would think that maybe you can anticipate without understanding, because I think this is clearly what's happening in playing Go or in playing chess: there's a lot of anticipation and there is zero understanding.

So I thought that you didn't need a model of the human and a model of the human mind to avoid hitting pedestrians, but you are suggesting that you actually do. There you go, yeah. You do. And then it's a lot harder than I thought. Yeah.
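A hypothetical sketch of how the commitment signal discussed above, eye contact followed by looking away while stepping off the curb, might feed an intent estimator inside a vehicle. The field names, probabilities, and thresholds are invented for illustration and are not drawn from any actual autonomous-driving system.

```python
# Hypothetical illustration of anticipating pedestrian intent from the
# "game of chicken" signals described in the conversation. Invented values.

from dataclasses import dataclass

@dataclass
class PedestrianObservation:
    made_eye_contact: bool   # looked at the vehicle/driver while approaching
    looking_away: bool       # now looking away from the vehicle
    stepped_off_curb: bool

def crossing_probability(obs: PedestrianObservation) -> float:
    # Fully committed: signalled awareness, then looked away and stepped out.
    if obs.made_eye_contact and obs.looking_away and obs.stepped_off_curb:
        return 0.95
    # Signalled intent but not yet committed.
    if obs.made_eye_contact:
        return 0.6
    # No signal exchanged: stay conservative and assume crossing is possible.
    return 0.3

def should_yield(obs: PedestrianObservation, threshold: float = 0.5) -> bool:
    # Treat the pedestrian as an agent in the interaction, not just an obstacle.
    return crossing_probability(obs) >= threshold

print(should_yield(PedestrianObservation(True, True, True)))  # True: you just have to stop
```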