Daniel Kahneman: How Hard is Autonomous Driving? | AI Podcast Clips


Transcript

It seems that almost every robot-human collaboration system is a lot harder than people realize. So do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi-autonomous vehicles, like the Tesla Autopilot, but what about tasks in general? We talked about current neural networks being kind of System 1. Do you think those same systems can borrow humans for System 2-type tasks and collaborate successfully?

Well, I think that in any system where humans and the machine interact, the human will be superfluous within a fairly short time. That is, if the machine is advanced enough so that it can really help the human, then it may not need the human for long. Now, it would be very interesting if there were problems that for some reason the machine cannot solve, but that people could solve.

Then you would have to build into the machine an ability to recognize that it is in that kind of problematic situation and to call the human. That cannot be easy without understanding. That is, it must be very difficult to program a recognition that you are in a problematic situation without understanding the problem.

That's very true. In order to understand the full scope of situations that are problematic, you almost need to be smart enough to solve all of those problems. It's not clear to me how much the machine will need the human. I think the example of chess is very instructive. I mean, there was a time at which Kasparov was saying that human-machine combinations would beat everybody.

Even Stockfish doesn't need people, and AlphaZero certainly doesn't need people. The question is, just like you said, how many problems are like chess and how many are not like chess? Well, every problem probably is like chess in the end. The question is, how long is that transition period?

I mean, you know, that's a question I would ask you: in terms of autonomous vehicles, just driving is probably a lot more complicated to solve than Go. And that's surprising, because it's open.

No, I mean, that's not surprising to me, because there is a hierarchical aspect to this, which is recognizing a situation and then, within the situation, bringing up the relevant knowledge.

And for that hierarchical type of system to work, you need a more complicated system than we currently have.

A lot of people, and this is probably one of the cognitive biases, think of driving as pretty simple because they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI in general: they evaluate how hard a particular problem is based on very limited knowledge, based on how hard it is for them to do the task.

And then they take that for granted. Maybe you can speak to that, because most people tell me that driving is trivial and that humans are in fact terrible at driving. And I look at humans and see that they are actually incredible at driving, and that driving is really terribly difficult. So is that just another element of the effects that you've described in your work on the psychology side?

No, I mean, you know, I would say that my research has contributed nothing to understanding the ecology, to understanding the structure of situations and the complexity of problems. What is very clear is that Go is endlessly complicated, but it's very constrained, whereas in the real world there are far fewer constraints and many more potential surprises.

So it's worth saying that, because it's not always obvious to people, right? So when you think about...

Well, I mean, you know, people thought that reasoning was hard and perceiving was easy, but they quickly learned that actually modeling vision was tremendously complicated, while modeling reasoning, even proving theorems, was relatively straightforward.

To push back on that a little bit, on the "quickly" part: it took several decades to learn that, and most people still haven't learned it. I mean, of course AI researchers have, but if you drift a little bit outside the specific AI field, the intuition is still that perception is an easy task to solve.

Oh yeah, no, I mean, that's true. The intuitions of the public haven't changed radically. And, as you said, they're evaluating the complexity of problems by how difficult it is for them to solve those problems, and that's got very little to do with the complexity of solving them in AI.

Yeah. Yeah.