
Daniel Kahneman: How Hard is Autonomous Driving? | AI Podcast Clips


Whisper Transcript

00:00:00.000 | It seems that almost every robot-human collaboration system is a lot harder than
00:00:07.760 | people realize. So do you think it's possible for robots and humans to collaborate successfully?
00:00:17.160 | We talked a little bit about semi-autonomous vehicles, like Tesla's Autopilot, but
00:00:22.780 | I mean tasks in general. We talked about current neural networks being
00:00:29.160 | kind of System 1. Do you think those same systems can borrow humans for System 2-type
00:00:38.680 | tasks and collaborate successfully?
00:00:41.920 | Well, I think that in any system where humans and the machine interact, the human will be
00:00:50.600 | superfluous within a fairly short time. That is, if the machine is advanced enough so that
00:00:57.120 | it can really help the human, then it may not need the human for a long time. Now, it
00:01:02.960 | would be very interesting if there are problems that for some reason the machine cannot solve,
00:01:11.280 | but that people could solve. Then you would have to build into the machine an ability
00:01:15.820 | to recognize that it is in that kind of problematic situation and to call the human. That cannot
00:01:25.280 | be easy without understanding. That is, it must be very difficult to program a recognition
00:01:34.720 | that you are in a problematic situation without understanding the problem.
00:01:42.040 | That's very true. In order to understand the full scope of situations that are problematic,
00:01:48.520 | you almost need to be smart enough to solve all those problems.
00:01:54.080 | It's not clear to me how much the machine will need the human. I think the example of
00:02:01.560 | chess is very instructive. I mean, there was a time at which Kasparov was saying that
00:02:06.600 | human-machine combinations will beat everybody. Even Stockfish doesn't need people.
00:02:13.760 | And AlphaZero certainly doesn't need people.
00:02:16.480 | The question is, just like you said, how many problems are like chess, and how many problems
00:02:21.960 | are not like chess? Well, every problem probably in the end is like
00:02:27.360 | chess. The question is, how long is that transition period?
00:02:30.640 | I mean, you know, that's a question I would ask you. In terms of autonomous vehicles,
00:02:37.640 | just driving is probably a lot more complicated than Go to solve.
00:02:42.720 | And that's surprising.
00:02:43.720 | Because it's open. No, I mean, that's not surprising to me because there is a hierarchical
00:02:55.520 | aspect to this, which is recognizing a situation and then within the situation bringing up
00:03:02.800 | the relevant knowledge. And for that hierarchical type of system to work, you need a more
00:03:13.600 | complicated system than we currently have.
00:03:16.720 | A lot of people, and this is probably one of the cognitive biases, think of driving as
00:03:23.120 | pretty simple because they think of their own experience. This is
00:03:29.080 | actually a big problem for AI researchers, or people thinking about AI, because they evaluate
00:03:36.800 | how hard a particular problem is based on very limited knowledge, based on how hard
00:03:43.640 | it is for them to do the task. And then they take it for granted. Maybe you can speak to that,
00:03:49.560 | because most people tell me driving is trivial, and that humans in fact are terrible
00:03:57.160 | at driving. Whereas I see humans as actually incredible at driving,
00:04:02.480 | and driving as really terribly difficult. So is that just another element of the effects
00:04:08.920 | that you've described in your work on the psychology side?
00:04:12.600 | No, I mean, I haven't really, you know, I would say that my research has contributed
00:04:21.080 | nothing to understanding the ecology and to understanding the structure of situations
00:04:27.240 | and the complexity of problems. So all we know is, it's very clear that Go, it's
00:04:37.800 | endlessly complicated, but it's very constrained, whereas in the real world there are far
00:04:47.280 | fewer constraints and many more potential surprises.
00:04:49.800 | So that's obvious to you, but it's not always obvious to people, right? So when you think
00:04:55.160 | about...
00:04:56.160 | Well, I mean, you know, people thought that reasoning was hard and perceiving was easy,
00:05:03.280 | but, you know, they quickly learned that actually modeling vision was tremendously complicated,
00:05:10.320 | while modeling reasoning, even proving theorems, was relatively straightforward.
00:05:16.360 | To push back on that a little bit, on the "quickly" part: it took several decades to
00:05:23.240 | learn that, and most people still haven't learned it. I mean, AI researchers of course
00:05:28.800 | have, but if you drift a little bit outside the specific AI field, the intuition is still
00:05:35.160 | that perception is a solved task.
00:05:36.800 | Oh yeah, no, I mean, that's true. Intuitions, the intuitions of the public haven't changed
00:05:41.680 | radically. And they are, as you said, they're evaluating the complexity of problems by how
00:05:49.160 | difficult it is for them to solve the problems. And that's got very little to do with the
00:05:56.120 | complexities of solving them in AI.
00:05:58.920 | Yeah.