
Peter Norvig: We Are Seduced by Our Low-Dimensional Metaphors | AI Podcast Clips


Whisper Transcript

00:00:00.000 | [MUSIC PLAYING]
00:00:03.440 | Any time you use neural networks, any time you learn
00:00:10.040 | from data, form representation from data in an automated way,
00:00:13.600 | it's not very explainable as to--
00:00:17.640 | or it's not introspective to us humans
00:00:21.760 | in terms of how this neural network sees the world,
00:00:24.880 | or why it succeeds so brilliantly in so many cases,
00:00:29.880 | and fails so miserably in surprising ways.
00:00:33.080 | So what do you think?
00:00:34.840 | Is the future there?
00:00:37.600 | Can simply more data, better data, more organized data
00:00:41.120 | solve that problem?
00:00:42.720 | Or are there elements of symbolic systems
00:00:45.880 | that need to be brought in which are
00:00:47.400 | a little bit more explainable?
00:00:48.760 | Yeah.
00:00:49.960 | So I prefer to talk about trust, and validation,
00:00:55.960 | and verification rather than just about explainability.
00:00:59.120 | And then I think explanations are one tool
00:01:01.880 | that you use towards those goals.
00:01:05.520 | And I think it is an important issue that we
00:01:08.160 | don't want to use these systems unless we trust them,
00:01:10.560 | and we want to understand where they work
00:01:12.280 | and where they don't work.
00:01:13.720 | And an explanation can be part of that.
00:01:17.400 | So I apply for a loan, and I get denied.
00:01:21.040 | I want some explanation of why.
00:01:22.760 | And in Europe, we have the GDPR that
00:01:26.960 | says you're required to be able to get that.
00:01:30.520 | But on the other hand, an explanation alone
00:01:32.440 | is not enough.
00:01:33.800 | So we are used to dealing with people,
00:01:37.880 | and with organizations, and corporations, and so on.
00:01:41.280 | And they can give you an explanation.
00:01:42.820 | But then you have no guarantee that that explanation
00:01:45.480 | relates to reality.
00:01:47.800 | So the bank can tell me, well, you
00:01:49.720 | didn't get the loan because you didn't have enough collateral.
00:01:52.680 | And that may be true, or it may be true that they just
00:01:55.280 | didn't like my religion or something else.
00:01:59.080 | I can't tell from the explanation.
00:02:01.200 | And that's true whether the decision was made
00:02:04.200 | by a computer or by a person.
00:02:07.600 | So I want more.
00:02:10.040 | I do want to have the explanations,
00:02:11.640 | and I want to be able to have a conversation to go back
00:02:14.420 | and forth and say, well, you gave this explanation,
00:02:17.120 | but what about this?
00:02:18.520 | And what would have happened if this had happened?
00:02:20.800 | And what would I need to do to change that?
00:02:24.680 | So I think a conversation is a better way
00:02:26.720 | to think about it than just an explanation
00:02:29.760 | as a single output.
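
[A minimal sketch of the back-and-forth being described here, written as counterfactual probing of a loan decision: instead of accepting a single explanation, you ask what the outcome would have been under changed inputs. The scoring model, feature names, and threshold below are hypothetical stand-ins for illustration, not any real bank's system.]

```python
# Probe a (hypothetical) loan-scoring model with counterfactual questions.
from typing import Callable, Dict

def probe_counterfactuals(score: Callable[[Dict[str, float]], float],
                          applicant: Dict[str, float],
                          threshold: float = 0.5) -> None:
    base = score(applicant)
    print(f"baseline score {base:.2f} -> "
          f"{'approved' if base >= threshold else 'denied'}")
    # "What would have happened if this had been different?"
    for feature, new_value in [("collateral", applicant["collateral"] * 2),
                               ("income", applicant["income"] * 1.2)]:
        alt = score(dict(applicant, **{feature: new_value}))
        print(f"if {feature} were {new_value:g}: score {alt:.2f} "
              f"({'approved' if alt >= threshold else 'denied'})")

# Toy stand-in for the bank's model, purely for illustration.
toy_model = lambda x: 0.3 * (x["collateral"] / 100.0) + 0.5 * (x["income"] / 100.0)
probe_counterfactuals(toy_model, {"collateral": 40.0, "income": 60.0})
```
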
00:02:31.920 | And I think we need testing of various kinds.
00:02:34.680 | So in order to know, was the decision really
00:02:38.320 | based on my collateral, or was it
00:02:40.760 | based on my religion, or skin color, or whatever?
00:02:45.080 | I can't tell if I'm only looking at my case.
00:02:47.520 | But if I look across all the cases,
00:02:49.640 | then I can detect a pattern.
00:02:52.240 | So you want to have that kind of capability.
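
[A rough sketch of the "look across all the cases" idea: a single denial can't distinguish a collateral-based decision from a discriminatory one, but aggregate approval rates at comparable collateral levels can expose a pattern. The records and group labels below are made-up illustration data.]

```python
# Aggregate decisions across cases to look for group-level disparities.
from collections import defaultdict

records = [
    # (group, collateral_bucket, approved)
    ("A", "high", True), ("A", "high", True), ("A", "low", False),
    ("B", "high", False), ("B", "high", True), ("B", "low", False),
]

stats = defaultdict(lambda: [0, 0])  # (group, bucket) -> [approved, total]
for group, bucket, approved in records:
    stats[(group, bucket)][0] += int(approved)
    stats[(group, bucket)][1] += 1

for (group, bucket), (ok, n) in sorted(stats.items()):
    print(f"group {group}, collateral {bucket}: {ok}/{n} approved ({ok / n:.0%})")
```
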
00:02:54.960 | You want to have this kind of adversarial testing.
00:02:57.800 | So we thought we were doing pretty good at object
00:03:02.440 | recognition in images.
00:03:02.440 | We said, look, we're at pretty close to human level
00:03:05.720 | performance on ImageNet and so on.
00:03:08.880 | And then you start seeing these adversarial images,
00:03:11.480 | and you say, wait a minute.
00:03:12.840 | That part is nothing like human performance.
00:03:15.760 | And you can mess with it really easily.
00:03:19.320 | And yeah, you can do that to humans too.
00:03:22.620 | In a different way, perhaps.
00:03:23.800 | Right.
00:03:24.300 | Humans don't know what color the dress was.
00:03:26.120 | Right.
00:03:27.160 | And so they're vulnerable to certain attacks
00:03:29.080 | that are different than the attacks on the machines.
00:03:32.280 | But the attacks on the machines are so striking,
00:03:36.000 | they really change the way you think about what we've done.
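
[The adversarial images referred to here are typically produced by gradient-based perturbations. Below is a sketch of one standard approach, the fast gradient sign method, written against a generic PyTorch classifier; the model, input tensor, and epsilon are assumed placeholders rather than the setup of any particular paper.]

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return an image nudged so the classifier is more likely to err."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # image: [N, C, H, W], label: [N]
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```
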
00:03:39.680 | And the way I think about it is I think part of the problem
00:03:43.280 | is we're seduced by our low dimensional metaphors.
00:03:50.240 | Yeah.
00:03:50.800 | So you look--
00:03:51.640 | I like that phrase.
00:03:52.400 | You look in a textbook, and you say, OK, now
00:03:55.640 | we've mapped out the space.
00:03:57.000 | And a cat is here, and dog is here,
00:04:01.600 | and maybe there's a tiny little spot in the middle
00:04:03.880 | where you can't tell the difference.
00:04:05.340 | But mostly, we've got it all covered.
00:04:07.400 | And if you believe that metaphor, then you say,
00:04:10.520 | well, we're nearly there.
00:04:11.720 | And there's only going to be a couple adversarial images.
00:04:15.800 | But I think that's the wrong metaphor.
00:04:17.380 | And what you should really say is it's not a 2D flat space
00:04:20.840 | that we've got mostly covered.
00:04:22.520 | It's a million dimension space.
00:04:24.240 | And a cat is this string that goes out in this crazy path.
00:04:29.360 | And if you step a little bit off the path in any direction,
00:04:32.440 | you're in nowhere's land, and you don't
00:04:34.760 | know what's going to happen.
00:04:36.040 | And so I think that's where we are.
00:04:37.720 | And now we've got to deal with that.
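
[A toy illustration of the million-dimension point: a perturbation that is imperceptibly small in every coordinate still moves an image a long way from where it started once there are a million coordinates, so the region "near" the cat path is far thinner than the 2-D picture suggests. The dimensions and step size here are arbitrary assumptions.]

```python
import numpy as np

rng = np.random.default_rng(0)
for dims in (2, 1_000, 1_000_000):
    step = rng.uniform(-0.01, 0.01, size=dims)  # imperceptible per coordinate
    print(f"{dims:>9} dims: per-coordinate step <= 0.01, "
          f"total L2 distance = {np.linalg.norm(step):.2f}")
```
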
00:04:40.080 | So it wasn't so much an explanation,
00:04:42.800 | but it was an understanding of what the models are
00:04:46.360 | and what they're doing.
00:04:47.280 | And now we can start exploring how do you fix that.
00:04:49.400 | Yeah, validating the robustness of the system, and so on.
00:04:51.880 | But take it back to this word trust.
00:04:56.600 | Do you think we're a little too hard on our robots
00:04:59.560 | in terms of the standards we apply?
00:05:02.280 | So there's a dance.
00:05:08.360 | There's a dance in nonverbal and verbal communication
00:05:11.560 | between humans.
00:05:13.720 | If we apply the same kind of standard-- in terms of humans,
00:05:17.320 | we trust each other pretty quickly.
00:05:20.880 | You and I haven't met before, and there's some degree
00:05:23.160 | of trust that nothing's going to go crazy wrong.
00:05:27.200 | And yet, to AI, when we look at AI systems,
00:05:31.040 | we seem to approach through skepticism always, always.
00:05:35.400 | And it's like they have to prove through a lot of hard work
00:05:39.680 | that they're worthy of even an inkling of our trust.
00:05:43.320 | What do you think about that?
00:05:45.000 | How do we break that barrier, close that gap?
00:05:47.520 | I think that's right.
00:05:48.520 | I think that's a big issue.
00:05:50.440 | Just listening to my friend Mark Moffett-- he's a naturalist,
00:05:55.400 | and he says the most amazing thing about humans
00:05:58.880 | is that you can walk into a coffee shop
00:06:01.760 | or a busy street in a city, and there's
00:06:05.840 | lots of people around you that you've never met before,
00:06:08.720 | and you don't kill each other.
00:06:11.240 | He says chimpanzees cannot do that.
00:06:13.200 | Yeah, right.
00:06:15.320 | If a chimpanzee's in a situation where
00:06:17.600 | here's some that aren't from my tribe, bad things happen.
00:06:23.000 | Especially in a coffee shop, there's delicious food around.
00:06:25.600 | Yeah, yeah.
00:06:26.520 | But we humans have figured that out.
00:06:29.720 | And you know--
00:06:30.840 | For the most part.
00:06:31.640 | For the most part.
00:06:32.440 | We still go to war, we still do terrible things,
00:06:34.800 | but for the most part, we've learned to trust each other
00:06:37.400 | and live together.
00:06:39.520 | So that's going to be important for our AI systems as well.
00:06:45.160 | And also, I think a lot of the emphasis is on AI,
00:06:50.240 | but in many cases, AI is part of the technology,
00:06:54.640 | but isn't really the main thing.
00:06:56.040 | So a lot of what we've seen is more due to communications
00:07:00.760 | technology than AI technology.
00:07:04.000 | Yeah, you want to make these good decisions,
00:07:06.760 | but the reason we're able to have any kind of system at all
00:07:10.560 | is we've got the communications so that we're
00:07:12.840 | collecting the data and so that we can reach lots
00:07:16.120 | of people around the world.
00:07:18.160 | I think that's a bigger change that we're dealing with.