
Peter Norvig: We Are Seduced by Our Low-Dimensional Metaphors | AI Podcast Clips


Transcript

Any time you use neural networks, any time you learn from data and form representations from data in an automated way, it's not very explainable, or not introspectable to us humans, in terms of how the neural network sees the world: why does it succeed so brilliantly in so many cases, and fail so miserably, in surprising ways, in others?

So what do you think? Is the future there? Can simply more data, better data, more organized data solve that problem? Or are there elements of symbolic systems that need to be brought in, which are a little bit more explainable? Yeah. So I prefer to talk about trust, and validation, and verification, rather than just about explainability.

And then I think explanations are one tool that you use towards those goals. And I think it is an important issue that we don't want to use these systems unless we trust them, and we want to understand where they work and where they don't work. And an explanation can be part of that.

So I apply for a loan, and I get denied. I want some explanation of why. And in Europe, there's the GDPR, which says you're entitled to get that. But on the other hand, an explanation alone is not enough. So we're used to dealing with people, and with organizations and corporations, and so on.

And they can give you an explanation, but you have no guarantee that that explanation relates to reality. So the bank can tell me, well, you didn't get the loan because you didn't have enough collateral. And that may be true, or it may be true that they just didn't like my religion, or something else.

I can't tell from the explanation. And that's true whether the decision was made by a computer or by a person. So I want more. I do want to have the explanations, and I want to be able to have a conversation, to go back and forth and say, well, you gave this explanation, but what about this?

And what would have happened if this had happened? And what would I need to change to change the outcome? So I think a conversation is a better way to think about it than just an explanation as a single output. And I think we need testing of various kinds. In order to know, was the decision really based on my collateral, or was it based on my religion, or skin color, or whatever?

I can't tell if I'm only looking at my own case. But if I look across all the cases, then I can detect a pattern. So you want to have that kind of capability.
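A minimal sketch of that kind of cross-case audit, assuming nothing about any real lender's data; the records, group labels, and fields below are made up purely for illustration:

```python
# Hedged sketch: one denial tells you nothing, but approval rates across
# many cases can expose a pattern. All data here is hypothetical.
from collections import defaultdict

decisions = [
    # (group, collateral_ok, approved) -- made-up loan records
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# Compare approval rates among applicants with the *same* collateral status,
# holding the stated reason for denial fixed.
rates = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, collateral_ok, approved in decisions:
    if collateral_ok:
        rates[group][0] += approved
        rates[group][1] += 1

for group, (approved, total) in rates.items():
    print(f"group {group}: {approved}/{total} approved given sufficient collateral")
```

If approval rates differ sharply between groups with identical collateral, the explanation "not enough collateral" doesn't match the aggregate behavior, which is exactly the pattern a single case can't reveal.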

You also want adversarial testing. We thought we were doing pretty well at object recognition in images. We said, look, we're pretty close to human-level performance on ImageNet and so on. And then you start seeing these adversarial images, and you say, wait a minute. That part is nothing like human performance. You can mess with it really easily.
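As a concrete illustration of how easy that messing-with can be, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way to construct adversarial images; the toy model and random "image" below are stand-ins, not any system discussed here:

```python
# Hedged FGSM sketch in PyTorch: nudge every pixel slightly in the direction
# that increases the classifier's loss. The model below is a toy stand-in.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return `image` perturbed by at most `epsilon` per pixel."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Imperceptible per pixel, but often enough to flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Toy usage: a linear "classifier" and one random 32x32 RGB "image".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])  # a pretend true class
adv = fgsm_attack(model, image, label)
print((adv - image).abs().max())  # per-pixel change stays within epsilon
```

With a real pretrained network, the same few lines routinely change the predicted class while leaving the image visually unchanged to a human.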

And yeah, you can do that to humans too. So-- In a different way, perhaps. Right. Humans don't know what color the dress was. Right. And so they're vulnerable to certain attacks that are different than the attacks on the machines. But the attacks on the machines are so striking, they really change the way you think about what we've done.

And the way I think about it is, I think part of the problem is that we're seduced by our low-dimensional metaphors. Yeah. So you look-- I like that phrase. You look in a textbook, and you say, OK, now we've mapped out the space. And a cat is here, and a dog is here, and maybe there's a tiny little spot in the middle where you can't tell the difference.

But mostly, we've got it all covered. And if you believe that metaphor, then you say, well, we're nearly there, and there are only going to be a couple of adversarial images. But I think that's the wrong metaphor. What you should really say is, it's not a 2D flat space that we've got mostly covered.

It's a million-dimensional space. And a cat is this string that goes out along this crazy path. And if you step a little bit off the path in any direction, you're in no-man's-land, and you don't know what's going to happen. And so I think that's where we are.
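A small numeric illustration of why stepping "a little bit" matters so much more in a million dimensions; the step sizes and dimensions here are arbitrary choices for the sketch:

```python
# Hedged sketch: a perturbation that is tiny in every coordinate is still a
# long step in high dimensions, so it easily leaves a thin data manifold.
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 1_000, 1_000_000):
    step = rng.uniform(-0.01, 0.01, size=d)  # at most a 1% change per coordinate
    print(f"d={d:>9}: per-coordinate max {np.abs(step).max():.3f}, "
          f"L2 length {np.linalg.norm(step):.2f}")
```

In 2D the step is negligible; in a million dimensions its total length comes out to roughly 5.8, far off the thin "string" where the cats actually live.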

And now we've got to deal with that. So it wasn't so much an explanation, but it was an understanding of what the models are and what they're doing. And now we can start exploring how to fix that. Yeah, validating the robustness of the system, and so on. But to take it back to this word, trust.

Do you think we're a little too hard on our robots in terms of the standards we apply? So there's a dance, a dance of nonverbal and verbal communication between humans, and by that standard we trust each other pretty quickly. You and I haven't met before, and there's some degree of trust that nothing's going to go crazy wrong.

And yet, when we look at AI systems, we seem to approach them with skepticism, always, always. And it's like they have to prove, through a lot of hard work, that they're worthy of even an inkling of our trust. What do you think about that? How do we break that barrier, close that gap?

I think that's right. I think that's a big issue. I was just listening to my friend Mark Moffett, who is a naturalist, and he says the most amazing thing about humans is that you can walk into a coffee shop or onto a busy street in a city, and there are lots of people around you that you've never met before, and you don't kill each other.

He says chimpanzees cannot do that. Yeah, right. If a chimpanzee is in a situation where there are others that aren't from its tribe, bad things happen. Especially in a coffee shop, where there's delicious food around. Yeah, yeah. But we humans have figured that out. And you know-- For the most part. For the most part.

We still go to war, we still do terrible things, but for the most part, we've learned to trust each other and live together. So that's going to be important for our AI systems as well. And also, I think a lot of the emphasis is on AI, but in many cases, AI is part of the technology, but isn't really the main thing.

So a lot of what we've seen is more due to communications technology than to AI technology. Yeah, you want to make these good decisions, but the reason we're able to have any kind of system at all is that we've got the communications, so we're collecting the data, and we can reach lots of people around the world.

I think that's a bigger change that we're dealing with.