David Ferrucci: What is Intelligence? | AI Podcast Clips


Chapters

0:00 What is intelligence
1:15 Understanding the world
2:05 Picking a goal
4:00 Alien Intelligence
6:10 Proof
7:54 Social constructs
9:56 We are bound together
10:48 How hard is that
13:50 Optimistic notion
14:35 Emotional manipulation

Transcript

- So let me ask, you've kind of alluded to it, but let me ask again, what is intelligence? Underlying the discussions we'll have with Jeopardy and beyond, how do you think about intelligence? Is it a sufficiently complicated problem, being able to reason your way through solving that problem? Is that kind of how you think about what it means to be intelligent?

- So I think of intelligence primarily two ways. One is the ability to predict. So in other words, if I have a problem, can I predict what's gonna happen next? Whether it's to predict the answer of a question or to say, look, I'm looking at all the market dynamics and I'm gonna tell you what's gonna happen next, or you're in a room and somebody walks in and you're gonna predict what they're gonna do next or what they're gonna say next.

- So in a highly dynamic environment full of uncertainty, be able to-- - Lots of, you know, the more variables, the more possibilities, the more complex it gets. But can I take a small amount of prior data and learn the pattern and then predict what's gonna happen next accurately and consistently?

That's certainly a form of intelligence. - What do you need for that, by the way? You need to have an understanding of the way the world works in order to be able to unroll it into the future, right? What do you think is needed to predict-- - Depends what you mean by understanding.

I need to be able to find that function, and this is very much what-- - What's the function? - What deep learning does, what machine learning does, is if you give me enough prior data and you tell me what the output variable is that matters, I'm gonna sit there and be able to predict it.

And if I can predict it accurately so that I can get it right more often than not, I'm smart. If I can do that with less data and less training time, I'm even smarter. If I can figure out what's even worth predicting, I'm smarter, meaning I'm figuring out what path is gonna get me toward a goal.
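To make that framing concrete, here is a minimal sketch (added for illustration, not anything said in the conversation) of "take prior data, learn the function for the output variable that matters, and predict what happens next." The data, weights, and variable names are entirely made up, and plain least squares stands in for whatever learning method would actually be used.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Prior data": past observations of a few variables, plus the output variable
# we care about. All of it is synthetic, purely for illustration.
X = rng.normal(size=(100, 3))                           # 100 past cases, 3 variables
true_weights = np.array([2.0, -1.0, 0.5])               # the "pattern" hiding in the data
y = X @ true_weights + rng.normal(scale=0.1, size=100)  # the output variable that matters

# "Find that function": ordinary least squares picks the weights (and intercept)
# that best explain the prior data.
A = np.c_[X, np.ones(len(X))]
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

# "Predict what's gonna happen next": apply the learned function to a new case.
x_new = rng.normal(size=3)
prediction = np.r_[x_new, 1.0] @ weights
print(prediction)
```

In this framing, "smarter" just means predicting accurately with less data and less training, which is exactly the narrow, savant-like notion of intelligence the conversation goes on to question.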

- What about picking a goal? Sorry to interrupt again. - Well, that's interesting. Picking a goal is sort of an interesting thing, and I think that's where you bring in, what are you pre-programmed to do? We talk about humans, and humans are pre-programmed to survive, so that's sort of their primary driving goal.

What do they have to do to do that? And that can be very complex, right? So it's not just figuring out that you need to run away from the ferocious tiger, but we survive in a social context as an example. So understanding the subtleties of social dynamics becomes something that's important for surviving, finding a mate, reproducing, right?

So we're continually challenged with complex sets of variables, complex constraints, rules, if you will, or patterns, and we learn how to find the functions and predict the things, in other words, represent those patterns efficiently, and be able to predict what's gonna happen, and that's a form of intelligence. That doesn't really require anything specific other than the ability to find that function and predict that right answer.

It's certainly a form of intelligence. But then when we say, well, do we understand each other? In other words, would you perceive me as intelligent beyond that ability to predict? So now I can predict, but I can't really articulate how I'm going through that process, what my underlying theory is for predicting, and I can't get you to understand what I'm doing so that you can follow it, so that you can figure out how to do this yourself if you did not have, for example, the right pattern-matching machinery that I did.

And now we potentially have this breakdown where, in effect, I'm intelligent, but I'm sort of an alien intelligence relative to you. - You're intelligent, but nobody knows about it. - Well, I can see the output. - So you're saying, let's sort of separate the two things. One is you explaining why you were able to predict the future, and the second is impressing me that you're intelligent, me being able to know that you successfully predicted the future.

Do you think that's-- - Well, it's not impressing you that I'm intelligent. In other words, you may be convinced that I'm intelligent in some form. - So how, what would convince-- - Because of my ability to predict. - So I would look at the metrics. - When you can, I say, wow, you're right more times than I am.

You're doing something interesting. That's a form of intelligence. But then what happens is, if I say, how are you doing that? And you can't communicate with me, and you can't describe that to me, now I may label you a savant. I may say, well, you're doing something weird, and it's just not very interesting to me, because you and I can't really communicate.

And so now, this is interesting, right? Because you're in this weird place where, for you to be recognized as intelligent the way I'm intelligent, you and I sort of have to be able to communicate. We start to understand each other, and then my respect, my appreciation, my ability to relate to you starts to change.

So now you're not an alien intelligence anymore. You're a human intelligence now, because you and I can communicate. And so I think when we look at animals, for example, animals can do things we can't quite comprehend, we don't quite know how they do them, but they can't really communicate with us.

They can't put what they're going through in our terms. And so we think of them as sort of, well, they're these alien intelligences, and they're not really worth necessarily what we're worth. We don't treat them the same way as a result of that. But it's hard, because who knows what's going on.

- So just a quick elaboration on that. The explaining that you're intelligent, explaining the reasoning that went into the prediction, is not some kind of mathematical proof. If we look at humans, look at political debates and discourse on Twitter, it's mostly just telling stories. So your task is, sorry, your task is not to give an accurate depiction of how you reason, but to tell a story, real or not, that convinces me that there was a mechanism by which you-- - Ultimately, that's what a proof is.

I mean, even a mathematical proof is that. Because ultimately, the other mathematicians have to be convinced by your proof. Otherwise, in fact, there have been-- - That's the metric of success, yeah. - Yeah, there have been several proofs out there where mathematicians would study for a long time before they were convinced that it actually proved anything.

Right, you never know if it proved anything until the community of mathematicians decided that it did. So I mean, but it's a real thing. And that's sort of the point, right, is that ultimately, this notion of understanding, of us understanding something, is ultimately a social concept. In other words, I have to convince enough people that I did this in a reasonable way.

I did this in a way that other people can understand and replicate and that it makes sense to them. So human intelligence is bound together in that way. We're bound up in that sense. We sort of never really get away with it until we can sort of convince others that our thinking process makes sense.

- Do you think the general question of intelligence is then also a social construct? So if we ask questions of an artificial intelligence system, is this system intelligent? The answer will ultimately be a socially constructed-- - I think, so I think, I'm making two statements. I'm saying we can try to define intelligence in this super objective way that says, here's this data.

I wanna predict this type of thing. Learn this function, and then if you get it right often enough, we consider you intelligent. - But that's more like a savant. - I think it is. It doesn't mean it's not useful. It could be incredibly useful. It could be solving a problem we can't otherwise solve, and it can solve it more reliably than we can.

But then there's this notion of, can humans take responsibility for the decision that you're making? Can we make those decisions ourselves? Can we relate to the process that you're going through? And now, you as an agent, whether you're a machine or another human, frankly, are now obliged to make me understand how it is that you're arriving at that answer and allow me, me or obviously a community or a judge of people to decide whether or not that makes sense.

And by the way, that happens with humans as well. You're sitting down with your staff, for example, and you ask for suggestions about what to do next, and someone says, "Oh, I think you should buy, and I think you should buy this much," or whatever, or sell, or whatever it is, or I think you should launch the product today or tomorrow, or launch this product versus that product, whatever the decision may be, and you ask why, and the person says, "I just have a good feeling about it." And you're not very satisfied.

Now, that person could be, you might say, "Well, you've been right before, but I'm gonna put the company on the line. Can you explain to me why I should believe this?" - And that explanation may have nothing to do with the truth. It's how to convince the other person.

It could still be wrong. - It's just gotta be convincing. - But it's ultimately gotta be convincing. And that's why I'm saying we're bound together. Our intelligences are bound together in that sense. We have to understand each other. And if, for example, you're giving me an explanation, and this is a very important point, and I'm not good at reasoning well, at being objective, at following logical and consistent paths, and I'm not good at measuring and sort of computing probabilities across those paths, then what happens is, collectively, we're not gonna do well.

- How hard is that problem, the second one? So I think we'll talk quite a bit about the first on a specific objective metric benchmark performing well. But being able to explain the steps, the reasoning, how hard is that problem? - I think that's very hard. I mean, I think that that's, well, it's hard for humans.

- The thing that's hard for humans, as you know, may not necessarily be hard for computers, and vice versa. So, sorry, so how hard is that problem for computers? - I think it's hard for computers, and the reason why I relate it to, or say that, it's also hard for humans is because I think when we step back and we say we wanna design computers to do that, one of the things we have to recognize is we're not sure how to do it well.

I'm not sure we have a recipe for that, and even if you wanted to learn it, it's not clear exactly what data we use and what judgments we use to learn that well. And so what I mean by that is, if you look at the entire enterprise of science, science is supposed to be about objective reason, right?

So we think about, gee, who's the most intelligent person or group of people in the world? Do we think about the savants who can close their eyes and give you a number, or do we think about the think tanks, or the scientists or the philosophers who kind of work through the details and write the papers and come up with the thoughtful, logical proofs and use the scientific method? I think it's the latter.

And my point is that, how do you train someone to do that? And that's what I mean by it's hard. What's the process of training people to do that well? That's a hard process. As a society, we work pretty hard to get other people to understand our thinking and to convince them of things.

Now we could persuade them, obviously we talked about this, like human flaws or weaknesses, we can persuade them through emotional means, but to get them to understand and connect to and follow a logical argument is difficult. We try it, we do it as scientists, we try to do it as journalists, we try to do it as even artists in many forms, as writers, as teachers.

We go through a fairly significant training process to do that, and then we could ask, well, why is that so hard? But it's hard, and for humans, it takes a lot of work. And when we step back and say, well, how do we get a machine to do that?

It's a vexing question. - How would you begin to try to solve that? And maybe just a quick pause, because there's an optimistic notion in the things you're describing, which is being able to explain something through reason. But if you look at algorithms that recommend things that we'll look at next, whether it's Facebook, Google, advertisement-based companies, you know, their goal is to convince you to buy things based on anything.

So that could be reason, 'cause the best advertisement is showing you things that you really do need and explaining why you need them. But it could also be through emotional manipulation. For the algorithm that describes why a certain decision was made, how hard is it to do it through emotional manipulation?

And why is that a good or a bad thing? So you've kind of focused on reason, logic, really showing in a clear way why something is good. One, is that even a thing that us humans do? And two, how do you think of the difference in the reasoning aspect and the emotional manipulation?

- So you call it emotional manipulation, but more objectively, it's essentially saying, there are certain features of things that seem to attract your attention, and I'm gonna give you more of that stuff. - Manipulation is a bad word. - Yeah, I mean, I'm not saying it's right or wrong.

It works to get your attention, and it works to get you to buy stuff. And when you think about algorithms that look at the patterns of features that you seem to be spending your money on, and say, I'm gonna give you something with a similar pattern, so I'm gonna learn that function, because the objective is to get you to click on it or get you to buy it or whatever it is.

I don't know, I mean, it is what it is. I mean, that's what the algorithm does. You can argue whether it's good or bad. It depends what your goal is. - I guess this seems to be very useful for convincing, for telling a story. - I think for convincing humans, it's good, because again, this goes back to, what is the human behavior like?

How does the human brain respond to things? I think there's a more optimistic view of that, too, which is that if you're searching for certain kinds of things, you've already reasoned that you need them. And these algorithms are saying, look, it's up to you to reason whether you need something or not.

That's your job. You may have an unhealthy addiction to this stuff, or you may have a reasoned and thoughtful explanation for why it's important to you, and the algorithms are saying, hey, that's whatever. That's your problem. All I know is you're buying stuff like that, you're interested in stuff like that.

Could be a bad reason, could be a good reason. That's up to you. I'm gonna show you more of that stuff. And I think that that's, it's not good or bad. It's not reasoned or not reasoned. The algorithm is doing what it does, which is saying, you seem to be interested in this, I'm gonna show you more of that stuff.

And I think we're seeing this not just in buying stuff, but even in social media. You're reading this kind of stuff. I'm not judging whether it's good or bad. I'm not reasoning at all. I'm just saying, I'm gonna show you other stuff with similar features. And that's it, and I wash my hands of it, and I say, that's all that's going on.
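As a rough illustration of the behavior being described, here is a minimal sketch (added for illustration; it is not any real platform's algorithm, and every item, feature, and name in it is hypothetical) of recommending by surface-feature similarity alone, with no interpretation of why you clicked.

```python
import numpy as np

# Hypothetical items described only by superficial surface features
# (say: word count, price, how much red is in the image). All made up.
items = {
    "item_a": np.array([0.9, 0.1, 0.7]),
    "item_b": np.array([0.8, 0.2, 0.6]),
    "item_c": np.array([0.1, 0.9, 0.2]),
}

def recommend(clicked_ids, items, k=1):
    """Return the k unclicked items whose surface features most resemble the clicks."""
    profile = np.mean([items[i] for i in clicked_ids], axis=0)  # average of clicked features

    def similarity(v):
        # cosine similarity between an item's features and the click profile
        return float(v @ profile / (np.linalg.norm(v) * np.linalg.norm(profile)))

    scores = {i: similarity(v) for i, v in items.items() if i not in clicked_ids}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# You clicked item_a, so you get item_b: similar features, no judgment of "why".
print(recommend(["item_a"], items))
```

The point of the sketch is what is absent: nothing in it models meaning, values, or whether the interest is healthy or reasoned, which is the harder problem the conversation turns to next.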

- People are so harsh on AI systems. So one, the bar of performance is extremely high, and yet we also ask them, in the case of social media, to help find the better angels of our nature and help make a better society. So what do you think about the role of AI there?

- I agree with you. That's the interesting dichotomy, right? Because on one hand, we're sitting there, and we're sort of doing the easy part, which is finding the patterns. We're not building, the system's not building, a theory that is consumable and understandable by other humans, that can be explained and justified.

And so on one hand, to say, oh, AI is doing this, why isn't it doing this other thing? Well, this other thing's a lot harder. And it's interesting to think about why it's harder. It's because you're interpreting the data in the context of prior models, in other words, understandings of what's important in the world and what's not important.

What are all the other abstract features that drive our decision-making? What's sensible, what's not sensible, what's good, what's bad, what's moral, what's valuable, what isn't? Where is that stuff? No one's applying the interpretation. So when I see you clicking on a bunch of stuff, and I look at these simple features, the raw features, the features that are there in the data, like what words are being used, or how long the material is, or other very superficial features, what colors are being used in the material.

Like I don't know why you're clicking on the stuff you're looking at, or, if it's products, what the price is, or what the category is, and stuff like that. And I just feed you more of the same stuff. That's very different than kind of getting in there and saying, what does this mean?

The stuff you're reading, like why are you reading it? What assumptions are you bringing to the table? Are those assumptions sensible? Does the material make any sense? Does it lead you to thoughtful, good conclusions? Again, there's interpretation and judgment involved in that process. That isn't really happening in the AI today.

That's harder. Because you have to start getting at the meaning of the stuff, of the content. You have to get at how humans interpret the content relative to their value system and deeper thought processes. - So that's what meaning means: not just some kind of deep, timeless, semantic thing that the statement represents, but also how a large number of people are likely to interpret it.

So again, even meaning is a social construct. You have to try to predict how most people would understand this kind of statement. - Yeah, meaning is often relative, but meaning implies that the connections go beneath the surface of the artifacts. If I show you a painting, it's a bunch of colors on a canvas, what does it mean to you?

And it may mean different things to different people because of their different experiences. It may mean something even different to the artist who painted it. As we try to get more rigorous with our communication, we try to really nail down that meaning. So we go from abstract art to precise mathematics, precise engineering drawings and things like that.

We're really trying to say, I wanna narrow that space of possible interpretations, because the precision of the communication ends up becoming more and more important. And so that means that I have to specify, and I think that's why this becomes really hard. Because if I'm just showing you an artifact and you're looking at it superficially, whether it's a bunch of words on a page or brushstrokes on a canvas or pixels in a photograph, you can sit there and you can interpret it lots of different ways, at many, many different levels.

But when I wanna align our understanding of that, I have to specify a lot more stuff that's actually not directly in the artifact. Now I have to say, well, how are you interpreting this image and that image? And what about the colors and what do they mean to you?

What perspective are you bringing to the table? What are your prior experiences with those artifacts? What are your fundamental assumptions and values? What is your ability to kind of reason, to chain together logical implications, as you're sitting there and saying, well, if this is the case, then I would conclude this?

If that's the case, then I would conclude that. So your reasoning processes and how they work, your prior models and what they are, your values and your assumptions, all those things now come together into the interpretation. Getting in sync on that is hard. - And yet humans are able to intuit some of that without any pre-- - Because they have the shared experience.

- And we're not just talking about two people having a shared experience. I mean, as a society-- - That's correct. We have the shared experience and we have similar brains. So we tend to, in other words, part of our shared experience is our shared local experience. Like we may live in the same culture, we may live in the same society, and therefore we have similar educations.

We have similar, what we like to call, prior models from our prior experiences. And we use that as, think of it as, a wide collection of interrelated variables, and they're all bound to similar things. And so we take that as our background and we start interpreting things similarly.

But as humans, we have a lot of shared experience. We do have similar brains, similar goals, similar emotions under similar circumstances, because we're both humans. So now, one of the early questions you asked was, how are biological and computer information systems fundamentally different? Well, one is that humans come with a lot of pre-programmed stuff, a ton of programmed stuff, and they're able to communicate because they share that stuff.
