
David Ferrucci: Humor as the Turing Test for Intelligence | AI Podcast Clips


Transcript

One of the benchmarks for me is humor, right? That seems to be one of the hardest. And to me, the biggest contrast is Watson. So one of the greatest comedy sketches of all time, right, is the SNL Celebrity Jeopardy! sketch, with Alex Trebek and Sean Connery and Burt Reynolds and so on.

With Sean Connery commenting on Alex Trebek's mother a lot. And I think all of them end up in the negative, points-wise. So they're clearly all losing in terms of the game of Jeopardy!, but they're winning in terms of comedy. So what do you think about humor in this whole interaction, in dialogue that's productive?

Or even just, whatever, what humor represents to me is the same idea you're describing about frameworks, because humor only exists within a particular human framework. So what do you think about humor? What do you think about things like humor that connect to the kind of creativity you mentioned is needed?

- I think there's a couple of things going on there. So I sort of feel like, and I might be too optimistic this way, but I think we did a little bit of this with puns in Jeopardy! We literally sat down and said, how do puns work?

And it's like wordplay, and you could formalize these things. So I think there are a lot of aspects of humor that you could formalize. You could also learn humor. You could just say, what do people laugh at? And if you have enough data, again, if you have enough data to represent the phenomenon, you might be able to weigh the features and figure out what humans find funny and what they don't find funny.

The machine might not be able to explain why the human finds it funny, unless we sit back and think about that more formally. Again, I think you do a combination of both, and I'm always a big proponent of that. I think robust architectures and approaches are always a bit of a combination of us reflecting and being creative about how things are structured and how to formalize them, and then taking advantage of large data, doing learning, and figuring out how to combine these two approaches.

I think there's another aspect to humor, though, which goes to the idea that I feel like I can relate to the person telling the story. And I think that's an interesting theme in the whole AI discussion, which is: do I feel differently when I know it's a robot? When I imagine that the robot is not conscious the way I'm conscious, when I imagine the robot does not actually have the experiences that I experience, do I find it funny?

Or do I not, because I don't imagine it's relating to it the way I relate to it? I think you also see this in the arts and in entertainment, where sometimes you have savants who are remarkable at a thing, whether it's sculpture or music or whatever, but the people who get the most attention are the people who can evoke a similar emotional response, who can get you to emote, right, about it the way they do.

In other words, they can basically make the connection from the artifact, from the music or the painting or the sculpture, to the emotion, and get you to share that emotion with them. And that's when it becomes compelling. So they're communicating at a whole different level.

They're not just communicating the artifact; they're communicating their emotional response to the artifact. And then you feel like, oh, wow, I can relate to that person. I can connect to that person. So I think humor has that aspect as well. - So the idea that you can connect to that person, person being the critical thing. But we're also able to anthropomorphize objects, robots and AI systems, pretty well.

So we're almost looking to make them human. But maybe from your experience with Watson, maybe you can comment on whether you considered that as part of it. Well, obviously the problem of Jeopardy! doesn't require anthropomorphization, but nevertheless-- - Well, there was some interest in doing that. And that's another thing I didn't wanna do, 'cause I didn't wanna distract from the actual scientific task.

But you're absolutely right. I mean, humans do anthropomorphize, without necessarily a lot of work. I mean, you just put in some eyes and a couple of eyebrow movements and you're getting humans to react emotionally. And I think you can do that. So I didn't mean to suggest that that connection cannot be mimicked.

I think that connection can be mimicked and can produce that emotional response. I just wonder, though: if you're told what's really going on, if you know that the machine is not conscious, doesn't have the same richness of emotional reactions, doesn't really share the understanding, but is essentially just moving its eyebrows or drooping its eyes or making them bigger, whatever it's doing, just to get the emotional response...

Will you still feel it? Interesting. I think you probably would for a while. And then, when it becomes more important that there's a deeper shared understanding, it may fall flat, but I don't know. - No, I'm pretty confident that for the majority of the world, even if you tell them how it works, it will not matter, especially if the machine herself says that she is conscious.

- That's very possible. - So you, the scientist who made the machine, are saying this is how the algorithm works. Everybody will just assume you're lying and that there's a conscious being there. - So you're deep into the science fiction genre now. - I don't think it is; it's actually psychology.

I think it's not science fiction. I think it's reality, and a really powerful one that we'll have to be exploring in the next few decades. It's a very interesting element of intelligence. (upbeat music)