
Rohit Prasad: Deep Learning is Not Enough to Solve Reasoning | AI Podcast Clips


Transcript

- So deep learning has been at the core of a lot of this technology. Are you optimistic about the current deep learning approaches to solving the hardest aspects of what we're talking about? Or do you think there will come a time when new ideas need to be invented? You know, if we look at reasoning:

So OpenAI, DeepMind, a lot of folks are now starting to work on reasoning, trying to see how we can make neural networks reason. Do you see that new approaches need to be invented to take the next big leap? - Absolutely. I think there has to be a lot more investment, and in many different ways.

And there are these, I would say, nuggets of research forming in a good way, like learning with less data, or zero-shot learning and one-shot learning. - And the active learning stuff you've talked about is incredible. - Yes, and transfer learning is also super critical, especially when you're thinking about applying knowledge from one task to another or one language to another, right?

That's really ripe. So these are great pieces. Deep learning has been useful too. And now we are sort of marrying deep learning with transfer learning and active learning; of course, that's more straightforward in terms of applying deep learning in an active learning setup. But I do think looking into more reasoning-based approaches is going to be key for our next wave of the technology.
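To make the transfer learning idea mentioned above concrete, here is a minimal sketch in PyTorch under illustrative assumptions (the tiny encoder, the 64-dimensional features, and the five target classes are placeholders, not anything from Alexa's actual stack): a body trained on a large source task is frozen and reused, and only a small new head is trained on the target task.

```python
import torch
import torch.nn as nn

# Stand-in for an encoder already trained on a large source task
# (in practice this would be a pretrained speech or language model).
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))

# Freeze the encoder so the knowledge from the source task is reused as-is.
for param in encoder.parameters():
    param.requires_grad = False

# New task-specific head, trained from scratch on the (much smaller) target task.
num_target_classes = 5  # assumption: the target task has five intent classes
head = nn.Linear(64, num_target_classes)

model = nn.Sequential(encoder, head)

# Only the head's parameters receive gradient updates during fine-tuning.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
```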

But there is good news. The good news is that, to keep delighting customers, a lot of it can be done with prediction tasks. And we haven't exhausted that, so we don't need to give up on the deep learning approaches for that.

So that's just, I wanted to sort of point that out. - For creating a rich, fulfilling, amazing experience that makes Amazon a lot of money, and makes everybody a lot of money because it does awesome things, deep learning is enough? The point-- - No, I wouldn't say deep learning is enough.

I think for the purposes of Alexa accomplishing tasks for customers, I'm saying there are still a lot of things we can do with prediction-based approaches that do not reason, and we haven't exhausted those. But for the kind of high-utility experiences that I'm personally passionate about, of what Alexa needs to do, reasoning has to be solved to the same extent that natural language understanding and speech recognition, at the level of understanding intents, have been solved, in terms of how accurate they have become.

But with reasoning, we are in very, very early days. - Let me ask it another way: how hard of a problem do you think that is? - The hardest of them. (laughing) I would say the hardest of them, because again, the hypothesis space is really, really large. And when you go back in time, like you were saying, "I want Alexa to remember more things," that means going beyond a session of interaction (by session I mean a time span, like today) to remembering which restaurant I like.

And then, when I'm planning a night out, being able to say, "Do you wanna go to the same restaurant?" Now you've upped the stakes big time. And this is where the reasoning dimension also gets way, way bigger. - So you think the space, to elaborate on that a little bit, just philosophically speaking: when you reason about trying to model what the goal of a person is in the context of interacting with Alexa, you think that space is huge?

- It's huge, absolutely huge. - Do you think, so another sort of devil's advocate position would be that we human beings are really simple, and we all want just a small set of things. So do you think it's possible, 'cause we're not talking about a fulfilling general conversation; perhaps the Alexa Prize is actually a little bit more after that.

For a customer, so many of the interactions, it feels like, are clustered in groups that don't require general reasoning. - I think, yeah, you're right in terms of the head of the distribution of all the possible things customers may wanna accomplish. But the tail is long and it's diverse, right?

So from that-- - There are many long tails. - So from that perspective, I think you have to solve that problem, because everyone's very different. I mean, we see this already in terms of the skills, right? I mean, if you're an average surfer, which I am not, right?

But somebody is asking Alexa about surfing conditions, right? And there's a skill there for them to get to, right? That tells you the tail is massive; in terms of what kinds of skills people have created, it's humongous. And that means there are these diverse needs.

And when you start looking at the combinations of these, right, even if you just had pairs of skills, 90,000 choose two is still a big combination. So I'm saying there's a huge to-do here now. And I think customers are wonderfully frustrated with things, and we have to keep getting better at doing things for them.
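As a rough check on the "90,000 choose two" figure, here is the arithmetic in a short Python snippet (the 90,000 is just the approximate skill count cited in the conversation):

```python
import math

num_skills = 90_000  # assumption: approximate number of Alexa skills mentioned above
pairs = math.comb(num_skills, 2)
print(f"{pairs:,}")  # 4,049,955,000 -- about four billion possible skill pairs
```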

- And they're not known to be super patient. So you have to-- - Do it fast. - You have to do it fast. (upbeat music)