Let's talk about AGI a little bit. What do you think it takes to build a system of human-level intelligence? We talked about reasoning, we talked about long-term memory, but in general, what does it take, do you think? - Well, I can't be sure, but I think it will be deep learning plus maybe another small idea.
Do you think self-play will be involved? So like you've spoken about the powerful mechanism of self-play where systems learn by sort of exploring the world in a competitive setting against other entities that are similarly skilled as them and so incrementally improve in this way. Do you think self-play will be a component of building an AGI system?
Yeah, so what I would say is that to build AGI, I think it's going to be deep learning plus some ideas, and I think self-play will be one of those ideas. Self-play has this amazing property that it can surprise us in truly novel ways.
For example, pretty much every self-play system has produced surprising behaviors: our Dota bot; OpenAI's multi-agent release, where two teams of little agents were playing hide and seek; and of course AlphaZero. They all produced behaviors that we didn't expect.
They found creative solutions to problems. And that seems like an important part of AGI that our systems don't exhibit routinely right now. That's why I like this area, I like this direction, because of its ability to surprise us. - To surprise us. And an AGI system would surprise us fundamentally.
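To make the mechanism being discussed concrete, here is a minimal sketch of a self-play training loop in Python. This is purely illustrative, not any system's actual code: `play_game` and `agent.update` are hypothetical placeholders. The point is only that the opponent is a frozen copy of the learner, so it is always roughly as skilled, and the difficulty rises automatically.

```python
import copy

def self_play_train(agent, n_generations=100, games_per_generation=1000):
    """Minimal self-play loop: the agent plays against a frozen copy
    of itself, learns from the outcomes, then the copy is refreshed,
    so the opponent's skill tracks the learner's."""
    opponent = copy.deepcopy(agent)          # frozen snapshot of the current policy
    for _ in range(n_generations):
        experience = []
        for _ in range(games_per_generation):
            # play_game is a hypothetical environment rollout returning
            # the trajectory and the game result (win/loss/draw).
            trajectory, result = play_game(agent, opponent)
            experience.append((trajectory, result))
        agent.update(experience)             # any RL update (e.g. policy gradients)
        opponent = copy.deepcopy(agent)      # the bar rises with the learner
    return agent
```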
- Yes, and to be precise, not just a random surprise, but finding a surprising solution to a problem that is also useful. - Right. Now, a lot of the self-play mechanisms have been used in the game context, or at least in the simulation context. How far along the path to AGI do you think the work will be done in simulation?
How much faith, how much promise do you see in simulation versus having a system that operates in the real world, whether it's the real world of digital data or the actual physical world of robotics? - I don't think it's an either/or. I think simulation is a tool and it helps.
It has certain strengths and certain weaknesses and we should use it. - Yeah, but, okay, I understand that's true. But one of the criticisms of self-play, and of reinforcement learning generally, is that its current results, while amazing, have been demonstrated only in simulated environments or very constrained physical environments.
Do you think it's possible to escape the simulated environments and learn in non-simulated environments? Or do you think it's possible to simulate the real world in a photorealistic and physics-realistic way, such that we can solve real problems with self-play in simulation?
- So I think that transfer from simulation to the real world is definitely possible, and it has been exhibited many times by many different groups. It's been especially successful in vision. Also, OpenAI over the summer demonstrated a robot hand which was trained entirely in simulation, in a way that allowed for sim-to-real transfer to occur.
- Is this for the Rubik's Cube? - Yes, right. - I wasn't aware that was trained in simulation. - It was trained in simulation entirely. - Really? So the physical hand wasn't trained at all? - No. 100% of the training was done in simulation. And the policy that was learned in simulation was trained to be very adaptive.
So adaptive that when you transfer it, it can very quickly adapt to the physical world. - So were the kinds of perturbations, with the giraffe or whatever the heck it was, part of the simulation? - Well, the policy was trained in simulation to be robust to many different things, but not to the kinds of perturbations we had in the video.
It was never trained with a glove, it was never trained with a stuffed giraffe. - So in theory, these are novel perturbations. - Correct. Not just in theory, in practice. - Well, that's a clean, small-scale, but clean example of transfer from the simulated world to the physical world.
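The training recipe being described is generally known as domain randomization. A minimal sketch in Python follows; `simulate_episode`, `policy.update`, and the specific parameter ranges are assumptions for illustration, and the real system was far more elaborate. The core idea is just to resample the simulator's physics every episode so the policy must adapt rather than memorize one world:

```python
import random

# Hypothetical ranges for parameters a simulator might expose; a real
# system would randomize many more (masses, friction, visuals, latencies).
RANDOMIZATION_RANGES = {
    "friction":    (0.5, 1.5),
    "object_mass": (0.05, 0.50),   # kg
    "motor_gain":  (0.7, 1.3),
}

def sample_sim_params():
    """Draw a fresh physics configuration so the policy never sees
    exactly the same 'world' twice during training."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def train(policy, n_episodes):
    for _ in range(n_episodes):
        params = sample_sim_params()
        trajectory = simulate_episode(policy, params)  # hypothetical simulator rollout
        policy.update(trajectory)                      # hypothetical RL update
    # A recurrent policy trained this way learns to infer the current
    # dynamics from recent observations, which is what lets it adapt
    # quickly when the "unknown parameters" are those of the real world.
    return policy
```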
- Yeah, and I will also say that I expect the transfer capabilities of deep learning to increase in general. And the better the transfer capabilities are, the more useful simulation will become. Because then you could experience something in simulation, learn the moral of the story, and carry it with you to the real world.
As humans do all the time when they play computer games. - So let me ask an embodied question, staying on AGI for a second. Do you think an AGI system needs to have a body? Do we need to have some of those human elements of self-awareness, consciousness, fear of mortality, self-preservation in the physical space, which come with having a body?
- I think having a body will be useful. I don't think it's necessary, but it's very useful to have a body for sure, because you can learn things which cannot be learned without a body. But at the same time, I think that if you don't have a body, you could compensate for it and still succeed.
- You think so? - Yes. Well, there is evidence for this. For example, there are many people who were born deaf and blind and were able to compensate for the lack of those modalities. I'm thinking about Helen Keller specifically. - So even if you're not able to physically interact with the world... Actually, what I was getting at, let me ask a more particular question. I'm not sure if it's connected to having a body or not, but it's the idea of consciousness, and a more constrained version of that, self-awareness.
Do you think an AGI system should have consciousness, whatever the heck you think consciousness is, given that we can't define it? - Yeah, it's a hard question to answer, given how hard it is to define. - Do you think it's useful to think about? - I mean, it's definitely interesting. It's fascinating. I think it's definitely possible that our systems will be conscious.
- Do you think consciousness is an emergent thing? Could it emerge from the representations stored within your networks, so that it naturally emerges as you're able to represent more and more of the world? - Well, I'd make the following argument: humans are conscious, and if you believe that artificial neural nets are sufficiently similar to the brain, then there should at least exist artificial neural nets that are conscious too.
- You're leaning on that existence proof pretty heavily. Okay. - But that's the best answer I can give. - No, I know, I know. There's still an open question whether there's some magic in the brain that we're missing. I don't mean non-materialistic magic, but the brain might be a lot more complicated and interesting than we give it credit for.
- If that's the case, then it should show up. At some point, - At some point. - we will find out that we can't continue to make progress. But I think it's unlikely. - So we talked about consciousness, but let me ask about another poorly defined concept, intelligence.
Again, we've talked about reasoning, we've talked about memory. What do you think is a good test of intelligence for you? Are you impressed by the test that Alan Turing formulated with the imitation game of natural language? Is there something in your mind you would be deeply impressed by if a system were able to do it?
- I mean, lots of things. There's a certain frontier of capabilities today, and there exist things outside of that frontier, and I would be impressed by any such thing. For example, I would be impressed by a deep learning system which solves a very pedestrian task, like machine translation or a computer vision task, and which never makes a mistake a human wouldn't make under any circumstances.
I think that is something which has not yet been demonstrated, and I would find it very impressive. - Yeah, so right now they make different mistakes. They might be more accurate than human beings, but they make a different set of mistakes. - I would guess that a lot of the skepticism some people have about deep learning is that when they look at its mistakes, they say, "Well, those mistakes make no sense. If you understood the concept, you wouldn't make that mistake."
And I think that changing that would inspire me; that would be, yes, this is progress. - Yeah, that's a really nice way to put it. But I also just don't like that human instinct to criticize a model as not intelligent. That's the same instinct we have when we criticize any group of creatures as the other.
Because it's very possible that GPT-2 is much smarter than human beings at many things. - That's definitely true. It has a lot more breadth of knowledge. - Yes, breadth of knowledge. And even perhaps depth on certain topics. - It's kind of hard to judge what depth means, but there's definitely a sense in which humans don't make the mistakes these models do.
- Yes, and the same is applied to autonomous vehicles. The same is probably going to continue being applied to a lot of artificial intelligence systems. This is the annoying thing: in the 21st century, the process of analyzing the progress of AI is the search for one case where the system fails in a big way where humans would not.
And then many people write articles about it, and then broadly, the public generally gets convinced that the system is not intelligent. And we pacify ourselves by thinking it's not intelligent because of this one anecdotal case. And this seems to keep happening. - Yeah, I mean, there is truth to that.
Although I'm sure that plenty of people are also extremely impressed by the systems that exist today. But I think this connects to the earlier point we discussed that it's just confusing to judge progress in AI. - Yeah. - And you have a new robot demonstrating something. How impressed should you be?
And I think that people will start to be impressed once AI starts to really move the needle on GDP. - So you're one of the people that might be able to create an AGI system here, not you alone, but you and OpenAI. If you do create an AGI system and you get to spend an evening with it, him, her, what would you talk about, do you think?
- The very first time? - First time. - Well, the first time I would just ask all kinds of questions and try to get it to make a mistake. And I would be amazed if it doesn't make mistakes, and I'd just keep asking broad questions. - What kind of questions do you think? Would they be factual, or would they be personal, emotional, psychological? What do you think?
- All of the above. - Would you ask for advice? - Definitely. I mean, why would I limit myself in talking to a system like this? - Now, again, let me emphasize the fact that you truly are one of the people that might be in the room where this happens.
So let me ask a profound question about power. I just talked to a Stalin historian. (laughs) I've been talking to a lot of people who study power. Abraham Lincoln said, "Nearly all men can stand adversity, but if you want to test a man's character, give him power." I would say the power of the 21st century, maybe the 22nd, but hopefully the 21st, would be the creation of an AGI system, and it would belong to the people who have direct possession and control of that AGI system.
So what do you think, after spending that evening having a discussion with the AGI system, what would you do? - Well, the ideal world I'd like to imagine is one where humanity are like the board members of a company, and the AGI is the CEO. The picture I would imagine is that you have different entities, different countries or cities, and the people that live there vote for what the AGI that represents them should do, and the AGI that represents them goes and does it.
I find a picture like that very appealing. You could have multiple AGIs; you would have an AGI for a city, for a country, and it would be trying to, in effect, take the democratic process to the next level. - And the board can always fire the CEO.
- Essentially, press the reset button, say. - Press the reset button. - Re-randomize the parameters. - Well, okay, that's a beautiful vision, I think, as long as it's possible to press the reset button. Do you think it will always be possible to press the reset button?
- So I think that it definitely will be possible to build such systems. The question I really understand from you is: will humans, will people, have control over the AI systems that they build? - Yes. - And my answer is: it's definitely possible to build AI systems which will want to be controlled by their humans.
- Wow, so it's not just that they can't help being controlled; one of the objectives of their existence is to be controlled? - In the same way that human parents generally want to help their children, they want their children to succeed.
It's not a burden for them. They are excited to help their children, to feed them, to dress them, and to take care of them. And I believe with high conviction that the same will be possible for an AGI. It will be possible to program an AGI, to design it in such a way that it will have a similar deep drive that it will be delighted to fulfill, and that drive will be to help humans flourish.
- But let me take a step back to that moment where you create the AGI system. I think this is a really crucial moment. And between that moment and the democratic board members with the AGI at the head, there has to be a relinquishing of power. Take George Washington: despite all the bad things he did, one of the big things he did was relinquish power.
He, first of all, didn't want to be president. And even when he became president, he didn't keep serving indefinitely, as most dictators do. Do you see yourself being able to relinquish control over an AGI system, given how much power you could have over the world?
At first financial power, just make a lot of money, right? And then control, by having possession of this AGI system. - I'd find it trivial to do that. I'd find it trivial to relinquish this kind of power. I mean, the kind of scenario you are describing sounds terrifying to me.
That's all. I would absolutely not want to be in that position. - Do you think you represent the majority or the minority of people in the AI community? - Well, I mean. - It's an open question and an important one. "Are most people good?" is another way to ask it.
- So I don't know if most people are good, but I think that when it really counts, people can be better than we think. - That's beautifully put, yeah. Are there specific mechanisms you can think of for aligning AGI values to human values? Do you think about these problems of continued alignment as we develop the AI systems?
- Yeah, definitely. In some sense, if I were to translate the question you're asking into today's terms, it would be a question about how to get an RL agent that's optimizing a value function which is itself learned. And if you look at humans, humans are like that, because the reward function, the value function of humans, is not external, it is internal.
- That's right. - And there are definite ideas of how to train a value function: basically, a perception system that is as objective as possible, trained separately to recognize and internalize human judgments on different situations. Then that component would be integrated as the base value function for some more capable RL system.
You could imagine a process like this. I'm not saying this is the process, I'm saying this is an example of the kind of thing you could do.
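In today's terms, the two-stage process described here resembles reward modeling. Below is a minimal sketch in Python/PyTorch, purely illustrative: the `RewardModel` architecture, the pairwise-comparison data format, and the Bradley-Terry-style loss are assumptions for the sketch, not a description of any specific OpenAI system.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """A separately trained 'perception system' that scores situations,
    standing in for the learned value function discussed above."""
    def __init__(self, obs_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)

def train_reward_model(model, comparisons, epochs=10, lr=1e-4):
    """Fit the model to human pairwise judgments: each item is a pair
    of observation batches where humans preferred the first."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for preferred, rejected in comparisons:
            # Bradley-Terry-style objective: the preferred situation
            # should receive a higher score than the rejected one.
            loss = -torch.nn.functional.logsigmoid(
                model(preferred) - model(rejected)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# The trained model then replaces the external reward in an ordinary
# RL loop: reward = reward_model(observation), so the agent optimizes
# a value function that was itself learned from human judgments.
```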