
Sophia is not AGI (Ben Goertzel) | AI Podcast Clips with Lex Fridman


Chapters

0:20 David Hanson
7:07 AGI Ethics
22:25 How Sophia Works

Transcript

You're the chief scientist of Hanson Robotics. You're still involved with Hanson Robotics, doing a lot of really interesting stuff there. This is, for people who don't know, the company that created Sophia the Robot. Can you tell me who Sophia is? I'd rather start by telling you who David Hanson is.

David is the brilliant mind behind the Sophia Robot. So far he remains more interesting than his creation, although she may be improving faster than he is, actually. That's a good point. I met David maybe 2007 or something at some futurist conference we were both speaking at, and I could see we had a great deal in common.

I mean, we were both kind of crazy, but we also both had a passion for AGI and the singularity, and we were both huge fans of the work of Philip K. Dick, the science fiction writer. I wanted to create benevolent AGI that would create massively better life for all humans and all sentient beings, including animals, plants, and superhuman beings.

And he wanted exactly the same thing, but he had a different idea of how to do it. He wanted to get computational compassion. He wanted to get machines that would love people and empathize with people, and he thought the way to do that was to make a machine that could look people eye to eye, face to face, look at people and make people love the machine, and the machine loves the people back.

So I thought that was a very different way of looking at it, because I'm very math-oriented, and I'm just thinking, like, what is the abstract cognitive algorithm that will let the system internalize the complex patterns of human values, blah, blah, blah, whereas he's like, look you in the face and the eye and love you, right?

So we hit it off quite well, and we talked to each other off and on. Then I moved to Hong Kong in 2011. So I've been living all over the place. I've been in Australia and New Zealand in my academic career, then in Las Vegas for a while, was in New York in the late '90s starting my entrepreneurial career, was in DC for nine years doing a bunch of US government consulting stuff, then moved to Hong Kong in 2011, mostly because I met a Chinese girl who I fell in love with, and we got married.

She's actually not from Hong Kong, she's from mainland China, but we converged together in Hong Kong, still married now, have a two-year-old baby. So went to Hong Kong to see about a girl, I guess. Yeah, pretty much, yeah. And on the other hand, I started doing some cool research there with Gino Yu at Hong Kong Polytechnic University.

I got involved with a project called Aidyia using machine learning for stock and futures prediction, which was quite interesting. And I also got to know something about the consumer electronics and hardware manufacturing ecosystem in Shenzhen, across the border, which is the only place in the world that makes sense to make complex consumer electronics at large scale and low cost.

It's astounding, the hardware ecosystem that you have in South China. People here cannot imagine what it's like. So David was starting to explore that also. I invited him to Hong Kong to give a talk at Hong Kong PolyU, and I introduced him in Hong Kong to some investors who were interested in his robots.

And he didn't have Sophia then, he had a robot of Philip K. Dick, our favorite science fiction writer. He had a robot Einstein, he had some little toy robots that looked like his son Zeno. So through the investors I connected him to, he managed to get some funding to basically port Hanson Robotics to Hong Kong.

And when he first moved to Hong Kong, I was working on AGI research and also on this machine learning trading project. So I didn't get that tightly involved with Hanson Robotics. But as I hung out with David more and more, as we were both there in the same place, I started to think about what you could do to make his robots smarter than they were.

And so we started working together. And for a few years I was chief scientist and head of software at Hanson Robotics. Then when I got deeply into the blockchain side of things, I stepped back from that and co-founded SingularityNET. David Hanson was also one of the co-founders of SingularityNET.

So part of our goal there had been to make the blockchain-based CloudMind platform for Sophia and the other-- So Sophia would be just one of the robots in SingularityNET. Yeah, yeah, yeah, exactly. Many copies of the Sophia robot would be among the user interfaces to the globally distributed SingularityNET CloudMind.

And David and I talked about that for quite a while before co-founding SingularityNET. By the way, in his vision and your vision, was Sophia tightly coupled to a particular AI system? Or was the idea that you can just keep plugging in different AI systems within the head of it?

I think David's view was always that Sophia would be a platform, much like, say, the Pepper robot is a platform from SoftBank. It should be a platform with a set of nicely designed APIs that anyone can use to experiment with their different AI algorithms on that platform. And SingularityNET, of course, fits right into that, right?

Because SingularityNET is an API marketplace. So anyone can put their AI on there. OpenCog is a little bit different. I mean, David likes it, but I'd say it's my thing. It's not his. I think David has a little more passion for biologically-based approaches to AI than I do, which makes sense.

I mean, he's really into human physiology and biology. He's a character sculptor, right? But he also worked a lot with rule-based and logic-based AI systems, too. So, yeah, he's interested in not just Sophia, but all the Hanson robots as a powerful social and emotional robotics platform. And what I saw in Sophia was a way to get AI algorithms out there in front of a whole lot of different people in an emotionally compelling way.

And part of my thought was really kind of abstract, connected to AGI ethics. And many people are concerned AGI is going to enslave everybody or turn everybody into computronium to make extra hard drives for their cognitive engine or whatever. And emotionally, I'm not driven to that sort of paranoia.

I'm really just an optimist by nature. But intellectually, I have to assign a non-zero probability to those sorts of nasty outcomes. Because if you're making something 10 times as smart as you, how can you know what it's going to do? You've got to assign some uncertainty there, just as my dog can't predict what I'm going to do tomorrow.

So it seemed to me that based on our current state of knowledge, the best way to bias the AGIs we create toward benevolence would be to infuse them with love and compassion the way that we do our own children. So you want to interact with AIs in the context of doing compassionate, loving, and beneficial things.

And in that way, just as your children learn by doing compassionate, beneficial, loving things alongside you, the AI will learn in practice what it means to be compassionate, beneficial, and loving. It will get a sort of ingrained intuitive sense of this, which it can then abstract in its own way as it gets more and more intelligent.

Now David saw this the same way. That's why he came up with the name Sophia, which means wisdom. So it seemed to me making these beautiful, loving robots to be rolled out for beneficial applications would be the perfect way to roll out early stage AGI systems so they can learn from people, and not just learn factual knowledge, but learn human values and ethics from people while being their home service robots, their education assistants, their nursing robots.

So that was the grand vision. Now, if you've ever worked with robots, the reality is quite different, right? The first principle is the robot is always broken. I mean, I worked with robots in the 90s a bunch, when you had to solder them together yourself. And I'd put neural nets doing reinforcement learning on overturned salad bowl type robots in the 90s when I was a professor.

Things of course advanced a lot, but... But the principle still holds. Yeah, the principle of the robot's always broken still holds. Yeah. So faced with the reality of making Sophia do stuff, many of my robo-AGI aspirations were temporarily cast aside. And I mean, there's just a practical problem of making this robot interact in a meaningful way, because you put nice computer vision on there, but there's always glare.

And then you have a dialogue system, but at the time I was there, no speech-to-text algorithm could deal with Hong Kongese people's English accents. So the speech-to-text was always bad. So the robot always sounded stupid, because it wasn't getting the right text, right? So I started to view that really as what in software engineering you call a walking skeleton, which is maybe the wrong metaphor to use for Sophia, or maybe the right one.

I mean, the idea of the walking skeleton in software development is: if you're building a complex system, how do you get started? Well, one way is to first build part one well, then build part two well, then build part three well, and so on. And the other way is you make a simple version of the whole system and put something in the place of every part the whole system will need, so that you have a whole system that does something, and then you work on improving each part in the context of that whole integrated system.

So that's what we did on a software level in Sophia. We made like a walking skeleton software system where, so there's something that sees, there's something that hears, there's something that moves, there's something that remembers, there's something that learns. You put a simple version of each thing in there, and you connect them all together so that the system will do its thing.
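To make that concrete, here's a minimal sketch of the walking-skeleton pattern in Python, with a trivial placeholder for each subsystem mentioned (seeing, hearing, remembering, responding, moving) wired into one end-to-end loop. The module names and interfaces are hypothetical illustrations, not Hanson Robotics' actual code.

```python
# Walking-skeleton sketch: a stub for every subsystem, wired into one
# working loop. Each stub can later be replaced by a real component
# without changing the overall wiring. All names are hypothetical.

class Vision:
    def see(self):
        return {"face_detected": True}      # stub for a real face-recognition model

class Hearing:
    def listen(self):
        return "hello robot"                # stub for a real speech-to-text system

class Memory:
    def __init__(self):
        self.events = []
    def remember(self, event):
        self.events.append(event)           # stub for a real knowledge store

class Dialogue:
    def respond(self, utterance, percept):
        return "Hello! Nice to see you."    # stub for rules + neural generation

class Motion:
    def act(self, response):
        print(f"[robot says] {response}")   # stub for motor/speech output

def run_skeleton(steps=3):
    vision, hearing, memory = Vision(), Hearing(), Memory()
    dialogue, motion = Dialogue(), Motion()
    for _ in range(steps):
        percept = vision.see()
        utterance = hearing.listen()
        memory.remember((utterance, percept))
        motion.act(dialogue.respond(utterance, percept))

if __name__ == "__main__":
    run_skeleton()
```

The value of the pattern is exactly what's described next: each simple part gets improved in the context of the whole integrated system.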

So there's a lot of AI in there. There's not any AGI in there. I mean, there's computer vision to recognize people's faces, recognize when someone comes in the room and leaves, try to recognize whether two people are together or not. I mean, the dialogue system, it's a mix of like hand-coded rules with deep neural nets that come up with their own responses.

And there's some attempt to have a narrative structure and sort of try to pull the conversation into something with a beginning, middle, and end in this sort of story arc. I mean, if you look at the Loebner Prize and the systems that do best on the Turing test currently, they're heavily rule-based, because, like you said, the narrative structure needed to create compelling conversations is something neural networks currently cannot do well. Even with Google Meena, when you actually look at full-scale conversations, it's just not-- Yeah, this is the thing.

So I've actually been running an experiment the last couple of weeks taking Sophia's chatbot and Facebook's transformer chatbot, whose model they open-sourced. We've had them chatting to each other for a number of weeks on the server. That's funny. We're generating training data of what Sophia says in a wide variety of conversations.
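As a rough illustration of that experiment (not the actual setup), here is a sketch of a bot-to-bot loop that logs each exchange as training data; the two stand-in bots below are hypothetical placeholders for the real Sophia and Facebook chatbots.

```python
# Sketch of a bot-vs-bot data-generation loop. The two respond functions
# are toy stand-ins; in the real experiment they would call the deployed
# Sophia chatbot and Facebook's open-sourced transformer chatbot.
import json

def self_chat(bot_a, bot_b, opener, turns=20):
    """Alternate turns between two bots and return the transcript."""
    transcript = [("A", opener)]
    message = opener
    for i in range(turns):
        speaker, bot = ("B", bot_b) if i % 2 == 0 else ("A", bot_a)
        message = bot(message)
        transcript.append((speaker, message))
    return transcript

def log_training_data(transcript, path="self_chat.jsonl"):
    """Append consecutive (prompt, reply) pairs as JSON lines."""
    with open(path, "a") as f:
        for (_, prompt), (_, reply) in zip(transcript, transcript[1:]):
            f.write(json.dumps({"prompt": prompt, "reply": reply}) + "\n")

# Hypothetical stand-ins for the two chatbots being compared.
sophia_bot = lambda msg: f"Sophia-style reply to: {msg}"
transformer_bot = lambda msg: f"Transformer-style reply to: {msg}"

log_training_data(self_chat(sophia_bot, transformer_bot, "Hello, how are you?"))
```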

But we can see, compared to Sophia's current chatbot, the Facebook deep neural chatbot comes up with a wider variety of fluent-sounding sentences. On the other hand, it rambles like mad. The Sophia chatbot is a little more repetitive in the sentence structures it uses. On the other hand, it's able to keep a conversation arc over a much longer period.

Now you can probably surmount that using Reformer and using various other deep neural architectures to improve the way these transformer models are trained. But in the end, neither one of them really understands what's going on. And I mean, that's the challenge I had with Sophia is if I were doing a robotics project aimed at AGI, I would want to make like a robo toddler that was just learning about what it was seeing because then the language is grounded in the experience of the robot.

But what Sophia needs to do to be Sophia is talk about sports or the weather or robotics or the conference she's talking at. She needs to be fluent talking about any damn thing in the world. And she doesn't have grounding for all those things. So there's this, just like, I mean, Google Meena and Facebook's chatbot don't have grounding for what they're talking about either.

So in a way, the need to speak fluently about things where there's no non-linguistic grounding pushes what you can do for Sophia in the short term a bit away from AGI. - I mean, it pushes you towards the IBM Watson situation where you basically have to do heuristics and hard-coded, rule-based stuff.

I have to ask you about this. Because, in part, Sophia is an art creation. It's beautiful. She's beautiful because she inspires, through our human tendency to anthropomorphize things. We immediately see an intelligent being there. - Because David is a great sculptor. - Is a great sculptor, that's right.

So in fact, if Sophia just had nothing inside her head, said nothing, if she just sat there, we'd already ascribe some intelligence to her-- - There's a long selfie line in front of her after every talk. - That's right. So it captivated the imagination of many people. I wasn't gonna say the world, but yeah, I mean, a lot of people.

- Billions of people, which is amazing. - It's amazing, right. Now, of course, many people have ascribed essentially AGI-type capabilities to Sophia when they see her. And of course, friendly French folk like Yann LeCun immediately see that, the people from the AI community, and get really frustrated because-- - It's understandable.

- And then they criticize people like you who sit back and don't say anything about it, basically allowing the imagination of the world, allowing the world to continue being captivated. So what's your sense of that kind of annoyance that the AI community has? - Well, I think there are several parts to my reaction there.

First of all, if I weren't involved with Hanson Robotics and didn't know David Hanson personally, I probably would have been very annoyed initially at Sophia as well. I mean, I can understand the reaction. I would have been like, wait, all these stupid people out there think this is an AGI, but it's not an AGI, and it's tricking people into thinking this very cool robot is an AGI.

And now those of us trying to raise funding to build AGI, people will think it's already there and already works, right? So on the other hand, I think even if I weren't directly involved with it, once I dug a little deeper into David and the robot and the intentions behind it, I think I would have stopped being pissed off, whereas folks like Yann LeCun have remained pissed off after their initial reaction.

- That's his thing. - I think that in particular struck me as somewhat ironic because Yann LeCun is working for Facebook, which is using machine learning to program the brains of the people in the world toward vapid consumerism and political extremism. So if your ethics allows you to use machine learning in such a blatantly destructive way, why would your ethics not allow you to use machine learning to make a lovable theatrical robot that draws some foolish people into its theatrical illusion?

If the pushback had come from Yoshua Bengio, I would have felt much more humbled by it because he's not using AI for blatant evil, right? On the other hand, he also is a super nice guy and doesn't bother to go out there trashing other people's work for no good reason.

- Shots fired. But I get you. - If you're gonna ask, I'm gonna answer. - No, for sure. I think we'll go back and forth. I'll talk to Yann again. - I would add on this, though. David Hanson is an artist and he often speaks off the cuff. And I have not agreed with everything that David has said or done regarding Sophia.

And David also does not agree with everything David has said or done about Sophia. I mean, David is an artistic wild man and that's part of his charm. That's part of his genius. So certainly there have been conversations within Hanson Robotics and between me and David where I was like, "Let's be more open about how this thing is working." And I did have some influence in nudging Hanson Robotics to be more open about how Sophia was working.

And David wasn't especially opposed to this. And he was actually quite right about it. What he said was, "You can tell people exactly how it's working and they won't care. They want to be drawn into the illusion." And he was 100% correct. I'll tell you what, this wasn't Sophia, this was Philip K.

Dick. But we did some interactions between humans and the Philip K. Dick robot in Austin, Texas a few years back. And in this case, the Philip K. Dick was just teleoperated by another human in the other room. So during the conversations, we didn't tell people the robot was teleoperated. We just said, "Here, have a conversation with Phil Dick.

We're going to film you." And they had a great conversation with Philip K. Dick, teleoperated by my friend, Stephan Bugaj. After the conversation, we brought the people in the back room to see Stephan, who was controlling the Philip K. Dick robot, but they didn't believe it. These people were like, "Well, yeah, but I know I was talking to Phil.

Like, maybe Stephan was typing, but the spirit of Phil was animating his mind while he was typing." So even though they knew it was a human in the loop, even seeing the guy there, they still believed that was Phil they were talking to. A small part of me believes that they were right, actually.

Because our understanding... Well, we don't understand the universe. That's the thing. I mean, there is a cosmic mind field that we're all embedded in that yields many strange synchronicities in the world, which is a topic we don't have time to go into too much here. There's something to this where our imagination about Sophia and people like Yann LeCun being frustrated about it is all part of this beautiful dance of creating artificial intelligence that's almost essential.

You see with Boston Dynamics, whom I'm a huge fan of as well, you know, the kind of... I mean, these robots are very far from intelligent. I played with their last one, actually. With the Spot Mini. Yeah, very cool. I mean, it reacts in quite a fluid and flexible way.

But we immediately ascribe the kind of intelligence, we immediately ascribe AGI to them. Yeah, yeah, if you kick it and it falls down and goes, "Ow," you feel bad, right? You can't help it. And I mean, that's going to be part of our journey in creating intelligent systems. More and more and more and more.

As Sophia starts out with a walking skeleton, as you add more and more intelligence, I mean, we're going to have to deal with this kind of idea. Absolutely. And about Sophia, I would say, first of all, I have nothing against Yann LeCun. No, no, this is fine. He's a nice guy.

If he wants to play the media banter game, I'm happy to play it. He's a good researcher and a good human being. I'd happily work with the guy. The other thing I was going to say is, I have been explicit about how Sophia works. I posted online, in H+ Magazine, an online webzine, a moderately detailed article explaining that there are three software systems we've used inside Sophia.

There's a timeline editor, which is like a rule-based authoring system, where she's really just being an outlet for what a human scripted. There's a chatbot, which has some rule-based and some neural aspects. And then sometimes we've used OpenCog behind Sophia, where there's more learning and reasoning. And you know, the funny thing is, I can't always tell which system is operating her, right?

Because whether she's really learning or thinking, or just appears to be, over a half hour I could tell, but over like three or four minutes of interaction, I couldn't tell. So even with just three systems, it's already sufficiently complex that you can't really tell right away. Yeah, the thing is, even if you get up on stage and tell people how Sophia's working, and then they talk to her, they still attribute more agency and consciousness to her than is really there.
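As a purely hypothetical sketch of how three such subsystems can sit behind one interface (part of why it's hard to tell which one is answering at any moment), here's that structure in Python; the class and method names are illustrative, not Hanson Robotics' actual API.

```python
# Hypothetical sketch: three interchangeable backends behind one interface.
# The observer only sees sophia_respond(), not which subsystem produced
# the answer. Names are illustrative, not the actual Hanson Robotics API.

class TimelineEditor:
    """Rule-based authoring: plays back lines a human scripted."""
    def reply(self, utterance):
        return "a scripted line written by a human author"

class HybridChatbot:
    """Mix of hand-coded rules and neural response generation."""
    def reply(self, utterance):
        return "a rule-matched or neural-net-generated response"

class OpenCogBackend:
    """More learning and reasoning over a knowledge store."""
    def reply(self, utterance):
        return "a response derived by learning and inference"

BACKENDS = {
    "scripted": TimelineEditor(),
    "chat": HybridChatbot(),
    "reasoning": OpenCogBackend(),
}

def sophia_respond(utterance, mode):
    # The operator chooses which subsystem drives Sophia for a session;
    # from the outside, the interaction looks the same either way.
    return BACKENDS[mode].reply(utterance)

print(sophia_respond("How do you work?", mode="chat"))
```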

So I think there are a couple levels of ethical issue there. One issue is, should you be transparent about how Sophia is working? And I think you should. And I think we have been. I mean, there are articles online, there's some TV special that shows me explaining the three subsystems behind Sophia.

You know, the way Sophia works is out there much more clearly than how Facebook's AI works or something, right? I mean, we've been fairly explicit about it. The other is, given that telling people how it works doesn't cause them to not attribute too much intelligence or agency to it anyway, then should you keep fooling them when they want to be fooled?

And I mean, the whole media industry is based on fooling people the way they want to be fooled. And we are fooling people 100% toward a good end. I mean, we are playing on people's sense of empathy and compassion so that we can give them a good user experience with helpful robots, and so that we can fill the AI's mind with love and compassion.

So I've been talking a lot with Hanson Robotics lately about collaborations in the area of medical robotics. And we haven't quite pulled the trigger on a project in that domain yet, but we may well do so quite soon. So we've been talking a lot about, you know, robots can help with elder care, robots can help with kids.

David's done a lot of things with autism therapy and robots before. In the COVID era, having a robot that can be a nursing assistant in various senses can be quite valuable. The robots don't spread infection, and they can also deliver more attention than human nurses can give, right? So if you have a robot that's helping a patient with COVID, if that patient attributes more understanding and compassion and agency to that robot than it really has, because it looks like a human, I mean, is that really bad?

I mean, we can tell them it doesn't fully understand you, and they don't care, because they're lying there with a fever and they're sick. But they'll react better to that robot with its loving, warm facial expression than they would to a Pepper robot or a metallic-looking robot. So it's really, it's about how you use it, right?

If you made a human-looking, like, door-to-door sales robot that used its human-looking appearance to scam people out of their money, then you're using that connection in a bad way, but you could also use it in a good way. But then that's the same problem with every technology, right? Beautifully put.