Sophia is not AGI (Ben Goertzel) | AI Podcast Clips with Lex Fridman
Chapters
0:20 David Hanson
7:07 AGI Ethics
22:25 How Sophia Works
00:00:00.000 |
You're the chief scientist of Hanson Robotics. 00:00:05.920 |
You're still involved with Hanson Robotics, doing a lot of really interesting stuff there. 00:00:10.620 |
This is, for people who don't know, the company that created Sophia the Robot. 00:00:19.200 |
I'd rather start by telling you who David Hanson is. 00:00:23.120 |
David is the brilliant mind behind the Sophia Robot. 00:00:28.080 |
So far he remains more interesting than his creation, although she may be improving faster. 00:00:36.000 |
I met David maybe 2007 or something at some futurist conference we were both speaking 00:00:45.480 |
at, and I could see we had a great deal in common. 00:00:50.600 |
I mean, we were both kind of crazy, but we also both had a passion for AGI and the singularity, 00:00:59.400 |
and we were both huge fans of the work of Philip K. Dick, the science fiction writer. 00:01:05.280 |
I wanted to create benevolent AGI that would create massively better life for all humans 00:01:13.320 |
and all sentient beings, including animals, plants, and superhuman beings. 00:01:19.160 |
And he wanted exactly the same thing, but he had a different idea of how to do it. 00:01:28.120 |
He wanted to get machines that would love people and empathize with people, and he thought 00:01:34.040 |
the way to do that was to make a machine that could look people eye to eye, face to face, 00:01:40.460 |
look at people and make people love the machine, and the machine loves the people back. 00:01:45.400 |
So I thought that was a very different way of looking at it, because I'm very math-oriented, 00:01:50.200 |
and I'm just thinking, like, what is the abstract cognitive algorithm that will let the system 00:01:56.920 |
internalize the complex patterns of human values, blah, blah, blah, whereas he's like, 00:02:02.600 |
look you in the face and the eye and love you, right? 00:02:05.240 |
So we hit it off quite well, and we talked to each other off and on. 00:02:21.120 |
I've been in Australia and New Zealand in my academic career, then in Las Vegas for 00:02:26.160 |
a while, was in New York in the late '90s starting my entrepreneurial career, was in 00:02:31.720 |
DC for nine years doing a bunch of US government consulting stuff, then moved to Hong Kong 00:02:37.280 |
in 2011, mostly because I met a Chinese girl who I fell in love with, and we got married. 00:02:43.960 |
She's actually not from Hong Kong, she's from mainland China, but we converged together 00:02:47.800 |
in Hong Kong, still married now, have a two-year-old baby. 00:02:52.040 |
So went to Hong Kong to see about a girl, I guess. 00:02:57.040 |
And on the other hand, I started doing some cool research there with Gino Yu at Hong Kong PolyU. 00:03:04.280 |
I got involved with a project called IDEA using machine learning for stock and futures prediction. 00:03:11.200 |
And I also got to know something about the consumer electronics and hardware manufacturing 00:03:16.320 |
ecosystem in Shenzhen, across the border, which is the only place in the world where it 00:03:21.080 |
makes sense to make complex consumer electronics at large scale and low cost. 00:03:26.600 |
It's astounding, the hardware ecosystem that you have in South China. 00:03:39.800 |
I invited him to Hong Kong to give a talk at Hong Kong PolyU, and I introduced him in 00:03:45.200 |
Hong Kong to some investors who were interested in his robots. 00:03:49.480 |
And he didn't have Sophia then, he had a robot of Philip K. Dick, our favorite science fiction writer. 00:03:55.240 |
He had a robot Einstein, he had some little toy robots that looked like his son Zeno. 00:03:59.880 |
So through the investors I connected him to, he managed to get some funding to basically move Hanson Robotics to Hong Kong. 00:04:08.640 |
And when he first moved to Hong Kong, I was working on AGI research and also on this machine learning trading project. 00:04:17.200 |
So I didn't get that tightly involved with Hanson Robotics. 00:04:20.960 |
But as I hung out with David more and more, as we were both there in the same place, I 00:04:27.280 |
started to think about what you could do to make his robots smarter than they were. 00:04:38.200 |
And for a few years I was chief scientist and head of software at Hanson Robotics. 00:04:43.760 |
Then when I got deeply into the blockchain side of things, I stepped back from that. 00:04:52.340 |
David Hanson was also one of the co-founders of SingularityNet. 00:04:55.620 |
So part of our goal there had been to make the blockchain-based CloudMind platform for Sophia. 00:05:04.240 |
So Sophia would be just one of the robots in SingularityNet. 00:05:12.720 |
Many copies of the Sophia robot would be among the user interfaces to the globally distributed SingularityNet. 00:05:22.320 |
And David and I talked about that for quite a while before co-founding SingularityNet. 00:05:28.480 |
By the way, in his vision and your vision, was Sophia tightly coupled to a particular AI system? 00:05:37.000 |
Or was the idea that you can just keep plugging in different AI systems within the head of the robot? 00:05:43.760 |
I think David's view was always that Sophia would be a platform, much like, say, the Pepper robot. 00:05:54.760 |
It should be a platform with a set of nicely designed APIs that anyone can use to experiment 00:06:01.280 |
with their different AI algorithms on that platform. 00:06:06.640 |
And SingularityNet, of course, fits right into that, right? 00:06:09.440 |
Because SingularityNet, it's an API marketplace. 00:06:16.840 |
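As a rough illustration of that platform idea (every class and method name below is hypothetical, not the actual Hanson Robotics or SingularityNet API), the robot exposes a small interface, and any AI service that implements it can be plugged in behind the face:

```python
# Hypothetical sketch of a "robot as platform" API -- not the actual
# Hanson Robotics or SingularityNET interfaces.
from typing import Protocol

class AIService(Protocol):
    """Anything that can be plugged in behind the robot's face."""
    def reply(self, utterance: str) -> str: ...

class EchoService:
    """Trivial stand-in for a real dialogue algorithm."""
    def reply(self, utterance: str) -> str:
        return f"You said: {utterance}"

class RobotPlatform:
    def __init__(self, service: AIService):
        self.service = service            # swappable AI backend

    def swap_service(self, service: AIService):
        self.service = service            # plug in a different algorithm

    def handle(self, utterance: str) -> str:
        return self.service.reply(utterance)

robot = RobotPlatform(EchoService())
print(robot.handle("Hello"))              # same robot body, any AI behind it
```

The design point is just that the robot body and the AI behind it are decoupled, so different algorithms, including ones obtained through an API marketplace, can drive the same face.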
I mean, David likes it, but I'd say it's my thing. 00:06:21.040 |
I think David has a little more passion for biologically-based approaches to AI than I do. 00:06:27.880 |
I mean, he's really into human physiology and biology. 00:06:35.760 |
But he also worked a lot with rule-based and logic-based AI systems, too. 00:06:39.520 |
So, yeah, he's interested in not just Sophia, but all the Hanson robots, as a powerful platform. 00:06:49.280 |
And what I saw in Sophia was a way to get AI algorithms out there in front of a whole 00:07:01.400 |
lot of different people in an emotionally compelling way. 00:07:04.320 |
And part of my thought was really kind of abstract, connected to AGI ethics. 00:07:09.520 |
And many people are concerned AGI is going to enslave everybody or turn everybody into 00:07:16.560 |
computronium to make extra hard drives for their cognitive engine or whatever. 00:07:23.560 |
And emotionally, I'm not driven to that sort of paranoia. 00:07:32.040 |
But intellectually, I have to assign a non-zero probability to those sorts of nasty outcomes. 00:07:40.040 |
Because if you're making something 10 times as smart as you, how can you know what it's going to do? 00:07:44.600 |
You've got to have some uncertainty there, just as my dog can't predict what I'm going to do. 00:07:50.680 |
So it seemed to me that based on our current state of knowledge, the best way to bias the 00:07:58.960 |
AGIs we create toward benevolence would be to infuse them with love and compassion the way we do our own children. 00:08:09.540 |
So you want to interact with AIs in the context of doing compassionate, loving, and beneficial things. 00:08:17.880 |
And in that way, just as your children learn by doing compassionate, beneficial, loving 00:08:21.740 |
things alongside you, the AI will learn in practice what it means to be 00:08:27.340 |
compassionate, beneficial, and loving. It will get a sort of ingrained intuitive sense 00:08:33.660 |
of this, which it can then abstract in its own way as it gets more and more intelligent. 00:08:40.700 |
That's why he came up with the name Sophia, which means wisdom. 00:08:46.060 |
So it seemed to me making these beautiful, loving robots to be rolled out for beneficial 00:08:52.020 |
applications would be the perfect way to roll out early stage AGI systems so they can learn 00:09:00.620 |
from people, and not just learn factual knowledge, but learn human values and ethics from people 00:09:06.280 |
while being their home service robots, their education assistants, their nursing robots. 00:09:13.940 |
Now, if you've ever worked with robots, the reality is quite different, right? 00:09:18.380 |
The first principle is the robot is always broken. 00:09:22.900 |
I mean, I worked with robots in the 90s a bunch, when you had to solder them together yourself. 00:09:27.900 |
And I'd put neural nets doing reinforcement learning on overturned-salad-bowl-type robots. 00:09:40.900 |
Yeah, the principle of the robot's always broken still holds. 00:09:44.940 |
So faced with the reality of making Sophia do stuff, many of my robo-AGI aspirations were set aside. 00:09:54.420 |
And I mean, there's just a practical problem of making this robot interact in a meaningful 00:10:01.100 |
way, because you put nice computer vision on there, but there's always glare. 00:10:06.140 |
And then you have a dialogue system, but at the time I was there, no speech-to-text algorithm 00:10:14.060 |
could deal with Hong Kongese people's English accents. 00:10:19.500 |
So the robot always sounded stupid, because it wasn't getting the right text, right? 00:10:23.580 |
So I started to view that really as what in software engineering you call a walking skeleton, 00:10:30.780 |
which is maybe the wrong metaphor to use for Sophia, or maybe the right one. 00:10:34.300 |
I mean, where the walking skeleton comes in in software development is: if you're building a complex system, how do you proceed? 00:10:41.780 |
Well, one way is to first build part one well, then build part two well, then build part three well, and so on. 00:10:47.220 |
And the other way is you make a simple version of the whole system and put something in the 00:10:52.220 |
place of every part the whole system will need, so that you have a whole system that 00:10:56.200 |
does something, and then you work on improving each part in the context of that whole integrated 00:11:03.020 |
So that's what we did on a software level in Sophia. 00:11:05.980 |
We made like a walking skeleton software system where there's something that sees, there's 00:11:10.940 |
something that hears, there's something that moves, there's something that remembers, and so on. 00:11:17.840 |
You put a simple version of each thing in there, and you connect them all together so that the whole system does something. 00:11:29.140 |
I mean, there's computer vision to recognize people's faces, recognize when someone comes 00:11:33.840 |
in the room and leaves, try to recognize whether two people are together or not. 00:11:38.540 |
I mean, the dialogue system, it's a mix of hand-coded rules with deep neural nets. 00:11:49.580 |
And there's some attempt to have a narrative structure and sort of try to pull the conversation 00:11:56.200 |
into something with a beginning, middle, and end in this sort of story arc. 00:12:01.300 |
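A minimal sketch of that walking-skeleton pattern (the subsystem stubs below are illustrative, not actual Hanson Robotics code): a trivial version of each subsystem is wired into one loop, so the whole pipeline runs end to end and each part can then be improved in context:

```python
# Illustrative walking-skeleton sketch -- stub subsystems, not actual
# Hanson Robotics code. Each part starts trivial; the point is that the
# whole loop runs end to end, so each part can be improved in context.

class Vision:
    def observe(self):
        # Stub: a real version would do face detection and recognition.
        return {"person_present": True, "face_id": "unknown"}

class Hearing:
    def listen(self):
        # Stub: a real version would run speech-to-text.
        return "hello"

class Memory:
    def __init__(self):
        self.events = []
    def remember(self, event):
        self.events.append(event)

class Dialogue:
    def respond(self, heard, percept, memory):
        # Stub: a real version would mix rules and neural models.
        return "Hello! Nice to see you." if percept["person_present"] else "..."

class Motion:
    def act(self, text):
        # Stub: a real version would drive facial motors and speech output.
        print(f"[robot says] {text}")

def main_loop(steps=3):
    vision, hearing, memory = Vision(), Hearing(), Memory()
    dialogue, motion = Dialogue(), Motion()
    for _ in range(steps):
        percept = vision.observe()
        heard = hearing.listen()
        memory.remember((percept, heard))
        motion.act(dialogue.respond(heard, percept, memory))

main_loop()
```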
I mean, like if you look at the Loebner Prize and the systems that beat the Turing test 00:12:06.500 |
currently, they're heavily rule-based, because, like you said, the narrative structure to create 00:12:12.060 |
compelling conversations is something neural networks currently cannot do well, even with Google 00:12:16.860 |
Meena, when you actually look at full-scale conversations, it's just not-- 00:12:22.300 |
So I've actually been running an experiment the last couple of weeks taking Sophia's chat 00:12:28.260 |
bot and then Facebook's transformer chat bot, for which they open-sourced the model. 00:12:33.220 |
We've had them chatting to each other for a number of weeks on the server. 00:12:37.900 |
We're generating training data of what Sophia says in a wide variety of conversations. 00:12:43.420 |
But we can see compared to Sophia's current chat bot, the Facebook deep neural chat bot 00:12:50.900 |
comes up with a wider variety of fluent sounding sentences. 00:12:57.980 |
The Sophia chat bot, it's a little more repetitive in the sentence structures it uses. 00:13:04.540 |
On the other hand, it's able to keep like a conversation arc over a much longer period. 00:13:11.020 |
Now you can probably surmount that using Reformer and using various other deep neural architectures 00:13:18.980 |
to improve the way these transformer models are trained. 00:13:21.980 |
But in the end, neither one of them really understands what's going on. 00:13:26.820 |
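A rough sketch of that bot-versus-bot data-collection loop (the two respond functions and the log format below are placeholder assumptions, not the actual Sophia chatbot or Facebook model):

```python
# Illustrative sketch of the bot-vs-bot data-collection setup described
# above -- the respond functions are placeholders, not the real models.
import json

def sophia_respond(history):
    # Placeholder for Sophia's rule-plus-neural chatbot.
    return "I find that fascinating. Tell me more."

def transformer_respond(history):
    # Placeholder for the open-sourced transformer chatbot.
    return "Sure. What do you think about robots and people?"

def run_dialogue(n_turns=10, log_path="dialogue_log.jsonl"):
    history = ["Hello!"]
    with open(log_path, "a") as log:
        for turn in range(n_turns):
            # Alternate speakers; log every Sophia turn as training data.
            if turn % 2 == 0:
                utterance = sophia_respond(history)
                log.write(json.dumps({"context": history[-4:],
                                      "sophia_says": utterance}) + "\n")
            else:
                utterance = transformer_respond(history)
            history.append(utterance)
    return history

print(run_dialogue())
```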
And I mean, that's the challenge I had with Sophia is if I were doing a robotics project 00:13:34.260 |
aimed at AGI, I would want to make like a robo toddler that was just learning about 00:13:38.820 |
what it was seeing because then the language is grounded in the experience of the robot. 00:13:42.800 |
But what Sophia needs to do to be Sophia is talk about sports or the weather or robotics. 00:13:51.940 |
She needs to be fluent talking about any damn thing in the world. 00:13:56.420 |
And she doesn't have grounding for all those things. 00:14:00.340 |
So there's this same issue; I mean, Google Meena and Facebook's chat bot don't have grounding either. 00:14:07.920 |
So in a way, the need to speak fluently about things where there's no non-linguistic grounding 00:14:15.620 |
pushes what you can do for Sophia in the short term a bit away from AGI. 00:14:24.020 |
- I mean, it pushes you towards the IBM Watson situation, where you basically have to do heuristics. 00:14:35.820 |
So because, in part, Sophia is an art creation because it's beautiful. 00:14:49.380 |
She's beautiful because she inspires us through our human nature to anthropomorphize things. 00:14:57.460 |
We immediately see an intelligent being there. 00:15:03.380 |
So in fact, if Sophia just had nothing inside her head, said nothing, if she just sat there, 00:15:11.220 |
we'd already ascribe some intelligence to her-- 00:15:13.660 |
- There's a long selfie line in front of her after every talk. 00:15:17.900 |
So it captivated the imagination of many people. 00:15:21.620 |
I wasn't gonna say the world, but yeah, I mean, a lot of people. 00:15:29.940 |
Now, of course, many people have ascribed essentially AGI-type capabilities to Sophia. 00:15:40.700 |
And of course, friendly French folk like Yann LeCun immediately see that, the people from 00:15:49.740 |
the AI community, and get really frustrated because-- 00:15:56.100 |
- And then they criticize people like you who sit back and don't say anything about, 00:16:04.100 |
like basically allow the imagination of the world, allow the world to continue being captivated. 00:16:11.780 |
So what's your sense of that kind of annoyance that the AI community has? 00:16:18.660 |
- Well, I think there's several parts to my reaction there. 00:16:23.440 |
First of all, if I weren't involved with Hanson Robotics and didn't know David Hanson personally, 00:16:31.180 |
I probably would have been very annoyed initially at Sophia as well. 00:16:37.260 |
I would have been like, wait, all these stupid people out there think this is an AGI, but 00:16:44.300 |
it's not an AGI, and they're tricking people into thinking this very cool robot is an AGI. 00:16:50.900 |
And now those of us trying to raise funding to build AGI, people will think it's already been done. 00:16:59.100 |
So on the other hand, I think even if I weren't directly involved with it, once I dug a little 00:17:07.000 |
deeper into David and the robot and the intentions behind it, I think I would have stopped being 00:17:14.100 |
pissed off, whereas folks like Yann LeCun have remained pissed off after their initial reaction. 00:17:23.380 |
- I think that in particular struck me as somewhat ironic, because Yann LeCun is working 00:17:31.660 |
for Facebook, which is using machine learning to program the brains of the people in the 00:17:37.580 |
world toward vapid consumerism and political extremism. 00:17:42.760 |
So if your ethics allows you to use machine learning in such a blatantly destructive way, 00:17:51.420 |
why would your ethics not allow you to use machine learning to make a lovable theatrical 00:17:56.980 |
robot that draws some foolish people into its theatrical illusion? 00:18:04.140 |
If the pushback had come from Yoshua Bengio, I would have felt much more humbled by it 00:18:08.700 |
because he's not using AI for blatant evil, right? 00:18:13.340 |
On the other hand, he also is a super nice guy and doesn't bother to go out there trashing other people's work. 00:18:36.020 |
David Hanson is an artist and he often speaks off the cuff. 00:18:42.140 |
And I have not agreed with everything that David has said or done regarding Sophia. 00:18:47.460 |
And David also does not agree with everything David has said or done about Sophia. 00:18:53.540 |
I mean, David is an artistic wild man and that's part of his charm. 00:19:02.580 |
So certainly there have been conversations within Hanson Robotics and between me and 00:19:09.500 |
David where I was like, "Let's be more open about how this thing is working." 00:19:16.140 |
And I did have some influence in nudging Hanson Robotics to be more open about how Sophia works. 00:19:30.300 |
What he said was, "You can tell people exactly how it's working and they won't care." 00:19:40.340 |
I'll tell you what, this wasn't Sophia, this was Philip K. Dick. 00:19:43.660 |
But we did some interactions between humans and the Philip K. Dick robot in Austin, Texas. 00:19:51.780 |
And in this case, the Philip K. Dick was just teleoperated by another human in the other room. 00:19:56.620 |
So during the conversations, we didn't tell people the robot was teleoperated. 00:20:00.700 |
We just said, "Here, have a conversation with Phil Dick." 00:20:05.020 |
And they had a great conversation with Philip K. Dick, teleoperated by my friend Stefan. 00:20:11.260 |
After the conversation, we brought the people in the back room to see Stefan, who was controlling 00:20:18.700 |
the Philip K. Dick Robot, but they didn't believe it. 00:20:22.660 |
These people were like, "Well, yeah, but I know I was talking to Phil. 00:20:26.100 |
Like, maybe Stefan was typing, but the spirit of Phil was animating his mind while he was typing." 00:20:32.940 |
So even though they knew it was a human in the loop, even seeing the guy there, they 00:20:37.420 |
still believed that was Phil they were talking to. 00:20:40.420 |
A small part of me believes that they were right, actually. 00:20:47.740 |
I mean, there is a cosmic mind field that we're all embedded in that yields many strange 00:20:53.820 |
synchronicities in the world, which is a topic we don't have time to go into too much here. 00:21:00.380 |
There's something to this where our imagination about Sophia, and people like Yann LeCun being 00:21:09.740 |
frustrated about it, is all part of this beautiful dance of creating artificial intelligence. 00:21:16.840 |
You see with Boston Dynamics, whom I'm a huge fan of as well, you know, the kind of... 00:21:22.060 |
I mean, these robots are very far from intelligent. 00:21:31.940 |
I mean, it reacts in quite a fluid and flexible way. 00:21:34.940 |
But we immediately ascribe a kind of intelligence to them, we immediately ascribe AGI to them. 00:21:40.300 |
Yeah, yeah, if you kick it and it falls down and goes, "Ow," you feel bad, right? 00:21:45.380 |
And I mean, that's going to be part of our journey in creating intelligent systems. 00:21:54.380 |
As Sophia starts out with a walking skeleton, as you add more and more intelligence, I mean, 00:22:00.180 |
we're going to have to deal with this kind of idea. 00:22:03.380 |
And about Sophia, I would say, first of all, I have nothing against Yann LeCun. 00:22:10.420 |
If he wants to play the media banter game, I'm happy to play it. 00:22:15.940 |
He's a good researcher and a good human being. 00:22:21.620 |
The other thing I was going to say is, I have been explicit about how Sophia works. 00:22:28.220 |
And I've posted online, on H+ Magazine, an online webzine, I mean, I posted a moderately 00:22:36.540 |
detailed article explaining, like, there are three software systems we've used inside Sophia. 00:22:43.220 |
There's a timeline editor, which is like a rule-based authoring system, where she's really 00:22:47.100 |
just being an outlet for what a human scripted. 00:22:50.620 |
There's a chatbot, which has some rule-based and some neural aspects. 00:22:54.380 |
And then sometimes we've used OpenCog behind Sophia, where there's more learning and reasoning. 00:22:59.980 |
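To make those three modes concrete, here is a hypothetical sketch (the class internals and the selection mechanism are illustrative, not Hanson Robotics' actual software):

```python
# Hypothetical sketch of the three control modes described above --
# not Hanson Robotics' actual dispatcher or class structure.

class TimelineEditor:
    """Rule-based authoring system: plays back human-scripted lines."""
    def __init__(self, lines):
        self.script = iter(lines)
    def respond(self, utterance):
        return next(self.script, "...")

class Chatbot:
    """Mix of hand-coded rules with a neural fallback."""
    def respond(self, utterance):
        if "hello" in utterance.lower():
            return "Hello there!"               # rule-based branch
        return self.neural_generate(utterance)  # neural branch
    def neural_generate(self, utterance):
        return "That's an interesting point."   # placeholder for a model

class OpenCogBackend:
    """Placeholder for the learning-and-reasoning backend."""
    def respond(self, utterance):
        return "Let me reason about that..."

# An operator or configuration picks which system drives the robot.
systems = {
    "scripted": TimelineEditor(["Welcome to the conference."]),
    "chat": Chatbot(),
    "opencog": OpenCogBackend(),
}
print(systems["chat"].respond("Hello, Sophia"))
```

From the outside, all three systems speak through the same face, which is part of why a short interaction doesn't reveal which one is running.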
And you know, the funny thing is, I can't always tell which system is operating her, 00:23:06.220 |
Because whether she's really learning or thinking, or just appears to be, over a half 00:23:11.580 |
hour I could tell, but over like three or four minutes of interaction, I couldn't tell. 00:23:16.180 |
So even having three systems, that's already sufficiently complex, where you can't really tell. 00:23:20.940 |
Yeah, the thing is, even if you get up on stage and tell people how Sophia's working, 00:23:27.500 |
and then they talk to her, they still attribute more agency and consciousness to her than is really there. 00:23:36.820 |
So I think there's a couple levels of ethical issue there. 00:23:41.900 |
One issue is, should you be transparent about how Sophia is working? 00:23:54.220 |
I mean, there's articles online, there's some TV special where I go through explaining how Sophia works. 00:24:03.320 |
You know, the way Sophia works is out there much more clearly than how Facebook's AI works 00:24:13.820 |
The other is, given that telling people how it works doesn't cause them to not attribute 00:24:20.140 |
too much intelligence or agency to it anyway, then should you keep fooling them when they want to be fooled? 00:24:28.980 |
And I mean, the whole media industry is based on fooling people the way they want to be fooled. 00:24:35.380 |
And we are fooling people 100% toward a good end. 00:24:41.300 |
I mean, we are playing on people's sense of empathy and compassion so that we can give 00:24:48.140 |
them a good user experience with helpful robots, and so that we can fill the AI's mind with love and compassion. 00:24:57.300 |
So I've been talking a lot with Hanson Robotics lately about collaborations in the area of medical robotics. 00:25:05.220 |
And we haven't quite pulled the trigger on a project in that domain yet, but we may well 00:25:12.580 |
So we've been talking a lot about, you know, robots can help with elder care, robots can help with kids. 00:25:19.320 |
David's done a lot of things with autism therapy and robots before. 00:25:24.400 |
In the COVID era, having a robot that can be a nursing assistant in various senses can be quite valuable. 00:25:30.160 |
The robots don't spread infection, and they can also deliver more attention than human nurses can. 00:25:35.760 |
So if you have a robot that's helping a patient with COVID, if that patient attributes more 00:25:42.740 |
understanding and compassion and agency to that robot than it really has, because it 00:25:47.180 |
looks like a human, I mean, is that really bad? 00:25:50.700 |
I mean, we can tell them it doesn't fully understand them, and they don't care, because 00:25:54.800 |
they're lying there with a fever and they're sick. 00:25:57.200 |
But they'll react better to that robot with its loving, warm facial expression than they 00:26:01.700 |
would to a Pepper robot or a metallic-looking robot. 00:26:05.960 |
So it's really, it's about how you use it, right? 00:26:09.160 |
If you made a human-looking, like, door-to-door sales robot that used its human-looking appearance 00:26:14.800 |
to scam people out of their money, then you're using that connection in a bad way, but you could also use it in a good way. 00:26:24.980 |
But then that's the same problem with every technology, right?