
Kate Darling: Social Robotics | Lex Fridman Podcast #98


Chapters

0:00 Introduction
3:31 Robot ethics
4:36 Universal Basic Income
6:31 Mistreating robots
17:17 Robots teaching us about ourselves
20:27 Intimate connection with robots
24:29 Trolley problem and making difficult moral decisions
31:59 Anthropomorphism
38:09 Favorite robot
41:19 Sophia
42:46 Designing robots for human connection
47:01 Why is it so hard to build a personal robotics company?
50:03 Is it possible to fall in love with a robot?
56:39 Robots displaying consciousness and mortality
58:33 Manipulation of emotion by companies
64:40 Intellectual property
69:23 Lessons for robotics from parenthood
70:41 Hope for future of robotics

Transcript

00:00:00.000 | "The following is a conversation with Kate Darling,
00:00:02.920 | "a researcher at MIT, interested in social robotics,
00:00:06.400 | "robot ethics, and generally,
00:00:08.360 | "how technology intersects with society.
00:00:11.080 | "She explores the emotional connection
00:00:12.720 | "between human beings and lifelike machines,
00:00:15.800 | "which for me is one of the most exciting topics
00:00:18.600 | "in all of artificial intelligence.
00:00:21.420 | "As she writes in her bio,
00:00:23.440 | "she's a caretaker of several domestic robots,
00:00:26.260 | "including her Pleo dinosaur robots
00:00:28.980 | "named Yochai, Peter, and Mr. Spaghetti.
00:00:33.620 | "She is one of the funniest and brightest minds
00:00:35.680 | "I've ever had the fortune to talk to.
00:00:37.960 | "This conversation was recorded recently,
00:00:40.020 | "but before the outbreak of the pandemic.
00:00:42.320 | "For everyone feeling the burden of this crisis,
00:00:44.660 | "I'm sending love your way."
00:00:46.760 | This is the Artificial Intelligence Podcast.
00:00:49.400 | If you enjoy it, subscribe on YouTube,
00:00:51.440 | review it with five stars on Apple Podcast,
00:00:53.800 | support it on Patreon,
00:00:55.120 | or simply connect with me on Twitter
00:00:57.000 | at Lex Fridman, spelled F-R-I-D-M-A-N.
00:01:00.820 | As usual, I'll do a few minutes of ads now
00:01:03.060 | and never any ads in the middle
00:01:04.540 | that can break the flow of the conversation.
00:01:06.820 | I hope that works for you
00:01:08.060 | and doesn't hurt the listening experience.
00:01:11.060 | Quick summary of the ads.
00:01:12.500 | Two sponsors, Masterclass and ExpressVPN.
00:01:16.620 | Please consider supporting the podcast
00:01:18.300 | by signing up to Masterclass at masterclass.com/lex
00:01:22.460 | and getting ExpressVPN at expressvpn.com/lexpod.
00:01:27.460 | This show is sponsored by Masterclass.
00:01:31.720 | Sign up at masterclass.com/lex to get a discount
00:01:35.560 | and to support this podcast.
00:01:37.920 | When I first heard about Masterclass,
00:01:39.920 | I thought it was too good to be true.
00:01:42.000 | For $180 a year, you get an all-access pass
00:01:45.360 | to watch courses from, to list some of my favorites,
00:01:48.920 | Chris Hadfield on space exploration,
00:01:51.520 | Neil deGrasse Tyson on scientific thinking
00:01:53.580 | and communication, Will Wright, creator of SimCity and Sims,
00:01:57.580 | love those games, on game design,
00:02:00.380 | Carlos Santana on guitar,
00:02:02.820 | Garry Kasparov on chess,
00:02:04.860 | Daniel Negreanu on poker, and many more.
00:02:07.840 | Chris Hadfield explaining how rockets work
00:02:10.340 | and the experience of being launched into space alone
00:02:12.820 | is worth the money.
00:02:14.300 | By the way, you can watch it on basically any device.
00:02:18.400 | Once again, sign up on masterclass.com/lex
00:02:21.740 | to get a discount and to support this podcast.
00:02:25.180 | This show is sponsored by ExpressVPN.
00:02:28.680 | Get it at expressvpn.com/lexpod
00:02:32.280 | to get a discount and to support this podcast.
00:02:35.520 | I've been using ExpressVPN for many years.
00:02:38.080 | I love it.
00:02:39.240 | It's easy to use, press the big power on button
00:02:42.360 | and your privacy is protected.
00:02:44.040 | And if you like, you can make it look
00:02:46.700 | like your location's anywhere else in the world.
00:02:49.220 | I might be in Boston now,
00:02:50.760 | but I can make it look like I'm in New York,
00:02:52.600 | London, Paris, or anywhere else.
00:02:55.000 | This has a large number of obvious benefits.
00:02:57.620 | Certainly, it allows you to access international versions
00:03:00.600 | of streaming websites like the Japanese Netflix
00:03:03.320 | or the UK Hulu.
00:03:05.280 | ExpressVPN works on any device you can imagine.
00:03:08.480 | I use it on Linux.
00:03:10.600 | Shout out to Ubuntu 20.04, Windows, Android,
00:03:15.680 | but it's available everywhere else too.
00:03:17.960 | Once again, get it at expressvpn.com/lexpod
00:03:22.660 | to get a discount and to support this podcast.
00:03:25.360 | And now, here's my conversation with Kate Darling.
00:03:30.180 | You co-taught robot ethics at Harvard.
00:03:34.060 | What are some ethical issues that arise
00:03:36.340 | in the world with robots?
00:03:38.820 | - Yeah, that was a reading group that I did
00:03:41.960 | when I, like at the very beginning,
00:03:44.420 | first became interested in this topic.
00:03:46.660 | So I think if I taught that class today,
00:03:49.020 | it would look very, very different.
00:03:50.840 | Robot ethics, it sounds very science fictiony,
00:03:54.940 | especially did back then,
00:03:56.340 | but I think that some of the issues
00:04:00.220 | that people in robot ethics are concerned with
00:04:02.700 | are just around the ethical use
00:04:04.340 | of robotic technology in general.
00:04:05.700 | So for example, responsibility for harm,
00:04:08.340 | automated weapon systems,
00:04:09.980 | things like privacy and data security,
00:04:11.900 | things like automation and labor markets.
00:04:16.900 | And then personally, I'm really interested
00:04:19.740 | in some of the social issues
00:04:21.140 | that come out of our social relationships with robots.
00:04:23.940 | - One-on-one relationship with robots.
00:04:26.020 | - Yeah.
00:04:26.860 | - I think most of the stuff we have to talk about
00:04:28.340 | is like one-on-one social stuff.
00:04:29.780 | That's what I love.
00:04:30.660 | I think that's what you love as well,
00:04:33.020 | and they're expert in.
00:04:34.340 | But at societal level, there's like,
00:04:36.700 | there's a presidential candidate now, Andrew Yang, running.
00:04:41.540 | Concerned about automation and robots
00:04:44.620 | and AI in general taking away jobs.
00:04:46.900 | He has a proposal of UBI, universal basic income,
00:04:50.020 | of everybody gets a thousand bucks.
00:04:51.940 | - Yeah.
00:04:52.780 | - As a way to sort of save you
00:04:55.260 | if you lose your job from automation,
00:04:56.780 | to allow you time to discover what it is
00:04:59.940 | that you would like to or even love to do.
00:05:04.660 | - Yes.
00:05:05.700 | So I lived in Switzerland for 20 years,
00:05:08.100 | and universal basic income
00:05:10.200 | has been more of a topic there,
00:05:13.340 | separate from the whole robots and jobs issue.
00:05:15.780 | So it's so interesting to me to see
00:05:19.340 | kind of these Silicon Valley people
00:05:22.020 | latch onto this concept that came from
00:05:24.740 | a very kind of left-wing socialist,
00:05:28.760 | you know, kind of a different place in Europe.
00:05:33.160 | But on the automation labor markets topic,
00:05:38.060 | I think that it's very,
00:05:40.460 | so sometimes in those conversations,
00:05:43.020 | I think people overestimate
00:05:44.820 | where robotic technology is right now.
00:05:47.860 | And we also have this fallacy
00:05:49.940 | of constantly comparing robots to humans
00:05:52.220 | and thinking of this as a one-to-one replacement of jobs.
00:05:55.380 | So even like Bill Gates, a few years ago,
00:05:58.400 | said something about, you know,
00:05:59.860 | maybe we should have a system that taxes robots
00:06:02.900 | for taking people's jobs.
00:06:05.260 | And it just, I mean,
00:06:07.620 | I'm sure that was taken out of context.
00:06:09.540 | You know, he's a really smart guy,
00:06:10.660 | but that sounds to me like kind of
00:06:12.700 | viewing it as a one-to-one replacement
00:06:14.460 | versus viewing this technology
00:06:16.740 | as kind of a supplemental tool
00:06:19.280 | that of course is gonna shake up a lot of stuff,
00:06:21.660 | it's gonna change the job landscape.
00:06:23.600 | But I don't see, you know,
00:06:26.740 | robots taking all the jobs in the next 20 years.
00:06:28.820 | That's just not how it's gonna work.
00:06:30.980 | - Right, so maybe drifting into the land
00:06:33.660 | of more personal relationships with robots
00:06:36.220 | and interaction and so on.
00:06:37.560 | I gotta warn you, I go,
00:06:40.940 | I may ask some silly philosophical questions, I apologize.
00:06:44.140 | - Oh, please do.
00:06:45.180 | - Okay, do you think humans will abuse robots
00:06:50.180 | in their interactions?
00:06:51.020 | So you've had a lot of,
00:06:52.780 | and we'll talk about it,
00:06:53.860 | sort of anthropomorphization and,
00:06:56.580 | you know, this intricate dance,
00:07:00.820 | emotional dance between human and robot,
00:07:02.620 | but there seems to be also a darker side
00:07:05.540 | where people, when they treat the other,
00:07:09.420 | as servants especially,
00:07:10.940 | they can be a little bit abusive or a lot abusive.
00:07:13.620 | Do you think about that?
00:07:15.060 | Do you worry about that?
00:07:16.540 | - Yeah, I do think about that.
00:07:18.060 | So, I mean, one of my main interests
00:07:21.980 | is the fact that people subconsciously treat robots
00:07:24.540 | like living things,
00:07:26.140 | and even though they know
00:07:27.460 | that they're interacting with a machine,
00:07:29.540 | and what it means in that context to behave violently.
00:07:34.220 | I don't know if you could say abuse
00:07:35.820 | because you're not actually, you know,
00:07:38.500 | abusing the inner mind of the robot,
00:07:40.980 | the robot doesn't have any feelings.
00:07:42.900 | - As far as you know.
00:07:44.060 | - Well, yeah.
00:07:46.220 | It also depends on how we define feelings and consciousness,
00:07:48.560 | but I think that's another area
00:07:51.220 | where people kind of overestimate
00:07:52.700 | where we currently are with the technology.
00:07:54.320 | Like the robots are not even as smart as insects right now.
00:07:57.180 | And so I'm not worried about abuse in that sense,
00:08:01.300 | but it is interesting to think about
00:08:03.620 | what does people's behavior towards these things mean
00:08:06.740 | for our own behavior?
00:08:09.660 | Is it desensitizing the people to, you know,
00:08:12.660 | be verbally abusive to a robot or even physically abusive?
00:08:16.300 | And we don't know.
00:08:17.620 | - Right, it's a similar connection
00:08:19.160 | from like if you play violent video games,
00:08:21.740 | what connection does that have
00:08:23.420 | to desensitization to violence?
00:08:26.660 | - That's actually, I haven't read literature on that.
00:08:29.500 | I wonder about that.
00:08:30.740 | Because everything I've heard,
00:08:33.780 | people don't seem to any longer be so worried
00:08:36.220 | about violent video games.
00:08:38.220 | - Correct.
00:08:39.060 | We've seemed, the research on it is,
00:08:42.660 | it's a difficult thing to research.
00:08:44.620 | So it's sort of inconclusive,
00:08:46.820 | but we seem to have gotten the sense,
00:08:50.060 | at least as a society, that people can compartmentalize.
00:08:53.660 | When it's something on a screen
00:08:54.940 | and you're like, you know, shooting a bunch of characters
00:08:57.580 | or running over people with your car,
00:08:59.300 | that doesn't necessarily translate
00:09:00.980 | to you doing that in real life.
00:09:02.980 | We do, however, have some concerns
00:09:05.020 | about children playing violent video games.
00:09:06.980 | And so we do restrict it there.
00:09:08.800 | I'm not sure that's based on any real evidence either,
00:09:12.940 | but it's just the way that we've kind of decided,
00:09:15.140 | you know, we wanna be a little more cautious there.
00:09:17.580 | And the reason I think robots are a little bit different
00:09:19.860 | is because there is a lot of research showing
00:09:21.860 | that we respond differently to something
00:09:23.860 | in our physical space than something on a screen.
00:09:26.660 | We will treat it much more viscerally,
00:09:29.460 | much more like a physical actor.
00:09:32.140 | And so it's totally possible that this is not a problem.
00:09:37.140 | And it's the same thing as violence in video games.
00:09:40.500 | You know, maybe, you know, restrict it with kids to be safe,
00:09:43.460 | but adults can do what they want.
00:09:45.340 | But we just need to ask the question again,
00:09:47.740 | because we don't have any evidence at all yet.
00:09:51.020 | - Maybe there's an intermediate place to,
00:09:53.580 | I did my research on Twitter.
00:09:57.460 | By research, I mean scrolling through your Twitter feed.
00:10:00.700 | You mentioned that you were going at some point
00:10:02.660 | to an animal law conference.
00:10:04.700 | So I have to ask,
00:10:05.540 | do you think there's something that we can learn
00:10:08.260 | from animal rights that guides our thinking about robots?
00:10:13.320 | - Oh, I think there is so much to learn from that.
00:10:15.320 | I'm actually writing a book on it right now.
00:10:17.220 | That's why I'm going to this conference.
00:10:18.620 | So I'm writing a book that looks at the history
00:10:22.140 | of animal domestication and how we've used animals
00:10:24.500 | for work, for weaponry, for companionship.
00:10:27.460 | And, you know, one of the things the book tries to do
00:10:31.180 | is move away from this fallacy that I talked about
00:10:34.180 | of comparing robots and humans,
00:10:35.700 | because I don't think that's the right analogy.
00:10:38.300 | But I do think that on a social level,
00:10:41.380 | even on a social level,
00:10:42.340 | there's so much that we can learn
00:10:43.620 | from looking at that history.
00:10:44.820 | Because throughout history,
00:10:46.500 | we've treated most animals like tools, like products.
00:10:49.540 | And then some of them we've treated differently,
00:10:51.380 | and we're starting to see people treat robots
00:10:53.220 | in really similar ways.
00:10:54.360 | So I think it's a really helpful predictor
00:10:57.060 | to how we're going to interact with the robots.
00:10:59.260 | - Do you think we'll look back at this time,
00:11:01.260 | like 100 years from now,
00:11:02.700 | and see what we do to animals as like,
00:11:07.700 | similar to the way we view like the Holocaust
00:11:10.380 | in World War II?
00:11:13.540 | - That's a great question.
00:11:14.580 | I mean, I hope so.
00:11:16.820 | I am not convinced that we will.
00:11:21.400 | But I often wonder,
00:11:22.920 | what are my grandkids gonna view as abhorrent
00:11:27.080 | that my generation did, that they would never do?
00:11:29.880 | And I'm like, well, what's the big deal?
00:11:32.000 | You know, it's a fun question to ask yourself.
00:11:34.680 | - It always seems that there's atrocities
00:11:37.560 | that we discover later.
00:11:40.720 | So the things that at the time people didn't see as,
00:11:44.440 | you know, you look at everything from slavery
00:11:47.760 | to any kinds of abuse throughout history,
00:11:50.720 | to the kind of insane wars that were happening,
00:11:54.200 | to the way war was carried out,
00:11:57.120 | and rape and the kind of violence
00:11:59.800 | that was happening during war,
00:12:01.300 | that we now, you know, we see as atrocities,
00:12:06.660 | but at the time perhaps didn't as much.
00:12:09.480 | And so now I have this intuition that,
00:12:14.040 | I have this worry, maybe I'm,
00:12:15.960 | you're going to probably criticize me,
00:12:18.480 | but I do anthropomorphize robots.
00:12:22.320 | I have, I don't see a fundamental philosophical difference
00:12:27.320 | between a robot and a human being,
00:12:29.700 | in terms of once the capabilities are matched.
00:12:35.060 | So the fact that we're really far away doesn't,
00:12:39.660 | in terms of capabilities,
00:12:42.040 | and then that from natural language processing,
00:12:44.360 | understanding and generation,
00:12:45.560 | to just reasoning and all that stuff,
00:12:47.600 | I think once you solve it, I see the,
00:12:50.200 | this is a very gray area,
00:12:52.240 | and I don't feel comfortable with the kind of abuse
00:12:54.800 | that people throw at robots.
00:12:57.480 | Subtle, but I can see it becoming,
00:13:00.240 | I can see basically a civil rights movement
00:13:02.360 | for robots in the future.
00:13:04.560 | Do you think, let me put it in the form of a question,
00:13:07.560 | do you think robots should have some kinds of rights?
00:13:11.160 | - Well, it's interesting because I came at this originally
00:13:15.040 | from your perspective.
00:13:16.340 | I was like, you know what?
00:13:17.960 | There's no fundamental difference between technology
00:13:21.240 | and human consciousness.
00:13:23.680 | We can probably recreate anything,
00:13:25.960 | we just don't know how yet.
00:13:27.160 | And so there's no reason not to give machines
00:13:32.040 | the same rights that we have once,
00:13:34.100 | like you say, they're kind of on an equivalent level.
00:13:37.280 | But I realized that that is kind of a far future question.
00:13:41.120 | I still think we should talk about it
00:13:42.240 | 'cause I think it's really interesting,
00:13:43.280 | but I realized that it's actually,
00:13:46.340 | we might need to ask the robot rights question
00:13:48.460 | even sooner than that,
00:13:50.060 | while the machines are still, quote unquote,
00:13:52.700 | really dumb and not on our level,
00:13:57.140 | because of the way that we perceive them.
00:13:59.320 | And I think one of the lessons we learned
00:14:01.260 | from looking at the history of animal rights,
00:14:03.060 | and one of the reasons we may not get to a place
00:14:06.180 | in a hundred years where we view it as wrong to eat
00:14:09.180 | or otherwise use animals for our own purposes
00:14:12.900 | is because historically, we've always protected
00:14:15.960 | those things that we relate to the most.
00:14:18.580 | So one example is whales.
00:14:20.520 | No one gave a shit about the whales.
00:14:22.720 | Am I allowed to swear?
00:14:23.880 | - Yeah, you can swear as much as you want.
00:14:26.940 | Freedom.
00:14:27.780 | - Yeah, no one gave a shit about the whales
00:14:28.920 | until someone recorded them singing.
00:14:30.840 | And suddenly people were like,
00:14:32.200 | oh, this is a beautiful creature
00:14:34.160 | and now we need to save the whales.
00:14:35.760 | And that started the whole
00:14:36.840 | Save the Whales movement in the '70s.
00:14:38.520 | So as much as I, and I think a lot of people
00:14:43.520 | wanna believe that we care about
00:14:47.360 | consistent biological criteria,
00:14:51.100 | that's not historically how we formed our alliances.
00:14:55.100 | - Yeah, so why do we believe
00:14:58.780 | that all humans are created equal?
00:15:01.900 | Killing of a human being, no matter who the human being is,
00:15:05.760 | that's what I meant by equality, is bad.
00:15:09.060 | And then, 'cause I'm connecting that to robots
00:15:12.260 | and I'm wondering whether mortality,
00:15:14.620 | so the killing act is what makes something,
00:15:17.460 | that's the fundamental first right.
00:15:19.860 | So I am currently allowed to take a shotgun
00:15:23.740 | and shoot a Roomba, I think.
00:15:26.580 | I'm not sure, but I'm pretty sure.
00:15:29.100 | It's not considered murder, right?
00:15:31.860 | Or even shutting them off.
00:15:33.480 | So that's where the line appears to be, right?
00:15:38.060 | Is this mortality a critical thing here?
00:15:41.420 | - I think here again, the animal analogy is really useful
00:15:44.860 | because you're also allowed to shoot your dog,
00:15:47.840 | but people won't be happy about it.
00:15:50.140 | So we do give animals certain protections from like,
00:15:54.580 | you're not allowed to torture your dog
00:15:57.160 | and set it on fire,
00:15:58.500 | at least in most states and countries.
00:16:01.840 | But you're still allowed to treat it
00:16:04.360 | like a piece of property in a lot of other ways.
00:16:06.480 | And so we draw these arbitrary lines all the time.
00:16:11.480 | And there's a lot of philosophical thought
00:16:16.960 | on why viewing humans as something unique
00:16:21.080 | is just speciesism and not based on any criteria
00:16:29.360 | that would actually justify making a difference
00:16:32.420 | between us and other species.
00:16:34.580 | - Do you think in general,
00:16:36.060 | most people are good?
00:16:39.820 | (laughs)
00:16:41.700 | Or do you think there's evil and good in all of us?
00:16:45.220 | That's revealed through our circumstances
00:16:50.940 | and through our interactions.
00:16:52.540 | - I like to view myself as a person
00:16:55.220 | who believes that there's no absolute evil and good
00:16:58.120 | and that everything is gray.
00:17:00.760 | But I do think it's an interesting question.
00:17:06.500 | Like when I see people being violent towards robotic objects,
00:17:09.820 | you said that bothers you
00:17:10.980 | because the robots might someday be smart.
00:17:15.300 | And is that what-
00:17:17.620 | - Well, it bothers me because it reveals,
00:17:19.660 | so I personally believe,
00:17:21.660 | 'cause I've studied way too much,
00:17:23.340 | so I'm Jewish,
00:17:24.180 | I studied the Holocaust and World War II exceptionally well.
00:17:27.860 | I personally believe that most of us have evil in us.
00:17:30.720 | That what bothers me is the abuse of robots
00:17:36.320 | reveals the evil in human beings.
00:17:38.720 | - Yeah.
00:17:39.560 | - And I think it doesn't just bother me,
00:17:43.220 | I think it's an opportunity for roboticists
00:17:47.920 | to help people find the better sides,
00:17:52.920 | the angels of their nature, right?
00:17:55.480 | - Yeah.
00:17:56.320 | - So this isn't just a fun side thing,
00:17:58.200 | that's you revealing a dark part
00:18:00.800 | that should be hidden deep inside.
00:18:04.240 | - Yeah, I mean, you laugh,
00:18:06.840 | but some of our research does indicate
00:18:09.360 | that maybe people's behavior towards robots
00:18:12.280 | reveals something about their tendencies
00:18:14.240 | for empathy generally,
00:18:15.160 | even using very simple robots that we have today
00:18:17.280 | that clearly don't feel anything.
00:18:19.600 | So Westworld is maybe,
00:18:24.680 | not so far off and it's like,
00:18:26.520 | depicting the bad characters
00:18:29.380 | as willing to go around and shoot and rape the robots
00:18:31.880 | and the good characters as not wanting to do that,
00:18:34.320 | even without assuming that the robots have consciousness.
00:18:37.720 | - So there's an opportunity,
00:18:39.320 | it's interesting,
00:18:40.160 | there's opportunity to almost practice empathy.
00:18:42.380 | Robots are an opportunity to practice empathy.
00:18:47.960 | - I agree with you.
00:18:49.240 | Some people would say,
00:18:51.780 | why are we practicing empathy on robots
00:18:53.860 | instead of on our fellow humans
00:18:56.120 | or on animals that are actually alive
00:18:58.320 | and experience the world?
00:18:59.980 | And I don't agree with them
00:19:01.480 | because I don't think empathy is a zero sum game
00:19:03.600 | and I do think that it's a muscle that you can train
00:19:05.760 | and that we should be doing that,
00:19:07.160 | but some people disagree.
00:19:10.840 | - So the interesting thing,
00:19:12.360 | you've heard of people raising kids,
00:19:15.740 | sort of asking them or telling them to be nice
00:19:21.820 | to the smart speakers, to Alexa and so on,
00:19:26.320 | saying please and so on during the requests.
00:19:28.840 | I don't know if,
00:19:30.120 | I'm a huge fan of that idea
00:19:32.600 | because yeah,
00:19:33.440 | that's towards the idea of practicing empathy.
00:19:35.440 | I feel like politeness,
00:19:36.560 | I'm always polite to all the systems that we build,
00:19:39.960 | especially anything that's speech interaction based,
00:19:42.040 | like when we talk to the car,
00:19:43.840 | I'll always have a pretty good detector for please.
00:19:46.640 | I feel like there should be a room
00:19:50.160 | for encouraging empathy in those interactions.
00:19:53.400 | Yeah.
00:19:54.240 | - Okay, so I agree with you,
00:19:55.080 | so I'm gonna play devil's advocate.
00:19:56.320 | - Sure.
00:19:57.160 | (both laughing)
00:19:58.440 | - So-- - Yeah,
00:19:59.280 | what is the devil's advocate argument there?
00:20:01.240 | - The devil's advocate argument is that
00:20:03.200 | if you are the type of person who has abusive tendencies
00:20:07.020 | or needs to get some sort of behavior like that out,
00:20:10.120 | needs an outlet for it,
00:20:11.600 | that it's great to have a robot that you can scream at
00:20:15.280 | so that you're not screaming at a person.
00:20:17.120 | And we just don't know whether that's true,
00:20:19.880 | whether it's an outlet for people,
00:20:21.440 | or whether it just kind of, as my friend once said,
00:20:23.620 | trains their cruelty muscles
00:20:25.000 | and makes them more cruel in other situations.
00:20:27.440 | - Oh boy, yeah, and that expands to other topics,
00:20:33.200 | which I don't know, you know,
00:20:35.680 | there's a topic of sex,
00:20:38.960 | which is a weird one that I tend to avoid
00:20:41.600 | from robotics perspective,
00:20:43.040 | and mostly the general public doesn't.
00:20:45.520 | They talk about sex robots and so on.
00:20:48.600 | Is that an area you've touched at all research-wise?
00:20:52.180 | Like the way, 'cause that's what people imagine
00:20:56.680 | sort of any kind of interaction between human and robot
00:20:59.800 | that shows any kind of compassion.
00:21:03.140 | They immediately think from a product perspective
00:21:05.800 | in the near term is sort of expansion of what pornography is
00:21:10.440 | and all that kind of stuff.
00:21:11.560 | - Yeah.
00:21:12.880 | - Do researchers touch this?
00:21:13.720 | - Well, that's kind of you to like characterize it as.
00:21:16.480 | Oh, they're thinking rationally about product.
00:21:18.840 | I feel like sex robots are just such a like titillating
00:21:21.960 | news hook for people that they become like the story.
00:21:26.320 | And it's really hard to not get fatigued by it
00:21:29.520 | when you're in the space,
00:21:30.680 | because you tell someone you do human-robot interaction,
00:21:33.280 | of course, the first thing they wanna talk about
00:21:34.800 | is sex robots.
00:21:35.640 | - Really? - Yeah, it happens a lot.
00:21:37.600 | And it's unfortunate that I'm so fatigued by it
00:21:41.460 | because I do think that there are some interesting questions
00:21:43.900 | that become salient when you talk about sex with robots.
00:21:48.900 | - See, what I think would happen when people get sex robots,
00:21:52.040 | like if you're like, "What's up, guys?"
00:21:53.320 | Okay, guys get female sex robots.
00:21:55.840 | What I think there's an opportunity for is an actual,
00:22:00.620 | like they'll actually interact.
00:22:05.340 | What I'm trying to say, they won't,
00:22:08.200 | outside of the sex would be the most fulfilling part.
00:22:11.440 | Like the interaction, it's like the folks who,
00:22:15.260 | there's movies about this, right?
00:22:15.260 | Who pay a prostitute and then end up just talking to her
00:22:20.140 | the whole time.
00:22:21.020 | So I feel like there's an opportunity.
00:22:22.900 | It's like most guys and people in general joke
00:22:25.820 | about the sex act, but really people are just lonely inside
00:22:30.340 | and they're looking for connection, many of them.
00:22:33.280 | And it'd be unfortunate if that connection is established
00:22:38.280 | through the sex industry.
00:22:41.180 | I feel like it should go into the front door
00:22:43.660 | of like people are lonely and they want a connection.
00:22:47.500 | - Well, I also feel like we should kind of destigmatize
00:22:51.620 | the sex industry because even prostitution,
00:22:56.420 | like there are prostitutes that specialize
00:22:58.140 | in disabled people who don't have the same kind
00:23:01.940 | of opportunities to explore their sexuality.
00:23:06.080 | So I feel like we should destigmatize all of that generally.
00:23:10.760 | - But yeah, that connection and that loneliness
00:23:13.220 | is an interesting topic that you bring up
00:23:16.100 | because while people are constantly worried
00:23:18.860 | about robots replacing humans and oh,
00:23:21.060 | if people get sex robots and the sex is really good,
00:23:23.340 | then they won't want their partner or whatever.
00:23:26.500 | But we rarely talk about robots actually filling a hole
00:23:29.660 | where there's nothing and what benefit
00:23:32.500 | that can provide to people.
00:23:34.780 | - Yeah, I think that's an exciting,
00:23:37.220 | there's a giant hole that's unfillable by humans.
00:23:42.180 | It's asking too much of your friends
00:23:45.080 | and people you're in a relationship with
00:23:46.280 | in your family to fill that hole.
00:23:47.920 | 'Cause it's exploring the full complexity
00:23:53.840 | and richness of who you are.
00:23:57.280 | Like who are you really?
00:23:58.580 | Your family doesn't have enough patience
00:24:03.440 | to really sit there and listen to who are you really?
00:24:06.280 | And I feel like there's an opportunity
00:24:07.840 | to really make that connection with robots.
00:24:10.660 | - I just feel like we're complex as humans
00:24:13.560 | and we're capable of lots of different types
00:24:16.280 | of relationships.
00:24:17.760 | So whether that's with family members, with friends,
00:24:20.520 | with our pets or with robots,
00:24:22.660 | I feel like there's space for all of that
00:24:24.960 | and all of that can provide value in a different way.
00:24:27.980 | - Yeah, absolutely.
00:24:30.200 | So I'm jumping around.
00:24:32.700 | Currently most of my work is in autonomous vehicles.
00:24:36.620 | So the most popular topic among general public
00:24:40.340 | is the trolley problem.
00:24:43.420 | So most roboticists kind of hate this question,
00:24:48.420 | but what do you think of this thought experiment?
00:24:52.660 | What do you think we can learn from it
00:24:54.100 | outside of the silliness of the actual application of it
00:24:57.580 | to the autonomous vehicle?
00:24:59.020 | I think it's still an interesting ethical question
00:25:01.700 | and that in itself,
00:25:04.460 | just like much of the interaction with robots
00:25:06.980 | has something to teach us,
00:25:08.100 | but from your perspective,
00:25:09.500 | do you think there's anything there?
00:25:11.060 | - Well, I think you're right
00:25:12.220 | that it does have something to teach us
00:25:14.460 | but I think what people are forgetting
00:25:16.860 | in all of these conversations
00:25:18.020 | is the origins of the trolley problem
00:25:20.300 | and what it was meant to show us,
00:25:22.660 | which is that there is no right answer
00:25:24.520 | and that sometimes our moral intuition
00:25:27.500 | that comes to us instinctively
00:25:29.900 | is not actually what we should follow
00:25:33.700 | if we care about creating systematic rules
00:25:35.900 | that apply to everyone.
00:25:36.940 | So I think that as a philosophical concept,
00:25:41.940 | it could teach us at least that,
00:25:44.820 | but that's not how people are using it right now.
00:25:47.100 | Like we have, and these are friends of mine
00:25:49.180 | and like I love them dearly
00:25:50.660 | and their project adds a lot of value,
00:25:53.140 | but if we're viewing the moral machine project
00:25:56.660 | as what we can learn from the trolley problems,
00:25:59.380 | so the moral machine is, I'm sure you're familiar,
00:26:02.380 | it's this website that you can go to
00:26:04.140 | and it gives you different scenarios like,
00:26:05.980 | oh, you're in a car, you can decide to run over
00:26:09.380 | these two people or this child, what do you choose?
00:26:12.580 | Do you choose the homeless person?
00:26:14.420 | Do you choose the person who's jaywalking?
00:26:16.180 | And so it pits these like moral choices against each other
00:26:20.700 | and then tries to crowdsource
00:26:22.500 | the quote unquote correct answer,
00:26:25.220 | which is really interesting and I think valuable data,
00:26:29.180 | but I don't think that's what we should base
00:26:31.380 | our rules in autonomous vehicles on
00:26:33.460 | because it is exactly what the trolley problem
00:26:36.340 | is trying to show, which is your first instinct
00:26:39.420 | might not be the correct one if you look at rules
00:26:42.940 | that then have to apply to everyone and everything.
00:26:45.860 | - So how do we encode these ethical choices
00:26:48.300 | in interaction with robots?
00:26:49.660 | So for example, in autonomous vehicles,
00:26:51.780 | there is a serious ethical question of,
00:26:55.420 | do I protect myself?
00:26:59.060 | Does my life have higher priority
00:27:01.060 | than the life of another human being?
00:27:03.860 | Because that changes certain controlled decisions
00:27:06.980 | that you make.
00:27:08.100 | So if your life matters more than other human beings,
00:27:11.660 | then you'd be more likely to swerve
00:27:13.860 | out of your current lane.
00:27:15.100 | So currently automated emergency braking systems
00:27:17.780 | that just brake, they don't ever swerve.
00:27:21.660 | - Right.
00:27:22.500 | - So swerving into oncoming traffic or no,
00:27:25.900 | just in a different lane can cause significant harm
00:27:28.780 | to others, but it's possible that it causes less harm
00:27:32.460 | to you.
00:27:33.280 | So that's a difficult ethical question.
00:27:35.060 | Do you have a hope that,
00:27:39.700 | like the trolley problem is not supposed
00:27:43.340 | to have a right answer, right?
00:27:45.420 | Do you hope that when we have robots at the table,
00:27:48.540 | we'll be able to discover the right answer
00:27:50.420 | for some of these questions?
00:27:51.860 | - Well, what's happening right now, I think,
00:27:55.020 | is this question that we're facing of,
00:27:58.620 | what ethical rules should we be programming
00:28:01.460 | into the machines is revealing to us
00:28:03.340 | that our ethical rules are much less programmable
00:28:07.100 | than we probably thought before.
00:28:09.900 | And so that's a really valuable insight,
00:28:13.980 | I think, that these issues are very complicated
00:28:18.060 | and that in a lot of these cases,
00:28:20.780 | you can't really make that call,
00:28:22.740 | like not even as a legislator.
00:28:24.820 | And so what's gonna happen in reality, I think,
00:28:27.540 | is that car manufacturers are just gonna try
00:28:31.700 | and avoid the problem and avoid liability
00:28:34.300 | in any way possible, or they're gonna always protect
00:28:37.100 | the driver because who's gonna buy a car
00:28:39.180 | if it's programmed to kill you instead of someone else?
00:28:43.820 | So that's what's gonna happen in reality.
00:28:47.300 | But what did you mean by, once we have robots at the table,
00:28:50.700 | do you mean when they can help us figure out what to do?
00:28:54.780 | - No, I mean when robots are part of the ethical decisions.
00:28:59.180 | So no, no, no, not they help us, well.
00:29:02.380 | - Oh, you mean when it's like,
00:29:06.460 | should I run over a robot or a person?
00:29:09.340 | - Right, that kind of thing.
00:29:10.540 | So, no, no, no, no.
00:29:12.580 | So when you, it's exactly what you said,
00:29:15.500 | which is when you have to encode the ethics
00:29:18.580 | into an algorithm, you start to try to really understand
00:29:22.900 | what are the fundamentals of the decision-making process
00:29:25.940 | you make to make certain decisions.
00:29:27.940 | Should you, like capital punishment,
00:29:32.360 | should you take a person's life or not
00:29:34.180 | to punish them for a certain crime?
00:29:36.660 | Sort of, you can use, you can develop an algorithm
00:29:40.180 | to make that decision, right?
00:29:41.720 | And the hope is that the act of making that algorithm,
00:29:47.380 | however you make it, so there's a few approaches,
00:29:51.260 | will help us actually get to the core
00:29:54.340 | of what is right and what is wrong
00:29:57.780 | under our current societal standards.
00:30:00.980 | - But isn't that what's happening right now?
00:30:02.740 | And we're realizing that we don't have a consensus
00:30:05.540 | on what's right and wrong.
00:30:06.780 | - You mean in politics in general?
00:30:08.420 | - Well, like when we're thinking about these trolley problems
00:30:10.980 | and autonomous vehicles and how to program ethics
00:30:13.720 | into machines and how to make AI algorithms fair
00:30:21.040 | and equitable, we're realizing that this is so complicated
00:30:24.640 | and it's complicated in part because there doesn't seem
00:30:28.240 | to be a one right answer in any of these cases.
00:30:30.720 | - Do you have a hope for, like one of the ideas
00:30:33.080 | of the moral machine is that crowdsourcing
00:30:35.240 | can help us converge towards, like democracy
00:30:39.440 | can help us converge towards the right answer.
00:30:42.200 | Do you have a hope for crowdsourcing?
00:30:44.000 | - Well, yes and no.
00:30:45.920 | So I think that in general, I have a legal background
00:30:49.320 | and policymaking is often about trying to suss out,
00:30:52.440 | what rules does this particular society agree on
00:30:56.360 | and then trying to codify that.
00:30:57.540 | So the law makes these choices all the time
00:30:59.920 | and then tries to adapt according to changing culture.
00:31:02.760 | But in the case of the moral machine project,
00:31:06.200 | I don't think that people's choices on that website
00:31:08.720 | necessarily reflect what laws they would want in place.
00:31:13.720 | If given, I think you would have to ask them a series
00:31:16.740 | of different questions in order to get at
00:31:18.840 | what their consensus is.
00:31:20.920 | - I agree, but that has to do more
00:31:22.640 | with the artificial nature of, I mean,
00:31:25.400 | they're showing some cute icons on a screen.
00:31:28.120 | That's almost, so if you, for example,
00:31:31.640 | we do a lot of work in virtual reality.
00:31:33.920 | And so if you put those same people into virtual reality
00:31:38.040 | where they have to make that decision,
00:31:40.040 | their decision would be very different, I think.
00:31:42.880 | - I agree with that.
00:31:44.120 | That's one aspect.
00:31:45.360 | And the other aspect is, it's a different question
00:31:47.640 | to ask someone, would you run over the homeless person
00:31:51.320 | or the doctor in this scene?
00:31:53.680 | Or do you want cars to always run over the homeless people?
00:31:57.280 | - I see, yeah.
00:31:58.400 | So let's talk about anthropomorphism.
00:32:01.200 | To me, anthropomorphism, if I can pronounce it correctly,
00:32:05.840 | is one of the most fascinating phenomena
00:32:09.240 | from both the engineering perspective
00:32:12.080 | and the psychology perspective, machine learning perspective
00:32:14.600 | and robotics in general.
00:32:16.800 | Can you step back and define anthropomorphism,
00:32:20.640 | how you see it in general terms in your work?
00:32:25.440 | - Sure, so anthropomorphism is this tendency
00:32:28.620 | that we have to project human-like traits
00:32:32.160 | and behaviors and qualities onto non-humans.
00:32:35.240 | And we often see it with animals,
00:32:37.520 | like we'll project emotions on animals
00:32:40.040 | that may or may not actually be there.
00:32:42.200 | We often see that we're trying to interpret things
00:32:44.800 | according to our own behavior when we get it wrong.
00:32:47.360 | But we do it with more than just animals.
00:32:50.040 | We do it with objects, teddy bears.
00:32:52.120 | We see faces in the headlights of cars.
00:32:55.480 | And we do it with robots, very, very extremely.
00:32:59.360 | - You think that can be engineered?
00:33:01.080 | Can that be used to enrich an interaction
00:33:03.920 | between an AI system and a human?
00:33:07.320 | - Oh yeah, for sure.
00:33:08.600 | - And do you see it being used that way often?
00:33:13.600 | - I don't.
00:33:14.680 | I haven't seen, whether it's Alexa
00:33:18.280 | or any of the smart speaker systems,
00:33:20.800 | often trying to optimize for the anthropomorphization.
00:33:25.560 | - You said you haven't seen?
00:33:28.080 | - I haven't seen.
00:33:28.920 | They keep moving away from that.
00:33:30.440 | I think they're afraid of that.
00:33:32.560 | - They actually, so I only recently found out,
00:33:35.860 | but did you know that Amazon has a whole team of people
00:33:39.120 | who are just there to work on Alexa's personality?
00:33:42.900 | - So I know, it depends on your personality.
00:33:48.160 | I didn't know that exact thing,
00:33:50.640 | but I do know that how the voice is perceived
00:33:55.640 | is worked on a lot,
00:33:58.400 | whether if it's a pleasant feeling about the voice,
00:34:02.040 | but that has to do more with the texture of the sound
00:34:04.080 | and the audio and so on.
00:34:05.360 | But personality is more like...
00:34:08.720 | It's like, what's her favorite beer when you ask her?
00:34:11.080 | And the personality team is different for every country too.
00:34:14.520 | Like there's a different personality for a German Alexa
00:34:17.160 | than there is for American Alexa.
00:34:19.560 | That said, I think it's very difficult to use the,
00:34:24.280 | or really, really harness the anthropomorphism
00:34:29.720 | with these voice assistants
00:34:31.680 | because the voice interface is still very primitive.
00:34:36.320 | And I think that in order to get people
00:34:38.720 | to really suspend their disbelief
00:34:40.800 | and treat a robot like it's alive, less is sometimes more.
00:34:45.480 | You want them to project onto the robot
00:34:48.440 | and you want the robot to not disappoint their expectations
00:34:50.760 | for how it's going to answer or behave
00:34:53.320 | in order for them to have this kind of illusion.
00:34:56.180 | And with Alexa, I don't think we're there yet,
00:34:59.580 | or Siri, that they're just not good at that.
00:35:03.000 | But if you look at some of the more animal-like robots,
00:35:06.880 | like the baby seal that they use with the dementia patients,
00:35:10.440 | it's a much more simple design.
00:35:11.780 | It doesn't try to talk to you.
00:35:13.080 | It can't disappoint you in that way.
00:35:14.640 | It just makes little movements and sounds.
00:35:16.660 | And people stroke it and it responds to their touch.
00:35:19.960 | And that is a very effective way to harness people's tendency
00:35:24.960 | to kind of treat the robot like a living thing.
00:35:28.080 | - Yeah, so you bring up some interesting ideas
00:35:32.400 | in your paper chapter, I guess,
00:35:38.520 | anthropomorphic framing in human-robot interaction,
00:35:38.520 | that I read the last time we scheduled this.
00:35:40.880 | - Oh my God, that was a long time ago.
00:35:42.840 | - What are some good and bad cases
00:35:46.280 | of anthropomorphism in your perspective?
00:35:48.480 | Like, when is it good, when is it bad?
00:35:52.120 | - Well, I should start by saying that,
00:35:54.920 | while design can really enhance the anthropomorphism,
00:35:57.360 | it doesn't take a lot to get people
00:35:59.840 | to treat a robot like it's alive.
00:36:01.360 | Like people will, over 85% of Roombas have a name,
00:36:05.400 | which I don't know the numbers
00:36:07.320 | for your regular type of vacuum cleaner,
00:36:09.080 | but they're not that high, right?
00:36:10.920 | So people will feel bad for the Roomba when it gets stuck.
00:36:13.340 | They'll send it in for repair
00:36:14.760 | and wanna get the same one back.
00:36:15.920 | And that one is not even designed to like make you do that.
00:36:19.260 | So I think that some of the cases
00:36:23.480 | where it's maybe a little bit concerning
00:36:25.800 | that anthropomorphism is happening
00:36:27.360 | is when you have something
00:36:29.120 | that's supposed to function like a tool
00:36:30.600 | and people are using it in the wrong way.
00:36:32.680 | And one of the concerns is military robots,
00:36:35.880 | where, so, gosh,
00:36:39.920 | like early 2000s, which is a long time ago,
00:36:45.840 | iRobot, the Roomba company,
00:36:47.800 | made this robot called the Pakbot
00:36:49.720 | that was deployed in Iraq and Afghanistan
00:36:53.760 | with the bomb disposal units that were there.
00:36:56.160 | And the soldiers became very emotionally attached
00:36:59.640 | to the robots.
00:37:00.680 | - Wow.
00:37:02.040 | - And that's fine until a soldier risks his life
00:37:06.960 | to save a robot, which you really don't want.
00:37:09.880 | But they were treating them like pets.
00:37:11.760 | Like they would name them,
00:37:12.800 | they would give them funerals with gun salutes.
00:37:15.080 | They would get really upset and traumatized
00:37:17.400 | when the robot got broken.
00:37:18.680 | So in situations where you want a robot to be a tool,
00:37:23.320 | in particular when it's supposed to like do a dangerous job
00:37:26.020 | that you don't want a person doing,
00:37:28.120 | it can be hard when people get emotionally attached to it.
00:37:32.560 | That's maybe something that you would want to discourage.
00:37:35.360 | Another case for concern is maybe when companies try
00:37:39.400 | to leverage the emotional attachment to exploit people.
00:37:43.580 | So if it's something that's not in the consumer's interest,
00:37:47.920 | trying to like sell them products or services
00:37:50.680 | or exploit an emotional connection to keep them,
00:37:52.840 | you know, paying for a cloud service for a social robot
00:37:55.520 | or something like that might be,
00:37:57.840 | I think that's a little bit concerning as well.
00:38:00.240 | - Yeah, the emotional manipulation,
00:38:01.960 | which probably happens behind the scenes now
00:38:04.280 | with some like social networks and so on,
00:38:06.400 | but making it more explicit.
00:38:08.940 | What's your favorite robot?
00:38:11.520 | Like- - Fictional or real?
00:38:13.840 | - No, real.
00:38:15.060 | Real robot, which you have felt a connection with,
00:38:19.320 | or not like, not anthropomorphic connection,
00:38:24.060 | but I mean like you sit back and said,
00:38:27.200 | "Damn, this is an impressive system."
00:38:32.200 | - Wow, so two different robots.
00:38:34.440 | So the Pleo baby dinosaur robot that is no longer sold
00:38:39.000 | that came out in 2007, that one I was very impressed with.
00:38:44.000 | But from an anthropomorphic perspective,
00:38:46.200 | I was impressed with how much I bonded with it,
00:38:48.200 | how much I like wanted to believe
00:38:50.300 | that it had this inner life.
00:38:51.920 | - Can you describe Pleo, can you describe what it is?
00:38:55.540 | How big is it?
00:38:56.580 | What can it actually do?
00:38:58.300 | - Yeah, Pleo is about the size of a small cat.
00:39:02.540 | It had a lot of like motors
00:39:05.860 | that gave it this kind of lifelike movement.
00:39:08.460 | It had things like touch sensors and an infrared camera.
00:39:11.220 | So it had all these like cool little technical features,
00:39:13.860 | even though it was a toy.
00:39:15.160 | And the thing that really struck me about it
00:39:19.660 | was that it could mimic pain and distress really well.
00:39:23.780 | So if you held it up by the tail,
00:39:25.220 | it had a tilt sensor that told it
00:39:27.340 | what direction it was facing
00:39:28.460 | and it would start to squirm and cry out.
00:39:31.980 | If you hit it too hard, it would start to cry.
00:39:34.260 | So it was very impressive in design.
00:39:38.420 | - And what's the second robot that you were,
00:39:41.100 | you said there might've been two that you liked?
00:39:43.780 | - Yeah, so the Boston Dynamics robots
00:39:46.900 | are just impressive feats of engineering.
00:39:50.020 | - Have you met them in person?
00:39:51.460 | - Yeah, I recently got a chance to go visit.
00:39:53.380 | And I was always one of those people who watched the videos
00:39:56.260 | and was like, this is super cool,
00:39:58.140 | but also it's a product video.
00:39:59.820 | Like I don't know how many times
00:40:01.340 | that they had to shoot this to get it right.
00:40:03.580 | But visiting them, I'm pretty sure that,
00:40:07.700 | I was very impressed, let's put it that way.
00:40:10.200 | - Yeah, in terms of the control,
00:40:11.780 | I think that was a transformational moment for me
00:40:15.820 | when I met Spot Mini in person.
00:40:17.940 | Because, okay, maybe this is a psychology experiment,
00:40:23.700 | but I anthropomorphized the crap out of it.
00:40:27.860 | So I immediately, it was like my best friend, right?
00:40:31.660 | - I think it's really hard for anyone to watch Spot move
00:40:33.940 | and not feel like it has agency.
00:40:35.940 | - Yeah, especially the arm on Spot Mini
00:40:40.940 | really obviously looks like a head.
00:40:45.700 | They say, no, they didn't mean it that way,
00:40:48.260 | but it obviously, it looks exactly like that.
00:40:51.620 | And so it's almost impossible to not think of it
00:40:54.140 | as almost like the baby dinosaur, but slightly larger.
00:40:59.140 | And this movement of the, of course, the intelligence is,
00:41:02.500 | their whole idea is that it's not supposed
00:41:05.060 | to be intelligent, it's a platform
00:41:06.940 | on which you build higher intelligence.
00:41:09.620 | Actually, really, really dumb.
00:41:11.500 | It's just a basic movement platform.
00:41:13.740 | - Yeah, but even dumb robots can,
00:41:16.180 | we can immediately respond to them in this visceral way.
00:41:20.100 | - What are your thoughts about Sophia the robot?
00:41:23.140 | This kind of mix of some basic natural language processing
00:41:27.540 | and basically an art experiment.
00:41:31.140 | - Yeah, an art experiment is a good way to characterize it.
00:41:34.700 | I'm much less impressed with Sophia
00:41:36.580 | than I am with Boston Dynamics.
00:41:37.980 | - She said she likes you.
00:41:39.260 | She said she admires you.
00:41:40.420 | - Is she?
00:41:41.780 | - She followed me on Twitter at some point, yeah.
00:41:44.300 | - And she tweets about how much she likes you, so.
00:41:46.580 | - So what does that mean?
00:41:47.500 | I have to be nice or?
00:41:48.500 | - No, I don't know.
00:41:49.340 | (laughing)
00:41:50.260 | See, I was emotionally manipulating you.
00:41:52.260 | No, how do you think of the whole thing
00:41:57.340 | that happened with Sophia is quite a large number of people
00:42:01.540 | kind of immediately had a connection
00:42:03.420 | and thought that maybe we're far more advanced
00:42:06.220 | with robotics than we are,
00:42:07.460 | or actually didn't even think much.
00:42:09.500 | I was surprised how little people cared
00:42:12.260 | that they kind of assumed that,
00:42:16.820 | well, of course AI can do this.
00:42:19.300 | - Yeah.
00:42:20.340 | - And then if they assume that,
00:42:23.260 | I felt they should be more impressed.
00:42:25.940 | (laughing)
00:42:27.060 | - Well, people really overestimate where we are.
00:42:30.220 | And so when something,
00:42:31.820 | I don't even think Sophia was very impressive
00:42:34.540 | or is very impressive.
00:42:35.500 | I think she's kind of a puppet, to be honest.
00:42:37.620 | But yeah, I think people have,
00:42:40.220 | are a little bit influenced by science fiction
00:42:42.180 | and pop culture to think that we should be further along
00:42:44.860 | than we are.
00:42:45.700 | - So what's your favorite robots and movies and fiction?
00:42:49.780 | - "WALL-E".
00:42:50.980 | - "WALL-E".
00:42:52.540 | What do you like about "WALL-E"?
00:42:54.420 | The humor, the cuteness,
00:42:56.460 | the perception control systems operating on "WALL-E"
00:43:00.860 | that makes it all work out,
00:43:03.300 | (laughing)
00:43:03.300 | just in general?
00:43:04.500 | - The design of "WALL-E", the robot,
00:43:07.180 | I think that animators figured out,
00:43:10.420 | starting in like the 1940s,
00:43:12.420 | how to create characters that don't look real,
00:43:16.420 | but look like something that's even better than real,
00:43:20.460 | that we really respond to and think is really cute.
00:43:22.780 | They figured out how to make them move
00:43:24.460 | and look in the right way.
00:43:26.500 | And "WALL-E" is just such a great example of that.
00:43:29.060 | - You think eyes, big eyes,
00:43:30.700 | or big something that's kind of eye-ish.
00:43:33.300 | So it's always playing on some aspect
00:43:36.780 | of the human face, right?
00:43:38.860 | - Often, yeah.
00:43:40.060 | So big eyes.
00:43:41.300 | Well, I think one of the first animations
00:43:45.100 | to really play with this was "Bambi",
00:43:47.180 | and they weren't originally gonna do that.
00:43:49.180 | They were originally trying to make the deer
00:43:51.100 | look as lifelike as possible.
00:43:52.700 | They brought deer into the studio
00:43:54.260 | and had a little zoo there
00:43:55.340 | so that the animators could work with them.
00:43:57.220 | And then at some point they were like,
00:43:58.820 | "Hmm, if we make really big eyes
00:44:00.940 | and a small nose and big cheeks,
00:44:02.980 | kind of more like a baby face,
00:44:04.660 | then people like it even better than if it looks real."
00:44:08.420 | - Do you think the future of things like Alexa
00:44:13.260 | and the home has possibility to take advantage of that,
00:44:17.540 | to build on that,
00:44:18.840 | to create these systems that are better than real,
00:44:23.840 | that create a close human connection?
00:44:27.260 | - I can pretty much guarantee you
00:44:28.660 | without having any knowledge
00:44:30.540 | that those companies are working on that,
00:44:34.340 | on that design behind the scenes.
00:44:36.260 | Like I'm pretty sure-
00:44:37.700 | - I totally disagree with you.
00:44:39.060 | - Really?
00:44:39.900 | - So that's what I'm interested in.
00:44:41.220 | I'd like to build such a company.
00:44:42.880 | I know a lot of those folks and they're afraid of that
00:44:45.660 | because you don't, how do you make money off of it?
00:44:48.260 | - Well, but even just like making Alexa
00:44:51.900 | look a little bit more interesting
00:44:53.220 | than just like a cylinder would do so much.
00:44:56.900 | - It's an interesting thought,
00:44:58.580 | but I don't think people are,
00:45:01.780 | from Amazon perspective,
00:45:03.060 | are looking for that kind of connection.
00:45:05.180 | They want you to be addicted to the services
00:45:08.300 | provided by Alexa, not to the device.
00:45:10.980 | So the device itself, it's felt that you can lose a lot
00:45:15.980 | because if you create a connection
00:45:19.660 | and then it creates more opportunity for frustration,
00:45:23.980 | for negative stuff, than it does for positive stuff.
00:45:28.980 | That's, I think, the way they think about it.
00:45:31.100 | - That's interesting.
00:45:32.100 | Like I agree that there's,
00:45:33.780 | it's very difficult to get right
00:45:36.020 | and you have to get it exactly right.
00:45:37.420 | Otherwise you wind up with Microsoft's Clippy.
00:45:40.100 | - Okay, easy now.
00:45:42.620 | What's your problem with Clippy?
00:45:44.500 | - You like Clippy?
00:45:45.340 | Is Clippy your friend?
00:45:46.180 | - Yeah, I miss Clippy.
00:45:47.740 | I was just, I just talked to, we just had this argument
00:45:51.020 | with the Microsoft CTO,
00:45:52.900 | and he said he's not bringing Clippy back.
00:45:55.860 | They're not bringing Clippy back
00:45:58.300 | and that's very disappointing.
00:46:00.180 | I think it was, Clippy was the greatest assistant
00:46:05.180 | we've ever built.
00:46:07.300 | It was a horrible attempt, of course,
00:46:09.220 | but it's the best we've ever done
00:46:11.020 | because it was a real attempt to have,
00:46:13.140 | like, an actual personality.
00:46:16.220 | And I mean, obviously the technology
00:46:18.460 | wasn't there at the time
00:46:22.020 | to be a recommender system
00:46:24.300 | for assisting you with anything you're typing in Word
00:46:28.220 | or any other application,
00:46:29.620 | but it was still an attempt at personality
00:46:31.420 | that was legitimate.
00:46:32.620 | - That's true.
00:46:33.460 | - Which I thought was brave.
00:46:34.980 | - Yes, yes, okay.
00:46:36.980 | You know, you've convinced me
00:46:37.860 | I'll be slightly less hard on Clippy.
00:46:39.940 | - And I know I have like an army of people behind me
00:46:42.540 | who also miss Clippy.
00:46:44.020 | - Really?
00:46:44.940 | I wanna meet these people.
00:46:46.020 | Who are these people?
00:46:47.300 | - It's the people who like to hate stuff when it's there
00:46:51.980 | and miss it when it's gone.
00:46:53.940 | (laughing)
00:46:55.460 | - So everyone.
00:46:56.300 | - It's everyone, exactly.
00:46:58.980 | - All right, so Anki and Jibo,
00:47:03.940 | the two companies, two amazing companies,
00:47:07.020 | social robotics companies
00:47:08.980 | that have recently been closed down.
00:47:10.740 | - Yes.
00:47:11.580 | - Why do you think it's so hard
00:47:14.540 | to create a personal robotics company?
00:47:16.700 | So making a business out of essentially something
00:47:20.780 | that people would anthropomorphize,
00:47:22.940 | have a deep connection with.
00:47:24.620 | Why is it so hard to make it work?
00:47:26.500 | Is the business case not there
00:47:27.860 | or what is it?
00:47:29.540 | - I think it's a number of different things.
00:47:32.580 | I don't think it's going to be this way forever.
00:47:35.780 | I think at this current point in time,
00:47:38.020 | it takes so much work to build something
00:47:42.340 | that only barely meets people's like minimal expectations
00:47:46.980 | because of science fiction and pop culture
00:47:49.340 | giving people this idea
00:47:50.500 | that we should be further than we already are.
00:47:52.500 | Like when people think about a robot assistant in the home,
00:47:55.460 | they think about Rosie from the Jetsons
00:47:57.700 | or something like that.
00:47:59.020 | And Anki and Jibo did such a beautiful job
00:48:02.820 | with the design and getting that interaction just right.
00:48:06.460 | But I think people just wanted more.
00:48:08.740 | They wanted more functionality.
00:48:10.180 | I think you're also right that the business case
00:48:12.780 | isn't really there
00:48:13.780 | because there hasn't been a killer application
00:48:17.180 | that's useful enough to get people to adopt the technology
00:48:20.140 | in great numbers.
00:48:22.100 | I think what we did see from the people who did get Jibo
00:48:25.740 | is a lot of them became very emotionally attached to it.
00:48:29.700 | But that's not, I mean,
00:48:31.500 | it's kind of like the Palm Pilot back in the day.
00:48:33.660 | Most people are like, why do I need this?
00:48:35.180 | Why would I?
00:48:36.020 | They don't see how they would benefit from it
00:48:37.580 | until they have it or some other company comes in
00:48:41.220 | and makes it a little better.
00:48:42.660 | - Yeah, like how far away are we, do you think?
00:48:46.580 | Like how hard is this problem?
00:48:48.380 | - It's a good question.
00:48:49.200 | And I think it has a lot to do with people's expectations
00:48:51.900 | and those keep shifting
00:48:53.380 | depending on what science fiction that is popular.
00:48:56.340 | - But also it's two things.
00:48:58.020 | It's people's expectation
00:48:59.460 | and people's need for an emotional connection.
00:49:03.420 | - Yeah.
00:49:04.260 | - And I believe the need is pretty high.
00:49:07.340 | - Yes, but I don't think we're aware of it.
00:49:10.260 | - That's right.
00:49:11.100 | There's like, I really think this is like the life
00:49:15.080 | as we know it.
00:49:16.100 | So we've just kind of gotten used to it.
00:49:18.580 | I've really, I hate to be dark
00:49:21.460 | 'cause I have close friends,
00:49:23.780 | but we've gotten used to really never being close to anyone.
00:49:28.780 | All right.
00:49:30.180 | And we're deeply, I believe, okay, this is hypothesis.
00:49:33.620 | I think we're deeply lonely, all of us,
00:49:35.500 | even those in deep fulfilling relationships.
00:49:37.900 | In fact, what makes those relationships fulfilling,
00:49:40.340 | I think, is that they at least tap
00:49:42.340 | into that deep loneliness a little bit.
00:49:45.060 | But I feel like there's more opportunity to explore that
00:49:48.660 | in a way that doesn't interfere with the human relationships you have.
00:49:52.660 | It expands more on the, yeah,
00:49:56.060 | the rich, deep, unexplored complexity
00:49:58.620 | that's all of us weird apes.
00:50:01.620 | Okay.
00:50:02.660 | - I think you're right.
00:50:03.500 | - Do you think it's possible to fall in love with a robot?
00:50:06.220 | - Oh yeah, totally.
00:50:08.940 | - Do you think it's possible
00:50:11.100 | to have a long-term committed monogamous relationship
00:50:14.020 | with a robot?
00:50:15.660 | - Well, yeah, there are lots of different types
00:50:17.000 | of long-term committed monogamous relationships.
00:50:20.420 | - I think monogamous implies
00:50:22.220 | like you're not going to see other humans sexually,
00:50:25.740 | or like you basically on Facebook have to say,
00:50:29.700 | I'm in a relationship with this person, this robot.
00:50:33.020 | - I just don't, like, again,
00:50:34.660 | I think this is comparing robots to humans
00:50:36.700 | when I would rather compare them to pets.
00:50:39.580 | Like you get a robot, it fulfills, you know,
00:50:42.980 | this loneliness that you have,
00:50:45.900 | maybe not the same way as a pet,
00:50:48.680 | maybe in a different way that is even, you know,
00:50:51.680 | supplemental in a different way.
00:50:53.680 | But, you know, I'm not saying that people won't like do this
00:50:57.600 | be like, oh, I want to marry my robot,
00:50:59.480 | or I want to have like a, you know,
00:51:01.160 | sexual relation, monogamous relationship with my robot.
00:51:04.420 | But I don't think that that's the main use case for them.
00:51:08.760 | - But you think that there's still a gap between human
00:51:12.640 | and pet.
00:51:14.560 | So between a husband and pet, there's a--
00:51:22.480 | - It's a different relationship.
00:51:23.480 | - It's an engineering problem,
00:51:24.560 | so that's a gap that can be closed through--
00:51:27.400 | - I think it could be closed someday,
00:51:29.500 | but why would we close that?
00:51:31.320 | Like, I think it's so boring to think about recreating
00:51:33.920 | things that we already have when we could create something
00:51:37.520 | that's different.
00:51:42.200 | - I know you're thinking about the people who like,
00:51:43.760 | don't have a husband and like, what can we give them?
00:51:46.860 | - Yeah, but let's, I guess what I'm getting at is,
00:51:51.260 | maybe not.
00:51:54.080 | So like the movie "Her."
00:51:56.400 | - Yeah.
00:51:57.240 | - Right, so a better husband.
00:52:01.000 | - Well, maybe better in some ways.
00:52:02.760 | Like it's, I do think that robots are gonna continue
00:52:06.000 | to be a different type of relationship,
00:52:08.860 | even if we get them like very human looking,
00:52:11.440 | or when the voice interactions we have with them
00:52:14.720 | feel very like natural and human-like.
00:52:17.120 | I think there's still gonna be differences,
00:52:19.840 | and there were in that movie too, like towards the end,
00:52:22.000 | it kind of goes off the rails.
00:52:23.360 | - But it's just a movie.
00:52:24.200 | So your intuition is that,
00:52:27.480 | 'cause you kind of said two things, right?
00:52:31.640 | So one is, why would you want to basically
00:52:36.360 | replicate the husband, right?
00:52:39.160 | And the other is kind of implying that
00:52:42.040 | it's kind of hard to do.
00:52:43.660 | So like, anytime you try,
00:52:47.080 | you might build something very impressive,
00:52:48.680 | but it'll be different.
00:52:49.840 | I guess my question is about human nature.
00:52:54.320 | It's like, how hard is it to satisfy that role
00:52:59.320 | of the husband?
00:53:01.020 | So setting any of the sexual stuff aside,
00:53:03.600 | it's more like the mystery, the tension,
00:53:07.740 | the dance of relationships.
00:53:09.720 | Do you think with robots that's difficult to build?
00:53:13.040 | What's your intuition?
00:53:13.880 | - I think that,
00:53:15.160 | well, it also depends on are we talking about robots now,
00:53:20.560 | in 50 years, in like indefinite amount of time,
00:53:24.120 | where like-- - I'm thinking like
00:53:25.520 | five to 10 years.
00:53:26.360 | - Five or 10 years.
00:53:27.200 | I think that robots at best will be like,
00:53:30.220 | it's more similar to the relationship we have with our pets
00:53:34.000 | than relationship that we have with other people.
00:53:36.720 | - I got it.
00:53:37.540 | So what do you think it takes to build a system
00:53:41.280 | that exhibits greater and greater levels of intelligence?
00:53:44.320 | Like, it impresses us with its intelligence.
00:53:47.080 | You know, Arumba,
00:53:48.260 | so you talk about anthropomorphization,
00:53:50.680 | that doesn't, I think intelligence is not required.
00:53:53.960 | In fact, intelligence probably gets in the way sometimes,
00:53:56.000 | like you mentioned.
00:53:56.960 | But what do you think it takes to create a system
00:54:03.000 | where we sense that it has a human level intelligence?
00:54:07.240 | So probably something conversational,
00:54:11.100 | human level intelligence.
00:54:12.140 | How hard do you think that problem is?
00:54:13.980 | It'd be interesting to sort of hear your perspective,
00:54:16.580 | not just purely, so I talk to a lot of people about
00:54:20.700 | how hard the problem of conversational agents is.
00:54:23.340 | - Yeah.
00:54:24.180 | - How hard is it to pass a Turing test?
00:54:25.760 | But my sense is that it's easier than solving
00:54:33.380 | the pure natural language processing problem,
00:54:36.740 | because I feel like you can cheat.
00:54:38.380 | - Yeah.
00:54:39.660 | - So yeah, so how hard is it to pass a Turing test
00:54:43.360 | in your view?
00:54:44.340 | - Well, I think, again,
00:54:45.620 | it's all about expectation management.
00:54:47.420 | If you set up people's expectations
00:54:49.740 | to think that they're communicating with,
00:54:52.260 | what was it, a 13-year-old boy from the Ukraine?
00:54:54.500 | - Yeah, that's right, yeah.
00:54:55.500 | - Then they're not gonna expect perfect English,
00:54:57.480 | they're not gonna expect perfect understanding of concepts,
00:55:01.180 | or even being on the same wavelength
00:55:03.500 | in terms of conversation flow.
00:55:05.560 | So it's much easier to pass in that case.
00:55:09.100 | - Do you think, you kind of alluded to this too with audio,
00:55:14.700 | do you think it needs to have a body?
00:55:16.700 | - I think that we definitely have,
00:55:21.340 | so we treat physical things with more social agency,
00:55:25.060 | 'cause we're very physical creatures.
00:55:26.660 | I think a body can be useful.
00:55:28.840 | - Does it get in the way?
00:55:34.220 | Are there negative aspects, like--
00:55:37.020 | - Yeah, there can be.
00:55:38.500 | So if you're trying to create a body
00:55:40.140 | that's too similar to something
00:55:41.480 | that people are familiar with,
00:55:42.580 | like I have this robot cat at home that Hasbro makes,
00:55:45.900 | and it's very disturbing to watch
00:55:48.780 | because I'm constantly assuming
00:55:50.820 | that it's gonna move like a real cat,
00:55:52.620 | and it doesn't 'cause it's like a $100 piece of technology.
00:55:57.180 | So it's very disappointing,
00:56:00.460 | and it's very hard to treat it like it's alive.
00:56:04.000 | So you can get a lot wrong with the body too,
00:56:06.020 | but you can also use tricks,
00:56:07.620 | same as the expectation management
00:56:10.260 | of the 13-year-old boy from the Ukraine.
00:56:11.880 | If you pick an animal
00:56:13.060 | that people aren't intimately familiar with,
00:56:15.240 | like the baby dinosaur, like the baby seal
00:56:17.340 | that people have never actually held in their arms,
00:56:20.180 | you can get away with much more
00:56:21.540 | because they don't have these preformed expectations.
00:56:24.620 | - Yeah, I remember you, I'm thinking of a TED Talk
00:56:26.460 | or something that clicked for me
00:56:28.300 | that nobody actually knows what a dinosaur looks like.
00:56:33.100 | So you can actually get away with a lot more.
00:56:35.360 | That was great.
00:56:37.800 | Do you think it needs,
00:56:38.900 | so what do you think about consciousness and mortality
00:56:44.660 | being displayed in a robot?
00:56:48.680 | So not actually having consciousness,
00:56:54.560 | but having these kind of human elements
00:56:57.060 | that are much more than just the interaction,
00:57:00.040 | much more than just, like you mentioned,
00:57:02.660 | with a dinosaur moving kind of in interesting ways,
00:57:05.860 | but really being worried about its own death
00:57:09.860 | and really acting as if it's aware
00:57:13.680 | and self-aware and identity.
00:57:15.540 | Have you seen that done in robotics?
00:57:17.820 | What do you think about doing that?
00:57:19.580 | Is that a powerful, good thing?
00:57:24.700 | - Well, I think it can be a design tool
00:57:27.580 | that you can use for different purposes.
00:57:29.200 | So I can't say whether it's inherently good or bad,
00:57:31.340 | but I do think it can be a powerful tool.
00:57:34.780 | The fact that the, you know,
00:57:36.660 | Plio mimics distress when you quote-unquote hurt it
00:57:41.660 | is a really powerful tool
00:57:46.060 | to get people to engage with it in a certain way.
00:57:49.180 | I had a research partner
00:57:50.500 | that I did some of the empathy work with named Palash Nandy,
00:57:53.660 | and he had built a robot for himself
00:57:55.700 | that had like a lifespan
00:57:57.340 | and that would stop working after a certain amount of time
00:58:00.620 | just because he was interested in like
00:58:02.020 | whether he himself would treat it differently.
00:58:04.260 | And we know from, you know, Tamagotchis,
00:58:07.540 | those little games that we used to have
00:58:10.740 | that were extremely primitive,
00:58:12.080 | that people respond to this idea of mortality.
00:58:15.720 | And, you know, you can get people to do a lot
00:58:18.940 | with little design tricks like that.
00:58:20.980 | Now, whether it's a good thing
00:58:21.820 | depends on what you're trying to get them to do.
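(A minimal illustrative sketch, not from the conversation: the "lifespan" design trick described above, a robot that degrades and then stops working after a set amount of time, could look something like the toy Python below. All names here, such as LifespanRobot, lifespan_seconds, and respond, are hypothetical and purely for illustration, in the spirit of a Tamagotchi that eventually "dies.")

```python
import time


class LifespanRobot:
    """Toy social agent with a finite lifespan (illustrative sketch only)."""

    def __init__(self, lifespan_seconds: float):
        self.birth = time.monotonic()
        self.lifespan = lifespan_seconds

    def age_fraction(self) -> float:
        """How far along the robot is in its 'life', from 0.0 to 1.0."""
        return min((time.monotonic() - self.birth) / self.lifespan, 1.0)

    def respond(self, utterance: str) -> str:
        """Respond more weakly as the robot 'ages', and not at all once it has 'died'."""
        a = self.age_fraction()
        if a >= 1.0:
            return ""                      # the robot has stopped working
        if a > 0.8:
            return "..."                   # near end of life: barely responsive
        return f"*perks up* {utterance}!"  # healthy: enthusiastic response


if __name__ == "__main__":
    robot = LifespanRobot(lifespan_seconds=5.0)
    for _ in range(3):
        print(robot.respond("hello"))
        time.sleep(2.5)  # the third response is empty: the robot has 'died'
```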
00:58:24.580 | - Have a deeper relationship,
00:58:27.020 | have a deeper connection.
00:58:29.400 | If it's for their own benefit, that sounds great.
00:58:32.780 | - Okay.
00:58:33.620 | - You could do that for a lot of other reasons.
00:58:36.700 | - I see, so what kind of stuff are you worried about?
00:58:38.900 | So is it mostly about manipulation of your emotions
00:58:41.580 | for like advertisements and so on, things like that?
00:58:44.260 | - Yeah, or data collection, or I mean,
00:58:46.580 | you could think of governments misusing this
00:58:48.740 | to extract information from people.
00:58:52.340 | It's, you know, just like any other technological tool,
00:58:56.580 | it just raises a lot of questions.
00:58:59.020 | If you look at Facebook,
00:59:01.140 | if you look at Twitter and social networks,
00:59:02.740 | there's a lot of concern about data collection now.
00:59:05.080 | From the legal perspective, or in general,
00:59:10.980 | how do we prevent
00:59:16.940 | these companies from crossing a line?
00:59:19.080 | It's a gray area, but crossing a line they shouldn't,
00:59:22.100 | in terms of manipulating, like we're talking about,
00:59:24.580 | manipulating our emotions,
00:59:26.180 | manipulating our behavior using tactics
00:59:30.220 | that are not so savory.
00:59:32.260 | - Yeah, it's really difficult because
00:59:34.580 | we are starting to create technology that relies
00:59:38.840 | on data collection to provide functionality,
00:59:42.420 | and there's not a lot of incentive,
00:59:44.880 | even on the consumer side, to curb that,
00:59:46.800 | because the other problem is that the harms aren't tangible.
00:59:51.080 | They're not really apparent to a lot of people
00:59:53.220 | because they kind of trickle down on a societal level,
00:59:56.220 | and then suddenly we're living in like 1984,
00:59:58.640 | which sounds extreme, but that book was very prescient,
01:00:05.100 | and I'm not worried about these systems.
01:00:09.260 | I have Amazon's Echo at home,
01:00:13.860 | and I tell Alexa all sorts of stuff,
01:00:17.060 | and it helps me because Alexa knows what brand of diaper
01:00:23.120 | we use, and so I can just easily order it again,
01:00:25.340 | so I don't have any incentive to ask a lawmaker to curb that,
01:00:29.460 | but when I think about that data then being used
01:00:32.740 | against low-income people to target them
01:00:35.620 | for scammy loans or education programs,
01:00:39.380 | that's then a societal effect that I think is very severe,
01:00:44.220 | and legislators should be thinking about.
01:00:47.420 | - But yeah, the gray area is
01:00:51.920 | removing ourselves from consideration
01:00:53.840 | of explicitly defining objectives,
01:00:58.120 | and more saying, well, we want to maximize engagement
01:01:01.760 | in our social network,
01:01:03.220 | and then just optimizing that, 'cause you're not actually doing a bad thing.
01:01:08.960 | It makes sense.
01:01:10.040 | You want people to keep a conversation going,
01:01:13.800 | to have more conversations, to keep coming back
01:01:16.640 | again and again to have conversations,
01:01:19.240 | and whatever happens after that,
01:01:21.740 | you're kind of not exactly directly responsible.
01:01:25.660 | You're only indirectly responsible,
01:01:27.860 | so I think it's a really hard problem.
01:01:29.760 | Are you optimistic about us ever being able to solve it?
01:01:35.360 | - You mean the problem of capitalism?
01:01:39.740 | Because the problem is that the companies
01:01:43.300 | are acting in the company's interests
01:01:45.100 | and not in people's interests,
01:01:46.980 | and when those interests are aligned, that's great,
01:01:49.660 | but the completely free market doesn't seem to work
01:01:53.380 | because of this information asymmetry.
01:01:55.260 | - But it's hard to know how to,
01:01:57.460 | so say you were trying to do the right thing.
01:01:59.940 | I guess what I'm trying to say is
01:02:02.020 | it's not obvious for these companies
01:02:04.740 | what the good thing for society is to do.
01:02:08.460 | I don't think they sit there with, I don't know,
01:02:12.940 | with a glass of wine and a cat,
01:02:15.060 | like petting a cat, evil cat.
01:02:16.620 | (laughing)
01:02:17.820 | And there's two decisions,
01:02:19.340 | and one of them is good for society,
01:02:21.220 | one is good for the profit, and they choose the profit.
01:02:25.540 | I think they actually, there's a lot of money to be made
01:02:28.240 | by doing the right thing for society.
01:02:31.080 | 'Cause Google, Facebook have so much cash
01:02:35.780 | that they actually, especially Facebook,
01:02:38.580 | would significantly benefit from making decisions
01:02:40.940 | that are good for society.
01:02:41.900 | It's good for their brand, right?
01:02:44.940 | But I don't know if they know what's good for society.
01:02:47.740 | I don't think we know what's good for society
01:02:52.620 | in terms of how we manage the conversation on Twitter
01:02:57.620 | or how we design, we're talking about robots.
01:03:03.660 | Should we emotionally manipulate you
01:03:08.260 | into having a deep connection with Alexa or not?
01:03:11.860 | - Yeah, yeah.
01:03:14.580 | Do you have optimism that we'll be able
01:03:16.060 | to solve some of these questions?
01:03:17.660 | - Well, I'm gonna say something that's controversial
01:03:21.420 | in my circles, which is that I don't think
01:03:23.960 | that companies who are reaching out to ethicists
01:03:27.660 | and trying to create interdisciplinary ethics boards,
01:03:30.020 | I don't think that that's totally just trying
01:03:31.720 | to whitewash the problem
01:03:32.980 | so that they look like they've done something.
01:03:35.800 | I think that a lot of companies actually do,
01:03:38.340 | like you say, care about what the right answer is.
01:03:41.620 | They don't know what that is,
01:03:43.060 | and they're trying to find people to help them find it.
01:03:45.860 | Not in every case, but I think it's much too easy
01:03:49.140 | to just vilify the companies as, like you say,
01:03:51.420 | sitting there with their cat going,
01:03:52.700 | "Ha, ha, ha, $1 million."
01:03:55.100 | - Yeah.
01:03:56.140 | - That's not what happens.
01:03:57.220 | A lot of people are well-meaning even within companies.
01:04:01.120 | I think that what we do absolutely need
01:04:06.060 | is more interdisciplinarity, both within companies,
01:04:11.140 | but also within the policymaking space,
01:04:14.420 | because we've hurtled into the world
01:04:18.180 | where technological progress is much faster,
01:04:22.180 | it seems much faster than it was,
01:04:23.860 | and things are getting very complex,
01:04:25.540 | and you need people who understand the technology,
01:04:28.140 | but also people who understand
01:04:29.920 | what the societal implications are,
01:04:32.060 | and people who are thinking about this
01:04:34.060 | in a more systematic way to be talking to each other.
01:04:37.060 | There's no other solution, I think.
01:04:40.020 | - You've also done work on intellectual property.
01:04:42.620 | So if you look at the algorithms
01:04:45.340 | that these companies are using,
01:04:46.700 | like YouTube, Twitter, Facebook, so on,
01:04:48.820 | I mean, that's kind of, those are mostly secretive,
01:04:53.820 | the recommender systems behind these algorithms.
01:04:57.220 | Do you think about IP and the transparency
01:05:00.500 | of algorithms like this?
01:05:02.140 | Like what is the responsibility of these companies
01:05:05.100 | to open source the algorithms,
01:05:07.540 | or at least reveal to the public
01:05:10.420 | what's, how these algorithms work?
01:05:13.220 | - So I personally don't work on that.
01:05:14.660 | There are a lot of people who do, though,
01:05:16.100 | and there are a lot of people calling for transparency.
01:05:18.480 | In fact, Europe's even trying to legislate transparency,
01:05:22.260 | maybe they even have at this point,
01:05:24.340 | where like if an algorithmic system
01:05:27.700 | makes some sort of decision that affects someone's life,
01:05:30.100 | that you need to be able to see how that decision was made.
01:05:35.740 | I, you know, it's a tricky balance
01:05:39.500 | because obviously companies need to have, you know,
01:05:41.780 | some sort of competitive advantage
01:05:43.260 | and you can't take all of that away
01:05:44.700 | or you stifle innovation.
01:05:46.000 | But yeah, for some of the ways
01:05:48.860 | that these systems are already being used,
01:05:50.600 | I think it is pretty important
01:05:52.500 | that people understand how they work.
01:05:55.180 | - What are your thoughts in general
01:05:56.460 | on intellectual property in this weird age
01:05:59.060 | of software, AI, robotics?
01:06:01.820 | - Oh, that it's broken.
01:06:03.540 | I mean, the system is just broken.
01:06:05.580 | So can you describe,
01:06:07.820 | I actually, I don't even know what intellectual property
01:06:11.420 | is in the space of software, what it means to,
01:06:15.500 | I mean, so I believe I have a patent
01:06:18.940 | on a piece of software from my PhD.
01:06:21.500 | - You believe, you don't know?
01:06:22.940 | - No, we went through a whole process, yeah, I do.
01:06:26.540 | - You get the spam emails,
01:06:27.420 | like we'll frame your patent for you.
01:06:30.260 | - Yeah, it's much like a thesis.
01:06:32.220 | - So, but that's useless, right?
01:06:36.860 | Or not?
01:06:37.980 | Where does IP stand in this age?
01:06:40.260 | What's the right way to do it?
01:06:42.740 | What's the right way to protect and own ideas
01:06:44.860 | when it's just code and this mishmash
01:06:49.620 | of something that feels much softer
01:06:52.260 | than a piece of machinery or an idea?
01:06:55.060 | - I mean, it's hard because there are different types
01:06:58.220 | of intellectual property
01:06:59.180 | and there are kind of these blunt instruments.
01:07:01.660 | It's like, patent law is like a wrench,
01:07:03.940 | like it works really well for an industry
01:07:05.660 | like the pharmaceutical industry,
01:07:06.980 | but when you try and apply it to something else,
01:07:09.260 | it's like, I don't know, I'll just like hit this thing
01:07:11.700 | with the wrench and hope it works.
01:07:14.140 | So software, you know, software,
01:07:16.380 | you have a couple of different options.
01:07:18.500 | Software, like any code that's written down
01:07:21.820 | in some tangible form is automatically copyrighted.
01:07:25.940 | So you have that protection, but that doesn't do much
01:07:28.460 | because if someone takes the basic idea
01:07:31.740 | that the code is executing and just does it
01:07:35.420 | in a slightly different way,
01:07:37.060 | they can get around the copyright.
01:07:39.220 | So that's not a lot of protection.
01:07:40.580 | Then you can patent software, but that's kind of,
01:07:43.940 | I mean, getting a patent costs,
01:07:47.380 | I don't know if you remember what yours cost
01:07:49.700 | or like, was it through an institution?
01:07:51.460 | - Yeah, it was through a university.
01:07:52.860 | That's why. It was insane.
01:07:54.700 | There were so many lawyers, so many meetings.
01:07:57.660 | It made me feel like it must've been
01:07:59.580 | hundreds of thousands of dollars.
01:08:01.220 | It must've been something crazy.
01:08:02.860 | - It's insane, the cost of getting a patent.
01:08:05.740 | And so this idea of protecting the inventor
01:08:08.380 | in their own garage who came up with a great idea,
01:08:11.020 | kind of, that's a thing of the past.
01:08:13.100 | It's all just companies trying to protect things
01:08:15.940 | and it costs a lot of money.
01:08:18.260 | And then with code, it's oftentimes like, you know,
01:08:22.060 | by the time the patent is issued,
01:08:23.540 | which can take like five years, you know,
01:08:25.220 | probably your code is obsolete at that point.
01:08:28.620 | So it's a very, again, a very blunt instrument
01:08:31.220 | that doesn't work well for that industry.
01:08:33.740 | And so, you know, at this point,
01:08:35.660 | we should really have something better, but we don't.
01:08:38.700 | - Do you like open source?
01:08:39.860 | Yeah, is open source good for society?
01:08:42.100 | You think all of us should open source code?
01:08:44.020 | - Well, so at the Media Lab at MIT,
01:08:48.580 | we have an open source default
01:08:49.980 | because what we've noticed is that people will come in,
01:08:52.780 | they'll like write some code and they'll be like,
01:08:54.940 | how do I protect this?
01:08:56.020 | And we're like, mm, like that's not your problem right now.
01:08:58.860 | Your problem isn't that someone's gonna steal your project.
01:09:01.100 | Your problem is getting people to use it at all.
01:09:03.700 | Like there's so much stuff out there.
01:09:05.180 | Like we don't even know if you're gonna get traction
01:09:07.300 | for your work.
01:09:08.140 | And so open sourcing can sometimes help, you know,
01:09:11.660 | get people's work out there,
01:09:12.940 | but ensure that they get attribution for it,
01:09:15.780 | for the work that they've done.
01:09:17.100 | So like, I'm a fan of it in a lot of contexts.
01:09:20.100 | Obviously it's not like a one size fits all solution.
01:09:23.940 | - So what I gleaned from your Twitter is you're a mom.
01:09:27.860 | I saw a quote, a reference to baby bot.
01:09:33.020 | What have you learned about robotics and AI
01:09:38.540 | from raising a human baby bot?
01:09:41.980 | - Well, I think that my child has made it more apparent
01:09:46.740 | to me that the systems we're currently creating
01:09:49.380 | aren't like human intelligence.
01:09:51.700 | There's not a lot to compare there.
01:09:53.740 | It's just he has learned and developed
01:09:56.820 | in such a different way than a lot of the AI systems
01:10:00.060 | we're creating that that's not really interesting
01:10:03.140 | to me to compare.
01:10:05.380 | But what is interesting to me is how these systems
01:10:09.260 | are gonna shape the world that he grows up in.
01:10:11.780 | And so I'm like even more concerned about
01:10:15.180 | kind of the societal effects of developing systems
01:10:17.700 | that rely on massive amounts of data collection,
01:10:21.700 | for example.
01:10:23.260 | - So is he gonna be allowed to use like Facebook or?
01:10:27.380 | - Facebook is over.
01:10:30.420 | Kids don't use that anymore.
01:10:31.540 | - Snapchat?
01:10:32.380 | What do they use, Instagram?
01:10:33.780 | - Snapchat's over too, I don't know.
01:10:35.020 | I just heard that TikTok is over,
01:10:36.820 | which I've never even seen, so I don't know.
01:10:39.300 | - No.
01:10:40.140 | - We're old, we don't know.
01:10:41.700 | - I need to, I'm gonna start gaming and streaming my gameplay
01:10:46.620 | - So what do you see as the future of personal robotics,
01:10:51.620 | social robotics, interaction with other robots?
01:10:53.860 | Like what are you excited about
01:10:56.060 | if you were to sort of philosophize about
01:10:57.940 | what might happen in the next five, 10 years
01:11:01.100 | that would be cool to see?
01:11:02.980 | - Oh, I really hope that we get kind of a home robot
01:11:06.620 | that makes it, that's a social robot and not just Alexa.
01:11:09.660 | Like it's, you know, I really love the Anki products.
01:11:14.900 | And I thought Jibo had some really great aspects.
01:11:18.660 | So I'm hoping that a company cracks that.
01:11:21.700 | - Me too.
01:11:22.780 | So Kate, it was wonderful talking to you today.
01:11:26.140 | - Likewise, thank you so much.
01:11:27.900 | - It was fun.
01:11:28.740 | Thanks for listening to this conversation with Kate Darling.
01:11:32.260 | And thank you to our sponsors, ExpressVPN and Masterclass.
01:11:36.540 | Please consider supporting the podcast
01:11:38.340 | by signing up to Masterclass at masterclass.com/lex
01:11:42.860 | and getting ExpressVPN at expressvpn.com/lexpod.
01:11:47.860 | If you enjoy this podcast, subscribe on YouTube,
01:11:51.700 | review it with five stars on Apple Podcasts,
01:11:54.100 | support it on Patreon,
01:11:55.460 | or simply connect with me on Twitter @LexFriedman.
01:11:58.540 | And now let me leave you with some tweets from Kate Darling.
01:12:04.060 | First tweet is,
01:12:05.620 | the pandemic has fundamentally changed who I am.
01:12:08.660 | I now drink the leftover milk
01:12:11.420 | in the bottom of the cereal bowl.
01:12:13.220 | Second tweet is,
01:12:15.460 | I came on here to complain that I had a really bad day
01:12:19.180 | and saw that a bunch of you are hurting too.
01:12:21.820 | Love to everyone.
01:12:23.860 | Thank you for listening.
01:12:24.940 | I hope to see you next time.
01:12:26.940 | (upbeat music)
01:12:29.540 | (upbeat music)