Kate Darling: Social Robotics | Lex Fridman Podcast #98
Chapters
0:00 Introduction
3:31 Robot ethics
4:36 Universal Basic Income
6:31 Mistreating robots
17:17 Robots teaching us about ourselves
20:27 Intimate connection with robots
24:29 Trolley problem and making difficult moral decisions
31:59 Anthropomorphism
38:09 Favorite robot
41:19 Sophia
42:46 Designing robots for human connection
47:01 Why is it so hard to build a personal robotics company?
50:03 Is it possible to fall in love with a robot?
56:39 Robots displaying consciousness and mortality
58:33 Manipulation of emotion by companies
64:40 Intellectual property
69:23 Lessons for robotics from parenthood
70:41 Hope for future of robotics
"The following is a conversation with Kate Darling, 00:00:02.920 |
"a researcher at MIT, interested in social robotics, 00:00:15.800 |
"which for me is one of the most exciting topics 00:00:23.440 |
"she's a caretaker of several domestic robots, 00:00:33.620 |
"She is one of the funniest and brightest minds 00:00:42.320 |
"For everyone feeling the burden of this crisis, 00:01:18.300 |
by signing up to Masterclass at masterclass.com/lex 00:01:22.460 |
and getting ExpressVPN at expressvpn.com/lexpod. 00:01:31.720 |
Sign up at masterclass.com/lex to get a discount 00:01:45.360 |
to watch courses from, to list some of my favorites, 00:01:53.580 |
and communication, Will Wright, creator of SimCity and Sims, 00:02:10.340 |
and the experience of being launched into space alone 00:02:14.300 |
By the way, you can watch it on basically any device. 00:02:21.740 |
to get a discount and to support this podcast. 00:02:32.280 |
to get a discount and to support this podcast. 00:02:39.240 |
It's easy to use, press the big power on button 00:02:46.700 |
like your location's anywhere else in the world. 00:02:57.620 |
Certainly, it allows you to access international versions 00:03:00.600 |
of streaming websites like the Japanese Netflix 00:03:05.280 |
ExpressVPN works on any device you can imagine. 00:03:22.660 |
to get a discount and to support this podcast. 00:03:25.360 |
And now, here's my conversation with Kate Darling. 00:03:50.840 |
Robot ethics, it sounds very science fictiony, 00:04:00.220 |
that people in robot ethics are concerned with 00:04:21.140 |
that come out of our social relationships with robots. 00:04:26.860 |
- I think most of the stuff we have to talk about 00:04:36.700 |
there's a presidential candidate now, Andrew Yang, running. 00:04:46.900 |
He has a proposal of UBI, universal basic income, 00:05:13.340 |
separate from the whole robots and jobs issue. 00:05:28.760 |
you know, kind of a different place in Europe. 00:05:52.220 |
and thinking of this as a one-to-one replacement of jobs. 00:05:59.860 |
maybe we should have a system that taxes robots 00:06:19.280 |
that of course is gonna shake up a lot of stuff, 00:06:26.740 |
robots taking all the jobs in the next 20 years. 00:06:40.940 |
I may ask some silly philosophical questions, I apologize. 00:06:45.180 |
- Okay, do you think humans will abuse robots 00:07:10.940 |
they can be a little bit abusive or a lot abusive. 00:07:21.980 |
is the fact that people subconsciously treat robots 00:07:29.540 |
and what it means in that context to behave violently. 00:07:46.220 |
It also depends on how we define feelings and consciousness, 00:07:54.320 |
Like the robots are not even as smart as insects right now. 00:07:57.180 |
And so I'm not worried about abuse in that sense, 00:08:03.620 |
what does people's behavior towards these things mean 00:08:12.660 |
be verbally abusive to a robot or even physically abusive? 00:08:26.660 |
- That's actually, I haven't read literature on that. 00:08:33.780 |
people don't seem to any longer be so worried 00:08:50.060 |
at least as a society, that people can compartmentalize. 00:08:54.940 |
and you're like, you know, shooting a bunch of characters 00:09:08.800 |
I'm not sure that's based on any real evidence either, 00:09:12.940 |
but it's just the way that we've kind of decided, 00:09:15.140 |
you know, we wanna be a little more cautious there. 00:09:17.580 |
And the reason I think robots are a little bit different 00:09:19.860 |
is because there is a lot of research showing 00:09:23.860 |
in our physical space than something on a screen. 00:09:32.140 |
And so it's totally possible that this is not a problem. 00:09:37.140 |
And it's the same thing as violence in video games. 00:09:40.500 |
You know, maybe, you know, restrict it with kids to be safe, 00:09:47.740 |
because we don't have any evidence at all yet. 00:09:57.460 |
By research, I mean scrolling through your Twitter feed. 00:10:00.700 |
You mentioned that you were going at some point 00:10:05.540 |
do you think there's something that we can learn 00:10:08.260 |
from animal rights that guides our thinking about robots? 00:10:13.320 |
- Oh, I think there is so much to learn from that. 00:10:18.620 |
So I'm writing a book that looks at the history 00:10:22.140 |
of animal domestication and how we've used animals 00:10:27.460 |
And, you know, one of the things the book tries to do 00:10:31.180 |
is move away from this fallacy that I talked about 00:10:35.700 |
because I don't think that's the right analogy. 00:10:46.500 |
we've treated most animals like tools, like products. 00:10:49.540 |
And then some of them we've treated differently, 00:10:51.380 |
and we're starting to see people treat robots 00:10:57.060 |
to how we're going to interact with the robots. 00:11:07.700 |
similar to the way we view like the Holocaust 00:11:22.920 |
what are my grandkids gonna view as abhorrent 00:11:27.080 |
that my generation did, that they would never do? 00:11:32.000 |
You know, it's a fun question to ask yourself. 00:11:40.720 |
So the things that at the time people didn't see as, 00:11:44.440 |
you know, you look at everything from slavery 00:11:50.720 |
to the kind of insane wars that were happening, 00:12:22.320 |
I have, I don't see a fundamental philosophical difference 00:12:29.700 |
in terms of once the capabilities are matched. 00:12:35.060 |
So the fact that we're really far away doesn't, 00:12:42.040 |
and then that from natural language processing, 00:12:52.240 |
and I don't feel comfortable with the kind of abuse 00:13:04.560 |
Do you think, let me put it in the form of a question, 00:13:07.560 |
do you think robots should have some kinds of rights? 00:13:11.160 |
- Well, it's interesting because I came at this originally 00:13:17.960 |
There's no fundamental difference between technology 00:13:27.160 |
And so there's no reason not to give machines 00:13:34.100 |
like you say, they're kind of on an equivalent level. 00:13:37.280 |
But I realized that that is kind of a far future question. 00:13:46.340 |
we might need to ask the robot rights question 00:14:01.260 |
from looking at the history of animal rights, 00:14:03.060 |
and one of the reasons we may not get to a place 00:14:06.180 |
in a hundred years where we view it as wrong to eat 00:14:09.180 |
or otherwise use animals for our own purposes 00:14:12.900 |
is because historically, we've always protected 00:14:51.100 |
that's not historically how we formed our alliances. 00:15:01.900 |
Killing of a human being, no matter who the human being is, 00:15:09.060 |
And then, 'cause I'm connecting that to robots 00:15:33.480 |
So that's where the line appears to be, right? 00:15:41.420 |
- I think here again, the animal analogy is really useful 00:15:44.860 |
because you're also allowed to shoot your dog, 00:15:50.140 |
So we do give animals certain protections from like, 00:16:04.360 |
like a piece of property in a lot of other ways. 00:16:06.480 |
And so we draw these arbitrary lines all the time. 00:16:21.080 |
is just speciesism and not based on any criteria 00:16:29.360 |
that would actually justify making a difference 00:16:41.700 |
Or do you think there's evil and good in all of us? 00:16:55.220 |
who believes that there's no absolute evil and good 00:17:06.500 |
Like when I see people being violent towards robotic objects, 00:17:24.180 |
I studied the Holocaust and World War II exceptionally well. 00:17:27.860 |
I personally believe that most of us have evil in us. 00:18:15.160 |
even using very simple robots that we have today 00:18:29.380 |
as willing to go around and shoot and rape the robots 00:18:31.880 |
and the good characters is not wanting to do that, 00:18:34.320 |
even without assuming that the robots have consciousness. 00:18:40.160 |
there's opportunity to almost practice empathy. 00:18:42.380 |
Robots are an opportunity to practice empathy. 00:19:01.480 |
because I don't think empathy is a zero sum game 00:19:03.600 |
and I do think that it's a muscle that you can train 00:19:15.740 |
sort of asking them or telling them to be nice 00:19:33.440 |
that's towards the idea of practicing empathy. 00:19:36.560 |
I'm always polite to all the systems that we build, 00:19:39.960 |
especially anything that's speech interaction based, 00:19:43.840 |
I'll always have a pretty good detector for please. 00:19:50.160 |
for encouraging empathy in those interactions. 00:20:03.200 |
if you are the type of person who has abusive tendencies 00:20:07.020 |
or needs to get some sort of behavior like that out, 00:20:11.600 |
that it's great to have a robot that you can scream at 00:20:21.440 |
or whether it just kind of, as my friend once said, 00:20:25.000 |
and makes them more cruel in other situations. 00:20:27.440 |
- Oh boy, yeah, and that expands to other topics, 00:20:48.600 |
Is that an area you've touched at all research-wise? 00:20:52.180 |
Like the way, 'cause that's what people imagine 00:20:56.680 |
sort of any kind of interaction between human and robot 00:21:03.140 |
They immediately think from a product perspective 00:21:05.800 |
in the near term is sort of expansion of what pornography is 00:21:13.720 |
- Well, that's kind of you to like characterize it as. 00:21:16.480 |
Oh, they're thinking rationally about product. 00:21:18.840 |
I feel like sex robots are just such a like titillating 00:21:21.960 |
news hook for people that they become like the story. 00:21:26.320 |
And it's really hard to not get fatigued by it 00:21:30.680 |
because you tell someone you do human-robot interaction, 00:21:33.280 |
of course, the first thing they wanna talk about 00:21:37.600 |
And it's unfortunate that I'm so fatigued by it 00:21:41.460 |
because I do think that there are some interesting questions 00:21:43.900 |
that become salient when you talk about sex with robots. 00:21:48.900 |
- See, what I think would happen when people get sex robots, 00:21:55.840 |
What I think there's an opportunity for is an actual, 00:22:08.200 |
outside of the sex would be the most fulfilling part. 00:22:11.440 |
Like the interaction, it's like the folks 00:22:15.260 |
who pay a prostitute and then end up just talking to her 00:22:22.900 |
It's like most guys and people in general joke 00:22:25.820 |
about the sex act, but really people are just lonely inside 00:22:30.340 |
and they're looking for connection, many of them. 00:22:33.280 |
And it'd be unfortunate if that connection is established 00:22:43.660 |
of like people are lonely and they want a connection. 00:22:47.500 |
- Well, I also feel like we should kind of destigmatize 00:22:58.140 |
in disabled people who don't have the same kind 00:23:06.080 |
So I feel like we should destigmatize all of that generally. 00:23:10.760 |
- But yeah, that connection and that loneliness 00:23:21.060 |
if people get sex robots and the sex is really good, 00:23:23.340 |
then they won't want their partner or whatever. 00:23:26.500 |
But we rarely talk about robots actually filling a hole 00:23:37.220 |
there's a giant hole that's unfillable by humans. 00:24:03.440 |
to really sit there and listen to who are you really? 00:24:17.760 |
So whether that's with family members, with friends, 00:24:24.960 |
and all of that can provide value in a different way. 00:24:32.700 |
Currently most of my work is in autonomous vehicles. 00:24:36.620 |
So the most popular topic among general public 00:24:43.420 |
So most roboticists kind of hate this question, 00:24:48.420 |
but what do you think of this thought experiment? 00:24:54.100 |
outside of the silliness of the actual application of it 00:24:59.020 |
I think it's still an interesting ethical question 00:25:04.460 |
just like much of the interaction with robots 00:25:44.820 |
but that's not how people are using it right now. 00:25:53.140 |
but if we're viewing the moral machine project 00:25:56.660 |
as what we can learn from the trolley problems, 00:25:59.380 |
so the moral machine is, I'm sure you're familiar, 00:26:05.980 |
oh, you're in a car, you can decide to run over 00:26:09.380 |
these two people or this child, what do you choose? 00:26:16.180 |
And so it pits these like moral choices against each other 00:26:25.220 |
which is really interesting and I think valuable data, 00:26:33.460 |
because it is exactly what the trolley problem 00:26:36.340 |
is trying to show, which is your first instinct 00:26:39.420 |
might not be the correct one if you look at rules 00:26:42.940 |
that then have to apply to everyone and everything. 00:27:03.860 |
Because that changes certain controlled decisions 00:27:08.100 |
So if your life matters more than other human beings, 00:27:15.100 |
So currently automated emergency braking systems 00:27:25.900 |
just in a different lane can cause significant harm 00:27:28.780 |
to others, but it's possible that it causes less harm 00:27:45.420 |
Do you hope that when we have robots at the table, 00:28:03.340 |
that our ethical rules are much less programmable 00:28:13.980 |
I think, that these issues are very complicated 00:28:24.820 |
And so what's gonna happen in reality, I think, 00:28:34.300 |
in any way possible, or they're gonna always protect 00:28:39.180 |
if it's programmed to kill you instead of someone else? 00:28:47.300 |
But what did you mean by, once we have robots at the table, 00:28:50.700 |
do you mean when they can help us figure out what to do? 00:28:54.780 |
- No, I mean when robots are part of the ethical decisions. 00:29:18.580 |
into an algorithm, you start to try to really understand 00:29:22.900 |
what are the fundamentals of the decision-making process 00:29:36.660 |
Sort of, you can use, you can develop an algorithm 00:29:41.720 |
And the hope is that the act of making that algorithm, 00:29:47.380 |
however you make it, so there's a few approaches, 00:30:02.740 |
And we're realizing that we don't have a consensus 00:30:08.420 |
- Well, like when we're thinking about these trolley problems 00:30:10.980 |
and autonomous vehicles and how to program ethics 00:30:13.720 |
into machines and how to make AI algorithms fair. 00:30:21.040 |
And equitable, we're realizing that this is so complicated 00:30:24.640 |
and it's complicated in part because there doesn't seem 00:30:28.240 |
to be a one right answer in any of these cases. 00:30:30.720 |
- Do you have a hope for, like one of the ideas 00:30:39.440 |
can help us converge towards the right answer. 00:30:45.920 |
So I think that in general, I have a legal background 00:30:49.320 |
and policymaking is often about trying to suss out, 00:30:52.440 |
what rules does this particular society agree on 00:30:59.920 |
and then tries to adapt according to changing culture. 00:31:02.760 |
But in the case of the moral machine project, 00:31:06.200 |
I don't think that people's choices on that website 00:31:08.720 |
necessarily reflect what laws they would want in place. 00:31:13.720 |
If given, I think you would have to ask them a series 00:31:33.920 |
And so if you put those same people into virtual reality 00:31:40.040 |
their decision would be very different, I think. 00:31:45.360 |
And the other aspect is, it's a different question 00:31:47.640 |
to ask someone, would you run over the homeless person 00:31:53.680 |
Or do you want cars to always run over the homeless people? 00:32:01.200 |
To me, anthropomorphism, if I can pronounce it correctly, 00:32:12.080 |
and the psychology perspective, machine learning perspective 00:32:16.800 |
Can you step back and define anthropomorphism, 00:32:20.640 |
how you see it in general terms in your work? 00:32:42.200 |
We often see that we're trying to interpret things 00:32:44.800 |
according to our own behavior when we get it wrong. 00:32:55.480 |
And we do it with robots, very, very extremely. 00:33:08.600 |
- And do you see it being used that way often? 00:33:20.800 |
often trying to optimize for the anthropomorphization. 00:33:32.560 |
- They actually, so I only recently found out, 00:33:35.860 |
but did you know that Amazon has a whole team of people 00:33:39.120 |
who are just there to work on Alexa's personality? 00:33:50.640 |
but I do know that how the voice is perceived 00:33:58.400 |
whether if it's a pleasant feeling about the voice, 00:34:02.040 |
but that has to do more with the texture of the sound 00:34:08.720 |
It's like, what's her favorite beer when you ask her? 00:34:11.080 |
And the personality team is different for every country too. 00:34:14.520 |
Like there's a different personality for a German Alexa 00:34:19.560 |
That said, I think it's very difficult to use the, 00:34:24.280 |
or really, really harness the anthropomorphism 00:34:31.680 |
because the voice interface is still very primitive. 00:34:40.800 |
and treat a robot like it's alive, less is sometimes more. 00:34:48.440 |
and you want the robot to not disappoint their expectations 00:34:53.320 |
in order for them to have this kind of illusion. 00:34:56.180 |
And with Alexa, I don't think we're there yet, 00:35:03.000 |
But if you look at some of the more animal-like robots, 00:35:06.880 |
like the baby seal that they use with the dementia patients, 00:35:16.660 |
And people stroke it and it responds to their touch. 00:35:19.960 |
And that is a very effective way to harness people's tendency 00:35:24.960 |
to kind of treat the robot like a living thing. 00:35:28.080 |
- Yeah, so you bring up some interesting ideas 00:35:35.440 |
anthropomorphic framing of human-robot interaction 00:35:54.920 |
while design can really enhance the anthropomorphism, 00:36:01.360 |
Like people will, over 85% of Roombas have a name, 00:36:10.920 |
So people will feel bad for the Roomba when it gets stuck. 00:36:15.920 |
And that one is not even designed to like make you do that. 00:36:53.760 |
with the bomb disposal units that were there. 00:36:56.160 |
And the soldiers became very emotionally attached 00:37:02.040 |
- And that's fine until a soldier risks his life 00:37:06.960 |
to save a robot, which you really don't want. 00:37:12.800 |
they would give them funerals with gun salutes. 00:37:18.680 |
So in situations where you want a robot to be a tool, 00:37:23.320 |
in particular when it's supposed to like do a dangerous job 00:37:28.120 |
it can be hard when people get emotionally attached to it. 00:37:32.560 |
That's maybe something that you would want to discourage. 00:37:35.360 |
Another case for concern is maybe when companies try 00:37:39.400 |
to leverage the emotional attachment to exploit people. 00:37:43.580 |
So if it's something that's not in the consumer's interest, 00:37:47.920 |
trying to like sell them products or services 00:37:50.680 |
or exploit an emotional connection to keep them, 00:37:52.840 |
you know, paying for a cloud service for a social robot 00:37:57.840 |
I think that's a little bit concerning as well. 00:38:15.060 |
Real robot, which you have felt a connection with, 00:38:34.440 |
So the Pleo baby dinosaur robot that is no longer sold 00:38:34.440 |
that came out in 2007, that one I was very impressed with. 00:38:46.200 |
I was impressed with how much I bonded with it, 00:38:51.920 |
- Can you describe Pleo, can you describe what it is? 00:38:58.300 |
- Yeah, Pleo is about the size of a small cat. 00:39:08.460 |
It had things like touch sensors and an infrared camera. 00:39:11.220 |
So it had all these like cool little technical features, 00:39:19.660 |
was that it could mimic pain and distress really well. 00:39:31.980 |
If you hit it too hard, it would start to cry. 00:39:41.100 |
you said there might've been two that you liked? 00:39:53.380 |
And I was always one of those people who watched the videos 00:40:11.780 |
I think that was a transformational moment for me 00:40:17.940 |
Because, okay, maybe this is a psychology experiment, 00:40:27.860 |
So I immediately, it was like my best friend, right? 00:40:31.660 |
- I think it's really hard for anyone to watch Spot move 00:40:48.260 |
but it obviously, it looks exactly like that. 00:40:51.620 |
And so it's almost impossible to not think of it 00:40:54.140 |
as almost like the baby dinosaur, but slightly larger. 00:40:59.140 |
And this movement of the, of course, the intelligence is, 00:41:16.180 |
we can immediately respond to them in this visceral way. 00:41:20.100 |
- What are your thoughts about Sophia the robot? 00:41:23.140 |
This kind of mix of some basic natural language processing 00:41:31.140 |
- Yeah, an art experiment is a good way to characterize it. 00:41:41.780 |
- She followed me on Twitter at some point, yeah. 00:41:44.300 |
- And she tweets about how much she likes you, so. 00:41:57.340 |
that happened with Sophia is quite a large number of people 00:42:03.420 |
and thought that maybe we're far more advanced 00:42:27.060 |
- Well, people really overestimate where we are. 00:42:31.820 |
I don't even think Sophia was very impressive 00:42:35.500 |
I think she's kind of a puppet, to be honest. 00:42:40.220 |
are a little bit influenced by science fiction 00:42:42.180 |
and pop culture to think that we should be further along 00:42:45.700 |
- So what's your favorite robots and movies and fiction? 00:42:56.460 |
the perception control systems operating on "WALL-E" 00:43:12.420 |
how to create characters that don't look real, 00:43:16.420 |
but look like something that's even better than real, 00:43:20.460 |
that we really respond to and think is really cute. 00:43:26.500 |
And "WALL-E" is just such a great example of that. 00:44:04.660 |
then people like it even better than if it looks real." 00:44:08.420 |
- Do you think the future of things like Alexa 00:44:13.260 |
and the home has possibility to take advantage of that, 00:44:18.840 |
to create these systems that are better than real, 00:44:42.880 |
I know a lot of those folks and they're afraid of that 00:44:45.660 |
because you don't, how do you make money off of it? 00:45:10.980 |
So the device itself, it's felt that you can lose a lot 00:45:19.660 |
and then it creates more opportunity for frustration, 00:45:23.980 |
for negative stuff than it does for positive stuff 00:45:37.420 |
Otherwise you wind up with Microsoft's Clippy. 00:45:47.740 |
I was just, I just talked to, we just had this argument 00:45:52.900 |
and they said, he said he's not bringing Clippy back. 00:46:00.180 |
I think it was, Clippy was the greatest assistant 00:46:24.300 |
for assisting you in anything and typing in Word 00:46:39.940 |
- And I know I have like an army of people behind me 00:46:47.300 |
- It's the people who like to hate stuff when it's there 00:47:16.700 |
So making a business out of essentially something 00:47:32.580 |
I don't think it's going to be this way forever. 00:47:42.340 |
that only barely meets people's like minimal expectations 00:47:50.500 |
that we should be further than we already are. 00:47:52.500 |
Like when people think about a robot assistant in the home, 00:48:02.820 |
with the design and getting that interaction just right. 00:48:10.180 |
I think you're also right that the business case 00:48:13.780 |
because there hasn't been a killer application 00:48:17.180 |
that's useful enough to get people to adopt the technology 00:48:22.100 |
I think what we did see from the people who did get Jibo 00:48:25.740 |
is a lot of them became very emotionally attached to it. 00:48:31.500 |
it's kind of like the Palm Pilot back in the day. 00:48:36.020 |
They don't see how they would benefit from it 00:48:37.580 |
until they have it or some other company comes in 00:48:42.660 |
- Yeah, like how far away are we, do you think? 00:48:49.200 |
And I think it has a lot to do with people's expectations 00:48:53.380 |
depending on what science fiction that is popular. 00:48:59.460 |
and people's need for an emotional connection. 00:49:11.100 |
There's like, I really think this is like the life 00:49:23.780 |
but we've gotten used to really never being close to anyone. 00:49:30.180 |
And we're deeply, I believe, okay, this is hypothesis. 00:49:37.900 |
In fact, what makes those relationships fulfilling, 00:49:45.060 |
But I feel like there's more opportunity to explore that 00:49:48.660 |
doesn't interfere with the human relationships you have. 00:50:03.500 |
- Do you think it's possible to fall in love with a robot? 00:50:11.100 |
to have a long-term committed monogamous relationship 00:50:15.660 |
- Well, yeah, there are lots of different types 00:50:17.000 |
of long-term committed monogamous relationships. 00:50:22.220 |
like you're not going to see other humans sexually, 00:50:25.740 |
or like you basically on Facebook have to say, 00:50:29.700 |
I'm in a relationship with this person, this robot. 00:50:48.680 |
maybe in a different way that is even, you know, 00:50:53.680 |
But, you know, I'm not saying that people won't like do this 00:51:01.160 |
sexual relation, monogamous relationship with my robot. 00:51:04.420 |
But I don't think that that's the main use case for them. 00:51:08.760 |
- But you think that there's still a gap between human 00:51:31.320 |
Like, I think it's so boring to think about recreating 00:51:33.920 |
things that we already have when we could create something 00:51:42.200 |
- I know you're thinking about the people who like, 00:51:43.760 |
don't have a husband and like, what can we give them? 00:51:46.860 |
- Yeah, but let's, I guess what I'm getting at is, 00:52:02.760 |
Like it's, I do think that robots are gonna continue 00:52:11.440 |
or when the voice interactions we have with them 00:52:19.840 |
and there were in that movie too, like towards the end, 00:52:54.320 |
It's like, how hard is it to satisfy that role 00:53:09.720 |
you think with robots that's difficult to build? 00:53:15.160 |
well, it also depends on are we talking about robots now, 00:53:20.560 |
in 50 years, in like indefinite amount of time, 00:53:30.220 |
it's more similar to the relationship we have with our pets 00:53:34.000 |
than relationship that we have with other people. 00:53:37.540 |
So what do you think it takes to build a system 00:53:41.280 |
that exhibits greater and greater levels of intelligence? 00:53:50.680 |
that doesn't, I think intelligence is not required. 00:53:53.960 |
In fact, intelligence probably gets in the way sometimes, 00:53:56.960 |
But what do you think it takes to create a system 00:54:03.000 |
where we sense that it has a human level intelligence? 00:54:07.240 |
So something that probably something conversational, 00:54:13.980 |
It'd be interesting to sort of hear your perspective, 00:54:16.580 |
not just purely, so I talk to a lot of people, 00:54:25.760 |
But my sense is it's easier than just solving, 00:54:33.380 |
the pure natural language processing problem, 00:54:39.660 |
- So yeah, so how hard is it to pass a Turing test 00:54:52.260 |
what was it, a 13-year-old boy from the Ukraine? 00:54:55.500 |
- Then they're not gonna expect perfect English, 00:54:57.480 |
they're not gonna expect perfect understanding of concepts, 00:55:09.100 |
- Do you think, you kind of alluded this too with audio, 00:55:21.340 |
so we treat physical things with more social agency, 00:55:42.580 |
like I have this robot cat at home that Hasbro makes, 00:55:52.620 |
and it doesn't 'cause it's like a $100 piece of technology. 00:56:00.460 |
and it's very hard to treat it like it's alive. 00:56:04.000 |
So you can get a lot wrong with the body too, 00:56:17.340 |
that people have never actually held in their arms, 00:56:21.540 |
because they don't have these preformed expectations. 00:56:24.620 |
- Yeah, I remember you, I'm thinking of a TED Talk 00:56:28.300 |
that nobody actually knows what a dinosaur looks like. 00:56:33.100 |
So you can actually get away with a lot more. 00:56:38.900 |
so what do you think about consciousness and mortality 00:56:57.060 |
that are much more than just the interaction, 00:57:02.660 |
with a dinosaur moving kind of in interesting ways, 00:57:29.200 |
So I can't say whether it's inherently good or bad, 00:57:36.660 |
Pleo mimics distress when you quote-unquote hurt it 00:57:36.660 |
to get people to engage with it in a certain way. 00:57:50.500 |
that I did some of the empathy work with named Palash Nandy, 00:57:57.340 |
and that would stop working after a certain amount of time 00:58:02.020 |
whether he himself would treat it differently. 00:58:07.540 |
those like those little games that we used to have 00:58:12.080 |
that like people respond to like this idea of mortality. 00:58:15.720 |
And, you know, you can get people to do a lot 00:58:21.820 |
depends on what you're trying to get them to do. 00:58:27.020 |
have a deeper connection, so in our relationship. 00:58:29.400 |
If it's for their own benefit, that sounds great. 00:58:33.620 |
- You could do that for a lot of other reasons. 00:58:36.700 |
- I see, so what kind of stuff are you worried about? 00:58:38.900 |
So is it mostly about manipulation of your emotions 00:58:41.580 |
for like advertisements and so on, things like that? 00:58:52.340 |
It's, you know, just like any other technological tool, 00:59:02.740 |
there's a lot of concern of data collection now. 00:59:05.080 |
What's from the legal perspective or in general, 00:59:19.080 |
It's a gray area, but crossing a line they shouldn't 00:59:22.100 |
in terms of manipulating, like we're talking about, 00:59:34.580 |
we are starting to create technology that relies 00:59:46.800 |
because the other problem is that the harms aren't tangible. 00:59:51.080 |
They're not really apparent to a lot of people 00:59:53.220 |
because they kind of trickle down on a societal level, 00:59:58.640 |
which sounds extreme, but that book was very prescient, 01:00:17.060 |
and it helps me because Alexa knows what brand of diaper 01:00:23.120 |
we use, and so I can just easily order it again, 01:00:25.340 |
so I don't have any incentive to ask a lawmaker to curb that, 01:00:29.460 |
but when I think about that data then being used 01:00:39.380 |
that's then a societal effect that I think is very severe, 01:00:58.120 |
and more saying, well, we want to maximize engagement 01:01:03.220 |
and then just, 'cause you're not actually doing a bad thing. 01:01:10.040 |
You want people to keep a conversation going, 01:01:13.800 |
to have more conversations, to keep coming back 01:01:21.740 |
you're kind of not exactly directly responsible. 01:01:29.760 |
Are you optimistic about us ever being able to solve it? 01:01:46.980 |
and when those interests are aligned, that's great, 01:01:49.660 |
but the completely free market doesn't seem to work 01:01:57.460 |
so say you were trying to do the right thing. 01:02:08.460 |
I don't think they sit there with, I don't know, 01:02:21.220 |
one is good for the profit, and they choose the profit. 01:02:25.540 |
I think they actually, there's a lot of money to be made 01:02:38.580 |
would significantly benefit from making decisions 01:02:44.940 |
But I don't know if they know what's good for society. 01:02:47.740 |
I don't think we know what's good for society 01:02:52.620 |
in terms of how we manage the conversation on Twitter 01:02:57.620 |
or how we design, we're talking about robots. 01:03:08.260 |
into having a deep connection with Alexa or not? 01:03:17.660 |
- Well, I'm gonna say something that's controversial 01:03:23.960 |
that companies who are reaching out to ethicists 01:03:27.660 |
and trying to create interdisciplinary ethics boards, 01:03:30.020 |
I don't think that that's totally just trying 01:03:32.980 |
so that they look like they've done something. 01:03:38.340 |
like you say, care about what the right answer is. 01:03:43.060 |
and they're trying to find people to help them find them. 01:03:45.860 |
Not in every case, but I think it's much too easy 01:03:49.140 |
to just vilify the companies as, like you say, 01:03:57.220 |
A lot of people are well-meaning even within companies. 01:04:06.060 |
is more interdisciplinarity, both within companies, 01:04:25.540 |
and you need people who understand the technology, 01:04:34.060 |
in a more systematic way to be talking to each other. 01:04:40.020 |
- You've also done work on intellectual property. 01:04:48.820 |
I mean, that's kind of, those are mostly secretive, 01:04:53.820 |
the recommender systems behind these algorithms. 01:05:02.140 |
Like what is the responsibility of these companies 01:05:16.100 |
and there are a lot of people calling for transparency. 01:05:18.480 |
In fact, Europe's even trying to legislate transparency, 01:05:27.700 |
makes some sort of decision that affects someone's life, 01:05:30.100 |
that you need to be able to see how that decision was made. 01:05:39.500 |
because obviously companies need to have, you know, 01:06:07.820 |
I actually, I don't even know what intellectual property 01:06:11.420 |
is in the space of software, what it means to, 01:06:22.940 |
- No, we went through a whole process, yeah, I do. 01:06:42.740 |
What's the right way to protect and own ideas 01:06:55.060 |
- I mean, it's hard because there are different types 01:06:59.180 |
and there are kind of these blunt instruments. 01:07:06.980 |
but when you try and apply it to something else, 01:07:09.260 |
it's like, I don't know, I'll just like hit this thing 01:07:21.820 |
in some tangible form is automatically copyrighted. 01:07:25.940 |
So you have that protection, but that doesn't do much 01:07:40.580 |
Then you can patent software, but that's kind of, 01:07:54.700 |
There were so many lawyers, so many meetings. 01:08:05.740 |
And so this idea of like protecting the like inventor 01:08:08.380 |
in their own garage, like came up with a great ideas, 01:08:13.100 |
It's all just companies trying to protect things 01:08:18.260 |
And then with code, it's oftentimes like, you know, 01:08:25.220 |
probably your code is obsolete at that point. 01:08:28.620 |
So it's a very, again, a very blunt instrument 01:08:35.660 |
we should really have something better, but we don't. 01:08:49.980 |
because what we've noticed is that people will come in, 01:08:52.780 |
they'll like write some code and they'll be like, 01:08:56.020 |
And we're like, mm, like that's not your problem right now. 01:08:58.860 |
Your problem isn't that someone's gonna steal your project. 01:09:01.100 |
Your problem is getting people to use it at all. 01:09:05.180 |
Like we don't even know if you're gonna get traction 01:09:08.140 |
And so open sourcing can sometimes help, you know, 01:09:17.100 |
So like, I'm a fan of it in a lot of contexts. 01:09:20.100 |
Obviously it's not like a one size fits all solution. 01:09:23.940 |
- So what I gleaned from your Twitter is you're a mom. 01:09:41.980 |
- Well, I think that my child has made it more apparent 01:09:46.740 |
to me that the systems we're currently creating 01:09:56.820 |
in such a different way than a lot of the AI systems 01:10:00.060 |
we're creating that that's not really interesting 01:10:05.380 |
But what is interesting to me is how these systems 01:10:09.260 |
are gonna shape the world that he grows up in. 01:10:15.180 |
kind of the societal effects of developing systems 01:10:17.700 |
that rely on massive amounts of data collection, 01:10:23.260 |
- So is he gonna be allowed to use like Facebook or? 01:10:41.700 |
- I need to, I'm gonna start gaming and streaming my gameplay 01:10:46.620 |
- So what do you see as the future of personal robotics, 01:10:51.620 |
social robotics, interaction with other robots? 01:11:02.980 |
- Oh, I really hope that we get kind of a home robot 01:11:06.620 |
that makes it, that's a social robot and not just Alexa. 01:11:09.660 |
Like it's, you know, I really love the Anki products. 01:11:14.900 |
And I thought Jibo was, had some really great aspects. 01:11:22.780 |
So Kate, it was wonderful talking to you today. 01:11:28.740 |
Thanks for listening to this conversation with Kate Darling. 01:11:32.260 |
And thank you to our sponsors, ExpressVPN and Masterclass. 01:11:38.340 |
by signing up to Masterclass at masterclass.com/lex 01:11:42.860 |
and getting ExpressVPN at expressvpn.com/lexpod. 01:11:47.860 |
If you enjoy this podcast, subscribe on YouTube, 01:11:55.460 |
or simply connect with me on Twitter @lexfridman. 01:11:58.540 |
And now let me leave you with some tweets from Kate Darling. 01:12:05.620 |
the pandemic has fundamentally changed who I am. 01:12:15.460 |
I came on here to complain that I had a really bad day