Ayanna Howard: Human-Robot Interaction & Ethics of Safety-Critical Systems | Lex Fridman Podcast #66
Chapters
0:00 Introduction
2:09 Favorite robot
5:05 Autonomous vehicles
8:43 Tesla Autopilot
20:03 Ethical responsibility of safety-critical algorithms
28:11 Bias in robotics
38:20 AI in politics and law
40:35 Solutions to bias in algorithms
47:44 HAL 9000
49:57 Memories from working at NASA
51:53 SpotMini and Bionic Woman
54:27 Future of robots in space
57:11 Human-robot interaction
62:38 Trust
69:26 AI in education
75:06 Andrew Yang, automation, and job loss
77:17 Love, AI, and the movie Her
85:01 Why do so many robotics companies fail?
92:22 Fear of robots
94:17 Existential threats of AI
95:57 Matrix
97:37 Hang out for a day with a robot
00:00:00.000 |
The following is a conversation with Ayanna Howard. 00:00:03.400 |
She's a roboticist, professor at Georgia Tech 00:00:06.200 |
and director of the Human Automation Systems Lab 00:00:09.840 |
with research interests in human-robot interaction, 00:00:12.800 |
assistive robots in the home, therapy gaming apps, 00:00:16.000 |
and remote robotic exploration of extreme environments. 00:00:45.640 |
I recently started doing ads at the end of the introduction. 00:00:48.680 |
I'll do one or two minutes after introducing the episode 00:01:04.800 |
I personally use Cash App to send money to friends, 00:01:14.600 |
You can buy fractions of a stock, say $1 worth, 00:01:19.680 |
Brokerage services are provided by Cash App Investing, 00:01:28.160 |
to support one of my favorite organizations called FIRST, 00:01:31.600 |
best known for their FIRST robotics and Lego competitions. 00:01:35.120 |
They educate and inspire hundreds of thousands of students 00:01:40.240 |
and have a perfect rating at Charity Navigator, 00:01:46.880 |
When you get Cash App from the App Store or Google Play 00:01:58.280 |
that I've personally seen inspire girls and boys 00:02:04.280 |
And now here's my conversation with Ayanna Howard. 00:02:08.320 |
What or who is the most amazing robot you've ever met 00:02:13.640 |
or perhaps had the biggest impact on your career? 00:02:37.920 |
but project whatever they wish for the robot to be onto. 00:02:41.960 |
- Onto Rosie, but also, I mean, think about it. 00:02:57.080 |
but she cared about people, especially the kids. 00:03:12.080 |
Just like you said, Rosie pushed back a little bit 00:03:19.920 |
we want them because they enhance our quality of life. 00:03:24.000 |
And usually that's linked to something that's functional. 00:03:31.600 |
Like there's the Saturday driving where I can just speed, 00:03:35.400 |
but then there's the, I have to go to work every day 00:03:40.480 |
And so robots are designed to basically enhance our ability 00:03:49.800 |
And so the perfection comes from this aspect of interaction. 00:03:54.400 |
If I think about how we drive, if we drove perfectly, 00:04:02.240 |
So think about how many times you had to run past the light 00:04:10.440 |
or that little kid kind of runs into the street 00:04:18.440 |
Like if you think about it, we are not perfect drivers. 00:04:28.720 |
they wouldn't really be able to function with us. 00:04:31.200 |
- Can you linger a little bit on the word perfection? 00:04:34.520 |
So from the robotics perspective, what does that word mean? 00:04:55.640 |
can I complete it at 100% accuracy with zero errors? 00:04:59.520 |
And so that's kind of, if you think about perfection 00:05:15.600 |
sort of defining, staying in the lane, changing lanes. 00:05:29.160 |
That's the limit of what we think of as perfection. 00:05:33.760 |
is that when people think about perfection for robotics, 00:05:44.680 |
but she actually wasn't perfect in terms of accuracy, 00:05:47.440 |
but she was perfect in terms of how she interacted 00:06:03.560 |
with respect to the rules that we just made up anyway. 00:06:06.480 |
Right, and so I think there's this disconnect sometimes 00:06:09.560 |
between what we really want and what happens. 00:06:13.320 |
And we see this all the time, like in my research, right? 00:06:16.000 |
Like the optimal quote unquote optimal interactions 00:06:20.440 |
are when the robot is adapting based on the person, 00:06:24.400 |
not 100% following what's optimal based on the rules. 00:06:29.400 |
- Just to linger on autonomous vehicles for a second, 00:06:32.720 |
just your thoughts, maybe off the top of your head, 00:06:41.680 |
in the automotive industry that are very confident 00:06:43.880 |
from Elon Musk to Waymo to all these companies. 00:06:51.440 |
- The gap between the perfection and the human definition 00:07:13.240 |
and corporations, Elon Musk being one of them, 00:07:15.400 |
that said, you know, self-driving cars on the road 00:07:24.600 |
And now people are saying five years, 10 years, 20 years. 00:07:44.080 |
You still have people walking, you still have stores, 00:07:55.680 |
Because if you think about robotics in general, 00:08:16.720 |
So a closed campus where you don't have self-driving cars 00:08:20.600 |
and maybe some protection so that the students 00:08:23.800 |
don't jet in front just because they wanna see what happens. 00:08:29.000 |
I think that's where we're gonna see the most success 00:08:45.200 |
in the automotive industry robots operating today 00:08:50.280 |
are ones that are traveling over 55 miles an hour 00:08:58.880 |
So I just, I would love to hear sort of your, 00:09:04.240 |
So one, I don't know if you've gotten to see, 00:09:06.960 |
you've heard about something called Smart Summon, 00:09:17.960 |
slowly sort of tries to navigate the parking lot 00:09:22.680 |
And there's some incredible amounts of videos 00:09:27.640 |
as it awkwardly tries to navigate this environment, 00:09:39.320 |
in this kind of interesting human-robot interaction space. 00:09:42.040 |
So what are your thoughts in general about it? 00:09:47.840 |
- I do, mainly because I'm a gadget freak, right? 00:09:52.120 |
So I'd say it's a gadget that happens to have some wheels. 00:09:59.400 |
I mean, you're a human-robot interaction roboticist. 00:10:05.560 |
So what does it feel like for a machine to come to you? 00:10:25.440 |
I'm doing things like automated backing into, 00:10:28.760 |
so there's like a feature where you can do this 00:10:35.720 |
or even, you know, pseudo autopilot on the freeway, right? 00:10:46.960 |
Like I am very aware of it, but I'm also fascinated by it. 00:10:57.480 |
from all of these people who are cutting it on. 00:11:00.440 |
Like every time I cut it on, it's getting better, right? 00:11:04.160 |
And so I think that's what's amazing about it is that. 00:11:07.160 |
- This nice dance of you're still hyper vigilant. 00:11:17.600 |
as a roboticist, we'll talk about trust a little bit. 00:11:26.520 |
Like where you just enjoy exploring technology 00:11:33.720 |
between robotics and humans is where you use it, 00:11:42.160 |
- Yeah, so I think I just don't necessarily trust technology, 00:11:50.180 |
So when it first comes out, I will use everything, 00:11:54.280 |
but I will be very, very cautious of how I use it. 00:11:57.400 |
- Do you read about it or do you explore it by just try it? 00:12:05.000 |
do you read the manual or do you learn through exploration? 00:12:08.800 |
If I have to read the manual, then I do design. 00:12:15.040 |
- Elon Musk is very confident that you kind of take it 00:12:24.480 |
where you don't really trust and then you try 00:12:29.160 |
it's going to incrementally improve itself into full, 00:12:41.040 |
So the promise there is by the end of next year, 00:12:47.200 |
What's your sense about that journey that Tesla's on? 00:12:52.200 |
- So there's kind of three things going on now. 00:13:10.080 |
Like there are some users and it's because what happens is 00:13:23.880 |
And as people, we tend to swing to the other extreme, right? 00:13:28.280 |
Because like, oh, I was like hyper, hyper fearful 00:13:33.980 |
And we just tend to swing, that's just human nature. 00:13:46.500 |
And it's a scary notion that there's a certain point 00:13:48.920 |
where you allow yourself to look at the smartphone 00:14:10.000 |
I mean, just everything on the internet, right? 00:14:12.780 |
Think about how reliant we are on certain apps 00:14:19.420 |
20 years ago, people would have been like, oh yeah, that's stupid. 00:14:25.900 |
Like now it's just like, oh, of course I've been using it. 00:14:30.780 |
Of course, aliens, I didn't think they existed, 00:14:45.800 |
And I think there will be a group of individuals 00:15:10.400 |
would probably be the optimal kind of person, right? 00:15:15.640 |
very comfortable and not hypersensitive, right? 00:15:20.000 |
I'm just hypersensitive 'cause I designed this stuff. 00:15:23.560 |
So there is a target demographic that will swing. 00:15:25.920 |
The other one though is you still have these humans 00:15:34.680 |
And as long as we have people that are on the same streets, 00:15:47.000 |
some of the silliness of human drivers, right? 00:15:51.480 |
Like as an example, when you're next to that car 00:15:56.320 |
that has that big sticker called student driver, right? 00:15:59.880 |
Like you are like, oh, either I am going to like go around. 00:16:04.680 |
Like we know that that person is just gonna make mistakes 00:16:14.360 |
and I see two fairly young looking individuals 00:16:28.560 |
and that experience into basically an autopilot? 00:16:33.560 |
- Yeah, and there's millions of cases like that 00:16:37.320 |
where we take little hints to establish context. 00:16:41.240 |
I mean, you said kind of beautifully poetic human things, 00:16:44.400 |
but there's probably subtle things about the environment, 00:16:55.360 |
and therefore you can make some kind of judgment 00:16:57.200 |
about the group behavior of pedestrians, blah, blah, blah. 00:17:02.720 |
Like if you're in Boston, how people cross the street, 00:17:07.160 |
like lights are not an issue versus other places 00:17:10.680 |
where people will actually wait for the crosswalk. 00:17:22.560 |
that intersection to intersection is different. 00:17:25.520 |
So every intersection has a personality of its own. 00:17:28.960 |
So certain neighborhoods of Boston are different. 00:17:30.880 |
So we're kind of, based on different timing of day, 00:17:35.320 |
at night, it's all, there's a dynamic to human behavior 00:17:42.480 |
We're not able to introspect and figure it out, 00:17:56.440 |
is there something that could be done, you think, 00:18:06.520 |
Do you necessarily need to build a bird that flies 00:18:15.400 |
and I kind of, I talk about it as a fixed space, 00:18:19.360 |
where, so imagine that there's a neighborhood 00:18:23.320 |
that's a new smart city or a new neighborhood that says, 00:18:26.840 |
you know what, we are going to design this new city 00:18:33.840 |
And then doing things, knowing that there's anomalies, 00:18:45.560 |
So you still have people, but you do very specific things 00:18:49.320 |
to try to minimize the noise a little bit, as an example. 00:18:53.840 |
- And the people themselves become accepting of the notion 00:19:03.600 |
Like individuals will move into this neighborhood 00:19:06.280 |
knowing like this is part of like the real estate pitch. 00:19:10.680 |
And so I think that's a way to do a shortcut. 00:19:24.080 |
but it's a safer space and is more of an accepting space. 00:19:28.880 |
I.e. when something in that space might happen, 00:19:40.800 |
- And you said three things, did we cover all of them? 00:19:51.000 |
- And the mishmash with like, with policy as well, 00:20:06.040 |
you know, if you're kind of honest of what cars do, 00:20:09.840 |
they kind of threaten each other's life all the time. 00:20:17.680 |
in order to navigate intersections, there's an assertiveness, 00:20:22.360 |
and if you were to reduce it to an objective function, 00:20:25.320 |
there's a probability of murder in that function, 00:20:34.440 |
it has to be low enough to be acceptable to you 00:20:38.480 |
on an ethical level as an individual human being, 00:20:41.360 |
but it has to be high enough for people to respect you, 00:20:45.360 |
to not sort of take advantage of you completely 00:20:49.680 |
So, I mean, I don't think there's a right answer here, 00:20:56.160 |
How do we solve that from a robotics perspective 00:21:08.720 |
- Now robotic algorithms would be killing people. 00:21:10.840 |
- Right, so it will be robotics algorithms that are, 00:21:14.480 |
no, it will be robotic algorithms don't kill people, 00:21:17.080 |
developers of robotic algorithms kill people, right? 00:21:19.840 |
I mean, one of the things is people are still in the loop, 00:21:26.640 |
I think people will still be in the loop at some point, 00:21:30.360 |
Like we're not necessarily at the stage where robots 00:21:50.640 |
- I mean, I think that's why the whole aspect of ethics 00:22:00.240 |
If you think about it, you can basically say, 00:22:04.880 |
I'm not going to work on weaponized AI, right? 00:22:07.520 |
Like people can say, that's not what I'm gonna do. 00:22:27.360 |
that you do have that power when you're coding 00:22:30.040 |
and things like that, I think that's just not a good thing. 00:22:35.040 |
Like we need to think about this responsibility 00:22:43.520 |
- Yeah, so it's not an option to not think about ethics. 00:22:47.000 |
I think it's a majority, I would say, of computer science. 00:22:54.080 |
about bias and so on, and we'll talk about it, 00:23:09.360 |
There's other smart people thinking about it. 00:23:11.160 |
It seems that everybody has to think about it. 00:23:17.040 |
whether it's bias or just every aspect of ethics 00:23:25.720 |
but I remember when we didn't have testers, right? 00:23:31.080 |
As a developer, you had to test your own code, right? 00:23:33.600 |
Like you had to go through all the cases and figure it out, 00:23:42.440 |
And so from there, what happens is most developers, 00:23:47.320 |
but it's usually like, okay, did my compiler bug out? 00:23:53.760 |
Like that's how you typically think about it as a developer 00:23:58.280 |
to another process and they're gonna test it out. 00:24:01.160 |
But I think we need to go back to those early days 00:24:09.560 |
okay, let me look at the ethical outcomes of this 00:24:21.240 |
I think that's where we are with respect to ethics. 00:24:23.320 |
Like, let's go back to what was good practices 00:24:26.360 |
only because we were just developing the field. 00:24:34.400 |
I've had to feel it recently in the last few months, 00:24:39.480 |
I've gotten a message, more than one from people. 00:24:42.660 |
I've unfortunately gotten some attention recently 00:24:52.360 |
because of working on semi-autonomous vehicles. 00:25:14.560 |
I mean, it's many nights where I wasn't able to sleep 00:25:19.080 |
You really do think about people that might die 00:25:23.840 |
Of course, you can then start rationalizing and saying, 00:25:27.400 |
40,000 people die in the United States every year 00:25:29.640 |
and we're trying to ultimately try to save lives. 00:25:36.680 |
And that's an important burden to carry with you 00:25:43.800 |
if we train this concept correctly from the beginning. 00:25:49.640 |
is like being a medical doctor, but think about it. 00:25:52.400 |
Medical doctors, if they've been in situations 00:26:17.200 |
They are given some of the tools to address that 00:26:35.840 |
if you think about the medical schools, right? 00:26:39.560 |
I think if we just change the messaging a little, 00:26:42.160 |
great gift, being a developer, great responsibility. 00:26:48.400 |
- But do you think, I mean, this is really interesting. 00:26:57.300 |
I mean, what does it feel like to make a mistake 00:27:01.320 |
in a surgery and somebody to die because of that? 00:27:07.040 |
in medical school, sort of how to be accepting of that risk? 00:27:10.560 |
- So, because I do a lot of work with healthcare robotics, 00:27:25.920 |
So they teach responsibility, but they also teach the value. 00:27:45.360 |
Versus a, well, I just picked the first widget and did, 00:27:48.840 |
right, like, so every decision is actually thought through. 00:28:06.360 |
It's a gift, but you have to treat it extremely seriously. 00:28:16.360 |
"The Ugly Truth About Ourselves and Our Robot Creations," 00:28:24.320 |
that may affect the function of various robotic systems. 00:28:27.120 |
Can you talk through, if you remember examples of some? 00:28:37.080 |
and so bias, which is different than prejudice. 00:28:38.840 |
So bias is that we all have these preconceived notions 00:28:41.880 |
about particular, everything from particular groups 00:28:56.040 |
those preconceived notions might affect our outputs, 00:29:02.240 |
- So there, the bias could be positive and negative, 00:29:04.680 |
and then is prejudice the negative kind of bias? 00:29:09.200 |
So prejudice is that not only are you aware of your bias, 00:29:13.560 |
but you then take it and have a negative outcome, 00:29:24.680 |
- That's the challenging aspect of all ethical questions. 00:29:30.040 |
and in fact, I think it might be in the paper, 00:29:31.760 |
'cause I think I talk about self-driving cars, 00:29:40.600 |
insurance companies charge quite a bit of money 00:29:50.880 |
But no one will, I mean, parents will be grumpy, 00:30:01.720 |
Everybody in human factors and safety research 00:30:07.120 |
almost, I mean, is quite ruthlessly critical of teenagers. 00:30:15.040 |
Is that okay to be ageist in this kind of way? 00:30:18.600 |
It's definitely age, there's no question about it. 00:30:24.960 |
'Cause you know that teenagers are more likely 00:30:29.840 |
to be in accidents, and so there's actually some data to it. 00:30:35.000 |
and you say, well, I'm going to make the insurance higher 00:30:39.400 |
for an area of Boston, because there's a lot of accidents. 00:30:44.400 |
And then they find out that that's correlated 00:30:52.440 |
Like that is not acceptable, but yet the teenager, 00:31:01.720 |
- And the way we figure that out as a society 00:31:03.960 |
by having conversations, by having discourse, 00:31:15.480 |
- So in terms of bias or prejudice in algorithms, 00:31:25.560 |
- So I think about quite a bit the medical domain, 00:31:34.520 |
typically based on gender and ethnicity, primarily, 00:31:42.320 |
Historically, if you think about FDA and drug trials, 00:31:48.680 |
it's harder to find women that aren't childbearing, 00:31:54.240 |
and so you may not test on drugs at the same level, right? 00:32:16.960 |
I will say that in the US, women average height 00:32:25.880 |
Like, if you're not thinking about it from the beginning, 00:32:38.280 |
well, you have different types of body structure, 00:32:44.600 |
Oh, this fits all the folks in my lab, right? 00:32:48.240 |
- So think about it from the very beginning is important. 00:32:56.120 |
Sadly, our society already has a lot of negative bias. 00:33:06.320 |
it's going to contain the same bias that a society contains. 00:33:09.040 |
And so, yeah, is there things there that bother you? 00:33:25.160 |
So the data that we're collecting is historic. 00:33:37.880 |
that was used in the first place to discriminate. 00:33:45.000 |
from the whole aspect of predictive policing, 00:33:51.440 |
There was a recent paper that had the healthcare algorithms, 00:34:16.320 |
on what kind of bias creeps into the healthcare space. 00:34:27.560 |
Okay, like, okay, that's totally a clickbait title. 00:34:32.200 |
and so there was data that these researchers had collected. 00:34:36.720 |
I believe, I wanna say it was either Science or Nature. 00:34:47.560 |
I believe, between black and white women, right? 00:34:59.280 |
And so, and it was tied to ethnicity, tied to race. 00:35:48.040 |
And so I think, and that's why the sensational title, 00:36:04.280 |
- I think that's really important to linger on. 00:36:28.160 |
then if we set the bar of perfection, essentially, 00:36:33.080 |
of it has to be perfectly fair, whatever that means, 00:36:39.640 |
But that's really important to say what you just said, 00:36:42.040 |
which is, well, it's still better than some things. 00:36:55.920 |
So it's harder to say, you know, is this hospital, 00:37:04.480 |
Well, with AI, it can process through all this data 00:37:18.160 |
versus like waiting for someone to sue someone else 00:37:25.160 |
we need to capitalize on a little bit more, right? 00:37:43.040 |
she basically called out a couple of companies and said, 00:37:45.880 |
hey, and most of them were like, oh, embarrassment, 00:37:54.960 |
And then it was like, oh, here's some more issues. 00:37:56.880 |
And I think that conversation then moves that needle 00:38:01.880 |
to having much more fair and unbiased and ethical aspects, 00:38:06.880 |
as long as both sides, the developers are willing to say, 00:38:10.640 |
okay, I hear you, yes, we are going to improve, 00:38:19.720 |
- Yes, so speaking of this really nice notion 00:38:23.080 |
that AI is maybe flawed but better than humans, 00:38:29.240 |
one example of flawed humans is our political system. 00:38:38.760 |
do you have a hope for AI sort of being elected 00:38:49.840 |
or being able to be a powerful representative of the people? 00:38:57.160 |
that this whole world of AI is in partnerships with people. 00:39:07.680 |
I don't believe that we should have an AI for president, 00:39:28.040 |
And you put smart people with smart expertise 00:39:35.760 |
as one of those smart individuals giving input. 00:39:43.880 |
Like all of these things that a human is processing, right? 00:39:53.600 |
that are going to be at the end of the decision. 00:39:55.560 |
And I don't think as a world, as a culture, as a society, 00:39:59.360 |
that we would totally believe, and this is us, 00:40:05.360 |
but we need to see that leader, that person as human. 00:40:10.360 |
And most people don't realize that like leaders 00:40:20.480 |
they don't wake up in the morning and be like, 00:40:27.680 |
but let me get a little bit of feedback on this. 00:40:31.120 |
And then it's a, yeah, that was an awesome idea. 00:41:11.480 |
that have adopted the algorithms may try to fix it, right? 00:41:14.280 |
And so it's really ad hoc and it's not systematic. 00:41:18.880 |
There's, it's just, it's kind of like, I'm a researcher. 00:41:24.760 |
which means that there's a whole lot out there 00:41:35.720 |
but that process I think could be done a little bit better. 00:41:48.320 |
Like maybe the corporations, when they think about a product 00:41:51.760 |
they should, instead of, in addition to hiring these bug, 00:42:10.320 |
for the people who find these security holes, 00:42:31.240 |
their own bias lens, like I'm interested in age 00:42:43.440 |
So like, sort of, if we look at a company like Twitter, 00:42:51.680 |
for discriminating against certain political beliefs. 00:43:00.720 |
and I know the Twitter folks are working really hard at it. 00:43:06.880 |
You know, the kind of evidence that people bring 00:43:47.200 |
that's very difficult to sort of contextualize 00:43:51.620 |
Do you have a hope for companies like Twitter and Facebook? 00:43:55.760 |
- Yeah, so I think there's a couple of things going on. 00:44:09.360 |
We're also becoming reliant on a lot of these, 00:44:14.360 |
the apps and the resources that are provided. 00:44:18.000 |
So some of it is kind of anger, like, I need you, right? 00:44:32.840 |
So some of it is like, oh, we'll fix it in-house. 00:44:40.920 |
because I think it's a problem that foxes eat hens. 00:45:02.000 |
If I find something, I'm probably going to want to fix it, 00:45:04.480 |
and hopefully the media won't pick it up, right? 00:45:09.360 |
because someone inside is going to be mad at you 00:45:13.640 |
yeah, they canned the resume survey because, right? 00:45:31.320 |
us as a human civilization on the whole is good 00:45:35.360 |
and can be trusted to guide the future of our civilization 00:45:44.160 |
And, you know, there were some dark times in history, 00:45:50.040 |
I think now we're in one of those dark times. 00:45:57.600 |
So if it was just US, I'd be like, yeah, it's a US thing, 00:46:00.080 |
but we're seeing it, like, worldwide, this polarization. 00:46:10.240 |
that at the end of the day, people are good, right? 00:46:26.680 |
People at the time, so the city closed for, you know, 00:46:30.520 |
little snow, but it was ice, and the city closed down. 00:46:33.480 |
But you had people opening up their homes and saying, 00:46:35.800 |
hey, you have nowhere to go, come to my house, right? 00:46:39.120 |
Hotels were just saying, like, sleep on the floor. 00:46:41.840 |
Like, places like, you know, the grocery stores were like, 00:46:45.960 |
There was no like, oh, how much are you gonna pay me? 00:46:52.160 |
strangers were just like, can I give you a ride home? 00:46:55.560 |
And that was a point I was like, you know what? 00:47:03.120 |
there's a compassionate love that we all have within us. 00:47:06.960 |
It's just that when all of that is taken care of 00:47:51.960 |
have you happened to have ever seen Space Odyssey, 00:48:38.640 |
I'm gonna make something that I think is perfect, 00:48:44.640 |
it'll be perfect based on the wrong assumptions, right? 00:48:47.600 |
That's something that you don't know until you deploy 00:48:53.880 |
But what that means is that when we design software, 00:49:04.120 |
There has to be the ability that once it's out there, 00:49:23.840 |
It was perfectly correct based on those assumptions. 00:49:34.080 |
- And the change, the fallback would be to a human. 00:49:37.080 |
So you ultimately think like human should be, 00:49:51.440 |
I still think the human needs to be part of the equation 00:49:58.480 |
what are some fascinating things in robotic space 00:50:03.520 |
Or just in general, what have you gotten to play with 00:50:07.720 |
and what are your memories from working at NASA? 00:50:12.600 |
was they were working on a surgical robot system 00:50:21.880 |
And this was back in, oh my gosh, it must've been, 00:50:30.560 |
- So it's like almost like a remote operation. 00:50:34.480 |
And in fact, you can even find some old tech reports on it. 00:50:38.360 |
So think of it, like now we have Da Vinci, right? 00:50:41.600 |
Like think of it, but these were like the late '90s, right? 00:50:53.880 |
'Cause the technology, but it was like functional 00:50:59.200 |
a version of haptics to actually do the surgery. 00:51:04.320 |
and like the eyeballs and you can see this little drill. 00:51:13.680 |
because it was so outside of my like possible thoughts 00:51:21.320 |
And I mean, what's the most amazing of a thing like that? 00:51:28.200 |
It was the kind of first time that I had physically seen 00:51:48.040 |
There's a person and a robot like in the same space. 00:51:53.040 |
Like for me, it was a magical moment that I can't, 00:51:56.040 |
as life transforming that I recently met Spot Mini 00:52:01.320 |
I don't know why, but on the human robot interaction, 00:52:09.800 |
And it was, I don't know, it was almost like falling in love 00:52:22.440 |
So have you had a robot like that in your life 00:52:25.160 |
that made you maybe fall in love with robotics? 00:52:37.000 |
I was a 12-year-old, like I'm gonna be a roboticist. 00:52:46.480 |
And so, I mean, that was like a seminal moment, 00:52:52.440 |
Like it wasn't like I was in the same space and I met, 00:52:56.640 |
- Just lingering on bionic woman, which by the way, 00:52:58.920 |
because I read that about you, I watched a bit of it 00:53:16.120 |
- It was probably the sound of my imagination. 00:53:19.200 |
Especially when you're younger, it just catches you. 00:53:24.760 |
you mentioned cybernetics, did you think of it as robotics 00:53:37.040 |
or was it the whole thing, like even just the limbs 00:53:42.960 |
I probably would have been more of a biomedical engineer 00:53:50.040 |
like the bionic parts, the limbs, those aspects of it. 00:53:55.040 |
- Are you especially drawn to humanoid or human-like robots? 00:53:59.640 |
- I would say human-like, not humanoid, right? 00:54:04.200 |
I think it's this aspect of that interaction, 00:54:07.800 |
whether it's social and it's like a dog, right? 00:54:10.680 |
Like that's human-like because it understands us, 00:54:14.120 |
it interacts with us at that very social level. 00:54:21.860 |
but only if they interact with us as if we are human. 00:54:26.860 |
- But just to linger on NASA for a little bit, 00:54:30.920 |
what do you think, maybe if you have other memories, 00:54:34.080 |
but also what do you think is the future of robots in space? 00:54:38.560 |
We mentioned how, but there's incredible robots 00:54:50.440 |
What do you think the future of robots is there? 00:55:00.720 |
that's kind of exciting, but that's like near term. 00:55:06.040 |
You know, my favorite, favorite, favorite series 00:55:13.300 |
You know, I really hope, and even "Star Trek," 00:55:17.160 |
like if I calculate the years, I wouldn't be alive, 00:55:20.060 |
but I would really, really love to be in that world. 00:55:28.440 |
like, you know, like "Voyage," like "Adventure 1." 00:55:41.420 |
- The Data would have to be, even though that wasn't, 00:55:44.760 |
- So Data is a robot that has human-like qualities. 00:56:08.600 |
The issue is that emotions make us irrational agents. 00:56:20.040 |
even if it was based on an emotional scenario, right? 00:56:50.740 |
Right, and so that is a developmental process, 00:56:54.600 |
and I'm sure there's a bunch of psychologists 00:56:57.600 |
that could go through, like you can have a 60-year-old adult 00:57:00.640 |
who has the emotional maturity of a 10-year-old, right? 00:57:04.640 |
And so there's various phases that people should go through 00:57:19.720 |
when you're talking about HRI, human-robot interaction, 00:57:30.400 |
read a lot in the cognitive science literature 00:57:36.200 |
because they understand a lot about human-human relations 00:57:41.200 |
and developmental milestones and things like that. 00:57:45.960 |
And so we tend to look to see what's been done out there. 00:57:50.960 |
Sometimes what we'll do is we'll try to match that 00:58:01.020 |
Sometimes it is, and sometimes it's different. 00:58:03.100 |
And then when it's different, we try to figure out, 00:58:09.060 |
but it's the same in the other scenario, right? 00:58:15.360 |
- Would you say that's, if we're looking at the future 00:58:19.160 |
would you say the psychology piece is the hardest? 00:58:28.440 |
do you consider yourself a roboticist or a psychologist? 00:58:42.400 |
do you see yourself more and more wearing the psychology hat? 00:58:49.040 |
are the hard problems in human-robot interactions 00:58:51.640 |
fundamentally psychology, or is it still robotics, 00:59:01.720 |
The hardest part is the adaptation and the interaction. 00:59:10.820 |
I've become much more of a roboticist/AI person 00:59:20.200 |
I was electrical engineer, I was control theory, right? 00:59:24.080 |
And then I started realizing that my algorithms 00:59:30.640 |
And so then I was like, okay, what is this human thing? 00:59:34.400 |
And then I realized that human perception had, 00:59:38.720 |
there was a lot in terms of how we perceive the world, 00:59:51.200 |
and realizing that humans actually offered quite a bit. 00:59:56.080 |
you become more of an artificial intelligence, AI, 00:59:59.320 |
and so I see myself evolving more in this AI world 01:00:12.140 |
- So you're a world-class expert researcher in robotics, 01:00:29.440 |
So why did you brave into the interaction with humans? 01:00:36.880 |
- It's a hard problem, and it's very risky as an academic. 01:00:41.120 |
- And I knew that when I started down that journey, 01:00:49.960 |
in this world that was nuanced, it was just developing. 01:00:53.520 |
We didn't even have a conference, right, at the time. 01:01:01.600 |
It was the fact that I looked at what interests me 01:01:06.600 |
in terms of the application space and the problems, 01:01:21.280 |
I'd probably still be sending rovers to glaciers, right? 01:01:28.120 |
And the other thing was that they were hard, right? 01:01:30.640 |
So I like having to go into a room and being like, 01:01:48.200 |
I'd go someplace and make a lot more money, right? 01:01:51.060 |
I think I stay an academic and choose to do this 01:01:55.020 |
because I can go into a room and like, "That's hard." 01:02:03.240 |
but if I just look at the field of AI broadly, 01:02:15.080 |
People, especially relative to how many people 01:02:22.480 |
Because most people are just afraid of the humans, 01:02:30.880 |
exciting spaces, it seems to be incredible for that. 01:02:51.320 |
- So some of the things I study in this domain 01:02:53.920 |
is not just trust, but it really is overtrust. 01:03:11.900 |
based on the decision or the actions of the technology, 01:03:29.100 |
Would you follow this robot in an abandoned building? 01:03:39.640 |
Or what you think you would like to think, right? 01:03:41.980 |
And so I'm really concerned about the behavior, 01:03:50.480 |
It's not whether before you went onto the street, 01:03:52.920 |
you clicked on, like, I don't trust self-driving cars. 01:04:01.480 |
so I'm an insider in a certain philosophical sense. 01:04:06.040 |
It's frustrating to me how often trust is used in surveys, 01:04:24.640 |
I mean, the action, your own behavior is what trust is. 01:04:35.640 |
poetry that you weave around your actual behavior. 01:05:08.080 |
would you get in the car with a stranger and pay them? 01:05:12.720 |
- How many people do you think would have said, 01:05:22.000 |
to have them drop you home as a single female? 01:05:25.680 |
Like, how many people would say, that's stupid? 01:05:37.760 |
and I, yeah, I'm gonna put my kid in this car 01:05:49.760 |
- Yeah, and certainly with robots, with autonomous vehicles, 01:05:54.760 |
that's, it's, yeah, it's, the way you answer it, 01:06:00.440 |
especially if you've never interacted with that robot before. 01:06:05.720 |
you being able to respond correctly on a survey is impossible. 01:06:09.640 |
But what role does trust play in the interaction, 01:06:14.280 |
Like, is it good to, is it good to trust a robot? 01:06:31.640 |
- Yeah, so this is still an open area of research, 01:06:35.040 |
but basically what I would like in a perfect world 01:06:40.040 |
is that people trust the technology when it's working 100%, 01:06:45.000 |
and people will be hypersensitive and identify when it's not. 01:06:52.800 |
But what we find is that people swing, right? 01:07:07.760 |
with robotics is positive, it mitigates any risk, 01:07:16.960 |
it means that I'm more likely to either not see it 01:07:30.400 |
because technology is not 100% accurate, right? 01:07:32.680 |
It's not 100% accurate, although it may be perfect. 01:07:35.120 |
- How do you get that first moment right, do you think? 01:07:37.720 |
There's also an education about the capabilities 01:07:42.520 |
Do you have a sense of how you educate people correctly 01:07:50.280 |
So one of the studies that actually has given me some hope 01:07:50.280 |
that I was trying to figure out how to put in robotics. 01:08:07.880 |
here, you need to look at these areas on the X-ray. 01:08:12.880 |
What they found was that when the system provided one choice, 01:08:20.600 |
there was this aspect of either no trust or overtrust, right? 01:08:25.600 |
Like, I don't believe it at all, or a yes, yes, yes, yes. 01:08:36.440 |
Instead, when the system gave them multiple choices, 01:08:45.320 |
that you need to look at was some place on the X-ray. 01:08:57.600 |
and the accuracy of the entire population increased, right? 01:09:02.600 |
So basically it was a, you're still trusting the system, 01:09:11.600 |
like your human decision processing into the equation. 01:09:18.640 |
- Yeah, so there's a fascinating balance to have to strike. 01:09:35.720 |
but in general also, what other space can robots interact 01:10:06.720 |
It's like, how can you have learning in that classroom? 01:10:10.400 |
Because you just don't have the human capital. 01:10:20.360 |
where they offset some of this lack of resources 01:10:25.120 |
in certain communities, I think that's a good place. 01:10:30.920 |
is using these systems then for workforce retraining 01:10:38.960 |
that are going to come out later on of job loss, 01:10:48.320 |
I think that's exciting areas that can be pushed even more, 01:10:56.760 |
- What would you say are some of the open problems 01:11:08.720 |
or just folks of all ages who need to be retrained, 01:11:28.880 |
- So identifying whether a person is focused is, 01:11:44.640 |
is that personalized adaptation based on any concepts. 01:11:54.680 |
and I'm working with a kid learning, I don't know, 01:12:05.880 |
some type of new coding skill to a displaced mechanic? 01:12:10.880 |
Like, what does that actually look like, right? 01:12:17.680 |
content is different, two different target demographics 01:12:32.040 |
but like literally to the individual human being? 01:12:35.400 |
- I think personalization is really important, 01:12:44.720 |
And so if I can label you as along some certain dimensions, 01:12:49.720 |
then even though it may not be you specifically, 01:12:58.240 |
So the sample size, this is how they best learn, 01:13:06.840 |
And it's because, I mean, it's one of the reasons 01:13:09.680 |
why educating in large classrooms is so hard, right? 01:13:13.400 |
You teach to the median, but there's these individuals 01:13:22.400 |
and those are the ones that are usually kind of left out. 01:13:26.400 |
So highly intelligent individuals may be disruptive, 01:13:28.960 |
and those who are struggling might be disruptive 01:13:33.040 |
- Yeah, and if you narrow the definition of the group 01:13:40.400 |
it's not individual needs, but really the most-- 01:13:47.320 |
of successful recommender systems do, Spotify and so on. 01:13:51.000 |
So it's sad to believe, but I'm, as a music listener, 01:13:59.280 |
- Yeah, I've been labeled, and successfully so, 01:14:02.120 |
because they're able to recommend stuff that I-- 01:14:04.640 |
- Yeah, but applying that to education, right? 01:14:09.800 |
- Do you have a hope for our education system? 01:14:13.120 |
- I have more hope for workforce development, 01:14:19.720 |
Even if you look at VC investments in education, 01:14:28.600 |
And so I think that government investments is increasing. 01:14:32.960 |
There's like a claim, and some of it's based on fear, right? 01:14:36.160 |
Like AI's gonna come and take over all these jobs. 01:14:38.080 |
What are we gonna do with all these non-paying taxes 01:14:51.840 |
because it's this, it's still a who's gonna pay for it, 01:14:56.440 |
and you won't see the results for like 16 to 18 years. 01:15:01.440 |
It's hard for people to wrap their heads around that. 01:15:06.040 |
- But on the retraining part, what are your thoughts? 01:15:10.680 |
There's a candidate, Andrew Yang, running for president, 01:15:21.080 |
- Universal basic income in order to support us 01:15:26.760 |
and allows you to explore and find other means. 01:15:30.200 |
Like, do you have a concern about society-transforming effects 01:15:30.200 |
I do know that AI robotics will displace workers. 01:15:54.520 |
What I worry about is, that's not what I worry about, 01:15:59.520 |
What I worry about is the type of jobs that will come out. 01:16:02.240 |
Right, like people who graduate from Georgia Tech 01:16:50.400 |
But that's such a small part of the population 01:16:58.560 |
whether it's AI in healthcare, AI in education, 01:17:06.040 |
- And that's part of the thing that you were talking about 01:17:12.480 |
have to be thinking about access and all those things, 01:17:17.920 |
Let me ask some philosophical, slightly romantic questions. 01:17:26.320 |
Okay, do you think one day we'll build an AI system 01:17:39.960 |
- Yeah, although she kind of didn't fall in love with him, 01:17:43.400 |
or she fell in love with like a million other people, 01:17:50.960 |
- Yes, so I do believe that we can design systems 01:17:55.120 |
where people would fall in love with their robot, 01:18:13.400 |
if you understand the cognitive science about it, right? 01:18:18.560 |
of all close relationship and love in general 01:18:27.200 |
I mean, manipulation is a negative connotation. 01:18:29.560 |
- And that's why I don't like to use that word particularly. 01:18:46.400 |
there are some individuals that have been dating 01:19:12.280 |
Like there's no reason why that can't happen. 01:19:17.680 |
so what role, you've kind of mentioned with data, 01:19:20.640 |
emotion being, can be problematic if not implemented well, 01:19:26.320 |
What role does emotion and some other human-like things, 01:19:32.880 |
for good human-robot interaction and something like love? 01:19:49.040 |
It just means that if you think about their programming, 01:19:52.320 |
they might put the other person's needs in front of theirs 01:19:58.040 |
You look at, think about it as return on investment. 01:20:01.800 |
As part of that equation, that person's happiness, 01:20:08.000 |
And the reason why is because I care about them, right? 01:20:20.600 |
So you can think of how to do this actually quite easily. 01:20:32.600 |
because we don't have a classical definition of love. 01:20:41.640 |
to look into each other's minds to see the algorithm. 01:20:48.760 |
is it possible that, especially if that's learned, 01:21:00.680 |
if the system says, "I'm conscious, I'm afraid of death," 01:21:20.160 |
You've kind of phrased the robot in a very roboticist way, 01:21:30.600 |
"a compelling human-robot interaction experience 01:21:33.360 |
"that makes you believe that the robot cares for your needs, 01:21:39.040 |
But what if the robot says, "Please don't turn me off"? 01:21:46.560 |
like there's an entity, a being, a soul there, right? 01:22:00.120 |
- So I can see a future if we don't address it 01:22:12.360 |
"Hey, this should be something that's fundamental." 01:22:35.960 |
- Careful what you say, because the robots 50 years from now 01:22:39.320 |
will be listening to this, and you'll be on TV saying, 01:22:45.360 |
And so this is my, and as I said, I have a biased lens, 01:22:54.000 |
and I actually put this in kind of the, as a roboticist, 01:22:59.000 |
you don't necessarily think of robots as human, 01:23:02.600 |
with human rights, but you could think of them 01:23:09.280 |
or you could think of them in the category of animals. 01:23:12.960 |
And so both of those have different types of rights. 01:23:18.440 |
So animals have their own rights as a living being, 01:23:28.240 |
But as humans, if we abuse them, we go to jail. 01:23:32.080 |
So they do have some rights that protect them, 01:23:36.120 |
but don't give them the rights of citizenship. 01:23:42.400 |
property, the rights are associated with the person. 01:23:49.640 |
or steals your property, there are some rights, 01:23:53.960 |
but it's associated with the person who owns that. 01:24:03.480 |
how society has changed, women were property, right? 01:24:11.960 |
They were thought of as property of, like their-- 01:24:15.880 |
- Yeah, assaulting a woman meant assaulting the property 01:24:22.960 |
is that we will establish some type of norm at some point, 01:24:32.000 |
like there are still some countries that don't have, 01:24:39.760 |
And so I do see a world where we do establish 01:24:51.200 |
I think we will have this conversation at that time, 01:25:01.960 |
Just out of curiosity, Anki, Jibo, Mayfield Robotics, 01:25:06.960 |
with their robot Kuri, Sci-Fi Works, Rethink Robotics, 01:25:16.240 |
and they've all gone out of business recently. 01:25:38.960 |
Only one of them I don't understand, and that was Anki. 01:25:43.040 |
That's actually the only one I don't understand. 01:25:59.200 |
and I'm like, they seem to have product-market fit. 01:26:11.960 |
- Yeah, so although we think robotics was getting there, 01:26:20.400 |
I think if they'd been given a couple of more years, 01:26:32.760 |
I have a product that I wanna sell at a certain price. 01:26:37.200 |
Are there enough people out there, the market, 01:26:40.080 |
that are willing to buy the product at that market price 01:26:50.440 |
If it costs you $1,000, and everyone wants it, 01:27:07.640 |
the company that makes Roombas, vacuum cleaners, 01:27:10.840 |
can you comment on, did they find the right product, 01:27:18.680 |
is also another kind of question underlying all this. 01:27:20.360 |
- So if you think about iRobot and their story, right? 01:27:23.800 |
Like when they first, they had enough of a runway, right? 01:27:31.440 |
They were a military, they were contracts, primarily, 01:27:46.760 |
- To then try to, the vacuum cleaner is what I've been told 01:28:13.560 |
but they found enough people who were willing to fund it. 01:28:16.760 |
And I mean, I forgot what their loss profile was 01:28:22.240 |
but they became profitable in sufficient time 01:28:29.240 |
there's still people willing to pay a large amount of money, 01:28:39.240 |
and figured it all out, now there's competitors. 01:28:43.560 |
The competition, and they have quite a number, 01:28:46.680 |
even internationally, like there's some products out there 01:29:03.680 |
although as a roboticist, it's kind of depressing, 01:29:24.880 |
Facebook was not the first, sorry, MySpace, right? 01:29:28.440 |
Like think about it, they were not the first players. 01:29:31.200 |
Those first players, like they're not in the top five, 01:29:39.520 |
They proved, they started to prove out the market, 01:29:52.400 |
The second batch, I think might make it to the next level. 01:30:15.640 |
And with robotics, one of the things you have to make sure 01:30:20.240 |
is to be transparent and have people deeply trust you 01:30:23.160 |
to let a robot into their lives, into their home. 01:30:25.960 |
When do you think the second batch of robots, 01:30:36.720 |
- So if I think about, 'cause I try to follow the VC 01:30:40.200 |
kind of space in terms of robotic investments. 01:30:53.040 |
And then there's all these self-driving Xs, right? 01:30:56.440 |
And so I don't know if they're a first batch of something 01:31:15.880 |
some of them have some of the flavor of like co-robots. 01:31:20.840 |
- So basically a robot and human working in the same space. 01:31:25.840 |
So some of the companies are focused on manufacturing. 01:31:30.600 |
So having a robot and human working together in a factory, 01:31:35.600 |
some of these co-robots are robots and humans 01:31:42.200 |
Like there's different versions of these companies 01:31:47.040 |
so Rethink Robotics would be like one of the first, 01:31:50.320 |
at least well-known companies focused on this space. 01:32:11.280 |
- Yeah, so there's a lot of mystery about this now. 01:32:14.000 |
Of course, it's hard to say that this is the second batch 01:32:41.520 |
I don't think people should be afraid, but with a caveat. 01:32:51.520 |
understand that we need to change something, right? 01:33:01.600 |
- Which is the dimension of change that's needed? 01:33:09.520 |
thinking about like the conversation is going on, right? 01:33:12.840 |
It's no longer a, we're gonna deploy it and forget that, 01:34:01.360 |
- But overall, you're saying if we start to think more 01:34:03.720 |
and more as a community about these ethical issues, 01:34:07.080 |
- Yeah, I don't think people should be afraid. 01:34:17.600 |
- Do you have worries of existential threats of robots 01:34:41.520 |
And I always correlate this with a parent and a child, right? 01:34:45.360 |
So think about it, as a parent, what do we want? 01:34:47.400 |
We want our kids to have a better life than us. 01:34:56.080 |
And then as we grow older, our kids think and know 01:35:00.000 |
they're smarter and better and more intelligent 01:35:14.480 |
We instilled in them this whole aspect of community. 01:35:22.760 |
it's still about this love, caring relationship. 01:35:28.040 |
So even if like, you know, we've created the singularity 01:35:35.680 |
the fact is it might say, I am smarter, I am sentient. 01:35:49.600 |
Still just to come back for Thanksgiving dinner 01:35:59.840 |
may be one of your more favorite AI-related movies. 01:36:20.200 |
That symbiotic relationship is that they don't destroy us, 01:36:38.720 |
But then there were humans that had a choice, right? 01:36:44.480 |
Like you had a choice to stay in this horrific, 01:36:47.760 |
horrific world where it was your fantasy life 01:36:51.320 |
with all of the anomalies, perfection, but not accurate. 01:36:58.040 |
and like have maybe no food for a couple of days, 01:37:09.800 |
but I think about us having the symbiotic relationship. 01:37:23.920 |
that we'll have to find some symbiotic relationship. 01:37:32.400 |
I will take the other pill in order to make a difference. 01:37:36.320 |
- So if you could hang out for a day with a robot, 01:37:41.200 |
real or from science fiction, movies, books, safely, 01:37:45.320 |
and get to pick his or her, their brain, who would you pick? 01:38:09.840 |
- But don't you think it'd be a more interesting 01:38:25.320 |
is because I could have a conversation with him 01:38:35.680 |
And he could go through like the rational thinking 01:38:40.280 |
he could also help me think through it as well. 01:38:44.760 |
fundamental questions I think I could ask him 01:38:52.480 |
- I don't think there's a better place to end it. 01:39:03.280 |
and thank you to our presenting sponsor, Cash App. 01:39:11.920 |
a STEM education nonprofit that inspires hundreds 01:39:19.280 |
If you enjoy this podcast, subscribe on YouTube, 01:39:29.800 |
And now let me leave you with some words of wisdom 01:39:40.560 |
We should each be treated with appropriate respect. 01:39:43.600 |
Thank you for listening and hope to see you next time.