Chris Gerdes (Stanford) on Technology, Policy and Vehicle Safety - MIT Self-Driving Cars
Chapters
0:00 Introduction
1:20 Chris Gerdes
7:21 What is vehicle safety
11:46 Federal motor vehicle safety standards
14:26 Setting in best practices
15:18 Federal Automated Vehicle Policy
16:36 Safety Assessment
17:06 Operational Design Domain
18:58 Validation Methods
21:30 Ethical Considerations
26:18 Double Yellow Line
27:51 Speed Limits
28:38 Learning and Programming
30:39 Marty the DeLorean
33:01 The Potential
34:8 Data Sharing
38:12 Automated Vehicle Policy
40:50 Safety Requirements
42:50 Learning from Humans
45:00 Liability
46:33 Safety
49:01 Policy
53:10 Sharing data
55:17 Accident data simulations
55:59 Testing in urban and rural environments
58:29 Open-source cars
00:00:06.920 |
where he studies how to build autonomous cars that perform at or 00:00:11.120 |
beyond human levels both on the racetrack and on public roads. 00:00:15.560 |
So that includes a race car that goes 120 miles an hour 00:00:19.560 |
autonomously on the racetrack. This is awesome. 00:00:23.120 |
He spent most of 2016 as the chief innovation officer at the United States 00:00:29.440 |
Department of Transportation and was part of the team that developed the 00:00:33.480 |
federal automated vehicle policy. So he deeply cares about the role that 00:00:39.720 |
artificial intelligence plays in our society both from the technology side 00:00:44.800 |
and the policy perspective. So he is now I guess you could say a policy wonk, 00:00:51.080 |
a world-renowned engineer and I think always a car guy. 00:00:56.360 |
Yes. So he told me that he did a Q&A session with a group of third graders 00:01:03.080 |
last week and he answered all of their hard-hitting questions. 00:01:07.080 |
So I encourage you guys to continue on that thread and ask Chris questions 00:01:12.080 |
after his talk. So please give a warm welcome to Chris. 00:01:14.840 |
Great, Lex. Thanks for that great introduction and thanks for having me 00:01:22.600 |
here to talk to everybody today. So this is sort of my first week back in 00:01:28.400 |
a civilian role. I wrapped up at USDOT last week. So I'm no longer speaking 00:01:35.320 |
and officially representing the department, although some of the slides are very 00:01:39.000 |
similar to things that I use to speak and represent the department. So I think 00:01:42.960 |
as of Friday, this was still fairly current, but I am sort of talking in my 00:01:46.480 |
own capacity here. So I wanted to talk about both the technology side and the 00:01:52.160 |
policy side of automated vehicles and in particular how some of the techniques 00:01:55.600 |
that you're learning in this class around deep learning and neural networks 00:01:59.560 |
really place some challenges on regulators and policy makers attempting 00:02:05.240 |
to ensure vehicle safety. So just a bit about some of the cars in my background. 00:02:10.520 |
I am a car guy and I've gotten a chance to work on a lot of cool ones. 00:02:14.360 |
I actually have been working in automated vehicles since 1992 and the 00:02:18.440 |
Lincoln Town Cars in the upper corner are part of an automated highway project 00:02:23.080 |
I worked on as a PhD student at Berkeley. I then went to Freightliner Heavy 00:02:26.880 |
Trucks and Daimler Benz and worked with suspensions on heavy trucks before 00:02:31.640 |
coming to Stanford and doing things like building P1 in the upper right corner 00:02:36.680 |
there. That's an entirely student-built electric steer-by-wire, drive-by-wire 00:02:40.720 |
vehicle. We've also instrumented vintage race cars, electrified a DeLorean, which 00:02:46.040 |
I'll show a little bit later, and worked, as Lex mentioned, with Shelly, which is 00:02:51.440 |
our self-driving Audi TT, which is an automated race car. In addition to the 00:02:56.680 |
Stanford work, I was a co-founder of Peloton Technology, which is a truck 00:03:00.080 |
platooning firm, looking at bringing platooning technology, so vehicle-to- 00:03:05.080 |
vehicle communication, which allows for shorter following distance out on the 00:03:09.200 |
highway. So these are some of the things I've had a chance to work with. To give 00:03:13.560 |
you a little bit of a sense, this is Shelly going around the racetrack at 00:03:16.480 |
Thunder Hill. She can actually go up to about 120 miles an hour or so on that 00:03:21.160 |
track. It's really just limited by the length of the straight. It's kind of fun 00:03:24.800 |
to watch from the outside, a little disconcerting. Occasionally, as you see, 00:03:28.280 |
there's nobody in the car, although from inside it actually looks all pretty 00:03:33.320 |
chill. So Shelly, we've been working with her for a while out on the track. She's 00:03:38.960 |
able to get performance now, which exceeds the capability of anybody on the 00:03:44.080 |
development team. Many of us are amateur racers. In fact, actually, most of 00:03:49.480 |
my PhD students have their novice racing license. We make sure that they get that 00:03:53.720 |
license before going out on the track and testing. So Shelly can beat anybody 00:03:57.940 |
in the research group. She actually can beat the president of the track, David 00:04:02.080 |
Vaughn, now. And we've had the opportunity to work recently with J.R. Hildebrand, 00:04:06.800 |
the IndyCar driver who finished sixth this last year in the Indy 500. He's 00:04:11.720 |
faster, but he's actually only about a second or so faster on a minute and 25 00:04:17.300 |
second lap. So we're approaching his performance and he's actually helping us 00:04:22.560 |
get there. Now, the interesting thing about this is that we've approached this 00:04:26.480 |
problem really from one of physics. Force equals mass times acceleration. So the 00:04:31.120 |
car is really out there calculating what it needs to do to brake down into the 00:04:35.960 |
next corner, how much grip that it thinks it has, and so forth as it's going 00:04:41.160 |
around the track. It's not actually a learning approach at its core, although 00:04:45.800 |
we've added on top a number of algorithms for learning because it turns 00:04:48.920 |
out that the difference between the car's performance and the human 00:04:52.120 |
performance is really getting that last little bit of capability out of the 00:04:56.320 |
tires. Humans, the best humans at any rate, drive 00:05:01.280 |
instinctively in a way which is constantly pushing to the limits of the 00:05:04.020 |
car's capability. And so if you sort of prejudge what those limits are, you're 00:05:08.480 |
not going to be quite as fast. And so that's one of the things we've actually 00:05:11.240 |
been working with learning algorithms on is to try to figure out, well, how much 00:05:14.880 |
friction do I have in this particular corner and how is that changing as the 00:05:19.640 |
tires warm up and as the track warms up over the course of the morning into the 00:05:23.920 |
afternoon. These are the things that we need to be fast on the racetrack, but 00:05:28.280 |
they are also the things that you need to take into account to be safe in the 00:05:31.720 |
real world. So what we're trying to do with this project is understand how the 00:05:35.200 |
car can drive at the maximum capability of the limits of the friction between 00:05:39.720 |
the tire and the road. Now race car drivers do that to be fast. As they say 00:05:43.700 |
in racing, if you want to finish first, first you have to finish. So it's 00:05:48.280 |
important that they actually be fast but also accident-free. So we're trying to 00:05:52.400 |
learn the same things so that on the road when you may have unknown 00:05:55.560 |
conditions ahead of you, the car can make the safest maneuver that's using all the 00:06:00.280 |
friction between the tire and the road to avoid ultimately any accident that 00:06:04.600 |
the car would be physically capable of avoiding. That's our goal with that. 00:06:08.680 |
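To make that concrete, here is a minimal sketch of the kind of friction-limited calculation being described, assuming a simple point-mass friction-circle model and a crude online friction estimate blended from measured accelerations. The variable names, update rule, and numbers are illustrative assumptions, not the actual Stanford/Shelly code.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def update_friction_estimate(mu_est, ax, ay, near_limit, alpha=0.05):
    """Crude online friction estimate (illustrative only).

    When the tires are near their limit, the measured total acceleration
    divided by g is itself an estimate of the available friction, so it
    gets blended into the running estimate.
    """
    if near_limit:
        mu_meas = math.hypot(ax, ay) / G
        mu_est = (1 - alpha) * mu_est + alpha * mu_meas
    return mu_est

def braking_distance(v_entry, v_corner, mu_est, margin=0.9):
    """Distance needed to slow from entry speed to corner speed,
    for a point mass using a fraction of the friction-circle limit."""
    a_max = margin * mu_est * G
    return max(v_entry**2 - v_corner**2, 0.0) / (2 * a_max)

# Example: approaching at 50 m/s a corner that can be taken at 25 m/s,
# with an assumed friction coefficient of 1.1 for a warm racing tire.
print(braking_distance(50.0, 25.0, 1.1))  # roughly 97 m of braking
```

A real system also has to deal with weight transfer, banking, and tire temperature, which is exactly where the learning layer described above comes in.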
So we've had a lot of fun with Shelly. We've gotten to drive the car up Pikes Peak and 00:06:12.520 |
on the Bonneville Salt Flats. Actually Shelly appeared in an Audi commercial 00:06:16.560 |
with Zach Quinto and Leonard Nimoy and so at the end of the commercial they 00:06:21.360 |
both look at each other and declare it fascinating. So if you're as big of a 00:06:25.880 |
science fiction fan as I am, you realize that once your work has been declared 00:06:29.080 |
fascinating by two Spocks, there's nowhere to go. So I had to take a stint 00:06:35.240 |
and try something different in government. And so I spent the last year 00:06:38.720 |
as the first Chief Innovation Officer at the US Department of Transportation, 00:06:42.960 |
which I think honestly was the coolest gig in the federal government because I 00:06:47.280 |
really didn't have any assigned day-to-day responsibilities but I got 00:06:50.360 |
to kind of dive in and help with all manner of really cool projects, including 00:06:54.560 |
the development of the first federal automated vehicle policy. So it's a 00:06:59.840 |
really great opportunity to sort of see things from a different perspective. And 00:07:03.120 |
so what I wanted to do was, you know, kind of coming into this from an engineer, 00:07:06.400 |
give you a perspective of what is it like from somebody looking at the 00:07:10.240 |
regulatory side on vehicle safety and how are they thinking about the 00:07:13.500 |
technologies you're developing and where does that actually leave some 00:07:16.600 |
opportunities for engineers to make some big contributions to society. So let's 00:07:21.700 |
start with what vehicle safety is like today. So today we have a system of 00:07:27.840 |
federal motor vehicle safety standards. So these are rules, they're minimum 00:07:32.480 |
performance requirements, and each of them must have associated with it an 00:07:36.600 |
objective test. So you can tell does a vehicle meet this requirement or does it 00:07:40.920 |
not meet this requirement. Now interestingly there is no federal agency 00:07:45.280 |
that is testing vehicles before they are sold. We rely in this country on a system 00:07:50.480 |
of manufacturer self-certification. So the government puts these rules out 00:07:54.320 |
there and manufacturers go, "We got this, we can meet this," and then they 00:07:59.240 |
self-certify and put the vehicles out on the market. The National Highway Traffic 00:08:03.440 |
Safety Administration can then purchase vehicles and test them and make sure 00:08:06.960 |
that they comply. But we rely on manufacturer self-certification. This is 00:08:11.240 |
a different system than in most of the rest of the world, which actually has 00:08:14.040 |
pre-market certification, where before you can sell it the government agency 00:08:18.400 |
has to say, "Yes, we've checked it and it meets all the requirements." Aviation in 00:08:23.320 |
this country, for instance, has that. Aircraft require certification before 00:08:27.760 |
they can be sold. Cars do not. Now where did that system come from? So a little 00:08:32.480 |
quick history lesson. In 1965 Ralph Nader released a book entitled "Unsafe at Any 00:08:38.320 |
Speed." And this is often thought of as a book about the Corvair. It's not. 00:08:43.200 |
The Corvair featured prominently in there as an example of a design that 00:08:48.040 |
Nader considered to be unsafe. What was very interesting about this book 00:08:53.240 |
was that he was actually advocating for things like airbags and anti-lock brakes 00:08:57.960 |
back in 1965. These technologies didn't come along until much later. His 00:09:03.480 |
argument was that the auto industry had failed. It wasn't a failure of 00:09:07.880 |
engineering, but it was a failure of imagination. And if you're interested in 00:09:11.840 |
vehicle safety, I would really recommend you read this book because it's 00:09:15.280 |
fascinating. It has quotes from people in the 1960s basically saying that we 00:09:20.160 |
believe that any collision more than about 40 or 45 miles an hour is not 00:09:25.000 |
survivable. Therefore, there's no reason for seatbelts. There's no reason for 00:09:29.200 |
collapsible steering wheels. In fact, there's a quote from somebody who made 00:09:32.560 |
great advances in road safety saying, "I can't conceive of what help a seatbelt 00:09:37.360 |
would give you beyond like firmly bracing yourself with your hands." Those 00:09:42.560 |
of you who have studied physics know that's kind of patently ridiculous. But 00:09:46.680 |
there was a common feeling that there was no sense of doing anything about 00:09:49.600 |
vehicle crash worthiness because once you got above a certain speed, it was 00:09:53.680 |
inherently unsurvivable. And I think it's interesting to look at that today because 00:09:57.680 |
if we were to be in a collision, I think if any of us were to be in a collision 00:10:01.040 |
around about 40 miles an hour in a modern automobile, we'd probably expect 00:10:05.680 |
to walk away. You know, we wouldn't really be thinking about our survival. And so 00:10:10.440 |
what this did is it led to a lot of public outcry and ultimately the 00:10:15.160 |
National Traffic and Motor Vehicle Safety Act in 1966, which established 00:10:19.680 |
NHTSA and established this set of federal motor vehicle safety standards. 00:10:23.560 |
Now the process to get a new standard made, which is a rule-making process in 00:10:28.000 |
government, is very time-consuming. Optimistically, the minimum time it 00:10:32.440 |
can possibly take is about two years. Realistically, it's more like seven. And 00:10:39.160 |
so if you think about going through this process, that's really problematic. I mean, 00:10:44.520 |
think about what we were talking about with automated vehicles two years ago or 00:10:48.800 |
seven years ago. And think about trying to start seven years ago and make laws 00:10:53.800 |
that are going to determine how those vehicles operate on the road today. It's 00:10:57.560 |
crazy, right? There's really no way to do that. And the other thing is that if 00:11:02.480 |
you think about it, our system evolved from really this sense of failure of 00:11:05.960 |
imagination that the government needs to say, "Hey, industry, do this. Stop slacking 00:11:11.600 |
off. These are the requirements. Get there." But I think it's hard to argue today 00:11:15.680 |
with all the advances in automation that there is any failure of imagination on 00:11:19.480 |
the part of industry. People are coming up with all sorts of ideas and concepts 00:11:24.320 |
for new transportation and automation. Tech companies, startup companies, large 00:11:29.120 |
OEMs. There's all sorts of concepts being tested out on the road. It's hard to 00:11:34.480 |
argue that there's still any lack of imagination. Now the question is, are 00:11:38.160 |
things like this legal? It's an interesting question, right? Can I 00:11:42.680 |
actually legally do this? Well, from the federal level, there's an interesting 00:11:47.160 |
report that came out about 10 months ago from the folks across the street at 00:11:50.480 |
Volpe who did a scan and said, "Well, what are the things that might prevent you, 00:11:54.720 |
based on the current federal motor vehicle safety standards, from putting an 00:11:58.960 |
automated vehicle out on the road?" And the answer was, honestly, not much. If you 00:12:04.660 |
start with a vehicle that currently meets all the standards and automate it, then, 00:12:08.080 |
because there are no standards that relate 00:12:11.640 |
specifically to automation, you can certify your vehicle as meeting the 00:12:15.880 |
federal motor vehicle safety standards. Therefore, there's nothing at the federal 00:12:19.480 |
level that prevents, in general, an automated vehicle from being put on the 00:12:23.720 |
road. So it makes sense. So if there isn't a safety standard that you have to meet, 00:12:27.960 |
then you can put a vehicle out on the road that meets all the existing ones 00:12:31.840 |
and does something new, and there's no federal barrier to that. Now there are a 00:12:37.080 |
couple of exceptions. There were a few points in there that referenced a driver, 00:12:41.960 |
and in fact NHTSA gave an interpretation of the rule, which is one of the 00:12:47.040 |
things that they can do: to say, "Well, we're going to give an interpretation. 00:12:50.160 |
It's not making a new rule, but basically interpreting the ones that we have." And 00:12:54.200 |
they said that actually these references to the driver could, in fact, refer to the 00:12:59.040 |
AI system. And so that actually is now a policy statement from the 00:13:05.000 |
department, that many of the references to driver in the federal motor vehicle 00:13:08.960 |
safety standards can be replaced with your self-driving AI system, and the 00:13:13.340 |
rules applied accordingly. So in fact, there's very little that prevents you 00:13:18.240 |
from putting a vehicle out on the road if it meets the current standards. So if 00:13:21.700 |
it's a modern production car, automate it. Federal motor vehicle safety standards 00:13:26.600 |
don't stop that. Now a lot of the designs that I showed, though, things that 00:13:29.680 |
wouldn't have a steering wheel or other things, are actually not compliant, 00:13:34.200 |
because there are requirements that you have a steering wheel, that you have 00:13:37.840 |
pedals. Again, these are best practices that evolved in the days, of course, when 00:13:43.480 |
people were not thinking of cars that could drive themselves. And so these 00:13:48.580 |
things would require an exemption by NHTSA, a process of saying that, "Okay, 00:13:53.960 |
this vehicle is allowed on the road, even though it doesn't meet the current 00:13:57.160 |
standards because it meets some equivalent." And setting that equivalent 00:14:00.680 |
can be a bit of a challenge. Okay, so the question then is, "Well, all right, if the 00:14:04.900 |
federal government is responsible, and NHTSA, by the Traffic Safety Act, is 00:14:08.740 |
responsible for safety on the roads, but it can't prevent people from putting 00:14:12.760 |
anything out, what do you do?" Right? One approach is to say, "Well, let's get some 00:14:17.360 |
federal motor vehicle safety standards out there." But as we already said, that's 00:14:20.400 |
probably about a seven-year process, and if you were to start setting in best 00:14:23.960 |
practices now, what would that look like? So we've got this challenge. We want to 00:14:28.240 |
encourage this technology to come out onto the roads and be tested, because 00:14:33.640 |
that's the way you're going to learn, to get the real-world data, to get the 00:14:36.840 |
real-world experience. At the same time, the federal government is responsible 00:14:40.400 |
for safety on the nation's roads. It can recall things that don't work. So if you 00:14:45.640 |
do put your automated system out on the highway, and it's deemed to present an 00:14:50.200 |
unreasonable risk to safety, even if you're an aftermarket manufacturer, the 00:14:54.440 |
government can tell you to take that off the road. But the question is, "How can you 00:14:57.640 |
do better? How can you be proactive to try to have a discussion here?" So we 00:15:03.280 |
know standards are maybe not the best way of doing that, because they're too 00:15:06.400 |
slow. We'd like to make sure the public is protected, but this technology gets 00:15:10.120 |
tested. And so the approach taken to sort of provide some encouragement for this 00:15:15.320 |
innovation, while at the same time looking at safety, was the federal 00:15:18.720 |
automated vehicle policy, which rolled out in September. So this was an attempt 00:15:25.040 |
to really say, "Okay, let's put out a different framework from the federal 00:15:30.000 |
motor vehicle safety standards. Let's actually put out a system of voluntary 00:15:33.720 |
guidance." So what NHTSA is doing is to ask manufacturers to voluntarily follow 00:15:41.000 |
certain guidance and submit to the agency a letter that they have followed 00:15:45.440 |
a certain safety assessment. Now the interesting thing is that the way 00:15:49.000 |
that this is set up is not to tell manufacturers how to do something, but 00:15:53.200 |
really to say, "These are the things that we want you to address, and we want you 00:15:57.200 |
to come to us to explain how you've addressed them." With the idea that from 00:16:02.160 |
this, best practices will emerge, and we'll be able to figure out in the future what 00:16:06.720 |
really is the best way of ensuring some of these safety items. So this rolled out 00:16:12.880 |
in September. We've got the MIT car here on the side. So you see you've got 00:16:19.320 |
the Massachusetts license plate. So thanks to Brian for bringing that. If you 00:16:22.960 |
do put Audi stickers on your car, then you get closer to the center. So that's 00:16:26.280 |
something to consider for future reference. But this was rolled 00:16:31.200 |
out in Washington, DC by the Secretary and consists largely of 00:16:38.120 |
multiple parts, but I think the most relevant to vehicle design is this 15 00:16:42.280 |
point safety assessment. So these are the 15 points that are assessed, and 00:16:47.480 |
I'd like to kind of talk about a few of these in some more detail. And it starts 00:16:52.600 |
with this concept of an operational design domain and minimal risk or 00:16:57.760 |
fallback conditions. And what that means is instead of trying to put a taxonomy 00:17:03.240 |
on here and say, "Well, your automation system could be an adaptive cruise 00:17:08.760 |
control that works on the highway, or it could be fully self-driving, or it might 00:17:12.240 |
be something that operates a low-speed shuttle," the guidance asks the 00:17:16.440 |
manufacturers to define this. And the definition is known as operational 00:17:20.660 |
design domain. So in other words, you tell us where your system is supposed to work. 00:17:25.440 |
Is it supposed to work on the highway? Is it supposed to work in restricted areas? 00:17:30.400 |
Can it work in all weather? Or is this sort of something that operates only in 00:17:36.240 |
daylight hours in the sunshine in this area of South Florida? All of those are 00:17:40.760 |
fine, but it's incumbent upon the manufacturer or developer to define the 00:17:46.540 |
operational design domain. And then once you've defined where the system operates, 00:17:50.720 |
you need to define how you make sure that it is only operating in those 00:17:54.320 |
conditions. How do you make sure that the system stays there? And what's your 00:17:58.280 |
fallback in case it doesn't? And that fallback can be different. Obviously, if 00:18:02.440 |
this is a car which is normally human-driven, as you see here from the 00:18:06.020 |
Volvo Drive Me experiment, it might be reasonable to say, "We're going to ask the 00:18:11.800 |
human driver to retake control." Whereas, clearly, if you're going to enable blind 00:18:17.520 |
passengers or you are going to have a vehicle that has no steering wheel, you 00:18:22.920 |
need a different fallback system. And so within the guidance, it really allows 00:18:28.000 |
manufacturers to have a lot of different concepts of what they want their 00:18:31.600 |
automation to be, so long as they can define where it works, what the fallback 00:18:36.480 |
is in the event that it doesn't work, and how you have educated the consumer 00:18:41.200 |
about what your technology does and what it doesn't do, so that people have a good 00:18:47.160 |
understanding of the system performance. 00:18:52.080 |
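As a way to picture what "define the operational design domain, stay inside it, and fall back when you leave it" might look like in software, here is a hedged sketch. The particular conditions (geofence, speed, daylight, weather) and the fallback actions are invented for illustration; the guidance itself does not prescribe anything like this.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Fallback(Enum):
    CONTINUE = auto()           # still inside the ODD
    HANDOVER_REQUEST = auto()   # ask the human driver to retake control
    MINIMAL_RISK_STOP = auto()  # no driver available: pull over and stop

@dataclass
class OperationalDesignDomain:
    """Illustrative ODD: where and when the system claims to work."""
    geofence: set          # approved map region ids
    max_speed_mps: float
    daylight_only: bool
    allowed_weather: set   # e.g. {"clear", "light_rain"}

def check_odd(odd, region, speed_mps, is_daylight, weather, driver_present):
    """Return the fallback action implied by the current conditions."""
    inside = (
        region in odd.geofence
        and speed_mps <= odd.max_speed_mps
        and (is_daylight or not odd.daylight_only)
        and weather in odd.allowed_weather
    )
    if inside:
        return Fallback.CONTINUE
    # Outside the ODD: the fallback depends on the vehicle concept.
    return Fallback.HANDOVER_REQUEST if driver_present else Fallback.MINIMAL_RISK_STOP

# Example: a daylight-only, geofenced shuttle with no driver leaves its area.
odd = OperationalDesignDomain({"campus_loop"}, 11.0, True, {"clear"})
print(check_odd(odd, "downtown", 8.0, True, "clear", driver_present=False))
# Fallback.MINIMAL_RISK_STOP
```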
A few things, if we go down: you see also validation methods and ethical considerations are aspects that are 00:18:56.720 |
brought up here as well. And so validation methods are really 00:19:00.000 |
interesting as it applies to AI. So really, the idea is that there's lots of 00:19:05.920 |
different ways that you might test an automated vehicle. You might go out on a 00:19:10.120 |
test track and run it through a series of standard maneuvers. You may develop a 00:19:15.080 |
certain number of miles of experience driving in real-world traffic and figure 00:19:19.320 |
out how does the vehicle behave in a limited environment. There's questions 00:19:23.960 |
about a test track, obviously, because you don't have the sort of unknowns that can 00:19:28.400 |
happen in the real-world environment. But if you test in one real-world 00:19:31.480 |
environment, you also have a question of, is this transferable information? So if 00:19:36.600 |
I've driven a certain number of miles in Mountain View, California, does that tell 00:19:40.400 |
me anything about how the vehicle is likely to behave in Cambridge, 00:19:43.480 |
Massachusetts? Maybe, maybe not. It's a little bit hard to extrapolate 00:19:48.480 |
sometimes. And then finally, there's also the idea of simulation and analysis. So 00:19:52.520 |
if I can record these situations, if I can actually create a virtual environment 00:19:57.240 |
of the sorts of things that I see on the road, maybe I can actually run the 00:20:00.880 |
vehicle through many, many of these scenarios, perturbed in some way, and 00:20:04.640 |
actually test the system much more robustly in simulation than I could ever 00:20:08.840 |
actually do out on the road. 00:20:13.720 |
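To illustrate the "perturbed in some way" idea, here is a minimal sketch of replaying one recorded scenario through a simulator many times with randomized variations. The `simulate` callback and the scenario fields are placeholders for whatever simulator and logging format a developer actually uses; nothing here reflects a specific tool.

```python
import random

def perturb(scenario, rng):
    """Return a copy of a recorded scenario with randomized variations."""
    return {
        **scenario,
        "pedestrian_speed": scenario["pedestrian_speed"] * rng.uniform(0.7, 1.3),
        "time_gap_s": scenario["time_gap_s"] + rng.uniform(-0.5, 0.5),
        "friction": scenario["friction"] * rng.uniform(0.6, 1.0),  # wet patch, worn tires
    }

def monte_carlo_validation(scenario, simulate, n_runs=10_000, seed=0):
    """Run many perturbed variants of one recorded edge case and count failures.

    `simulate(scenario) -> bool` is assumed to return True when the driving
    stack handled that variant without a collision.
    """
    rng = random.Random(seed)
    failures = sum(not simulate(perturb(scenario, rng)) for _ in range(n_runs))
    return failures / n_runs

# Usage with a stand-in simulator that just thresholds on friction:
recorded = {"pedestrian_speed": 1.4, "time_gap_s": 2.0, "friction": 0.9}
fake_sim = lambda s: s["friction"] > 0.6
print(monte_carlo_validation(recorded, fake_sim))  # fraction of failing variants
```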
So the guidance is actually neutral on which of these techniques manufacturers take and allows them to approach it in 00:20:18.320 |
different ways. And I think, you know, based upon conversations, when you think 00:20:22.440 |
about the way customers or companies develop this, they do take all these 00:20:26.560 |
different approaches. A company like Tesla, for instance, which is recording 00:20:29.880 |
all the data streams from all their vehicles, basically, is able to run ideas 00:20:35.400 |
or technologies silently in their vehicle. They can actually test systems 00:20:40.240 |
out, get real-world data, and then decide whether or not to make that system 00:20:43.800 |
active. Companies that don't have that access to data really can't use that 00:20:48.600 |
sort of development method and may rely much more heavily on simulation or 00:20:53.480 |
test-track experience. So the guidance really doesn't have a particular blend 00:20:58.280 |
of this, and in fact, it does envision that you might have over-the-air software 00:21:02.880 |
updates in the future. So it is interesting, though, to think about 00:21:07.120 |
whether you have data-driven approaches, things like artificial neural networks, 00:21:12.360 |
or whether you actually start to program in hard and fast rules. Because as you 00:21:17.600 |
start to think about requirements on a system, how do you actually set 00:21:21.320 |
requirements on a system which has learned its behavior, and you don't 00:21:24.840 |
necessarily know what the internal workings or algorithms look like? There's 00:21:30.480 |
another one that comes up, which is the ethical considerations. I'm going 00:21:33.800 |
to pick on MIT for a moment here. So this is an area that I actually did a lot of 00:21:37.600 |
work on at Stanford together with some philosophers who joined 00:21:43.960 |
our group. And so when people hear ethical considerations in automated 00:21:47.920 |
vehicles, it often conjures up the trolley car problem. And so this sort 00:21:52.600 |
of classic formulation here about the fact that you have a self-driving car 00:21:57.880 |
which is heading towards a group of ten people, and it can either plow in and 00:22:02.720 |
kill those ten people, or it can divert and kill the driver. What do you do? And 00:22:06.840 |
these are classic questions in philosophy. You actually look, in fact, at 00:22:12.880 |
the trolley car problem, which is I have a runaway trolley car, and I need to 00:22:17.960 |
either divert it to another track, where it will kill somebody who's wandering 00:22:21.080 |
across that track, or the five people on the trolley car are killed. What do I do? 00:22:25.640 |
Well, in fact, as this article points out, it's like, you know, before 00:22:29.520 |
automated vehicles can become widespread, car makers must solve an 00:22:33.200 |
impossible ethical dilemma of algorithmic morality. So if all this 00:22:37.360 |
wasn't hard enough, I mean, you're understanding how tough the technology 00:22:40.440 |
is to actually program this stuff, and then you have to get the regulations 00:22:44.440 |
right, and now we actually have to solve impossible philosophical questions. Well, 00:22:49.840 |
I don't think that's actually true, and I think, you know, it's good for 00:22:53.480 |
engineers to work with philosophers, but not to be so literal about this. This is 00:22:59.920 |
a question that philosophers can ask, but engineers might ask a number of 00:23:03.560 |
different questions, like, who's responsible for the brakes on this 00:23:06.560 |
trolley? Why wasn't there a backup system? I mean, why am I headed into a group of 00:23:11.440 |
ten people without any capability to stop? So an engineer would, in fact, have 00:23:18.520 |
to answer this question, but might approach it much differently. So if I 00:23:21.400 |
look at the trolley car problem, I might say, okay, let's see, my options are I've 00:23:26.040 |
got a trolley car which is out of control. First of all, I'd like to have an 00:23:29.440 |
emergency braking system. Let's make sure that I have that. Well, there's a chance 00:23:33.760 |
that that could break as well. So if my base braking system 00:23:38.160 |
goes, and my emergency braking system goes, my next option would be to divert 00:23:43.080 |
it to this side track. Well, knowing that that's my option, I should probably put 00:23:46.640 |
up a fence with a warning sign that says, "Do not cross runaway trolley track." Okay, 00:23:52.120 |
now let's say that I've done all of that. The brakes fail, the emergency 00:23:58.360 |
brakes fail. I have to divert the trolley, and somebody has ignored my sign and 00:24:02.720 |
crossed over the fence, and now is hit by the trolley. Do I feel a little 00:24:06.600 |
differently about this whole scenario than I did at the beginning of just 00:24:10.640 |
trying to decide who lived and who died? The same decision was made, but by thinking of 00:24:15.240 |
it as an engineer trying to reduce risk, and not by thinking of levels of 00:24:19.400 |
morality and who deserves to live or die. And so I think this is a very important 00:24:24.820 |
issue, and the reason it's in the guidance is not to basically have 00:24:27.960 |
everybody solve trolley car problems, but to try to think about these larger 00:24:31.920 |
issues. And so I think ethics is not just about these sorts of situations, which 00:24:37.840 |
I think will actually be addressed in automated vehicles much more by engineering 00:24:42.280 |
principles than by trying to figure out from philosophical merits who deserves 00:24:46.760 |
to live and die. But there's broader issues here. Just any time that you have 00:24:51.200 |
concern for human safety. How close do I get to pedestrians? How close do I get to 00:24:56.920 |
bicycles? How much care should I put in to other people in the environment? 00:25:03.280 |
That's very much an ethical question, and it's an ethical question that 00:25:07.880 |
manufacturers are actually already addressing today. If you look at the 00:25:12.280 |
automatic emergency braking systems that most manufacturers are putting on their 00:25:16.320 |
vehicles, they will actually use a different algorithm depending upon 00:25:20.240 |
whether that obstacle in front of it is a vehicle or a human. So they're already 00:25:25.000 |
detecting and making a decision that the impact of this vehicle with a human 00:25:28.880 |
could be far worse than the impact of this vehicle with a vehicle, and so 00:25:32.520 |
they're choosing to brake a little bit more heavily in that case. 00:25:36.560 |
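A stripped-down illustration of that kind of class-dependent emergency braking logic might look like the sketch below. The time-to-collision thresholds and deceleration values are made up for illustration; production systems are far more sophisticated and vary by manufacturer.

```python
def aeb_command(obstacle_class, range_m, closing_speed_mps):
    """Toy automatic emergency braking decision.

    The trigger threshold and the commanded deceleration are chosen more
    conservatively when the detected obstacle is a person.
    """
    if closing_speed_mps <= 0:
        return 0.0  # not closing on the obstacle
    ttc = range_m / closing_speed_mps  # time to collision, in seconds

    if obstacle_class == "pedestrian":
        trigger_ttc, decel = 2.0, 9.0  # brake earlier and harder
    else:  # another vehicle
        trigger_ttc, decel = 1.4, 6.0

    return decel if ttc < trigger_ttc else 0.0  # commanded braking, m/s^2

print(aeb_command("pedestrian", 18.0, 10.0))  # ttc = 1.8 s -> 9.0 m/s^2
print(aeb_command("vehicle", 18.0, 10.0))     # ttc = 1.8 s -> 0.0, no braking yet
```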
That's actually where these ethical considerations come in, and the idea of 00:25:39.640 |
the guidance is to begin to share and have a discussion openly about how 00:25:42.920 |
manufacturers are approaching this with the idea of getting to a best practice 00:25:46.840 |
where not only the people in the automated vehicles, but other road users 00:25:50.440 |
feel that there's an appropriate level of care taken for their well-being. 00:25:54.320 |
That's one of the areas where ethics is important. The other area where ethics is 00:25:58.720 |
important is that we have different objectives as we drive down the road. We 00:26:02.520 |
have objectives for safety, we'd like to get there safely. We have objectives for 00:26:06.200 |
mobility, we'd like to get there probably pretty quickly. And we also have 00:26:10.360 |
the idea of legality, we'd like to follow the rules. But sometimes these things 00:26:15.160 |
come into conflict with each other. So let's say you're driving down the road 00:26:18.720 |
and there's a van that's parked where it has absolutely no business parking. 00:26:22.600 |
You've got a double yellow line. Is it okay to cross? Well, at least in 00:26:28.120 |
California, there's no exception to the double yellow line representing the lane 00:26:33.560 |
boundary for a vehicle that's parked where it has no business being parked. So 00:26:38.640 |
according to the vehicle code, you're supposed to kind of come to a stop here. 00:26:43.360 |
I don't think any of us would, right? In fact, actually when you're in 00:26:47.720 |
California and you're riding through the hills and you come upon a cyclist, 00:26:52.080 |
virtually every vehicle on the road is deviating across the double yellow line 00:26:56.920 |
to give extra room to the cyclist. That's also not what you're supposed to do by 00:27:00.800 |
the vehicle code. You're supposed to stay on your side of the double yellow line 00:27:04.240 |
but slow to an appropriate speed to pass. All right? So there's behaviors where our 00:27:10.560 |
desire for mobility or our desire for safety are outweighing our desire for 00:27:15.720 |
legality. This becomes a challenge if you think about how do I program the 00:27:19.200 |
self-driving car. Should it be based on the way that humans drive or should it 00:27:23.320 |
be based on the way that the legal code tells me to drive? Of course, the legal 00:27:28.080 |
code was never actually anticipating a self-driving car. From a human standpoint, 00:27:33.000 |
that double yellow line is a great shorthand that says maybe there's 00:27:36.240 |
something coming up here where you don't want to be in this other lane. But if I 00:27:40.080 |
actually have a car with the sensing capability to make that determination 00:27:43.560 |
itself, is the double yellow line actually all that meaningful anymore? 00:27:47.840 |
These are things that have to be sorted out. Speed limits being another one. You 00:27:52.080 |
know, if we're out on the highway, it's usually a little bit flexible. Do we give 00:27:56.720 |
that same flexibility to the automated vehicle or do we create these wonderful 00:28:00.800 |
automated vehicle roadblocks of vehicles going to the speed limit when nobody 00:28:06.000 |
else around them is? Do we allow them to accelerate a little bit to merge into 00:28:11.040 |
the flow of traffic? Do we allow vehicles to speed if they could avoid an accident? 00:28:15.120 |
Is our desire for safety greater than our desire for legality? These are the 00:28:19.600 |
sort of ethical questions that I think are really important. These are things 00:28:23.160 |
that need to be talked through because I believe if we actually have vehicles 00:28:26.840 |
that follow the law, nobody will want to drive with them. And so we need to think 00:28:31.920 |
about either ways of giving flexibility to the vehicles or to the law in the 00:28:35.880 |
sense that vehicles can drive like humans do. So this brings up some really 00:28:40.760 |
interesting areas, I think, with respect to learning and programming. And so the 00:28:45.680 |
question is, you know, should our automated vehicles drive like humans and 00:28:49.000 |
exhibit the same behavior that humans do? Or should they drive like robots and 00:28:53.120 |
actually execute the way that the law tells them that they should drive? 00:28:58.320 |
Obviously fixed rules can be one solution to this. Behavior learned from 00:29:03.880 |
human drivers could be another solution to this. We might have some sort of 00:29:07.720 |
balance of different objectives that we do more analytically in terms of how 00:29:13.480 |
much we want to obey the double yellow line when there are other things 00:29:16.560 |
influencing it in the environment. 00:29:20.320 |
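Purely as a sketch of what such an analytic balance could look like, candidate maneuvers might be scored against weighted safety, mobility, and legality terms. The weights, thresholds, and cost terms below are invented to illustrate the trade-off, not a proposal for how they should actually be set.

```python
def maneuver_cost(clearance_m, delay_s, crosses_double_yellow,
                  w_safety=10.0, w_mobility=1.0, w_legality=3.0):
    """Toy weighted cost for one candidate maneuver (lower is better).

    Safety penalizes small clearance to the parked van or cyclist,
    mobility penalizes waiting, legality penalizes crossing the line.
    """
    safety_cost = max(0.0, 1.5 - clearance_m)  # want at least 1.5 m of clearance
    mobility_cost = delay_s
    legality_cost = 1.0 if crosses_double_yellow else 0.0
    return (w_safety * safety_cost
            + w_mobility * mobility_cost
            + w_legality * legality_cost)

# Two options when a van blocks the lane:
wait = maneuver_cost(clearance_m=2.0, delay_s=60.0, crosses_double_yellow=False)
go_around = maneuver_cost(clearance_m=2.0, delay_s=3.0, crosses_double_yellow=True)
print(wait, go_around)  # 60.0 vs 6.0: this weighting favors crossing the line
```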
Now what's interesting is that as you start to think about this, there are limits to any of these approaches in the extreme. 00:29:24.080 |
You know, as we found with our self-driving race car, if you're not 00:29:26.920 |
learning from experience, you're not making use of all the data. You're not 00:29:31.400 |
going to do as well. And there's no way that you can possibly pre-program an 00:29:35.720 |
automated vehicle for every scenario it's going to encounter. Somehow you have 00:29:41.480 |
to think about interpolating. Somehow you have to think about learning. At the same 00:29:45.640 |
time you can say, well why don't we just measure humans? Well, human error is 00:29:50.280 |
actually the cause or a factor, the primary factor in 94% of accidents. 00:29:56.400 |
It's either a lack of judgment or some lack of perception on the part of the 00:30:01.400 |
human. So if we're simply following humans, we're actually only learning how 00:30:06.360 |
well humans can do things. We're leaving a lot on the table in terms of the 00:30:10.040 |
potential of the car. And so this is a really interesting discussion that I 00:30:14.520 |
think will continue to be both in the development side of these vehicles and 00:30:19.400 |
the policy side. What is the right balance? What do I want to learn versus 00:30:22.920 |
what do I want to program? How do I avoid leaving anything on the table here? So 00:30:29.560 |
because we're at the point where, you know, I've had a bunch of slides with words 00:30:32.880 |
here, I want to give people a little bit of a sense for what you could be leaving 00:30:36.720 |
on the table if in fact you don't adapt. This is Marty. Marty is a DeLorean that 00:30:46.200 |
we've been working with in my lab. Now DeLoreans are really fantastic cars 00:30:50.760 |
unless you want to accelerate, brake, or turn. It really didn't do any of those 00:30:57.000 |
things terribly well. There's no power steering, there's an underpowered engine, 00:31:02.760 |
and very small brakes. All of these things are fixable. In fact, what's nice 00:31:08.280 |
about the DeLorean is it separates quite nicely. The whole fiberglass tub comes up. 00:31:13.640 |
You can take out the engine. You can take out the brakes. You can make some 00:31:18.000 |
modifications to the frame, stiffen the suspension, work with Renovo Motors, a 00:31:23.400 |
startup in Silicon Valley, to put in a new electric drivetrain and put it all 00:31:31.480 |
back together. And when you do, you come up with a car that's actually pretty 00:31:35.280 |
darn fun. And one we've programmed to drive itself. This is Adam Savage from 00:31:49.960 |
And what you see is Marty doing something at a level of precision that 00:31:54.120 |
we're pretty sure no human driver can meet. JR said there's no way he can do 00:31:57.720 |
this. You see it's going into a perfect drift, doing a perfect donut around this 00:32:04.240 |
cone, and then it launches itself through the next gate, sideways into the next 00:32:13.960 |
cone. Now it's doing this, you see it shoots through the gate, missing those 00:32:17.760 |
cones, and then launches into a tight circle around the next cone. It's 00:32:21.320 |
actually doing this as sort of an algorithm similar to orbital mechanics, 00:32:24.400 |
if you think about how it's actually orbiting these different 00:32:27.960 |
points as it sets a trajectory. Now the limit on this is tires, as you can see as 00:32:33.960 |
it comes around here. The tires disintegrate into many chunks flying at 00:32:38.760 |
the camera as we do this. But the car is really able to continue, even as 00:32:45.080 |
the tires heat up, to execute this pretty nice trajectory. Here you see it 00:32:49.040 |
going through the gates again and launching into a stable equilibrium, 00:32:53.720 |
putting pretty much the tire tracks right over where they were in the 00:32:56.760 |
previous run, and then finally ending. So this is the sort of thing that I think 00:33:03.880 |
is possible. As you look at these vehicles, there's a huge potential out 00:33:08.520 |
there for these things to not drive about as well as an average human, but to 00:33:13.120 |
far exceed human performance in their abilities to use all the capabilities of 00:33:18.960 |
the tires to do some amazing things. Now maybe that's not the way that you want 00:33:22.520 |
your daily drive to go, although when we first posted some of this 00:33:27.120 |
video, one of the commenters was like, "I want this car. That way I can go 00:33:31.280 |
into the store to buy donuts while it sits in the parking lot doing donuts." 00:33:35.760 |
It wasn't a use case that I had thought of. But one of the 00:33:40.520 |
things that we did think about is how, if you limit yourself to only thinking 00:33:45.960 |
about what the tires can do before they get to the saturation of the friction 00:33:50.000 |
of the road, you're only taking into account one class of trajectories. 00:33:53.320 |
There's a lot more beyond that that could be very advantageous in some 00:33:57.400 |
emergency situations. Wouldn't it be great if the car had access to that? 00:34:01.460 |
That's not something we're going to get if we only sort of monitor day-to-day 00:34:05.760 |
driving. We're not going to get that capability in our cars. So one other 00:34:10.680 |
aspect that came through in the policy, which I think is extremely 00:34:15.320 |
important as we think about neural networks and learning, is this idea of 00:34:19.320 |
data sharing. And there's a huge potential to accelerate the development 00:34:23.520 |
of automated vehicles if we can share some information about edge case 00:34:28.480 |
scenarios in particular. So if you think about trying to train a neural network 00:34:32.760 |
to handle some extreme situations, that's really much easier if your set of 00:34:37.440 |
training data contains those extreme situations. So if you think about 00:34:41.640 |
the weird things that can happen out on the road, if you had a database of those 00:34:45.280 |
and those comprised your training set, you'd have a head start in terms of 00:34:49.480 |
being able to get a neural network and begin to validate that it would work in 00:34:52.880 |
these situations. 00:34:57.120 |
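As a concrete, hedged example of why a shared pool of edge cases would help: a developer could deliberately oversample rare scenarios when assembling a training set, which only works if those scenarios exist in someone's data to begin with. The sketch below is generic and assumes nothing about any particular company's pipeline or data format.

```python
import random

def build_training_set(routine_runs, shared_edge_cases,
                       edge_fraction=0.3, size=50_000, seed=0):
    """Mix routine driving records with a pooled set of rare edge cases.

    Without a shared pool, `shared_edge_cases` would hold only the handful
    of weird events one fleet happened to encounter itself.
    """
    rng = random.Random(seed)
    n_edge = int(size * edge_fraction)
    samples = (rng.choices(shared_edge_cases, k=n_edge)
               + rng.choices(routine_runs, k=size - n_edge))
    rng.shuffle(samples)
    return samples

# Usage with placeholder records:
routine = [{"kind": "routine", "id": i} for i in range(1000)]
edge = [{"kind": "edge", "id": i} for i in range(25)]  # rare events pooled across fleets
train = build_training_set(routine, edge)
print(sum(s["kind"] == "edge" for s in train) / len(train))  # 0.3
```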
So the question is, is there a way for the ecosystem around self-driving cars to actually share some of this information so that 00:35:01.040 |
different players can actually share some information about the critical 00:35:06.760 |
situations and be able to make sure that if you learn something, that yes, you can 00:35:10.920 |
make your cars safer, but actually all the cars out on the road get safer. Now 00:35:15.680 |
clearly you need to balance this with some other considerations. There's 00:35:18.760 |
the intellectual property concerns of the company. There's privacy concerns of 00:35:22.880 |
any individuals who might be involved. But it does seem to me that there's a 00:35:27.480 |
big potential here to think about ways of sharing certain data that can 00:35:32.000 |
contribute to safety. And this is a discussion that's going to be ongoing 00:35:36.240 |
and I think academia can do a lot to sort of help broker this discussion 00:35:40.840 |
because, you know, the first level people say, you know, data sharing, I don't know, 00:35:44.760 |
companies aren't going to share, we're not going to get the information we need. 00:35:48.120 |
But most of the time people stay in the abstract as opposed to saying, well, what 00:35:52.240 |
information would be most helpful? What information is really going to give 00:35:55.600 |
people confidence in the safety of these cars? It's going to let regulators 00:35:59.720 |
understand how they operate and yet at the same time is going to protect the 00:36:04.120 |
amount of development effort that companies put in there. I think there is 00:36:08.320 |
a solution here and in fact if you look at aviation, there's a really good 00:36:11.720 |
example that already exists. It's known as the ASIAS system. It started with only 00:36:11.720 |
four airlines that decided to share safety information with each other. And 00:36:20.280 |
this goes through MITRE, which is a federally funded R&D center. And it's 00:36:24.520 |
actually now up to 40 airlines. And if companies get kicked out of the MITRE 00:36:29.080 |
project, they really try very hard to get back in. Now this is anonymized data. 00:36:33.560 |
It's anonymized data so that, you know, companies actually get an assessment of 00:36:38.560 |
what their safety record is like and they can compare it to other airlines in 00:36:43.120 |
the abstract, but they can't compare it to any identifiable airline. So there's 00:36:47.520 |
no ranking of this. It's not used for any enforcement techniques. And it took 00:36:52.520 |
people a long time to kind of build up and begin to share that. But now there's 00:36:56.760 |
a huge amount of trust and they're sharing more and more data and looking 00:37:00.720 |
at ways that they can perhaps actually start to code in things like weather and 00:37:05.560 |
time of day, which had been removed for anonymization purposes in the original 00:37:10.080 |
version of the system. So I think there's some good examples out there and this is 00:37:14.480 |
something that's very important to think about for automated vehicles. And I think 00:37:17.820 |
as this discussion goes forward, those of you who are interested in developing 00:37:21.560 |
these vehicles, using techniques that rely on data are going to be an 00:37:25.880 |
important voice for the importance of data sharing. I think there's a 00:37:31.120 |
large role here to kind of make people aware that this actually does have value 00:37:36.160 |
in the larger ecosystem. So this is something that I was able to work on 00:37:40.960 |
more broadly as well. So I was part of, and was the DOT representative on, the 00:37:46.320 |
National Science and Technology Council's subcommittee on machine 00:37:51.680 |
learning and artificial intelligence. And this was one of the recommendations that 00:37:55.040 |
was really pushed forward as well because AI has tended to really make 00:37:59.240 |
great advances with the availability of good data sets. And in order to make 00:38:03.680 |
those sort of good advances in transportation, this group was also 00:38:07.480 |
advocating that those data sets need to be made broadly available. So this is a 00:38:14.200 |
little bit about the vision behind the automated vehicle policy, what the goal 00:38:19.200 |
was to really achieve here. The idea of trying to move towards a proactive safety 00:38:24.960 |
culture, not to necessarily put in regulations prematurely and try to set 00:38:29.760 |
standards when honestly we don't know the best way to develop automated 00:38:33.040 |
vehicles, but to allow the government to kind of get involved in discussions with 00:38:36.920 |
manufacturers early and be comfortable with what's going out on the roadway. And 00:38:42.480 |
actually to kind of help the U.S. to continue to play a leading role in this. 00:38:47.280 |
Obviously if vehicles were going to be banned from the roads, it would be very 00:38:51.440 |
difficult for the country to continue to be a place where people could test 00:38:56.240 |
and develop this technology. And then the belief really that there can be an 00:39:00.140 |
acceleration of the safety benefits of this through data sharing. So each car 00:39:04.640 |
doesn't have to encounter all the weird situations itself, but in fact can learn 00:39:10.520 |
from what other vehicles experience. And the idea is that really this is meant to 00:39:16.000 |
be an evolving framework. So it comes out as guidance, it really generates 00:39:20.520 |
conversations, it generates best practices, which can eventually evolve 00:39:24.600 |
into standards and law. And there's a huge opportunity here because the belief 00:39:29.680 |
isn't that the National Highway Traffic Safety Administration will be doing all 00:39:34.040 |
of the development of these best practices, but that that'll really evolve 00:39:37.680 |
from what companies do and what all of us at universities are able to do to 00:39:42.520 |
sort of generate ways to solve these problems in creative manners. Ways to 00:39:47.880 |
actually keep the innovation going, but ensure that we have safety. So as you 00:39:53.160 |
start to think about all of the AI systems that you're developing, you 00:39:56.320 |
start to flip it around a little bit and think about how the regulator is going 00:39:59.960 |
to get comfortable that it's not going to do something weird. These are great 00:40:03.200 |
research questions. I think these are great practical questions and these are 00:40:07.340 |
things that will need to be worked out going forward. So I leave you with that 00:40:11.720 |
as a challenge to think about, to think as you take this course, not only about 00:40:16.040 |
the technology that you're learning, but how do you communicate that to other 00:40:19.920 |
people? And where are the gaps that need to be filled? Because I think you'll find 00:40:24.400 |
some great opportunities for research, startup companies, and ultimately 00:40:28.840 |
work with policy and government there. So thanks for the opportunity to talk to 00:40:33.040 |
all of you, and I want to stop there because probably the things that you 00:40:35.280 |
want to talk about are more interesting than the things that I wanted to talk 00:40:38.160 |
about. So I'm happy to take questions along there. 00:41:09.520 |
I do. I think that's a great question. Thanks for reminding me. 00:41:15.000 |
So the question was whether in the future when you have all vehicles 00:41:18.200 |
automated, would we be able to actually roll back things like airbags and seat 00:41:23.640 |
belts and other things that we have on there, what we might know as passive 00:41:27.040 |
safety devices in vehicles. I believe that we will, and in fact actually one of 00:41:32.320 |
the things that I think is most extraordinary, if you think about this 00:41:35.280 |
from a sustainability standpoint, when you look at the average sort of mass of 00:41:40.080 |
vehicles and average occupancy of vehicles in the U.S., you know, with 00:41:45.280 |
passenger cars we're using maybe about 90% of the energy to move the vehicle as 00:41:50.120 |
opposed to moving the people inside. And one of the reasons for that is crash 00:41:53.560 |
worthiness standards, which are great because that's what's enabled us to 00:41:56.700 |
survive these crashes at 40 miles an hour. But if we do have vehicles that are 00:42:00.680 |
not going to crash, or if they are going to have certain modes which might 00:42:05.840 |
have very carefully designed, you know, crash areas or things like 00:42:12.200 |
this, we can potentially take a lot of that mass out. Particularly if these are 00:42:15.920 |
low-speed vehicles which are designed only for the urban environment and 00:42:19.320 |
they're not going to crash because they're going to drive, you know, 00:42:22.800 |
somewhat conservatively or in some ways separated from pedestrians, then I think 00:42:27.000 |
you can get a lot of the mass out and then you start to actually have 00:42:31.320 |
transportation options which, you know, from an environmental standpoint are 00:42:35.640 |
comparable to cycling. So I think that's actually a really 00:42:40.760 |
good goal to strive for, although we either have to kind of limit the 00:42:44.440 |
environment or think in the far future with some of those techniques. 00:42:49.520 |
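As a rough sanity check on the "about 90% of the energy" figure mentioned above, the mass ratio alone comes out around 94% with typical assumed numbers for vehicle mass and occupancy (the values below are assumptions, not figures from the talk), which is broadly consistent with that estimate.

```python
# Back-of-the-envelope check with assumed typical values (not from the talk)
vehicle_mass_kg = 1800   # roughly an average light-duty vehicle
avg_occupants = 1.5      # typical U.S. passenger-car occupancy
person_mass_kg = 80

people_mass = avg_occupants * person_mass_kg  # 120 kg
total_mass = vehicle_mass_kg + people_mass    # 1920 kg
print(vehicle_mass_kg / total_mass)           # ~0.94: most of the moving mass is the car
```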
Good. Yeah, that's a great question. So what are we doing with 00:43:07.120 |
Shelly? Is our mission really just to drive as fast as possible and faster 00:43:11.080 |
than a human or are we trying to learn from this something that we can apply to 00:43:15.320 |
other automated vehicles? It really is a desire to learn from other automated, you 00:43:21.560 |
know, for the development of other automated vehicles. And we've often said 00:43:24.520 |
that at the point where, you know, the difference between Shelly's performance 00:43:29.240 |
and the human driver, you know, starts to be really mundane things like, you know, 00:43:33.440 |
our shift pattern or something which isn't applicable, we kind of lose 00:43:37.200 |
interest at that point. However, you know, up to this point, every insight that we've 00:43:42.000 |
gotten from Shelly has been directly transferable. And we've programmed the 00:43:47.640 |
car to do some emergency lane changes in situations where you don't have enough 00:43:52.000 |
room to brake. And we've actually been demonstrating in some cases that the car 00:43:55.960 |
can do this much faster than a human, even an expert human, can respond. 00:44:01.960 |
So there's certain scenarios that we've done like that. And I would say from the 00:44:05.600 |
bigger picture, what's really fascinating is that we originally started out with 00:44:10.360 |
this idea of let's find the best path around the track and track it as close 00:44:14.800 |
as we can. But in fact, when you look at human race car drivers, what they're 00:44:18.880 |
doing is actually very different. They're pushing the car to the limits and then 00:44:23.040 |
sort of seeing what paths that opens up to them. And it flips the problem a bit 00:44:27.720 |
on its head in a way that I think is actually very applicable for developing 00:44:32.640 |
safety systems out on the road. But it's not a way that people have looked at it, 00:44:36.360 |
to the best of my knowledge, up to this point. And so, you know, that's really what 00:44:41.040 |
we're hoping is that the inspiration in trying to reproduce human performance 00:44:44.560 |
there leads us to better safety algorithms. So long, you know, so far 00:44:48.080 |
that's been the case. And when that ceases to be the case, I think we are 00:45:00.280 |
Yeah, so liability is a good question. So, you know, who is liable, if I can 00:45:05.680 |
sort of rephrase, you know, for an accident in an automated vehicle? 00:45:10.320 |
On the one hand, that's kind of an open question. On the other hand, we do have a 00:45:15.240 |
court system. And so whenever there are new technologies, these things are 00:45:20.160 |
actually generally figured out in the courts and it can be different from 00:45:23.280 |
state to state. So this is one aspect where, you know, potentially some 00:45:27.640 |
discussion so that manufacturers aren't subject to different conditions in 00:45:30.960 |
different states would be helpful. But the way that it works now is that 00:45:35.560 |
it's usually not binary. We have in the U.S. the concept of joint and several 00:45:40.640 |
liability. And so you can actually assign different portions of responsibility to 00:45:46.120 |
different players in the game. You have had companies like Volvo and, in fact, 00:45:50.520 |
Google make statements that if their vehicles are involved in accidents, then 00:45:54.960 |
they would expect to be liable for it. So people have often talked about needing 00:45:59.480 |
something really new for liability, but I'm not sure that's the case. We do have 00:46:05.360 |
a court system that can ultimately figure out who is liable with new 00:46:08.880 |
technologies. We have some manufacturers that are starting to make some 00:46:12.760 |
statements about assuming product liability for that. The one thing that 00:46:17.120 |
really could be helpful, as I mentioned, is perhaps some harmonization. Because 00:46:20.400 |
right now insurance is something that is set state by state. And so the rules in 00:46:25.600 |
one state as to who's at fault for an accident may be very different in 00:46:57.160 |
Okay, so what if companies, you know, as they send in the safety letters, are 00:47:01.720 |
using criteria to set safety that may not be broadly acceptable to the 00:47:06.840 |
public, where the public would like these vehicles to have greater safety? I think, 00:47:11.760 |
you know, the nice thing about this process is, first of all, we would know 00:47:15.080 |
that, right? So we would have a sense that companies are developing with certain 00:47:22.320 |
measures of safety in mind, and there could actually be a discussion as to, you 00:47:26.720 |
know, whether that is setting an acceptable level. It's a difficult 00:47:31.640 |
question because it's not clear that people really know what an acceptable 00:47:35.200 |
level is. Does it have to be safer than humans drive now? You know, my 00:47:40.440 |
personal feeling, I would say yes. And does it have to be much, much safer? Well, 00:47:48.560 |
that's hard to say. You know, you start to then get into this situation of, we're 00:47:52.520 |
comfortable to a certain extent with our existing legal system and with the 00:47:56.640 |
fact that humans could cause errors that have fatal consequences. Do we feel the 00:48:01.000 |
same way about machines? We tend to think that machines 00:48:04.040 |
really should have a higher level of perfection, so we may, as a society, be 00:48:07.840 |
less tolerant. People will often say, well, so long as the overall national 00:48:11.760 |
figures go down, that would be good, but that's really not going to matter much 00:48:16.200 |
to the families who are impacted by an automated vehicle, particularly if it's 00:48:20.880 |
a scenario with very, very bad optics. What do I mean by that? If 00:48:27.080 |
you think about the failures of mechanical systems, because they're 00:48:31.400 |
different from the failures of human beings, they can often look really 00:48:35.600 |
bad, right? If you think about a vehicle that doesn't detect something 00:48:39.900 |
and then just continues to plow ahead, visually that's really 00:48:44.560 |
striking, and that's the sort of thing that would get replayed and be 00:48:48.520 |
in people's consciousness and raise some fears. So I think that's 00:48:53.040 |
an issue that's going to have to be sorted out. 00:48:58.920 |
Yeah, my question is about automated vehicles on a global scale. 00:49:04.920 |
You talked about data sharing and the potential for collaboration in terms of technology, but also maybe policy. 00:49:11.920 |
Is there any sort of collaboration 00:49:17.920 |
between research groups in different parts of the world to exchange different policies and different technologies? 00:49:26.920 |
Yes, so that's a good question. What's being done, really, from a global 00:49:30.920 |
standpoint to sort of share ideas, to share research, and to kind of work 00:49:35.040 |
through some of these things, particularly on the policy side? So most 00:49:38.200 |
of the auto manufacturers are global corporations, and so a lot of the 00:49:42.440 |
research in this is done in very different parts of the world. Renault-Nissan, 00:49:47.400 |
for instance, is doing a lot in Silicon Valley, in Europe, and in Japan. 00:49:52.000 |
And I think you see a lot of that with the different manufacturers. One of the 00:49:56.240 |
cool things that I got to do as part of my role was to go with the Secretary of 00:50:00.200 |
Transportation to the G7 Transportation Ministers meeting in Japan and address 00:50:04.800 |
the ministers about the U.S. policy on automated vehicles. And one of 00:50:11.440 |
the parts of that discussion was, well, the U.S. has a very different set of 00:50:16.040 |
rules. So we have this manufacturer self-certification as opposed to 00:50:20.680 |
pre-market certification. But testing, for instance, is something that has to be 00:50:25.360 |
done regardless. So either it's testing that's done by a manufacturer, or it's 00:50:29.760 |
testing that's done by, for instance, you know, in Germany, the, the, the TÜV and 00:50:35.840 |
other agencies that are responsible for, for road safety. And so the idea is maybe 00:50:41.440 |
we should be sharing best practices on testing, so we have a set of standard 00:50:45.960 |
tests. And then manufacturers across the globe could test to a certain set of 00:50:50.400 |
standards that might be translated differently according to the policies 00:50:54.200 |
and regulatory environments in different countries. So that was part of 00:50:58.200 |
the idea that we advanced at the G7, and it seemed to kick off really well. 00:51:06.480 |
I never made a conscious decision on this. I actually got a call from the White 00:51:11.120 |
House one day. I got this message, or this email, 00:51:15.880 |
saying, "I'm reaching out from the White House; 00:51:19.080 |
give me a call back." So of course I called back immediately, and Pam Coleman on the 00:51:22.520 |
other end of the line was like, "I love doing that." She's like, when 00:51:25.000 |
you're calling for the White House, everybody returns your call. And so 00:51:28.880 |
honestly, you know, she said, "Here's the situation. We're looking at a lot of these 00:51:33.240 |
areas in the Department of Transportation that seem to hit upon 00:51:37.080 |
your areas of expertise. We want to talk about working with you in some way, the 00:51:40.920 |
holy grail would be for you to come out and work in DC for a while." And then I 00:51:45.040 |
got a call from the Department of Transportation, and they're like, "Well, we 00:51:47.800 |
know you wouldn't want to come out to DC for a while." I'm like, "Try me. Could I do 00:51:50.880 |
cool stuff, and could I make an impact?" And then I 00:51:55.440 |
met with the Secretary of Transportation out in San Francisco, and, you know, he 00:51:58.840 |
assured me, he's like, "You would be surprised. You would be very surprised at 00:52:02.760 |
how much of an impact you could have." And this ended up being really true. A lot of 00:52:09.160 |
times this stuff moves quickly, and people who are involved in policymaking 00:52:12.040 |
may or may not have a technical background in this. They may have come 00:52:14.560 |
through the campaign, for instance, and then ended up in political roles. Yet the 00:52:20.120 |
folks that I worked with were really trying to get good information and make 00:52:22.960 |
good decisions. And so I just kept getting called in for advice on all 00:52:26.640 |
sorts of things, and I found that people actually really wanted to have that 00:52:29.840 |
technical information and then used it. So that's the way it happened. 00:52:34.960 |
It seemed like it was an opportunity to take things that I've worked on, as I 00:52:38.080 |
mentioned, you know, automated vehicles since 1992, and then to be part of this 00:52:42.240 |
policy development, which went really quickly. It was a one-page outline when I 00:52:46.080 |
arrived in February, and then in September it rolled out. And along the 00:52:50.760 |
way was all sorts of editing and negotiations at the White House and 00:52:54.000 |
other agencies. A fascinating process. So I kind of fell into this, 00:53:00.920 |
but, as Lex mentioned, I think I'm emerging as a policy wonk here 00:53:05.520 |
because it was a very fun experience. Let's see over here, what have we 00:53:09.840 |
got? With data sharing, you have a lot of companies that have somewhat of a 00:53:15.520 |
monopoly on a lot of data; Google, especially, has so much more data 00:53:18.720 |
available than a lot of smaller startups. How do you incentivize these big 00:53:22.960 |
companies to actually share their data? Good, so how do you incentivize companies to share 00:53:27.920 |
their data when they have an awful lot invested in the 00:53:32.720 |
gathering of that data and being able to process that data? And I think the answer 00:53:36.360 |
is to start small and to try to say, are there certain high-value things that 00:53:39.760 |
could, again, make the public comfortable, make policy makers comfortable, that 00:53:43.480 |
really aren't going to be a burden on the company? So one of 00:53:48.760 |
the things, from the Peloton standpoint, that was bounced 00:53:53.080 |
around at one point: our trucks actually use vehicle-to-vehicle 00:53:56.920 |
communications as part of their link. Well, when you do that, you discover that 00:54:01.560 |
there's actually an awful lot of places where that drops out, because cell phone 00:54:06.920 |
towers, which are not supposed to be broadcasting on that frequency, seem to 00:54:11.080 |
create an awful lot of interference there. Well, that could be very 00:54:13.840 |
interesting to know from a public policy perspective: 00:54:17.440 |
we were essentially monitoring for incursions in that frequency 00:54:21.720 |
range everywhere we went. That, for instance, might be a very useful piece of 00:54:25.200 |
information to share with policy makers, and it wouldn't be any real proprietary 00:54:29.880 |
issue for the company to share. 00:54:34.680 |
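As a rough illustration of the kind of low-burden, high-value logging being described here (a sketch only; the message rate, loss threshold, and record format are assumptions, not Peloton's actual system), a truck could simply record where V2V packet loss spikes and hand that map of suspected interference zones to regulators:

```python
# Sketch: flag locations where V2V (DSRC) packet loss suggests interference.
# The 10 Hz expected message rate, 30% loss threshold, and record format are
# illustrative assumptions, not a description of Peloton's production system.
from dataclasses import dataclass

EXPECTED_MSGS_PER_SEC = 10   # assumed message rate from the partner truck
LOSS_THRESHOLD = 0.30        # assumed packet-loss fraction that counts as a dropout

@dataclass
class DropoutEvent:
    latitude: float
    longitude: float
    timestamp: float
    loss_fraction: float

def check_window(received_count: int, window_sec: float,
                 lat: float, lon: float, t: float) -> DropoutEvent | None:
    """Return a DropoutEvent if packet loss in this window exceeds the threshold."""
    expected = EXPECTED_MSGS_PER_SEC * window_sec
    loss = 1.0 - received_count / expected
    if loss >= LOSS_THRESHOLD:
        return DropoutEvent(lat, lon, t, round(loss, 2))
    return None

# Example: only 4 of an expected 10 messages arrived in a 1-second window.
event = check_window(received_count=4, window_sec=1.0,
                     lat=42.3601, lon=-71.0942, t=1_700_000_000.0)
if event:
    print("possible spectrum incursion:", event)
```

Aggregated over many trips, those event locations would map out where the band is getting stepped on, without exposing anything proprietary about the trucks themselves.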
And so I think the trick is to start small and find the high-value data where there 00:54:38.040 |
isn't a big issue with sharing. I mean, if you go to Google and say, "All right, 00:54:41.720 |
Google, what will it take for you to share all of the data you're acquiring 00:54:46.320 |
from your entire self-driving car program?" I guess Waymo now. I think that 00:54:50.840 |
would be a very big number, and so I don't think that's a starting point. I 00:54:54.560 |
think you start with the high-value data, data that's of high 00:54:57.720 |
value in the public policy sense, and really minimal hassle to the 00:55:02.880 |
companies. I don't know how much longer you have; I'm happy to stay and answer 00:55:08.840 |
as many questions as you like, but I know you have a class to run. How are we? Okay. 00:55:17.920 |
Are there any efforts that you know to come up with standards for sharing map data, accident data, simulations? 00:55:25.560 |
Good. So is there any effort underway for sharing map data, some of the edge 00:55:32.000 |
case accident data, simulation capabilities, and things like that? This 00:55:36.600 |
is one of the next steps that NHTSA outlined in the policy, and so there are 00:55:40.720 |
people at NHTSA actually working on taking some of these next steps, again in 00:55:45.040 |
sort of a pilot or prototype mode. So that's something that's currently 00:55:50.240 |
being worked on in the department. You can probably expect to hear more 00:55:59.360 |
I have a question about learning from data. 00:56:02.360 |
Driving in urban areas and driving on highways and in rural areas are very different. Do you see the federal government making a standard data set for an AV company, something required before they can take a car into production? Or do you see asking each company to do that testing itself? 00:56:17.360 |
Okay, so the question is, testing in urban and rural environments, or even driving in urban and rural environments, is very 00:56:23.880 |
different. And should the government actually come up with a standard set of 00:56:26.920 |
data that all companies have to test against? I think one of the reasons that the 00:56:33.240 |
policy was designed the way it was, was to make sure we had this concept of 00:56:37.960 |
operational design domain. So, in fact, if the only area that I've mapped, and the 00:56:43.920 |
only area that I want to drive is, say, in a campus environment, or in one quarter 00:56:50.880 |
square mile, then the idea is that we would like the companies to explain 00:56:55.000 |
how they handle the eventualities in that one quarter square mile, but they 00:56:59.700 |
should really have no reason to handle other situations, right? Because their 00:57:04.040 |
vehicle won't encounter that, so long as it's been designed to stay within its 00:57:08.320 |
operational design domain. So, I think in the short term, you know, what you see is 00:57:12.400 |
people often looking at hyperlocal solutions, or kind of the low-hanging 00:57:17.480 |
fruit for a lot of automation. And even if you think about offering 00:57:21.240 |
mobility as a service, if I'm gonna offer sort of an automated taxi, I'm probably 00:57:27.320 |
gonna do that in a limited environment to start with. And so, if I'm only doing 00:57:31.560 |
this in Cambridge, does it really matter if I can drive in Mountain View or not? 00:57:36.280 |
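To make the operational design domain idea concrete, here is a minimal sketch (the geofence, speed cap, and weather list are invented for illustration; the policy does not prescribe any particular encoding): an ODD is simply an explicit declaration of where and under what conditions the system is designed to operate, plus a check that triggers a fallback when the vehicle is about to leave it.

```python
# Minimal sketch of an operational design domain (ODD) check.
# The specific fields and values are illustrative assumptions; the federal
# policy asks developers to define and stay within an ODD, not to use this format.
from dataclasses import dataclass

@dataclass
class ODD:
    geofence: tuple[float, float, float, float]  # (min_lat, min_lon, max_lat, max_lon)
    max_speed_mps: float                         # highest speed the system is validated for
    allowed_weather: frozenset[str]              # conditions the system is validated for

    def contains(self, lat: float, lon: float, speed_mps: float, weather: str) -> bool:
        """True if the current state is inside the declared design domain."""
        min_lat, min_lon, max_lat, max_lon = self.geofence
        return (min_lat <= lat <= max_lat
                and min_lon <= lon <= max_lon
                and speed_mps <= self.max_speed_mps
                and weather in self.allowed_weather)

# Hypothetical "campus shuttle" ODD: a small bounding box, low speed, fair weather.
CAMPUS_ODD = ODD(geofence=(42.355, -71.105, 42.365, -71.090),
                 max_speed_mps=11.0,
                 allowed_weather=frozenset({"clear", "overcast", "light_rain"}))

# The vehicle only has to handle eventualities inside this box; if it is about
# to leave the ODD, the expected behavior is a fallback such as a minimal-risk stop.
print(CAMPUS_ODD.contains(42.360, -71.095, speed_mps=8.0, weather="clear"))   # True
print(CAMPUS_ODD.contains(37.386, -122.084, speed_mps=8.0, weather="clear"))  # False (Mountain View)
```

The point of the sketch is only that the burden of proof scales with the declared domain: a quarter-square-mile geofence has far fewer eventualities to account for than a nationwide one.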
And so, you know, I think the idea is to start with the definition of the 00:57:40.440 |
operational design domain, with a data set that is appropriate for that 00:57:44.200 |
operational design domain. And then, as people's design domains start to expand 00:57:48.680 |
nationwide, then I think the idea of common data sets 00:57:53.560 |
starts to be interesting. Although there is a sense that no finite 00:57:57.560 |
data set is really going to capture every eventuality. And so people 00:58:02.440 |
will be able to develop to, or design to, the test in some ways. Is 00:58:07.240 |
that sufficient? I think it'll make people feel better, but I personally 00:58:11.080 |
wonder how much value there is. It's the same with test track testing. I can 00:58:15.320 |
think of 20 different tests that automated vehicles will have to pass, and 00:58:18.200 |
people will design ways to pass all 20 of those tests. It may make some people 00:58:22.800 |
more comfortable, but it doesn't make me all that much more comfortable that 00:58:25.800 |
they'd be able to handle a real-world situation. All right, let's see, one last 00:58:49.200 |
Could you make an open-source car? Okay, so the question is, could 00:58:53.720 |
you make an open-source car under the guidance provided by US DOT? The 00:59:02.120 |
question, from a practical standpoint, is that you're supposed to 00:59:06.960 |
submit a safety assessment letter, which is supposed to be signed by somebody 00:59:10.920 |
responsible for that. And so an issue, if you were to open-source, would be: 00:59:16.920 |
do I use this module, and who is actually signing off on that? 00:59:21.120 |
Would I feel comfortable signing off on something which I then allowed to be 00:59:24.320 |
open-source? I'm not a lawyer, but I don't 00:59:30.160 |
think there would be anything that would prevent that if you had a development 00:59:33.760 |
team that was doing that, and people who are willing to sign off on whatever 00:59:37.080 |
version of the software was actually used in an open-source car. You know, I 00:59:43.920 |
will say that the guidance does apply to universities or to other 00:59:49.000 |
groups that would be putting a car out on the road. And I think if you look 00:59:52.760 |
through the 15 points, they're not really meant to be overly restrictive. In fact, I 00:59:59.240 |
would argue that pretty much any group that is going to sort of put real people 01:00:02.800 |
at risk by putting an automated vehicle out on the road should really 01:00:08.000 |
have thought through these things. So I don't think it's a 01:00:10.760 |
terribly high burden to meet. I think it would be 01:00:16.440 |
meetable by a group. The question, from the open-source 01:00:19.680 |
sense, would just be how you trace who's responsible and who's signing off on that. 01:00:24.680 |
All right. I think we gave those third graders a run for their money. Yeah, 01:00:30.040 |
absolutely. Thank you so much. Let's give Chris a big hand.