Chris Gerdes (Stanford) on Technology, Policy and Vehicle Safety - MIT Self-Driving Cars


Chapters

0:00 Introduction
1:20 Chris Gerdes
7:21 What is vehicle safety
11:46 Federal motor vehicle safety standards
14:26 Setting best practices
15:18 Federal Automated Vehicle Policy
16:36 Safety Assessment
17:06 Operational Design Domain
18:58 Validation Methods
21:30 Ethical Considerations
26:18 Double Yellow Line
27:51 Speed Limits
28:38 Learning and Programming
30:39 Marty
33:01 The Potential
34:08 Data Sharing
38:12 Automated Vehicle Policy
40:50 Safety Requirements
42:50 Learning from Humans
45:00 Liability
46:33 Safety
49:01 Policy
53:10 Sharing data
55:17 Accident data simulations
55:59 Testing in urban and rural environments
58:29 Open-source cars

Transcript

So today we have Chris Gerdes with us. He's a professor at Stanford University where he studies how to build autonomous cars that perform at or beyond human levels both on the racetrack and on public roads. So that includes a race car that goes 120 miles an hour autonomously on the racetrack.

This is awesome. He spent most of 2016 as the chief innovation officer at the United States Department of Transportation and was part of the team that developed the federal automated vehicle policy. So he deeply cares about the role that artificial intelligence plays in our society both from the technology side and the policy perspective.

So he is now, I guess you could say, a policy wonk, a world-renowned engineer, and I think always a car guy. Yes. So he told me that he did a Q&A session with a group of third graders last week and he answered all of their hard-hitting questions. So I encourage you guys to continue on that thread and ask Chris questions after his talk.

So please give a warm welcome to Chris. Great, Lex. Thanks for that great introduction and thanks for having me here to talk to everybody today. So this is sort of my first week back in a civilian role. I wrapped up at USDOT last week. So I'm no longer speaking and officially representing the department, although some of the slides are very similar to things that I used to speak and represent the department.

So I think as of Friday, this was still fairly current, but I am sort of talking in my own capacity here. So I wanted to talk about both the technology side and the policy side of automated vehicles and in particular how some of the techniques that you're learning in this class around deep learning and neural networks really place some challenges on regulators and policy makers attempting to ensure vehicle safety.

So just a bit about some of the cars in my background. I am a car guy and I've gotten a chance to work on a lot of cool ones. I actually have been working in automated vehicles since 1992 and the Lincoln Town Cars in the upper corner are part of an automated highway project I worked on as a PhD student at Berkeley.

I then went to Freightliner Heavy Trucks and Daimler Benz and worked with suspensions on heavy trucks before coming to Stanford and doing things like building P1 in the upper right corner there. That's an entirely student-built electric steer-by-wire, drive-by-wire vehicle. We've also instrumented vintage race cars, electrified a DeLorean, which I'll show a little bit later, and worked, as Lex mentioned, with Shelly, which is our self-driving Audi TT, which is an automated race car.

In addition to the Stanford work, I was a co-founder of Peloton Technology, which is a truck platooning firm, looking at bringing platooning technology, so vehicle-to-vehicle communication, which allows for shorter following distances out on the highway. So these are some of the things I've had a chance to work with.

To give you a little bit of a sense, this is Shelly going around the racetrack at Thunderhill. She can actually go up to about 120 miles an hour or so on that track. It's really just limited by the length of the straight. It's kind of fun to watch from the outside, a little disconcerting.

Occasionally, as you see, there's nobody in the car, although from inside it actually looks all pretty chill. So Shelly, we've been working with her for a while out on the track. She's able to get performance now, which exceeds the capability of anybody on the development team. Many of us are amateur racers.

In fact, actually, most of my PhD students have their novice racing license. We make sure that they get that license before going out on the track and testing. So Shelly can beat anybody in the research group. She actually can beat the president of the track, David Vodden, now. And we've had the opportunity to work recently with J.R.

Hildebrand, the IndyCar driver who finished sixth this last year in the Indy 500. He's faster, but he's actually only about a second or so faster on a one-minute-and-25-second lap. So we're approaching his performance and he's actually helping us get there. Now, the interesting thing about this is that we've approached this problem really from one of physics.

Force equals mass times acceleration. So the car is really out there calculating what it needs to do to brake down into the next corner, how much grip it thinks it has, and so forth as it's going around the track. It's not actually a learning approach at its core, although we've added on top a number of algorithms for learning, because it turns out that the difference between the car's performance and the human performance is really getting that last little bit of capability out of the tires.

The best humans, at any rate, drive instinctively in a way which is constantly pushing to the limits of the car's capability. And so if you sort of prejudge what those limits are, you're not going to be quite as fast. And so that's one of the things we've actually been working with learning algorithms on: to try to figure out, well, how much friction do I have in this particular corner, and how is that changing as the tires warm up and as the track warms up over the course of the morning to the afternoon.
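
To make that "physics first, learning on top" idea concrete, here is a minimal sketch (not the actual Stanford code; the names, numbers, and simple update rule are illustrative assumptions) of how an estimated friction coefficient feeds both the cornering speed and the braking point, with a small correction nudging the estimate toward what the tires actually delivered:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_corner_speed(mu: float, radius_m: float) -> float:
    """Friction-limited cornering speed: mu*m*g >= m*v^2/r  =>  v <= sqrt(mu*g*r)."""
    return math.sqrt(mu * G * radius_m)

def braking_point(v_entry: float, v_corner: float, mu: float) -> float:
    """Distance before the corner at which braking must begin,
    assuming a constant friction-limited deceleration of a = mu*g."""
    a = mu * G
    return max(0.0, (v_entry**2 - v_corner**2) / (2 * a))

def update_mu(mu_prior: float, mu_observed: float, gain: float = 0.1) -> float:
    """Toy 'learning on top of the physics': nudge the friction estimate toward
    what the tires actually delivered in the last corner (track and tires warming up)."""
    return mu_prior + gain * (mu_observed - mu_prior)

# Example: entering a 60 m radius corner at 50 m/s with an estimated mu of 1.1
mu = 1.1
v_corner = max_corner_speed(mu, radius_m=60.0)                    # ~25 m/s
d_brake = braking_point(v_entry=50.0, v_corner=v_corner, mu=mu)   # ~86 m before the turn-in point
mu = update_mu(mu, mu_observed=1.15)                              # tires gripped better than expected
print(round(v_corner, 1), round(d_brake, 1), round(mu, 3))
```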

These are the things that we need to be fast on the racetrack, but they are also the things that you need to take into account to be safe in the real world. So what we're trying to do with this project is understand how the car can drive at the limits of the friction between the tire and the road.

Now race car drivers do that to be fast. As they say in racing, if you want to finish first, first you have to finish. So it's important that they actually be fast but also accident-free. So we're trying to learn the same things so that on the road when you may have unknown conditions ahead of you, the car can make the safest maneuver that's using all the friction between the tire and the road to avoid ultimately any accident that the car would be physically capable of avoiding.

That's our goal with that. So we've had a lot of fun with Shelly. We've gotten to drive the car up Pikes Peak and on the Bonneville Salt Flats. Actually Shelly appeared in an Audi commercial with Zach Quinto and Leonard Nimoy, and at the end of the commercial they both look at each other and declare it fascinating.

So if you're as big of a science fiction fan as I am, you realize that once your work has been declared fascinating by two Spocks, there's nowhere to go. So I had to take a stint and try something different in government. And so I spent the last year as the first Chief Innovation Officer at the US Department of Transportation, which I think honestly was the coolest gig in the federal government because I really didn't have any assigned day-to-day responsibilities but I got to kind of dive in and help with all manner of really cool projects, including the development of the first federal automated vehicle policy.

So it's a really great opportunity to sort of see things from a different perspective. And so what I wanted to do was, you know, coming into this as an engineer, give you a perspective of what it's like for somebody looking at the regulatory side on vehicle safety, how they are thinking about the technologies you're developing, and where that actually leaves some opportunities for engineers to make some big contributions to society.

So let's start with what vehicle safety is like today. So today we have a system of federal motor vehicle safety standards. So these are rules, they're minimum performance requirements, and each of them must have associated with it an objective test. So you can tell does a vehicle meet this requirement or does it not meet this requirement.

Now interestingly there is no federal agency that is testing vehicles before they are sold. We rely in this country on a system of manufacturer self-certification. So the government puts these rules out there and manufacturers go, "We got this, we can meet this," and then they self-certify and put the vehicles out on the market.

The National Highway Traffic Safety Administration can then purchase vehicles and test them and make sure that they comply. But we rely on manufacturer self-certification. This is a different system than in most of the rest of the world, which actually has pre-market certification, where before you can sell it the government agency has to say, "Yes, we've checked it and it meets all the requirements." Aviation in this country, for instance, has that.

Aircraft require certification before they can be sold. Cars do not. Now where did that system come from? So a little quick history lesson. In 1965 Ralph Nader released a book entitled "Unsafe at Any Speed." And this is often thought of as a book about the Corvair. It's not. The Corvair featured prominently in there as an example of a design that Nader considered to be unsafe.

What was very interesting about this book was that he was actually advocating for things like airbags and anti-lock brakes back in 1965. These technologies didn't come along until much later. His argument was that the auto industry had failed. It wasn't a failure of engineering, but it was a failure of imagination.

And if you're interested in vehicle safety, I would really recommend you read this book because it's fascinating. They have quotes from people in the 1960s basically saying that we believe that any collision more than about 40 or 45 miles an hour is not survivable. Therefore, there's no reason for seatbelts.

There's no reason for collapsible steering wheels. In fact, there's a quote from somebody who made great advances in road safety saying, "I can't conceive of what help a seatbelt would give you beyond like firmly bracing yourself with your hands." Those of you who have studied physics know that's kind of patently ridiculous.

But there was a common feeling that there was no sense in doing anything about vehicle crashworthiness, because once you got above a certain speed, it was inherently unsurvivable. And I think it's interesting to look at that today, because if any of us were to be in a collision at around about 40 miles an hour in a modern automobile, we'd probably expect to walk away.

You know, we wouldn't really be thinking about our survival. And so what this did is it led to a lot of public outcry and ultimately the National Traffic and Motor Vehicle Safety Act in 1966, which established NHTSA and established this set of federal motor vehicle safety standards. Now the process to get a new standard made, which is a rule-making process in government, is very time-consuming.

Optimistically, about the minimum time it can possibly take is two years. Realistically, it's more like seven. And so if you think about going through this process, that's really problematic. I mean, think about what we were talking about with automated vehicles two years ago or seven years ago. And think about trying to start seven years ago and make laws that are going to determine how those vehicles operate on the road today.

It's crazy, right? There's really no way to do that. And the other thing is that if you think about it, our system evolved from really this sense of failure of imagination, that the government needs to say, "Hey, industry, do this. Stop slacking off. These are the requirements. Get there." But I think it's hard to argue today, with all the advances in automation, that there is any failure of imagination on the part of industry.

People are coming up with all sorts of ideas and concepts for new transportation and automation. Tech companies, startup companies, large OEMs. There's all sorts of concepts being tested out on the road. It's hard to argue that there's still any lack of imagination. Now the question is, are things like this legal?

It's an interesting question, right? Can I actually legally do this? Well, from the federal level, there's an interesting report that came out about 10 months ago from the folks across the street at Volpe who did a scan and said, "Well, what are the things that might prevent you, based on the current federal motor vehicle safety standards, from putting an automated vehicle out on the road?" And the answer was, honestly, not much.

If you have a vehicle, if you start and you automate a vehicle that is currently meeting all the standards, because there are no standards that relate specifically to automation, you can certify your vehicle as meeting the federal motor vehicle safety standards. Therefore, there's nothing at the federal level that prevents, in general, an automated vehicle from being put on the road.

So it makes sense. So if there isn't a safety standard that you have to meet, then you can put a vehicle out on the road that meets all the existing ones and does something new, and there's no federal barrier to that. Now there are a couple of exceptions. There were a few points in there that referenced a driver, and in fact NHTSA gave an interpretation of the rule, which is one of the things that they can do: to say, "Well, we're going to give an interpretation.

It's not making a new rule, but basically interpreting the ones that we have." And they said that actually these references to the driver could, in fact, refer to the AI system. And so that actually is now a policy statement from the department, that many of the references to driver in the federal motor vehicle safety standards can be replaced with your self-driving AI system, and the rules applied accordingly.

So in fact, there's very little that prevents you from putting a vehicle out on the road if it meets the current standards. So if it's a modern production car, automate it. Federal motor vehicle safety standards don't stop that. Now a lot of the designs that I showed, though, things that wouldn't have a steering wheel or other things, are actually not compliant, because there are requirements that you have a steering wheel, that you have pedals.

Again, these are best practices that evolved in the days, of course, when people were not thinking of cars that could drive themselves. And so these things would require an exemption by NHTSA, a process of saying that, "Okay, this vehicle is allowed on the road, even though it doesn't meet the current standards because it meets some equivalent." And setting that equivalent can be a bit of a challenge.

Okay, so the question then is, "Well, all right, if the federal government is responsible, and NHTSA, by the Traffic Safety Act, is responsible for safety on the roads, but it can't prevent people from putting anything out, what do you do?" Right? One approach is to say, "Well, let's get some federal motor vehicle safety standards out there." But as we already said, that's probably about a seven-year process, and if you were to start setting best practices now, what would that look like?

So we've got this challenge. We want to encourage this technology to come out onto the roads and be tested, because that's the way you're going to learn, to get the real-world data, to get the real-world experience. At the same time, the federal government is responsible for safety on the nation's roads.

It can recall things that don't work. So if you do put your automated system out on the highway, and it's deemed to present an unreasonable risk to safety, even if you're an aftermarket manufacturer, the government can tell you to take that off the road. But the question is, "How can you do better?

How can you be proactive to try to have a discussion here?" So we know standards are maybe not the best way of doing that, because they're too slow. We'd like to make sure the public is protected, but this technology gets tested. And so the approach taken to sort of provide some encouragement for this innovation, while at the same time looking at safety, was the federal automated vehicle policy, which rolled out in September.

So this was an attempt to really say, "Okay, let's put out a different framework from the federal motor vehicle safety standards. Let's actually put out a system of voluntary guidance." So what NHTSA is doing is asking manufacturers to voluntarily follow certain guidance and submit to the agency a letter stating that they have followed a certain safety assessment.

Now the interesting thing is that the way this is set up is not to tell manufacturers how to do something, but really to say, "These are the things that we want you to address, and we want you to come to us to explain how you've addressed them," with the idea that from this, best practices will emerge, and we'll be able to figure out in the future what really is the best way of ensuring some of these safety items.

So this rolled out in September. We've got the MIT car here on the side; you see it's got the Massachusetts license plate, so thanks to Brian for bringing that. If you do put Audi stickers on your car, then you get closer to the center. So that's something to consider for future reference.

But this was rolled out in Washington, D.C., by the Secretary, and it consists of multiple parts, but I think the most relevant to vehicle design is this 15-point safety assessment. So these are the 15 points that are assessed, and I'd like to talk about a few of these in some more detail.

And it starts with this concept of an operational design domain and minimal risk or fallback conditions. And what that means is instead of trying to put a taxonomy on here and say, "Well, your automation system could be an adaptive cruise control that works on the highway, or it could be fully self-driving, or it might be something that operates a low-speed shuttle," the guidance asks the manufacturers to define this.

And the definition is known as operational design domain. So in other words, you tell us where your system is supposed to work. Is it supposed to work on the highway? Is it supposed to work in restricted areas? Can it work in all weather? Or is this sort of something that operates only in daylight hours in the sunshine in this area of South Florida?

All of those are fine, but it's incumbent upon the manufacturer or developer to define the operational design domain. And then once you've defined where the system operates, you need to define how you make sure that it is only operating in those conditions. How do you make sure that the system stays there?

And what's your fallback in case it doesn't? And that fallback can be different. Obviously, if this is a car which is normally human-driven, as you see here from the Volvo Drive Me experiment, it might be reasonable to say, "We're going to ask the human driver to retake control." Whereas, clearly, if you're going to enable blind passengers or you are going to have a vehicle that has no steering wheel, you need a different fallback system.

And so within the guidance, it really allows manufacturers to have a lot of different concepts of what they want their automation to be, so long as they can define where it works, what the fallback is in the event that it doesn't work, and how you have educated the consumer about what your technology does and what it doesn't do, so that people have a good understanding of the system performance.
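
As a rough illustration of what "define the operational design domain, check that you are still inside it, and have a fallback" can look like in software, here is a hedged sketch; the fields, thresholds, and fallback names are hypothetical, not anything prescribed by the policy:

```python
from dataclasses import dataclass, field

@dataclass
class OperationalDesignDomain:
    """Manufacturer-defined description of where the automation is meant to operate."""
    geofence: set = field(default_factory=set)   # e.g., mapped road-segment IDs
    max_speed_mps: float = 25.0
    daylight_only: bool = True
    allowed_weather: set = field(default_factory=lambda: {"clear", "overcast"})

    def contains(self, segment_id: str, speed_mps: float, is_daylight: bool, weather: str) -> bool:
        """Is the vehicle currently inside the conditions it was designed for?"""
        return (segment_id in self.geofence
                and speed_mps <= self.max_speed_mps
                and (is_daylight or not self.daylight_only)
                and weather in self.allowed_weather)

def fallback(inside_odd: bool, human_driver_available: bool) -> str:
    """Pick a minimal-risk behavior; the right fallback depends on the vehicle concept."""
    if inside_odd:
        return "continue_automation"
    return "request_driver_takeover" if human_driver_available else "pull_over_and_stop"

odd = OperationalDesignDomain(geofence={"campus_loop_1", "campus_loop_2"})
inside = odd.contains("campus_loop_1", speed_mps=10.0, is_daylight=True, weather="clear")
print(fallback(inside, human_driver_available=False))   # -> continue_automation
```

The point of the sketch is only that the ODD becomes explicit data the system can check against at runtime, and that a vehicle with no steering wheel needs a different fallback branch than one with a human behind the wheel.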

A few things, if we go down, you see also validation methods and ethical considerations are aspects that are brought up here as well. And so validation methods are really interesting as it applies to AI. So really, the idea is that there's lots of different ways that you might test an automated vehicle.

You might go out on a test track and run it through a series of standard maneuvers. You may develop a certain number of miles of experience driving in real-world traffic and figure out how does the vehicle behave in a limited environment. There's questions about a test track, obviously, because you don't have the sort of unknowns that can happen in the real-world environment.

But if you test in one real-world environment, you also have a question of, is this transferable information? So if I've driven a certain number of miles in Mountain View, California, does that tell me anything about how the vehicle is likely to behave in Cambridge, Massachusetts? Maybe, maybe not. It's a little bit hard to extrapolate sometimes.

And then finally, there's also the idea of simulation and analysis. So if I can record these situations, if I can actually create a virtual environment of the sorts of things that I see on the road, maybe I can actually run the vehicle through many, many of these scenarios, perturbed in some way, and actually test the system much more robustly in simulation than I could ever actually do out on the road.
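
As a minimal sketch of that idea, assuming a recorded pedestrian-crossing scenario with a handful of hypothetical parameters, you can generate many perturbed variants and run each through a pass/fail check; in practice the check would be the full vehicle stack running in a simulator, not the one-line physics stand-in used here:

```python
import random

def perturb(scenario: dict, n_variants: int = 1000, seed: int = 0):
    """Generate many variants of one recorded scenario by jittering its parameters."""
    rng = random.Random(seed)
    for _ in range(n_variants):
        yield {
            "ego_speed_mps":        scenario["ego_speed_mps"] * rng.uniform(0.9, 1.2),
            "pedestrian_speed_mps": scenario["pedestrian_speed_mps"] * rng.uniform(0.5, 1.5),
            "dist_to_crossing_m":   scenario["dist_to_crossing_m"] + rng.uniform(-10.0, 10.0),
            "friction":             scenario["friction"] * rng.uniform(0.5, 1.0),   # wet patches, worn tires
        }

def passes(variant: dict) -> bool:
    """Stand-in for running the full vehicle stack in simulation on one variant.
    Here it is just: can the ego stop, at friction-limited deceleration, before the crossing?"""
    stopping_dist = variant["ego_speed_mps"] ** 2 / (2 * variant["friction"] * 9.81)
    return stopping_dist < variant["dist_to_crossing_m"]

recorded = {"ego_speed_mps": 13.0, "pedestrian_speed_mps": 1.4,
            "dist_to_crossing_m": 30.0, "friction": 0.9}
results = [passes(v) for v in perturb(recorded)]
print(f"passed {sum(results)} of {len(results)} perturbed variants")
```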

So the guidance is actually neutral on which of these techniques manufacturers take and allows manufacturers to approach it in different ways. And I think, you know, based upon conversations, when you think about the way companies develop this, they do take all these different approaches. A company like Tesla, for instance, which is recording all the data streams from all their vehicles, basically, is able to run ideas or technologies silently in their vehicles.

They can actually test systems out, get real-world data, and then decide whether or not to make that system active. Companies that don't have that access to data really can't use that sort of development method and may rely much more heavily on simulation or test-track experience. So the guidance really doesn't mandate a particular blend of these, and in fact, it does envision that you might have over-the-air software updates in the future.

So it is interesting, though, to think about whether you have data-driven approaches, things like artificial neural networks, or whether you actually start to program in hard-and-fast rules. Because as you start to think about requirements on a system, how do you actually set requirements on a system which has learned its behavior, where you don't necessarily know what the internal workings or algorithms look like?

There's another one that comes up, which is the ethical considerations. I'm going to pick on MIT for a moment here. So this is an area that I actually did a lot of work on at Stanford together with some philosophers who joined our group. And so when people hear ethical considerations in automated vehicles, it often conjures up the trolley car problem.

And so this sort of classic formulation here about the fact that you have a self-driving car which is heading towards a group of ten people, and it can either plow in and kill those ten people, or it can divert and kill the driver. What do you do? And these are classic questions in philosophy.

If you look, in fact, at the trolley car problem, it's this: I have a runaway trolley car, and I can either divert it to another track, where it will kill somebody who's wandering across that track, or not divert it, in which case the five people on the trolley car are killed. What do I do?

Well, in fact, as this article points out, it's like, you know, before automated vehicles can become widespread, car makers must solve an impossible ethical dilemma of algorithmic morality. So if all this wasn't hard enough, I mean, you're understanding how tough the technology is to actually program this stuff, and then you have to get the regulations right, and now we actually have to solve impossible philosophical questions.

Well, I don't think that's actually true, and I think, you know, it's good for engineers to work with philosophers, but not to be so literal about this. This is a question that philosophers can ask, but engineers might ask a number of different questions, like, who's responsible for the brakes on this trolley?

Why wasn't there a backup system? I mean, why am I headed into a group of ten people without any capability to stop? So an engineer would, in fact, have to answer this question, but might approach it much differently. So if I look at the trolley car problem, I might say, okay, let's see, my options are I've got a trolley car which is out of control.

First of all, I'd like to have an emergency braking system. Let's make sure that I have that. Well, there's a chance that that could break as well. So if my base braking system goes, and my emergency braking system goes, my next option would be to divert it to this side track.

Well, knowing that that's my option, I should probably put up a fence with a warning sign that says, "Do not cross runaway trolley track." Okay, now let's say that I've done all of that. The brakes fail, the emergency brakes fail. I have to divert the trolley, and somebody has ignored my sign and crossed over the fence, and now is hit by the trolley.

Do I feel a little differently about this whole scenario than I did at the beginning of just trying to decide who lived and who died? The solution was made, but by thinking of it as an engineer trying to reduce risk, and not by thinking of levels of morality and who deserves to live or die.

And so I think this is a very important issue, and the reason it's in the guidance is not to basically have everybody solve trolley car problems, but to try to think about these larger issues. And so I think ethics is not just about these sorts of situations, which, in automated vehicles, I think will actually be addressed much more by engineering principles than by trying to figure out from philosophical merits who deserves to live and die.

But there's broader issues here, any time that you have concern for human safety. How close do I get to pedestrians? How close do I get to bicycles? How much care should I put into other people in the environment? That's very much an ethical question, and it's an ethical question that manufacturers are actually already addressing today.

If you look at the automatic emergency braking systems that most manufacturers are putting on their vehicles, they will actually use a different algorithm depending upon whether that obstacle in front of it is a vehicle or a human. So they're already detecting and making a decision that the impact of this vehicle with a human could be far worse than the impact of this vehicle with a vehicle, and so they're choosing to brake a little bit more heavily in that case.
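
A heavily simplified sketch of that kind of class-dependent logic might look like the following; the thresholds and the ramp are invented for illustration and are not any manufacturer's actual calibration:

```python
def aeb_deceleration(obstacle_class: str, time_to_collision_s: float) -> float:
    """Pick a braking level (m/s^2) from the obstacle class and how imminent the collision is.
    Purely illustrative thresholds, not a real AEB calibration."""
    # Harm to a pedestrian or cyclist rises steeply with impact speed,
    # so intervene earlier and allow harder braking for vulnerable road users.
    if obstacle_class in {"pedestrian", "cyclist"}:
        trigger_ttc_s, max_decel = 2.5, 9.0
    else:                       # another vehicle
        trigger_ttc_s, max_decel = 1.8, 7.0
    if time_to_collision_s > trigger_ttc_s:
        return 0.0              # no intervention yet
    urgency = 1.0 - time_to_collision_s / trigger_ttc_s   # ramp up as impact nears
    return max_decel * urgency

print(aeb_deceleration("pedestrian", 2.0))   # already braking
print(aeb_deceleration("vehicle", 2.0))      # not intervening yet
```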

That's actually where these ethical considerations come in, and the idea of the guidance is to begin to share and have a discussion openly about how manufacturers are approaching this with the idea of getting to a best practice where not only the people in the automated vehicles, but other road users feel that there's an appropriate level of care taken for their well-being.

That's one of the areas where ethics is important. The other area where ethics is important is that we have different objectives as we drive down the road. We have objectives for safety: we'd like to get there safely. We have objectives for mobility: we'd like to get there probably pretty quickly.

And we also have the idea of legality, we'd like to follow the rules. But sometimes these things come into conflict with each other. So let's say you're driving down the road and there's a van that's parked where it has absolutely no business parking. You've got a double yellow line.

Is it okay to cross? Well, at least in California, there's no exception to the double yellow line representing the lane boundary for a vehicle that's parked where it has no business being parked. So according to the vehicle code, you're supposed to kind of come to a stop here. I don't think any of us would, right?

In fact, actually when you're in California and you're riding through the hills and you come upon a cyclist, virtually every vehicle on the road is deviating across the double yellow line to give extra room to the cyclist. That's also not what you're supposed to do by the vehicle code.

You're supposed to stay on your side of the double yellow line but slow to an appropriate speed to pass. All right? So there's behaviors where our desire for mobility or our desire for safety are outweighing our desire for legality. This becomes a challenge if you think about how do I program the self-driving car.

Should it be based on the way that humans drive or should it be based on the way that the legal code tells me to drive? Of course, the legal code was never actually anticipating a self-driving car. From a human standpoint, that double yellow line is a great shorthand that says maybe there's something coming up here where you don't want to be in this other lane.

But if I actually have a car with the sensing capability to make that determination itself, is the double yellow line actually all that meaningful anymore? These are things that have to be sorted out. Speed limits being another one. You know, if we're out on the highway, it's usually a little bit flexible.

Do we give that same flexibility to the automated vehicle or do we create these wonderful automated vehicle roadblocks of vehicles going to the speed limit when nobody else around them is? Do we allow them to accelerate a little bit to merge into the flow of traffic? Do we allow vehicles to speed if they could avoid an accident?

Is our desire for safety greater than our desire for legality? These are the sort of ethical questions that I think are really important. These are things that need to be talked through because I believe if we actually have vehicles that follow the law, nobody will want to drive with them.

And so we need to think about either ways of giving flexibility to the vehicles or to the law in the sense that vehicles can drive like humans do. So this brings up some really interesting areas, I think, with respect to learning and programming. And so the question is, you know, should our automated vehicles drive like humans and exhibit the same behavior that humans do?

Or should they drive like robots and actually execute the way that the law tells them that they should drive? Obviously fixed rules can be one solution to this. Behavior learned from human drivers could be another solution to this. We might have some sort of balance of different objectives that we do more analytically in terms of how much we want to obey the double yellow line when there are other things influencing it in the environment.
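
One way to picture that "analytical balance of objectives" option is a weighted cost over candidate trajectories, where the weights encode how much legality can be traded against safety and mobility. This is a toy sketch with made-up numbers and field names, not a real planner:

```python
def trajectory_cost(traj: dict,
                    w_safety: float = 10.0,
                    w_mobility: float = 1.0,
                    w_legality: float = 3.0) -> float:
    """Weighted sum of competing objectives for one candidate trajectory (lower is better)."""
    safety_cost = max(0.0, 1.5 - traj["min_clearance_m"])            # penalize passing too close to the van
    mobility_cost = traj["extra_travel_time_s"]                      # penalize stopping and waiting
    legality_cost = 1.0 if traj["crosses_double_yellow"] else 0.0    # penalize breaking the lane-boundary rule
    return w_safety * safety_cost + w_mobility * mobility_cost + w_legality * legality_cost

candidates = {
    "stop_behind_parked_van": {"min_clearance_m": 2.0, "extra_travel_time_s": 60.0, "crosses_double_yellow": False},
    "squeeze_past_in_lane":   {"min_clearance_m": 0.3, "extra_travel_time_s": 2.0,  "crosses_double_yellow": False},
    "cross_when_clear":       {"min_clearance_m": 2.0, "extra_travel_time_s": 3.0,  "crosses_double_yellow": True},
}
best = min(candidates, key=lambda name: trajectory_cost(candidates[name]))
print(best)  # with these weights, briefly crossing the line when it is clear comes out best
```

Make the legality weight enormous and the car waits behind the parked van indefinitely, which is exactly the behavior nobody would want to ride with; the open question is who gets to choose those weights.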

Now what's interesting is that as you start to think about this, there's limits to any of these approaches in the extreme. You know, as we found with our self-driving race car, if you're not learning from experience, you're not making use of all the data. You're not going to do as well.

And there's no way that you can possibly pre-program an automated vehicle for every scenario it's going to encounter. Somehow you have to think about interpolating. Somehow you have to think about learning. At the same time you can say, well, why don't we just measure humans? Well, human error is actually a factor, the primary factor, in 94% of accidents.

It's either a lack of judgment or some lack of perception on the part of the human. So if we're simply following humans, we're actually only learning how well humans can do things. We're leaving a lot on the table in terms of the potential of the car. And so this is a really interesting discussion that I think will continue to be both in the development side of these vehicles and the policy side.

What is the right balance? What do I want to learn versus what do I want to program? How do I avoid leaving anything on the table here? So since this is the point where, you know, I've had a bunch of slides with words, I want to give people a little bit of a sense for what you could be leaving on the table if in fact you don't adapt.

This is Marty. Marty is a DeLorean that we've been working with in my lab. Now DeLoreans are really fantastic cars unless you want to accelerate, brake, or turn. They really didn't do any of those things terribly well. There's no power steering, there's an underpowered engine, and very small brakes.

All of these things are fixable. In fact, what's nice about the DeLorean is it separates quite nicely. The whole fiberglass tub comes up. You can take out the engine. You can take out the brakes. You can make some modifications to the frame, stiffen the suspension, work with Renovo Motors, a startup in Silicon Valley, to put in a new electric drivetrain and put it all back together.

And when you do, you come up with a car that's actually pretty darn fun. And one we've programmed to drive itself. This is Adam Savage from Mythbusters going along for a drive. And what you see is Marty doing something at a level of precision that we're pretty sure no human driver can meet.

JR said there's no way he can do this. You see it's going into a perfect drift, doing a perfect donut around this cone, and then it launches itself through the next gate, sideways into the next cone. Now it's doing this, you see it shoots through the gate, missing those cones, and then launches into a tight circle around the next cone.

It's actually doing this using sort of an algorithm similar to orbital mechanics, if you think about how it's actually orbiting these different points as it sets a trajectory. Now the limit on this is the tires, as you can see as it comes around here. The tires disintegrate into many chunks flying at the camera as we do this.

But the car has the ability to really continue, even as the tires heat up, to execute this pretty nice trajectory. Here you see it going through the gates again and launching into a stable equilibrium, putting pretty much the tire tracks right over where they were in the previous run, and then finally ending.

So this is the sort of thing that I think is possible. As you look at these vehicles, there's a huge potential out there for these things not just to drive about as well as an average human, but to far exceed human performance in their abilities to use all the capabilities of the tires to do some amazing things.

Now maybe that's not the way that you want your daily drive to go, although when we first posted some of this video, one of the commenters was like, "I want this car. That way I can go into the store to buy donuts while it sits in the parking lot doing donuts." It wasn't a use case that I had thought of. But one of the things that we have thought about is how, if you limit yourself to only thinking about what the tires can do before they get to the saturation of the friction with the road, you're only taking into account one class of trajectories.

There's a lot more beyond that that could be very advantageous in some emergency situations. Wouldn't it be great if the car had access to that? That's not something we're going to get if we only sort of monitor day-to-day driving; we're not going to get that capability in our cars.

So one other aspect that came through in the policy, which I think is extremely important as we think about neural networks and learning, is this idea of data sharing. And there's a huge potential to accelerate the development of automated vehicles if we can share some information about edge case scenarios in particular.

So if you think about trying to train a neural network to handle some extreme situations, that's really much easier if your set of training data contains those extreme situations. So if you think about the weird things that can happen out on the road, if you had a database of those and those comprised your training set, you'd have a head start in terms of being able to get a neural network and begin to validate that it would work in these situations.
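
As a small sketch of why a shared edge-case library helps, assuming hypothetical record formats: mix the rare events into the training (or validation) set at a deliberate rate, rather than the vanishingly small rate at which any one fleet actually encounters them.

```python
import random

def build_training_set(routine_drives, edge_case_library, edge_fraction=0.3, size=10_000, seed=0):
    """Mix a shared edge-case library into routine driving data so that rare events
    are well represented when training (and later validating) a model."""
    rng = random.Random(seed)
    n_edge = int(size * edge_fraction)
    batch = rng.choices(edge_case_library, k=n_edge) + rng.choices(routine_drives, k=size - n_edge)
    rng.shuffle(batch)
    return batch

routine = [{"label": "routine", "id": i} for i in range(100_000)]
edge_cases = [{"label": "edge", "id": i} for i in range(200)]   # e.g., pooled across manufacturers
train = build_training_set(routine, edge_cases)
print(sum(1 for s in train if s["label"] == "edge"), "edge-case samples out of", len(train))
```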

So the question is, is there a way for the ecosystem around self-driving cars to share some of this information about the critical situations among the different players, so that if you learn something, yes, you can make your cars safer, but actually all the cars out on the road get safer?

Now clearly you need to balance this with some other considerations. There's the intellectual property concerns of the company. There's privacy concerns of any individuals who might be involved. But it does seem to me that there's a big potential here to think about ways of sharing certain data that can contribute to safety.

And this is a discussion that's going to be ongoing and I think academia can do a lot to sort of help broker this discussion because, you know, the first level people say, you know, data sharing, I don't know, companies aren't going to share, we're not going to get the information we need.

But most of the time people stay in the abstract as opposed to saying, well, what information would be most helpful? What information is really going to give people confidence in the safety of these cars? It's going to let regulators understand how they operate and yet at the same time is going to protect the amount of development effort that companies put in there.

I think there is a solution here and in fact if you look at aviation, there's a really good example that already exists. It's known as the ASIAS system. It started with only four airlines that decided to share safety information with each other. And this goes through MITRE, which is a federally funded R&D center.

And it's actually now up to 40 airlines. And if companies get kicked out of the MITRE project, they really try very hard to get back in. Now this is anonymized data, so that, you know, airlines actually get an assessment of what their safety record is like and can compare it to other airlines in the abstract, but they can't compare it to any identifiable airline.

So there's no ranking of this. It's not used for any enforcement techniques. And it took people a long time to kind of build up and begin to share that. But now there's a huge amount of trust and they're sharing more and more data and looking at ways that they can perhaps actually start to code in things like weather and time of day, which had been removed for anonymization purposes in the original version of the system.
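
Here is a toy sketch of the kind of anonymization step such a clearinghouse implies; the field names and the salted-token trick are assumptions for illustration, not a description of how ASIAS or any AV program actually does it:

```python
import hashlib

IDENTIFYING_FIELDS = {"operator", "vehicle_id", "gps_trace", "timestamp", "weather"}

def anonymize(record: dict, salt: str) -> dict:
    """Strip fields that could identify an operator before a record enters a shared
    safety database, keeping only the content needed for aggregate analysis."""
    shared = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    # A salted hash lets the clearinghouse de-duplicate submissions from one
    # operator without revealing who that operator is.
    shared["submitter_token"] = hashlib.sha256((salt + record["operator"]).encode()).hexdigest()[:12]
    return shared

raw = {
    "operator": "FleetCo", "vehicle_id": "AV-0042", "timestamp": "2017-01-27T14:03:00",
    "gps_trace": [(37.42, -122.08), (37.43, -122.09)], "weather": "rain",
    "event": "hard_brake", "trigger": "pedestrian_occluded_by_parked_van", "min_ttc_s": 0.9,
}
print(anonymize(raw, salt="clearinghouse-secret"))
```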

So I think there's some good examples out there and this is something that's very important to think about for automated vehicles. And I think as this discussion goes forward, those of you who are interested in developing these vehicles, using techniques that rely on data are going to be an important voice for the importance of data sharing.

I think there's a large role here to kind of make people aware that this actually does have value in the larger ecosystem. So this is something that I was able to work on more broadly as well. I was the DOT representative on the National Science and Technology Council's Subcommittee on Machine Learning and Artificial Intelligence.

And this was one of the recommendations that was really pushed forward as well because AI has tended to really make great advances with the availability of good data sets. And in order to make those sort of good advances in transportation, this group was also advocating that those data sets need to be made broadly available.

So this is a little bit about the vision behind the automated vehicle policy, what the goal was to really achieve here. The idea of trying to move towards a proactive safety culture, not to necessarily put in regulations prematurely and try to set standards when honestly we don't know the best way to develop automated vehicles, but to allow the government to kind of get involved in discussions with manufacturers early and be comfortable with what's going out on the roadway.

And actually to kind of help the U.S. to continue to play a leading role in this. Obviously if vehicles were going to be banned from the roads, it would be very difficult for the country to continue to be a place where people could test and develop this technology. And then the belief really that there can be an acceleration of the safety benefits of this through data sharing.

So each car doesn't have to encounter all the weird situations itself, but in fact can learn from what other vehicles experience. And the idea is that really this is meant to be an evolving framework. So it comes out as guidance, it really generates conversations, it generates best practices, which can eventually evolve into standards and law.

And there's a huge opportunity here because the belief isn't that the National Highway Traffic Safety Administration will be doing all of the development of these best practices, but that that'll really evolve from what companies do and what all of us at universities are able to do to sort of generate ways to solve these problems in creative manners.

Ways to actually keep the innovation going, but ensure that we have safety. So as you start to think about all of the AI systems that you're developing, you start to flip it around a little bit and think about how the regulator is going to get comfortable that the system is not going to do something weird.

These are great research questions. I think these are great practical questions and these are things that will need to be worked out going forward. So I leave you with that as a challenge to think about, to think as you take this course, not only about the technology that you're learning, but how do you communicate that to other people?

And where are the gaps that need to be filled? Because I think you'll find some great opportunities for research, startup companies, and ultimately work with policy and government there. So thanks for the opportunity to talk to all of you, and I want to stop there because probably the things that you want to talk about are more interesting than the things that I wanted to talk about.

So I'm happy to take questions along there. We had a quick hand here. I do. I think that's a great question. Thanks for reminding me. So the question was whether in the future when you have all vehicles automated, would we be able to actually roll back things like airbags and seat belts and other things that we have on there, what we might know as passive safety devices in vehicles.

I believe that we will, and in fact actually one of the things that I think is most extraordinary, if you think about this from a sustainability standpoint, when you look at the average sort of mass of vehicles and average occupancy of vehicles in the U.S., you know, with passenger cars we're using maybe about 90% of the energy to move the vehicle as opposed to moving the people inside.

And one of the reasons for that is crashworthiness standards, which are great, because that's what's enabled us to survive these crashes at 40 miles an hour. But if we do have vehicles that are not going to crash, or if they are going to have certain modes which might have, you know, very carefully designed crash areas or things like this, we can potentially take a lot of that mass out.

Particularly if these are low-speed vehicles which are designed only for the urban environment and they're not going to crash because they're going to drive, you know, somewhat conservatively or in some ways separated from pedestrians, then I think you can get a lot of the mass out and then you start to actually have transportation options which, you know, from an environmental standpoint are comparable to cycling.

So I think that's actually a really good goal to strive for, although we either have to kind of limit the environment or think in the far future with some of those techniques. Good. Yeah, that's a great question. So what are we doing with Shelly? Is our mission really just to drive as fast as possible and faster than a human or are we trying to learn from this something that we can apply to other automated vehicles?

It really is a desire to learn from this for the development of other automated vehicles. And we've often said that at the point where, you know, the difference between Shelly's performance and the human driver's starts to be really mundane things, like, you know, our shift pattern or something which isn't applicable, we kind of lose interest at that point.

However, you know, up to this point, every insight that we've gotten from Shelly has been directly transferable. And we've programmed the car to do some emergency lane changes in situations where you don't have enough room to brake. And we've actually been demonstrating in some cases that the car can do this much faster than a human, even an expert human, can respond.
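
A back-of-the-envelope version of why a lane change can succeed where braking cannot (idealized point-mass numbers, assuming all of the tire's friction can be used in one direction at a time; not the actual maneuver planner):

```python
import math

G = 9.81  # m/s^2

def braking_distance(v_mps: float, mu: float) -> float:
    """Distance needed to stop at friction-limited deceleration a = mu*g."""
    return v_mps**2 / (2 * mu * G)

def lane_change_distance(v_mps: float, mu: float, lateral_offset_m: float = 3.5) -> float:
    """Rough distance covered while moving one lane width sideways at constant
    friction-limited lateral acceleration (a real swerve has two phases, so this
    is optimistic, but the scaling with speed is the point)."""
    t = math.sqrt(2 * lateral_offset_m / (mu * G))  # time to cover the lateral offset
    return v_mps * t

v, mu = 30.0, 0.9   # about 108 km/h on dry pavement
print(round(braking_distance(v, mu), 1))      # ~51 m needed to stop
print(round(lane_change_distance(v, mu), 1))  # ~27 m to be one lane over
```

The gap between those two numbers grows with speed, because stopping distance scales with v squared while the swerve distance scales roughly linearly with v.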

So there's certain scenarios that we've done like that. And I would say from the bigger picture, what's really fascinating is that we originally started out with this idea of let's find the best path around the track and track it as close as we can. But in fact, when you look at human race car drivers, what they're doing is actually very different.

They're pushing the car to the limits and then sort of seeing what paths that opens up to them. And it flips the problem a bit on its head in a way that I think is actually very applicable for developing safety systems out on the road. But it's not a way that people have looked at it, to the best of my knowledge, up to this point.

And so, you know, that's really what we're hoping: that the inspiration in trying to reproduce human performance there leads us to better safety algorithms. So far, you know, that's been the case. And when that ceases to be the case, I think we are definitely much less interested.

Yeah, so liability is a good question. So, you know, what, who is liable, if I can sort of rephrase, you know, for an accident in an automated vehicle. On the one hand, that's kind of an open question. On the other hand, we do have a court system. And so whenever there are new technologies, these things are actually generally figured out in the courts and it can be different from state to state.

So this is one aspect where, you know, potentially some discussion, so that manufacturers aren't subject to different conditions in different states, would be helpful. But the way that it works now is that it's usually not binary. We have in the U.S. the sense of joint and several liability.

And so you can actually assign different portions of responsibility to different players in the game. You have had companies like Volvo and, in fact, Google make statements that if their vehicles are involved in accidents, then they would expect to be liable for it. So people have often talked about needing something really new for liability, but I'm not sure that's the case.

We do have a court system that can ultimately figure out who is liable with new technologies. We have some manufacturers that are starting to make some statements about assuming product liability for that. The one thing that really could be helpful, as I mentioned, is perhaps some harmonization. Because right now insurance is something that is set state by state.

And so the rules in one state as to who's at fault for an accident may be very different in another state. Okay, so what if companies, you know, as they send in the safety letters, are using criteria to set safety that may not be broadly acceptable to the public, where the public would like these vehicles to have greater safety?

I think, you know, the nice thing about this process is, first of all, we would know that, right? So we would have a sense that companies are developing with certain measures of safety in mind, and there could actually be a discussion as to, you know, whether that is setting an acceptable level.

It's a difficult question because it's not clear that people really know what an acceptable level is. Does it have to be safer than humans drive now? You know, my personal feeling, I would say yes. And does it have to be much, much safer? Well, that's hard to say.

You know, you start to then get into this situation of, we're comfortable to a certain extent with our existing legal system and with the fact that humans could cause errors that have fatal consequences. Do we feel the same way about machines? All right, you know, we tend to think the machines really should have a higher level of perfection, so we may, as a society, be less tolerant.

People will often say, well, so long as the overall national figures go down, that would be good, but that's really not going to matter much to the families who are impacted by an automated vehicle, particularly if it's a scenario with very, very bad optics. What do I mean by that?

If you think about the failures of mechanical systems, because they're different than the failures of human beings, they can often look really bad, right? If you sort of think about a vehicle that doesn't detect something and then just continues to plow ahead, visually that's really striking, and that's the sort of thing that, you know, would get replayed and be in people's consciousness and raise some fears. And so, you know, I think that's an issue that's going to have to be sorted out.

Yeah, my question is about automated vehicles on a global scale. You talked about, you know, data sharing and the potential for collaboration in terms of, you know, technology, but also maybe policy. Is there anything, you know, any sort of collaboration between different, you know, between research in different parts of the world to exchange, you know, different policies and different technologies?

Because, you know, it's very different. Yes, so that's a good question. What's being done, really, from a global standpoint to sort of share ideas, to share research, and to kind of work through some of these things, particularly on the policy side? So most of the auto manufacturers are global corporations, and so a lot of the research in this is done in very different parts of the world.

So Renault-Nissan, for instance, is doing a lot in Silicon Valley, in Europe, and in Japan. And I think you see a lot of that with the different manufacturers. One of the cool things that I got to do as part of my role was to go with the Secretary of Transportation to the G7 Transportation Ministers meeting in Japan and address the ministers about sort of the U.S.

policy on automated vehicles. And one of the parts of that discussion was, well, the U.S. has a very different set of rules. So we have this manufacturer self-certification as opposed to pre-market certification. But testing, for instance, is something that has to be done regardless. So either it's testing that's done by a manufacturer, or it's testing that's done by, for instance, in Germany, the TÜV and other agencies that are responsible for road safety.

And so the idea is maybe we should be sharing best practices on testing, so we have a set of standard tests. And then manufacturers across the globe could test to a certain set of standards that might be translated differently according to the policies and regulatory environments in different countries.

So that was part of the idea that we advanced at the G7, and it seemed to kick off really well. I never had a conscious decision on this. I actually got a call from the White House one day. You know, I got this message, or this email: "I'm reaching out from the White House; please give me a call back."

So of course I called back immediately, and Pam Coleman on the other end of the line was like, "I love doing that." She's like, you know, when you're calling for the White House, everybody returns your call. And so honestly, you know, she said, "Here's the situation. We're looking at a lot of these areas in the Department of Transportation that seem to hit upon your areas of expertise.

We want to talk about working with you in some way; the holy grail would be for you to come out and work in DC for a while." And then I got a call from the Department of Transportation, and they're like, "Well, we know you wouldn't want to come out to DC for a while." I'm like, "Try me.

Could I do cool stuff, and could I make an impact?" And then, you know, I met with the Secretary of Transportation out in San Francisco, and, you know, he assured me, he's like, "You would be surprised. You would be very surprised at how much of an impact you could have." And this ended up being really true.

A lot of times this stuff moves quickly, and people who are involved in policymaking may or may not have a technical background in this. They may have come through the campaign, for instance, and then ended up in political roles. Yet the folks that I worked with were really trying to get good information and make good decisions.

And so I just kept getting called in for advice on all sorts of things, and I found that people actually really wanted to have that technical information and then used it. So that's the way it happened. It seemed like it was an opportunity to take things that I've worked on, as I mentioned, you know, automated vehicles since 1992, and then to be part of this policy development, which went really quickly.

It was a one-page outline when I arrived in February, and then in September it rolled out. And along the way was all sorts of editing and negotiations at the White House and other agencies. Fascinating, fascinating process. So I kind of fell into this, but, you know, as Lex mentioned, I think I'm emerging as a policy wonk here, because it was a very fun experience.

Let's see over here, what have we got? With data sharing, you have a lot of companies that have somewhat of a monopoly on a lot of data; especially, like, Google has so much more data available. Yeah. Than a lot of smaller startups. How do you incentivize these big companies to actually share their data?

Good, so how do you incentivize companies to share their data when they have an awful lot invested in the gathering of that data and in being able to process that data? And I think the answer is to start small and to try to say, are there certain high-value things that could, again, make the public comfortable, make policy makers comfortable, that really aren't going to be a burden on the company?

You know, so one of the things, from the Peloton standpoint, that was bounced around at one point: our trucks actually use vehicle-to-vehicle communications as part of their link. Well, when you do that, you discover that there's actually an awful lot of places where that drops out, because cell phone towers, which are not supposed to be broadcasting on that frequency, seem to create an awful lot of interference there.

Well, that could be very interesting from a public policy perspective to know, because we were sort of monitoring for incursions in that frequency range everywhere we go. That, for instance, might be a very useful piece of information to share with policy makers that wouldn't be any real proprietary issue to share from the company's perspective.

And so I think that the trick is to start small and find what are the high-value data where there isn't a big issue of sharing. I mean, if you go to Google and say, "All right, Google, what will it take for you to share all of the data you're acquiring from your entire self-driving car program?" I guess Waymo now.

I think that would be a very big number, and so I don't think that's a starting point. I think you start with, you know, what is the high-value data, data that's of high value in the public policy sense and really minimal hassle to the companies. I don't know how much longer you have; I'm happy to stay and answer as many questions as you like, but I know you have a class to run.

How are we? Okay, good. Yes? Are there any efforts that you know of to come up with standards for sharing map data, accident data, simulations? Good. So is there any effort underway for sharing map data, some of the edge-case accident data, simulation capabilities, and things like that? This is one of the next steps that NHTSA outlined in the policy, and so there are people at NHTSA actually working on taking some of these next steps, again in sort of a pilot or prototype mode.

So that's something that's currently being worked on in the department, and you can probably expect to hear more about it in the not-so-distant future. Yes? I have a question about the learning from data. Yeah. Driving in urban areas and driving on the highway and in rural areas are very different.

Do you see the federal government making, like, a standard data set for every company, so that it's good before they can take a car into production? Or do you see that you should be asking the company to do the test itself? Okay, so the question is, testing in urban and rural environments, or even driving in urban and rural environments, are very different.

And should the government actually come up with a standard set of data that all companies have to attest to? I think one of the reasons that the policy was designed the way it was, was to make sure we had this concept of operational design domain. So, in fact, if the only area that I've mapped, and the only area that I want to drive, is, say, in a campus environment, or in one quarter square mile, then the idea is that we would like the companies to explain how they handle the eventualities in that one quarter square mile, but they should really have no reason to handle other situations, right?

Because their vehicle won't encounter that, so long as it's been designed to stay within its operational design domain. So, I think in the short term, you know, what you see is people often looking at hyperlocal solutions, or kind of the low-hanging fruit for a lot of automation. And even if you think about offering mobility as a service, if I'm gonna offer sort of an automated taxi, I'm probably gonna do that in a limited environment to start with.

And so, if I'm only doing this in Cambridge, does it really matter if I can drive in Mountain View or not? And so, you know, I think the idea is to start with the definition of the operational design domain, with a data set that is appropriate for that operational design domain.

And then, as people's design domains start to expand nationwide, then I think, you know, the idea of common data sets starts to be interesting. Although, you know, there is a sense that no finite data set is really going to capture every eventuality. And so, you know, people will be able to develop, or sort of, you know, design to the test in some ways.

Is that sufficient? I think it'll make people feel better, but I personally wonder how much value there is. You know, it's the same with test-track testing. I can think of 20 different tests that automated vehicles will have to pass, and people will design ways to pass all 20 of those tests.

It may make some people more comfortable, but it doesn't make me all that much more comfortable that they'd be able to handle a real-world situation. All right, let's see, one last hand up. Could you make an open-source car under... okay, so the question is, could you make an open-source car under the guidance provided by US DOT?

The question would be, you know, from a practical standpoint, you're supposed to submit a safety assessment letter, which is supposed to be signed by somebody responsible for that. And so, an issue, if you were to open-source this, would be, you know, which module do I use, and who is actually signing off on that?

Would I feel comfortable signing off on something which I then allowed to be open-source? I'm not a lawyer, but I don't think there would be anything that would prevent that if you had a development team that was doing that, and people who were willing to sign off on whatever version of the software was actually used in an open-source car.

You know, I will say that the guidance does apply to universities or to other groups that would be putting a car out on the road. And I think if you look through the 15 points, they're not really meant to be overly restrictive. In fact, I would argue that pretty much any group that is going to sort of put real people at risk by putting an automated vehicle out on the road should really have thought through these things.

So I don't think it's a terribly high burden to meet. I think it would be, you know, meetable by a group. It's just that the question would be, you know, from the open-source sense, how do you sort of trace who's responsible and who's signing off on that?

All right. I think we gave those third graders a run for their money. Yeah, absolutely. Thank you so much. Let's give Chris a big hand. Thanks a lot. (applause)