Oliver Cameron (CEO, Voyage) - MIT Self-Driving Cars


Chapters

0:00 Lex introducing Oliver
0:39 Oliver background
4:40 Udacity self-driving car engineer nanodegree program
14:10 Autonomous trip from Mountain View to San Francisco
23:11 Open source challenges
26:48 Birth of Voyage
31:58 Retirement communities
38:35 Sensor and technology stack
40:45 Example challenge for perception (foliage)
41:58 Survey of recent perception research
45:51 Lessons learned
48:45 Q&A

Transcript

All right, welcome back to 6.S094, Deep Learning for Self-Driving Cars. Today we have Oliver Cameron. He's the co-founder and the CEO of Voyage. Before that, he was the lead of the Udacity Self-Driving Car Program that made ideas in autonomous vehicle research and development accessible to the entire world. He has a passion for the topic and a genuine open nature that makes him one of my favorite people in general and one of my favorite people working in this space. And I think thousands of people agree with that. So please give Oliver a warm welcome. (audience applauding) - Thank you very much, Lex. And thank you all for having me here today. Super excited to speak all about Voyage. But in reality, the kind of thing I wanna share today is kind of like this title says, how to start a self-driving car startup. Rarely do you kind of get an inside scoop of how a startup is formed. You kind of hear all the PR, all the kind of very lovey-dovey press releases out there. I wanna share kind of the inside of how at least Voyage came to be, which was a little unconventional compared to your average self-driving car startup. They always tell you that the path to a startup, getting to the goal you want, is kind of a zigzag. Ours was kind of an insane zigzag. So we'll go through all of that stuff. Let's talk about my background. Also a little unconventional. I'm not very good at learning in a classroom. For me, I found learning by doing, by building, has always been the thing that's worked best for me. So going all the way back to when I was a teenager, software just in general was my passion. This idea that you can make something out of absolutely nothing, and then all of a sudden, millions, and in Facebook's case, billions of people can be using that thing. And after building lots of crazy stuff, and perhaps not being too popular in high school because that's all I did, I started a company.
I won't bore you with all the details, but I learned a lot during that experience, and went through Y Combinator, which I believe started right here in Cambridge, which is very cool. And then this very pivotal moment happened to me. I heard about this online class which was generating a whole bunch of scandal and lots of controversy, and it was from this guy called Sebastian Thrun. He'd taken this Stanford class he taught in artificial intelligence and just said, "Screw it, we're gonna put the whole thing online." And back then, and this was around 2011, this was a very controversial thing to do. Today, MIT and many others do this all the time, but back then, there was a hell of a lot of controversy around doing something like this. But this learning format really just appealed to me. Being able to sit in front of my laptop, learn at my own pace, build, build, build, was something that really resonated with me. And I took this class in 2013, artificial intelligence for robotics. And this again was just this pivotal moment. My head exploded. All the enthusiasm I'd had for software kind of transferred to artificial intelligence and robotics, and I just became addicted to the format of what are now called MOOCs, massive open online courses. And I loved them so much that I decided, hey, I wanna go do this and help others learn this stuff. So, hey, let's go join Udacity and build more classes like this. So I did that for four years, led our machine learning, robotics, and eventually our self-driving car curriculum, which was a lot of fun. And I got to learn directly from two great company builders, like truly great company builders. One was Vishal Makhijani. He was the operator extraordinaire at Udacity, understood how to build a company, how to build a culture, how to incentivize, and how to do all those things that we don't often talk about, and Sebastian Thrun.
He, of course, founded the Google Self-Driving Car Project in its early days, and right now I believe he's building flying cars. Just in general, I learned so much from him, but this idea that you are literally in control of your destiny, you can build absolutely anything if you put your mind to it, was always pretty inspirational. Today, of course, I build self-driving cars at Voyage, and we'll talk more about what makes us special compared to the other self-driving car companies you may have heard of in this class and beyond. Let's talk about Udacity. Can you raise your hand if you've heard of Udacity? Very curious. There you go, that's most of the room. Udacity, like I said, was founded by Sebastian Thrun. He took this class online and it all just exploded, and he built a company around it. Udacity's real focus is on increasing the world's GDP, this idea that talent is everywhere, that it isn't now just constrained to the best schools in the world, that because of this proliferation of content, there are talented students all over the world, and all they need is the content in which to be able to build crazy cool, world-changing things. And what I see as my job today is to go out into the world and find these ridiculously talented people, and then put them to work on the hardest problems that exist, and Udacity, to me, felt like the perfect place to do this. As a kind of prelude to this, about three years into Udacity, we had had this real focus, like I said, on machine learning and robotics, but we really wanted to take it to the next step. And we came up with this kind of concept internally that we called Only at Udacity. What if we taught the things that other places weren't teaching? What if people all around the world could come learn from what may appear to be niche topics, but were just being taught at the right time, because that industry's about to blow up? 
And the first one we did of this, and we've done some after, including Flying Cars, a much more in-depth curriculum on artificial intelligence, was self-driving cars. So this is a quick video that introduces it, and this is, of course, Sebastian Thrun, robotics legend. Let's see if this plays. (soft music) (video plays) And why did we want to do this? What was our goal? It was to accelerate the deployment of self-driving cars. Like Sebastian says in that video, there's a number of reasons why self-driving cars are transformational. And at the time, this was around 2016, it felt like self-driving cars were just taking a little bit too long. We rewind to that particular spot in time. Google was really the only main effort going on. And what we believed is that it needed to happen faster, and that one of the reasons it wasn't happening fast enough is because there wasn't enough talent in the space. So what we decided to do is, like I said, build something quite special. We wanted to pair up a world-class curriculum, an actual self-driving car, which we'll talk about more, and what we called our open-source challenges. And all of that would come together to build this quite special curriculum. So let's start with the curriculum. One of our beliefs was that partnering with industry was the right way to go. That was because it felt, and I believe this, that the knowledge of how to build a self-driving car was not necessarily trapped in academia, it was trapped in industry. So we had to go straight to industry, work with engineers that were already challenging themselves with these problems, and get them on camera. Have them teach the concepts that they know and build day in, day out, and have that be transplanted to thousands of minds around the world.
So these are just some of those partners. There were many, many more. But we had a real focus on finding these engineers wherever they may be and getting those folks on camera. We also built an incredibly talented team. This is just a small snippet of the curriculum team. But of course, Sebastian Thrun was a big part of this curriculum. When I told folks that I'd gotten the chance to work with him on specifically self-driving cars, he likened it to getting basketball lessons from Michael Jordan, which I thought was pretty fun. And they were probably just as entertaining. But some really, truly great folks working on this curriculum and still doing that to this day, who deserve all of the credit, frankly. Here's a quick photo of our first lecture recordings with eventual Voyage co-founders, Eric and Mac. Eric, who's on the left, hates this picture. And here's why. There you go. He still isn't at Mac's height, but he still has that box on his desk. And we built a whole 12-month curriculum to take an intermediate software engineer who may be in consumer software or just some other part of the software world and take them into self-driving cars. We wanted to cover perception, prediction, planning, localization, controls even, just the whole breadth of the industry. And the reason we wanted to do that is because we saw the best fit for a Udacity student not necessarily being a specialist in a niche-- for example, just perception, although there's been a whole bunch of folks doing that as well-- but that the skills of a Udacity student tend to pair themselves well with being a generalist, someone that can contribute all across the stack. So we tried to give these folks that breadth of knowledge. So here's another quick video of just the curriculum that we built with some previews. In the first term, you'll build projects on deep learning and computer vision.
For example, you'll build a behavioral cloning project where you drive a car yourself in a simulator, kind of like in a video game. And then you use data from your own driving in the simulator to train a neural network to drive that car for you. This is the type of project that cutting-edge Silicon Valley startups are working on right now. And it puts you at the forefront of the deep learning and autonomous vehicle industry. You'll also build a project to detect and track vehicles in a video stream, just like real autonomous vehicles have to do out on the highway. In term two, you'll learn about sensor fusion, localization, and control. This is hardcore robotics that every self-driving car engineer needs to know in order to actuate and move the vehicle through space. In the localization module, you'll build a kidnapped vehicle project, which takes a vehicle that's lost and figures out where it is in the world with the help of sensor readings and a map. This is exactly what real self-driving cars have to do every time they turn on in order to figure out where they are in the world. In the control module, you'll build a model-predictive controller, which is a really advanced type of controller that's actually how most self-driving cars move through the world and use the steering wheel, throttle, and brake to follow a set of waypoints or a trajectory to get from one point to the next. In term three, you'll learn about path planning. You'll have an elective month. And you'll learn about system integration. Path planning is really the brains of a self-driving car. It's how the car figures out how to get from one point to another, as well as how to react when you meet obstacles in the scene. I'm going to give you a sneak preview of how path planning works. And this is something that nobody's ever seen before. So get ready. Path planning involves three parts. There is prediction, which is figuring out what the other vehicles are going to do around us.
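The kidnapped-vehicle project described above is a particle filter: start with no idea where the car is, then repeatedly predict particle motion, weight each particle by how well it explains the sensor readings against the map, and resample. Here's a toy one-dimensional sketch of that loop; the map, noise values, and motion model are all made up for illustration, and the actual project is two-dimensional with landmark association.

```python
# Toy 1-D "kidnapped vehicle" particle filter. All figures here (the landmark
# map, noise levels, motion model) are assumed for illustration only.
import math
import random

random.seed(0)

LANDMARKS = [5.0, 18.0, 40.0, 55.0]   # known map positions (assumed)
ROAD_LENGTH = 60.0
SENSOR_STD = 1.0                      # measurement noise std dev (assumed)

def sense(x):
    """Noisy distance from position x to every landmark on the map."""
    return [abs(x - lm) + random.gauss(0.0, SENSOR_STD) for lm in LANDMARKS]

def likelihood(particle, measurement):
    """How well a candidate position explains the measurement vector."""
    w = 1.0
    for lm, z in zip(LANDMARKS, measurement):
        d = abs(particle - lm)
        w *= math.exp(-((d - z) ** 2) / (2 * SENSOR_STD ** 2))
    return w

def localize(true_x=17.0, n_particles=1000, steps=20, velocity=1.0):
    # The "kidnapped" car: particles start uniform, i.e. position unknown.
    particles = [random.uniform(0.0, ROAD_LENGTH) for _ in range(n_particles)]
    for _ in range(steps):
        true_x += velocity
        z = sense(true_x)
        # Predict: apply the motion model (with noise) to every particle.
        particles = [p + velocity + random.gauss(0.0, 0.1) for p in particles]
        # Weight and resample in proportion to measurement likelihood.
        weights = [likelihood(p, z) for p in particles]
        particles = random.choices(particles, weights=weights, k=n_particles)
    estimate = sum(particles) / n_particles
    return estimate, true_x

est, truth = localize()
print(f"estimated position: {est:.2f}, true position: {truth:.2f}")
```

After a handful of predict/weight/resample cycles, the particles collapse around the true position even though the filter started with a uniform prior.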
There is behavior, which is figuring out what we want to do. This goes on for a while. So we'll pause it there. The impact of this curriculum was bigger than we thought it would be. When we pitched, as a small team, this idea to Sebastian and to Vish at Udacity, there was a lot of skepticism that something like this was going to be successful. And the reason that there was that skepticism is that one of the formulas that Udacity looked at to determine the impact of building a certain type of content was the number of open jobs available. In web development, mobile development, all that good stuff, for example, there were millions of jobs open. So it felt like there was a massive opportunity to impact the area. But if you were to, in 2016, search for self-driving car engineers or the different disciplines that exist within, it was kind of just Google. So it was very interesting just to see the instantaneous reaction that we had to launching this curriculum. Today, over 14,000 successful students from all around the world, as you can see. Probably the most exciting thing is to see what students have done with this sort of curriculum. For example, I learned recently that a set of our students here are building a self-driving truck startup in India. Another set of students in South Korea are building a perception engine for self-driving cars. Just a whole bunch of folks building truly amazing things. And not only that, they've gotten jobs at Cruise, Zoox, Waymo, Argo, all the big names, and are actively impacting those companies today. Now for the fun stuff. So we also decided to make that curriculum extra special. And we decided to do that by building an actual self-driving car. And whenever I talked about this internally at Udacity, people asked me why. Like, why do we need to do this, right? Isn't the curriculum just enough? Why go to the length of building an actual self-driving car?
And selfishly, some of it was just a personal want to build a self-driving car. But the reasoning that I use is that what better way to prove to these students that are putting their faith in us that we know what we're doing, than to build our own self-driving car? And also, what better way to collaborate with these students on an area that is really in its infancy, than again, by having this platform that students could actually run code on a car. So we decided to buy a car, and we'll talk more about that in a second, but we set ourselves a milestone for our self-driving car. It was to drive from Mountain View to San Francisco, 32 miles of driving with zero disengagements. It should be repeatable. It won't be zero disengagements every single time, 'cause otherwise we've got an actual self-driving car. But in a short period of time, how much progress can we make towards this stated goal? Raise your hand if you've been on El Camino Real in that sort of region, okay? So you probably understand it's got a lot of traffic lights. In fact, in our route, about 130 traffic lights. It's a multi-lane, three lanes, speed limit of about 40, 45, something like that. So it's fairly complex, but it's also got some constraining factors, which is what we're looking for too. So it focused our tech efforts. This is the car we bought. You're probably very familiar if you follow self-driving cars with the Lincoln MKZs of the world. They're everywhere, and there's a reason for that in terms of the drive-by-wire nature of the vehicle and other stuff. And we outfitted a whole bunch of sensors, some cameras, some LIDARs, all that good stuff. We also tried to build our own mount. We affectionately called this the Periscope. I don't know why it's in slow motion, but this was not our final design. Built all this from parts at Home Depot. Truly an MVP. And then we got to work. The goal was to accomplish that milestone within six months.
So we, of course, had to work fast, assembled a dream team of folks that I'd worked with on different projects at Udacity that also wanted to come dabble in this, folks that worked on the machine learning curriculum, robotics curriculum, et cetera. So this was one of our first days testing. And we did this at the Shoreline Amphitheater parking lot, which actually now is a very popular place to test self-driving cars in the Bay Area because Google used to do it in the past. We saw a lot of weird stuff. For example, you'll see here. (car beeping) (audience laughing) We saw what I believe to be a motorcycle gang. And we made progress. We kept iterating, kept building. And it started to come together. In fact, some stuff that we thought wouldn't work surprisingly just started to work. This is on El Camino Real. I'm in the back seat here. So Mac discovered that we shouldn't have stopped at that traffic light. But we did. We resolved the mystery later. Let's go to the next video. And of course, we learned a lot by driving this route, the different behaviors of drivers. One of the things that we were worried about is vehicles cutting us off. And when we say cutting us off, it means a vehicle pulling out in front of us, even a few hundred feet in front. You'll see here. (audience member speaking indistinctly) We drove a little slow, 25. (audience member speaking indistinctly) Turned out it was fine. And pretty soon it got quite boring. Car was doing very well driving itself. We built some cool algorithms to change lanes when necessary, similar to what you see with Tesla Autopilot these days. We collaborated with some students on a traffic light classifier, which was integrated into ROS. (audience member speaking indistinctly) And yeah, pretty boring stuff. So you can tell Eric was surprised that it was just fine.
And we also had a penchant for recording themed videos, like you saw maybe from Elon Musk and the Tesla team with Paint It Black. We've got our own version of that. Eventually we became pretty confident, but we always wanted to test most of the day just to get the most learnings out of everything. This video was made at 2:30 a.m., driving from Mountain View to San Francisco, all 32 miles. Of course there's a backing track. (gentle music) Maybe I want to turn it down. (laughs) So it's easier because there's less traffic, right? This is kind of cheating, and didn't count as the milestone, just to be clear. You'll see that we eventually hit the 32 miles, and Mac, who's in the driver's seat, is pretty excited about that. (gentle music) (laughs) And they hit it. But of course, that didn't count because it's in the middle of the night, and that's not gonna be a very useful route, but it was an awesome accomplishment just to even make it 32 miles with no disengagements when there's traffic lights, lane changes, all that good stuff. But after four months, this is in the daytime, this began, I believe, at like six, sorry, 7 a.m., we accomplished it. That small team had come together and built something pretty cool that could handle, again, multi-lane roadways, varying speed limits, traffic lights, objects, all that good stuff. And the thing that really brought this home to me is that the industry was now ready, right?
It felt like this feeling I had in software where someone in their bedroom can go and build something and launch it, almost feeling overnight, could now, not quite the same, but close to the same, happen in self-driving cars. But we'll talk more about what this led to in a little bit. Let's talk about open-source challenges. We also got the same question, why do this? And it was clear to me that for something like self-driving cars, which was so formative, we had to collaborate with students to figure out the best stuff because even the folks that were at Udacity were not necessarily the world's leading experts in these topics, so we wanted to use this hive mind of activity from around the world to teach the best stuff. So just through a period of a year, these are all the different challenges we launched. There were prizes and leaderboards and all this sort of fun stuff. The one that I'll focus most on today is using deep learning to predict steering angles. And the challenge was clear. It was that given a single camera frame, you have to predict the appropriate steering angle of the vehicle. If anyone had read NVIDIA's end-to-end paper from 2016, this stuff was all the rage, and it felt like one of those areas that was just begging for more exploration. And again, let's use this, all these students from around the world to do it. And we did have students from all around the world. There were over 100 teams, people self-organized into these little groups to go and build this. And over the course of about four months, we had a whole bunch of submissions, all taking incredibly different approaches to the problem. We released data sets, validation sets, all that good stuff. Here you'll see the winning model. And I later found out that the author of this model actually went on to lead the self-driving car team at Yandex, which if you've been following CES is doing some pretty cool stuff in self-driving cars today.
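Leaderboards like the one described above need a scoring function that compares each team's predicted steering angles against held-out ground truth. A common choice for this kind of regression task is root-mean-square error; here's a minimal stdlib sketch, with the caveat that the real challenge's exact metric and submission format are assumptions here.

```python
# Minimal leaderboard-style scorer: RMSE between predicted and ground-truth
# steering angles (radians). The actual challenge's metric and file format
# are assumed for illustration.
import math

def rmse(predicted, actual):
    """Root-mean-square error between two equal-length angle sequences."""
    if len(predicted) != len(actual):
        raise ValueError("prediction count must match ground-truth count")
    total = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    return math.sqrt(total / len(actual))

ground_truth = [0.00, 0.05, 0.12, 0.08, -0.02]
submission = [0.01, 0.04, 0.10, 0.09, -0.01]
print(round(rmse(submission, ground_truth), 4))  # 0.0126
```

Lower is better, and because errors are squared, a model that is occasionally wildly wrong scores worse than one that is consistently slightly off, which matters for steering.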
But you'll see this is on a route from the Bay Area to Half Moon Bay, a very winding road. And you'll see that the prediction matches pretty closely to the actual, which is nice. And if you read his description of his solution, it's a pretty cool solution. And I think the most exciting thing was just the number of different approaches to the problem, all resulting in some awesome stuff. And again, in true Voyage fashion, we recorded a video of what this model performed like on our car. (video playing) It wasn't perfect, as no first model is, and the general approach of camera-only driving had its faults. One of the main ones that we realized after trying all this stuff out is that, of course, a car, when steered by such an input, performs differently in the real world than it does on your desk in a simulator or through prerecorded camera frames. So adjusting for those corrections that might need to be made is something that students after the fact added, which was pretty cool. So after all of these things, building that curriculum, building a self-driving car, launching these challenges, it felt like it was time for something new. It was awesome to go and collaborate with all these students. And it felt like I had to go build something. So I gathered that same team that had built this curriculum and we said, we're gonna go build a self-driving car. This is from my pitch at Khosla Ventures. You can kind of see the pitch deck there a little bit. Voyage is a new kind of taxi service. Our pitch has changed somewhat through time, but that's still pretty accurate. And we started what is now called Voyage. And our goal really was that we wanted to, again, build a self-driving car, but we wanted to do it differently. We didn't wanna follow the same formula that we felt we'd seen from some of the other folks in the field. And the reason is that those folks have real advantages.
When you think about Google's project, of which I'm a big fan, they have this massive engineering pipeline of folks that wanna go build a self-driving car at what is today Waymo. But they also have a cash bank balance of billions of dollars that is hard to match. They also have the brand recognition of getting to work with Google and all that good stuff. So we just knew we had to think about this problem quite differently. And what motivated me is that today, as we all know, we have this incredibly broken transportation system. You step outside onto the roads today, and I don't know about you guys, but I don't feel particularly safe when I jump into my car. We all know the stats: over one million people suffer fatalities on the roads every year. That doesn't include folks that break necks, break bones, all that horrific stuff. It's also incredibly inefficient. We've, again, all observed this as we go about our day. Just the number of lanes that exist on a road today to account for peak traffic, the number of vehicles which have enough room for eight people but usually have one person in that front seat. I read a stat recently that only 7% of the average vehicle's energy usage is going towards moving the things that are actually in the car. The rest is waste. So an incredibly inefficient system. It's also expensive. The reason we see a lot of old cars on the road today is because that's, at least today, the most affordable way for lots of folks to get around. And it's inaccessible. And you'll see why this matters to us in particular. Our goal is to introduce a new way to explore our communities. This is a video of one of our cars at a particularly cool place, which we'll talk more about. And this is kind of our mission. And why now? Why is it possible to build a self-driving car now? A number of factors that we learned during that Udacity experience, but some new as well.
It feels, from everything we see, that sensors are now in a position where they're capable of supporting level four self-driving cars. The resolution, the range, the reliability, all those things that were necessary for an L4 self-driving car are today ready. That didn't used to be the case. If you rewind to 2007 and look at the cars that were participating in the DARPA challenges, you'll see a lot of single channel lasers. You'll see the relic of the Velodyne HDL-64, the spinning bucket, as it's called today. And no one would have claimed those sensors ready. But today, you've got this enormous breadth of sensors that can take you there. Compute is there. When we think about the recent rise in GPUs and whatnot, finally being able to have enough performance in the back of a car with the power constraints that you have, it's there. And talent. Again, this is not just Google today. You've got all of these great minds from all around the world building this technology. So you're able to recruit those folks, put them to work on the problems they've solved in many cases beforehand. The reason I have yellow for computer vision, which is not a knock against computer vision, is because it's not quite there yet for a fully driverless self-driving car. If you, again, rewound three, four, five years, this would have been a red. But today, with all the community and whatnot around computer vision, this is steadily getting to a green state. So pretty soon, that'll be green. And of course, then you'll have that perfect formula for level four driving. What we run after is ride sharing. We believe that the optimal way for people to move around is to be able to summon a car. But the thing that's suboptimal today is that you have to have a human driving you whenever you wanna move around. Prevents the cost from being lower, prevents some safety issues, prevents some quality issues.
We think solving that will mean these next-generation ways of moving around come to fruition. But what we also see is that if you, let's say we never remove the driver from the car, that a ride-hailing network always had a human driver, you are inherently limited by the number of miles you can drive, which means that it'll never replace personal car ownership, will never fix that fatality number I talked about, all of those things we must solve. So we think that by having a self-driving car, these next-generation transportation networks will come to fruition. Our lead VC is a guy called Vinod Khosla, the founder of Khosla Ventures, an awesome guy who's done some truly world-changing things. He has this quote, which I'm a big fan of: "Your market entry strategy is often different from your market disruption. Start where you find a gap in the market and push your way through." And this better communicated what I mentioned at the very beginning, which is that we should build a self-driving car, but do it in a different way. Because if we don't do that, we're gonna fall into the same traps as many of the others that have died along the way. We have to find a way to do something different that we own and that we are really, really good at. And for us, that was retirement communities. Hands up if you've ever visited a retirement community. Let's see, way less, there you go. Surprise, Lex, I've gotta get you out to one. But these are just amazing places. And the reasons we choose retirement communities first to deploy our self-driving technology in is for these four reasons. They are slower, the speed limits in these communities tend to be far slower than you'd see on public roads. Much calmer roadway. When you visit these locations, I liken it to listening to a podcast at 0.75x. Just very constrained, very slow, and a little boring from time to time. But you've also got these heartfelt transportation challenges.
We hear from these residents all the time about how transportation is a pain point and that their only option is a personally owned vehicle. These folks know in many cases they shouldn't be driving, but because they don't have an alternative, they still drive. We hear from folks that put off much needed surgeries, hip replacements, things like that, because they don't have a friend in town who's gonna be able to move them around. We hear from folks with vision degeneration that they just don't see a way that they'll be able to move around and keep that quality of life that they've been able to have. Folks gripping steering wheels for extended periods of time, all these challenges that felt like the best first place for a self-driving car to begin. And a clear path to customers. We see that on the roads today, ride sharing in public cities and whatnot is a particularly brutal battle, a race to the bottom in terms of cost. If we owned every retirement community in the country, meaning the transportation networks there, that would in and of itself be a very valuable business. One of my favorite passengers is Anahid. She came to visit us recently and gave this quick speech about why self-driving cars matter to her and her community. - Not only that, but we're concerned about safety. I was on the road and it was one of the drivers. A car turned and went the wrong way right at us. A four-hundred meter spine just caught up with us. An older person who doesn't have the same reflexes strapped up to their door in an accident. - Let's talk about our first community. This is The Villages. Whenever I show this slide, people are astounded by the number of residents in a community like this. Over 125,000 and growing. Over 750 miles of road. And what we have in this location is an exclusive license to operate an autonomous vehicle service.
This is one of our other beliefs, which is that by partnering very deeply with the community, it means that we're able to deliver a better service and that we're able to grow a more reliable business. We won't have entrants and competitors from all of the other self-driving car companies in our communities. What we actually do in exchange for that exclusive license is grant these communities equity. Because if we win, it's probably, in fact, highly likely as a result of those communities. And the addressable market of transportation in these regions is massive. These residents tend to be, as a lot of seniors tend to be, quite affluent, which means that they have some disposable income when it comes to being able to pay for ride-sharing services and other things like that. So we find that that recipe is absolutely perfect here. And we're launching and have launched passenger services to these residents. Gotten a lot of awesome feedback. Learned a lot about the needs of providing ride-sharing for senior citizens. Just some quick stats. This is from my Series X fundraising deck, just about the size of the senior market. Again, this is the first place we go, but you can get a feel for just how large this transportation market is. Today, there are 47 million seniors. That's growing by 2060 to over 100 million seniors in the US. The total addressable market for just seniors is incredibly large. 2,500 plus communities, all that good stuff. And this is how we see the world, the landscape of potential deployments. You've kind of got a lot of the big guys focusing on that bottom left quadrant. They're focusing on large cities. And it makes sense because it's playing to their unique strengths. It's playing to their ability to deploy thousands of cars, tens of thousands of cars. It plays to the strength that they have, at least some patience or ability to have more extended timelines when it comes to building this technology. 
But for a startup like us, that fights for survival every single day, it means that we have to do things differently. So we focus on that top right quadrant there, what we've kind of coined as self-contained communities. These places are simpler, slower, but they also have this ability for us to have that exclusivity that I talked about. And there's some others, of course, that we play in, whether it's the senior market or maybe even small cities and things like that. Let's talk about autonomous technology. So just to reiterate, why do we deploy in retirement communities? Slower speeds, simpler roadways. There is a central authority. These places tend to be run by private companies, which makes for a quite unique relationship in a very positive way. It means we can deploy faster. It means we have the potential to have more impact in these regions. It also turns out that retirement communities tend to be located where there's ideal weather for self-driving cars. Think about Arizona, Florida, et cetera. We have a world-class team building this at Voyage from all the major programs out there, and that makes our lives infinitely easier. One thing that also makes our lives easier is the sensor configuration of our car. We've intentionally made this decision that we're not gonna optimize for cost today, but for performance. We wanna get to truly driverless sooner than most, and one of the easiest ways you can, again, make your life easier is by optimizing for high-resolution sensors. At the very top of the vehicle, we have the VLS-128, which is a 128-channel LiDAR that's capable of seeing 300 meters in 360 degrees. There are many different LiDARs on the vehicle to cover certain blind spots. Altogether, we process 12.6 million points per second, and that just looks incredibly high-resolution. You'll see our car at the bottom there, and that's the raw point cloud output that we see in the world. 
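As a rough back-of-envelope check on those point-cloud numbers, the throughput of a spinning LiDAR is just channels times firings per revolution times rotation rate. The figures below are assumed round numbers for illustration, not published specs for the VLS-128 or Voyage's actual configuration:

```python
# Back-of-envelope LiDAR throughput. All parameter values here are
# hypothetical round figures, not official sensor specifications.

def points_per_second(channels: int, firings_per_rev: int, rev_hz: float) -> float:
    """Approximate single-return point rate for a spinning LiDAR:
    every firing cycle returns one point per channel."""
    return channels * firings_per_rev * rev_hz

# A 128-channel unit firing 1800 times per revolution at 10 Hz would
# produce 128 * 1800 * 10 = 2,304,000 points per second. Several units
# like this, plus smaller blind-spot LiDARs, could plausibly sum to the
# ~12.6 million points per second quoted in the talk.
main_unit = points_per_second(channels=128, firings_per_rev=1800, rev_hz=10)
print(f"{main_unit:,.0f} points/s")
```

The point of the arithmetic is just that per-sensor rates in the low millions add up quickly once you mount multiple LiDARs on one vehicle.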
We're working towards level four, and for us, what that means is that if you're building a demo self-driving car, kinda like we did at the Udacity project, you may focus on just the top four items, that top row. You may focus on perception, prediction, planning, and controls, and it turns out you can build a very impressive demo quite quickly by just focusing on those things, but of course, those things fall apart whenever edge cases are introduced, which happen all the time, so we've spent a ton of time on all the items here because, again, our goal is to build not a demo but a truly driverless vehicle. We also have an emphasis on partnerships, because what we've noticed in the self-driving ecosystem is that there are not just self-driving car companies building the full stack anymore. There are now folks getting into simulation, mapping, middleware, teleoperation, routing, sensors, of course, and a ton more, so we make our lives, again, easier by partnering with companies like this so that we don't have to spin up a simulation team or we don't have to spin up an operations team to go map the world. We can just work with these very cool companies. Let's talk about one unsolved problem which fascinates me. It's to do with perception, and you probably won't be able to notice this unsolved problem from just this picture, but maybe if I add some annotations, you might. Foliage, trees, bushes, whatever you wanna call them. You may have seen some quotes in the media about some popular AV programs struggling with such foliage. For example, Cruise cars sometimes slow down or stop if they see a bush on the side of a street or a lane-dividing pole. That was in The Information. Oop, wrong way. This one, Uber's self-driving car software has routinely been fooled by the shadows of tree branches, which it would sometimes mistake for real objects, insiders say. That's Business Insider. And even Voyage. There's only one hard stop on the way. 
The culprit is a bush two feet high that protrudes into a lane from a street median, which Voyage considers a possible threat. Voyage may trim it, and we did, but we don't think that's scalable. And, or maybe it is, I don't know. But we, at the beginning of 2018, decided to solve this problem. So, of course, all of this resides in the world of perception, an area of particular fascination for me. We're sharing these slides, but these are just some of the papers and research that we see going on that intends to solve those sorts of issues. One of the reasons you've seen those programs, including ours, be particularly sensitive to foliage is because, from a perception perspective, one of the most well-known ways to detect objects is to utilize the map. So if you have this map, you effectively, and this is simplifying to a certain extent, subtract everything that's already in the map, and then use what remains as a way to understand what's in and around you that's dynamic. Then, of course, you'll end up with decent representations of cars and pedestrians and whatnot. But if foliage grows, which it does, trees too, then it's gonna extend out beyond the map, and that particular bush is now an object in your path. These networks here, which are all neural networks, don't use that same technique. They don't use the map as a prior. Instead, what they do is take, of course, the 3D scan of the world, and then take a more learned approach to the problem. You'll have tens of thousands, hundreds of thousands of labels of cars, humans, et cetera, and then these networks are able to pick those objects out. We're particularly fascinated by PIXOR, which came from some great researchers at Uber ATG. VoxelNet came from Apple's Special Projects Group. I've heard our engineers talking a lot about Fast and Furious recently, which merges together perception, prediction, and tracking into a single network, which is pretty cool, and PointPillars, which I think came from the nuTonomy team recently. 
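The map-subtraction idea just described can be sketched in a few lines. This is a toy voxel version with made-up coordinates, not any production pipeline; it just shows why a bush that grew after mapping ends up classified as a dynamic obstacle:

```python
import numpy as np

def voxelize(points: np.ndarray, cell: float = 0.5) -> set:
    """Quantize 3D points into a set of occupied voxel cells."""
    return set(map(tuple, np.floor(points / cell).astype(int)))

def dynamic_points(scan: np.ndarray, static_cells: set, cell: float = 0.5) -> np.ndarray:
    """Keep only scan points whose voxel is absent from the prior map.
    Whatever survives the subtraction gets labeled 'dynamic' -- which is
    exactly why foliage that grew since mapping is flagged as an obstacle."""
    cells = np.floor(scan / cell).astype(int)
    mask = np.array([tuple(c) not in static_cells for c in cells])
    return scan[mask]

# Toy demo: the map contains a wall; the live scan adds a car
# and a bush that has grown since the map was built.
mapped = np.array([[10.0, 0.0, 1.0], [10.0, 1.0, 1.0]])  # wall at map time
scan = np.array([[10.0, 0.0, 1.0],    # wall point: in the map, subtracted
                 [5.0, 2.0, 0.5],     # car: not in the map, kept
                 [3.0, -1.0, 0.3]])   # overgrown bush: not in the map, kept
static_cells = voxelize(mapped)
print(dynamic_points(scan, static_cells))  # car and bush both survive
```

The learned detectors named above avoid this failure mode because they classify points by appearance rather than by absence from a prior map.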
I think Carl is speaking soon, right? So just in general, we see a whole bunch of work going on out there to solve these issues. The other problem that these sorts of networks solve, which I also find particularly fascinating, is that if you use traditional clustering algorithms, what you might see is that if two people are stood next to each other, a traditional algorithm will cluster them as one object, which, when you're trying to move away from those edge cases and build a truly self-driving car, is a non-starter, right? Because pedestrians are the most important thing you can probably detect, and detecting two things as one thing is not gonna cut it. And of course, it does that because it's a dumb algorithm. It's not trained on any sort of information. But these networks, again, are very, very good at understanding the features and perspectives of humans, even if they are in crowds and whatnot. And that then helps all your stack downstream, because if you have accurate perception information about objects in and around you, your predictions are much better, your tracking is much better, and ultimately how you navigate the world is much safer. I'm also particularly fascinated by reinforcement learning, which I know Lex is as well. If you've read Waymo's recent work on imitation learning, I think that's particularly cool. Another company we track quite closely, just 'cause they do amazing stuff, is Wayve, trying to build an entirely self-driving car powered by reinforcement learning. Think about disengagements as rewards and things like that, to be able to tune that to better performance. Also just areas of learned behavior planning, ultimately fusing rules of the road with more learned behaviors. The ecosystem, I think, is an area that is thriving today, seeing just how many folks are diving into not just the full stack, but building tools and building other really important parts of the stack. 
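The clustering failure mode just described is easy to reproduce. Below is a toy single-linkage Euclidean clusterer with made-up coordinates, not anything from a real stack; it has no learned notion of what a person looks like, so two pedestrians standing close together fuse into one object:

```python
import numpy as np

def euclidean_cluster(points: np.ndarray, eps: float = 0.7) -> list:
    """Dumb single-linkage clustering: any point within `eps` of a
    cluster member joins that cluster. Purely geometric, no semantics."""
    labels = [-1] * len(points)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:  # flood-fill all points reachable within eps
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.where(dists < eps)[0]:
                if labels[k] == -1:
                    labels[k] = next_label
                    stack.append(int(k))
        next_label += 1
    return labels

# Two pedestrians standing 0.5 m apart: closer than eps, so the
# geometric clusterer fuses all their points into a single object.
ped_a = np.array([[0.0, 0.0], [0.1, 0.1]])
ped_b = ped_a + np.array([0.5, 0.0])
labels = euclidean_cluster(np.vstack([ped_a, ped_b]))
print(labels)  # every point carries the same cluster label
```

A learned detector trained on labeled pedestrians can separate the two people, because it recognizes each one's shape instead of just measuring gaps between points.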
The maturation of sensors, not just higher-resolution LiDAR, but things like 3D radar. We get pitched all the time from these companies, and it's clear to see there's been a rise in volume from all these great efforts. Lessons learned. Now that I've been building Voyage for two years, and prior to that spent four years at Udacity, what things have I personally learned that are not technical in nature? So many things. So these all may look like cliches, but I promise you they all came from lessons which were really, really painful in the moment. Don't be intimidated. So the thing that I feel happens a lot in self-driving cars is that because it started in this very academic sense, meaning Stanford, Carnegie Mellon, and whatnot, it felt like to break into the industry, you had to also go through that same path. You had to get a PhD in something, and really go the path that was well-trodden. But I think that only takes the industry so far. And I think it's really important that we get folks from all different backgrounds, all different industries, to come contribute to this field. 'Cause if we don't, there is no driverless. It can't happen in that isolated bubble. It needs to be extended out. So don't be intimidated by those things. Understand your limitations. This is perhaps more of a kind of CEO lesson for myself, but I think when you're building out a company from one person or five people to, today we're 44 folks, you cannot do everything. And it's really important that you build a team around you that is able to do what you used to do, but do it 10 times better. I probably didn't spend enough time building out that team until some challenges came our way when it comes to that stuff. Be proactive versus reactive. I think it's really crucial, again, when you're building a company, to try and predict what's gonna happen next. Because if you're reactive, you're constantly two steps behind what other folks are doing. Get out of the way. 
I think a lot of folks, again, perhaps overstay their welcome in certain areas of the company when they should just say, okay, I've got experts now. I can just step aside and let those folks do what they do best. And speaking of which, hire the best. It's really easy when all this pressure's on, when you're building a company, to kind of sacrifice when it comes to your culture, when it comes to hiring. It's really crucial that you find folks that are not just the best in their field, but are the best match for your company. And always be curious. One of the things we believe in at Voyage is that it's important that knowledge is not isolated to just one person, that that knowledge should be spread throughout the company. Because even though it may feel like oversharing or overcommunicating, what that knowledge may mean for someone that has a particularly unique background is they may do something incredibly cool with it. They may build something that totally transforms our company. So that's about it. I can jump to questions if that's helpful. - That was great. Please give a big hand. (audience applauding) - How did you identify retirement communities as the target market to prioritize? - Yes, so retirement communities for us was actually, there's a really long story, but I'll trim it down a little bit. So when we were starting Voyage, Sebastian Thrun was very helpful in helping us start this company. And of course, as kind of naive founders of a company, we were like, oh, let's just take this El Camino thing and put it everywhere else that looks like El Camino and just do that over and over again. But he cautioned against that. And very wisely so, because again, you're nothing special compared to the other self-driving car companies out there by doing so. 
And in 2009, he had really advocated to Google leadership, Larry Page, et cetera, that retirement communities might just be the best way for Google to go about deploying their self-driving cars. But, and I can understand why, I think the Google folks were Google, right? We're not just about retirement communities, we're about the world, like level five or nothing, right? So he got some pushback, but he did some research in that process, met some folks. So when we were starting, he was like, you've got to check out these retirement communities. So we did, we went to visit and eventually we got there. So we wouldn't have got to that point without Sebastian pushing for that. - Just to follow up on the question of retirement communities, do you ever think about the other collateral issues, especially how someone in a retirement community would have to get into a car? - Yep. - And how exactly would they interface? Like somebody wants to make a call to have a car come to wherever they are, and they have to move from point A to point B. So did you ever think about all these issues that are very germane? It's not just a vehicle moving on its own. - Yep. - These are all collateral issues. How do you plan to address this? - It's a good question. So the way we think about this is that today we've intentionally focused on a segment of the market called active adult communities. These folks tend to be able to get into their own cars or into a taxi, open the door, sit down, without the need for any assistance when it comes to that. But they may have vision issues, they may have other issues that prevent them from driving perhaps. For example, we hear a lot that folks feel really uncomfortable driving in the evenings. They feel comfortable driving in the daytime 'cause their vision supports it, but when it comes to the evening time, they have this mad rush to get home. 
But there is that other market which you're talking about, right, which is folks that just need that helping hand getting to the car. And one of our beliefs as a company is that the senior market, like I had in that slide, is surprisingly large. And what that means to us is that we think we can own it. We think we can be that company that any senior citizen in that situation thinks, oh, I should call Voyage because I need to get from point A to point B. Instead of thinking I should call Waymo or Cruise or any of the folks that are gonna go after the general big market, they'll think about Voyage. And the reason they'll think about it is because we'll deliver a product to them that is meant for those folks, that is designed for their use cases. It may be that actually if they're going on a long trip, let's say they're traveling 50 miles, the first mile of that trip and the last mile of that trip may involve a human, like helping them into the car and then dropping that human off somewhere else to go do that all over again. It may involve crazy robots that help people from their cars. We've heard from folks at Toyota that are building these bag-carrying robots and other things that may assist seniors in getting to the cars and whatnot. So I think that's why that market for us is particularly exciting, because it feels like you can deliver these tailored products that would enable us to be the market leader. But today we focus on active adult, but who knows where you go next. - Can you talk a little bit about how you determined your final sensor suite? - Yeah, so the truth is it's never final. So we think about generations of vehicles. So we have our first generation vehicle, which was a Ford Fusion, had a single Velodyne HDL-64 in it, a bunch of cameras, radar, and we set some milestones based on that vehicle and we accomplished those milestones. 
And then once we reached the max to which we're able to take that vehicle, we then say, "Oh, we need to bring on our G2 vehicle, our second-generation vehicle." So we did that and we said, "Okay, we have these certain goals in mind, which are pretty lofty and pretty ambitious. We need incredible range, incredible resolution for these things." And actually what we've discovered is that in our particular communities, going at the speeds that we're going at, radar isn't particularly useful. So we don't have radar on our second-generation vehicle, for example. But I'm sure that when we go to that third-generation vehicle there'll be other driving factors where, you know, we work backwards from the milestone to say, "What do we need on this vehicle?" Maybe cost in the third-generation vehicle, right? We may say, "Hey, we need a more affordable sensor suite than what exists in our second-generation vehicle." But they're driven by technical requirements, and that means that we are able to really marry the two with the vehicle. - I was curious, when you showed the student-led content, where one of the students in your first practice car had developed a traffic light detector, and then you showed later on that, you know, you were getting student input for deep learning models for steering wheel turns. I was wondering what your system architecture kind of looks like in terms of the kinds of perception that you take in, how modular it is, and to what extent deep learning algorithms have played a part in those different parts of that system? - Yeah, that's a good question. So I really encourage folks to get familiar with ROS. ROS has always been this kind of playground for roboticists of all different types of robots to be able to try things out. And ROS 1 is particularly notorious for kind of hacky and hobbyist types of projects, but it's not meant for production. 
ROS 2 though, which is in kind of an alpha release state, is definitely meant for more production-oriented things. And the reason I mention ROS is because it has this awesome architecture which lets you plug and play what they call nodes and be able to experiment with different approaches to the problem. So for example, the node running that deep learning model, predicting steering angles, effectively replaced our more rules-based planner and perception engine. And we just plugged the steering angle output straight into our controller to actuate the vehicle. And ROS is particularly good at those sorts of architectures and it's all open source. So you can do some cool stuff with it. - Can you tell us how you handle the liability and insurance for passengers in your vehicles? - How we handle insurance? Is that the question? So we have a pretty cool deal with a company called Intact Insurance. And the idea is that insurance in the autonomous age is gonna be very different than insurance today, right, for human drivers, because there's different risk assessments and whatnot. And one of the ways that we're able to prove to these insurers that, you know, we're good at what we do is actually by sending them data. We send them data from our cars as we drive, showing that as we move through the world, we accurately detected things and planned around things and all that good stuff. And then they use that data to inform our rates of insurance. I think that the future of insurance will actually be along similar lines, but perhaps more extreme, where, for example, the rates will change depending on the complexity of the environment. If we're just driving down a straight road, completely straight and there's zero vehicles around us, our insurance rate should be super low, right? 
But if we enter a city center and there's thousands of people and cars and all that crazy stuff, our insurance rate should just rise almost instantaneously. So we're partnering with someone today that insures the passenger, the car, sensors, all that stuff, but I think there's a lot of room for innovation there too. - Did you have any problems onboarding the retirees initially? Were they, you know, skeptical, scared? And then the other question is, what are the major missing pieces in terms of computer vision to achieve L4? - What was that last part, missing pieces in computer vision? - In computer vision, to achieve L4 self-driving. - Gotcha. So one of the more interesting insights I think we had about retirees is that, again, in my kind of naive state back in 2016, my general feeling was retirement communities might not be the first to adopt this technology, right? Because they may be slower to adopt new technology, might be scared of the technology, all those sorts of things. And to kind of validate that, I went to talk to some senior citizens. First I talked to my own grandma. She hates self-driving cars. Sounds like that's not a good sign. But then I went to talk to these folks in these sorts of locations. And the really interesting thing we learned is that with traditional consumer software or devices, yes, there is definitely a lag in adoption with senior citizens. And that's proven in many studies, many stats, that senior citizens are slower to adopt the Facebooks of the world or the Instagrams or the WhatsApps, all those sorts of things. Cryptocurrency, I don't know. But that's because they have these very well-defined processes that they've had for most of their lives, right? Instead of using Facebook, they call someone up and have a chat, a conversation with someone about their day or stuff that's going on. Or they don't share a picture on Instagram, they physically mail a picture or something like that. 
So to change that behavior is tough, right? Because that's a behavior that is fundamentally different than what they're used to. They have to log onto a computer, go to this weird Facebook thing and share pictures with thousands of people. That's weird. But the difference between that and a self-driving car is that our experience is no different than the car they're used to. It just turns out it's being driven differently, right? Like they see a car, it's the same, similar form factor to what they're used to. They open the door, they sit in the back seat. Okay, there is a button that I have to press to say go, but it's pretty similar to what I'm used to in my past. I don't have to learn a new behavior. I don't have to change something that I'm used to. So that was our first learning. And then also, they actually really don't care too much that it's autonomous. When I'm in the car, I'm quite curious and enthusiastic about the technology and wanna tell them about, I don't know, LiDAR and deep learning and perception. And they just don't wanna hear any of that stuff. And it kind of dawned on me that the reason that is is because what they, senior citizens, have witnessed over their lifetimes is far more dramatic than I have, right? Like our oldest passenger was 93. And she told me a story about how when she was very young, she remembers literally moving on an almost daily basis in a horse and cart. So when you talk about self-driving cars to those folks, they just couldn't care less, because between that period and today, they've seen the birth of flight, planes everywhere. They've seen car proliferation. They've seen scooters now. They've seen all these crazy subway systems. So a self-driving car to them is like, oh, that's cool. I just want it to move me. So that's our biggest learning there. The second question was computer vision, what needs to happen between now and level four? Yeah, so I think the holy grail, right? 
So if you had perfect perception, self-driving cars are solved. If we knew every object that was on the road, in and around us within a reasonable distance, self-driving cars are solved. False positives are accepted today, which I think is good, but you really wanna minimize false negatives, right? You want zero false negatives in the world. And I think that's why we still have a tiny bit of work to do, because when you think about the reason for a test driver being in the vehicle, well, perception feeds everything downstream, right? So if you miss an object, misidentify an object, any of that sort of stuff, then that effect causes the whole stack downstream to become quite chaotic. That's why I'm excited about all those networks that I talked about. One of the other things we believe helps us minimize false negatives down to near non-existence is that we band together multiple networks. So we don't just rely on a single layer of perception. We say different networks have different strengths. For example, VoxelNet is particularly good at pedestrians, but PIXOR is not so great at pedestrians 'cause it's from a bird's eye view, where pedestrians are quite thin and whatnot. So let's band those two networks together, and let's also band together some more traditional computer vision algorithms that may not be processed on the entire 360 scan, but may be processed on a small sample, maybe at the front of the vehicle, for example. So there's just lots of little bits and pieces like that to go through to minimize the worst case scenario, which is a false negative. But it's clear when you see Waymo and whatnot that they feel very, very, very close to that sort of state. - You mentioned that weather was one of the main reasons this was a great place to start. Can you talk about hurricanes? - Yes, it was funny. I got a question recently from Alex Roy. Me and Lex were just talking about, okay, in the event of a hurricane, right? 
Let's not talk about the technology for a second, but in the event of a hurricane, we've all seen those pictures of people getting on the freeways and trying to get out of the path of the hurricane, right? How is that gonna work in a world where self-driving cars are everywhere and personally driven vehicles are maybe more of the smaller share? I don't quite have an answer to that yet, but I think it's an interesting kind of thought problem. From a technology perspective, the really important part of weather for us is remote operation. So every one of our vehicles has a cellular connection, right? And each of those vehicles is connected to a remote operator that's sat in somewhat close proximity to that vehicle. And that remote operator has a few jobs. One is to just ensure the safe operation of the vehicle, make sure that that vehicle is doing as it's intended to do, all those good things. But another is to make sure that the operational domain that we are currently operating in is the one that it's designed for. So all these different camera feeds are being live streamed to this remote operator. And if there is a sudden downpour of rain, that remote operator has the ability to bring that vehicle to a safe stop until that rain shower disappears, or hurricane, whatever it may be. But there are companies, I was pitched recently by a company that's building weather forecasting on a scale that is not really used today, really microclimate forecasting. So thinking about just this small subsection of The Villages, predicting and understanding exact weather within those regions, and then having webhooks to tell us, Voyage, that that's about to happen. So there's a lot of cool stuff happening there, but remote operators are currently kind of the eyes and ears of our cars to prevent that sort of issue. - So please give Oliver a big hand. - Thank you very much. Love you guys. 
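The multi-network "band them together" idea from the Q&A above can be sketched as a simple union-merge of detections. This is a toy illustration with hypothetical 2D boxes, not Voyage's actual fusion logic: keep everything detector A reports, and add any box from detector B that doesn't overlap an A box, trading extra false positives for fewer false negatives.

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def union_detections(dets_a: list, dets_b: list, iou_thresh: float = 0.5) -> list:
    """Union-merge two detectors' boxes: keep all of A, plus any B box
    that no A box already covers. Biased against false negatives."""
    merged = list(dets_a)
    for b in dets_b:
        if all(iou(a, b) < iou_thresh for a in dets_a):
            merged.append(b)  # only detector B saw this object: keep it
    return merged

# Hypothetical scene: detector A sees only the car; detector B sees the
# car AND a thin pedestrian that A missed. The union recovers both.
a = [(0, 0, 4, 2)]                     # car
b = [(0.1, 0, 4, 2), (6, 0, 6.5, 2)]   # car (duplicate) + pedestrian
print(union_detections(a, b))          # two objects survive the merge
```

In practice each detector's confidence scores and classes would matter too; the sketch only shows the union principle behind layering networks with complementary strengths.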
(audience applauding) (upbeat music)