
Driver Gaze Region Classification in a Tesla



00:00:00.000 | We're driving around MIT campus today in this little bit of rain,
00:00:03.480 | in a Tesla we've instrumented with six cameras plus other sensors,
00:00:07.880 | all going into a single-board computer.
00:00:10.200 | The reason we're doing that is we're going to give you a demo of driver gaze classification.
00:00:17.440 | One of the things our group at the MIT Age Lab is interested in is developing systems for driver state detection.
00:00:26.880 | The first part is perception, control, and planning,
00:00:30.800 | which comes from the external sensors: video cameras, radar, and sometimes LiDAR.
00:00:36.200 | And then there are inward-facing sensors, like the cameras we have in here, that detect the state of the driver.
00:00:42.440 | This is an important component because the car that's driving itself needs to know when the driver is able to take control back and vice versa.
00:00:51.040 | What you see now is a visualization of some of the synchronized data we're capturing, both for real-time detection and post-processing analysis.
00:00:59.520 | Top left is the video of the face. Bottom left is the video of the hands, lap and the instrument cluster.
00:01:06.120 | Bottom middle is the cropped video of the center stack display.
00:01:10.720 | Bottom right is the fisheye video of the instrument cluster.
00:01:14.400 | And the top right is a video of the forward roadway.
00:01:18.640 | And then there are two things being detected. In the top middle is the visualization of the facial landmarks used in the gaze classification.
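As a rough illustration of the first detection, per-frame facial landmarks could be extracted with an off-the-shelf detector; the sketch below assumes dlib and its 68-point predictor model, since the transcript does not say which landmark pipeline is actually used.

```python
# Minimal sketch: per-frame facial landmark extraction with dlib.
# The 68-point predictor file is an assumption; the transcript does not
# specify which landmark detector the AgeLab pipeline uses.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(frame_bgr):
    """Return (x, y) landmark points for the first detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)          # upsample once to catch smaller faces
    if not faces:
        return None
    shape = predictor(gray, faces[0])  # 68 landmark points
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```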
00:01:27.760 | In the bottom right are annotations of the instrument cluster video showing the status of the autopilot based on the automatically detected autopilot icon.
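One plausible way to detect the Autopilot icon automatically is normalized template matching on the instrument-cluster crop; the icon template, crop, and threshold below are assumptions, since the transcript only states that the icon is detected.

```python
# Minimal sketch: Autopilot icon detection in the instrument-cluster video
# via OpenCV template matching. Template image and 0.8 threshold are assumed.
import cv2

ICON_TEMPLATE = cv2.imread("autopilot_icon.png", cv2.IMREAD_GRAYSCALE)

def autopilot_engaged(cluster_frame_bgr, threshold=0.8):
    """Return True if the Autopilot icon appears in this cluster frame."""
    gray = cv2.cvtColor(cluster_frame_bgr, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, ICON_TEMPLATE, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val >= threshold
```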
00:01:40.200 | We can think of these two detections as classifying the state of the human and the state of the machine.
00:01:45.760 | That allows us to study the handover of control from the human to the machine and back.
00:01:53.040 | One of the key novel aspects of our approach is that instead of looking at gaze estimation as a geometric problem,
00:02:00.040 | we treat it as a supervised learning problem, classifying gaze into one of six regions: road, rearview mirror, left, right, center stack, and instrument cluster.
00:02:12.360 | This approach allows us to use large semi-automatically annotated data sets to generalize over the edge cases that pop up in the wild.
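A minimal sketch of that supervised formulation, assuming landmark coordinates as features and a random forest as the classifier (both illustrative choices; the transcript does not name the model or features used):

```python
# Minimal sketch: gaze region classification as supervised learning over
# facial-landmark features. The random forest and the normalization below
# are assumptions; the transcript only says gaze is classified into six
# regions using semi-automatically annotated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GAZE_REGIONS = ["road", "rearview_mirror", "left", "right",
                "center_stack", "instrument_cluster"]

def landmarks_to_features(landmarks):
    """Center and scale (x, y) landmarks so the feature vector is roughly
    invariant to face position and size in the image."""
    pts = np.asarray(landmarks, dtype=float)
    pts -= pts.mean(axis=0)
    scale = np.linalg.norm(pts, axis=1).max() or 1.0
    return (pts / scale).ravel()

def train_gaze_classifier(landmark_sequences, region_labels):
    """Fit a classifier on one annotated landmark set per frame."""
    X = np.stack([landmarks_to_features(lm) for lm in landmark_sequences])
    y = np.asarray(region_labels)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf

def classify_gaze(clf, landmarks):
    features = landmarks_to_features(landmarks).reshape(1, -1)
    return GAZE_REGIONS[clf.predict(features)[0]]
```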
00:02:21.720 | And then in addition to the data on the CAN network, there's the automated detection of automation state from the instrument cluster.
00:02:31.160 | This combination of detecting human state and machine state allows us to study the interaction between the two.
00:02:38.600 | [END]