ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst

178 citations · 677 Mendeley readers

Abstract

Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert’s driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress – the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a real car at our test facility.
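The abstract's core idea — perturb the expert's trajectory to synthesize near-collisions and off-road excursions, then train with an imitation loss augmented by penalty terms — can be sketched as follows. This is a simplified illustration, not the paper's implementation: the actual method perturbs the agent pose with realistic kinematic constraints and computes losses on rendered top-down images, whereas here we just laterally displace one waypoint, re-fit a smooth curve, and combine scalar loss terms with illustrative weights.

```python
import numpy as np

def perturb_trajectory(traj, idx, lateral_offset, degree=3):
    """Displace one waypoint laterally and re-fit a smooth trajectory.

    A simplified stand-in for the paper's trajectory perturbation:
    the displaced-then-smoothed path synthesizes an off-nominal
    situation (e.g. drifting toward the road edge) that never
    appears in expert demonstrations.
    """
    perturbed = traj.copy()
    perturbed[idx, 1] += lateral_offset  # shift y at the chosen waypoint
    # Fit a low-degree polynomial through the perturbed points so the
    # synthesized trajectory stays smooth and kinematically plausible.
    coeffs = np.polyfit(perturbed[:, 0], perturbed[:, 1], degree)
    smooth_y = np.polyval(coeffs, perturbed[:, 0])
    return np.stack([perturbed[:, 0], smooth_y], axis=1)

def total_loss(pred, expert, collision_prob, offroad_prob,
               w_imit=1.0, w_collision=1.0, w_offroad=1.0):
    """Imitation loss plus penalties for undesirable events.

    The weights and the scalar event terms are illustrative
    assumptions, not the paper's exact formulation.
    """
    imitation = np.mean((pred - expert) ** 2)
    return (w_imit * imitation
            + w_collision * collision_prob
            + w_offroad * offroad_prob)
```

On the unperturbed expert trajectory the augmented loss reduces to the plain imitation loss; on a perturbed trajectory the collision and off-road terms supply the extra learning signal the abstract describes.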

Citation (APA)
Bansal, M., Krizhevsky, A., & Ogale, A. (2019). ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst. In Robotics: Science and Systems. MIT Press Journals. https://doi.org/10.15607/RSS.2019.XV.031
