Multi-task Learning with Future States for Vision-Based Autonomous Driving


Abstract

Human drivers consider both past and future driving environments to maintain stable control of a vehicle. To emulate this behavior, we propose a vision-based autonomous driving model, called the Future Actions and States Network (FASNet), which uses predicted future actions and generated future states in a multi-task learning manner. Future states are generated using an enhanced deep predictive-coding network and motion equations defined by the kinematic vehicle model. The final control values are determined by a weighted average of the predicted actions, yielding a stable decision. With these methods, the proposed FASNet generalizes well to unseen environments. To validate FASNet, we conducted several experiments, including ablation studies, in realistic three-dimensional simulations. FASNet achieves a higher Success Rate (SR) on the recent CARLA benchmarks under several conditions compared to state-of-the-art models.
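The abstract does not spell out the motion equations or the action-fusion rule, so the sketch below is only an illustration of the two ingredients it names: a standard kinematic bicycle model for propagating future vehicle states, and a weighted average over a horizon of predicted actions. The function names, the wheelbase default, and the exponential decay weighting are hypothetical assumptions, not details taken from the paper.

```python
import numpy as np

def kinematic_bicycle_step(x, y, yaw, v, steer, accel, dt, wheelbase=2.5):
    """One step of a standard kinematic bicycle model (assumed, not from the paper).

    State: position (x, y) [m], heading yaw [rad], speed v [m/s].
    Controls: front steering angle steer [rad], longitudinal accel [m/s^2].
    """
    x += v * np.cos(yaw) * dt
    y += v * np.sin(yaw) * dt
    yaw += (v / wheelbase) * np.tan(steer) * dt
    v += accel * dt
    return x, y, yaw, v

def fuse_actions(predicted_actions, weights=None):
    """Weighted average of per-horizon predicted actions into one control command.

    predicted_actions: array of shape (T, action_dim), e.g. rows of [steer, throttle].
    weights: length-T weights; here we default to an exponential decay that
    trusts near-term predictions more (an illustrative choice).
    """
    predicted_actions = np.asarray(predicted_actions)
    T = len(predicted_actions)
    if weights is None:
        weights = np.exp(-0.5 * np.arange(T))
    weights = np.asarray(weights) / np.sum(weights)
    return weights @ predicted_actions

# Example: fuse three future [steer, throttle] predictions into one command.
actions = [[0.10, 0.60], [0.12, 0.55], [0.15, 0.50]]
print(fuse_actions(actions))
```

Under this reading, the network predicts a short horizon of future actions, the kinematic model turns those actions into generated future states for the auxiliary prediction tasks, and the deployed control is the weighted average, which smooths out single-step prediction noise.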

Citation (APA)

Kim, I., Lee, H., Lee, J., Lee, E., & Kim, D. (2021). Multi-task Learning with Future States for Vision-Based Autonomous Driving. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12624 LNCS, pp. 654–669). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-69535-4_40
