Combining asynchronous events and traditional frames for steering angle prediction


Abstract

Advances in deep learning over the last decade, enabled by the availability of greater computing resources, have revived interest in end-to-end neural network methods for command prediction in vehicle control. Most existing frameworks in the literature use visual data from conventional video cameras to infer low-level (steering wheel angle, speed, etc.) or high-level (curvature, driving path, etc.) commands for actuation. In this paper, we propose an efficient convolutional neural network model that takes both asynchronous events from an event-based sensor and traditional frames from the same camera to predict the steering wheel angle. We show that our model outperforms many state-of-the-art deep learning approaches that use only one input type, regular frames or events, while being much more efficient.
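Before an asynchronous event stream can be fed to a convolutional network alongside regular frames, it must be converted to a dense, frame-like tensor. A common encoding (a sketch of one standard approach, not necessarily the exact representation used in this paper) is to accumulate event counts into a two-channel histogram, one channel per polarity:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate asynchronous events (x, y, timestamp, polarity)
    into a 2-channel count image: channel 0 counts positive events,
    channel 1 counts negative events.  The resulting tensor can be
    concatenated channel-wise with an RGB frame before being passed
    to a CNN.  This is one common event representation; the paper's
    exact encoding may differ.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        ch = 0 if p > 0 else 1  # route by polarity
        frame[ch, y, x] += 1.0
    return frame

# Toy event stream: (x, y, timestamp, polarity)
events = [(3, 2, 0.01, +1), (3, 2, 0.02, +1), (5, 4, 0.03, -1)]
frame = events_to_frame(events, height=6, width=8)
```

Fusing the two modalities can then be as simple as stacking the event histogram with the corresponding grayscale or RGB frame along the channel axis (e.g. `np.concatenate([rgb, frame], axis=0)`) and letting the network's first convolution mix them.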

Citation (APA)

Ly, A. O., & Akhloufi, M. A. (2020). Combining asynchronous events and traditional frames for steering angle prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12131 LNCS, pp. 244–252). Springer. https://doi.org/10.1007/978-3-030-50347-5_22
