LVD-NMPC: A learning-based vision dynamics approach to nonlinear model predictive control for autonomous vehicles


Abstract

In this article, we introduce a learning-based vision dynamics approach to nonlinear model predictive control (NMPC) for autonomous vehicles, coined LVD-NMPC. LVD-NMPC combines an a-priori process model with a learned vision dynamics model that computes the dynamics of the driving scene, the controlled system’s desired state trajectory, and the weighting gains of the quadratic cost function optimized by a constrained predictive controller. The vision dynamics model is a deep neural network that estimates the dynamics of the image scene from historic sequences of sensory observations and vehicle states, integrated through an augmented memory component. The network is trained with deep Q-learning and, once trained, can also compute the desired trajectory of the vehicle. We evaluate LVD-NMPC against a baseline dynamic window approach (DWA) path planner executed with standard NMPC, and against the PilotNet neural network. Performance is measured in our GridSim simulation environment, on a real-world 1:8 scaled model car, on a full-size autonomous test vehicle, and on the nuScenes computer vision dataset.
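To make the role of the learned weighting gains concrete, the following is a minimal sketch of how a quadratic NMPC tracking cost with per-step, scene-dependent gains might be evaluated. All names (`quadratic_cost`, `q_gains`, etc.) are illustrative assumptions, not the authors' implementation; in LVD-NMPC the per-step gains would come from the vision dynamics network, which here is simply a supplied list.

```python
def quadratic_cost(states, refs, controls, q_gains, r_gain):
    """Evaluate J = sum_k q_k * ||x_k - xref_k||^2 + r * ||u_k||^2.

    states, refs : lists of state vectors over the prediction horizon
    controls     : list of control vectors over the horizon
    q_gains      : per-step tracking weights, standing in for the gains
                   predicted by a learned vision dynamics model
    r_gain       : scalar control-effort weight
    """
    cost = 0.0
    for x, xref, u, q in zip(states, refs, controls, q_gains):
        # Quadratic penalty on deviation from the desired trajectory.
        tracking = sum((xi - ri) ** 2 for xi, ri in zip(x, xref))
        # Quadratic penalty on control effort.
        effort = sum(ui ** 2 for ui in u)
        cost += q * tracking + r_gain * effort
    return cost


# Usage: a two-step horizon where the second step is weighted more heavily,
# e.g. because the scene model flags it as more critical.
J = quadratic_cost(
    states=[[0.0, 0.0], [1.0, 0.0]],
    refs=[[0.0, 0.0], [1.0, 1.0]],
    controls=[[0.5], [0.5]],
    q_gains=[1.0, 2.0],
    r_gain=0.1,
)
```

A constrained predictive controller would minimize this cost over the control sequence subject to the a-priori process model; the sketch only shows the cost evaluation itself.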

Citation (APA)

Grigorescu, S., Ginerica, C., Zaha, M., Macesanu, G., & Trasnea, B. (2021). LVD-NMPC: A learning-based vision dynamics approach to nonlinear model predictive control for autonomous vehicles. International Journal of Advanced Robotic Systems, 18(3). https://doi.org/10.1177/17298814211019544
