Design and experimental validation of a cooperative adaptive cruise control system based on supervised reinforcement learning


Abstract

This paper presents a supervised reinforcement learning (SRL)-based framework for longitudinal vehicle dynamics control in a cooperative adaptive cruise control (CACC) system. A supervisor network trained on real driving data is incorporated into the actor-critic reinforcement learning approach. During SRL training, the actor and critic networks are updated under the guidance of the supervisor and a gain scheduler. As a result, the training success rate is improved, and the actor can learn driver characteristics to achieve a human-like CACC controller. The SRL-based control policy is compared with a linear controller in typical driving situations through simulation, and control policies trained by drivers with different driving styles are compared on a real driving cycle. Furthermore, the proposed control strategy is demonstrated in a real vehicle-following experiment with different time headways. The simulation and experimental results not only validate the effectiveness and adaptability of the SRL-based CACC system, but also show that it can provide natural following performance similar to human driving.
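The abstract describes an actor update guided jointly by a supervisor network and a gain scheduler that blends imitation of human driving with the reinforcement signal. A minimal sketch of that blended update is below; the state layout, gains, scheduler shape, and the placeholder critic signal are all illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Hedged sketch of the SRL idea: a linear actor is pulled toward a
# "supervisor" (standing in for a network fit to real driving data)
# and toward an RL signal, blended by a gain scheduler. All names and
# gains here are assumptions for illustration.

rng = np.random.default_rng(0)

# State: [spacing error, relative speed]; action: acceleration command.
W_actor = rng.normal(0.0, 0.1, size=2)   # linear actor weights (learned)
W_super = np.array([0.4, 0.6])           # assumed human-like supervisor

err_start = float(np.linalg.norm(W_actor - W_super))

alpha = 0.05                             # learning rate
for step in range(500):
    s = rng.normal(0.0, 1.0, size=2)     # sampled vehicle-following state
    a = float(W_actor @ s)               # actor's action
    a_sup = float(W_super @ s)           # supervisor's action

    # Gain scheduler: rely on supervision early, shift toward RL later.
    k_s = max(0.1, 1.0 - step / 400)

    # Supervised term: imitate the supervisor (LMS-style gradient).
    grad_sup = (a - a_sup) * s

    # RL term: placeholder TD-error-style signal from a quadratic stage
    # cost; in the paper this would come from a learned critic network.
    td_error = -(s @ s + 0.1 * a * a)
    grad_rl = -0.01 * td_error * s

    W_actor -= alpha * (k_s * grad_sup + (1.0 - k_s) * grad_rl)

err_end = float(np.linalg.norm(W_actor - W_super))
```

Because the scheduler keeps `k_s` high during early training, the actor first converges toward the supervisor's human-like policy before the reinforcement term gains influence, which is the mechanism the abstract credits for the improved training success rate.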

Citation (APA)

Wei, S., Zou, Y., Zhang, T., Zhang, X., & Wang, W. (2018). Design and experimental validation of a cooperative adaptive cruise control system based on supervised reinforcement learning. Applied Sciences (Switzerland), 8(7). https://doi.org/10.3390/app8071014
