Abstract
Building a human-like car-following model that can accurately simulate drivers' car-following behaviors is helpful to the development of driving assistance systems and autonomous driving. Recent studies have shown the advantages of applying reinforcement learning methods to car-following modeling. However, a key difficulty remains: the reward function is hard to specify manually. This paper proposes a novel car-following model based on generative adversarial imitation learning. The proposed model can learn the driving strategy from drivers' demonstrations without a specified reward. Gated recurrent units were incorporated into the actor-critic network to enable the model to use historical information. Drivers' car-following data collected by a test vehicle equipped with a millimeter-wave radar and a controller area network (CAN) acquisition card were used. The participants were divided into two driving styles by K-means clustering, with time headway and time headway when braking used as input features. Adopting five-fold cross-validation for model evaluation, the results show that the proposed model reproduces drivers' car-following trajectories and driving styles more accurately than the intelligent driver model and the recurrent neural network-based model, with the lowest average spacing error (19.40%) and speed validation error (5.57%), as well as the lowest Kullback-Leibler divergences of the two indicators used for driving style clustering.
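The driving-style clustering step described above can be sketched with a minimal K-means implementation. This is an illustration only, not the authors' code: the feature values below are fabricated placeholders for (time headway, time headway when braking) pairs in seconds, and the paper gives no implementation details beyond K-means with these two features and k = 2.

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Minimal K-means over small 2-D feature vectors."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute each centroid as its cluster mean; keep it if the cluster emptied.
        new = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Hypothetical features: (time headway, time headway when braking), seconds.
drivers = [(1.2, 1.0), (1.3, 1.1), (1.1, 0.9),   # shorter headways
           (2.4, 2.2), (2.6, 2.5), (2.3, 2.1)]   # longer headways
centroids, clusters = kmeans(drivers, k=2)
```

With well-separated groups like these, the two clusters recovered correspond to the shorter-headway and longer-headway driving styles regardless of initialization.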
Zhou, Y., Fu, R., Wang, C., & Zhang, R. (2020). Modeling car-following behaviors and driving styles with generative adversarial imitation learning. Sensors (Switzerland), 20(18), 1–20. https://doi.org/10.3390/s20185034