Human-like Decision Making for Autonomous Vehicles at the Intersection Using Inverse Reinforcement Learning

Citations: 9
Readers (Mendeley): 31

Abstract

With the rapid development of autonomous driving technology, self-driven and human-driven vehicles will share the road, and complex information exchange among vehicles will be required. Autonomous vehicles therefore need to behave as similarly to human drivers as possible, so that their behavior can be readily understood by the drivers of surrounding vehicles and matches human expectations of driving behavior. To this end, this paper uses inverse reinforcement learning to study the evaluation function of human drivers, so that the learned behavior better imitates that of human drivers. In addition, the paper proposes a semi-Markov model to infer the intentions of surrounding relevant vehicles, classifying them as defensive or cooperative, which allows the ego vehicle to respond appropriately to different types of driving scenarios.
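The abstract does not spell out the learning procedure, but feature-matching inverse reinforcement learning is one common way to recover such an evaluation function from human demonstrations. The sketch below assumes a linear reward over hand-crafted features (speed, acceleration magnitude, gap to the conflicting vehicle) and a hypothetical `rollout_fn` that re-plans trajectories under the current weights; it illustrates the general technique under those assumptions and is not the authors' implementation.

```python
import numpy as np

def trajectory_features(traj):
    """Average hand-crafted features over one trajectory.

    `traj` is a list of state dicts with hypothetical keys
    'speed', 'accel', and 'gap' (distance to the conflicting vehicle).
    """
    feats = np.array([[s["speed"], abs(s["accel"]), s["gap"]] for s in traj])
    return feats.mean(axis=0)

def learn_reward_weights(human_trajs, rollout_fn, lr=0.1, iters=50):
    """Feature-matching update for a linear reward R(s) = w . phi(s).

    `rollout_fn(w)` is a hypothetical callable that plans or simulates
    trajectories under the current weights and returns them, so the
    policy's feature expectations are refreshed each iteration.
    """
    mu_expert = np.mean([trajectory_features(t) for t in human_trajs], axis=0)
    w = np.zeros_like(mu_expert)
    for _ in range(iters):
        policy_trajs = rollout_fn(w)
        mu_policy = np.mean([trajectory_features(t) for t in policy_trajs], axis=0)
        # Move weights toward the expert's feature expectations and away
        # from those of the current policy (feature-matching gradient).
        w += lr * (mu_expert - mu_policy)
    return w
```

In practice the choice of features and of the planner behind `rollout_fn` dominates how human-like the resulting behavior looks; the paper's specific feature set and optimization are described in the full text, not the abstract.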

Citation (APA)

Wu, Z., Qu, F., Yang, L., & Gong, J. (2022). Human-like Decision Making for Autonomous Vehicles at the Intersection Using Inverse Reinforcement Learning. Sensors, 22(12). https://doi.org/10.3390/s22124500
