Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control


Abstract

Inefficient traffic signal plans cause traffic congestion in many urban areas. In recent years, many deep reinforcement learning (RL) methods have been proposed to control traffic signals in real time by interacting with the environment. However, most existing state-of-the-art RL methods use complex state definitions and reward functions and/or neglect real-world constraints such as cyclic phase order and minimum/maximum durations for each traffic phase. These issues make existing methods infeasible for real-world deployment. In this paper, we propose an RL-based multi-intersection traffic light control model with a simple yet effective combination of state, reward, and action definitions. The proposed model uses a novel pressure method called Biased Pressure (BP) and a state-of-the-art advantage actor-critic learning mechanism. Because our state, reward, and action definitions are decentralized, the model scales to many intersections. We compare the proposed method with related methods on both synthetic and real-world datasets. Experimental results show that our method outperforms existing cyclic phase control methods by a significant margin in terms of throughput and average travel time. Moreover, we conduct ablation studies to justify the superiority of the BP method over existing pressure methods.
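The abstract does not give the exact Biased Pressure formula, but it names two concrete ingredients: a pressure-style signal (in standard max-pressure control, the queue length upstream of a phase minus the queue length downstream) and the real-world constraints of fixed cyclic phase order and minimum/maximum green durations. The sketch below is illustrative only, not the authors' implementation: `phase_pressure` is the standard (unbiased) pressure, and `CyclicController` is a hypothetical wrapper showing how an RL "extend or not" action can be constrained to respect cycle order and duration limits.

```python
def phase_pressure(incoming_queues, outgoing_queues):
    """Standard pressure of a phase: upstream queued vehicles minus
    downstream queued vehicles (the quantity BP is presumably biasing)."""
    return sum(incoming_queues) - sum(outgoing_queues)


class CyclicController:
    """Hypothetical wrapper enforcing the constraints the abstract names:
    fixed cyclic phase order and a min/max green duration per phase."""

    def __init__(self, num_phases, min_dur=5, max_dur=60):
        self.num_phases = num_phases
        self.min_dur = min_dur
        self.max_dur = max_dur
        self.phase = 0      # current green phase index
        self.elapsed = 0    # time steps spent in the current phase

    def step(self, extend):
        """`extend` is the (binary) RL action: keep the current phase or
        advance. Constraints override the action when needed, and phases
        always advance in cyclic order -- never by arbitrary jumps."""
        self.elapsed += 1
        if self.elapsed < self.min_dur:
            return self.phase  # below minimum green: must keep phase
        if not extend or self.elapsed >= self.max_dur:
            self.phase = (self.phase + 1) % self.num_phases  # cyclic order
            self.elapsed = 0
        return self.phase
```

A decentralized deployment, as described in the abstract, would run one such controller and policy per intersection, each observing only its local queues.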

APA

Ibrokhimov, B., Kim, Y. J., & Kang, S. (2022). Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control. Sensors, 22(7). https://doi.org/10.3390/s22072818
