Reinforcement learning is one of the most widely used methods for traffic signal control, but it suffers from state information explosion, inadequate adaptability to special scenarios, and low security. This paper therefore proposes EL_D3QN, a traffic signal control method that combines an efficient channel attention mechanism (ECA-NET), long short-term memory (LSTM), and a double dueling deep Q-network (D3QN). First, the ECA-NET and LSTM modules are included to reduce the design complexity of the state space, improve the model's robustness, and adapt to various emergent scenarios. As a result, the cumulative reward improves by 27.9%, while the average queue length, average waiting time, and CO2 emissions decrease by 15.8%, 22.6%, and 4.1%, respectively. Next, a dynamic phase interval is employed so that the model can handle a wider range of traffic conditions; the cumulative reward increases by 34.2%, and the average queue length, average waiting time, and CO2 emissions fall by 19.8%, 30.1%, and 5.6%. Finally, experiments are carried out under various vehicle volumes and special scenarios. In a complex environment, EL_D3QN reduces the average queue length, average waiting time, and CO2 emissions by at least 13.2%, 20.2%, and 3.2% compared with four existing methods. EL_D3QN also shows good generalization and control performance under both unequally and equally stable traffic scenarios, and it remains robust even under unusual events such as a traffic surge.
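The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch, not the authors' implementation, of how the named components could be wired together: an ECA-style channel attention block and an LSTM encode the traffic state, and dueling value/advantage heads produce per-phase Q-values (the "double" part of D3QN concerns the training update rather than the network itself). The state layout (4 feature channels over 16 lane cells), hidden sizes, and 4-phase action space are illustrative assumptions.

```python
# Hypothetical sketch of an EL_D3QN-style network; layer sizes and state shape are assumed.
import torch
import torch.nn as nn


class ECABlock(nn.Module):
    """Efficient channel attention: per-channel weights from a 1-D conv over
    globally pooled channel descriptors (in the spirit of ECA-Net)."""
    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                        # x: (batch, channels, length)
        w = self.pool(x)                         # (batch, channels, 1)
        w = self.conv(w.transpose(1, 2))         # 1-D conv across the channel axis
        w = self.sigmoid(w.transpose(1, 2))      # (batch, channels, 1)
        return x * w                             # re-weight channels


class ELD3QN(nn.Module):
    """ECA + LSTM feature extractor with dueling value/advantage heads."""
    def __init__(self, state_channels=4, state_length=16, hidden=64, n_actions=4):
        super().__init__()
        self.eca = ECABlock(state_channels)
        self.lstm = nn.LSTM(input_size=state_channels, hidden_size=hidden, batch_first=True)
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, state):                    # state: (batch, channels, length)
        x = self.eca(state)
        x = x.transpose(1, 2)                    # LSTM expects (batch, length, channels)
        _, (h, _) = self.lstm(x)
        h = h[-1]                                # final hidden state: (batch, hidden)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)   # dueling aggregation of Q-values


if __name__ == "__main__":
    net = ELD3QN()
    q = net(torch.randn(2, 4, 16))               # two dummy intersection states
    print(q.shape)                                # torch.Size([2, 4]): one Q-value per phase
```

In a double-DQN training loop, the target for such a network would be formed by letting the online network select the next action and a separate target network evaluate it; that update rule, together with the dueling aggregation above, is what the D3QN label refers to.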
Zai, W., & Yang, D. (2023). Improved Deep Reinforcement Learning for Intelligent Traffic Signal Control Using ECA_LSTM Network. Sustainability (Switzerland), 15(18). https://doi.org/10.3390/su151813668