Cooperative Traffic Signal Control Based on Multi-agent Reinforcement Learning


Abstract

This paper proposes a traffic signal cooperative control algorithm based on multi-agent reinforcement learning (MARL) and designs an edge computing framework for the traffic signal control scenario. Introducing edge computing into cooperative traffic signal control minimizes response time and reduces network load. We abstract the traffic signal control problem as a Markov decision process (MDP) and discretize the traffic state via feature extraction to avoid the curse of dimensionality. We propose fusing multi-agent reinforcement learning with a coordination mechanism through collaborative Q-values: the action selection strategy of an intersection depends not only on its own local reward but also on the impact of other intersections. Unlike approaches that consider only adjacent intersections, the algorithm combines static distance and dynamic traffic flow, accounting for cooperative relationships between both neighbor and non-neighbor nodes. Finally, simulation experiments on SUMO show that the algorithm controls traffic signals effectively.
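The coordination idea described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' published code: each intersection agent keeps a local tabular Q-function, and greedy action selection ranks actions by a collaborative Q-value that adds other agents' Q-values weighted by a coefficient derived from static distance and dynamic traffic flow. The function names, the weight formula, and the toy reward are all assumptions made for illustration.

```python
import random

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed values)
ACTIONS = [0, 1]         # e.g. 0 = keep current phase, 1 = switch phase

def cooperation_weight(distance, flow, beta=0.5):
    """Assumed weight: closer intersections with heavier flow matter more."""
    return beta * flow / (1.0 + distance)

def select_action(agent, state, q_tables, weights, epsilon=0.1):
    """Epsilon-greedy over the collaborative Q-value:
    Q_coop(s, a) = Q_agent(s, a) + sum over others j of w_j * Q_j(s, a)."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    def q_coop(a):
        q = q_tables[agent].get((state, a), 0.0)
        for other, w in weights[agent].items():
            q += w * q_tables[other].get((state, a), 0.0)
        return q
    return max(ACTIONS, key=q_coop)

def update(agent, state, action, reward, next_state, q_tables):
    """Standard Q-learning update on the agent's own local table."""
    q = q_tables[agent]
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

random.seed(0)
q_tables = {"A": {}, "B": {}}
# Hypothetical layout: A and B are 200 m apart; flow values are dynamic
# traffic-volume estimates on the connecting links (made-up numbers).
weights = {"A": {"B": cooperation_weight(distance=200, flow=0.8)},
           "B": {"A": cooperation_weight(distance=200, flow=0.3)}}
for _ in range(500):
    s = "queue_high"  # discretized traffic state from feature extraction
    for agent in ("A", "B"):
        a = select_action(agent, s, q_tables, weights)
        r = 1.0 if a == 1 else -0.5  # toy reward: switching clears the queue
        update(agent, s, a, r, "queue_low", q_tables)
```

After training, each agent's Q-table favors the phase switch in the congested state, and the cooperation weights bias each agent toward actions its neighbors also rate highly, which is the qualitative behavior the collaborative Q-value mechanism aims for.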

Citation (APA)

Gao, R., Liu, Z., Li, J., & Yuan, Q. (2020). Cooperative Traffic Signal Control Based on Multi-agent Reinforcement Learning. In Communications in Computer and Information Science (Vol. 1156 CCIS, pp. 787–793). Springer. https://doi.org/10.1007/978-981-15-2777-7_65
