Intelligent Ramp Control for Incident Response Using Dyna-Q Architecture

Abstract

Reinforcement learning (RL) has shown great potential for motorway ramp control, especially under congestion caused by incidents. However, existing applications are limited to single-agent tasks and are based on Q-learning, which has inherent drawbacks when dealing with coordinated ramp control problems. To address these problems, a Dyna-Q based multiagent reinforcement learning (MARL) system named Dyna-MARL has been developed in this paper. Dyna-Q is an extension of Q-learning that combines model-free and model-based methods to obtain the benefits of both. The performance of Dyna-MARL is tested on a simulated motorway segment in the UK using real traffic data collected during AM peak hours. Compared with isolated RL and noncontrolled situations, the test results show that Dyna-MARL achieves superior performance in improving traffic operation: it increases total throughput and reduces total travel time and CO2 emissions. Moreover, with a suitable coordination strategy, Dyna-MARL can maintain a highly equitable motorway system by balancing the travel times of road users entering from different on-ramps.
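For readers unfamiliar with the Dyna-Q architecture mentioned above, the following is a minimal, generic sketch of tabular Dyna-Q, not the paper's Dyna-MARL ramp-control implementation. The environment interface (env_step), state and action sets, and all hyperparameter values are illustrative assumptions.

    # Generic Dyna-Q sketch (illustrative; not the paper's Dyna-MARL system).
    # env_step(s, a) is an assumed placeholder returning (reward, next_state, done).
    import random
    from collections import defaultdict

    def dyna_q(env_step, states, actions, episodes=100, alpha=0.1,
               gamma=0.95, epsilon=0.1, planning_steps=10, max_steps=200):
        Q = defaultdict(float)   # Q[(state, action)] -> estimated action value
        model = {}               # learned model: (state, action) -> (reward, next_state)

        for _ in range(episodes):
            s = random.choice(states)          # assumed: episodes may start in any state
            for _ in range(max_steps):
                # epsilon-greedy action selection
                if random.random() < epsilon:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda x: Q[(s, x)])

                # model-free part: learn from real experience (standard Q-learning update)
                r, s_next, done = env_step(s, a)
                best_next = max(Q[(s_next, b)] for b in actions)
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

                # model-based part: record the observed transition in the model
                model[(s, a)] = (r, s_next)

                # planning: replay simulated experience sampled from the learned model
                for _ in range(planning_steps):
                    (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
                    p_best = max(Q[(ps_next, b)] for b in actions)
                    Q[(ps, pa)] += alpha * (pr + gamma * p_best - Q[(ps, pa)])

                if done:
                    break
                s = s_next
        return Q

The planning loop is what distinguishes Dyna-Q from plain Q-learning: each real transition is followed by several updates using transitions resampled from the learned model, which speeds up value propagation without additional interaction with the environment.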

Citation (APA)

Lu, C., Zhao, Y., & Gong, J. (2015). Intelligent Ramp Control for Incident Response Using Dyna-Q Architecture. Mathematical Problems in Engineering, 2015. https://doi.org/10.1155/2015/896943
