In recent studies, deep reinforcement learning (RL) methods have been used for the real-time control of urban drainage systems (UDSs). However, training an RL agent is computationally expensive because collecting the training data requires many simulations of a UDS model. An effective solution is to replace the original UDS model with an emulator, a simpler model that reproduces the specified outputs of the system. However, the adaptability and interpretability of existing UDS emulators still require improvement. To address these challenges, this study developed a new emulator with increased adaptability and interpretability, called the Koopman emulator, by combining the Koopman operator with deep learning. The Koopman emulator was then used in place of the UDS model in RL to establish an emulator-based RL framework for fast and efficient training. This framework was used to train emulator-based RL agents for flooding and overflow mitigation in a UDS located in Eastern China, and baseline emulators were employed for comparison. All methods were evaluated through simulations, with the following results: (a) the Koopman emulator demonstrated better emulation performance than the baseline emulators; (b) the RL agents based on the Koopman emulator achieved a control effect similar to that of the RL agents based on the storm water management model, with a faster training process and more efficient data usage; and (c) the uncertainty analysis showed that the RL agents based on the Koopman emulator exhibited acceptable robustness to different rainfall events and input perturbations.
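To illustrate the general idea of a deep-learning Koopman emulator, the sketch below shows a common formulation: an encoder lifts the UDS state (e.g., node depths and flows) into a latent space where a single linear Koopman step, driven by the control action, advances the dynamics, and a decoder maps the result back to physical variables. This is a minimal sketch, assuming PyTorch; the layer sizes, variable names, and loss weighting are illustrative assumptions and are not the specific architecture reported in the paper.

```python
# Minimal deep Koopman emulator sketch (assumed formulation, not the
# authors' exact architecture). State and action dimensions, layer sizes,
# and the loss terms are illustrative.
import torch
import torch.nn as nn

class KoopmanEmulator(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=32):
        super().__init__()
        # Nonlinear lifting of the drainage-system state.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Map the latent state back to physical quantities.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )
        # Linear (Koopman) dynamics in the lifted space; the control action
        # (e.g., pump or orifice settings) enters linearly as well.
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)
        self.B = nn.Linear(action_dim, latent_dim, bias=False)

    def forward(self, state, action):
        z = self.encoder(state)
        z_next = self.A(z) + self.B(action)   # one linear step in latent space
        return self.decoder(z_next), self.decoder(z)

def training_loss(model, state, action, next_state):
    """One-step prediction loss plus reconstruction loss for a transition batch."""
    pred_next, recon = model(state, action)
    return nn.functional.mse_loss(pred_next, next_state) + \
           nn.functional.mse_loss(recon, state)
```

Once an emulator of this kind is fitted to transitions generated by the full UDS model, its forward step can stand in for the expensive simulator call inside the RL training loop, which is the source of the speed-up described in the abstract.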
Citation:
Tian, W., Liao, Z., Zhang, Z., Wu, H., & Xin, K. (2022). Flooding and Overflow Mitigation Using Deep Reinforcement Learning Based on Koopman Operator of Urban Drainage Systems. Water Resources Research, 58(7). https://doi.org/10.1029/2021WR030939