This paper proposes a reinforcement learning-based approach for distribution network reconfiguration (DNR) to enhance the resilience of the electric power supply. Resilience enhancements usually require solving large-scale stochastic optimization problems that are computationally expensive and sometimes infeasible. The strong performance of reinforcement learning techniques has encouraged their adoption in various power system control studies, particularly real-time resilience applications. In this paper, a single-agent framework is developed using an Actor-Critic algorithm (ACA) to determine the statuses of tie-switches in a distribution feeder impacted by an extreme weather event. The proposed approach provides a fast-acting control algorithm that reconfigures the feeder topology to reduce or even avoid load shedding. The problem is formulated as a discrete Markov decision process in which a system state captures the system topology and its operational characteristics. An action opens or closes a specific set of tie-switches, after which a reward is calculated to evaluate the feasibility and benefit of that action. The iterative Markov process is used to train the proposed ACA under diverse failure scenarios and is demonstrated on the 33-node distribution feeder system. Results show the capability of the proposed ACA to determine proper tie-switch actions with accuracy exceeding 93%.
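The MDP formulation described above (state = switch statuses plus operating conditions, action = open/close a tie-switch, reward = penalty on load shedding) can be sketched with a minimal tabular actor-critic loop. This is an illustrative toy, not the authors' implementation: the 5-switch feeder, the toggle action, and the stand-in reward (which merely favors states with exactly two closed tie-switches, in place of a real power-flow-based load-shedding penalty) are all assumptions for the sake of a runnable example.

```python
import math
import random

# Hypothetical toy feeder: state is a tuple of 5 tie-switch statuses (0=open, 1=closed).
N_SWITCHES = 5
ACTIONS = list(range(N_SWITCHES))   # action i toggles tie-switch i


def reward(state):
    # Stand-in for a power-flow-based load-shedding penalty (assumption):
    # reward is highest (0) when exactly two tie-switches are closed.
    return -abs(sum(state) - 2)


def step(state, action):
    s = list(state)
    s[action] ^= 1                  # open/close the chosen tie-switch
    s = tuple(s)
    return s, reward(s)


# Tabular actor-critic: per-state action preferences (actor) and values (critic).
logits = {}                         # state -> list of action preferences
values = {}                         # state -> critic estimate V(s)


def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]


def train(episodes=2000, alpha=0.1, beta=0.1, gamma=0.95, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        # Random initial topology, e.g. post-fault switch statuses.
        state = tuple(rng.randint(0, 1) for _ in range(N_SWITCHES))
        for _ in range(10):         # short episode horizon
            prefs = logits.setdefault(state, [0.0] * N_SWITCHES)
            probs = softmax(prefs)
            action = rng.choices(ACTIONS, weights=probs)[0]
            nxt, r = step(state, action)
            # TD error from the critic drives both the critic and actor updates.
            delta = r + gamma * values.get(nxt, 0.0) - values.get(state, 0.0)
            values[state] = values.get(state, 0.0) + beta * delta
            for a in ACTIONS:       # policy-gradient (actor) update
                grad = (1.0 if a == action else 0.0) - probs[a]
                prefs[a] += alpha * delta * grad
            state = nxt


train()
```

In the paper's setting, `reward` would instead run a distribution power flow on the reconfigured topology and penalize shed load and constraint violations, and the state would also encode operational quantities such as loading and outage locations.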
Citation:
Abdelmalak, M., Gautam, M., Morash, S., Snyder, A. F., Hotchkiss, E., & Benidris, M. (2022). Network Reconfiguration for Enhanced Operational Resilience using Reinforcement Learning. In SEST 2022 - 5th International Conference on Smart Energy Systems and Technologies. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/SEST53650.2022.9898469