Deep reinforcement learning based voltage control revisited

Abstract

Deep Reinforcement Learning (DRL) has shown promise for voltage control in power systems due to its speed and model-free nature. However, learning optimal control policies through trial and error on a real grid is infeasible due to the mission-critical nature of power systems. Instead, DRL agents are typically trained on a simulator, which may not accurately represent the real grid. This discrepancy can lead to suboptimal control policies and raises concerns for power system operators. In this paper, we revisit the problem of RL-based voltage control and investigate how model inaccuracies affect the performance of the DRL agent. Extensive numerical experiments are conducted to quantify the impact of model inaccuracies on learning outcomes. In particular, we focus on techniques that enable the DRL agent to learn robust policies that still perform well in the presence of model errors. Furthermore, the impact of the agent's decisions on the overall system loss is analyzed to provide additional insight into the control problem. This work aims to address the concerns of power system operators and make DRL-based voltage control more practical and reliable.
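To make the simulator-mismatch issue concrete, the toy sketch below (not the paper's method; the linearized voltage model, the controller, and all function names are illustrative assumptions) tunes a simple reactive-power controller on a nominal line reactance and then evaluates it with the true reactance randomly perturbed, mimicking a policy trained on an inaccurate simulator and deployed on the real grid:

```python
import random

def simulate_voltage(q_inject, x_line, v_base=1.0):
    # Toy linearized model (assumption): bus voltage rises in proportion
    # to the reactive power injected through line reactance x_line.
    return v_base + x_line * q_inject

def evaluate(gain, x_nominal=0.1, q_disturb=0.2, error=0.5,
             n_trials=200, seed=0):
    """Average |V - 1.0| p.u. after one proportional control step.

    The controller gain is fixed (as if tuned on the nominal model),
    while the true reactance is perturbed by up to +/- `error` per trial,
    standing in for simulator-vs-grid model mismatch.
    """
    rng = random.Random(seed)
    total_dev = 0.0
    for _ in range(n_trials):
        x_true = x_nominal * (1.0 + rng.uniform(-error, error))
        v0 = simulate_voltage(q_disturb, x_true)        # disturbed voltage
        q_ctrl = gain * (1.0 - v0)                      # proportional action
        v1 = simulate_voltage(q_disturb + q_ctrl, x_true)
        total_dev += abs(v1 - 1.0)
    return total_dev / n_trials

# Gain 1/x_nominal = 10 is exact on the nominal model (zero residual
# deviation when error=0) but leaves a residual error under mismatch.
nominal_dev = evaluate(10.0, error=0.0)
mismatch_dev = evaluate(10.0, error=0.5)
no_control_dev = evaluate(0.0, error=0.5)
```

Even in this linear toy case, a controller that is exact on the nominal model degrades once the true parameters deviate, which is the effect the paper quantifies for DRL policies and motivates training for robustness to model errors.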

Nematshahi, S., Shi, D., Wang, F., Yan, B., & Nair, A. (2023). Deep reinforcement learning based voltage control revisited. IET Generation, Transmission and Distribution, 17(21), 4826–4835. https://doi.org/10.1049/gtd2.13001
