Simplified Deep Reinforcement Learning Based Volt-var Control of Topologically Variable Power System


Abstract

The high penetration and uncertainty of distributed energy resources require volt-var control (VVC) to smooth voltage and var fluctuations faster. Traditional mathematical and heuristic algorithms are increasingly inadequate for this task because of their slow online calculation speed. Deep reinforcement learning (DRL) has recently been recognized as an effective alternative, as it shifts the computational burden to offline training and reduces the online calculation timescale to milliseconds. However, its slow offline training still limits its application to VVC. To overcome this issue, this paper proposes a simplified DRL method that streamlines and improves the training operations in DRL, avoiding invalid explorations and slow reward calculation. Because the DRL network parameters trained on the original topology are not applicable to new topologies, side-tuning transfer learning (TL) is introduced to reduce the number of parameters that must be updated in the TL process. Test results on the IEEE 30-bus and 118-bus systems demonstrate the correctness and rapidity of the proposed method, as well as its strong applicability to large-scale control variables.
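To illustrate the side-tuning idea mentioned in the abstract, the sketch below shows a generic side-tuning setup in PyTorch: the policy network trained on the original topology is frozen, and only a small side network plus a blending weight are updated for a new topology. This is a minimal illustrative sketch, not the authors' implementation; the network sizes, the sigmoid-blended weight, and the name SideTunedPolicy are assumptions made here for demonstration.

```python
import torch
import torch.nn as nn

class SideTunedPolicy(nn.Module):
    """Illustrative side-tuning wrapper (hypothetical, not the paper's code)."""

    def __init__(self, base_net: nn.Module, state_dim: int, action_dim: int):
        super().__init__()
        self.base = base_net                      # policy trained on the original topology
        for p in self.base.parameters():          # freeze all base parameters
            p.requires_grad = False
        self.side = nn.Sequential(                # small trainable side network
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )
        self.alpha = nn.Parameter(torch.tensor(0.0))  # learnable blending weight

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.alpha)             # keep the blend in (0, 1)
        return w * self.base(state) + (1.0 - w) * self.side(state)

# Usage: for a new topology, only the side network and alpha receive gradients,
# so far fewer parameters are updated than in full fine-tuning.
base = nn.Sequential(nn.Linear(30, 128), nn.ReLU(), nn.Linear(128, 5))
policy = SideTunedPolicy(base, state_dim=30, action_dim=5)
optimizer = torch.optim.Adam(
    [p for p in policy.parameters() if p.requires_grad], lr=1e-3
)
```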

Cite (APA)

Ma, Q., & Deng, C. (2023). Simplified Deep Reinforcement Learning Based Volt-var Control of Topologically Variable Power System. Journal of Modern Power Systems and Clean Energy, 11(5), 1396–1404. https://doi.org/10.35833/MPCE.2022.000468
