Navigating the Landscape of Deep Reinforcement Learning for Power System Stability Control: A Review

This article is free to access.

Abstract

The widespread penetration of inverter-based resources has profoundly impacted the electrical stability of power systems (PSs). The deepening grid integration of photovoltaic and wind systems introduces unforeseen uncertainties for the electricity sector. As a cutting-edge machine learning technique, deep reinforcement learning (DRL) has drawn considerable attention in recent years for its potential contributions to PS stability (PSS). By learning from the dynamics inherent in PSs, DRL architectures produce near-optimal control actions for PSS. This article provides a rigorous review of the latest research efforts that apply DRL to derive PSS policies while accounting for the unique properties of power grids. It further highlights the theoretical advantages and key tradeoffs of emerging DRL techniques as powerful tools for optimal power flow. For each method outlined, a discussion of its bottlenecks, research challenges, and potential opportunities in large-scale PSS is also presented. This review aims to support research on DRL algorithms for PSS under unseen faults and varying PS topologies.
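To illustrate the idea of an agent learning near-optimal stability actions from system dynamics, the following is a minimal sketch using tabular Q-learning on a toy one-dimensional "frequency deviation" environment. Everything here (the `ToyGridEnv` model, the state discretization, the reward shape, all hyperparameters) is an illustrative assumption and not drawn from the paper, which surveys far richer deep RL methods.

```python
# Illustrative sketch only: a tabular Q-learning loop on a toy
# frequency-deviation model. All names and dynamics are assumptions,
# not the paper's methods.
import random

class ToyGridEnv:
    """State is a discretized frequency deviation index in 0..4,
    where index 2 is nominal; actions nudge it down/hold/up."""
    def __init__(self):
        self.state = 2
    def reset(self):
        self.state = random.randint(0, 4)
        return self.state
    def step(self, action):  # action in {0: lower, 1: hold, 2: raise}
        drift = random.choice([-1, 0, 1])               # random load disturbance
        self.state = max(0, min(4, self.state + (action - 1) + drift))
        reward = -abs(self.state - 2)                   # penalize deviation from nominal
        return self.state, reward

def train(episodes=500, steps=20, alpha=0.1, gamma=0.9, eps=0.1):
    q = [[0.0] * 3 for _ in range(5)]                   # Q-table: 5 states x 3 actions
    env = ToyGridEnv()
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(3)
            else:
                a = max(range(3), key=lambda i: q[s][i])
            s2, r = env.step(a)
            # standard Q-learning update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
```

In a deep RL method of the kind the review covers, the Q-table would be replaced by a neural network and the toy dynamics by a power-system simulator, but the learn-from-interaction loop is the same.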

Citation (APA)

Massaoudi, M. S., Abu-Rub, H., & Ghrayeb, A. (2023). Navigating the Landscape of Deep Reinforcement Learning for Power System Stability Control: A Review. IEEE Access, 11, 134298–134317. https://doi.org/10.1109/ACCESS.2023.3337118
