The widespread penetration of inverter-based resources has profoundly affected the electrical stability of power systems (PSs). The deepening grid integration of photovoltaic and wind generation introduces new uncertainties for the electricity sector. As a cutting-edge machine learning technique, deep reinforcement learning (DRL) has attracted considerable attention in recent years for its potential contributions to PS stability (PSS). By learning from the dynamics inherent in PSs, DRL architectures produce near-optimal control actions for PSS. This article provides a rigorous review of the latest research efforts that apply DRL to derive PSS policies while accounting for the unique properties of power grids. It further highlights the theoretical advantages and key tradeoffs of emerging DRL techniques as powerful tools for optimal power flow. For all the methods outlined, the discussion covers their bottlenecks, open research challenges, and potential opportunities in large-scale PSS. This review aims to support research on DRL algorithms that strengthen PSS against unseen faults and varying PS topologies.
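To make the DRL-for-PSS loop described above concrete, the sketch below shows an agent learning a control policy from simulated system dynamics via a policy-gradient (REINFORCE) update. It is a minimal illustration under stated assumptions: the ToyVoltageEnv class, its single-bus voltage dynamics, the action encoding, and all hyperparameters are hypothetical and are not the environments or algorithms surveyed in the reviewed works.

```python
# Minimal, hypothetical sketch of a DRL loop for a power-system stability task.
# ToyVoltageEnv, its dynamics, and all hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

class ToyVoltageEnv:
    """Hypothetical single-bus voltage-regulation environment (assumption)."""
    def reset(self):
        self.v = 1.0 + 0.1 * np.random.randn()    # per-unit bus voltage
        return np.array([self.v], dtype=np.float32)
    def step(self, action):
        # action in {0, 1, 2}: lower, hold, or raise reactive-power support
        self.v += 0.02 * (action - 1) + 0.01 * np.random.randn()
        reward = -abs(self.v - 1.0)               # penalize deviation from 1.0 p.u.
        return np.array([self.v], dtype=np.float32), reward

policy = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env, horizon, gamma = ToyVoltageEnv(), 20, 0.99

for episode in range(200):                        # REINFORCE training loop
    state, log_probs, rewards = env.reset(), [], []
    for _ in range(horizon):
        logits = policy(torch.from_numpy(state))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward = env.step(action.item())
        rewards.append(reward)
    # Discounted returns, computed backwards over the episode
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns, dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same interaction pattern carries over to the deeper DRL methods covered in the review; what changes is the fidelity of the grid environment, the state and action spaces, and the learning algorithm itself.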
Massaoudi, M. S., Abu-Rub, H., & Ghrayeb, A. (2023). Navigating the Landscape of Deep Reinforcement Learning for Power System Stability Control: A Review. IEEE Access, 11, 134298–134317. https://doi.org/10.1109/ACCESS.2023.3337118