Reinforcement learning (RL) to optimal reconfiguration of radial distribution system (RDS)

Abstract

This paper presents a Reinforcement Learning (RL) method for the optimal reconfiguration of a radial distribution system (RDS). Optimal reconfiguration involves selecting the best set of branches to be opened, one from each loop, such that the resulting RDS achieves the desired performance. Among the several performance criteria considered for optimal network reconfiguration, an important one is the minimization of real power losses while satisfying voltage limits. The RL method formulates the reconfiguration of the RDS as a multistage decision problem. More specifically, the model-free learning algorithm (Q-learning) learns by experience how to adjust a closed-loop control rule mapping operating states to control actions by means of reward values. Rewards are chosen to express how well control actions minimize power losses. The Q-learning algorithm is applied to the reconfiguration of the 33-busbar RDS. The results are compared with those given by other evolutionary programming methods.
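To make the multistage decision formulation concrete, the following is a minimal, self-contained sketch of tabular Q-learning applied to open-branch selection. The loop structure, branch numbers, the power_loss placeholder, and the hyperparameters (ALPHA, GAMMA, EPSILON, EPISODES) are illustrative assumptions, not taken from the paper; a faithful implementation would evaluate losses with a distribution load flow on the 33-busbar system and penalize voltage-limit violations in the reward.

import random
from collections import defaultdict

# Hypothetical loop data: each fundamental loop lists the candidate
# branches, one of which must be opened to keep the network radial.
# Branch numbers are placeholders, not the actual 33-busbar tie switches.
LOOPS = [
    [7, 20, 33],
    [9, 14, 34],
    [28, 32, 36],
]

def power_loss(open_branches):
    # Placeholder for a load-flow evaluation of real power losses (kW)
    # for the radial configuration defined by `open_branches`.
    # A real implementation would also check bus-voltage limits.
    rng = random.Random(hash(open_branches))
    return 100.0 + rng.uniform(0.0, 50.0)

def reward(open_branches):
    # Lower losses -> higher reward (simple negative-loss shaping).
    return -power_loss(open_branches)

# Tabular Q-learning over the multistage decision: at stage k we choose
# which branch of loop k to open, given the branches opened so far.
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 2000
Q = defaultdict(float)  # key: (state, action), state = tuple of opened branches

def choose(state, stage):
    # Epsilon-greedy action selection among the branches of this loop.
    if random.random() < EPSILON:
        return random.choice(LOOPS[stage])
    return max(LOOPS[stage], key=lambda a: Q[(state, a)])

for _ in range(EPISODES):
    state = ()
    for stage in range(len(LOOPS)):
        action = choose(state, stage)
        next_state = state + (action,)
        terminal = stage == len(LOOPS) - 1
        r = reward(next_state) if terminal else 0.0
        best_next = 0.0 if terminal else max(
            Q[(next_state, a)] for a in LOOPS[stage + 1])
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy read-out of the learned policy: the set of branches to open.
config = ()
for stage in range(len(LOOPS)):
    best = max(LOOPS[stage], key=lambda a: Q[(config, a)])
    config += (best,)
print("Learned open-branch set:", config, "loss:", power_loss(config))

Because the state is the partial set of already-opened branches, the greedy read-out at the end reconstructs a complete configuration one loop at a time, mirroring the paper's multistage framing; only the final (radial) configuration is rewarded.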

Citation (APA)

Vlachogiannis, J. G., & Hatziargyriou, N. (2004). Reinforcement learning (RL) to optimal reconfiguration of radial distribution system (RDS). In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3025, pp. 439–446). Springer Verlag. https://doi.org/10.1007/978-3-540-24674-9_46
