Decentralized Reinforcement Learning Approach for Microgrid Energy Management in Stochastic Environment

Abstract

Microgrids are smart power grids that can integrate Distributed Energy Resources (DERs) into the main grid cleanly and reliably. Due to the random and unpredictable nature of Renewable Energy Sources (RESs) and electricity demand, designing a control system for microgrid energy management is a complex task. In addition, the policies of microgrid agents change over time as they seek to improve their expected profits. The problem is therefore stochastic, and the agents' policies are neither stationary nor deterministic. This paper proposes a fully decentralized multiagent Energy Management System (EMS) for microgrids based on reinforcement learning and stochastic games. The microgrid agents, comprising customers and DERs, are treated as intelligent and autonomous decision makers. The proposed method solves a distributed optimization problem for each self-interested decision maker. Interactions between the decision makers and the environment during the learning phase lead the system to converge to an optimal equilibrium point at which the benefits of all agents are maximized. Simulation studies using a real dataset demonstrate the effectiveness of the proposed method for the hourly energy management of microgrids.
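To make the decentralized learning loop concrete, the sketch below shows one way such an EMS could be structured: independent tabular Q-learning agents, each observing only its own local state and maximizing its own hourly profit. This is a minimal illustration under assumed simplifications (hypothetical actions, a toy stochastic environment, and plain independent Q-learning), not the paper's exact stochastic-game algorithm or reward design.

```python
# Illustrative sketch (not the paper's exact method): fully decentralized,
# independent tabular Q-learning agents for hourly microgrid energy management.
# Each agent (a customer or a DER) sees only its local, discretized state.
import random
from collections import defaultdict

N_AGENTS = 3          # hypothetical: e.g., two DERs and one customer
ACTIONS = [0, 1, 2]   # hypothetical actions: 0 = idle, 1 = sell/export, 2 = buy/import
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
HOURS = 24

def local_state(hour, net_power):
    """Discretize an agent's local observation: hour of day and sign of net power."""
    return (hour, 1 if net_power > 0 else -1)

def step(hour, actions):
    """Toy stochastic environment: random RES output and demand determine each
    agent's net power; rewards are simple per-agent profits."""
    rewards, states = [], []
    for a in actions:
        generation = random.uniform(0.0, 1.0)   # stochastic renewable output
        demand = random.uniform(0.0, 1.0)       # stochastic load
        net = generation - demand
        price = 0.5 + 0.5 * random.random()     # stochastic market price
        if a == 1:       # sell surplus (profit if net > 0, loss otherwise)
            reward = price * net
        elif a == 2:     # buy to cover any deficit
            reward = -price * max(-net, 0.0)
        else:            # stay idle
            reward = 0.0
        rewards.append(reward)
        states.append(local_state((hour + 1) % HOURS, net))
    return states, rewards

# One independent Q-table per agent: no shared state or central coordinator.
q_tables = [defaultdict(float) for _ in range(N_AGENTS)]

def choose(q, s):
    """Epsilon-greedy action selection from the agent's own Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(s, a)])

states = [local_state(0, 0.0) for _ in range(N_AGENTS)]
for episode in range(2000):
    hour = episode % HOURS
    actions = [choose(q, s) for q, s in zip(q_tables, states)]
    next_states, rewards = step(hour, actions)
    for q, s, a, r, s2 in zip(q_tables, states, actions, rewards, next_states):
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])  # Q-learning update
    states = next_states
```

In this sketch each agent updates its own Q-table from local rewards only; the coupling between agents happens implicitly through the shared stochastic environment, which is the spirit of the decentralized stochastic-game formulation described in the abstract.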

Citation (APA)

Darshi, R., Shamaghdari, S., Jalali, A., & Arasteh, H. (2023). Decentralized Reinforcement Learning Approach for Microgrid Energy Management in Stochastic Environment. International Transactions on Electrical Energy Systems, 2023. https://doi.org/10.1155/2023/1190103
