Novel energy trading system based on deep-reinforcement learning in microgrids


Abstract

Inefficiencies in the energy trading systems of microgrids are mainly caused by uncertainty in non-stationary operating environments. This uncertainty can be mitigated by analyzing patterns of primary operation parameters and their corresponding actions. In this paper, a novel energy trading system based on a double deep Q-network (DDQN) algorithm and a double Kelly strategy is proposed to improve profits while reducing dependence on the main grid in microgrid systems. The DDQN algorithm is employed to select optimized actions for improving energy transactions. Additionally, the double Kelly strategy is used to control the microgrid's energy trading quantity so as to produce long-term profits. Simulation results confirm that the proposed strategies achieve a significant improvement in total profits and in independence from the main grid through optimized energy transactions.
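The abstract does not include implementation details, but the two ingredients it names can be illustrated briefly. The sketch below is a minimal Python example, assuming a standard Double DQN target (online network selects the next action, target network evaluates it) and the classic single-asset Kelly fraction for sizing a trade; all function names, numbers, and the single-Kelly form are illustrative assumptions, not the authors' actual "double Kelly" method.

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it (decoupling selection from evaluation)."""
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))          # selection (online net)
    return reward + gamma * next_q_target[best_action]   # evaluation (target net)

def kelly_fraction(p_win, net_odds):
    """Classic Kelly criterion: fraction of available capacity to commit to a
    trade with win probability p_win and net odds (profit per unit staked)."""
    f = p_win - (1.0 - p_win) / net_odds
    return max(0.0, f)  # never commit a negative fraction

# Illustrative transition with three candidate trading actions (values assumed)
q_online = np.array([1.2, 0.7, 1.5])   # online-network estimates for next state
q_target = np.array([1.1, 0.9, 1.3])   # target-network estimates for next state
y = ddqn_target(reward=0.5, next_q_online=q_online, next_q_target=q_target)

# Illustrative sizing of a sell order as a Kelly fraction of surplus energy
f_sell = kelly_fraction(p_win=0.6, net_odds=1.0)
print(f"DDQN target: {y:.3f}, Kelly sell fraction: {f_sell:.2f}")
```

The key design point the abstract alludes to is that DDQN's decoupled selection and evaluation reduces the overestimation bias of standard Q-learning, while a Kelly-style sizing rule bounds how much energy is committed per transaction to favor long-term growth of profits rather than short-term gains.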

Citation (APA)

Lee, S., Seon, J., Kyeong, C., Kim, S., Sun, Y., & Kim, J. (2021). Novel energy trading system based on deep-reinforcement learning in microgrids. Energies, 14(17). https://doi.org/10.3390/en14175515
