A Q-cube framework of reinforcement learning algorithm for continuous double auction among microgrids


Abstract

Decision-making by microgrids under dynamic, uncertain bidding conditions has long been a significant subject of interest in energy markets. The emerging application of reinforcement learning algorithms in energy markets offers a solution to this problem. In this paper, we investigate the potential of applying a Q-learning algorithm to a continuous double auction mechanism. By choosing the global supply–demand relationship as the state and treating both bidding price and quantity as actions, a new Q-learning architecture is proposed that better reflects personalized bidding preferences and responds to real-time market conditions. A battery energy storage system provides an alternative form of demand response by exploiting its available capacity. A Q-cube framework is designed to describe the iteration of the Q-value distribution. Results from a case study of 14 microgrids in Guizhou Province, China indicate that the proposed Q-cube framework is capable of making rational bidding decisions and raising the microgrids’ profits.
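The Q-cube idea can be sketched as a three-dimensional Q-table indexed by (state, price action, quantity action). The sketch below is a minimal illustration, not the authors' implementation: the state discretization, action counts, and hyperparameters are hypothetical placeholders, and the update rule is the standard tabular Q-learning update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization (not taken from the paper): the global
# supply/demand relationship is bucketed into N_STATES market states,
# and bid price / bid quantity are each bucketed into discrete levels.
N_STATES, N_PRICES, N_QTYS = 5, 10, 10
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # illustrative hyperparameters

# The "Q-cube": one Q-value per (state, price, quantity) triple.
Q = np.zeros((N_STATES, N_PRICES, N_QTYS))

def choose_action(state):
    """Epsilon-greedy choice over the joint (price, quantity) action."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_PRICES)), int(rng.integers(N_QTYS))
    p, q = np.unravel_index(np.argmax(Q[state]), Q[state].shape)
    return int(p), int(q)

def update(state, price, qty, reward, next_state):
    """Standard Q-learning update applied to one cell of the cube."""
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, price, qty] += ALPHA * (td_target - Q[state, price, qty])
```

In a simulation loop, each microgrid agent would observe the market state, call `choose_action`, submit the corresponding (price, quantity) bid to the double auction, and call `update` with the profit received as the reward.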

Citation (APA)

Wang, N., Xu, W., Shao, W., & Xu, Z. (2019). A Q-cube framework of reinforcement learning algorithm for continuous double auction among microgrids. Energies, 12(15). https://doi.org/10.3390/en12152891
