Wind and Storage Cooperative Scheduling Strategy Based on Deep Reinforcement Learning Algorithm

Abstract

Wind energy has become one of the most promising new energy sources in the context of the global energy interconnection. However, the inherent uncertainty and volatility of wind power output make it difficult to integrate into the grid for scheduling, so this uncertainty must be reduced or even eliminated by suitable methods. In this paper, wind power and energy storage are coordinated to mitigate the uncertainty of wind power output, reducing the burden on the grid while ensuring the long-term operation of wind farms. The paper first introduces Q-learning from reinforcement learning as a controller; trained on a large amount of historical wind power data, the controller acquires good decision-making ability and thereby reduces the penalties caused by wind power uncertainty. Q-learning is then improved with the objective of maximizing the average income over a stage. Finally, a Q-value network is established for both conventional and improved Q-learning, the DQN algorithm from deep reinforcement learning is introduced for deep training and decision-making, and the three algorithms are compared. The results show that the deep reinforcement learning algorithm achieves better control performance than Q-learning.
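As a rough illustration of the tabular Q-learning controller the abstract describes, the sketch below applies the standard temporal-difference update to a toy wind-storage dispatch problem. The state/action discretization, reward shape, and all parameter values here are hypothetical placeholders, not the paper's formulation.

```python
import numpy as np

# Toy Q-learning sketch for wind-storage dispatch (illustrative only).
# States: discretized wind-forecast-error levels (hypothetical).
# Actions: storage command -- 0 = discharge, 1 = idle, 2 = charge.
rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration

Q = np.zeros((N_STATES, N_ACTIONS))

def reward(state, action):
    # Penalize deviation between committed and delivered output:
    # charging offsets over-forecast states, discharging under-forecast ones.
    deviation = abs((state - 2) - (action - 1))  # toy deviation model
    return -float(deviation)

for episode in range(2000):
    s = int(rng.integers(N_STATES))
    for _ in range(24):                          # 24 hourly dispatch steps
        # epsilon-greedy action selection
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
        r = reward(s, a)
        s_next = int(rng.integers(N_STATES))     # toy stochastic wind transition
        # Standard Q-learning temporal-difference update
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next

# After training, the greedy policy discharges in under-forecast states,
# idles near zero error, and charges in over-forecast states.
print([int(Q[s].argmax()) for s in range(N_STATES)])
```

The DQN variant the paper evaluates would replace the table `Q` with a neural network approximator plus experience replay and a target network, while keeping the same temporal-difference target.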

Citation (APA)

Qin, J., Han, X., Liu, G., Wang, S., Li, W., & Jiang, Z. (2019). Wind and Storage Cooperative Scheduling Strategy Based on Deep Reinforcement Learning Algorithm. In Journal of Physics: Conference Series (Vol. 1213). Institute of Physics Publishing. https://doi.org/10.1088/1742-6596/1213/3/032002
