Reinforcement learning based two-timescale energy management for energy hub


Abstract

Maintaining energy balance and economical operation is essential for the energy hub (EH), which serves as the central component of an integrated energy system. Real-time regulation of the heating and cooling equipment within the EH is challenging because these units respond slowly to stochastic fluctuations in renewable energy sources and demands, whereas electric energy storage equipment (EST) responds quickly; a conventional single-timescale energy management strategy therefore cannot account for the operating characteristics of all equipment. With this motivation, this study proposes a deep reinforcement learning based two-timescale energy management strategy for the EH, which controls the heating and cooling equipment on a long timescale of 1 h and the EST on a short timescale of 15 min. The actions of the EST are modelled as discrete to reduce the action space, and a discrete-continuous hybrid action sequential TD3 model is proposed to handle both discrete and continuous actions in the long-timescale policy. A joint training approach based on a centralized training framework is proposed to learn the policies at both levels in parallel. Case studies demonstrate that, compared with the single-timescale strategy, the proposed strategy reduces the economic cost by 1% and carbon emissions by 0.5%.
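
The abstract does not give the network details, so the following minimal PyTorch sketch is only an illustration of the general idea of a TD3-style actor with a hybrid discrete-continuous output; the layer sizes, state dimension, the Gumbel-softmax relaxation for the discrete branch, and the interpretation of the outputs (continuous set-points for heating/cooling units, a discrete charge/idle/discharge choice for the EST) are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HybridActor(nn.Module):
        """TD3-style actor emitting a continuous action vector and a discrete action (sketch)."""
        def __init__(self, state_dim: int, cont_dim: int, n_discrete: int, hidden: int = 256):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.cont_head = nn.Linear(hidden, cont_dim)    # continuous set-points, squashed to [-1, 1]
            self.disc_head = nn.Linear(hidden, n_discrete)  # logits over discrete EST actions

        def forward(self, state: torch.Tensor, hard: bool = True):
            h = self.trunk(state)
            a_cont = torch.tanh(self.cont_head(h))
            # Gumbel-softmax keeps the discrete branch differentiable for critic-based training;
            # hard=True returns a one-hot action at execution time.
            a_disc = F.gumbel_softmax(self.disc_head(h), tau=1.0, hard=hard)
            return a_cont, a_disc

    if __name__ == "__main__":
        actor = HybridActor(state_dim=12, cont_dim=3, n_discrete=3)
        s = torch.randn(4, 12)               # batch of 4 example states
        cont, disc = actor(s)
        print(cont.shape, disc.shape)        # torch.Size([4, 3]) torch.Size([4, 3])

In a two-timescale scheme of the kind described above, one such policy could act every hour for the slow equipment while a second, faster policy acts every 15 min for the EST, with both trained jointly under a centralized critic; this division is stated in the abstract, while the code above is only a hedged sketch of the hybrid action head.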

Citation (APA)
Chen, J., Mao, C., Sha, G., Sheng, W., Fan, H., Wang, D., … Zhang, Y. (2024). Reinforcement learning based two-timescale energy management for energy hub. IET Renewable Power Generation, 18(3), 476–488. https://doi.org/10.1049/rpg2.12911
