Model uncertainties and heterogeneous energy states hinder the effective aggregation of electric vehicles (EVs), especially when coupled with the real-time frequency dynamics of the electrical grid. By integrating the advantages of deep learning and reinforcement learning, deep reinforcement learning shows potential to relieve this challenge: an intelligent agent fully considers the individual state-of-charge (SOC) differences among EVs and the grid state to optimize aggregation performance. However, existing deep reinforcement learning policies usually output deterministic actions, and they struggle to handle the increasing uncertainty and randomness in modern electrical systems. In this paper, a probability-based management strategy with a continuous action space is proposed based on deep reinforcement learning, which provides fine-grained energy management and simultaneously addresses the time-varying dynamics of the EVs and the electrical grid. Moreover, an optimization based on the proximal policy is further introduced to clip the policy update speed and enhance training stability. The effectiveness of the proposed energy management structure and policy optimization strategy is verified under various scenarios and uncertainties, demonstrating advantageous performance in SOC management and frequency maintenance. Beyond these performance merits, the training procedure is also presented, revealing how the proposed approach evolves.
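The abstract refers to clipping the policy update speed in the style of proximal policy optimization, together with a probabilistic (stochastic) policy over a continuous action space. The paper's own implementation is not reproduced here; the sketch below only illustrates the generic PPO clipped surrogate objective and a diagonal-Gaussian log-density, which is one common way to realize a probability-based continuous-action policy. All function names and the clip range `eps=0.2` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Generic PPO clipped surrogate loss (to be minimized).

    ratio:     pi_new(a|s) / pi_old(a|s), probability ratio per sample
    advantage: estimated advantage per sample
    eps:       clip range bounding how far one update may move the policy
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the element-wise minimum makes the objective pessimistic:
    # policy jumps that would inflate the surrogate gain are cut off,
    # which is the "clipped update speed" stabilization mechanism.
    return -np.mean(np.minimum(unclipped, clipped))

def gaussian_log_prob(action, mean, std):
    """Log-density of a diagonal Gaussian policy over continuous actions,
    from which the probability ratio above can be formed."""
    return -0.5 * np.sum(((action - mean) / std) ** 2
                         + 2.0 * np.log(std) + np.log(2.0 * np.pi),
                         axis=-1)
```

For example, with a probability ratio of 2.0 and a positive advantage of 1.0, the clipped term caps the surrogate at 1.2 rather than 2.0, so the gradient no longer rewards moving the policy further in that direction.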
Dong, C., Sun, J., Wu, F., & Jia, H. (2020). Probability-Based Energy Reinforced Management of Electric Vehicle Aggregation in the Electrical Grid Frequency Regulation. IEEE Access, 8, 110598–110610. https://doi.org/10.1109/ACCESS.2020.3002693