Abstract
A time-varying time-of-use electricity price can be exploited to reduce charging costs for electric vehicle (EV) owners. Considering the uncertainty of price fluctuations and the randomness of EV owners' commuting behavior, we propose a deep reinforcement learning-based method to minimize the charging cost of an individual EV. The charging problem is first formulated as a Markov decision process (MDP) with unknown transition probabilities. A modified long short-term memory (LSTM) neural network is used as the representation layer to extract temporal features from the electricity price signal, and the deep deterministic policy gradient (DDPG) algorithm, which operates in a continuous action space, is used to solve the MDP. The proposed method automatically adjusts the charging strategy according to the electricity price to reduce the EV owner's charging cost. Several benchmark methods for the charging problem are also implemented and compared quantitatively; the proposed method reduces the charging cost by up to 70.2% relative to these benchmarks.
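To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of how an LSTM price encoder can feed a DDPG-style actor-critic pair that outputs a continuous charging rate. Names such as PriceEncoder, price_window, and the two-dimensional battery state are illustrative assumptions; the paper's exact network sizes and state definition may differ.

import torch
import torch.nn as nn

class PriceEncoder(nn.Module):
    # LSTM representation layer: extracts temporal features from a window of past prices.
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)

    def forward(self, price_window):          # shape (batch, T, 1)
        _, (h, _) = self.lstm(price_window)
        return h[-1]                          # shape (batch, hidden_size)

class Actor(nn.Module):
    # Maps (price features, battery state) to a continuous charging rate in [-1, 1].
    def __init__(self, feat_size=32, state_size=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_size + state_size, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh())

    def forward(self, feat, state):
        return self.net(torch.cat([feat, state], dim=-1))

class Critic(nn.Module):
    # Estimates Q(s, a) for the charging MDP, as used in DDPG updates.
    def __init__(self, feat_size=32, state_size=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_size + state_size + 1, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, feat, state, action):
        return self.net(torch.cat([feat, state, action], dim=-1))

# Example forward pass on dummy data (hypothetical state: SOC and time until departure).
encoder, actor, critic = PriceEncoder(), Actor(), Critic()
prices = torch.randn(4, 24, 1)        # last 24 hourly prices per sample
state = torch.rand(4, 2)
feat = encoder(prices)
action = actor(feat, state)           # charging/discharging rate in [-1, 1]
q_value = critic(feat, state, action)
print(action.shape, q_value.shape)

In a full DDPG setup these networks would be paired with target copies and trained from a replay buffer; the sketch only shows how the LSTM features enter the actor and critic.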
Citation
Li, S., Hu, W., Cao, D., Dragicevic, T., Huang, Q., Chen, Z., & Blaabjerg, F. (2022). Electric Vehicle Charging Management Based on Deep Reinforcement Learning. Journal of Modern Power Systems and Clean Energy, 10(3), 719–730. https://doi.org/10.35833/MPCE.2020.000460