Abstract
Intrinsic and environmental factors contribute to variability in the performance of cells within a battery pack, affecting the lifespan and safety of battery systems. To address this problem, active and passive equalization methods have been proposed. However, existing passive equalization methods dissipate energy and equalize cells inefficiently, while existing active equalization methods require complex expert knowledge and control algorithms. We propose an active equalization model that leverages a generative model (GM) to assist in pattern selection for a reinforcement learning (RL) scheme, tailored for Dynamic Reconfigurable Battery (DRB) systems. The proposed model overcomes the pattern-selection challenge in large-scale discrete action spaces by employing a Variational Autoencoder (VAE) for dimensionality reduction and latent-space mapping, actively balancing DRB systems. Moreover, the use of pattern subgraphs reduces dependence on expert knowledge, enabling the model to recognize structural information and maintain the system's stability. The experimental setup adheres to the laws of physics and tests the model's functionality on a simulation system. Results show that the proposed Generative Model-based Reinforcement Learning (GMRL) approach effectively addresses decision-making challenges in large-scale action spaces. It can learn the structured features of the battery network, thus balancing the energy storage system and maximizing discharge efficiency gains.
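The core mechanism described above, using a generative model's latent space to make a large discrete action space tractable for RL, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pattern set, the linear "encoder" standing in for a trained VAE, and all dimensions are invented placeholders. Candidate reconfiguration patterns are embedded into a low-dimensional latent space; the agent then emits a continuous latent action, which is decoded to the nearest discrete pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate reconfiguration patterns for an 8-switch DRB
# system (each bit = one switch state). In the paper these correspond to
# pattern subgraphs; here they are random placeholders.
n_patterns, n_switches, latent_dim = 256, 8, 3
patterns = rng.integers(0, 2, size=(n_patterns, n_switches))

# Stand-in for a trained VAE encoder: a fixed linear map into latent
# space. A real implementation would use the VAE's learned mean network.
W = rng.normal(size=(n_switches, latent_dim))
latent_codes = patterns.astype(float) @ W  # shape: (n_patterns, latent_dim)

def select_pattern(proto_action: np.ndarray) -> int:
    """Map the agent's continuous latent action to the index of the
    nearest discrete pattern (Euclidean distance in latent space)."""
    dists = np.linalg.norm(latent_codes - proto_action, axis=1)
    return int(np.argmin(dists))

# The RL policy outputs a point in latent space; we decode it to a
# concrete switch configuration to apply to the battery network.
proto = rng.normal(size=latent_dim)
idx = select_pattern(proto)
chosen_pattern = patterns[idx]
```

The payoff of this scheme is that the policy only has to produce a small continuous vector rather than score all candidate patterns, so the effective decision space stays fixed even as the number of reconfiguration patterns grows.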
Hu, J., Li, X., Li, X., Hou, Z., & Zhang, Z. (2025). Optimizing reinforcement learning for large action spaces via generative models: Battery pattern selection. Pattern Recognition, 160. https://doi.org/10.1016/j.patcog.2024.111194