The combination of multi-agent technology and reinforcement learning has been recognized as an effective approach to path planning-based crowd simulation. However, existing solutions remain unsatisfactory because of the mutual influence among agents. Therefore, an improved multi-agent reinforcement learning method (the IMARL algorithm) is introduced. In this method, the intersections of pedestrian trajectories extracted from real video are first used as the state space for reinforcement learning. The crowd is divided into groups, and a leader is selected for each group. A bulletin board is added to the multi-agent reinforcement learning algorithm to store empirical knowledge gained during learning, and a navigation agent passes information between the leaders and the bulletin board. The original social force model is improved by adding a vision-based cohesive force to the force formula. The IMARL algorithm is then combined with this improved social force model for crowd evacuation simulation under a two-layer control mechanism: in the upper layer, leaders select paths through a decision process based on the IMARL algorithm, while in the bottom layer, the individuals in each group evacuate according to the improved social force model. The proposed method not only mitigates the curse of dimensionality in reinforcement learning but also improves convergence speed, and it effectively improves evacuation efficiency in crowd evacuation simulation experiments. In addition, it can provide concrete guidance schemes for improving crowd evacuation, as well as decision support for preventing and managing large-scale stampede incidents.
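To make the two-layer mechanism concrete, the following is a minimal sketch, not the paper's implementation: the bulletin board is modeled as a Q-table shared by all leaders (upper layer), and the lower layer is a simplified social force step with an extra cohesive term pulling a follower toward its leader. All class names, the tabular Q-learning update, and every parameter value (`alpha`, `gamma`, `k_cohesion`, etc.) are illustrative assumptions standing in for the paper's actual formulation.

```python
import numpy as np

class BulletinBoard:
    """Shared experience store: a Q-table that every leader reads and writes.

    This stands in for the paper's bulletin board; here the "empirical
    knowledge" is simply the Q-values accumulated by all leaders.
    """
    def __init__(self, n_states, n_actions):
        self.q = np.zeros((n_states, n_actions))

    def update(self, s, a, r, s_next, alpha=0.1, gamma=0.9):
        # Standard tabular Q-learning update, written by any leader.
        best_next = self.q[s_next].max()
        self.q[s, a] += alpha * (r + gamma * best_next - self.q[s, a])

    def choose(self, s, eps=0.1, rng=None):
        # Epsilon-greedy path choice over the shared Q-values.
        rng = rng or np.random.default_rng()
        if rng.random() < eps:
            return int(rng.integers(self.q.shape[1]))
        return int(self.q[s].argmax())

def social_force_step(pos, vel, goal, leader_pos, dt=0.1,
                      desired_speed=1.3, tau=0.5, k_cohesion=0.8):
    """One Euler step of a simplified social force model.

    The driving force steers toward the goal; the added cohesive term
    (a hypothetical stand-in for the paper's vision-based cohesion)
    pulls a follower toward its leader's position.
    """
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal)
    desired_vel = desired_speed * to_goal / dist if dist > 1e-9 else np.zeros(2)
    f_drive = (desired_vel - vel) / tau          # relax toward desired velocity
    f_cohesion = k_cohesion * (leader_pos - pos)  # pull toward the leader
    vel = vel + (f_drive + f_cohesion) * dt
    pos = pos + vel * dt
    return pos, vel
```

A short usage example under the same assumptions: a leader learns, via repeated bulletin-board updates, that the second of two candidate exits is rewarded, and a follower at the origin takes one social force step toward its leader and the goal.

```python
board = BulletinBoard(n_states=1, n_actions=2)
for _ in range(50):
    board.update(0, 1, 1.0, 0)  # exit 1 yields reward
    board.update(0, 0, 0.0, 0)  # exit 0 does not
chosen = board.choose(0, eps=0.0)  # greedy choice after learning

pos, vel = social_force_step(np.zeros(2), np.zeros(2),
                             goal=np.array([10.0, 0.0]),
                             leader_pos=np.array([5.0, 0.0]))
```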
Wang, Q., Liu, H., Gao, K., & Zhang, L. (2019). Improved Multi-Agent Reinforcement Learning for Path Planning-Based Crowd Simulation. IEEE Access, 7, 73841–73855. https://doi.org/10.1109/ACCESS.2019.2920913