Calibrating a motion model based on reinforcement learning for pedestrian simulation


Abstract

This paper presents the calibration of a framework based on Multi-agent Reinforcement Learning (RL) for generating motion simulations of pedestrian groups. The framework sets up a group of autonomous embodied agents, each of which learns to control its own instantaneous velocity vector in scenarios with collisions and friction forces. The result of the process is a different learned motion controller for each agent. The calibration of both the physical properties involved in the motion of the embodied agents and the corresponding dynamics is an important issue for a realistic simulation. The physics engine used has been calibrated with values taken from real pedestrian dynamics. Two experiments have been carried out to test this approach, and their results are compared with databases of real pedestrians in similar scenarios. As a comparison tool, the diagram of speed versus density, known in the literature as the fundamental diagram, is used. © 2012 Springer-Verlag Berlin Heidelberg.
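The fundamental diagram mentioned above is the empirical relation between local pedestrian density and mean walking speed. A minimal sketch of how such a diagram could be computed from (density, speed) samples is shown below; this is not the authors' code, and the binning scheme and toy data are illustrative assumptions.

```python
import numpy as np

def fundamental_diagram(densities, speeds, bin_edges):
    """Return (bin_centers, mean_speed_per_bin) for speed-vs-density samples.

    Hypothetical helper, not from the paper: it groups samples into
    density bins and averages the speeds observed in each bin.
    """
    densities = np.asarray(densities, dtype=float)
    speeds = np.asarray(speeds, dtype=float)
    # Assign each sample to a density bin (indices start at 0).
    bin_idx = np.digitize(densities, bin_edges) - 1
    centers, means = [], []
    for i in range(len(bin_edges) - 1):
        mask = bin_idx == i
        if mask.any():
            centers.append(0.5 * (bin_edges[i] + bin_edges[i + 1]))
            means.append(speeds[mask].mean())
    return np.array(centers), np.array(means)

# Toy samples (assumed values): speed typically drops as density grows.
dens = [0.5, 0.6, 1.5, 1.6, 2.5, 2.6]   # pedestrians / m^2
spd  = [1.3, 1.2, 0.9, 0.8, 0.5, 0.4]   # m / s
centers, mean_speeds = fundamental_diagram(dens, spd, bin_edges=[0.0, 1.0, 2.0, 3.0])
```

Comparing such a binned curve from simulated trajectories against one from a real-pedestrian database is one common way to validate a calibrated simulation.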

Citation (APA)

Martinez-Gil, F., Lozano, M., & Fernández, F. (2012). Calibrating a motion model based on reinforcement learning for pedestrian simulation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7660 LNCS, pp. 302–313). Springer Verlag. https://doi.org/10.1007/978-3-642-34710-8_28
