LCrowdV: Generating labeled videos for simulation-based crowd behavior learning

Abstract

We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework for generating different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior (agent personality), flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by augmenting a real dataset with it and improving the accuracy of pedestrian detection. LCrowdV has been made available as an online resource.
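To make the labeling idea concrete, the sketch below (not the authors' code; all names and parameter values are hypothetical) shows how a procedural generator might enumerate combinations of the label dimensions named in the abstract, with each combination parameterizing one simulated-and-rendered video:

    # Minimal Python sketch of procedural label enumeration.
    # Hypothetical names/values; the actual LCrowdV pipeline may differ.
    from dataclasses import dataclass, asdict
    from itertools import product

    @dataclass
    class CrowdVideoLabel:
        environment: str       # e.g. "street", "mall"
        num_pedestrians: int
        behavior: str          # agent personality, in the paper's terms
        lighting: str
        viewpoint: str

    def generate_labels():
        environments = ["street", "mall", "park"]
        counts = [10, 50, 200]
        behaviors = ["aggressive", "shy", "tense"]
        lightings = ["day", "night"]
        viewpoints = ["overhead", "eye-level"]
        for combo in product(environments, counts, behaviors,
                             lightings, viewpoints):
            yield CrowdVideoLabel(*combo)

    for label in generate_labels():
        # Each label record would drive one simulation + rendering pass,
        # so ground-truth annotations come for free with every video.
        print(asdict(label))

Because the labels are inputs to the generator rather than the output of manual annotation, the dataset size scales with the size of this parameter sweep.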

Cite

APA

Cheung, E., Wong, T. K., Bera, A., Wang, X., & Manocha, D. (2016). LCrowdV: Generating labeled videos for simulation-based crowd behavior learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9914 LNCS, pp. 709–727). Springer Verlag. https://doi.org/10.1007/978-3-319-48881-3_50
