Learning pedestrian detection from virtual worlds

Abstract

In this paper, we present a real-time pedestrian detection system trained in a virtual environment. Pedestrian detection is a popular research topic with countless practical applications, and there has recently been increasing interest in deep learning architectures for this task. However, the availability of large labeled datasets is a key requirement for training such algorithms effectively. For this reason, in this work we introduce ViPeD, a new synthetically generated set of images extracted from a realistic 3D video game, where labels can be generated automatically by exploiting the 2D pedestrian positions provided by the graphics engine. We use this synthetic dataset to fine-tune a state-of-the-art, computationally efficient Convolutional Neural Network (CNN). A preliminary experimental evaluation, comparing against other existing approaches trained on real-world images, shows encouraging results.
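
The fine-tuning step described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden stand-in, not the authors' actual pipeline: it uses torchvision's Faster R-CNN in place of the paper's computationally efficient CNN, and replaces ViPeD data loading with random tensors shaped like one batch of synthetic frames with engine-derived bounding boxes.

# Hypothetical sketch: fine-tune a detector pretrained on real-world images
# on synthetic pedestrian data with automatically generated 2D box labels.
# torchvision's Faster R-CNN is a stand-in for the paper's detector; the
# ViPeD batch is mocked with random tensors.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Start from COCO-pretrained weights, then restrict the head to two classes:
# background and "pedestrian".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
model.to(device).train()

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# Stand-in for one ViPeD batch: synthetic frames plus boxes exported from the
# graphics engine (here, random images and a single dummy pedestrian box each).
images = [torch.rand(3, 480, 640, device=device) for _ in range(2)]
targets = [{"boxes": torch.tensor([[50., 60., 120., 300.]], device=device),
            "labels": torch.tensor([1], device=device)} for _ in images]

loss_dict = model(images, targets)   # detection losses in training mode
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()

In practice one would iterate this step over the full synthetic dataset with a DataLoader, and then evaluate against detectors trained only on real-world images, as the paper's preliminary evaluation does.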

Citation (APA)
Amato, G., Ciampi, L., Falchi, F., Gennaro, C., & Messina, N. (2019). Learning pedestrian detection from virtual worlds. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11751 LNCS, pp. 302–312). Springer Verlag. https://doi.org/10.1007/978-3-030-30642-7_27
