Learning Pedestrian Detection from Virtual Worlds

  • Giuseppe Amato, Luca Ciampi, Fabrizio Falchi, Claudio Gennaro, Nicola Messina

Abstract

In this paper, we present a real-time pedestrian detection system trained in a virtual environment. Pedestrian detection is a very popular research topic with countless practical applications, and recently there has been increasing interest in deep learning architectures for performing this task. However, the availability of large labeled datasets is a key requirement for effectively training such algorithms. For this reason, in this work we introduce ViPeD, a new synthetically generated set of images extracted from a realistic 3D video game, in which labels can be generated automatically by exploiting 2D pedestrian positions extracted from the graphics engine. We exploit this new synthetic dataset by fine-tuning a state-of-the-art, computationally efficient Convolutional Neural Network (CNN). A preliminary experimental evaluation against other existing approaches trained on real-world images shows encouraging results.
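The automatic labelling step described in the abstract can be sketched as follows. This is an illustrative assumption of how engine-exported 2D pedestrian positions might be turned into detector-ready bounding boxes; the function name and the `x`/`y`/`w`/`h` fields are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch: convert 2D pedestrian anchors reported by a game's
# graphics engine into clipped bounding boxes for detector training.
# All field names ('x', 'y', 'w', 'h') are illustrative assumptions.

def positions_to_boxes(pedestrians, img_w, img_h):
    """Turn engine-reported foot positions into (x1, y1, x2, y2) boxes.

    Each pedestrian is a dict with the on-screen foot position ('x', 'y')
    and the projected width/height in pixels ('w', 'h').
    """
    boxes = []
    for p in pedestrians:
        # Anchor at the feet: the box extends upward by h, centred on x.
        x1 = max(0.0, p["x"] - p["w"] / 2)
        y1 = max(0.0, p["y"] - p["h"])
        x2 = min(float(img_w), p["x"] + p["w"] / 2)
        y2 = min(float(img_h), p["y"])
        # Discard degenerate boxes (pedestrians fully off-screen).
        if x2 > x1 and y2 > y1:
            boxes.append((x1, y1, x2, y2))
    return boxes
```

Boxes produced this way would then feed the fine-tuning of the detection CNN, with coordinates normalized or rescaled to whatever format the chosen network expects.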

Cite

APA

Amato, G., Ciampi, L., Falchi, F., Gennaro, C., & Messina, N. (2019). Learning Pedestrian Detection from Virtual Worlds. In E. Ricci, S. Rota Bulò, C. Snoek, O. Lanz, S. Messelodi, & N. Sebe (Eds.), Image Analysis and Processing – ICIAP 2019 (pp. 302–312). Springer International Publishing.
