Deep motion model for pedestrian tracking in 360 degrees videos


Abstract

This paper proposes a deep convolutional neural network (CNN) for pedestrian tracking in 360° videos based on the target's motion. The tracking algorithm takes advantage of a virtual Pan-Tilt-Zoom (vPTZ) camera simulated by means of the 360° video. The CNN takes as input a motion image, i.e. the difference between two images captured by the vPTZ camera at different times with the same pan, tilt, and zoom parameters. From this motion image, the CNN predicts the vPTZ camera parameter adjustments required to keep the target at the center of the vPTZ camera view. Cross-validation experiments on a publicly available dataset demonstrate that the learned motion model generalizes and that the proposed tracking algorithm achieves state-of-the-art performance.
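The abstract outlines a simple control loop: render two vPTZ views of consecutive frames with identical pan/tilt/zoom parameters, difference them into a motion image, and let a CNN regress the parameter adjustments that re-center the target. The sketch below illustrates that loop under assumed details; the network architecture, the renderer `render_vptz`, and all names here are hypothetical, since the abstract does not specify them.

```python
# Minimal sketch of the tracking loop described in the abstract.
# MotionCNN, render_vptz, and all hyperparameters are illustrative
# assumptions, not the authors' published architecture.
import torch
import torch.nn as nn


class MotionCNN(nn.Module):
    """Toy CNN mapping a motion image to (d_pan, d_tilt, d_zoom)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 3)  # pan, tilt, zoom adjustments

    def forward(self, motion_image):
        x = self.features(motion_image)
        return self.head(x.flatten(1))


def track_step(model, render_vptz, frame_prev, frame_curr, params):
    """One tracking step: render both frames with the *same* vPTZ
    parameters, difference them, and predict the adjustment that
    keeps the target centered in the vPTZ view."""
    view_prev = render_vptz(frame_prev, params)  # H x W grayscale tensor
    view_curr = render_vptz(frame_curr, params)
    motion = (view_curr - view_prev).unsqueeze(0).unsqueeze(0)  # 1x1xHxW
    with torch.no_grad():
        d_pan, d_tilt, d_zoom = model(motion).squeeze(0).tolist()
    return {"pan": params["pan"] + d_pan,
            "tilt": params["tilt"] + d_tilt,
            "zoom": params["zoom"] + d_zoom}


if __name__ == "__main__":
    # Dummy renderer: ignores the parameters and returns a fixed crop,
    # just to show the call pattern of the loop.
    def render_vptz(frame, params):
        return frame[:64, :64]

    model = MotionCNN()
    f0, f1 = torch.rand(128, 256), torch.rand(128, 256)
    params = {"pan": 0.0, "tilt": 0.0, "zoom": 1.0}
    print(track_step(model, render_vptz, f0, f1, params))
```

Note that rendering both frames with identical parameters is what makes the difference image a pure motion cue: any change between the two views comes from scene motion, not from camera motion.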


Citation (APA)

Lo Presti, L., & La Cascia, M. (2019). Deep motion model for pedestrian tracking in 360 degrees videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11751 LNCS, pp. 36–47). Springer Verlag. https://doi.org/10.1007/978-3-030-30642-7_4
