Tracklet reidentification in crowded scenes using bag of spatio-temporal histograms of oriented gradients


Abstract

A novel tracklet association framework is introduced to perform robust online re-identification of pedestrians in crowded scenes recorded by a single camera. Recent advances in multi-target tracking allow the generation of longer tracks, but problems of fragmentation and identity switching remain due to occlusions and interactions between subjects. To address these issues, a discriminative and efficient descriptor is proposed that represents a tracklet as a bag of independent motion signatures built from spatio-temporal histograms of oriented gradients. Because these features vary significantly over time, they are generated only at automatically identified key poses that capture the essence of a subject's appearance and motion. As a consequence, re-identification involves only the most appropriate features in the bag at any given time. The superiority of the methodology is demonstrated on two publicly available datasets, achieving over 90% accuracy on first-rank tracklet associations. © 2013 Springer-Verlag Berlin Heidelberg.
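The pipeline described in the abstract (a bag of spatio-temporal HOG motion signatures per tracklet, matched at the best key pose) could be approximated as in the sketch below. This is an illustrative NumPy sketch, not the authors' implementation: the descriptor is a single magnitude-weighted orientation histogram per frame stack, the key poses are fixed indices rather than automatically detected, and the bag-matching rule (minimum pairwise distance) is a simplifying assumption.

```python
import numpy as np

def st_hog(patch, n_bins=8):
    """Orientation histogram over a (T, H, W) spatio-temporal patch,
    weighted by gradient magnitude and L2-normalised."""
    gy, gx = np.gradient(patch.astype(float), axis=(1, 2))  # per-frame spatial gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def tracklet_bag(frames, key_indices, depth=3):
    """Bag of motion signatures: one ST-HOG per key pose (a short stack of frames).
    In the paper the key poses are found automatically; here they are given."""
    return [st_hog(frames[i:i + depth]) for i in key_indices]

def bag_distance(bag_a, bag_b):
    """Compare two tracklets by the best-matching pair of signatures in their bags,
    so only the most appropriate features take part in the association."""
    return min(np.linalg.norm(a - b) for a in bag_a for b in bag_b)

# Synthetic demo: a tracklet, a near-duplicate of it, and an unrelated one.
rng = np.random.default_rng(0)
t1 = rng.random((12, 32, 16))
t2 = t1 + 0.01 * rng.random((12, 32, 16))
t3 = rng.random((12, 32, 16))
bag1, bag2, bag3 = (tracklet_bag(t, [0, 4, 8]) for t in (t1, t2, t3))
d_same, d_diff = bag_distance(bag1, bag2), bag_distance(bag1, bag3)
```

On this toy data, the distance between the matching pair (`d_same`) comes out smaller than the distance to the unrelated tracklet (`d_diff`), which is the property a re-identification score needs.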

APA

Lewandowski, M., Simonnet, D., Makris, D., Velastin, S. A., & Orwell, J. (2013). Tracklet reidentification in crowded scenes using bag of spatio-temporal histograms of oriented gradients. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7914 LNCS, pp. 94–103). https://doi.org/10.1007/978-3-642-38989-4_10
