Learn to move: Activity specific motion models for tracking by detection


Abstract

In this paper, we focus on human activity detection, which solves detection, tracking, and recognition jointly. Existing approaches typically use off-the-shelf methods for detection and tracking, ignoring naturally available prior knowledge. Hence, in this work we present a novel strategy for learning activity-specific motion models from feature-to-temporal-displacement relationships. We propose a method based on an augmented version of canonical correlation analysis (AuCCA) for linking high-dimensional features to activity-specific spatial displacements over time. We compare this continuous and discriminative approach to other well-established methods in the field of activity recognition and detection. In particular, we first improve activity detections by incorporating temporal forward and backward mappings for regularization of detections. Second, we extend a particle filter framework with activity-specific motion proposals, drastically reducing the search space. To demonstrate these improvements, we run detailed evaluations on several benchmark data sets, clearly showing the advantages of our activity-specific motion models. © 2012 Springer-Verlag.
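The core idea the abstract describes, mapping high-dimensional appearance features to spatial displacements over time, and then using those predicted displacements as motion proposals in a particle filter, can be sketched with a plain regularized linear mapping. This is only an illustrative stand-in, not the paper's AuCCA formulation; the feature dimensions, ridge parameter, and `propose_displacement` helper are all assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch (NOT the paper's AuCCA): learn a linear mapping from
# appearance features to 2-D spatial displacements via ridge regression,
# then use it as an activity-specific motion proposal.
rng = np.random.default_rng(0)

# Synthetic training data: n frames, d-dimensional features, (dx, dy) targets.
n, d = 200, 16
X = rng.normal(size=(n, d))                       # per-frame features
W_true = rng.normal(size=(d, 2))                  # hypothetical true mapping
Y = X @ W_true + 0.01 * rng.normal(size=(n, 2))   # displacements over time

# Ridge regression: W = (X^T X + lam * I)^(-1) X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def propose_displacement(feature, noise=0.05):
    """Motion proposal: predicted displacement plus small exploration noise,
    as a particle filter might use instead of a generic random-walk model."""
    return feature @ W + noise * rng.normal(size=2)

# Mean absolute prediction error of the learned mapping.
err = np.abs(X @ W - Y).mean()
```

Because proposals are concentrated around the predicted displacement rather than spread by an uninformed dynamic model, far fewer particles are needed to cover the plausible motion of the tracked person, which is the search-space reduction the abstract refers to.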

Citation (APA)

Mauthner, T., Roth, P. M., & Bischof, H. (2012). Learn to move: Activity specific motion models for tracking by detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7585 LNCS, pp. 183–192). Springer Verlag. https://doi.org/10.1007/978-3-642-33885-4_19
