Motion-based view-invariant articulated motion detection and pose estimation using sparse point features

Abstract

We present an approach for articulated motion detection and pose estimation that uses only motion information. To estimate the pose and viewpoint, we introduce a novel motion descriptor that captures the spatial relationships of the motion vectors representing the various parts of the person, computed from the trajectories of a number of sparse points. A nearest-neighbor search against labeled training descriptors of human walking poses in multiple views yields an observation probability, which is fed to a hidden Markov model defined over multiple poses and viewpoints to obtain temporally consistent pose estimates. Experimental results on various sequences of walking subjects seen from multiple viewpoints demonstrate the effectiveness of the approach. In particular, the purely motion-based approach is able to track people even when other visual cues are unavailable, such as in low-light conditions.
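The abstract describes a three-stage pipeline: sparse point trajectories (e.g., from a KLT-style tracker) are summarized by a motion descriptor encoding the spatial relationships of the motion vectors; the descriptor is matched against labeled training descriptors to produce an observation probability over (pose, viewpoint) states; and a hidden Markov model enforces temporal consistency. The Python/NumPy sketch below illustrates one plausible reading of these stages. The pairwise angle/magnitude binning, the Gaussian-weighted nearest-neighbor vote, the bandwidth parameter sigma, and the Viterbi decoding are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# A minimal sketch of the pipeline described in the abstract. The descriptor
# binning, the soft nearest-neighbor vote, and the Viterbi decoding are
# illustrative assumptions, not the authors' exact formulation.

def motion_descriptor(tracks, n_angle_bins=8, n_mag_bins=4):
    """tracks: (N, 2, 2) array -- N sparse points, each with (start_xy, end_xy).

    Summarizes the pairwise spatial relationships of the points' motion
    vectors as a normalized 2-D histogram over relative angle and magnitude.
    """
    vecs = tracks[:, 1] - tracks[:, 0]                # per-point motion vectors
    hist = np.zeros((n_angle_bins, n_mag_bins))
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            d = vecs[i] - vecs[j]                     # relative motion of the pair
            ang = np.arctan2(d[1], d[0]) % (2 * np.pi)
            mag = min(np.linalg.norm(d), 1.0 - 1e-9)  # clip to the histogram range
            hist[int(ang / (2 * np.pi) * n_angle_bins),
                 int(mag * n_mag_bins)] += 1
    return hist.ravel() / max(hist.sum(), 1.0)        # normalized descriptor

def observation_probs(desc, train_desc, train_state, n_states, sigma=0.1):
    """Soft nearest-neighbor vote: each labeled training descriptor supports
    its (pose, viewpoint) state with a weight decaying in descriptor distance.
    sigma is a hypothetical bandwidth parameter."""
    dists = np.linalg.norm(train_desc - desc, axis=1)
    probs = np.full(n_states, 1e-6)                   # floor avoids zero likelihoods
    for d, s in zip(dists, train_state):
        probs[s] += np.exp(-d ** 2 / (2 * sigma ** 2))
    return probs / probs.sum()

def viterbi(obs_seq, trans, prior):
    """Decode the most likely (pose, viewpoint) state sequence.
    obs_seq: (T, S) per-frame observation probs; trans: (S, S); prior: (S,)."""
    T, S = obs_seq.shape
    log_trans = np.log(np.maximum(trans, 1e-12))
    logp = np.log(prior) + np.log(obs_seq[0])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + log_trans            # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(obs_seq[t])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                                 # temporally consistent estimates
```

In practice the transition matrix would encode the cyclic structure of the walking gait and smooth changes of viewpoint between adjacent frames; the abstract does not specify its form, so any concrete choice here is a guess.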

Citation (APA)

Pundlik, S. J., & Birchfield, S. T. (2009). Motion-based view-invariant articulated motion detection and pose estimation using sparse point features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5875 LNCS, pp. 425–434). https://doi.org/10.1007/978-3-642-10331-5_40
