Generative estimation of 3D human pose using shape contexts matching

Abstract

We present a method for 3D human pose estimation within a generative framework. To generalize across application scenarios, the observations we use are monocular silhouettes. We distill prior knowledge of human motion by performing conventional PCA on a single motion capture sequence; in doing so, dimensionality reduction and extraction of the motion prior are achieved simultaneously. We adopt the shape context descriptor to construct the matching function, which ensures valid and robust matching between image features and synthesized model features. To explore the solution space efficiently, we design an Annealed Genetic Algorithm (AGA) and a Hierarchical Annealed Genetic Algorithm (HAGA), which search for optimal solutions effectively by exploiting the structure of the state space. Pose estimation results on different motion sequences demonstrate that this generative method achieves viewpoint-invariant 3D pose estimation. © Springer-Verlag Berlin Heidelberg 2007.
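The abstract describes using conventional PCA on a motion capture sequence to obtain both a low-dimensional search space and a motion prior. A minimal sketch of that kind of PCA reduction, using hypothetical toy data in place of real motion capture (the function name and dimensions are illustrative, not from the paper):

```python
import numpy as np

def pca_reduce(poses, n_components):
    """Project pose vectors onto their top principal components.

    poses: (n_frames, n_dims) array of joint-angle vectors from one
    motion capture sequence. Returns (projected, mean, components);
    a reduced point maps back to a full pose via mean + point @ components.
    """
    mean = poses.mean(axis=0)
    centered = poses - mean
    # SVD of the centered data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    projected = centered @ components.T
    return projected, mean, components

# Toy stand-in for a 40-DoF, 200-frame capture sequence.
rng = np.random.default_rng(0)
seq = rng.standard_normal((200, 40))
low, mean, comps = pca_reduce(seq, 5)
print(low.shape)  # (200, 5)
```

A generative search such as the AGA/HAGA described above would then explore this 5-dimensional subspace rather than the full joint-angle space, scoring candidate poses against silhouette features.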

Citation (APA)

Zhao, X., & Liu, Y. (2007). Generative estimation of 3D human pose using shape contexts matching. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4843 LNCS, pp. 419–429). Springer Verlag. https://doi.org/10.1007/978-3-540-76386-4_39
