Estimating 3D human pose from single images using iterative refinement of the prior

Abstract

This paper proposes a generative method to extract 3D human pose from just a single image. Unlike many existing approaches, we assume that accurate foreground/background segmentation is not possible and therefore do not use binary silhouettes. A stochastic method is used to search the pose space, and the posterior distribution is maximized using Expectation Maximization (EM). We assume some a priori knowledge of the position, scale and orientation of the person present, and we develop an approach that specifically exploits this. As a result, we can learn a more constrained prior without sacrificing its generality to a specific action type. A single prior is learnt using all actions in the HumanEva dataset [9], and we provide quantitative results for images selected across all action categories and subjects, captured from differing viewpoints. © 2010 IEEE.
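The abstract's core idea — stochastically sampling poses from a prior, weighting them by an image likelihood, and refitting the prior via EM-style updates — can be sketched on a toy problem. The sketch below is an assumption-laden illustration, not the paper's method: the 2-D "pose" vector, the Gaussian prior, and the stand-in likelihood `log_likelihood` are all hypothetical simplifications (the paper operates on full 3-D articulated pose with an image-based likelihood).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a 2-D "pose" the likelihood favours.
true_pose = np.array([1.0, -0.5])

def log_likelihood(poses):
    # Stand-in for an image likelihood: penalise distance from true_pose.
    return -0.5 * np.sum((poses - true_pose) ** 2, axis=1)

# Broad initial Gaussian prior over pose (mean, isotropic variance).
mean = np.zeros(2)
var = 4.0

for _ in range(20):
    # E-step: stochastically sample candidate poses from the current prior
    # and weight each by the image likelihood.
    samples = rng.normal(mean, np.sqrt(var), size=(500, 2))
    ll = log_likelihood(samples)
    w = np.exp(ll - ll.max())          # stabilised importance weights
    w /= w.sum()
    # M-step: refit (iteratively refine) the prior to the weighted samples.
    mean = w @ samples
    var = max(np.sum(w[:, None] * (samples - mean) ** 2) / 2, 1e-3)

print(mean, var)
```

Each iteration tightens the prior around high-likelihood poses, which is the "iterative refinement of the prior" named in the title; here the refined mean converges toward `true_pose` while the variance shrinks.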

Citation (APA)

Daubney, B., & Xie, X. (2010). Estimating 3D human pose from single images using iterative refinement of the prior. In Proceedings - International Conference on Pattern Recognition (pp. 3440–3443). https://doi.org/10.1109/ICPR.2010.840
