Integrating multiple uncalibrated views for human 3D pose estimation

Abstract

We address the problem of estimating human 3D pose from video data. Using multiple views has the potential to overcome self-occlusion of the human subject in any particular view, as well as to estimate the pose more precisely. We propose a scheme that lets multiple views be combined naturally for determining human pose, so that hypotheses of the body parts in each view can be pruned away efficiently through a consistency check across all the views. The scheme relates the different views through a linear combination-like expression over all the image data, which captures the rigidity of the human subject in 3D. It requires neither thorough calibration of the cameras themselves nor knowledge of the camera inter-geometry. We also introduce a formulation that expresses the multi-view scheme, together with other constraints, in the pose estimation problem, and a belief propagation approach is used to reach a final human pose under that formulation. Experimental results on in-house captured image data as well as publicly available benchmark datasets illustrate the performance of the system. © 2010 Springer-Verlag.
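The abstract's "linear combination-like expression" relating uncalibrated views can be illustrated with a small sketch. This is not the paper's implementation; it is a hedged toy example, under the standard affine-camera assumption, of the linear-combination-of-views idea: the image coordinates of a rigid point set in one view are a linear combination of its coordinates in two basis views, so a candidate body-part hypothesis whose reprojection residual is large can be pruned without calibrating any camera. All variable names and the random data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight hypothetical 3D body-part points in homogeneous coordinates (4 x 8).
X = np.vstack([rng.normal(size=(3, 8)), np.ones((1, 8))])

# Three random uncalibrated affine cameras (2 x 4 each).
P1, P2, P3 = (rng.normal(size=(2, 4)) for _ in range(3))

# Image coordinates of the points in each view (2 x 8 per view).
x1, x2, x3 = P1 @ X, P2 @ X, P3 @ X

# Design matrix: coordinates in the two basis views plus a constant row.
A = np.vstack([x1, x2, np.ones((1, 8))])  # 5 x 8

# Least-squares combination coefficients mapping the basis views onto view 3.
C, *_ = np.linalg.lstsq(A.T, x3.T, rcond=None)

# For a consistent (rigid) hypothesis the residual is essentially zero;
# an inconsistent hypothesis would leave a large residual and be pruned.
residual = np.linalg.norm(A.T @ C - x3.T)
print(residual)
```

Because every row of the design matrix and of the target view lies in the four-dimensional row space of the rigid point set, the third view is reproduced exactly (up to numerical error), which is what makes the residual a usable consistency check across views.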

Citation (APA)

Wang, Z., & Chung, R. (2010). Integrating multiple uncalibrated views for human 3D pose estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6455 LNCS, pp. 280–290). https://doi.org/10.1007/978-3-642-17277-9_29
