3D reconstruction of human motion and skeleton from uncalibrated monocular video

Abstract

This paper introduces a new model-based approach for simultaneously reconstructing 3D human motion and full-body skeletal size from a small set of 2D image features tracked from uncalibrated monocular video sequences. The key idea of our approach is to construct a generative human motion model from a large set of preprocessed human motion examples to constrain the solution space of monocular human motion tracking. In addition, we learn a generative skeleton model from prerecorded human skeleton data to reduce the ambiguity of human skeleton reconstruction. We formulate the reconstruction process in a nonlinear optimization framework by continuously deforming the generative models to best match a small set of 2D image features tracked from a monocular video sequence. We evaluate the performance of our system by testing the algorithm on a variety of uncalibrated monocular video sequences.
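
The abstract describes the reconstruction as a nonlinear optimization that deforms learned generative motion and skeleton models until their 2D projection matches the tracked image features. The following Python sketch only illustrates that general recipe under simplifying assumptions (a linear pose basis, a single global skeleton scale, and a scaled-orthographic camera); the basis, the toy data, and all function names are invented for the example and do not reproduce the authors' implementation.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

n_joints = 15
n_basis = 8

# Stand-ins for the learned generative models: a mean pose and a linear
# basis (e.g. obtained by PCA over many example poses), flattened to 3*J.
mean_pose = rng.normal(size=3 * n_joints)
pose_basis = rng.normal(size=(3 * n_joints, n_basis))

def joints_3d(coeffs, skel_scale):
    """Reconstruct 3D joint positions from basis coefficients and a
    global skeleton scale factor (a crude stand-in for skeletal size)."""
    return skel_scale * (mean_pose + pose_basis @ coeffs).reshape(n_joints, 3)

def project(points_3d, cam_scale):
    """Scaled-orthographic projection: drop depth, apply a camera scale."""
    return cam_scale * points_3d[:, :2]

def residuals(params, observed_2d):
    """Stack of 2D reprojection errors plus a small prior on the coefficients,
    which loosely plays the role of the generative-model constraint."""
    coeffs = params[:n_basis]
    skel_scale = params[n_basis]
    cam_scale = params[n_basis + 1]
    predicted_2d = project(joints_3d(coeffs, skel_scale), cam_scale)
    prior = 0.1 * coeffs
    return np.concatenate([(predicted_2d - observed_2d).ravel(), prior])

# Synthetic "tracked" 2D features for one frame, generated from known
# parameters and perturbed with noise.
true_params = np.concatenate([rng.normal(size=n_basis), [1.2, 0.9]])
observed = project(joints_3d(true_params[:n_basis], true_params[n_basis]),
                   true_params[n_basis + 1])
observed = observed + 0.01 * rng.normal(size=observed.shape)

# Solve for pose coefficients, skeleton scale, and camera scale jointly.
x0 = np.concatenate([np.zeros(n_basis), [1.0, 1.0]])
result = least_squares(residuals, x0, args=(observed,))
print("final reprojection cost:", result.cost)

In the paper's setting this single-frame, single-scale toy problem would be replaced by the learned motion and skeleton models and optimized over whole video sequences, but the structure (generative parameters in, 2D reprojection residuals out) is the same.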

Citation (APA)

Chen, Y. L., & Chai, J. (2010). 3D reconstruction of human motion and skeleton from uncalibrated monocular video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5994 LNCS, pp. 71–82). https://doi.org/10.1007/978-3-642-12307-8_7
