In this paper, we present an approach based on Orthogonal Locality Preserving Projection (OLPP) for capturing three-dimensional human motion from monocular images. From motion capture data residing in the high-dimensional space of human activities, we extract a motion base space in which human pose can be described concisely and in a more controllable way; this dimensionality reduction is performed in the OLPP framework. The structure of this space for a specific activity, such as walking, is then explored by data clustering. Pose recovery is carried out in a generative framework. For a single image, a Gaussian mixture model generates candidate 3D poses. The shape context serves as a common descriptor for the image silhouette features and the synthesized features of the human model, and a shortlist of 3D poses is obtained by measuring the shape context matching cost between the two. In the tracking situation, an autoregressive (AR) model trained on an example sequence produces reasonably accurate pose predictions. Experiments demonstrate that the proposed approach works well. © Springer-Verlag Berlin Heidelberg 2007.
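As a concrete illustration of the dimensionality reduction step, the sketch below builds a locality-preserving projection over a k-nearest-neighbor graph and orthogonalizes the resulting basis. This is a minimal sketch, not the authors' implementation: the neighborhood size, the heat-kernel width t, and the single QR step (full OLPP derives its orthogonal basis iteratively) are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def olpp_embed(X, n_components=3, n_neighbors=10, t=1.0):
    """Embed pose vectors X (n_samples x n_features) into a low-dimensional
    base space via a locality-preserving projection whose basis is
    orthogonalized (here by one QR step, a simplification of full OLPP)."""
    n = X.shape[0]
    # Heat-kernel affinities over each sample's k nearest neighbors
    D2 = cdist(X, X, "sqeuclidean")
    nbrs = np.argsort(D2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), n_neighbors)
    W[rows, nbrs.ravel()] = np.exp(-D2[rows, nbrs.ravel()] / t)
    W = np.maximum(W, W.T)                 # symmetrize the adjacency graph
    D = np.diag(W.sum(axis=1))
    L = D - W                              # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # regularized for stability
    # Generalized eigenvectors with the smallest eigenvalues preserve locality
    _, V = eigh(A, B)
    Q, _ = np.linalg.qr(V[:, :n_components])     # orthogonal basis (the "O" in OLPP)
    return X @ Q, Q
```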
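The candidate-generation and matching steps could be sketched as follows, assuming poses have already been embedded in the base space and shape-context histograms have already been computed for both the image silhouette and the rendered model. The function names, mixture count, and chi-square point cost with an optimal assignment are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.optimize import linear_sum_assignment

def sample_pose_candidates(Z_train, n_candidates=50, n_mixtures=8, seed=0):
    """Fit a GMM to training poses in the base space and sample candidate
    poses from it (single-image case, where no temporal prior is available)."""
    gmm = GaussianMixture(n_components=n_mixtures, random_state=seed)
    gmm.fit(Z_train)
    candidates, _ = gmm.sample(n_candidates)
    return candidates

def shape_context_cost(H_img, H_model, eps=1e-9):
    """Matching cost between two sets of shape-context histograms (one
    log-polar histogram per sampled contour point), using the standard
    chi-square point cost and an optimal one-to-one assignment."""
    diff = H_img[:, None, :] - H_model[None, :, :]
    denom = H_img[:, None, :] + H_model[None, :, :] + eps
    C = 0.5 * (diff ** 2 / denom).sum(axis=-1)   # pairwise chi-square costs
    r, c = linear_sum_assignment(C)              # best point correspondence
    return C[r, c].mean()
```

Ranking the sampled candidates by this cost against the observed silhouette and retaining the lowest-cost poses would yield the shortlist described above.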
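For the tracking case, a vector autoregressive model fitted by least squares on an example sequence in the base space is one plausible reading of the AR prior; the order p = 2 and the plain least-squares fit below are assumptions for illustration.

```python
import numpy as np

def fit_ar(Z, p=2):
    """Least-squares fit of a vector AR(p) model
    z_t = A_1 z_{t-1} + ... + A_p z_{t-p} + b
    on an example pose sequence Z (T x d) in the base space."""
    T, _ = Z.shape
    X = np.hstack([Z[p - k - 1:T - k - 1] for k in range(p)]
                  + [np.ones((T - p, 1))])
    Y = Z[p:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef                                  # shape (p*d + 1, d)

def predict_next(coef, recent, p=2):
    """Predict the next pose from the p most recent poses (newest first)."""
    x = np.concatenate([np.ravel(z) for z in recent[:p]] + [np.ones(1)])
    return x @ coef
```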
CITATION STYLE
Zhao, X., & Liu, Y. (2007). Capturing 3D human motion from monocular images using orthogonal locality preserving projection. In Lecture Notes in Computer Science (Vol. 4561, pp. 304–313). Springer-Verlag. https://doi.org/10.1007/978-3-540-73321-8_36