Capturing 3D human motion from monocular images using orthogonal locality preserving projection

Abstract

In this paper, we present an Orthogonal Locality Preserving Projection (OLPP) based approach for capturing three-dimensional human motion from monocular images. From motion capture data residing in the high-dimensional space of human activities, we extract a motion base space in which human pose can be described concisely and in a more controllable way; this dimensionality reduction is carried out within the OLPP framework. The structure of this space corresponding to a specific activity, such as walking, is then explored with data clustering. Pose recovery is performed in a generative framework. For a single image, a Gaussian mixture model generates candidate 3D poses. Shape context serves as the common descriptor of both the image silhouette feature and the synthesized feature of the human model, and we obtain a shortlist of 3D poses by measuring the shape-context matching cost between the image features and the synthesized features. In the tracking setting, an autoregressive (AR) model trained on example sequences produces pose predictions that closely approximate the true poses. Experiments demonstrate that the proposed approach works well. © Springer-Verlag Berlin Heidelberg 2007.
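The OLPP step can be made concrete with a short sketch. The Python code below is a minimal illustration, not the authors' implementation: it builds a heat-kernel affinity graph and solves the deflated eigenproblems of the standard iterative OLPP formulation (Cai and He), then fits a Gaussian mixture in the resulting base space to draw candidate poses, mirroring the single-image case described in the abstract. The function name olpp, the neighborhood size, the GMM component count, and the random placeholder pose data are all illustrative assumptions; the shape-context matching and AR tracking stages are omitted.

import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.mixture import GaussianMixture

def olpp(X, n_components=3, n_neighbors=8):
    """Iterative OLPP (after Cai & He): returns a (d, n_components)
    projection matrix whose columns are mutually orthogonal.
    X holds one pose sample per row."""
    n, d = X.shape
    # Heat-kernel weighted kNN affinity graph over the pose samples.
    W = kneighbors_graph(X, n_neighbors, mode='distance').toarray()
    dist = W[W > 0]
    t = np.mean(dist ** 2)                 # bandwidth from mean squared distance
    W[W > 0] = np.exp(-dist ** 2 / t)
    W = np.maximum(W, W.T)                 # symmetrise the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                              # graph Laplacian
    XDX = X.T @ D @ X + 1e-6 * np.eye(d)   # regularised for invertibility
    XLX = X.T @ L @ X
    B_inv = np.linalg.inv(XDX)
    vecs = []
    for _ in range(n_components):
        M = B_inv @ XLX
        if vecs:                           # deflate: stay orthogonal to earlier vectors
            A = np.column_stack(vecs)
            M = (np.eye(d) - B_inv @ A @ np.linalg.inv(A.T @ B_inv @ A) @ A.T) @ M
        w, V = np.linalg.eig(M)
        a = np.real(V[:, np.argmin(np.real(w))])   # smallest-eigenvalue direction
        vecs.append(a / np.linalg.norm(a))
    return np.column_stack(vecs)

# Usage: embed poses, fit a GMM in the base space, draw pose candidates.
rng = np.random.default_rng(0)
X_poses = rng.standard_normal((200, 60))   # placeholder for mocap pose vectors
P = olpp(X_poses)
Y = X_poses @ P                            # coordinates in the motion base space
gmm = GaussianMixture(n_components=5, random_state=0).fit(Y)
candidates, _ = gmm.sample(50)             # candidate poses for subsequent matching

A projection learned this way maps each high-dimensional pose vector to a handful of coordinates in which clustering, candidate generation, and matching against image features become tractable.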

Citation (APA)

Zhao, X., & Liu, Y. (2007). Capturing 3D human motion from monocular images using orthogonal locality preserving projection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4561 LNCS, pp. 304–313). Springer Verlag. https://doi.org/10.1007/978-3-540-73321-8_36
