Weight compensated motion estimation for facial deformation analysis

Abstract

This paper investigates the motion performed by a person's face while speaking. Methods and results for the studied facial motions are presented, and rigid and non-rigid motion are analyzed. In order to extract only the facial deformation, independent of head pose, we use a new and simple approach for separating rigid and non-rigid motion called Weight Compensated Motion Estimation (WCME). This approach weights the data points according to their influence on the desired motion model. A synthetic test as well as real data are used to demonstrate the performance of this approach. We also present results in the field of facial deformation analysis, using basis shapes as the form of description. These results can be used for recognition purposes by adding temporal changes to the overall process or by adding natural deformations beyond those in the given database. © 2009 Springer Berlin Heidelberg.
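The abstract does not spell out the WCME algorithm itself, so the following is only a minimal sketch of the general idea it describes: estimating the rigid head motion with a weighted least-squares fit in which data points that deform strongly, and therefore fit the rigid model poorly, receive lower weights. The weighted Kabsch solver, the Gaussian reweighting rule, and all names (weighted_rigid_fit, separate_rigid_nonrigid, sigma) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """Weighted Kabsch: find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_src = (w[:, None] * src).sum(axis=0)
    mu_dst = (w[:, None] * dst).sum(axis=0)
    A = src - mu_src                        # centered source points
    B = dst - mu_dst                        # centered destination points
    H = (w[:, None] * A).T @ B              # weighted cross-covariance (3x3)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_dst - R @ mu_src
    return R, t

def separate_rigid_nonrigid(src, dst, n_iter=10, sigma=1.0):
    """Iteratively re-weight points so that strongly deforming (non-rigid) points
    contribute less to the rigid head-pose estimate.
    src, dst: (N, 3) arrays of corresponding facial feature points."""
    w = np.ones(len(src))
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        R, t = weighted_rigid_fit(src, dst, w)
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        w = np.exp(-(residuals / sigma) ** 2)   # down-weight poorly fitting points
    deformation = dst - (src @ R.T + t)         # residual non-rigid component
    return R, t, deformation, w
```

In a sketch like this, points on strongly deforming regions (lips, jaw) end up with small weights after a few iterations, so the recovered pose (R, t) is dominated by quasi-rigid regions and describes head motion largely independent of facial expression; the residual deformation field is the non-rigid part that could then be described with basis shapes, as the abstract proposes. The Gaussian reweighting rule and the scale parameter sigma are tuning assumptions; the weighting scheme used in the paper may differ.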

Citation (APA)

Rurainsky, J. (2009). Weight compensated motion estimation for facial deformation analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5627 LNCS, pp. 668–677). https://doi.org/10.1007/978-3-642-02611-9_66
