Nonlinear dynamic shape and appearance models for facial motion tracking

This article is free to access.
Abstract

We present a framework for tracking large facial deformations using a nonlinear dynamic shape and appearance model based on local motion estimation. Local facial deformation estimation from a single fixed template fails to track large facial deformations because of significant appearance variations. A nonlinear generative model that uses a low-dimensional manifold representation provides adaptive facial appearance templates that depend on the facial motion state and the expression type. The proposed model supports Bayesian tracking of facial motions using particle filtering, with simultaneous estimation of the expression type. We estimate the geometric transformation and the global deformation using the generative model; the appearance templates from the global model then drive estimation of the local deformation via thin-plate spline parameters. © Springer-Verlag Berlin Heidelberg 2007.
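The Bayesian tracking step described above follows the standard particle-filtering recipe: propagate state samples through a dynamic model, weight them by an observation likelihood, and resample. A minimal sketch of that recipe is shown below; the scalar random-walk state, the Gaussian likelihood, and all parameter values are illustrative assumptions, not the paper's actual manifold-valued state or appearance model.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, proc_std=0.1, obs_std=0.5):
    """Minimal bootstrap particle filter for a 1-D latent state.

    The paper tracks a facial motion state on a learned manifold with an
    adaptive appearance template; here the state is a scalar random walk
    and the likelihood is Gaussian, purely for illustration.
    """
    particles = rng.normal(0.0, 1.0, n_particles)  # initial samples
    estimates = []
    for y in observations:
        # Propagate: sample from the (assumed random-walk) dynamic model.
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Weight: likelihood of the observation under a Gaussian model.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Estimate: posterior mean approximated by the weighted samples.
        estimates.append(float(np.dot(weights, particles)))
        # Resample (multinomial) to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return estimates
```

Feeding a constant noisy observation stream to this sketch shows the particle cloud converging on the true state after a few frames, which is the same mechanism that lets the tracker lock onto the facial motion state.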

Citation (APA)

Lee, C. S., Elgammal, A., & Metaxas, D. (2007). Nonlinear dynamic shape and appearance models for facial motion tracking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4872 LNCS, pp. 205–220). Springer Verlag. https://doi.org/10.1007/978-3-540-77129-6_21
