Spatio-temporal graphical-model-based multiple facial feature tracking

Abstract

It is challenging to track multiple facial features simultaneously when rich expressions are presented on a face. We propose a two-step solution. In the first step, several independent condensation-style particle filters track each facial feature in the temporal domain. Particle filters are very effective for visual tracking problems; however, multiple independent trackers ignore the spatial constraints and the natural relationships among facial features. In the second step, we use Bayesian inference (belief propagation) to infer each facial feature's contour in the spatial domain, where the relationships among the contours of facial features are learned beforehand from a large facial expression database. The experimental results show that our algorithm can robustly track multiple facial features simultaneously, even when there are large interframe motions with expression changes. © 2005 Hindawi Publishing Corporation.
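The first step above relies on condensation-style (factored-sampling) particle filtering. The paper itself does not give code, but the select-predict-measure cycle it builds on can be sketched as follows, here in plain Python for a one-dimensional state; the state dimension, noise levels, and `observe` likelihood are illustrative assumptions, not the paper's actual contour parameterization.

```python
import math
import random

def condensation_step(particles, weights, observe, motion_std=1.0):
    """One cycle of a condensation-style particle filter:
    resample by weight, diffuse with dynamics noise, re-weight
    by the observation likelihood. All parameters are illustrative."""
    n = len(particles)
    # Select: resample particles in proportion to their weights.
    resampled = random.choices(particles, weights=weights, k=n)
    # Predict: diffuse each particle with Gaussian dynamics noise.
    predicted = [p + random.gauss(0.0, motion_std) for p in resampled]
    # Measure: weight each particle by how well it explains the observation.
    new_weights = [observe(p) for p in predicted]
    total = sum(new_weights) or 1.0
    return predicted, [w / total for w in new_weights]

def estimate(particles, weights):
    """Weighted-mean state estimate from the particle set."""
    return sum(p * w for p, w in zip(particles, weights))

if __name__ == "__main__":
    random.seed(0)
    true_pos = 5.0  # hypothetical 1-D feature position in the frame

    def observe(p):
        # Gaussian observation likelihood centered on the true position.
        return math.exp(-0.5 * ((p - true_pos) / 0.5) ** 2)

    particles = [random.uniform(0.0, 10.0) for _ in range(500)]
    weights = [1.0 / 500] * 500
    for _ in range(20):
        particles, weights = condensation_step(particles, weights, observe, 0.2)
    print(estimate(particles, weights))  # converges near true_pos
```

In the paper's setting, one such filter runs per facial feature; the second step then corrects the independent estimates by passing messages between features with belief propagation, using spatial relationships learned offline.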


APA

Su, C., & Huang, L. (2005). Spatio-temporal graphical-model-based multiple facial feature tracking. In Eurasip Journal on Applied Signal Processing (Vol. 2005, pp. 2091–2100). https://doi.org/10.1155/ASP.2005.2091
