A hierarchical face behavior model for a 3D face tracking without markers


Abstract

In the context of post-production for the movie industry, localization of a 3D face in an image sequence is a topic of growing interest. The problem is not mere face detection (a solved problem), but accurate 3D face localization coupled with accurate facial expression recognition, allowing real "living" faces (with speech and emotion) to be tracked. To obtain a faithful tracking, the 3D face model has to be very accurate, and the deformation of the face (the behavior model) has to be realistic. In this paper, we present a new easy-to-use face behavior model and a tracking system based upon image analysis/synthesis collaboration. The tracking algorithm computes, for each image of a sequence, the 6 position and rotation parameters of the 3D face model and the 14 behavior parameters (the amount of each behavior in the behavior space). The result is a moving 3D face, with speech and emotions, that is indistinguishable from the image sequence from which it was extracted. © Springer-Verlag Berlin Heidelberg 2005.
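The analysis/synthesis idea in the abstract can be sketched as follows: per frame, find the pose and behavior parameters whose synthesized image best matches the observed one. This is a minimal toy sketch, not the paper's method: the `render` function here is a hypothetical linear stand-in for the real (nonlinear) 3D face synthesis, and the least-squares solve stands in for the paper's optimization over 6 pose + 14 behavior parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_POSE, N_BEHAVIOR = 6, 14      # parameter counts from the abstract
N_PARAMS = N_POSE + N_BEHAVIOR
N_PIXELS = 200                  # size of the toy "image"

# Hypothetical linear renderer: image = basis @ params. A stand-in for
# the actual 3D face model synthesis, which is nonlinear.
basis = rng.standard_normal((N_PIXELS, N_PARAMS))

def render(params):
    """Synthesize a toy image from pose + behavior parameters."""
    return basis @ params

def track_frame(image):
    """Analysis step: recover the parameters whose synthesis best
    matches the observed image (exact least squares for this toy)."""
    params, *_ = np.linalg.lstsq(basis, image, rcond=None)
    return params

# One frame: ground-truth parameters and the image they synthesize.
true_params = rng.standard_normal(N_PARAMS)
observed = render(true_params)

estimated = track_frame(observed)
print(np.allclose(estimated, true_params))  # → True
```

In the real system the match is computed over a full image sequence with a nonlinear renderer, so the closed-form solve above would be replaced by an iterative optimization of the same 20 parameters per frame.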

CITATION STYLE

APA

Roussel, R., & Gagalowicz, A. (2005). A hierarchical face behavior model for a 3D face tracking without markers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3691 LNCS, pp. 854–861). https://doi.org/10.1007/11556121_105
