Expressive face animation synthesis based on dynamic mapping method

Abstract

In this paper, we present a framework for a speech-driven face animation system with expressions. It systematically addresses audio-visual data acquisition, expressive trajectory analysis, and audio-visual mapping. Based on this framework, we learn the correlation between neutral and expressive facial deformation with a Gaussian Mixture Model (GMM). A hierarchical structure is proposed to map acoustic parameters to lip FAPs (MPEG-4 Facial Animation Parameters). The synthesized neutral FAP streams are then extended with expressive variations according to the prosody of the input speech. Quantitative evaluation of the experimental results is encouraging, and the synthesized face shows realistic quality. © Springer-Verlag Berlin Heidelberg 2007.
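The GMM-based mapping step described in the abstract can be sketched as joint-density regression: fit a GMM on stacked [neutral | expressive] feature vectors, then predict expressive features from neutral ones via the conditional mean. This is a minimal illustrative sketch of that general technique, not the authors' implementation — the feature dimensions, component count, and function names below are assumptions (the paper itself operates on FAP streams and acoustic parameters).

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture


def fit_joint_gmm(neutral, expressive, n_components=2, seed=0):
    """Fit a full-covariance GMM on stacked [neutral | expressive] vectors."""
    joint = np.hstack([neutral, expressive])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(joint)
    return gmm


def map_neutral_to_expressive(gmm, x, dim_x):
    """Conditional-mean regression E[y | x] under the joint GMM.

    For each component k with mean [mu_x; mu_y] and covariance blocks
    Sxx, Sxy, the conditional mean is mu_y + Syx Sxx^{-1} (x - mu_x),
    weighted by the posterior responsibility of component k given x.
    """
    resp, cond_means = [], []
    for k in range(gmm.n_components):
        mu_x = gmm.means_[k, :dim_x]
        mu_y = gmm.means_[k, dim_x:]
        sxx = gmm.covariances_[k][:dim_x, :dim_x]
        syx = gmm.covariances_[k][dim_x:, :dim_x]
        # Responsibility of component k for the observed neutral vector x.
        resp.append(gmm.weights_[k] *
                    multivariate_normal.pdf(x, mu_x, sxx, allow_singular=True))
        cond_means.append(mu_y + syx @ np.linalg.solve(sxx, x - mu_x))
    resp = np.asarray(resp)
    resp /= resp.sum()
    return (resp[:, None] * np.asarray(cond_means)).sum(axis=0)


if __name__ == "__main__":
    # Synthetic stand-in data: expressive features as a noisy linear
    # function of neutral features (purely for demonstration).
    rng = np.random.default_rng(0)
    neutral = rng.normal(size=(500, 2))
    expressive = 2.0 * neutral + 0.05 * rng.normal(size=(500, 2))

    gmm = fit_joint_gmm(neutral, expressive)
    pred = map_neutral_to_expressive(gmm, np.array([0.5, -0.5]), dim_x=2)
    print(pred)  # close to [1.0, -1.0] for this synthetic relation
```

The conditional-mean form keeps the mapping smooth across frames, which matters when the outputs drive continuous FAP trajectories rather than isolated poses.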

Citation (APA)

Yin, P., Zhao, L., Huang, L., & Tao, J. (2007). Expressive face animation synthesis based on dynamic mapping method. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4738 LNCS, pp. 1–11). Springer Verlag. https://doi.org/10.1007/978-3-540-74889-2_1
