Dynamic mapping method based speech driven face animation system

Abstract

In this paper, we design and develop a speech-driven face animation system based on a dynamic mapping method. The face animation is synthesized by unit concatenation and is synchronized with the real speech. Units are selected according to cost functions that measure the voice spectrum distance between target and training units. The visual distance between adjacent training units is also used to obtain better mapping results. Finally, the Viterbi method is used to find the best face animation sequence. Experimental results show that the synthesized lip movement is of good, natural quality. © Springer-Verlag Berlin Heidelberg 2005.
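The selection scheme described in the abstract — a target cost from the audio spectrum distance, a concatenation cost from the visual distance between adjacent units, and a Viterbi search over the combined cost — can be sketched as follows. This is an illustrative outline only, not the authors' implementation; the feature representations, distance metrics (plain Euclidean here), and the weight `w_concat` are assumptions for the example.

```python
import numpy as np

def select_units(target_specs, unit_specs, unit_visuals, w_concat=1.0):
    """Viterbi search for the best unit sequence.

    target_specs : list of acoustic feature vectors for each target frame
    unit_specs   : list of acoustic feature vectors for each training unit
    unit_visuals : list of visual feature vectors for each training unit
    w_concat     : weight of the visual concatenation cost (assumed parameter)
    Returns the index sequence of selected units.
    """
    T, N = len(target_specs), len(unit_specs)
    # Target cost: spectrum distance between each target frame and each unit.
    tc = np.array([[np.linalg.norm(np.asarray(t) - np.asarray(u))
                    for u in unit_specs] for t in target_specs])
    # Concatenation cost: visual distance between every pair of units.
    # cc[j, i] is the cost of following unit i with unit j.
    cc = np.array([[np.linalg.norm(np.asarray(unit_visuals[i]) -
                                   np.asarray(unit_visuals[j]))
                    for i in range(N)] for j in range(N)])
    cost = tc[0].copy()                     # best cost ending in each unit
    back = np.zeros((T, N), dtype=int)      # backpointers for path recovery
    for t in range(1, T):
        total = cost[None, :] + w_concat * cc   # total[j, i]: prev i -> cur j
        back[t] = np.argmin(total, axis=1)
        cost = tc[t] + total[np.arange(N), back[t]]
    # Backtrack from the cheapest final unit.
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

For example, with two target frames whose spectra match units 0 and 1 respectively, the search returns `[0, 1]` when the concatenation weight is small enough that the target cost dominates.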

Citation (APA)

Yin, P., & Tao, J. (2005). Dynamic mapping method based speech driven face animation system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3784 LNCS, pp. 755–763). https://doi.org/10.1007/11573548_97
