Analysis of facial motion capture data for visual speech synthesis

Abstract

The paper deals with the interpretation of facial motion capture data for visual speech synthesis. For the purpose of the analysis, visual speech composed of 170 artificially created words was recorded from one speaker using a state-of-the-art facial motion capture method. A new nonlinear method is proposed to approximate the motion capture data using an intentionally defined set of articulatory parameters. The comparison shows that the proposed method outperforms the baseline method with the same number of parameters. The precision of the approximation is evaluated on parameter values extracted from an unseen dataset and further verified with a 3D animated model of a human head that reproduces the visual speech as output.
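
The abstract does not name the baseline, but approximating high-dimensional marker trajectories with a fixed number of parameters is commonly done with a linear PCA-style projection, which is a natural point of comparison for a nonlinear method. The sketch below is a minimal, hypothetical illustration of that setup in Python: synthetic stand-in data, a linear basis with a fixed number of parameters, and a held-out reconstruction error that mirrors the paper's evaluation on an unseen dataset. All names, shapes, and values are assumptions, not the authors' method.

```python
# Hypothetical sketch: approximating facial mocap marker data with a small
# number of parameters via a linear (PCA-style) baseline, then measuring
# reconstruction error on held-out frames. The paper's nonlinear method and
# its articulatory parameter set are not specified here; everything below
# (marker count, frame counts, data) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Assumed layout: each frame is a flattened vector of 3D marker
# coordinates (e.g., 30 markers -> 90 values per frame).
n_markers = 30
train = rng.normal(size=(2000, 3 * n_markers))  # training frames (stand-in)
test = rng.normal(size=(500, 3 * n_markers))    # unseen frames (stand-in)
n_params = 6                                    # size of the parameter set

# Fit the linear baseline: data mean plus the top principal directions.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:n_params]                           # (n_params, 3 * n_markers)

def encode(frames):
    """Project frames onto the parameter space (n_params values per frame)."""
    return (frames - mean) @ basis.T

def decode(params):
    """Reconstruct marker coordinates from parameter values."""
    return params @ basis + mean

# Evaluate approximation precision on the unseen set, analogous to the
# paper's protocol of extracting parameter values from held-out data.
recon = decode(encode(test))
rmse = np.sqrt(np.mean((recon - test) ** 2))
print(f"held-out RMSE with {n_params} parameters: {rmse:.4f}")
```

A nonlinear method of the kind the paper proposes would replace `encode` and `decode` with nonlinear mappings while keeping the same number of parameters, so the held-out RMSE gives a like-for-like comparison.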

Citation (APA)

Železný, M., Krňoul, Z., & Jedlička, P. (2015). Analysis of facial motion capture data for visual speech synthesis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9319, pp. 81–88). Springer Verlag. https://doi.org/10.1007/978-3-319-23132-7_10
