Model-based synthesis of visual speech movements from 3D video

Abstract

We describe a method for the synthesis of visual speech movements using a hybrid unit-selection/model-based approach. Speech lip movements are captured using a 3D stereo face-capture system and segmented into phonetic units. A dynamic parameterisation of these data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation, a model of lip motion is built and used to animate visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the stored phonetic units most similar to the target utterance during synthesis. By combining the properties of model-based synthesis (e.g., HMMs, neural nets) with unit selection, we improve the quality of our visual speech synthesis. Copyright © 2009 James D. Edge et al.
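The unit-selection step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the `PhoneticUnit` container, the resampled-Euclidean `unit_distance` (a crude stand-in for whatever alignment and similarity measure the paper actually uses), and `select_units` are all hypothetical names and simplifications, assuming MFCC-like audio features and a precomputed lip parameterisation.

```python
import numpy as np

class PhoneticUnit:
    """Hypothetical stored unit: audio features (T x D_audio) and the
    corresponding lip-shape parameters (T x D_lip) from 3D video."""
    def __init__(self, phone, audio_feats, lip_params):
        self.phone = phone                # phoneme label, e.g. "aa"
        self.audio_feats = audio_feats    # e.g. MFCC frames
        self.lip_params = lip_params      # parameterised lip trajectory

def unit_distance(a, b):
    """Mean Euclidean distance between two audio-feature sequences,
    resampled to a common length (standing in for proper alignment)."""
    n = min(len(a), len(b))
    ia = np.linspace(0, len(a) - 1, n).astype(int)
    ib = np.linspace(0, len(b) - 1, n).astype(int)
    return float(np.mean(np.linalg.norm(a[ia] - b[ib], axis=1)))

def select_units(target_phones, target_audio, database, k=3):
    """For each target phone, keep only the k stored units whose audio
    features are closest to the target segment, restricting the
    audio-to-lip mapping to similar examples before synthesis."""
    selected = []
    for phone, seg in zip(target_phones, target_audio):
        candidates = [u for u in database if u.phone == phone]
        candidates.sort(key=lambda u: unit_distance(u.audio_feats, seg))
        selected.append(candidates[:k])
    return selected
```

The returned shortlists of `lip_params` trajectories would then feed the model-based stage, which in the paper builds on a dynamic parameterisation relating lip shapes and velocities rather than the plain frame-wise distance used here.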

Citation (APA)

Edge, J. D., Hilton, A., & Jackson, P. (2009). Model-based synthesis of visual speech movements from 3D video. EURASIP Journal on Audio, Speech, and Music Processing, 2009. https://doi.org/10.1155/2009/597267
