Speech driven MPEG-4 based face animation via neural network

Abstract

In this paper, clustering and machine learning methods are combined to learn the correspondence between speech acoustics and MPEG-4 based face animation parameters. Audio and image features are extracted from a large recorded audio-visual database. Face animation parameter (FAP) sequences are computed and then clustered into FAP patterns. An artificial neural network (ANN) is trained to map the linear predictive coefficients (LPC) and several prosodic features of an individual's natural speech to these FAP patterns. Experimental results show that the proposed learning algorithm is effective and greatly improves the realism of real-time face animation during speech.
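
The abstract outlines a three-stage pipeline: cluster FAP frames from the audio-visual database into a small set of patterns, train an ANN to classify each speech frame's acoustic features into one of those patterns, and drive the face model from the selected pattern at synthesis time. The sketch below is a minimal illustration of that idea, assuming k-means for the FAP clustering and a small PyTorch feedforward network for the mapping; the codebook size, feature dimensions, library choices, and placeholder data are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the abstract's pipeline. Shapes, layer sizes, and the
# use of scikit-learn k-means / a PyTorch MLP are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
import torch
import torch.nn as nn

# --- 1. Cluster FAP frames from the audio-visual database into patterns ---
n_patterns = 64                              # assumed codebook size
fap_frames = np.random.randn(10000, 68)      # placeholder: 68 MPEG-4 FAPs/frame
kmeans = KMeans(n_clusters=n_patterns, n_init=10).fit(fap_frames)
pattern_labels = kmeans.labels_              # each frame -> nearest FAP pattern

# --- 2. Train an ANN to map LPC + prosodic features to a FAP pattern ---
n_lpc, n_prosodic = 12, 2                    # assumed: 12 LPC coeffs, energy + pitch
audio_features = np.random.randn(10000, n_lpc + n_prosodic)  # placeholder

model = nn.Sequential(
    nn.Linear(n_lpc + n_prosodic, 64),
    nn.Tanh(),
    nn.Linear(64, n_patterns),               # logits over FAP patterns
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.tensor(audio_features, dtype=torch.float32)
y = torch.tensor(pattern_labels, dtype=torch.long)
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# --- 3. At synthesis time, each speech frame's features select a pattern ---
with torch.no_grad():
    pred = model(x[:1]).argmax(dim=1).item()
fap_output = kmeans.cluster_centers_[pred]   # FAP vector driving the face model
```

Because the network only has to pick one of a few dozen precomputed FAP patterns rather than regress every FAP value per frame, inference reduces to one small forward pass plus a codebook lookup, which is consistent with the real-time claim in the abstract.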

Citation (APA)

Chen, Y., Gao, W., Wang, Z., & Zuo, L. (2001). Speech driven MPEG-4 based face animation via neural network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2195, pp. 1108–1113). Springer Verlag. https://doi.org/10.1007/3-540-45453-5_152
