Analysis and encoding of lip movements

Abstract

Model-based image coding has recently attracted much attention as a basis for the next generation of communication services. This article proposes a model-based image coding scheme for the mouth, aimed at capturing the visual information related to speech so that the decoded video sequence remains suitable for lip-reading. Such a coding system consists essentially of an analysis process on the transmitting side and a synthesis process on the receiving side. On the transmitting side, an encoding technique based on a deformable template of the lips is introduced, which allows the shape of the mouth to be represented very compactly. On the receiving side, a decoding technique for lip-movement synthesis is proposed that animates the lips, starting from a reference image, by applying warping techniques driven by the proposed model.
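The analysis/synthesis pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the two-parabola template and the parameter names (`w`, `h1`, `h2`) are assumptions chosen to show how a deformable lip template yields a compact per-frame code, and how a decoder could warp points from a reference shape to a received shape.

```python
import numpy as np

# Hypothetical deformable lip template (an assumption, not the paper's exact
# model): the mouth outline is two parabolic arches meeting at the lip
# corners, parameterized by three numbers per frame:
#   w  : half-width of the mouth
#   h1 : height of the upper-lip arch above the corner axis
#   h2 : depth of the lower-lip arch below the corner axis

def template_contour(w, h1, h2, n=20):
    """Sample n points on each arch of the lip template."""
    x = np.linspace(-w, w, n)
    upper = h1 * (1.0 - (x / w) ** 2)    # upper-lip parabola, zero at corners
    lower = -h2 * (1.0 - (x / w) ** 2)   # lower-lip parabola, zero at corners
    return np.stack([x, upper], axis=1), np.stack([x, lower], axis=1)

def warp_point(p, src, dst):
    """Map a point from the source template (w, h1, h2) onto the target
    template by axis-wise scaling -- a crude stand-in for the image warping
    the decoder applies to the reference frame."""
    sw, sh1, sh2 = src
    dw, dh1, dh2 = dst
    x, y = p
    y_scale = (dh1 / sh1) if y >= 0 else (dh2 / sh2)
    return (x * dw / sw, y * y_scale)

# Encoder side: each frame of mouth motion compresses to 3 parameters.
closed = (30.0, 4.0, 4.0)    # nearly closed mouth
spread = (24.0, 10.0, 6.0)   # open, rounded mouth

# Decoder side: move a point tracked on the reference (closed) frame
# to its position in the received (spread) frame.
p_ref = (30.0, 4.0)          # a point on the upper lip of the reference
p_new = warp_point(p_ref, closed, spread)
```

Under these assumptions, an entire mouth shape is transmitted as three numbers per frame, which is the kind of compactness the deformable-template encoding aims at; a real decoder would warp pixel regions of the reference image rather than isolated contour points.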

Citation (APA)

Coianiz, T., & Torresani, L. (1997). Analysis and encoding of lip movements. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1206, pp. 51–60). Springer Verlag. https://doi.org/10.1007/bfb0015979
