Enhancing visual speech recognition with lip protrusion estimation

Abstract

Visual speech recognition is emerging as an important research area in human–computer interaction. Most work in this area has focused on lip-reading from the frontal view of the speaker or from views captured by multiple cameras. However, when views from different angles are unavailable, profile information about the speech articulators is lost. This chapter estimates lip protrusion from images of only the frontal pose of the speaker. With our proposed methodology, an estimated computation of lip profile information from frontal features increases system efficiency without expensive hardware and without adding computational overhead. We also show that lip protrusion is a key speech articulator and that the other prominent articulators are contained within the centre area of the mouth.

Citation (APA)

Singh, P. (2018). Enhancing visual speech recognition with lip protrusion estimation. In Studies in Computational Intelligence (Vol. 730, pp. 519–536). Springer Verlag. https://doi.org/10.1007/978-3-319-63754-9_24
