Contribution of oral periphery on visual speech intelligibility

Abstract

Visual speech recognition aims to improve speech recognition for human-computer interaction. Motivated by the human ability to lip-read, visual speech recognition systems use the movement of visible speech articulators to classify the spoken word. However, most research has focused on lip movement, and the contribution of other facial regions has received little attention. This paper studies the effect of movement in the area around the lips on the accuracy of speech classification. Two sets of visual features are derived: one corresponds to parameters of an accurate lip contour, while the other also incorporates the area around the lips. The features are classified using data-mining algorithms in WEKA. The results show that features incorporating the area around the lips improve machine recognition of the spoken word. © 2011 Springer-Verlag.
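To make the comparison described above concrete, the following is a minimal sketch of how two feature sets (lip-contour only vs. lip contour plus oral periphery) could be compared with a simple classifier. All feature values here are synthetic and hypothetical; the paper itself derives real contour parameters and classifies them with WEKA's data-mining algorithms, not with the 1-nearest-neighbour classifier used in this illustration.

```python
# Hypothetical demo: compare a "lip-only" feature set with a "lip + periphery"
# feature set using a simple 1-nearest-neighbour classifier on synthetic data.
# The extra periphery dimension is deliberately made discriminative, mirroring
# the paper's finding that the area around the lips improves classification.
import math
import random

random.seed(42)

def nn_classify(train, vec):
    """Return the label of the training sample closest to vec (Euclidean 1-NN)."""
    return min(train, key=lambda sample: math.dist(sample[0], vec))[1]

def accuracy(train, test):
    hits = sum(nn_classify(train, v) == y for v, y in test)
    return hits / len(test)

def make_sample(label, with_periphery):
    # Two uninformative "lip" dimensions (same distribution for both classes).
    vec = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]
    if with_periphery:
        # One discriminative "periphery" dimension (an assumption of this demo).
        vec.append(random.gauss(-2.0 if label == 0 else 2.0, 0.5))
    return vec, label

def dataset(n, with_periphery):
    return [make_sample(i % 2, with_periphery) for i in range(n)]

train_lip,  test_lip  = dataset(60, False), dataset(40, False)
train_full, test_full = dataset(60, True),  dataset(40, True)

acc_lip  = accuracy(train_lip,  test_lip)   # near chance: lip dims carry no signal
acc_full = accuracy(train_full, test_full)  # high: periphery dim separates classes
```

On this synthetic data the augmented feature set is classified far more accurately, which is the shape of the comparison the paper performs with real visual features.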

Citation (APA)

Singh, P., Gupta, D., Laxmi, V., & Gaur, M. S. (2011). Contribution of oral periphery on visual speech intelligibility. In Communications in Computer and Information Science (Vol. 191 CCIS, pp. 183–190). https://doi.org/10.1007/978-3-642-22714-1_20
