Lip Movement Modeling Based on DCT and HMM for Visual Speech Recognition System

Abstract

This paper presents a system that recognizes lip movements for a lip-reading system. Four lip gestures are recognized: rounded open, wide open, small open, and closed. These gestures are used to describe speech visually. First, the mouth region is detected in each frame using the Viola–Jones algorithm. Then, the discrete cosine transform (DCT) is applied to extract mouth features. Recognition is performed by a hidden Markov model (HMM), which achieves a recognition rate of 84.99%.
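The feature-extraction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `dct_features`, the 32×32 patch size, and the 6×6 coefficient block are assumptions chosen for the example. A 2D DCT concentrates most of the mouth-shape information in the low-frequency (top-left) coefficients, so keeping only that block yields a compact feature vector suitable for an HMM observation sequence.

```python
import numpy as np
from scipy.fft import dctn  # multidimensional DCT-II

def dct_features(mouth_roi, num_coeffs=6):
    """Extract low-frequency DCT features from a grayscale mouth ROI.

    Keeps the top-left num_coeffs x num_coeffs block of coefficients,
    which carries most of the shape information. The block size is
    illustrative; the paper does not specify these parameters.
    """
    coeffs = dctn(mouth_roi.astype(float), norm="ortho")
    return coeffs[:num_coeffs, :num_coeffs].flatten()

# Usage with a synthetic 32x32 "mouth" patch standing in for a
# Viola-Jones mouth detection (hypothetical input, not real video data).
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
feat = dct_features(patch)
print(feat.shape)  # one 36-dimensional observation per frame
```

In a full pipeline, one such vector per video frame would form the observation sequence fed to the HMM for classifying the four gestures.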

Citation
Addarrazi, I., Satori, H., & Satori, K. (2020). Lip Movement Modeling Based on DCT and HMM for Visual Speech Recognition System. In Advances in Intelligent Systems and Computing (Vol. 1076, pp. 399–407). Springer. https://doi.org/10.1007/978-981-15-0947-6_38
