This paper presents a system that recognizes lip movements for a lip-reading system. Four lip gestures are recognized: rounded open, wide open, small open, and closed. These gestures describe speech visually. First, we detect the mouth region in each frame using the Viola–Jones algorithm. Then, we extract mouth features using the discrete cosine transform (DCT). Recognition is performed by a hidden Markov model (HMM), which achieves a recognition rate of 84.99%.
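The DCT feature-extraction step described above can be sketched as follows. This is a minimal, pure-Python illustration, not the authors' implementation: it assumes the mouth region has already been cropped (e.g., by a Viola–Jones detector) into a small grayscale block, applies a 2-D DCT-II, and keeps the top-left (low-frequency) coefficients as the feature vector that would feed the HMM. The function names are hypothetical.

```python
import math

def dct_1d(signal):
    # Unnormalized DCT-II of a 1-D sequence.
    n_len = len(signal)
    return [
        sum(signal[n] * math.cos(math.pi * (n + 0.5) * k / n_len)
            for n in range(n_len))
        for k in range(n_len)
    ]

def dct_2d(block):
    # Separable 2-D DCT: transform rows, then columns.
    rows = [dct_1d(row) for row in block]
    cols = [dct_1d(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

def mouth_features(block, n=4):
    # Keep the top-left n x n low-frequency DCT coefficients
    # as a compact feature vector for the mouth region.
    coeffs = dct_2d(block)
    return [coeffs[i][j] for i in range(n) for j in range(n)]

# Example: a constant 8x8 "mouth block" concentrates all energy
# in the DC coefficient; the other coefficients are ~0.
features = mouth_features([[1.0] * 8 for _ in range(8)], n=2)
```

In practice the low-frequency coefficients capture the coarse mouth shape (open/closed, rounded/wide) while discarding fine texture, which is why a short DCT vector per frame is a common input to an HMM over the gesture classes.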
CITATION STYLE
Addarrazi, I., Satori, H., & Satori, K. (2020). Lip Movement Modeling Based on DCT and HMM for Visual Speech Recognition System. In Advances in Intelligent Systems and Computing (Vol. 1076, pp. 399–407). Springer. https://doi.org/10.1007/978-981-15-0947-6_38