Human machine interaction via visual speech spotting

Abstract

In this paper, we propose an automatic visual speech spotting system adapted for RGB-D cameras and based on Hidden Markov Models (HMMs). The system comprises two main processing blocks: visual feature extraction, and speech spotting and recognition. In the feature extraction step, the speaker’s face pose is estimated using a 3D face model that includes a rectangular 3D mouth patch, allowing the mouth region to be extracted precisely. Spatio-temporal features are then computed on the extracted mouth region. In the second step, the speech video is segmented by locating the starting and ending points of meaningful utterances, which are then recognized using the Viterbi algorithm. The proposed system is evaluated mainly on an extended version of the MIRACL-VC1 dataset. Experimental results demonstrate that the system can segment and recognize key utterances with a recognition rate of 83% and a reliability of 81.4%.
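The abstract itself contains no code, but the pipeline it describes follows a well-known pattern. Below is a minimal Python sketch, not the authors' implementation, of the two pieces involved: projecting a rectangular 3D mouth patch into the image to crop the mouth region (a standard pinhole projection, which may differ from the paper's exact formulation), and Viterbi decoding over per-utterance HMMs with a garbage model for spotting. All function names, the window and step sizes, and the discrete (vector-quantized) emission assumption are illustrative.

```python
import numpy as np

def project_mouth_patch(corners_3d, R, t, K):
    """Project the 4 corners of a 3D mouth patch into the image plane
    using a standard pinhole model: x = K (R X + t).

    corners_3d : (4, 3) patch corners in the face-model frame
    R, t       : (3, 3) rotation and (3,) translation from pose estimation
    K          : (3, 3) camera intrinsics
    Returns (4, 2) pixel coordinates bounding the mouth region.
    """
    cam = R @ corners_3d.T + t[:, None]          # (3, 4) camera-frame points
    uv = K @ cam
    return (uv[:2] / uv[2]).T                    # perspective divide

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path for a discrete-emission HMM (log domain).

    obs    : (T,) quantized observation symbols
    log_pi : (N,) log initial state probabilities
    log_A  : (N, N) log transition matrix
    log_B  : (N, M) log emission matrix
    Returns (best path log-probability, state path).
    """
    T, N = len(obs), len(log_pi)
    delta = np.empty((T, N))                      # best score ending in each state
    psi = np.zeros((T, N), dtype=int)             # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A    # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return delta[-1].max(), path

def spot_utterances(symbols, word_models, garbage_model, win=30, step=5):
    """Slide a fixed window over the feature-symbol stream and keep
    segments where some word HMM out-scores a garbage/filler HMM.

    word_models   : dict name -> (log_pi, log_A, log_B)
    garbage_model : (log_pi, log_A, log_B) for non-speech frames
    Returns a list of (start, end, word) hypotheses.
    """
    hits = []
    for s in range(0, len(symbols) - win + 1, step):
        seg = symbols[s:s + win]
        g_score, _ = viterbi(seg, *garbage_model)
        best = max(word_models, key=lambda w: viterbi(seg, *word_models[w])[0])
        w_score, _ = viterbi(seg, *word_models[best])
        if w_score > g_score:                     # word model beats garbage model
            hits.append((s, s + win, best))
    return hits
```

The paper locates explicit start and end points of utterances rather than scanning fixed windows, so this sketch should be read as the generic HMM keyword-spotting pattern under the stated assumptions, not as the exact method evaluated on MIRACL-VC1.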

Cite

APA

Rekik, A., Ben-Hamadou, A., & Mahdi, W. (2015). Human machine interaction via visual speech spotting. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9386, pp. 566–574). Springer Verlag. https://doi.org/10.1007/978-3-319-25903-1_49
