In this paper, we propose an automatic visual speech spotting system adapted for RGB-D cameras and based on Hidden Markov Models (HMMs). The system comprises two main processing blocks: visual feature extraction, and speech spotting and recognition. In the feature extraction step, the speaker's face pose is estimated using a 3D face model that includes a rectangular 3D mouth patch used to precisely extract the mouth region. Spatio-temporal features are then computed on the extracted mouth region. In the second step, the speech video is segmented by finding the starting and ending points of meaningful utterances, which are then recognized using the Viterbi algorithm. The proposed system is evaluated mainly on an extended version of the MIRACL-VC1 dataset. Experimental results demonstrate that the proposed system can segment and recognize key utterances with a recognition rate of 83% and a reliability of 81.4%.
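The recognition step relies on Viterbi decoding over HMMs. As an illustration only (the paper does not publish its model parameters or feature vocabulary), the following is a minimal sketch of standard Viterbi decoding for a discrete HMM; the state count, transition matrix `A`, emission matrix `B`, and initial distribution `pi` are all hypothetical placeholders, not values from the system described above.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete HMM.

    obs: list of observation symbol indices
    pi:  (S,) initial state probabilities
    A:   (S, S) transition matrix, A[i, j] = P(state j | state i)
    B:   (S, O) emission matrix, B[s, o] = P(symbol o | state s)
    Returns (best state path, its log-probability).
    """
    S, T = len(pi), len(obs)
    # Work in log space to avoid numerical underflow on long sequences.
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]      # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)         # best predecessor per (time, state)
    for t in range(1, T):
        scores = delta[:, None] + logA        # (S, S): from-state x to-state
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(S)] + logB[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1], float(np.max(delta))
```

In a spotting setting like the one described above, each candidate segment between the detected start and end points would be scored by such a decoder against each utterance model, with the highest-scoring model taken as the recognized utterance.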
Rekik, A., Ben-Hamadou, A., & Mahdi, W. (2015). Human machine interaction via visual speech spotting. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9386, pp. 566–574). Springer Verlag. https://doi.org/10.1007/978-3-319-25903-1_49