Using visual features to support acoustic speech recognition (ASR) is an effective way to enhance recognition accuracy. In this paper, we propose a novel system that integrates face detection, user identification, and visual speech recognition. We use a self-organizing map to extract visual features, which are then recognized with a K-nearest-neighbor classifier. Experimental results on a database of Arabic digits show that the proposed system is promising and compares favorably with other reported systems. © 2012 Springer-Verlag.
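The abstract describes a two-stage pipeline: a self-organizing map (SOM) learns a codebook over visual frame vectors, and a K-nearest-neighbor classifier recognizes inputs from the SOM-derived features. The sketch below is not the authors' implementation; it is a minimal illustration of that general SOM + KNN scheme, using synthetic data and a toy feature (the vector of distances to every SOM unit), all names and parameters being our own assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small SOM; returns unit weight vectors of shape (rows*cols, dim)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows * cols, data.shape[1]))
    # Grid coordinates of each unit, used for the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3  # shrinking neighborhood
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(1))  # best-matching unit
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w

def som_features(x, w):
    """Toy feature: squared distance from x to every SOM unit."""
    return ((w - x) ** 2).sum(1)

def knn_predict(feat, train_feats, train_labels, k=3):
    """Majority vote among the k nearest training feature vectors."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    nearest = train_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy demo: two synthetic "visual classes" as well-separated clusters.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.1, size=(30, 8))
b = rng.normal(1.0, 0.1, size=(30, 8))
X = np.vstack([a, b])
y = np.array([0] * 30 + [1] * 30)

w = train_som(X)
feats = np.array([som_features(x, w) for x in X])
# Classify a held-out sample drawn near cluster b.
pred = knn_predict(som_features(rng.normal(1.0, 0.1, size=8), w), feats, y)
```

The real system of course works on lip-region image features rather than synthetic vectors, and the paper's exact feature encoding may differ from the distance-profile feature used here.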
CITATION STYLE
Sagheer, A., & Aly, S. (2012). Integration of face detection and user identification with visual speech recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7667 LNCS, pp. 479–487). https://doi.org/10.1007/978-3-642-34500-5_57