Towards a visual speech learning system for the deaf by matching dynamic lip shapes


Abstract

In this paper we propose a visual speech learning framework to assist deaf persons by comparing the lip movements of a student and an E-tutor in an intelligent tutoring system. The framework uses lip reading technologies to determine whether a student has learned the correct pronunciation. Unlike conventional speech recognition systems, which recognize a speaker's utterance, our speech learning framework focuses on recognizing, from visual information alone, whether a student's pronunciation matches an instructor's utterance. We propose a method that extracts dynamic shape difference features (DSDF) from lip shapes to recognize the pronunciation difference. Preliminary experimental results demonstrate the robustness and effectiveness of our approach on a database we collected, which contains multiple persons speaking a small number of selected words. © 2012 Springer-Verlag.
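The abstract does not define DSDF precisely, so the following is only a hypothetical sketch of the general idea it describes: represent each video frame as a lip-shape feature vector, align the student's and tutor's frame sequences (here with dynamic time warping, an assumed alignment step), and take per-frame shape differences along the alignment as the difference features. All function names and the threshold are illustrative, not from the paper.

```python
def shape_distance(a, b):
    """Euclidean distance between two lip-shape feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def dtw_path(seq_a, seq_b):
    """Dynamic time warping alignment between two frame sequences
    (handles sequences of different lengths / speaking rates)."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = shape_distance(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    # Backtrack from (n, m) to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return list(reversed(path))


def dynamic_shape_difference(student, tutor):
    """Per-frame lip-shape differences along the DTW alignment
    (a stand-in for the paper's DSDF)."""
    return [shape_distance(student[i], tutor[j])
            for i, j in dtw_path(student, tutor)]


def is_pronunciation_correct(student, tutor, threshold=0.5):
    """Accept the pronunciation if the mean shape difference is small.
    The threshold is an illustrative placeholder, not a value from the paper."""
    feats = dynamic_shape_difference(student, tutor)
    return sum(feats) / len(feats) < threshold
```

For example, a student sequence that matches the tutor's lip shapes but lingers one frame longer on a mouth position still aligns with zero shape difference, which is why an alignment step precedes the feature comparison.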

Citation (APA)

Chen, S., Quintian, D. M., & Tian, Y. (2012). Towards a visual speech learning system for the deaf by matching dynamic lip shapes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7382 LNCS, pp. 1–9). https://doi.org/10.1007/978-3-642-31522-0_1
