In human communication, a speaker's facial expressions and lip movements carry extremely rich linguistic information. The hearing impaired, besides using residual hearing to communicate with others, can also use lip reading as a communication tool. By learning lip reading with a computer-assisted lip-reading system, they can practice freely, without the constraints of time, place, or situation. We therefore propose a computer-assisted lip-reading system (CALRS) that recognizes the correct lip shape for phonetic pronunciation using image processing, an object-oriented language, and a neural network. The system accurately compares lip images of Mandarin phonetic pronunciation using a self-organizing map neural network (SOMNN) and extension theory, helping the hearing impaired correct their pronunciation. © 2007.
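The abstract does not describe the SOMNN in detail. As an illustrative sketch only (grid size, learning rate, and the synthetic "lip-feature" vectors below are all hypothetical assumptions, not the authors' implementation), a minimal self-organizing map that clusters feature vectors could look like this in Python/NumPy:

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map on feature vectors (rows of `data`)."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates of every node, used by the neighborhood function.
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )
    for epoch in range(epochs):
        # Linearly decay the learning rate and neighborhood radius.
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in data:
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU pulls nearby nodes toward x.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

def best_matching_unit(weights, x):
    """Return the grid position of the node closest to feature vector x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Hypothetical usage: two well-separated clusters standing in for lip-shape
# feature vectors of two different phonetic pronunciations.
rng = np.random.default_rng(1)
cluster_a = rng.normal(0.0, 0.02, size=(20, 3))
cluster_b = rng.normal(1.0, 0.02, size=(20, 3))
som = train_som(np.vstack([cluster_a, cluster_b]))
# After training, the two clusters should map to different grid nodes.
bmu_a = best_matching_unit(som, cluster_a.mean(axis=0))
bmu_b = best_matching_unit(som, cluster_b.mean(axis=0))
```

In a real CALRS pipeline the input vectors would be features extracted from lip images rather than synthetic points; the BMU position would then indicate which pronunciation class the lip shape most resembles.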