Silent (unvoiced) speech can be interpreted by lip reading, which is difficult, or by using electromyography (EMG) electrodes to convert facial muscle movements into distinct signals. These signals are processed in MATLAB and matched to a predefined word using the Dynamic Time Warping (DTW) algorithm. The identified word is then converted to speech and can be used to control a nearby device such as a motorized wheelchair. Thus, a silent speech interface has the potential to enable a differently-abled person to communicate and to interact with objects in their surroundings, easing their daily lives.
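The matching step described above compares an incoming EMG signal against stored word templates and picks the closest one. The paper's processing is done in MATLAB; as a language-neutral illustration, the sketch below implements the classic DTW recurrence in Python for 1-D sequences. The function names and the template dictionary are illustrative, not from the paper.

```python
import math

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    cost[i][j] holds the minimal cumulative cost of aligning the
    first i samples of `a` with the first j samples of `b`.
    """
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

def classify(signal, templates):
    """Return the word whose template is closest (by DTW) to `signal`."""
    return min(templates, key=lambda word: dtw_distance(signal, templates[word]))

# Hypothetical usage: templates would be pre-recorded EMG feature sequences.
templates = {"yes": [1.0, 2.0, 3.0], "no": [9.0, 9.0, 9.0]}
word = classify([1.0, 2.0, 3.0, 3.0], templates)
```

Because DTW allows non-linear time alignment, a word spoken slightly faster or slower than its template still matches well, which is why it suits template-based word recognition.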
Citation:
Joy, J. E., Ajay Yadukrishnan, H., Poojith, V., & Prathap, J. (2020). Work-in-Progress: Silent Speech Recognition Interface for the Differently Abled. In Lecture Notes in Networks and Systems (Vol. 80, pp. 805–813). Springer. https://doi.org/10.1007/978-3-030-23162-0_73