In recent years, audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing, and machine vision. Its ultimate goal is to enable human-computer communication by voice, taking into account the visual information contained in the audio-visual speech signal. This paper presents an automatic command recognition system based on audio-visual information, intended to control the da Vinci laparoscopic robot. The audio signal is parametrized using Mel Frequency Cepstral Coefficients (MFCCs). In addition, the visual speech information is extracted from features based on the points that define the mouth's outer contour according to the MPEG-4 standard. © 2009 Springer-Verlag Berlin Heidelberg.
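The MFCC parametrization the abstract mentions follows a standard pipeline: frame the signal, window it, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. The sketch below is a minimal illustration of that pipeline, not the authors' implementation; the sample rate, frame length, hop size, filter count, and number of coefficients are assumed typical values.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    # Frame the signal and apply a Hamming window to each frame
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop: i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank: filter edges equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log filterbank energies (small floor avoids log of zero)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # Type-II DCT keeps the first n_ceps decorrelated coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_energy @ dct.T
```

For a one-second signal at 16 kHz with these settings, the function returns a (97, 13) matrix: one 13-coefficient feature vector per 10 ms frame, the kind of representation typically fed to an acoustic model for command recognition.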
CITATION STYLE
Ceballos, A., Gómez, J., Prieto, F., & Redarce, T. (2009). Robot command interface using an audio-visual speech recognition system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5856 LNCS, pp. 869–876). https://doi.org/10.1007/978-3-642-10268-4_102