Evolving fuzzy-neural method for multimodal speech recognition

Abstract

Improving automatic speech recognition systems is one of the hottest topics in speech-signal processing, especially if such systems are to operate in noisy environments. This paper proposes a multimodal evolutionary neurofuzzy approach to developing an automatic speech-recognition system. To make inferences at the decision stage about audiovisual information for speech-to-text conversion, the EFuNN paradigm was applied. Two independent feature extractors were developed, one for the speech phonetics (speech listening) and the other for the speech visemics (lip reading). The EFuNN network was trained to fuse decisions on audio with decisions on video. This soft-computing approach proved robust in harsh conditions while remaining less complex than hard-computing, pattern-matching methods. Preliminary experiments confirm the reliability of the proposed method for developing a robust automatic speech-recognition system.
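The decision-level fusion described above can be illustrated with a toy sketch. This is not the paper's EFuNN implementation; it only shows the general idea of combining per-class confidences from an audio path and a video (lip-reading) path, with a simple fuzzy weighting that trusts the visual channel more as the audio signal-to-noise ratio drops. All function names, score values, and the 30 dB reliability ramp are hypothetical choices for illustration.

```python
def fuse_decisions(audio_scores, video_scores, audio_snr_db):
    """Combine per-class confidences from the audio and video paths.

    audio_scores, video_scores: dicts mapping class label -> confidence in [0, 1]
    audio_snr_db: estimated signal-to-noise ratio of the audio channel (dB)
    """
    # Fuzzy membership for "audio is reliable": ramps from 0 at 0 dB to 1 at 30 dB.
    # (The 30 dB knee is an illustrative assumption, not taken from the paper.)
    w_audio = min(max(audio_snr_db / 30.0, 0.0), 1.0)
    w_video = 1.0 - w_audio

    # Weighted sum of the two modalities' confidences per candidate class.
    fused = {}
    for label in set(audio_scores) | set(video_scores):
        fused[label] = (w_audio * audio_scores.get(label, 0.0)
                        + w_video * video_scores.get(label, 0.0))

    # Final decision: the class with the highest fused confidence.
    return max(fused, key=fused.get)


# In clean audio (25 dB) the acoustic decision dominates;
# in heavy noise (3 dB) the lip-reading decision can override it.
clean = fuse_decisions({"ba": 0.9, "pa": 0.1}, {"ba": 0.4, "pa": 0.6}, 25)
noisy = fuse_decisions({"ba": 0.2, "pa": 0.8}, {"ba": 0.7, "pa": 0.3}, 3)
print(clean, noisy)
```

In the noisy case the audio path alone would pick "pa", but the fused decision follows the more reliable visual evidence; an EFuNN would learn such weightings from data and adapt its rule nodes online rather than using a fixed ramp.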

Citation (APA)
Malcangi, M., & Grew, P. (2015). Evolving fuzzy-neural method for multimodal speech recognition. In Communications in Computer and Information Science (Vol. 517, pp. 216–227). Springer Verlag. https://doi.org/10.1007/978-3-319-23983-5_21
