Journal article

Neural Network Models of Sensory Integration for Improved Vowel Recognition

Yuhas B, Goldstein M, Sejnowski T, Jenkins R

Proceedings of the IEEE, vol. 78, issue 10 (1990) pp. 1658-1668


Abstract

It is demonstrated that multiple sources of speech information can
be integrated at a subsymbolic level to improve vowel recognition.
Feedforward and recurrent neural networks are trained to estimate the
acoustic characteristics of a vocal tract from images of the speaker's
mouth. These estimates are then combined with the noise-degraded
acoustic information, effectively increasing the signal-to-noise ratio
and improving the recognition of these noise-degraded signals.
Alternative symbolic strategies such as direct categorization of the
visual signals into vowels are also presented. The performances of these
neural networks compare favorably with human performance and with other
pattern-matching and estimation techniques.
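The core signal-level idea in the abstract — fusing an independent, vision-derived estimate of the vocal-tract acoustics with the noise-degraded acoustic signal to raise the effective signal-to-noise ratio — can be illustrated with a toy sketch. This is not the paper's actual networks or data: the envelope shape, noise levels, and equal weighting below are assumptions chosen only to show why averaging two independent noisy estimates reduces error.

```python
import random
import math

def combine(acoustic, visual, w_acoustic=0.5):
    """Weighted average of two independent noisy estimates of the same envelope."""
    return [w_acoustic * a + (1.0 - w_acoustic) * v
            for a, v in zip(acoustic, visual)]

def mse(estimate, truth):
    """Mean squared error between an estimate and the true envelope."""
    return sum((e - t) ** 2 for e, t in zip(estimate, truth)) / len(truth)

random.seed(0)
true_env = [math.sin(0.05 * k) for k in range(256)]            # idealized spectral envelope
acoustic = [t + random.gauss(0, 0.5) for t in true_env]        # noise-degraded acoustic estimate
visual   = [t + random.gauss(0, 0.5) for t in true_env]        # simulated estimate from lip images
fused    = combine(acoustic, visual)

# Averaging two independent, equally noisy estimates halves the error
# variance, i.e. roughly a 3 dB gain in signal-to-noise ratio.
assert mse(fused, true_env) < mse(acoustic, true_env)
```

In the paper itself the visual estimate comes from feedforward and recurrent networks trained on images of the speaker's mouth, so the two information sources are genuinely independent in the way this averaging argument requires.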



Authors

  • Ben P. Yuhas

  • Moise H. Goldstein

  • Terrence J. Sejnowski

  • Robert E. Jenkins
