
Neural Network Models of Sensory Integration for Improved Vowel Recognition

by Ben P. Yuhas, Moise H. Goldstein, Terrence J. Sejnowski, Robert E. Jenkins
Proceedings of the IEEE

Abstract

It is demonstrated that multiple sources of speech information can be integrated at a subsymbolic level to improve vowel recognition. Feedforward and recurrent neural networks are trained to estimate the acoustic characteristics of a vocal tract from images of the speaker's mouth. These estimates are then combined with the noise-degraded acoustic information, effectively increasing the signal-to-noise ratio and improving the recognition of these noise-degraded signals. Alternative symbolic strategies such as direct categorization of the visual signals into vowels are also presented. The performances of these neural networks compare favorably with human performance and with other pattern-matching and estimation techniques.
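The fusion step described in the abstract — combining a visually derived estimate of the vocal-tract characteristics with the noise-degraded acoustic signal — can be illustrated with a minimal sketch. The paper itself does not specify the combination rule used here; this example assumes inverse-variance weighting of two noisy observations of a hypothetical spectral envelope, which is one standard way such a combination can raise the effective signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" short-term spectral envelope of a vowel (16 bins).
true_envelope = np.sin(np.linspace(0, np.pi, 16))

# Noise-degraded acoustic observation (high noise variance).
sigma_acoustic = 0.5
acoustic = true_envelope + rng.normal(0.0, sigma_acoustic, 16)

# Visually derived estimate, standing in for a network's output given a
# lip image (simulated here as an independent, lower-variance observation).
sigma_visual = 0.3
visual = true_envelope + rng.normal(0.0, sigma_visual, 16)

# Inverse-variance weighted fusion: each source is weighted by the
# reciprocal of its noise variance, so the cleaner source dominates.
w_a = 1.0 / sigma_acoustic**2
w_v = 1.0 / sigma_visual**2
fused = (w_a * acoustic + w_v * visual) / (w_a + w_v)

def mse(estimate):
    """Mean squared error against the true envelope."""
    return float(np.mean((estimate - true_envelope) ** 2))

# The fused estimate should track the true envelope more closely than
# the noisy acoustic observation alone.
print(f"acoustic MSE: {mse(acoustic):.4f}")
print(f"fused MSE:    {mse(fused):.4f}")
```

Under these assumptions the fused estimate has lower error than the acoustic observation alone, which is the sense in which combining the two modalities "effectively increases the signal-to-noise ratio."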


Readership Statistics

19 Readers on Mendeley

By discipline:
  • 32% Engineering
  • 21% Computer Science
  • 16% Agricultural and Biological Sciences

By academic status:
  • 32% Student > Ph.D. Student
  • 21% Researcher
  • 11% Professor > Associate Professor

By country:
  • 5% United Kingdom
  • 5% United States
  • 5% Sweden
