Unsupervised neural networks for speech perception with cochlear implant systems for the profoundly deaf

Abstract

Recently we have proposed a new speech-processing concept for cochlear implant (CI) systems. The concept is based on a speaker-independent signal representation and a neural-net classifier which can be combined with the well-known CI speech-coding strategies. This paper describes some new simulation results: for every speech input frame a 4-dimensional feature vector has been extracted using the relative spectral perceptual linear predictive (RASTA-PLP) technique. To classify the feature vectors into so-called "auditory related units" (ARUs), we applied the self-organizing Kohonen neural net. The best-matching ARUs directly control the synthesis of an "alphabet" of patient-adapted stimulus patterns. Simulation results show that the Kohonen algorithm finds representative clusters in the feature-vector space for different net dimensions. A discussion of the results and an overview of ongoing experiments with deaf patients will be given.
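The abstract does not give implementation details, but the clustering step it describes, mapping 4-dimensional RASTA-PLP frame vectors onto a Kohonen self-organizing map whose units act as ARUs, can be sketched as below. The grid size, learning-rate and neighborhood schedules, and the randomly generated stand-in feature vectors are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def train_som(data, grid_shape=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D Kohonen self-organizing map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_units = rows * cols
    # Initialize unit weight vectors uniformly within the range of the data.
    lo, hi = data.min(axis=0), data.max(axis=0)
    weights = rng.uniform(lo, hi, size=(n_units, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

    for t in range(n_iter):
        x = data[rng.integers(len(data))]                     # random training frame
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))     # best-matching unit
        # Exponentially decaying learning rate and neighborhood radius (assumed schedule).
        frac = t / n_iter
        lr = lr0 * np.exp(-3.0 * frac)
        sigma = sigma0 * np.exp(-3.0 * frac)
        # Gaussian neighborhood around the BMU on the map grid.
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights.reshape(rows, cols, -1)

def classify(weights, x):
    """Return the (row, col) index of the ARU (map unit) closest to feature vector x."""
    rows, cols, dim = weights.shape
    bmu = np.argmin(((weights.reshape(-1, dim) - x) ** 2).sum(axis=1))
    return divmod(bmu, cols)

if __name__ == "__main__":
    # Stand-in for per-frame 4-dimensional RASTA-PLP feature vectors.
    features = np.random.default_rng(1).normal(size=(5000, 4))
    som = train_som(features)
    print("Frame 0 maps to ARU", classify(som, features[0]))
```

In the concept described above, the index of the best-matching unit for each incoming frame would then select one pattern from the "alphabet" of patient-adapted stimulus patterns.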

Citation (APA)

Leisenberg, M. (1995). Unsupervised neural networks for speech perception with cochlear implant systems for the profoundly deaf. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 930, pp. 462–470). Springer Verlag. https://doi.org/10.1007/3-540-59497-3_210
