Vowel classification by a neurophysiologically parameterized auditory model

Abstract

Meaningful feature extraction is an essential prerequisite for good classification results. In Automatic Speech Recognition, human performance is still superior to technical solutions. In this paper, a feature extraction method for sound data is presented that is perceptually motivated by the signal processing of the human auditory system. The physiological mechanisms of signal transduction in the human ear and its neural representation are described. The generated pulse spike trains of the inner hair cells are fed into a feed-forward timing artificial Hubel-Wiesel network, a structured computational map for higher cognitive functions such as vowel recognition. According to Greenberg's theory, a signal triggers a set of delay trajectories. In the paper, this is demonstrated for the classification of different vowels from several speakers.
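As a rough illustration of the delay-trajectory idea mentioned in the abstract, the following toy sketch delays spike trains from different frequency channels and fires when enough delayed spikes coincide. It is not the authors' implementation: the function name coincidence_response, the channel count, the delays, and the threshold are all invented for illustration, under the assumption that the network detects temporally aligned activity across channels.

```python
# Toy sketch only: coincidence detection over delayed spike trains,
# loosely inspired by the delay-trajectory description in the abstract.
# All parameters and names are illustrative assumptions, not from the paper.
import numpy as np


def coincidence_response(spike_trains: np.ndarray,
                         delays: np.ndarray,
                         threshold: int) -> np.ndarray:
    """Shift each channel's binary spike train by its delay (in samples)
    and count, per time step, how many channels spike simultaneously.
    Returns 1 where the coincidence count reaches the threshold."""
    n_channels, n_samples = spike_trains.shape
    aligned = np.zeros_like(spike_trains)
    for ch in range(n_channels):
        d = delays[ch]
        if d > 0:
            aligned[ch, d:] = spike_trains[ch, :n_samples - d]
        else:
            aligned[ch] = spike_trains[ch]
    counts = aligned.sum(axis=0)
    return (counts >= threshold).astype(int)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_channels, n_samples = 8, 200
    # Hypothetical spike trains: sparse random background activity ...
    trains = (rng.random((n_channels, n_samples)) < 0.02).astype(int)
    # ... plus a "trajectory": channel ch fires at time 50 + 3*ch.
    for ch in range(n_channels):
        trains[ch, 50 + 3 * ch] = 1
    # Delays chosen so that the trajectory re-aligns to one time step.
    delays = np.array([3 * (n_channels - 1 - ch) for ch in range(n_channels)])
    out = coincidence_response(trains, delays, threshold=6)
    print("coincidence detected at samples:", np.flatnonzero(out))
```

Running the sketch reports a coincidence at the sample where the delayed trajectory aligns, which is the kind of event such a detector responds to; the actual auditory model and Hubel-Wiesel network in the paper are considerably more elaborate.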

Citation (APA)

Szepannek, G., Harczos, T., Klefenz, F., Katai, A., Schikowski, P., & Weihs, C. (2007). Vowel classification by a neurophysiologically parameterized auditory model. In Studies in Classification, Data Analysis, and Knowledge Organization (pp. 653–660). Kluwer Academic Publishers. https://doi.org/10.1007/978-3-540-70981-7_75
