Speech Emotions Recognition Using 2-D Neural Classifier

Abstract

This article presents a speech emotion recognition system. We discuss the use of a neural network as the final classifier of the emotional state of human speech. We carried out our research on a database of recordings of both genders in various emotional states. In the preprocessing and speech-processing phase, we focused on parameters that depend on the emotional state. The output of this work is a system for classifying the emotional state of the human voice, based on a neural network classifier. A self-organizing feature map, a specific type of artificial neural network, was used as the output-stage classifier. The number of input parameters must be limited because of hardware constraints and the time-consuming computation of neuron positions. We therefore discuss the accuracy of the classifier when its input is the fundamental frequency calculated by different methods. © Springer International Publishing Switzerland 2013.
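The abstract outlines a pipeline, fundamental-frequency (F0) features feeding a 2-D self-organizing feature map, without implementation details. Below is a minimal sketch of both stages in Python/NumPy. The autocorrelation F0 method is only one of the several F0 estimators the article compares, and the map size, learning schedule, and synthetic data are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

def f0_autocorr(signal, fs, fmin=60.0, fmax=400.0):
    """Autocorrelation-based F0 estimate (one possible method)."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # admissible pitch-period range
    return fs / (lo + np.argmax(corr[lo:hi]))

def train_som(data, grid=(3, 3), epochs=30, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small 2-D self-organizing feature map on feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5  # shrinking neighborhood
        for x in data:
            # Best-matching unit: neuron whose weight vector is closest.
            winner = np.unravel_index(
                np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
            # Gaussian neighborhood pulls nearby neurons toward the input.
            d2 = np.sum((coords - np.array(winner)) ** 2, axis=-1)
            weights += lr * np.exp(-d2 / (2 * sigma**2))[..., None] * (x - weights)
    return weights

def bmu(weights, x):
    """Map a feature vector to its best-matching unit's grid coordinates."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, the grid coordinates of an utterance's best-matching unit serve as its class indicator: emotional states that produce similar F0-derived features land on the same or neighboring map neurons.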

CITATION STYLE

APA

Partila, P., & Voznak, M. (2013). Speech Emotions Recognition Using 2-D Neural Classifier. Advances in Intelligent Systems and Computing, 210, 221–231. https://doi.org/10.1007/978-3-319-00542-3_23
