Enhancing emotion recognition from speech through feature selection


Abstract

In the present work we aim at optimizing the performance of a speaker-independent emotion recognition system through a speech feature selection process. Specifically, relying on the speech feature set defined in the Interspeech 2009 Emotion Challenge, we studied the relative importance of the individual speech parameters and, based on their ranking, selected a subset of speech parameters that offered advantageous performance. The affect-emotion recognizer utilized here relies on a GMM-UBM-based classifier. In all experiments, we followed the experimental setup defined by the Interspeech 2009 Emotion Challenge, utilizing the FAU Aibo Emotion Corpus of spontaneous, emotionally coloured speech. The experimental results indicate that the correct choice of speech parameters can lead to performance better than the baseline. © 2010 Springer-Verlag Berlin Heidelberg.
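The paper itself does not include code. As a minimal illustrative sketch of the pipeline the abstract describes (rank utterance-level features, keep a top-k subset, classify emotions with Gaussian mixture models), the snippet below uses placeholder data and a simplified per-class GMM classifier rather than the authors' actual GMM-UBM system (which is MAP-adapted from a universal background model); the feature count, class count, and ranking criterion (ANOVA F-score) are assumptions for illustration only.

```python
# Illustrative sketch only: rank acoustic features, keep a top-k subset,
# and train one GMM per emotion class on the selected features.
# The paper's real system is a GMM-UBM classifier on the Interspeech 2009
# Emotion Challenge feature set; the data and model here are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder data: 384 utterance-level features (the Challenge set size),
# 5 emotion classes, 1000 utterances.
n_utts, n_feats, n_classes = 1000, 384, 5
X = rng.normal(size=(n_utts, n_feats))
y = rng.integers(0, n_classes, size=n_utts)

# Rank features by ANOVA F-score and keep the top-k subset (k is arbitrary here).
k = 100
selector = SelectKBest(score_func=f_classif, k=k).fit(X, y)
X_sel = selector.transform(X)

# Train one diagonal-covariance GMM per emotion class on the selected subset.
gmms = {
    c: GaussianMixture(n_components=4, covariance_type="diag",
                       random_state=0).fit(X_sel[y == c])
    for c in range(n_classes)
}

# Classify each utterance by the class whose GMM yields the highest log-likelihood.
scores = np.column_stack([gmms[c].score_samples(X_sel) for c in range(n_classes)])
y_pred = scores.argmax(axis=1)

# Unweighted average recall, the metric used in the Emotion Challenge.
uar = np.mean([np.mean(y_pred[y == c] == c) for c in range(n_classes)])
print(f"UAR on placeholder data: {uar:.3f}")
```

In practice one would evaluate several values of k with the Challenge's speaker-independent train/test split and keep the subset that maximizes unweighted average recall, which is the selection strategy the abstract alludes to.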

Citation (APA)

Kostoulas, T., Ganchev, T., Lazaridis, A., & Fakotakis, N. (2010). Enhancing emotion recognition from speech through feature selection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6231 LNAI, pp. 338–344). https://doi.org/10.1007/978-3-642-15760-8_43
