This paper presents an approach to improving emotion recognition from spontaneous speech. We used a wrapper method to reduce an acoustic feature set and feature-level fusion to merge it with a set of linguistic features. The proposed system was evaluated on the FAU Aibo Corpus, using the same emotion set defined in the Interspeech 2009 Emotion Challenge. The main contribution of this work is that the reduced feature set improves on the results obtained in that Challenge, as well as on their combination. Starting from an original set of 389 parameters, we built this reduced set by selecting 28 acoustic and 5 linguistic features and concatenating their feature vectors. © 2011 Springer-Verlag.
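Feature-level fusion, as described in the abstract, amounts to concatenating the selected acoustic and linguistic feature vectors into a single vector per utterance before classification. A minimal sketch (using hypothetical random arrays, not the authors' actual FAU Aibo data) might look like:

```python
import numpy as np

# Hypothetical example: 10 utterances with the paper's reduced feature counts.
n_utterances = 10
acoustic = np.random.rand(n_utterances, 28)    # 28 selected acoustic features
linguistic = np.random.rand(n_utterances, 5)   # 5 selected linguistic features

# Feature-level fusion: concatenate the two vectors per utterance.
fused = np.concatenate([acoustic, linguistic], axis=1)
print(fused.shape)  # (10, 33) — 33 fused features per utterance
```

The fused 33-dimensional vectors would then be fed to a single classifier, in contrast to decision-level fusion, where separate classifiers per feature type are combined afterwards.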
CITATION STYLE
Planet, S., & Iriondo, I. (2011). Improving spontaneous children’s emotion recognition by acoustic feature selection and feature-level fusion of acoustic and linguistic parameters. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7015 LNAI, pp. 88–95). https://doi.org/10.1007/978-3-642-25020-0_12