Speech emotion recognition system based on L1 regularized linear regression and decision fusion


Abstract

This paper describes a speech emotion recognition system built for the Audio Sub-Challenge of the Audio/Visual Emotion Challenge (AVEC 2011). In this system, feature selection is conducted via L1-regularized linear regression, in which the L1 norm of the regression weights is minimized to obtain a sparse weight vector. Features with approximately zero weights are removed to produce a compact, well-selected feature set. For classification, a fusion scheme is proposed that combines the strengths of linear regression and extreme learning machine (ELM) based feedforward neural networks (NN). Experimental results on the SEMAINE database of naturalistic dialogues distributed through AVEC 2011 are presented. © 2011 Springer-Verlag.
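
As a rough illustration of the pipeline the abstract describes, the sketch below selects features with an L1-regularized (Lasso) regression and then fuses the decision scores of a plain linear regression and a basic extreme learning machine. The synthetic data, selection threshold, and score-averaging fusion rule are illustrative assumptions, not the authors' actual features or fusion scheme.

```python
# Minimal sketch (not the authors' implementation): L1-regularized feature
# selection followed by decision fusion of a linear regression scorer and a
# simple extreme learning machine (ELM). Data and fusion rule are placeholders.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)

# Placeholder data standing in for SEMAINE acoustic features and binary labels.
X = rng.standard_normal((200, 50))
y = (X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(200) > 0).astype(float)

# --- Feature selection: L1-regularized linear regression (sparse weights) ---
lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.abs(lasso.coef_) > 1e-6          # drop near-zero-weight features
X_sel = X[:, selected]

# --- Classifier 1: linear regression, scores thresholded later ---
lin = LinearRegression().fit(X_sel, y)
score_lin = lin.predict(X_sel)

# --- Classifier 2: basic ELM (random hidden layer + least-squares output) ---
n_hidden = 30
W = rng.standard_normal((X_sel.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X_sel @ W + b)                     # random nonlinear projection
beta = np.linalg.pinv(H) @ y                   # closed-form output weights
score_elm = H @ beta

# --- Decision fusion: average the two scores, then threshold at 0.5 ---
y_pred = ((score_lin + score_elm) / 2 > 0.5).astype(float)
print("training accuracy:", (y_pred == y).mean())
```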

Citation (APA)

Cen, L., Yu, Z. L., & Dong, M. H. (2011). Speech emotion recognition system based on L1 regularized linear regression and decision fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6975 LNCS, pp. 332–340). https://doi.org/10.1007/978-3-642-24571-8_44
