Improved speaker-independent emotion recognition from speech using two-stage feature reduction


Abstract

In recent years, researchers have focused on improving the accuracy of speech emotion recognition. High recognition accuracies have generally been obtained for two-class emotion recognition, but multi-class emotion recognition remains a challenging task. The main aim of this work is to propose a two-stage feature reduction using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to improve the accuracy of the speech emotion recognition (ER) system. Short-term speech features were extracted from the emotional speech signals. Experiments were carried out using four different supervised classifiers on two different emotional speech databases. From the experimental results, it can be inferred that the proposed method provides better accuracies: 87.48% for the speaker-dependent (SD) and gender-dependent (GD) ER experiment, 85.15% for the speaker-independent (SI) ER experiment, and 87.09% for the gender-independent (GI) experiment.
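The two-stage reduction described in the abstract can be illustrated with a minimal sketch: PCA first compresses the raw feature vectors, then LDA projects the PCA output onto class-discriminative axes. This is not the authors' implementation; the feature matrix, variance threshold (95%), and class count here are assumed placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: 300 utterances x 40 short-term speech features,
# 4 emotion classes (synthetic values, not real speech features).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 4, size=300)

# Stage 1: PCA keeps the components explaining 95% of the variance
# (the retention threshold is an assumption, not from the paper).
pca = PCA(n_components=0.95)
X_pca = pca.fit_transform(X)

# Stage 2: LDA projects onto at most (n_classes - 1) discriminant axes,
# i.e. at most 3 dimensions for 4 emotion classes.
lda = LinearDiscriminantAnalysis()
X_lda = lda.fit_transform(X_pca, y)

print(X.shape, X_pca.shape, X_lda.shape)
```

The reduced `X_lda` would then be fed to a supervised classifier, as in the experiments the abstract describes.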

Citation (APA)

Mohd Nazid, H., Muthusamy, H., Vijean, V., & Yaacob, S. (2015). Improved speaker-independent emotion recognition from speech using two-stage feature reduction. Journal of Information and Communication Technology, 14, 57–76. https://doi.org/10.32890/jict2015.14.4
