Speech Emotion Recognition Using Regularized Discriminant Analysis

Abstract

Speech emotion recognition plays a vital role in the field of Human-Computer Interaction. The aim of a speech emotion recognition system is to extract information from the speech signal and identify the emotional state of the speaker; the extracted information must therefore be suitable for analyzing emotions. This paper analyses the characteristics of prosodic and spectral features. In addition, a feature fusion technique is used to improve performance. We use Linear Discriminant Analysis (LDA), Regularized Discriminant Analysis (RDA), Support Vector Machines (SVM), and K-Nearest Neighbor (KNN) as classifiers. Results suggest that spectral features outperform prosodic features. Results are validated on the Berlin and Spanish emotional speech databases. © Springer International Publishing Switzerland 2014.
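The pipeline described in the abstract can be illustrated with a minimal sketch (not the authors' code): extract spectral features (MFCC statistics) and prosodic features (pitch and energy statistics), fuse them by concatenation, and compare classifiers. It assumes librosa and scikit-learn; the file names and labels are hypothetical, and shrinkage-regularized LDA stands in for the paper's RDA formulation, which is not detailed here.

# Sketch of the feature extraction, fusion, and classification steps
# described in the abstract. Paths/labels are placeholders, and
# shrinkage LDA is used as a stand-in for RDA.
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def extract_features(path):
    y, sr = librosa.load(path, sr=16000)
    # Spectral features: per-utterance MFCC mean and standard deviation.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    spectral = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    # Prosodic features: pitch (F0) and short-time energy statistics.
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)
    energy = librosa.feature.rms(y=y)[0]
    prosodic = np.array([f0.mean(), f0.std(), energy.mean(), energy.std()])
    # Feature fusion: concatenate spectral and prosodic feature vectors.
    return np.concatenate([spectral, prosodic])

# Hypothetical corpus: (wav_path, emotion_label) pairs, e.g. from Berlin EMO-DB.
files = [("anger_01.wav", "anger"), ("neutral_01.wav", "neutral")]  # ...
X = np.vstack([extract_features(p) for p, _ in files])
y = np.array([label for _, label in files])

classifiers = {
    # Shrinkage-regularized LDA, used here in place of RDA.
    "RDA": LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")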

Citation (APA)

Kuchibhotla, S., Yalamanchili, B. S., Vankayalapati, H. D., & Anne, K. R. (2014). Speech Emotion Recognition Using Regularized Discriminant Analysis. In Advances in Intelligent Systems and Computing (Vol. 247, pp. 363–369). Springer Verlag. https://doi.org/10.1007/978-3-319-02931-3_41
