Ranking Speech Features for Their Usage in Singing Emotion Classification

Abstract

This paper aims to identify speech descriptors that may be useful for classifying emotions in singing. For this purpose, Mel Frequency Cepstral Coefficients (MFCC) and selected low-level MPEG-7 descriptors were calculated on the RAVDESS dataset, which contains recordings of emotional speech and singing by professional actors presenting six different emotions. A feature selection algorithm based on the forest-of-trees method was used to rank the descriptors, and the emotions were then classified with a Support Vector Machine (SVM). Training was repeated several times and the results were averaged. It was found that descriptors effective for emotion detection in speech are not equally useful for singing. An approach using a Convolutional Neural Network (CNN) operating on spectrogram representations of the audio signals was also tested. Several parameters relevant to singing were determined which, according to the obtained results, allow a significant reduction in the dimensionality of the feature vectors while increasing the classification efficiency of emotion detection.
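The pipeline described above (rank descriptors with a forest of randomized trees, keep the top-ranked ones, then classify with an SVM, averaging over repeated runs) can be sketched as follows. This is an illustrative sketch using scikit-learn, not the authors' implementation: the synthetic feature matrix stands in for the real MFCC / MPEG-7 descriptor vectors, and the number of retained features is an assumed example value.

```python
# Sketch of forest-of-trees feature ranking followed by SVM classification.
# Synthetic data stands in for MFCC / MPEG-7 descriptors (assumption).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features, n_emotions = 300, 40, 6
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_emotions, size=n_samples)
# Make a few features informative so the ranking has something to find.
X[:, :5] += y[:, None] * 0.8

# 1) Rank features by impurity-based importance from a forest of trees.
forest = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]

# 2) Keep only the top-k descriptors (dimensionality reduction).
top_k = 10  # assumed example value
X_reduced = X[:, ranking[:top_k]]

# 3) Classify emotions with an SVM; average results over repeated folds.
scores = cross_val_score(SVC(kernel="rbf"), X_reduced, y, cv=5)
print(f"top-{top_k} features, mean accuracy: {scores.mean():.2f}")
```

In practice the feature matrix would be built by extracting MFCCs and MPEG-7 low-level descriptors from the RAVDESS recordings; the ranking step then reveals which of those descriptors carry emotional information for singing as opposed to speech.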

Citation (APA)
Zaporowski, S., & Kostek, B. (2020). Ranking Speech Features for Their Usage in Singing Emotion Classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12117 LNAI, pp. 225–234). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59491-6_21
