Enhancing speech emotion recognition with deep learning using multi-feature stacking and data augmentation


Abstract

This study evaluates the effectiveness of data augmentation on 1D convolutional neural network (CNN) and transformer models for speech emotion recognition (SER) on the Ryerson audio-visual database of emotional speech and song (RAVDESS) dataset. The results show that data augmentation improves emotion classification accuracy. Techniques such as noising, pitching, stretching, shifting, and speeding are applied to increase data variation and mitigate class imbalance. With data augmentation, the 1D CNN model achieved 94.5% accuracy, while the transformer model performed even better at 97.5%. These findings offer insights for developing accurate emotion recognition methods that use data augmentation to improve classification accuracy on the RAVDESS dataset. Future work could explore larger, more diverse datasets and alternative model architectures.
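The augmentation techniques named in the abstract can be sketched as simple waveform transforms. The snippet below is a minimal, NumPy-only illustration (not the authors' implementation): noising adds Gaussian white noise, shifting rolls the signal in time, and stretching is approximated here by naive resampling; `noise_factor`, `max_shift`, and the sample rate are hypothetical parameters chosen for the example. In practice, pitch- and tempo-preserving variants are usually done with a library such as librosa.

```python
import numpy as np

def add_noise(y, noise_factor=0.005):
    """'Noising': mix in Gaussian white noise at a small amplitude."""
    return y + noise_factor * np.random.randn(len(y))

def shift(y, max_shift=1600):
    """'Shifting': randomly roll the waveform along the time axis."""
    return np.roll(y, np.random.randint(-max_shift, max_shift))

def stretch(y, rate=0.9):
    """'Stretching' (naive): resample by linear interpolation.
    Note this also changes pitch; a phase-vocoder stretch
    (e.g. librosa.effects.time_stretch) would preserve it."""
    idx = np.arange(0, len(y), rate)
    return np.interp(idx, np.arange(len(y)), y)

# Example: augment a 1-second 440 Hz tone at a 16 kHz sample rate.
sr = 16000
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
augmented = [add_noise(y), shift(y), stretch(y, rate=0.8)]
```

Each transform yields a new training example with the same emotion label, which is how augmentation both increases data variation and helps rebalance underrepresented classes.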

Citation (APA)

Al Mukarram, K., Mukhlas, M. A., & Zahra, A. (2024). Enhancing speech emotion recognition with deep learning using multi-feature stacking and data augmentation. Bulletin of Electrical Engineering and Informatics, 13(3), 1920–1926. https://doi.org/10.11591/eei.v13i3.6049
