Deep learning models for speech emotion recognition

Emotions play a vital role in efficient and natural human-computer interaction. Recognizing human emotions from speech is a genuinely challenging task when accuracy, robustness, and latency are all taken into account. Recent advances in deep learning make it possible to achieve better accuracy and robustness with low latency when approximating complex functions. In our experiments we developed two deep learning models for emotion recognition from speech, comparing the performance of a feed-forward Deep Neural Network (DNN) with a recently developed Recurrent Neural Network (RNN) variant known as the Gated Recurrent Unit (GRU). GRUs have so far been little explored for classifying emotions from speech. The DNN model achieves an accuracy of 89.96% and the GRU model an accuracy of 95.82%. Our experiments show that the GRU model performs markedly better on emotion classification than the DNN model.
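The abstract does not specify the authors' architecture, so the following is only an illustrative sketch of the GRU update equations that distinguish a GRU from a plain feed-forward DNN, written in NumPy. All names (`GRUCell`, the feature sizes, the toy "MFCC frame" input) are assumptions for demonstration, not details from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h~."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1  # small random init
        self.Wz = rng.normal(0, s, (hidden_size, input_size))
        self.Uz = rng.normal(0, s, (hidden_size, hidden_size))
        self.bz = np.zeros(hidden_size)
        self.Wr = rng.normal(0, s, (hidden_size, input_size))
        self.Ur = rng.normal(0, s, (hidden_size, hidden_size))
        self.br = np.zeros(hidden_size)
        self.Wh = rng.normal(0, s, (hidden_size, input_size))
        self.Uh = rng.normal(0, s, (hidden_size, hidden_size))
        self.bh = np.zeros(hidden_size)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)              # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)              # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)  # candidate state
        return (1 - z) * h + z * h_tilde                              # blended new state

# Toy sequence of "acoustic feature" frames (e.g. 13-dim MFCC vectors; sizes assumed).
cell = GRUCell(input_size=13, hidden_size=8)
h = np.zeros(8)
frames = np.random.default_rng(1).normal(size=(20, 13))  # 20 frames x 13 features
for x in frames:
    h = cell.step(x, h)
# h is the final hidden state, usable as an utterance-level embedding
# that a softmax classifier could map to emotion labels.
```

Unlike the feed-forward DNN, the gated recurrence lets the model carry context across frames, which is a plausible reason for the accuracy gap the paper reports.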




Praseetha, V. M., & Vadivel, S. (2018). Deep learning models for speech emotion recognition. Journal of Computer Science, 14(11), 1577–1587.
