Performance evaluation of deep autoencoder network for speech emotion recognition

4 Citations · 17 Readers (Mendeley)

Abstract

Learning methods with multiple levels of representation are called deep learning methods. Composing simple but non-linear modules yields a deep-learning model. Deep learning is likely to achieve many more successes in the near future because it requires very little hand engineering and can readily exploit large amounts of data. In this paper, a deep learning network is used to recognize speech emotions. A deep autoencoder is constructed to learn the speech emotions (Angry, Happy, Neutral, and Sad) of normal and autistic children. Experimental results show that the categorical classification accuracy of speech is 46.5% and 33.3% for normal and autistic children's speech, respectively, whereas the autoencoder shows a very low classification accuracy of 26.1% for the Happy emotion alone and no classification accuracy for the Angry, Neutral, and Sad emotions.
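The abstract does not specify the autoencoder's architecture, so as a minimal sketch of the general technique, the following trains a single-hidden-layer autoencoder on synthetic feature vectors; the 13-dimensional (MFCC-like) input, the 4-unit bottleneck, and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Synthetic "speech feature" frames: 200 frames x 13 features (assumed sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))

n_in, n_hid = 13, 4                      # compress 13 dims -> 4-dim code
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))
b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)             # encoder: non-linear bottleneck
    Xhat = H @ W2 + b2                   # decoder: linear reconstruction
    return H, Xhat

lr, losses = 0.01, []
for _ in range(500):
    H, Xhat = forward(X)
    err = Xhat - X                       # reconstruction error
    losses.append(float((err ** 2).mean()))
    # Backpropagate mean-squared reconstruction loss through both layers.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)     # tanh derivative
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])             # reconstruction error should fall
```

A deep variant of this idea stacks several such encoder layers before the bottleneck; for emotion classification the learned code would then feed a classifier over the four emotion categories.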

Citation (APA)

AndleebSiddiqui, M., Hussain, W., Ali, S. A., & Danish-ur-Rehman. (2020). Performance evaluation of deep autoencoder network for speech emotion recognition. International Journal of Advanced Computer Science and Applications, (2), 606–611. https://doi.org/10.14569/ijacsa.2020.0110276
