Designing and Implementing of Intelligent Emotional Speech Recognition with Wavelet and Neural Network

  • Zahra B
  • Mirvaziri H
  • Sadeghi F

Abstract

Recognition of emotion from speech is a significant subject in man–machine interaction. In this study, the speech signal is analyzed to build a system able to recognize human emotion, and a new set of features in the time, frequency, and time–frequency domains is proposed to increase accuracy. After extraction of pitch, MFCC, wavelet, ZCR, and energy features, neural networks classify four emotions from the EMO-DB and SAVEE databases. With the combined feature set, accuracy on EMO-DB is 100% for two emotions, 98.48% for three emotions, and 90% for four emotions; EMO-DB outperforms SAVEE owing to its greater variety of speech, larger number of spoken words, and distinction between male and female speakers. On SAVEE, accuracy is 97.83% for two emotions (happy, sad), 84.75% for three emotions (angry, normal, sad), and 77.78% for four emotions (happy, angry, sad, normal).

Keywords—recognition of emotion from speech; feature extraction; MFCC; artificial neural network; wavelet
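Two of the time-domain features named in the abstract, zero-crossing rate (ZCR) and short-time energy, are simple to compute per frame. The sketch below is a minimal NumPy illustration of the standard definitions, not the authors' implementation; the function and variable names are hypothetical:

```python
import numpy as np

def zero_crossing_rate(frame):
    # Fraction of adjacent sample pairs whose signs differ
    # (a common definition of per-frame ZCR)
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def short_time_energy(frame):
    # Sum of squared samples within the frame
    return float(np.sum(np.asarray(frame, dtype=float) ** 2))

# Example: an alternating frame has the maximum possible ZCR
frame = np.array([1.0, -1.0, 1.0, -1.0])
zcr = zero_crossing_rate(frame)       # 1.0
energy = short_time_energy(frame)     # 4.0
```

In a full pipeline such features would be computed over overlapping windows of the speech signal and concatenated with the spectral features (MFCC, wavelet coefficients) before classification.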

Citation (APA)

Zahra, B., Mirvaziri, H., & Sadeghi, F. (2016). Designing and Implementing of Intelligent Emotional Speech Recognition with Wavelet and Neural Network. International Journal of Advanced Computer Science and Applications, 7(9). https://doi.org/10.14569/ijacsa.2016.070904
