Emotion Recognition and Classification in Speech using Artificial Neural Networks

  • Shaw A
  • Kumar R
  • Saxena S

Abstract

To date, relatively little research has been done on emotion classification and recognition in speech. This article discusses why the topic is of interest and presents a system for classifying and recognizing emotions in speech using artificial neural networks. The proposed system will be speaker independent, since a database of emotional speech samples will be used. Various classifiers will be used to differentiate emotions such as neutral, anger, happiness, and sadness. Prosodic features such as pitch, energy, and formant frequencies, together with spectral features such as mel-frequency cepstral coefficients (MFCCs), will be used in the system. The classifiers will then be trained on these features to classify emotions accurately, and, following classification, the features will be used to recognize the emotion of a given speech sample. Thus, many components, including speech pre-processing, MFCC extraction, prosodic feature extraction, and classification, come together in the implementation of a speech-based emotion recognition system.
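To make the prosodic features mentioned in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of extracting two of them, short-time energy and pitch, from a framed signal using plain NumPy. Pitch is estimated per frame via a simple autocorrelation peak search; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames (illustrative helper)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def frame_energy(frames):
    """Short-time energy: sum of squared samples per frame."""
    return np.sum(frames ** 2, axis=1)

def frame_pitch(frames, sr, fmin=50.0, fmax=500.0):
    """Per-frame pitch (Hz) via autocorrelation: the strongest peak in the
    lag range corresponding to [fmin, fmax] gives the fundamental period."""
    lag_min = int(sr / fmax)
    lag_max = int(sr / fmin)
    pitches = []
    for f in frames:
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[len(f) - 1:]  # non-negative lags
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        pitches.append(sr / lag)
    return np.array(pitches)

# Synthetic 200 Hz tone as a stand-in for a voiced speech segment.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200.0 * t)

frames = frame_signal(x, frame_len=1024, hop=512)
energy = frame_energy(frames)
pitch = frame_pitch(frames, sr)   # ≈ 200 Hz for every frame
```

In a full system along the lines the abstract describes, such per-frame energy and pitch values (plus formants and MFCCs) would be summarized into a fixed-length feature vector per utterance and fed to the classifier.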

Citation (APA)
Shaw, A., Kumar, R., & Saxena, S. (2016). Emotion Recognition and Classification in Speech using Artificial Neural Networks. International Journal of Computer Applications, 145(8), 5–9. https://doi.org/10.5120/ijca2016910710
