Speech Emotion Based Sentiment Recognition using Deep Neural Networks

Abstract

The capacity to comprehend and communicate with others via language is one of the most valuable human abilities. Through experience, humans become well trained at reading and recognizing different emotions, which play a vital part in communication. For computers and robots, however, emotion recognition is a challenging task because of the subjective nature of human mood. This research proposes a framework for recognizing the emotional aspects of speech, independent of its semantic content, through the detection of speech emotions. To categorize the emotional content of audio files, this article employs deep learning techniques such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. Models based on Mel-frequency cepstral coefficients (MFCCs) were created to make the audio information as useful as possible for further processing. The approach was evaluated on the RAVDESS and TESS datasets, where the CNN achieved an accuracy of 97.1%.
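
As a rough illustration of the pipeline the abstract describes, the sketch below extracts MFCC features from audio clips and classifies them with a small 1-D CNN. This is a minimal sketch only, assuming librosa for feature extraction and TensorFlow/Keras for the model; the layer sizes, the 40-coefficient MFCC setting, and the placeholder paths are illustrative assumptions rather than the authors' exact configuration.

# Minimal MFCC + CNN sketch for speech emotion classification.
# Library choices (librosa, TensorFlow/Keras) and hyperparameters are
# assumptions for illustration, not the paper's exact setup.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models


def extract_mfcc(path, n_mfcc=40):
    """Load an audio file and return a fixed-length MFCC feature vector."""
    signal, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Average over time so every clip yields the same feature size.
    return np.mean(mfcc.T, axis=0)


def build_cnn(n_mfcc=40, n_classes=8):
    """1-D CNN over the MFCC vector (e.g. the 8 RAVDESS emotion classes)."""
    model = models.Sequential([
        layers.Input(shape=(n_mfcc, 1)),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Usage (audio_paths and labels are hypothetical lists built from
# RAVDESS/TESS clips and their integer emotion labels):
# X = np.array([extract_mfcc(p) for p in audio_paths])[..., np.newaxis]
# y = np.array(labels)
# model = build_cnn()
# model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)

An analogous LSTM variant could keep the MFCC frames as a time sequence (rather than their time-averaged mean) and replace the convolutional layers with recurrent layers such as layers.LSTM.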

Citation (APA)

Choudhary, R. R., Meena, G., & Mohbey, K. K. (2022). Speech Emotion Based Sentiment Recognition using Deep Neural Networks. In Journal of Physics: Conference Series (Vol. 2236). IOP Publishing Ltd. https://doi.org/10.1088/1742-6596/2236/1/012003
