Emotion Detection via Voice and Speech Recognition

6 citations · 29 Mendeley readers

Abstract

Detecting emotion from voice signals is an important but difficult problem in human-computer interaction (HCI). The speech emotion recognition literature has applied a variety of well-known speech analysis and classification methods to extract emotion from audio signals. More recently, deep learning approaches have been proposed as a workable alternative to these conventional methods, and several recent studies have employed them to identify emotions from speech. This review examines the databases used, the emotions collected, and the contributions made to speech emotion recognition. The research team also built a speech emotion recognition system that identifies human emotions from speech, developed in Python 3.6 with PyCharm as the IDE and trained on the RAVDESS dataset, which contains eight distinct emotions expressed by all speakers.
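As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a simple classifier on RAVDESS-style audio clips. It is an assumption-laden example rather than the authors' implementation: the dataset directory name is hypothetical, and the mean-MFCC features (via librosa) and the MLP classifier (scikit-learn) are illustrative choices standing in for whatever analysis and classification methods the project actually used.

```python
# Minimal sketch of a speech emotion recognition pipeline on RAVDESS-style audio.
# Assumes librosa and scikit-learn are installed; paths and model choice are
# illustrative, not the published project's implementation.
import glob
import os

import librosa
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# RAVDESS filenames encode the emotion as the third hyphen-separated field,
# e.g. "03-01-05-01-02-01-12.wav" -> emotion code "05" (angry).
EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}

def extract_features(path, n_mfcc=40):
    """Load one clip and summarize it as the mean MFCC vector."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def load_dataset(root):
    """Walk a RAVDESS-style directory tree and build feature/label arrays."""
    features, labels = [], []
    for path in glob.glob(os.path.join(root, "**", "*.wav"), recursive=True):
        code = os.path.basename(path).split("-")[2]
        if code in EMOTIONS:
            features.append(extract_features(path))
            labels.append(EMOTIONS[code])
    return np.array(features), np.array(labels)

if __name__ == "__main__":
    X, y = load_dataset("ravdess_data")  # hypothetical dataset directory
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42, stratify=y
    )
    clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=42)
    clf.fit(X_train, y_train)
    print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```

Averaging MFCCs over time is a deliberately simple featurization; a deep learning approach of the kind the abstract mentions would more likely feed the full time-frequency representation to a convolutional or recurrent network instead of a shallow classifier.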

Citation (APA)

Rastogi, R., Anand, T., Sharma, S. K., & Panwar, S. (2023). Emotion Detection via Voice and Speech Recognition. International Journal of Cyber Behavior, Psychology and Learning, 13(1). https://doi.org/10.4018/IJCBPL.333473
