Design of neural network model for emotional speech recognition


Abstract

Human–computer interaction (HCI) needs to be improved in the areas of recognition and detection. In particular, emotion recognition has a major impact on applications in social, engineering, and medical science. This paper presents an approach to recognizing emotion in emotional speech based on a neural network. Linear predictive coefficients (LPC) and a radial basis function (RBF) network are used as the feature extraction and classification techniques, respectively. Results reveal that the approach is effective in recognizing emotions in human speech. Speech utterances are extracted directly from the audio channel, including background noise. In total, 75 utterances from 5 speakers were collected across five emotion categories. Fifteen utterances were used for training and the rest for testing. The proposed approach has been tested and verified on the newly developed dataset.
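The following is a minimal sketch of the pipeline described in the abstract: LPC features per utterance, fed to an RBF network (Gaussian hidden layer with centres chosen by clustering, plus a linear output layer). The file names, LPC order, number of centres, and kernel width are illustrative assumptions, not values reported by the authors, and this is not their implementation.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.linear_model import RidgeClassifier


def lpc_features(path, order=12, sr=16000):
    """Load one utterance and return its LPC coefficients as a feature vector."""
    y, _ = librosa.load(path, sr=sr)
    a = librosa.lpc(y, order=order)   # returns [1, a_1, ..., a_order]
    return a[1:]                      # drop the leading 1


class RBFNetwork:
    """RBF network: Gaussian hidden units around k-means centres, linear output."""

    def __init__(self, n_centres=20, gamma=1.0):
        self.n_centres, self.gamma = n_centres, gamma

    def _hidden(self, X):
        # Gaussian activation of each sample w.r.t. each centre.
        d = np.linalg.norm(X[:, None, :] - self.centres_[None, :, :], axis=2)
        return np.exp(-self.gamma * d ** 2)

    def fit(self, X, y):
        self.centres_ = KMeans(n_clusters=self.n_centres, n_init=10).fit(X).cluster_centers_
        self.out_ = RidgeClassifier().fit(self._hidden(X), y)
        return self

    def predict(self, X):
        return self.out_.predict(self._hidden(X))


# Hypothetical usage (train_files, train_labels, test_files are placeholders):
# X_train = np.array([lpc_features(f) for f in train_files])
# clf = RBFNetwork(n_centres=15, gamma=0.5).fit(X_train, train_labels)
# predictions = clf.predict(np.array([lpc_features(f) for f in test_files]))
```

One design note: an RBF network of this kind trains quickly because only the linear output weights are learned after the centres are fixed, which suits the small dataset (75 utterances) described in the abstract.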

Citation (APA)

Palo, H. K., Mohanty, M. N., & Chandra, M. (2015). Design of neural network model for emotional speech recognition. In Advances in Intelligent Systems and Computing (Vol. 325, pp. 291–300). Springer Verlag. https://doi.org/10.1007/978-81-322-2135-7_32
