Text independent speaker and emotion independent speech recognition in emotional environment

Abstract

It is well known that speaker identification and speech recognition perform well on speech recorded in a neutral environment; improving recognition accuracy on speech recorded in an emotional environment remains a challenge. This paper discusses the effectiveness of an iterative clustering technique and Gaussian mixture modeling (GMM) for recognizing speech and speakers from emotional speech, using Mel frequency perceptual linear predictive cepstral coefficients (MFPLPC), and MFPLPC concatenated with probability, as features. For emotion independent speech recognition, models are created from speech in the archetypal emotions boredom, disgust, fear, happiness, neutral and sadness, and testing is done on speech in the emotion anger. For text independent speaker recognition, individual models are created for all speakers using the speech of nine utterances, and testing is done using the speech of a tenth utterance. 80% of the data is used for training and 20% for testing. The system provides an average accuracy of 95% for text independent speaker recognition and emotion independent speech recognition on models built with MFPLPC and MFPLPC concatenated with probability. Accuracy increases by 1% if group classification is performed before speaker classification, with the set of male or female speakers forming a group. Text independent speaker recognition is also evaluated by performing group classification with the clustering technique and then identifying the speaker within a group by applying the test vectors to the GMM models of the small set of speakers in that group; the accuracy obtained is 97%.
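The GMM-based speaker identification described in the abstract can be sketched roughly as follows: one GMM is fitted to the frame-level cepstral features of each speaker's training utterances, and a test utterance is assigned to the speaker whose model gives the highest average log-likelihood. Since MFPLPC extraction is not available in common toolkits, this minimal sketch substitutes plain MFCCs from librosa for the paper's features; the file paths, speaker labels, and GMM settings are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: GMM-based text-independent speaker identification.
# MFCCs (via librosa) stand in for the paper's MFPLPC features; paths,
# speaker labels, and model settings below are assumptions for illustration.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_features(wav_path, n_coeff=13):
    """Frame-level cepstral features for one utterance (MFCC stand-in for MFPLPC)."""
    y, sr = librosa.load(wav_path, sr=None)
    feats = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_coeff)
    return feats.T  # shape: (frames, n_coeff)

def train_speaker_models(train_files, n_components=16):
    """Fit one GMM per speaker on the pooled frames of that speaker's training utterances."""
    models = {}
    for speaker, paths in train_files.items():
        frames = np.vstack([extract_features(p) for p in paths])
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", max_iter=200)
        gmm.fit(frames)
        models[speaker] = gmm
    return models

def identify_speaker(test_path, models):
    """Score the test utterance against every speaker GMM; return the best-scoring speaker."""
    frames = extract_features(test_path)
    scores = {spk: gmm.score(frames) for spk, gmm in models.items()}  # mean log-likelihood
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Hypothetical layout: nine utterances per speaker for training, a tenth held out for testing.
    train_files = {
        "speaker01": [f"data/speaker01/utt{i:02d}.wav" for i in range(1, 10)],
        "speaker02": [f"data/speaker02/utt{i:02d}.wav" for i in range(1, 10)],
    }
    models = train_speaker_models(train_files)
    print(identify_speaker("data/speaker01/utt10.wav", models))
```

The group classification step reported in the abstract would, in this sketch, amount to first restricting the `models` dictionary to the speakers of the predicted group (for example, male or female, or a cluster found by the iterative clustering technique) before calling `identify_speaker`.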

Citation
Revathi, A., & Venkataramani, Y. (2015). Text independent speaker and emotion independent speech recognition in emotional environment. In Advances in Intelligent Systems and Computing (Vol. 339, pp. 43–52). Springer Verlag. https://doi.org/10.1007/978-81-322-2250-7_5
