Emotion recognition with Poincaré mapping of voiced-speech segments of utterances

Abstract

This paper introduces a set of novel descriptors of emotional speech that yields a significant increase in emotion classification performance. The proposed characteristics, statistical properties of Poincaré maps derived for voiced-speech segments of utterances, are used in recognition in combination with a variety of commonly used descriptors of emotional speech as well as other original ones. The introduced features proved to contribute useful information to the classification process. Emotion recognition is performed using binary decision trees, which extract different emotions at consecutive decision levels. Classification rates for the considered six-category problem (anger, boredom, joy, fear, neutral, and sadness) reach up to 79% for both speaker-dependent and speaker-independent cases. © 2008 Springer-Verlag Berlin Heidelberg.
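The abstract does not spell out how the Poincaré-map statistics are computed. The sketch below is a minimal illustration, assuming the map is built from consecutive pairs of pitch-period values estimated over voiced frames and summarized with SD1/SD2-style dispersion measures; the function name, the choice of pitch periods as the underlying sequence, and the specific descriptors are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def poincare_descriptors(values):
    """Statistical descriptors of a Poincare map built from a 1-D sequence.

    The map plots each value against its successor, i.e. the points
    (x[n], x[n+1]). SD1 and SD2 measure dispersion across and along
    the identity line, respectively.
    """
    x = np.asarray(values[:-1], dtype=float)
    y = np.asarray(values[1:], dtype=float)
    across = (y - x) / np.sqrt(2.0)   # distance across the identity line
    along = (y + x) / np.sqrt(2.0)    # distance along the identity line
    sd1 = float(np.std(across))       # short-term variability
    sd2 = float(np.std(along))        # long-term variability
    return {
        "sd1": sd1,
        "sd2": sd2,
        "sd_ratio": sd1 / sd2 if sd2 > 0 else 0.0,
        "mean": float(np.mean(values)),
    }

# Example: a hypothetical pitch-period track (seconds) from voiced frames
pitch_periods = np.array([0.0080, 0.0082, 0.0079, 0.0085, 0.0081, 0.0078])
print(poincare_descriptors(pitch_periods))
```

In a full pipeline such descriptors would be concatenated with the other prosodic and spectral features mentioned in the abstract and fed to the binary decision-tree cascade that separates one emotion class at each level.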

CITATION STYLE

APA

Ślot, K., Cichosz, J., & Bronakowski, L. (2008). Emotion recognition with Poincaré mapping of voiced-speech segments of utterances. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5097 LNAI, pp. 886–895). https://doi.org/10.1007/978-3-540-69731-2_84
