Extracting emotion from speech: Towards emotional speech-driven facial animations

Abstract

Facial expressions and characteristics of speech are exploited intuitively by humans to infer the emotional state of their partners in communication. This paper investigates ways to extract emotion from spontaneous speech, aiming to transfer emotions to appropriate facial expressions of the speaker's virtual representatives. Hence, this paper presents one step towards an emotional speech-driven facial animation system, which promises to be the first truly non-human animation assistant. Different classification algorithms (support vector machines, neural networks, and decision trees) were compared in extracting emotion from speech features. Results show that these machine-learning algorithms outperform human subjects at extracting emotion from speech alone when there is no access to additional cues about the emotional state. © Springer-Verlag Berlin Heidelberg 2003.
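The abstract's core experiment, comparing support vector machines, neural networks, and decision trees on speech features, can be sketched as follows. This is a minimal illustration, not the paper's method: the synthetic "acoustic features", the four emotion classes, and all model hyperparameters are assumptions, since the paper's actual feature set and setup are not given here.

```python
# Hedged sketch of comparing the three classifier families named in the
# abstract. Placeholder data stands in for acoustic features (e.g. pitch,
# energy statistics) labelled with emotion classes; these are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 200 utterances x 12 features, 4 emotion classes.
X, y = make_classification(n_samples=200, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "Neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
}

scores = {}
for name, clf in classifiers.items():
    # 5-fold cross-validated accuracy as a simple comparison metric.
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.2f}")
```

With real data, the features would come from a speech front end (e.g. prosodic and spectral statistics per utterance) rather than `make_classification`.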

Citation (APA)

Aina, O. O., Hartmann, K., & Strothotte, T. (2003). Extracting emotion from speech: Towards emotional speech-driven facial animations. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2733, 162–171. https://doi.org/10.1007/3-540-37620-8_16
