Representing real-life emotions in audiovisual data with non basic emotional patterns and context features


Abstract

Modeling realistic emotional behavior is needed for various applications in multimodal human-machine interaction, such as emotion detection in a surveillance system or the design of natural Embodied Conversational Agents. Yet building such models requires appropriately defining several levels of representation: the emotional context, the emotion itself, and the observed multimodal behaviors. This paper presents the multi-level emotion and context coding scheme that was defined following the annotation of fifty-one videos of TV interviews. Analysis of the annotations shows the complexity and richness of the real-life data: around 50% of the clips feature mixed emotions with conflicting multimodal cues. A typology of mixed emotional patterns is proposed, showing that cause-effect conflicts and masked acted emotions are perceptually difficult to annotate with respect to the valence dimension. © Springer-Verlag Berlin Heidelberg 2005.

Citation (APA)

Devillers, L., Abrilian, S., & Martin, J. C. (2005). Representing real-life emotions in audiovisual data with non basic emotional patterns and context features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3784 LNCS, pp. 519–526). https://doi.org/10.1007/11573548_67
