The design of affective interfaces, such as credible expressive characters in storytelling applications, requires understanding and modeling the relations between realistic emotions and behaviors across modalities such as facial expressions, speech, hand gestures, and body movements. Yet research on emotional multimodal behavior has focused on individual modalities during acted basic emotions. In this paper we describe the coding scheme we have designed for annotating multimodal behaviors observed during mixed and non-acted emotions. We explain how we used it to annotate videos from a corpus of emotionally rich TV interviews, and we illustrate how the annotations can be used to compute expressive profiles of videos and relations between non-basic emotions and multimodal behaviors. © Springer-Verlag Berlin Heidelberg 2005.
CITATION STYLE
Martin, J. C., Abrilian, S., & Devillers, L. (2005). Annotating multimodal behaviors occurring during non basic emotions. In Lecture Notes in Computer Science (Vol. 3784, pp. 550–557). Springer-Verlag. https://doi.org/10.1007/11573548_71