Contextual factors and adaptative multimodal human-computer interaction: Multi-level specification of emotion and expressivity in Embodied Conversational Agents


Abstract

In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors should depend not only on factors related to the agent's individuality (culture, social and professional role, personality), but also on a set of contextual variables (the interlocutor, the social setting of the conversation) and on dynamic variables (beliefs, goals, emotions). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System we have developed. We explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA, e.g. which contextual cues and levels of representation are required for the proper recognition of emotions.
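To make the idea of a multi-level specification concrete, the sketch below models an agent whose behavior selection combines a static individuality baseline, contextual variables, and dynamic variables, and maps them onto a small set of expressivity parameters. This is a minimal illustration under assumed names and an assumed blending rule; it is not the representation scheme or computational model actually defined in the paper.

```python
from dataclasses import dataclass

# Illustrative multi-level specification for an ECA's behavior context.
# Field names and the blending rule are assumptions for this sketch,
# not the paper's actual representation scheme.

@dataclass
class Baseline:
    """Static individuality factors (culture, role, personality)."""
    culture: str
    role: str
    extraversion: float  # 0.0 (introvert) .. 1.0 (extravert)

@dataclass
class Context:
    """Contextual variables of the conversation setting."""
    interlocutor: str
    setting: str  # e.g. "TV interview", "casual chat"

@dataclass
class DynamicState:
    """Dynamic variables updated during the interaction."""
    emotion: str              # e.g. "anger", "joy"
    emotion_intensity: float  # 0.0 .. 1.0

@dataclass
class Expressivity:
    """Low-level expressivity parameters driving non-verbal behavior."""
    spatial_extent: float = 0.5   # amplitude of gestures
    temporal_extent: float = 0.5  # speed of gestures
    power: float = 0.5            # acceleration / tension

def derive_expressivity(base: Baseline, ctx: Context,
                        state: DynamicState) -> Expressivity:
    """Toy blending rule: personality sets the baseline activation,
    emotion intensity raises it, and a formal setting damps it."""
    activation = 0.5 * base.extraversion + 0.5 * state.emotion_intensity
    damping = 0.7 if ctx.setting == "TV interview" else 1.0
    level = max(0.0, min(1.0, activation * damping))
    return Expressivity(spatial_extent=level,
                        temporal_extent=level,
                        power=level)

if __name__ == "__main__":
    spec = derive_expressivity(
        Baseline(culture="French", role="journalist", extraversion=0.8),
        Context(interlocutor="viewer", setting="TV interview"),
        DynamicState(emotion="anger", emotion_intensity=0.9),
    )
    print(spec)
```

The separation into distinct levels mirrors the abstract's point: the same dynamic emotion can yield different surface behaviors depending on the static and contextual levels it passes through.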

Citation

Lamolle, M., Mancini, M., Pelachaud, C., Abrilian, S., Martin, J. C., & Devillers, L. (2005). Contextual factors and adaptative multimodal human-computer interaction: Multi-level specification of emotion and expressivity in Embodied Conversational Agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3554 LNAI, pp. 225–239). Springer Verlag. https://doi.org/10.1007/11508373_17
