Embodied conversational agents: Computing and rendering realistic gaze patterns

Abstract

We describe our efforts to model the multimodal signals exchanged by interlocutors interacting face-to-face. These data are then used to control embodied conversational agents capable of engaging in realistic face-to-face interaction with human partners. This paper focuses on the generation and rendering of realistic gaze patterns. The problems encountered and the solutions proposed call for stronger coupling between research fields such as audiovisual signal processing, linguistics, and the psychosocial sciences, for the sake of efficient and realistic human-computer interaction. © Springer-Verlag Berlin Heidelberg 2006.

Citation (APA)

Bailly, G., Elisei, F., Raidt, S., Casari, A., & Picot, A. (2006). Embodied conversational agents: Computing and rendering realistic gaze patterns. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4261 LNCS, pp. 9–18). Springer Verlag. https://doi.org/10.1007/11922162_2
