Automatic generation of conversational behavior for multiple embodied virtual characters: The rules and models behind our system


Abstract

In this paper we present the rules and algorithms we use to automatically generate non-verbal behavior, such as gestures and gaze, for two embodied virtual agents. They allow us to transform a dialogue in text format into an agent behavior script enriched with eye-gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior, while gestures are generated from an analysis of the linguistic and contextual information in the input text. Since all behaviors are generated automatically, our system offers content creators a convenient method to compose multimodal presentations, a task that would otherwise be very cumbersome and time-consuming. © 2008 Springer-Verlag Berlin Heidelberg.
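The text-to-behavior-script pipeline the abstract describes could be sketched roughly as below. This is a minimal illustration under assumed rules, not the authors' actual system: the deictic-word gesture trigger, the gaze-at-listener rule, and all function and field names are hypothetical stand-ins for the paper's linguistic analysis.

```python
# Hypothetical sketch: turn a two-agent dialogue (text) into a behavior
# script annotated with gaze and gesture actions. The rules below are
# deliberately simplified placeholders for the paper's actual models.

DEICTIC_WORDS = {"this", "that", "here", "there"}  # assumed trigger words

def annotate_turn(speaker, listener, text):
    """Attach simple gaze and gesture annotations to one dialogue turn."""
    actions = []
    # Simplified gaze rule: the speaker looks at the listener at the
    # start of the turn, mirroring face-to-face gaze behavior.
    actions.append({"agent": speaker, "action": "gaze", "target": listener})
    # Simplified gesture rule: a deictic word triggers a pointing gesture.
    for word in text.lower().split():
        if word.strip(".,!?") in DEICTIC_WORDS:
            actions.append({"agent": speaker, "action": "gesture",
                            "type": "deictic", "word": word.strip(".,!?")})
            break
    actions.append({"agent": speaker, "action": "speak", "text": text})
    return actions

def dialogue_to_script(turns):
    """Transform a dialogue, given as (speaker, listener, text) tuples,
    into a flat behavior script (a list of action dictionaries)."""
    script = []
    for speaker, listener, text in turns:
        script.extend(annotate_turn(speaker, listener, text))
    return script

dialogue = [("Agent1", "Agent2", "Look at that building over there.")]
script = dialogue_to_script(dialogue)
for step in script:
    print(step)
```

In the real system the output would be a behavior script consumed by the animation engine; here it is just a list of dictionaries to keep the sketch self-contained.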

Citation (APA)

Breitfuss, W., Prendinger, H., & Ishizuka, M. (2008). Automatic generation of conversational behavior for multiple embodied virtual characters: The rules and models behind our system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5208 LNAI, pp. 472–473). https://doi.org/10.1007/978-3-540-85483-8_49
