Lifelike gesture synthesis and timing for conversational agents

18 citations · 32 Mendeley readers

Abstract

Synchronization of synthetic gestures with speech output is one of the goals for embodied conversational agents, which have become a new paradigm for the study of gesture and for human-computer interfaces. In this context, this contribution presents an operational model that enables lifelike gesture animations of an articulated figure to be rendered in real-time from representations of spatiotemporal gesture knowledge. Based on various findings on the production of human gesture, the model provides means for motion representation, planning, and control to drive the kinematic skeleton of a figure, which comprises 43 degrees of freedom in 29 joints for the main body and 20 DOF for each hand. The model is conceived to enable cross-modal synchrony with respect to the coordination of gestures with the signal generated by a text-to-speech system.
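For orientation only, the following is a minimal Python sketch of the two ideas the abstract mentions: a kinematic skeleton described by joints with per-joint degrees of freedom, and scheduling a gesture so that its stroke coincides with a word onset reported by a text-to-speech system. The class names, joint names, and scheduling rule are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch only: joint names, per-joint DOF counts, and the phase
# scheduling rule below are assumptions, not taken from the paper.

@dataclass
class Joint:
    name: str
    dof: int  # rotational degrees of freedom at this joint

@dataclass
class Skeleton:
    body_joints: List[Joint] = field(default_factory=list)
    hand_dof: int = 20  # per hand, as stated in the abstract

    @property
    def body_dof(self) -> int:
        return sum(j.dof for j in self.body_joints)

@dataclass
class GesturePhase:
    name: str        # e.g. "preparation", "stroke", "retraction"
    duration: float  # seconds

def schedule_gesture(phases: List[GesturePhase], stroke_onset: float) -> Dict[str, float]:
    """Place gesture phases on the speech timeline so that the stroke phase
    begins at a word onset reported by the text-to-speech system (a much
    simplified notion of cross-modal synchrony)."""
    lead = 0.0
    for phase in phases:
        if phase.name == "stroke":
            break
        lead += phase.duration
    start_times: Dict[str, float] = {}
    t = stroke_onset - lead
    for phase in phases:
        start_times[phase.name] = t
        t += phase.duration
    return start_times

if __name__ == "__main__":
    # Hypothetical partial arm chain; the full model has 29 body joints / 43 DOF.
    arm = Skeleton(body_joints=[Joint("shoulder_r", 3), Joint("elbow_r", 2), Joint("wrist_r", 2)])
    print("arm DOF:", arm.body_dof)
    phases = [GesturePhase("preparation", 0.4),
              GesturePhase("stroke", 0.3),
              GesturePhase("retraction", 0.5)]
    # Assume the TTS reports the affiliate word onset at 1.20 s.
    print(schedule_gesture(phases, 1.20))
```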

Citation (APA)

Wachsmuth, I., & Kopp, S. (2002). Lifelike gesture synthesis and timing for conversational agents. In Lecture Notes in Computer Science (Vol. 2298, pp. 120–133). Springer-Verlag. https://doi.org/10.1007/3-540-47873-6_13
