Expressive gesture model for humanoid robot

Abstract

This paper presents an expressive gesture model that generates communicative gestures accompanying speech for the humanoid robot Nao. The work focuses on the expressivity of robot gestures and their coordination with speech. To reach this objective, we extended our existing virtual agent platform GRETA and adapted it to the robot. Gestural prototypes are described symbolically and stored in a gestural database called a lexicon. Given a set of intentions and emotional states to communicate, the system selects the corresponding gestures from the robot lexicon. The selected gestures are then planned to synchronize with speech and instantiated as robot joint values, taking into account gestural expressivity parameters such as temporal extension, spatial extension, fluidity, power, and repetitivity. In this paper, we provide a detailed overview of the proposed model. © 2011 Springer-Verlag.
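
The abstract outlines a select/plan/instantiate pipeline but gives no implementation details. The Python sketch below is a hypothetical illustration of that flow; the lexicon contents, the Expressivity class, the function names, the Nao joint names, and all parameter ranges are assumptions made for illustration, not the authors' GRETA code.

```python
# Hypothetical sketch of the select -> plan -> instantiate pipeline described
# above. All names, data structures, and parameter ranges are illustrative
# assumptions, not the authors' GRETA/Nao implementation.
from dataclasses import dataclass


@dataclass
class Expressivity:
    """Expressivity parameters named in the abstract (ranges assumed)."""
    temporal_extension: float = 1.0  # stretches or compresses gesture timing
    spatial_extension: float = 1.0   # amplifies or reduces joint amplitudes
    fluidity: float = 0.5            # would shape velocity profiles (not modeled here)
    power: float = 0.5               # would shape acceleration (not modeled here)
    repetitivity: int = 1            # how many times the stroke phase is repeated


# Toy lexicon: symbolic gesture prototypes keyed by communicative intention.
LEXICON = {
    "greeting": {
        "phases": [("prepare", 0.4), ("stroke", 0.6), ("retract", 0.4)],
        "joints": {"RShoulderPitch": -1.0, "RElbowRoll": 0.8},
    },
}


def select_gesture(intention: str) -> dict:
    """Select a symbolic prototype from the lexicon for a given intention."""
    return LEXICON[intention]


def instantiate(gesture: dict, speech_duration: float, expr: Expressivity):
    """Turn a symbolic prototype into timed joint keyframes, fitted to the
    speech span and modulated by the expressivity parameters."""
    # Repetitivity duplicates the stroke phase.
    phases = []
    for name, dur in gesture["phases"]:
        reps = expr.repetitivity if name == "stroke" else 1
        phases.extend([(name, dur)] * reps)
    # Temporal extension stretches timing; rescale so the gesture ends with speech.
    total = sum(d for _, d in phases) * expr.temporal_extension
    rescale = speech_duration / total
    keyframes, t = [], 0.0
    for name, dur in phases:
        t += dur * expr.temporal_extension * rescale
        # Spatial extension scales joint amplitudes during the stroke.
        amp = expr.spatial_extension if name == "stroke" else 1.0
        joints = {j: round(v * amp, 3) for j, v in gesture["joints"].items()}
        keyframes.append((round(t, 2), name, joints))
    return keyframes


if __name__ == "__main__":
    gesture = select_gesture("greeting")
    expr = Expressivity(spatial_extension=1.3, repetitivity=2)
    for keyframe in instantiate(gesture, speech_duration=2.0, expr=expr):
        print(keyframe)
```

On a physical Nao, such keyframes would then be handed to the motor layer (for example, NAOqi's ALMotion.angleInterpolation), where fluidity and power could shape the interpolation profile; here they are left as placeholders.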

Citation

Le, Q. A., & Pelachaud, C. (2011). Expressive gesture model for humanoid robot. In Lecture Notes in Computer Science (Vol. 6975, pp. 224–231). Springer. https://doi.org/10.1007/978-3-642-24571-8_24
