Towards a Framework for Social Robot Co-speech Gesture Generation with Semantic Expression


Abstract

Social robots need the ability to express semantic co-speech gestures in an appropriate manner to enhance human–robot interaction. However, most learning-based methods for robot gesture generation are unsatisfactory at expressing semantic gestures: many generated gestures are ambiguous, making it difficult for them to deliver semantic meanings accurately. In this paper, we propose a robot gesture generation framework that effectively improves the semantic gesture expression ability of social robots. In this framework, the semantic words in a sentence are selected and expressed through clear, understandable co-speech gestures with appropriate timing. To test the proposed method, we designed an experiment and conducted a user study. The results show that the gestures generated by the proposed method significantly outperform the baseline gestures on three evaluation factors: human-likeness, naturalness, and ease of understanding.

Citation (APA)

Zhang, H., Yu, C., & Tapus, A. (2022). Towards a Framework for Social Robot Co-speech Gesture Generation with Semantic Expression. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13817 LNAI, pp. 110–119). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-24667-8_10
