Collection and analysis of multimodal interaction in direction-giving dialogues: Towards an automatic gesture selection mechanism for metaverse avatars

Abstract

With the aim of building a spatial gesture generation mechanism for Metaverse avatars, we report on an empirical study of multimodal direction-giving dialogues and propose a prototype system for gesture generation. First, we conducted an experiment in which a direction receiver asked for directions to a location on a university campus and the direction giver provided them. Then, using a machine learning technique, we automatically annotated the direction giver's right-hand gestures and analyzed the distribution of gesture directions. As a result, we proposed four types of proxemics and found that the distribution of gesture directions differs with the type of proxemics between the conversational participants. Finally, we implemented the gesture generation mechanism in a Metaverse application and demonstrate an example. © 2012 Springer-Verlag.
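The core idea reported in the abstract is that the avatar's gesture direction is conditioned on the proxemics type between the two participants. The following Python sketch illustrates that selection scheme; it is not the authors' implementation. The proxemics labels and the probability values are illustrative assumptions, since the abstract names neither the four types nor the corpus statistics.

```python
import random

# Hypothetical labels for the four proxemics types; the abstract reports
# four types but does not name them.
PROXEMICS_TYPES = ["face_to_face", "side_by_side", "right_angle", "diagonal"]

# Hypothetical categorical distributions over coarse gesture directions,
# one per proxemics type. The real distributions would come from the
# annotated direction-giving corpus.
GESTURE_DIRECTION_DIST = {
    "face_to_face": {"left": 0.30, "right": 0.30, "front": 0.25, "up": 0.15},
    "side_by_side": {"left": 0.15, "right": 0.15, "front": 0.55, "up": 0.15},
    "right_angle":  {"left": 0.40, "right": 0.20, "front": 0.25, "up": 0.15},
    "diagonal":     {"left": 0.25, "right": 0.35, "front": 0.25, "up": 0.15},
}

def select_gesture_direction(proxemics: str) -> str:
    """Sample a right-hand gesture direction conditioned on the
    proxemics type of the conversational participants."""
    dist = GESTURE_DIRECTION_DIST[proxemics]
    directions = list(dist.keys())
    weights = list(dist.values())
    return random.choices(directions, weights=weights, k=1)[0]

if __name__ == "__main__":
    for p in PROXEMICS_TYPES:
        print(p, "->", select_gesture_direction(p))
```

In a Metaverse client, a rule of this shape would sit between the dialogue manager (which decides that a pointing gesture is needed) and the avatar animation layer (which realizes the sampled direction as arm motion).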

Citation (APA)

Tsukamoto, T., Muroya, Y., Okamoto, M., & Nakano, Y. (2012). Collection and analysis of multimodal interaction in direction-giving dialogues: Towards an automatic gesture selection mechanism for metaverse avatars. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7471 LNAI, pp. 94–105). https://doi.org/10.1007/978-3-642-32326-3_6
