Who, me? How virtual agents can shape conversational footing in virtual reality

13 citations · 29 Mendeley readers

Abstract

The nonverbal behaviors of conversational partners reflect their conversational footing, signaling who in the group are the speakers, addressees, bystanders, and overhearers. Many applications of virtual reality (VR) will involve multiparty conversations with virtual agents and avatars of others where appropriate signaling of footing will be critical. In this paper, we introduce computational models of gaze and spatial orientation that a virtual agent can use to signal specific footing configurations. An evaluation of these models through a user study found that participants conformed to conversational roles signaled by the agent and contributed to the conversation more as addressees than as bystanders. We observed these effects in immersive VR, but not on a 2D display, suggesting an increased sensitivity to virtual agents’ footing cues in VR-based interfaces.
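
The abstract refers to computational models of gaze and spatial orientation that signal footing configurations. The sketch below is only an illustrative guess at how such cues might be parameterized, not the authors' published models; the role taxonomy mapping, gaze-time weights, and all names are assumptions introduced here for clarity.

```python
"""Hypothetical sketch: mapping a footing configuration (addressees vs.
bystanders) to gaze-time proportions and agent body orientation.
Weights and function names are illustrative assumptions, not values
from Pejsa et al. (2017)."""

import math
from dataclasses import dataclass


@dataclass
class Participant:
    name: str
    role: str            # "addressee" or "bystander" (assumed role labels)
    position: tuple      # (x, z) location on the floor plane


# Assumed relative gaze-time weights: addressees receive most of the agent's
# gaze, bystanders only occasional glances. Placeholder numbers.
GAZE_WEIGHTS = {"addressee": 1.0, "bystander": 0.25}


def gaze_schedule(participants):
    """Return the fraction of gaze time allotted to each participant."""
    raw = {p.name: GAZE_WEIGHTS[p.role] for p in participants}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}


def body_orientation(agent_pos, participants):
    """Yaw (radians) that turns the agent's torso toward the centroid of its
    addressees, implicitly excluding bystanders from the formation."""
    addressees = [p for p in participants if p.role == "addressee"] or participants
    cx = sum(p.position[0] for p in addressees) / len(addressees)
    cz = sum(p.position[1] for p in addressees) / len(addressees)
    return math.atan2(cx - agent_pos[0], cz - agent_pos[1])


if __name__ == "__main__":
    group = [
        Participant("user_A", "addressee", (1.0, 2.0)),
        Participant("user_B", "bystander", (-1.5, 1.0)),
    ]
    print(gaze_schedule(group))                 # e.g. {'user_A': 0.8, 'user_B': 0.2}
    print(body_orientation((0.0, 0.0), group))  # yaw toward the addressee
```

In this toy formulation, changing a participant's role from bystander to addressee both increases their share of agent gaze and pulls the agent's body orientation toward them, which is the general kind of footing signal the study evaluates.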

Citation (APA)

Pejsa, T., Gleicher, M., & Mutlu, B. (2017). Who, me? How virtual agents can shape conversational footing in virtual reality. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10498 LNAI, pp. 347–359). Springer Verlag. https://doi.org/10.1007/978-3-319-67401-8_45
