Eliciting Multimodal Gesture+Speech Interactions in a Multi-Object Augmented Reality Environment

Abstract

As augmented reality (AR) technology and hardware become more mature and affordable, researchers have been exploring more intuitive and discoverable interaction techniques for immersive environments. This paper investigates multimodal interaction for 3D object manipulation in a multi-object AR environment. To identify user-defined gestures, we conducted an elicitation study with 24 participants and 22 referents using an augmented reality headset. The study yielded 528 proposals; after binning and ranking them, we derived a winning set of 25 gestures. We found that for the same task, participants preferred the same gesture for one- and two-object manipulation, although they used both hands in the two-object scenario. We present the gesture and speech results and the differences compared with similar studies in a single-object AR environment. The study also explored the association between speech expressions and gesture strokes during object manipulation, which could improve recognizer efficiency in augmented reality headsets.
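
The abstract does not detail the binning and ranking procedure, but elicitation studies of this kind commonly score consensus per referent with the agreement rate of Vatavu and Wobbrock (2015). The sketch below computes that rate for a single referent; the gesture labels and counts are hypothetical illustrations, not data from the paper.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) = sum(|Pi| * (|Pi| - 1)) / (|P| * (|P| - 1)),
    where each Pi is a bin of identical proposals for one referent
    (Vatavu & Wobbrock, 2015)."""
    n = len(proposals)
    if n < 2:
        return float(n)  # 1.0 for a single proposal, 0.0 for none
    bins = Counter(proposals)
    return sum(k * (k - 1) for k in bins.values()) / (n * (n - 1))

# Hypothetical bins for a "move" referent from 24 participants.
move_proposals = ["grab-drag"] * 15 + ["point-drag"] * 6 + ["push"] * 3
print(f"AR(move) = {agreement_rate(move_proposals):.2f}")  # AR(move) = 0.45
```

Ranking referents by this score and taking the largest bin per referent is one common way such studies arrive at a winning gesture set.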
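On the recognizer-efficiency point, one plausible reading is that a co-occurring speech onset can gate gesture recognition, so the recognizer only commits to a command when both modalities align in time. The following is a minimal sketch of that idea under an assumed 0.5 s alignment window; the event format, window size, and labels are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Onset:
    kind: str   # "speech" or "stroke"
    t: float    # onset time in seconds

def aligned_pairs(onsets, window=0.5):
    """Pair each speech onset with any gesture-stroke onset occurring
    within `window` seconds of it; unpaired strokes are ignored, which
    is where the hypothesized efficiency gain would come from."""
    speech = [o.t for o in onsets if o.kind == "speech"]
    strokes = [o.t for o in onsets if o.kind == "stroke"]
    return [(s, g) for s in speech for g in strokes if abs(s - g) <= window]

onsets = [Onset("speech", 1.20), Onset("stroke", 1.45), Onset("stroke", 3.90)]
print(aligned_pairs(onsets))  # [(1.2, 1.45)] -- the stroke at 3.9 s is skipped
```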

Citation (APA)

Zhou, X., Williams, A. S., & Ortega, F. R. (2022). Eliciting Multimodal Gesture+Speech Interactions in a Multi-Object Augmented Reality Environment. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST. Association for Computing Machinery. https://doi.org/10.1145/3562939.3565637
