Exploring cross-modal training via touch to learn a mid-air marking menu gesture set

Abstract

While mid-air gestures are an attractive input modality with an extensive research history, one challenge to their adoption is that the gestures are not self-revealing. Scaffolding techniques to teach these gestures are difficult to implement because the input device, e.g., a hand, wand, or arm, cannot present the gestures to the user. In contrast, for touch gestures, feedforward mechanisms (such as Marking Menus or OctoPocus) have been shown to effectively support user awareness and learning. In this paper, we explore whether touch gesture input can be leveraged to teach users to perform mid-air gestures. We show that knowledge of marking menu touch gestures transfers directly to mid-air gestures, allowing users to perform these gestures without intervention. We argue that cross-modal learning can be an effective mechanism for introducing users to mid-air gestural input.
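
For readers unfamiliar with the technique, a marking menu assigns each menu item a short, straight stroke in a fixed compass direction, so recognizing a selection reduces to binning the stroke's angle. The minimal Python sketch below illustrates this idea; it is our own illustration under that assumption, not the recognizer used in the paper, and every name in it is hypothetical.

    import math

    # Hypothetical sketch (not the authors' implementation): classify a
    # single marking-menu stroke into one of eight compass directions.
    DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

    def classify_stroke(start, end):
        """Return the marking-menu direction for a stroke from start to end.

        start, end: (x, y) points in screen coordinates (y grows downward).
        """
        dx = end[0] - start[0]
        dy = start[1] - end[1]  # flip y so "up" is positive
        angle = math.degrees(math.atan2(dy, dx)) % 360
        # Each 45-degree octant is centred on one compass direction.
        return DIRECTIONS[int((angle + 22.5) // 45) % 8]

    # Example: a stroke up and to the right selects the "NE" item.
    print(classify_stroke((100, 100), (160, 40)))  # -> "NE"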

Citation (APA)

Henderson, J., Mizobuchi, S., Li, W., & Lank, E. (2019). Exploring cross-modal training via touch to learn a mid-air marking menu gesture set. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2019). Association for Computing Machinery. https://doi.org/10.1145/3338286.3340119
