TapNet: The Design, Training, Implementation, and Applications of a Multi-Task Learning CNN for Off-Screen Mobile Input

Abstract

To make off-screen interaction without specialized hardware practical, we investigate using deep learning methods to process the common built-in IMU sensors (accelerometer and gyroscope) on mobile phones into a useful set of one-handed interaction events. We present the design, training, implementation, and applications of TapNet, a multi-task network that detects tapping on the smartphone. With phone form factor as auxiliary information, TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location. We developed two datasets comprising over 135K training samples and 38K testing samples, collected from 32 participants in total. Experimental evaluation demonstrated the effectiveness of the TapNet design and its significant improvement over the state of the art. Along with the datasets, codebase, and extensive experiments, TapNet establishes a new technical foundation for off-screen mobile input.
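The abstract describes the architecture concretely enough to sketch. Below is a minimal, hypothetical PyTorch illustration of a multi-task CNN of this kind: a shared convolutional trunk over a six-channel accelerometer-plus-gyroscope window, a learned embedding of the phone form factor as auxiliary input, and separate classification heads for tap detection, tap direction, and tap location. Every name, layer size, and class count here (TapNetSketch, num_forms, the 2-way and 5-way heads) is an assumption for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class TapNetSketch(nn.Module):
    """Illustrative multi-task 1D CNN over IMU windows (sketch, not the paper's code).

    Inputs:  x       -- (batch, 6, T) window of accel + gyro samples (3 axes each)
             form_id -- (batch,) integer id of the phone form factor
    Outputs: dict of logits for tap detection, tap direction, and tap location.
    """

    def __init__(self, num_forms=4, num_directions=2, num_locations=5):
        super().__init__()
        # Shared convolutional trunk over the 6-channel IMU signal.
        self.trunk = nn.Sequential(
            nn.Conv1d(6, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, 64, 1)
        )
        # Form factor enters as auxiliary information via a learned embedding.
        self.form_embed = nn.Embedding(num_forms, 16)
        # One head per task, all sharing the trunk features.
        self.detect_head = nn.Linear(64 + 16, 2)  # tap vs. no tap
        self.direction_head = nn.Linear(64 + 16, num_directions)
        self.location_head = nn.Linear(64 + 16, num_locations)

    def forward(self, x, form_id):
        feats = self.trunk(x).squeeze(-1)  # (batch, 64)
        feats = torch.cat([feats, self.form_embed(form_id)], dim=1)
        return {
            "tap": self.detect_head(feats),
            "direction": self.direction_head(feats),
            "location": self.location_head(feats),
        }


if __name__ == "__main__":
    model = TapNetSketch()
    imu = torch.randn(8, 6, 128)  # batch of 128-sample IMU windows
    form = torch.zeros(8, dtype=torch.long)
    out = model(imu, form)
    print({k: v.shape for k, v in out.items()})
```

During training, one would sum a per-head cross-entropy loss so the shared trunk learns features useful for all three tasks; sharing the trunk while conditioning on form factor is what would let a single model serve multiple devices, in the spirit of the joint cross-device learning the abstract describes.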

Citation (APA)

Huang, M. X., & Li, Y. (2021). TapNet: The design, training, implementation, and applications of a multi-task learning CNN for off-screen mobile input. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3411764.3445626
