Decoding surface touch typing from hand-tracking

Abstract

We propose a novel text decoding method that enables touch typing on an uninstrumented flat surface. Rather than relying on physical keyboards or capacitive touch, our method takes as input the hand motion of the typist, obtained through hand-tracking, and decodes this motion directly into text. We use a temporal convolutional network to represent a motion model that maps the hand motion, represented as a sequence of hand pose features, into text characters. To enable touch typing without the haptic feedback of a physical keyboard, we must address the more erratic typing motion that results from finger drift. We therefore incorporate a language model as a text prior and use beam search to efficiently combine our motion and language models to decode text from erratic or ambiguous hand motion. We collected a dataset of 20 touch typists and evaluated our model against several baselines, including contact-based text decoding and typing on a physical keyboard. Our proposed method leverages continuous hand pose information to decode text more accurately than contact-based methods, and an offline study shows parity (73 WPM, 2.38% UER) with typing on a physical keyboard. Our results show that hand-tracking has the potential to enable rapid text entry in mobile environments.
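
To illustrate the decoding idea described above, here is a minimal Python sketch of a beam search that fuses per-keystroke character probabilities from a motion model with a character-level language-model prior. This is not the authors' implementation: the input format, the lm_weight interpolation factor, and the toy language model are all illustrative assumptions.

import math
from typing import Callable, Sequence

def beam_search_decode(
    motion_logprobs: Sequence[dict],          # one dict per keystroke event:
                                              # char -> log P(char | hand motion)
    lm_logprob: Callable[[str, str], float],  # (prefix, char) -> log P(char | prefix)
    beam_width: int = 8,
    lm_weight: float = 0.5,                   # interpolation weight (assumed)
) -> str:
    beams = [("", 0.0)]  # (decoded prefix, cumulative log score)
    for frame in motion_logprobs:
        candidates = []
        for prefix, score in beams:
            for char, motion_lp in frame.items():
                # Combine motion evidence with the language-model prior.
                combined = motion_lp + lm_weight * lm_logprob(prefix, char)
                candidates.append((prefix + char, score + combined))
        # Keep only the top-scoring hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]

if __name__ == "__main__":
    # Toy language model standing in for a real text prior:
    # it strongly favors "e" after "h", disambiguating "he" from "je".
    def toy_lm(prefix: str, char: str) -> float:
        return math.log(0.9) if (prefix, char) == ("h", "e") else math.log(0.1)

    frames = [
        {"h": math.log(0.5), "j": math.log(0.5)},  # ambiguous keystroke
        {"e": math.log(0.9), "w": math.log(0.1)},
    ]
    print(beam_search_decode(frames, toy_lm))  # -> "he"

The example shows why the language prior matters: the motion model alone cannot distinguish the ambiguous first keystroke, but the prior tips the beam toward the more plausible text.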

Citation (APA)

Richardson, M., Durasoff, M., & Wang, R. (2020). Decoding surface touch typing from hand-tracking. In UIST 2020 - Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 686–696). Association for Computing Machinery, Inc. https://doi.org/10.1145/3379337.3415816
