Active Stylus Input Latency Compensation on Touch Screen Mobile Devices


Abstract

Input latency adversely affects user experience on touchscreen devices, especially during pointing tasks such as writing or drawing with a stylus. To address this problem, we build on the deep-learning latency compensation approach, currently considered the most effective, and propose a GRU-CNN architecture that predicts the future position of the stylus nib more accurately from the sequence of the latest input events. To improve prediction accuracy, we minimize a custom loss that estimates not only the distance but also the directional proximity of actual touches to predicted stylus positions. Accuracy is further improved by using pen-specific features reported by an active stylus (tilt, orientation, and pressure values). Experiments show that models with the proposed GRU-CNN architecture achieve prediction errors of 0.07, 0.24, and 0.47 mm, which are 9.4, 5.3, and 3.8 times lower than those of state-of-the-art LSTM-based models, for prediction horizons of ~16.6, ~33.3, and 50 ms respectively. The proposed solution provides low-latency interaction in real time (about 4 ms on a Galaxy Note 9) at no cost in hardware complexity.
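The abstract does not give the exact formula for the custom loss, only that it combines distance and direction proximity between predicted and actual stylus positions. A minimal sketch of one plausible formulation is shown below, pairing Euclidean distance with a cosine-based direction penalty computed from the movement vectors since the previous touch point; the function name, the weighting factor `alpha`, and the epsilon guard are assumptions, not the authors' definition:

```python
import math

def direction_aware_loss(pred, target, prev, alpha=0.5):
    """Hypothetical distance-plus-direction loss sketch.

    pred, target, prev: lists of (x, y) points — the predicted positions,
    the actual touch positions, and the preceding touch positions used to
    form movement vectors. alpha weights the direction term (assumed).
    """
    total = 0.0
    for (px, py), (tx, ty), (ox, oy) in zip(pred, target, prev):
        dist = math.hypot(px - tx, py - ty)        # positional error (mm/px)
        vpx, vpy = px - ox, py - oy                # predicted movement vector
        vtx, vty = tx - ox, ty - oy                # actual movement vector
        denom = math.hypot(vpx, vpy) * math.hypot(vtx, vty) + 1e-8
        cos_sim = (vpx * vtx + vpy * vty) / denom  # 1.0 when directions agree
        total += dist + alpha * (1.0 - cos_sim)    # direction term vanishes when aligned
    return total / len(pred)
```

A perfect prediction (right position, right direction) drives both terms to zero, while a prediction at the correct distance but in the wrong direction is still penalized — the property the authors describe as improving over a pure distance loss.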

Citation (APA)

Kushnirenko, R., Alkhimova, S., Sydorenko, D., & Tolmachov, I. (2020). Active Stylus Input Latency Compensation on Touch Screen Mobile Devices. In Communications in Computer and Information Science (Vol. 1224 CCIS, pp. 245–253). Springer. https://doi.org/10.1007/978-3-030-50726-8_32
