Categorizing touch-input locations from touchscreen device interfaces via on-board mechano-acoustic transducers


Abstract

Many mobile electronic devices, including smartphones and tablets, require the user to interact physically with the device by tapping the touchscreen. Conveniently, these compact devices are also equipped with high-precision on-board transducers, such as accelerometers and microphones, that are mechanically integrated into the device to support a range of user functionalities. However, unintended access to these transducer signals (bypassing normal on-board data access controls) may allow sensitive user-interaction information to be detected and thereby exploited. In this study, we show that acoustic features extracted from the on-board microphone signals, supplemented with accelerometer and gyroscope signals, can be used together with machine learning techniques to determine the user's touch-input location on a touchscreen: our ensemble model, a random forest, predicts the touch-input location with up to 86% accuracy in a realistic scenario. Accordingly, we present the approach and techniques used and the performance of the model developed, and we discuss limitations and possible mitigation methods to thwart exploitation of such unintended signal channels.
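The attack pipeline described above can be sketched with scikit-learn. The sketch below is illustrative only, not the authors' actual implementation: the feature vectors are synthetic stand-ins, and the number of touch locations, taps, and features are assumed values. In the real system, each per-tap feature vector would be extracted from the microphone, accelerometer, and gyroscope recordings of the tap.

```python
# Hypothetical sketch of touch-location classification with a random forest.
# All data here is synthetic: each touch location is given a distinct feature
# centroid plus noise, standing in for real acoustic/inertial tap features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_locations = 9        # assumed: a 3x3 grid of touchscreen regions
n_taps_per_loc = 100   # assumed number of recorded taps per region
n_features = 16        # assumed per-tap feature vector length

# Synthetic feature vectors: one noisy cluster per touch location.
centroids = rng.normal(size=(n_locations, n_features))
X = np.vstack([c + 0.5 * rng.normal(size=(n_taps_per_loc, n_features))
               for c in centroids])
y = np.repeat(np.arange(n_locations), n_taps_per_loc)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"touch-location accuracy: "
      f"{accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

With real sensor data the achievable accuracy depends on feature quality and recording conditions; the paper reports up to 86% in a realistic scenario.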

Citation (APA)

Teo, K. R., Balamurali, B. T., Zhou, J., & Chen, J. M. (2021). Categorizing touch-input locations from touchscreen device interfaces via on-board mechano-acoustic transducers. Applied Sciences (Switzerland), 11(11). https://doi.org/10.3390/app11114834
