No interface, no problem: Gesture recognition on physical objects using radar sensing

Abstract

Physical objects are usually not designed with interaction capabilities to control digital content. Nevertheless, they are an untapped source of interactions, since every object could be used to control our digital lives. We call this the missing interface problem: instead of embedding computational capacity into objects, we can simply detect users’ gestures on them. However, gesture detection on such unmodified objects has to date been limited in spatial resolution and detection fidelity. To address this gap, we conducted research on micro-gesture detection on physical objects based on Google Soli’s radar sensor. We introduced two novel deep learning architectures to process range-Doppler images, namely a three-dimensional convolutional neural network (Conv3D) and a spectrogram-based ConvNet. The results show that our architectures enable robust on-object gesture detection, achieving an accuracy of approximately 94% for a five-gesture set and surpassing previous state-of-the-art results by up to 39%. We also showed that the decibel (dB) Doppler range setting has a significant effect on system performance, as accuracy can vary by up to 20% across the dB range. As a result, we provide guidelines on how to best calibrate the radar sensor.
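The dB-range calibration the abstract highlights can be illustrated with a minimal sketch. The helper below (a hypothetical function, not the authors' code) converts a complex range-Doppler map to a dB-scaled image and clips it to a configurable dynamic-range window below the peak; changing `db_range` changes which low-power motion components survive in the input image, which is the kind of effect the paper's calibration guidelines address.

```python
import numpy as np

def to_db_image(rd_map, db_range=60.0):
    """Convert a complex range-Doppler map to a normalized dB image.

    rd_map:   2-D complex array (range bins x Doppler bins).
    db_range: dynamic range in dB kept below the peak; anything further
              below the peak is clipped. Hypothetical illustration of
              the dB Doppler range setting discussed in the abstract.
    """
    power = np.abs(rd_map) ** 2
    power = np.maximum(power, np.finfo(float).tiny)  # avoid log10(0)
    db = 10.0 * np.log10(power)
    peak = db.max()
    db = np.clip(db, peak - db_range, peak)          # apply the dB window
    return (db - (peak - db_range)) / db_range       # rescale to [0, 1]

# Toy example: a 4x4 map with one dominant reflector
rng = np.random.default_rng(0)
rd = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rd[1, 2] *= 100.0                                    # strong target
img = to_db_image(rd, db_range=40.0)
```

With a narrow `db_range`, weak returns collapse to 0 after clipping; with a wide one, noise is retained alongside the gesture signature, which is one plausible reason accuracy varies so strongly with this setting.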

Citation (APA)

Attygalle, N. T., Leiva, L. A., Kljun, M., Sandor, C., Plopski, A., Kato, H., & Čopič Pucihar, K. (2021). No interface, no problem: Gesture recognition on physical objects using radar sensing. Sensors, 21(17). https://doi.org/10.3390/s21175771
