Knock&Tap: Classification and Localization of Knock and Tap Gestures using Deep Sound Transfer Learning

Abstract

Gesture interaction is considered one of the most promising approaches to controlling smart devices. In this paper, we present Knock&Tap, an audio-based approach that performs gesture classification and gesture localization using deep transfer learning. Knock&Tap consists of a single 4-microphone array that records the sound of the user's knock and tap gestures on a wood or glass panel. Knock&Tap can be used in situations or environments where vision-based gesture recognition is infeasible due to lighting conditions or camera-installation constraints. We conducted various experiments to validate the feasibility of Knock&Tap with 7 gesture types on both wood and glass panels. Our experimental results show that Knock&Tap predicts the gesture type and location with accuracies of up to 97.24% and 92.05%, respectively.
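The localization half of the pipeline relies on a single 4-microphone array. The paper learns the knock/tap location with a deep model; as a self-contained illustration of how a microphone pair constrains a sound's position, the sketch below estimates the time difference of arrival (TDOA) between two channels with GCC-PHAT cross-correlation. The signals, sample rate, and injected delay are synthetic assumptions for illustration, not the paper's data or method:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.size + ref.size                 # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                  # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Reorder so index 0 is lag -max_shift, center is lag 0
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                       # delay in seconds (positive: sig lags ref)

# Hypothetical knock transient: a decaying 800 Hz burst at 16 kHz
fs = 16000
t = np.arange(0, 0.05, 1 / fs)
knock = np.exp(-200 * t) * np.sin(2 * np.pi * 800 * t)

# Simulate two microphones: channel 2 hears the knock 12 samples later
delay_samples = 12
mic1 = np.concatenate((knock, np.zeros(100)))
mic2 = np.concatenate((np.zeros(delay_samples), knock,
                       np.zeros(100 - delay_samples)))

tau = gcc_phat(mic2, mic1, fs)
print(round(tau * fs))                      # peak at the injected 12-sample delay
```

With four microphones, three such pairwise delays constrain the source position on the panel; a learned model, as in the paper, can instead map the raw audio to a location label directly.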

Citation (APA)

Jeong, J. Y., Kim, J. H., Yoon, H. Y., & Jeong, J. W. (2021). Knock&Tap: Classification and Localization of Knock and Tap Gestures using Deep Sound Transfer Learning. In ICMI 2021 Companion - Companion Publication of the 2021 International Conference on Multimodal Interaction (pp. 1–6). Association for Computing Machinery, Inc. https://doi.org/10.1145/3461615.3485428
