Echo-ID: Smart user identification leveraging inaudible sound signals


Abstract

In this article, we present a novel user identification mechanism for smart spaces called Echo-ID (referred to as E-ID). Our solution relies on inaudible sound signals to capture the user's behavioral tapping/typing characteristics while s/he types a PIN on a PIN pad, and uses them to identify the corresponding user from a set of N enrolled inhabitants. E-ID proposes an all-inclusive pipeline that generates and transmits appropriate sound signals and extracts a user-specific imprint (E-Sign) from the recorded signals. To accurately identify the corresponding user given an E-Sign sample, E-ID combines deep learning (a CNN for feature extraction) with an SVM classifier (for the identification decision). We implemented a proof of concept of E-ID using a commodity speaker and microphone. Our evaluations revealed that E-ID can identify users with an average accuracy ranging from 93% (2 enrolled subjects) to 78% (5 enrolled subjects).

Citation (APA)

Ali Shah, S. W., Shaghaghi, A., Kanhere, S. S., Zhang, J., Anwar, A., & Doss, R. (2020). Echo-ID: Smart user identification leveraging inaudible sound signals. IEEE Access, 8, 194508–194522. https://doi.org/10.1109/ACCESS.2020.3031899
