With the continuing growth of voice-controlled devices, voice biometrics have been widely used for user identification. However, voice biometrics are vulnerable to replay attacks and ambient noise. We identify that the fundamental vulnerability of voice biometrics is rooted in its indirect sensing modality (i.e., the microphone). In this paper, we present VocalPrint, a resilient mmWave interrogation system that directly captures and analyzes vocal vibrations for user authentication. Specifically, VocalPrint exploits the unique disturbance, caused by vocal vibrations during speech, of the skin-reflected radio frequency (RF) signals around the user's near-throat region. Complex ambient noise is isolated from the RF signal using a novel resilience-aware clutter suppression approach that preserves fine-grained vocal biometric properties. Afterward, we extract text-independent vocal tract and vocal source features and feed them to an ensemble classifier for user authentication. VocalPrint is practical: it relies on low-cost, portable, and energy-efficient hardware that allows an effortless transition to a smartphone, and its non-contact nature offers usability comparable to typical voice authentication systems. Our experimental results from 41 participants across different interrogation distances, orientations, and body motions show that VocalPrint achieves over 96% authentication accuracy even under unfavorable conditions. We also demonstrate the resilience of our system against complex noise interference and spoofing attacks at various threat levels.
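The abstract only summarizes the processing pipeline (clutter-suppressed RF signal → text-independent vocal tract and vocal source features → ensemble classifier). The sketch below is a hypothetical illustration of the final classification stage only, not the authors' implementation: the feature dimensionality, the choice of base learners (an SVM and a random forest combined by soft voting), the decision threshold, and the synthetic feature vectors are all assumptions made for the example.

```python
# Hypothetical sketch of the final stage of a VocalPrint-style pipeline:
# authenticate a user from per-utterance feature vectors with an ensemble
# classifier. Clutter suppression and feature extraction are NOT reproduced
# here; features are simulated with random numbers purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_FEATURES = 64  # assumed size of the combined vocal tract + vocal source vector

# Simulated enrollment data: label 1 = legitimate user, 0 = impostor.
X_user = rng.normal(loc=0.5, scale=1.0, size=(200, N_FEATURES))
X_other = rng.normal(loc=-0.5, scale=1.0, size=(200, N_FEATURES))
X = np.vstack([X_user, X_other])
y = np.array([1] * 200 + [0] * 200)

# Ensemble of two base classifiers combined by soft voting
# (the paper's actual base learners are not specified in the abstract).
ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X, y)

# Authenticate a new (simulated) utterance: accept if the predicted
# probability of the legitimate class exceeds a decision threshold.
probe = rng.normal(loc=0.5, scale=1.0, size=(1, N_FEATURES))
p_accept = ensemble.predict_proba(probe)[0, 1]
print("accept" if p_accept > 0.5 else "reject", f"(score={p_accept:.2f})")
```

Soft voting is used here only because it lets the two base learners contribute calibrated scores to a single acceptance probability; any other fusion rule would serve equally well for this illustration.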
Li, H., Xu, C., Rathore, A. S., Li, Z., Zhang, H., Song, C., … Xu, W. (2020). VocalPrint: Exploring a resilient and secure voice authentication via mmWave biometric interrogation. In SenSys 2020 - Proceedings of the 2020 18th ACM Conference on Embedded Networked Sensor Systems (pp. 312–325). Association for Computing Machinery, Inc. https://doi.org/10.1145/3384419.3430779