NASR: Nonauditory speech recognition with motion sensors in head-mounted displays

Abstract

With the growing popularity of Virtual Reality (VR), people spend more and more time wearing Head-Mounted Displays (HMDs) for immersive experiences. An HMD is physically attached to the wearer’s head so that head motion can be tracked. We find that its motion sensors can also detect subtle movements of the facial muscles, which, by the mechanism of phonation, are strongly related to speech. Inspired by this observation, we propose NonAuditory Speech Recognition (NASR), which uses motion sensors to recognize spoken words. Unlike most prior speech recognition work, which relies on a microphone to capture the audio signal for analysis, NASR is resistant to ambient acoustic noise because of its nonauditory mechanism. Because it does not use a microphone, it consumes less power and requires no special permissions in most operating systems. Moreover, NASR can be seamlessly integrated into existing speech recognition systems. In extensive experiments, NASR achieves up to 90.97% precision with an 82.98% recall rate for speech recognition.
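The abstract describes recognizing words from HMD motion-sensor signals that capture facial-muscle movement. The paper does not give implementation details here, so the following is only a minimal sketch of the general idea under stated assumptions: 3-axis motion windows are summarized by simple time-domain features, and words are classified with a nearest-centroid rule. All names (`extract_features`, `NearestCentroidClassifier`, the synthetic "word" signals) are hypothetical and are not NASR's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(window):
    """One window of 3-axis motion data (shape: samples x 3) ->
    per-axis mean, standard deviation, and peak-to-peak amplitude,
    i.e. a 9-dimensional feature vector."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        np.ptp(window, axis=0),
    ])

class NearestCentroidClassifier:
    """Minimal classifier: each word class is represented by the mean
    of its training feature vectors; prediction picks the closest one."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Euclidean distance from every sample to every class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]

# Synthetic stand-in for sensor windows: two hypothetical "words" whose
# facial-muscle motion signatures differ in amplitude.
def make_windows(n, amplitude):
    return [rng.normal(0.0, amplitude, size=(50, 3)) for _ in range(n)]

windows = make_windows(20, 0.1) + make_windows(20, 1.0)
labels = np.array([0] * 20 + [1] * 20)
X = np.stack([extract_features(w) for w in windows])

clf = NearestCentroidClassifier().fit(X, labels)
accuracy = float((clf.predict(X) == labels).mean())
print(f"training accuracy: {accuracy:.2f}")
```

A real system would also need segmentation of the continuous sensor stream into word windows and a far richer feature set or model; this sketch only illustrates why amplitude-level differences in motion signals can already separate classes.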

Citation (APA)

Gu, J., Shen, K., Wang, J., & Yu, Z. (2018). NASR: Nonauditory speech recognition with motion sensors in head-mounted displays. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10874 LNCS, pp. 754–759). Springer Verlag. https://doi.org/10.1007/978-3-319-94268-1_63
