Personalizing head related transfer functions for earables


Abstract

Head-related transfer functions (HRTFs) describe how sound signals bounce, scatter, and diffract when they arrive at the head and travel towards the ears. HRTFs produce distinct sound patterns that ultimately help the brain infer the spatial properties of the sound, such as its direction of arrival. If an earphone can learn the HRTF, it could apply the HRTF to any sound and make that sound appear directional to the user. For instance, a directional voice guide could help a tourist navigate a new city. While past works have estimated human HRTFs, an important gap lies in personalization. Today's HRTFs are global templates that are used in all products; since human HRTFs are unique, a global HRTF only offers a coarse-grained experience. This paper shows that by moving a smartphone around the head, combined with mobile acoustic communications between the phone and the earbuds, it is possible to estimate a user's personal HRTF. Our personalization system, UNIQ, combines techniques from channel estimation, motion tracking, and signal processing, with a focus on modeling signal diffraction on the curvature of the face. The results are promising and could open new doors into the rapidly growing space of immersive AR/VR, earables, smart hearing aids, etc.
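To make the core idea concrete: applying an HRTF to a sound is, in the time domain, a convolution of the mono signal with a per-ear head-related impulse response (HRIR). The sketch below illustrates this with made-up toy impulse responses (`hrir_left`, `hrir_right` are illustrative placeholders, not measured data and not part of the UNIQ system described in the paper):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear head-related impulse
    responses (HRIRs, the time-domain form of the HRTF) to produce
    a binaural stereo signal that appears to arrive from the
    direction at which the HRIR pair was measured."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Zero-pad the shorter channel so both have equal length.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right], axis=1)  # shape: (n_samples, 2)

# Toy example: the right ear's response is delayed and attenuated,
# crudely mimicking a source located to the listener's left.
mono = np.random.randn(1024)
hrir_left = np.array([1.0, 0.3, 0.1])
hrir_right = np.array([0.0, 0.0, 0.5, 0.2, 0.05])
stereo = spatialize(mono, hrir_left, hrir_right)
print(stereo.shape)  # (1028, 2)
```

A personalized HRTF, as this paper targets, replaces such generic impulse responses with ones estimated for the individual user's head.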

Cite

APA

Yang, Z., & Choudhury, R. R. (2021). Personalizing head related transfer functions for earables. In SIGCOMM 2021 - Proceedings of the ACM SIGCOMM 2021 Conference (pp. 137–150). Association for Computing Machinery, Inc. https://doi.org/10.1145/3452296.3472907
