HRTF Estimation in the Wild


Abstract

Head-Related Transfer Functions (HRTFs) play a crucial role in creating immersive spatial audio experiences. However, HRTFs differ significantly from person to person, and traditional methods for estimating personalized HRTFs are expensive, time-consuming, and require specialized equipment. We imagine a world where your personalized HRTF can be determined by capturing data through earbuds in everyday environments. In this paper, we propose a novel approach for deriving personalized HRTFs that relies only on in-the-wild binaural recordings and head-tracking data. By analyzing how sounds change as the user rotates their head across environments with different noise sources, we can accurately estimate their personalized HRTF. Our results show that our predicted HRTFs closely match ground-truth HRTFs measured in an anechoic chamber. Furthermore, listening studies demonstrate that our personalized HRTFs significantly improve sound localization and reduce front-back confusion in virtual environments. Our approach offers an efficient and accessible method for deriving personalized HRTFs and has the potential to greatly improve spatial audio experiences.
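For context on what an estimated HRTF is ultimately used for: spatial audio renderers apply an HRTF as a pair of head-related impulse responses (HRIRs), one per ear, convolved with a mono source. The sketch below is a minimal, generic illustration of that rendering step with hypothetical toy impulse responses, not the estimation method proposed in the paper:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with a pair of
    head-related impulse responses (HRIRs), the time-domain
    counterpart of an HRTF for one source direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy, hypothetical HRIRs (same length so the two channels align):
# the right ear hears the source slightly later and quieter,
# roughly mimicking a source to the listener's left.
hrir_l = np.array([1.0, 0.3, 0.0])
hrir_r = np.array([0.0, 0.6, 0.2])

rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
stereo = render_binaural(mono, hrir_l, hrir_r)
print(stereo.shape)  # (2, 1026)
```

Real renderers interpolate between many measured directions (and typically convolve in the frequency domain for speed), but the per-direction operation is this two-channel convolution.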

Citation (APA)

Jayaram, V., Kemelmacher-Shlizerman, I., & Seitz, S. M. (2023). HRTF Estimation in the Wild. In UIST 2023 - Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery, Inc. https://doi.org/10.1145/3586183.3606782
