Adaptive audio mixing for enhancing immersion in augmented reality audio games

Abstract

In this work we present an adaptive audio mixing technique to be implemented in the design of Augmented Reality Audio (ARA) systems. The content of such systems is delivered entirely through the acoustic channel: the real acoustic environment is mixed with a virtual soundscape and returned to the listener as a "pseudoacoustic" environment. We argue that the proposed adaptive mixing technique enhances user immersion in the augmented space in terms of the localization of sound objects. The need to optimize our ARA mixing engine emerged from our previous research, and more specifically from the analysis of the experimental results regarding the development of the Augmented Reality Audio Game (ARAG) "Audio Legends", which was tested in the field. The purpose of our new design was to aid sound localization, which is a crucial and demanding factor for delivering an immersive acoustic experience. We describe the adaptive mixing technique in depth along with the experimental test-bed. The results for the sound localization scenario indicate a substantial increase of 55 percent in accuracy compared to the legacy ARA mix model.
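
The abstract does not specify the rule that drives the adaptive mix, so the following Python sketch is only one plausible illustration: it assumes the balance between the hear-through (real) signal and the virtual soundscape is driven by the listener's distance to the currently active virtual sound object, ducking the real environment when the object is near so the virtual source is easier to localize. The function name, parameters (near_m, far_m, min_real_gain), and the distance-based rule are illustrative assumptions, not the method described in the paper.

    import numpy as np

    def adaptive_mix(real_frame, virtual_frame, distance,
                     near_m=1.0, far_m=10.0, min_real_gain=0.3):
        """Blend one audio frame of the hear-through (real) signal with the
        virtual soundscape.

        Assumption for illustration only: the closer the listener is to the
        active virtual sound object (distance, in metres), the more the real
        environment is attenuated, so the virtual source stands out and is
        easier to localize. near_m, far_m and min_real_gain are hypothetical
        tuning parameters, not values from the paper.
        """
        # Map distance to [0, 1]: 0 at/inside near_m, 1 at/beyond far_m.
        t = np.clip((distance - near_m) / (far_m - near_m), 0.0, 1.0)

        # Real-environment gain recovers toward 1.0 as the object moves away.
        real_gain = min_real_gain + (1.0 - min_real_gain) * t
        # Keep the virtual layer audible throughout, slightly reduced at range.
        virtual_gain = 1.0 - 0.5 * t

        mixed = real_gain * real_frame + virtual_gain * virtual_frame

        # Simple peak normalization to avoid clipping after summation.
        peak = np.max(np.abs(mixed))
        if peak > 1.0:
            mixed = mixed / peak
        return mixed

A per-frame gain rule like this is only a sketch of the general idea of adaptive ARA mixing; the paper evaluates its own mixing engine against a legacy ARA mix model in a sound localization scenario.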

Citation (APA)

Moustakas, K., Rovithis, E., Vogklis, K., & Floros, A. (2020). Adaptive audio mixing for enhancing immersion in augmented reality audio games. In ICMI 2020 Companion - Companion Publication of the 2020 International Conference on Multimodal Interaction (pp. 220–227). Association for Computing Machinery, Inc. https://doi.org/10.1145/3395035.3425325
