The realtime method based on audio scenegraph for 3D sound rendering

Abstract

Recent studies have shown that combining auditory and visual cues enhances the sense of immersion in virtual reality and interactive entertainment applications. However, real-time 3D audiovisual rendering is computationally expensive. In this paper, to reduce the real-time computation, we propose a novel framework for optimized 3D sound rendering built around an Audio Scenegraph, a structure that contains reduced 3D scene information together with the parameters needed to compute early sound reflections. In a precomputation phase, the framework performs both graphic reduction and sound-source reduction for an environment consisting of a complex 3D scene, a set of sound sources, and a listener. The complex 3D scene is first reduced to the set of facets that are significant for sound rendering, and the resulting scene is represented as an Audio Scenegraph. The graph is then transmitted to the sound engine, which clusters the sound sources to reduce the real-time cost of computing sound propagation. Source reduction requires estimating early-reflection times, applying a perceptual culling test, and clustering the sounds that can reach the facets of each subspace according to those estimates. In the real-time phase, given the listener's position, orientation, and subspace index, sounds inside the listener's subspace are rendered with the image method, while sounds outside it are rendered by assigning the clustered sounds to audio buffers. Because most per-sound computation is performed offline, the real-time cost remains stable even as the number of sounds grows: rendering time was nearly constant regardless of scene complexity, even with hundreds of sound sources. As future work, the perceptual acceptability of the grouping algorithm should be evaluated in a user study. © Springer-Verlag Berlin Heidelberg 2005.
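The abstract describes the Audio Scenegraph only at a high level: each subspace of the reduced scene holds the facets significant for sound rendering and the source clusters precomputed to reach them. As a rough illustration of that organization, here is a minimal Python sketch; all class and field names are hypothetical, not taken from the paper.

```python
# Minimal sketch of a hypothetical Audio Scenegraph layout (names are
# illustrative, not from the paper): each subspace keeps the acoustically
# significant facets plus the precomputed source clusters that reach it.
from dataclasses import dataclass, field

@dataclass
class Facet:
    """A polygon kept after graphic reduction, with the parameters
    needed to compute early reflections."""
    vertices: list        # 3D corner points of the facet
    normal: tuple         # unit normal, used by the image method
    absorption: float     # surface absorption coefficient in [0, 1]

@dataclass
class SourceCluster:
    """A group of distant sources played back through one buffer."""
    position: tuple       # representative position of the cluster
    source_ids: list      # ids of the original sources it replaces

@dataclass
class SubSpace:
    facets: list = field(default_factory=list)    # list of Facet
    clusters: list = field(default_factory=list)  # list of SourceCluster

@dataclass
class AudioScenegraph:
    subspaces: dict = field(default_factory=dict)  # subspace index -> SubSpace

    def lookup(self, subspace_index):
        # The real-time phase indexes the graph by the listener's subspace.
        return self.subspaces[subspace_index]
```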
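For in-subspace sounds the abstract relies on the image method, the standard technique of mirroring a source across each reflecting facet to obtain a virtual source whose straight-line path to the listener gives the early-reflection delay; these delays are also what the precomputation phase can estimate for perceptual culling. A minimal sketch of the first-order computation follows (the function name and parameters are illustrative, not the paper's API):

```python
# Standard first-order image-source computation: mirror the source across
# a facet's plane and take the straight-line distance from the mirrored
# (virtual) source to the listener as the reflected path length.
import numpy as np

def first_order_reflection(source, listener, facet_point, facet_normal,
                           speed_of_sound=343.0):
    """Return (delay_seconds, path_length) for one early reflection.

    source, listener, facet_point: 3D points as array-likes.
    facet_normal: unit normal of the reflecting facet.
    """
    s = np.asarray(source, dtype=float)
    l = np.asarray(listener, dtype=float)
    p = np.asarray(facet_point, dtype=float)
    n = np.asarray(facet_normal, dtype=float)

    # Mirror the source across the facet plane: s' = s - 2((s - p) . n) n
    image_source = s - 2.0 * np.dot(s - p, n) * n

    # The reflected path length equals the distance from the image source
    # to the listener; dividing by the speed of sound gives the
    # early-reflection delay.
    path_length = float(np.linalg.norm(image_source - l))
    return path_length / speed_of_sound, path_length
```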

Citation (APA)

Yi, J. S., Seong, S. J., & Nam, Y. H. (2005). The realtime method based on audio scenegraph for 3D sound rendering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3767 LNCS, pp. 720–730). https://doi.org/10.1007/11581772_63
