Interactive Sound Propagation and Rendering for Large Multi-Source Scenes

  • Schissler C
  • Manocha D

Abstract

We present an approach to generate plausible acoustic effects at interactive rates in large dynamic environments containing many sound sources. Our formulation combines listener-based backward ray tracing with sound source clustering and hybrid audio rendering to handle complex scenes. We introduce a new algorithm for dynamic late reverberation that performs high-order ray tracing from the listener against spherical sound sources. We achieve sublinear scaling with the number of sources by clustering distant sound sources and taking relative visibility into account. We also describe a hybrid convolution-based audio rendering technique that can process hundreds of thousands of sound paths at interactive rates. We demonstrate the performance of our approach on many indoor and outdoor scenes with up to 200 sound sources. In practice, our algorithm can compute more than 50 reflection orders at interactive rates on a multicore PC, and we observe a 5x speedup over prior geometric sound propagation algorithms.
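
Below is a minimal, hypothetical sketch of the source-clustering idea summarized in the abstract: sources whose directions from the listener fall within a small angular threshold are merged into one cluster, so distant groups of sources collapse into a single aggregate emitter and propagation cost scales with the cluster count rather than the source count. All names (Vec3, SoundSource, SourceCluster, clusterSources, angleThreshold) are illustrative and are not taken from the paper, which additionally accounts for relative visibility between sources.

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float len(const Vec3& v)                { return std::sqrt(dot(v, v)); }

// Sources are treated as points in this sketch; the paper models them as spheres.
struct SoundSource   { Vec3 position; };
struct SourceCluster { Vec3 centroid; std::vector<std::size_t> members; };

// Greedy angular clustering as seen from the listener. Nearby sources subtend
// large angles and tend to stay in their own clusters, while distant sources
// merge into shared clusters.
static std::vector<SourceCluster>
clusterSources(const std::vector<SoundSource>& sources,
               const Vec3& listener,
               float angleThreshold /* radians */)
{
    std::vector<SourceCluster> clusters;
    for (std::size_t i = 0; i < sources.size(); ++i) {
        const Vec3 dir = sub(sources[i].position, listener);
        bool placed = false;
        for (SourceCluster& c : clusters) {
            const Vec3 cdir = sub(c.centroid, listener);
            float cosA = dot(dir, cdir) / (len(dir) * len(cdir) + 1e-6f);
            cosA = std::fmax(-1.0f, std::fmin(1.0f, cosA));
            if (std::acos(cosA) < angleThreshold) {
                // Incrementally update the cluster centroid and record the member.
                const float n = static_cast<float>(c.members.size());
                c.centroid = {(c.centroid.x * n + sources[i].position.x) / (n + 1.0f),
                              (c.centroid.y * n + sources[i].position.y) / (n + 1.0f),
                              (c.centroid.z * n + sources[i].position.z) / (n + 1.0f)};
                c.members.push_back(i);
                placed = true;
                break;
            }
        }
        if (!placed)
            clusters.push_back({sources[i].position, {i}});
    }
    return clusters;
}

In an interactive system along these lines, clustering would be recomputed as the listener and sources move, and rays would be traced per cluster rather than per source; the specific update and visibility criteria used by the authors are described in the paper itself.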

Citation (APA)

Schissler, C., & Manocha, D. (2017). Interactive Sound Propagation and Rendering for Large Multi-Source Scenes. ACM Transactions on Graphics, 36(4), 1. https://doi.org/10.1145/3072959.2943779
