Frame-independent and parallel method for 3D audio real-time rendering on mobile devices

Abstract

As 3D audio is a fundamental medium of virtual reality (VR), real-time 3D audio rendering is essential for implementing VR, especially on mobile devices. However, the computational load of 3D audio rendering is too high for the limited processing power of mobile devices. To address this problem, we propose a frame-independent, parallel method of framed convolution that parallelizes HRTF-based (head-related transfer function) 3D audio rendering. To remove the dependency that overlap-add convolution creates between adjacent frames, the overlapping part of each frame's convolution result is added into the final results of the two adjacent frames. Our method reduces the rendering time significantly: rendering si03.wav (27 s long) took 0.74, 0.5, and 0.36 times the playback duration on the Snapdragon 801, Kirin 935, and Helio X10 Turbo, respectively.
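The idea behind the framed convolution can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, frame length, and use of `np.convolve` are assumptions. Each frame is convolved with the HRIR independently (so the per-frame work could be distributed across threads), and each frame's overlapping tail is summed into the shared output, reproducing the full convolution.

```python
import numpy as np

def overlap_add_framed(signal, hrir, frame_len):
    """Convolve `signal` with an HRIR by framing (illustrative sketch).

    Each frame is convolved independently -- this loop body has no
    dependency on other frames, so it could run in parallel.  The tail
    of each frame's result spills into the region of the next frame
    and is accumulated into the shared output buffer.
    """
    n_out = len(signal) + len(hrir) - 1
    out = np.zeros(n_out)
    for start in range(0, len(signal), frame_len):
        frame = signal[start:start + frame_len]
        part = np.convolve(frame, hrir)        # independent per-frame work
        out[start:start + len(part)] += part   # overlap-add into output
    return out
```

Summing the tails this way yields exactly the same result as convolving the whole signal at once, which is why the frames can be processed out of order or concurrently.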

Citation (APA)

Song, Y., Wang, X., Yang, C., Gao, G., Chen, W., & Tu, W. (2017). Frame-independent and parallel method for 3D audio real-time rendering on mobile devices. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10133 LNCS, pp. 221–232). Springer Verlag. https://doi.org/10.1007/978-3-319-51814-5_19
