User-guided rendering of audio objects using an interactive genetic algorithm


Abstract

Object-based audio allows for personalization of content, for example to improve accessibility or, more generally, to increase quality of experience. This paper describes the design and evaluation of an interactive audio renderer that optimizes an audio mix based on listener feedback. A panel of 14 trained participants was recruited to trial the system. The range of audio mixes produced with the proposed system was comparable to the range achieved using a traditional fader-based mixing interface. Evaluation using the System Usability Scale showed a low level of physical and mental burden, making this a suitable interface for users with visual and/or mobility impairments.
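The paper does not publish its implementation, but the core idea of an interactive genetic algorithm for mixing can be sketched minimally: each genome is a vector of per-object gains, and the listener's rating of a candidate mix plays the role of the fitness function. The names `evolve_mix` and `rate_mix` below are hypothetical, not from the paper, and a real system would present mixes audibly rather than score them numerically.

```python
import random

def evolve_mix(n_objects, rate_mix, generations=20, pop_size=8,
               mutation_rate=0.2, seed=0):
    """Evolve per-object gain vectors with a simple interactive GA.
    `rate_mix(gains)` stands in for the listener: it returns a fitness
    score for a candidate mix (higher is better). Hypothetical sketch."""
    rng = random.Random(seed)
    # Initial population: random gains in [0, 1] for each audio object.
    pop = [[rng.random() for _ in range(n_objects)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=rate_mix, reverse=True)
        elite = scored[: pop_size // 2]           # keep the listener's favorites
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_objects)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_objects):            # small clamped mutations
                if rng.random() < mutation_rate:
                    child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=rate_mix)

# Example: a synthetic "listener" who prefers gains near a target mix.
target = [0.8, 0.3, 0.6]
best = evolve_mix(3, lambda g: -sum((gi - ti) ** 2 for gi, ti in zip(g, target)))
```

In the interactive setting described by the abstract, `rate_mix` would be replaced by auditioning each candidate mix to the user, which is what keeps the physical and cognitive load low compared with fader-based mixing.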

Citation (APA)

Wilson, A., & Fazenda, B. M. (2019). User-guided rendering of audio objects using an interactive genetic algorithm. AES: Journal of the Audio Engineering Society, 67(7–8), 522–530. https://doi.org/10.17743/jaes.2019.0035
