This paper proposes a neural network that applies audio transformations to user-specified sources (e.g., vocals) of a given audio track according to a textual description, while preserving the sources not mentioned in the description. Audio Manipulation on a Specific Source (AMSS) is challenging because a sound object (i.e., a waveform sample or frequency bin) is 'transparent': unlike a pixel in an image, it usually carries information from multiple sources. To address this problem, we propose AMSS-Net, which extracts latent sources and selectively manipulates them while preserving irrelevant sources. We also propose an evaluation benchmark for several AMSS tasks and show that AMSS-Net outperforms baselines on these tasks in terms of objective metrics and empirical verification.
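The task interface implied by the abstract (a mixed audio track plus a textual query in, a selectively manipulated track of the same length out) can be sketched as below. This is a minimal illustration only; `apply_amss` and the `model.manipulate` method are hypothetical names for an AMSS-Net-like system, not the API from the paper.

```python
import numpy as np

def apply_amss(model, mixture: np.ndarray, query: str) -> np.ndarray:
    """Apply an AMSS-style manipulation to a mixed audio track.

    mixture : mono (T,) or multi-channel (C, T) waveform of the full mix.
    query   : textual instruction, e.g. "apply light reverb to the vocals".

    Returns a waveform of the same shape in which only the source named in
    the query is transformed; sources not mentioned are preserved.

    `model` stands in for a trained AMSS-Net-like network; the `manipulate`
    method used here is an assumed interface, not the paper's.
    """
    assert mixture.ndim in (1, 2), "expected (T,) or (C, T) waveform"
    return model.manipulate(mixture, query)

# Illustrative usage (I/O helper and model are assumed, not from the paper):
# mixture = load_waveform("song.wav")                        # (C, T) float32
# output = apply_amss(model, mixture, "decrease the volume of the drums")
# output.shape == mixture.shape; non-drum sources are left untouched.
```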
Choi, W., Kim, M., Martínez Ramírez, M. A., Chung, J., & Jung, S. (2021). AMSS-Net: Audio Manipulation on User-Specified Sources with Textual Queries. In MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia (pp. 1775–1783). Association for Computing Machinery, Inc. https://doi.org/10.1145/3474085.3475323