Source separation using dilated time-frequency DenseNet for music identification in broadcast contents


Abstract

We propose a source separation architecture using a dilated time-frequency DenseNet for background music identification in broadcast content. We apply source separation techniques to mixed signals of music and speech. For source separation, we propose a new architecture that adds time-frequency dilated convolutions to the conventional DenseNet in order to effectively enlarge the receptive field of the separation network. In addition, we apply different convolutions to each frequency band of the spectrogram to reflect the distinct characteristics of the low- and high-frequency bands. To verify the performance of the proposed architecture, we perform singing-voice separation and music identification experiments. The results confirm that the proposed architecture achieves the best performance in both experiments, as its dilated convolutions capture wide contextual information.
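The abstract's central claim is that dilated convolutions enlarge the receptive field more efficiently than standard convolutions. As a hedged illustration (the kernel sizes and dilation rates below are made up for this sketch, not taken from the paper), the receptive field of a stack of stride-1 convolutional layers can be computed as follows: each layer with kernel size k and dilation d adds (k - 1) * d to the field.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked stride-1 conv layers.

    Each layer with kernel size k and dilation d widens the
    receptive field by (k - 1) * d positions.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3 layers without dilation: receptive field 7
print(receptive_field([3, 3, 3], [1, 1, 1]))   # -> 7

# Same depth with dilations 1, 2, 4: receptive field 15
print(receptive_field([3, 3, 3], [1, 2, 4]))   # -> 15
```

This shows why dilation helps here: with the same number of layers and parameters, exponentially increasing dilation rates let the network see a much wider time-frequency context, which the abstract credits for the improved separation and identification results.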

Citation (APA)

Heo, W. H., Kim, H., & Kwon, O. W. (2020). Source separation using dilated time-frequency DenseNet for music identification in broadcast contents. Applied Sciences (Switzerland), 10(5). https://doi.org/10.3390/app10051727
