MusicCNNs: A new benchmark on content-based music recommendation

Abstract

In this paper, we propose a new deep convolutional neural network for content-based music recommendation, called MusicCNNs. To learn effective representations of music segments, we have collected a data set of more than 600,000 songs, where each song has been split into about 20 music segments. The music segments are then converted to "images" using the Fourier transform, so that they can be fed directly into MusicCNNs. On this collected data set, we compared MusicCNNs with existing methods for content-based music recommendation. Experimental results show that MusicCNNs generally delivers more accurate recommendations than the compared methods. Therefore, along with the collected data set, MusicCNNs can be considered a new benchmark for content-based music recommendation.
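The paper does not specify the exact transform parameters, but the conversion it describes, turning a 1-D audio segment into a 2-D time-frequency "image" suitable for a CNN, is conventionally done with a short-time Fourier transform (STFT). The sketch below is a minimal, hypothetical illustration of that step using `scipy`; the sample rate, window length, and log compression are assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import stft

def segment_to_image(segment, sample_rate=22050, nperseg=512):
    """Convert a 1-D audio segment into a 2-D log-magnitude
    spectrogram "image" via the short-time Fourier transform.
    Parameter choices here are illustrative assumptions."""
    _, _, Zxx = stft(segment, fs=sample_rate, nperseg=nperseg)
    # Log-compress the magnitudes to tame the dynamic range,
    # a common preprocessing step before feeding a CNN.
    return np.log1p(np.abs(Zxx))

# Synthetic 3-second "segment": a 440 Hz tone plus light noise.
sr = 22050
t = np.arange(3 * sr) / sr
segment = np.sin(2 * np.pi * 440.0 * t) + 0.01 * np.random.randn(t.size)

image = segment_to_image(segment, sr)
# image is a (frequency_bins x time_frames) array, i.e. a single-channel
# "image" that a convolutional network can consume.
```

In a pipeline like the one described, each of the ~20 segments per song would be transformed this way, and the resulting spectrograms batched as CNN inputs.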

Citation (APA)

Zhong, G., Wang, H., & Jiao, W. (2018). MusicCNNs: A new benchmark on content-based music recommendation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11301 LNCS, pp. 394–405). Springer Verlag. https://doi.org/10.1007/978-3-030-04167-0_36
