ALF-200k: Towards extensive multimodal analyses of music tracks and playlists

Abstract

In recent years, approaches in music information retrieval have increasingly relied on multimodal analyses of music that incorporate both audio and lyrics features. Because most of these approaches lack reusable, high-quality datasets, in this work we propose ALF-200k, a novel, publicly available dataset comprising 176 audio and lyrics features for more than 200,000 tracks, together with their assignment to more than 11,000 user-created playlists. While the dataset is general purpose and may therefore be used in experiments on diverse music information retrieval problems, we present a first multimodal study of playlist features and analyze, in particular, which types of features are shared by the tracks within specific playlists and thus characterize them. We show that while acoustic features act as the major glue between the tracks contained in a playlist, lyrics features are also a powerful means to attribute tracks to playlists.
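The playlist analysis described in the abstract can be framed as a per-playlist membership task: given a track's audio and lyrics features, predict whether the track belongs to a given playlist, and then inspect which feature types drive the prediction. The sketch below illustrates this framing only; the synthetic data, feature dimensionality, and classifier choice are assumptions for illustration and do not reproduce the authors' actual pipeline.

```python
# Minimal sketch of a per-playlist membership experiment, assuming a feature
# matrix shaped like ALF-200k's (audio + lyrics features per track).
# Synthetic data stands in for the real dataset; all numbers are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_tracks, n_features = 2000, 176              # ALF-200k provides 176 features per track
X = rng.normal(size=(n_tracks, n_features))   # stand-in audio + lyrics feature vectors
y = rng.integers(0, 2, size=n_tracks)         # 1 = track is in the playlist, 0 = not

# How well do the features separate playlist members from non-members?
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"mean F1 over 5 folds: {scores.mean():.3f}")
```

With a trained model, per-feature importances (e.g. `clf.feature_importances_` after fitting) could then be aggregated by feature group to compare how much audio versus lyrics features contribute, mirroring the kind of comparison the abstract reports.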

Citation (APA)

Zangerle, E., Tschuggnall, M., Wurzinger, S., & Specht, G. (2018). ALF-200k: Towards extensive multimodal analyses of music tracks and playlists. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10772 LNCS, pp. 584–590). Springer Verlag. https://doi.org/10.1007/978-3-319-76941-7_48
