Combining features for cover song identification


Abstract

In this paper, we evaluate a set of methods for combining features for cover song identification. We first create multiple classifiers based on global tempo, duration, loudness, beats and chroma average features, training a random forest for each feature. Subsequently, we evaluate standard combination rules for merging these single classifiers into a composite classifier based on global features. We further obtain two higher-level classifiers based on chroma features: one based on comparing histograms of quantized chroma features, and a second based on computing cross-correlations between sequences of chroma features, to account for temporal information. To combine the latter chroma-based classifiers with the composite classifier based on global features, we use standard rank aggregation methods adapted from the information retrieval literature. We evaluate performance on the Second Hand Songs dataset, quantifying results using multiple statistics. We observe that each combination rule outperforms single methods in terms of the total number of identified queries. Experiments with rank aggregation methods show an increase of up to 23.5% in the number of identified queries, compared to single classifiers.
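The abstract does not specify which rank aggregation method is used; as an illustration only, the following is a minimal sketch of Borda count, one standard rank aggregation method from the information retrieval literature. Each classifier produces a ranking of candidate songs (best first); Borda count awards points by position in each ranking and sums the points across classifiers. The song identifiers and the three example rankings are hypothetical.

```python
def borda_aggregate(rankings):
    """Fuse several rankings (lists of song ids, best first) into one
    ranking via Borda count: a song ranked first among n candidates
    earns n - 1 points, the last earns 0; points are summed across
    all input rankings."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, song in enumerate(ranking):
            scores[song] = scores.get(song, 0) + (n - 1 - position)
    # Highest total first; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda s: (-scores[s], s))

# Hypothetical example: three classifiers (e.g. the global-feature
# composite, the chroma histogram classifier, and the chroma
# cross-correlation classifier) each rank four candidate covers.
fused = borda_aggregate([
    ["A", "B", "C", "D"],
    ["B", "A", "D", "C"],
    ["A", "C", "B", "D"],
])
print(fused)  # → ['A', 'B', 'C', 'D']
```

Here song A wins (3 + 2 + 3 = 8 points) because two of the three classifiers rank it first, illustrating how aggregation can outvote a single disagreeing classifier.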

Citation (APA)
Osmalskyj, J., Foster, P., Dixon, S., & Embrechts, J. J. (2015). Combining features for cover song identification. In Proceedings of the 16th International Society for Music Information Retrieval Conference, ISMIR 2015 (pp. 462–468). International Society for Music Information Retrieval.
