Embedded Feature Selection for Multi-label Classification of Music Emotions


Abstract

When detecting emotions from music, many features are extracted from the original music data. However, some of these features are redundant or irrelevant, which reduces the performance of classification models. To address this, we propose an embedded feature selection method, called Multi-label Embedded Feature Selection (MEFS), which improves classification performance by selecting features. MEFS embeds the classifier and takes label correlation into account. Three other representative multi-label feature selection methods, known as LP-Chi, max, and avg, together with four multi-label classification algorithms, are included for performance comparison. Experimental results show that our MEFS algorithm outperforms those filter methods on the music emotion dataset. © 2012 Copyright the authors.
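The "max" and "avg" baselines mentioned above aggregate per-label feature scores into a single ranking. The following is a minimal illustrative sketch of that aggregation idea, not the authors' code: for simplicity it uses absolute Pearson correlation as the per-label score, whereas the paper's LP-Chi baseline uses chi-square statistics.

```python
# Sketch of the "max"/"avg" score-aggregation idea for multi-label
# filter feature selection. Illustrative only: the per-label score
# here is absolute Pearson correlation, an assumption chosen for
# simplicity, not the statistic used in the paper.
from math import sqrt

def pearson(xs, ys):
    """Absolute Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return abs(cov / sqrt(vx * vy))

def select_features(X, Y, k, aggregate="avg"):
    """Score each feature against every label, aggregate the per-label
    scores with max or avg, and return the indices of the top-k features.

    X: list of samples, each a list of feature values.
    Y: list of samples, each a list of 0/1 label indicators.
    """
    n_features = len(X[0])
    n_labels = len(Y[0])
    scores = []
    for j in range(n_features):
        col = [row[j] for row in X]
        per_label = [pearson(col, [y[l] for y in Y]) for l in range(n_labels)]
        scores.append(max(per_label) if aggregate == "max"
                      else sum(per_label) / n_labels)
    ranked = sorted(range(n_features), key=lambda j: scores[j], reverse=True)
    return ranked[:k]

# Toy data: feature 0 tracks both labels, feature 2 matches them exactly,
# feature 1 is noise.
X = [[1, 5, 0], [2, 1, 0], [3, 4, 1], [4, 2, 1]]
Y = [[0, 0], [0, 0], [1, 1], [1, 1]]
print(select_features(X, Y, k=2, aggregate="avg"))  # → [2, 0]
```

An embedded method such as MEFS differs from this filter sketch in that the feature scores come from inside the trained classifier itself rather than from a statistic computed independently of it.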

Citation (APA)

You, M., Liu, J., Li, G. Z., & Chen, Y. (2012). Embedded Feature Selection for Multi-label Classification of Music Emotions. International Journal of Computational Intelligence Systems, 5(4), 668–678. https://doi.org/10.1080/18756891.2012.718113
