Scalable feature selection algorithms should remove irrelevant and redundant features while scaling well to very large datasets. We observe that the current state-of-the-art methods perform well on binary classification tasks but often underperform on multi-class tasks. We argue that they suffer from a so-called accumulative effect, which becomes more pronounced as the number of classes grows and leads to the removal of relevant, non-redundant features. To remedy this problem, we propose two new feature filtering methods that are both scalable and well suited to multi-class cases. We report evaluation results on 17 different datasets covering both binary and multi-class cases. © 2008 Springer-Verlag Berlin Heidelberg.
CITATION STYLE
Chidlovskii, B., & Lecerf, L. (2008). Scalable feature selection for multi-class problems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5211 LNAI, pp. 227–240). https://doi.org/10.1007/978-3-540-87479-9_33