Error curves for evaluating the quality of feature rankings


Abstract

In this article, we propose a method for evaluating feature ranking algorithms. A feature ranking algorithm estimates the importance of descriptive features for predicting the target variable, and the proposed method evaluates the correctness of these importance values by computing the error measures of two chains of predictive models. The models in the first chain are built on nested sets of top-ranked features, while the models in the second chain are built on nested sets of bottom-ranked features. We investigate which predictive models are appropriate for building these chains, showing empirically that the proposed method gives meaningful results and can detect differences in feature ranking quality. This is demonstrated first on synthetic data and then on several real-world classification benchmark problems.
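The evaluation idea described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: for each subset size k, it builds one model on the k top-ranked features and one on the k bottom-ranked features, then records an error measure for each, yielding the two error curves. The nearest-centroid learner, the zero-one error, and the use of training-set error are simplifying assumptions chosen to keep the sketch self-contained; the paper uses proper predictive models and error estimates.

```python
import numpy as np

def error_curves(X, y, ranking, model_fn, error_fn):
    """Compute the two error curves for a feature ranking.

    ranking  : feature indices ordered from most to least important.
    model_fn : callable (X, y) -> predict function (placeholder for any learner).
    error_fn : callable (y_true, y_pred) -> float.
    Returns (top_errors, bottom_errors), one entry per subset size k = 1..n.
    """
    n = len(ranking)
    top_err, bot_err = [], []
    for k in range(1, n + 1):
        top = ranking[:k]        # k top-ranked features
        bot = ranking[n - k:]    # k bottom-ranked features
        for feats, out in ((top, top_err), (bot, bot_err)):
            predict = model_fn(X[:, feats], y)
            out.append(error_fn(y, predict(X[:, feats])))
    return top_err, bot_err

def centroid_model(X, y):
    """Toy learner (an assumption for this sketch): nearest class centroid."""
    classes = np.unique(y)
    cents = np.array([X[y == c].mean(axis=0) for c in classes])
    def predict(Xq):
        d = ((Xq[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        return classes[d.argmin(axis=1)]
    return predict

def zero_one(y_true, y_pred):
    """Zero-one error: fraction of misclassified examples."""
    return float(np.mean(y_true != y_pred))
```

On synthetic data with one informative feature and one noise feature, a correct ranking places the informative feature first, so the top-ranked curve starts low while the bottom-ranked curve starts near chance level:

```python
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.c_[y + 0.1 * rng.normal(size=200), rng.normal(size=200)]
top, bot = error_curves(X, y, [0, 1], centroid_model, zero_one)
# top[0] (informative feature alone) is much lower than bot[0] (noise alone);
# at k = n both chains use all features, so the curves meet.
```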

Citation (APA)

Slavkov, I., Petković, M., Geurts, P., Kocev, D., & Džeroski, S. (2020). Error curves for evaluating the quality of feature rankings. PeerJ Computer Science, 6, 1–39. https://doi.org/10.7717/peerj-cs.310
