Ensemble algorithms for feature selection

Abstract

Many feature selection algorithms are limited in that they attempt to identify relevant feature subsets by examining features individually. This paper introduces a technique for determining feature relevance from the average information gain achieved during the construction of decision tree ensembles. The technique introduces a node complexity measure and a statistical method, based on confidence intervals, for updating the feature sampling distribution so as to control the rate of convergence. A feature selection threshold is also derived from the expected performance of an irrelevant feature. Experiments demonstrate the potential of these methods and illustrate the need for both feature weighting and selection.
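As an illustrative sketch only (not the authors' algorithm, which adapts the feature sampling distribution via confidence intervals during ensemble construction), the two abstract ideas of ensemble-averaged information gain and a threshold set by an irrelevant feature can be approximated with off-the-shelf tools: an entropy-criterion tree ensemble supplies per-feature importances, and a randomly generated "probe" feature stands in for the expected score of an irrelevant feature. The synthetic dataset, the scikit-learn estimator, and the probe construction are all assumptions introduced for the example.

```python
# Hedged sketch: ensemble-based feature relevance with an irrelevant-feature
# threshold. This is NOT the paper's method; it only illustrates the
# thresholding idea using standard scikit-learn primitives (assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data with a mix of informative and irrelevant features (assumption).
X, y = make_classification(
    n_samples=1000, n_features=20, n_informative=5,
    n_redundant=2, random_state=0,
)

# Append a known-irrelevant "probe" feature; its importance approximates the
# expected score of an irrelevant feature and is used as the selection threshold.
probe = rng.normal(size=(X.shape[0], 1))
X_aug = np.hstack([X, probe])

# Entropy criterion so that importances reflect information gain averaged
# over the trees of the ensemble.
forest = RandomForestClassifier(
    n_estimators=200, criterion="entropy", random_state=0,
).fit(X_aug, y)

importances = forest.feature_importances_
threshold = importances[-1]                      # score of the irrelevant probe
selected = np.flatnonzero(importances[:-1] > threshold)
print("selected feature indices:", selected)
```

In the paper, the analogous threshold is derived analytically from the expected performance of an irrelevant feature rather than from an injected probe; the probe here is only a stand-in for that expectation.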

Citation (APA)

Rogers, J. D., & Gunn, S. R. (2005). Ensemble algorithms for feature selection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3635 LNAI, pp. 180–198). Springer Verlag. https://doi.org/10.1007/11559887_11
