Improper design of feature selection methods can easily lead to incorrect conclusions. Moreover, it is not generally realised that the functional values of the criterion guiding the search for the best feature set are random variables with some probability distribution. This contribution examines the influence of several estimation techniques on the consistency of the final result. We propose an entropy-based measure which assesses the stability of feature selection methods with respect to perturbations in the data. Results show that filters achieve better stability and performance when more samples are employed for estimation, e.g., using leave-one-out cross-validation. For wrappers, however, the best results are obtained with 50/50 holdout validation. © Springer-Verlag Berlin Heidelberg 2007.
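The abstract does not give the measure's formula, but the idea of an entropy-based stability score can be illustrated as follows: run the selector on several perturbed versions of the data, tally how often each feature is chosen, and take the Shannon entropy of the resulting selection frequencies. This is a minimal sketch under that assumption, not the authors' exact definition; the function name and interface are hypothetical.

```python
import math
from collections import Counter

def selection_entropy(runs):
    """Entropy of feature-selection frequencies across repeated runs.

    runs: list of sets, each the feature indices selected on one
    perturbed sample of the data (e.g., one cross-validation fold).
    Lower entropy means the selected features are concentrated on a
    few indices, i.e., the selector is more stable.
    """
    counts = Counter(f for run in runs for f in run)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

For example, a selector that always picks the same two features over three runs, `selection_entropy([{0, 1}] * 3)`, yields a lower entropy than one that picks disjoint pairs each time, `selection_entropy([{0, 1}, {2, 3}, {4, 5}])`, matching the intuition that the first selector is the more stable of the two.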
CITATION STYLE
Křížek, P., Kittler, J., & Hlaváč, V. (2007). Improving stability of feature selection methods. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4673 LNCS, pp. 929–936). Springer Verlag. https://doi.org/10.1007/978-3-540-74272-2_115