Feature selection filters based on the permutation test

Abstract

We investigate the problem of supervised feature selection within the filtering framework. In our approach, applicable to two-class problems, the strength of a feature is inversely proportional to the p-value of the null hypothesis that its class-conditional densities, p(X|Y = 0) and p(X|Y = 1), are identical. To estimate the p-values, we use Fisher's permutation test combined with four simple filtering criteria in the role of test statistics: sample mean difference, symmetric Kullback-Leibler distance, information gain, and the chi-square statistic. The experimental results of our study, obtained using the naive Bayes classifier and support vector machines, strongly indicate that the permutation test improves the above-mentioned filters and can be used effectively when the sample size is relatively small and the number of features relatively large. © Springer-Verlag Berlin Heidelberg 2004.
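
The abstract does not include code; the following is a minimal Python sketch of the general idea, using the sample mean difference as the test statistic (one of the four criteria named above). The function name permutation_pvalues, the number of permutations, and the add-one p-value correction are illustrative assumptions rather than details taken from the paper, and binary labels coded 0/1 are assumed.

import numpy as np

def permutation_pvalues(X, y, n_permutations=1000, seed=0):
    """Permutation-test p-value for each feature of X under binary labels y.

    Test statistic: absolute difference of class-conditional sample means.
    A feature's p-value is the fraction of label permutations whose
    statistic is at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)

    def stat(labels):
        # |mean(X | Y = 1) - mean(X | Y = 0)|, computed per feature
        return np.abs(X[labels == 1].mean(axis=0) - X[labels == 0].mean(axis=0))

    observed = stat(y)
    exceed = np.zeros(X.shape[1])
    for _ in range(n_permutations):
        perm = rng.permutation(y)  # destroy the feature-label association
        exceed += stat(perm) >= observed
    # add-one correction keeps the estimated p-values strictly positive
    return (exceed + 1) / (n_permutations + 1)

Features would then be ranked by ascending p-value and the strongest (smallest-p) ones retained before training a classifier such as naive Bayes or an SVM, as in the paper's experiments.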

Citation (APA)

Radivojac, P., Obradovic, Z., Dunker, A. K., & Vucetic, S. (2004). Feature selection filters based on the permutation test. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3201, pp. 334–346). Springer-Verlag. https://doi.org/10.1007/978-3-540-30115-8_32
