Forward semi-supervised feature selection

Abstract

Traditionally, feature selection methods work directly on labeled examples. However, the availability of labeled examples cannot be taken for granted in many real-world applications, such as medical diagnosis, forensic science, and fraud detection, where labeled examples are hard to find. This practical problem calls for "semi-supervised feature selection": choosing, from both labeled and unlabeled examples, the feature subset that yields the most accurate classifier for a given learning algorithm. In this paper, we introduce a "wrapper-type" forward semi-supervised feature selection framework. In essence, it uses unlabeled examples to extend the initial labeled training set. Extensive experiments on publicly available datasets show that our proposed framework generally outperforms both traditional supervised and state-of-the-art "filter-type" semi-supervised feature selection algorithms [5] by 1% to 10% in accuracy. © 2008 Springer-Verlag Berlin Heidelberg.
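The abstract describes a wrapper-type forward selection loop in which unlabeled examples extend the labeled training set. The sketch below is a hypothetical illustration of that idea, not the paper's actual algorithm: it pairs greedy forward feature selection with self-training-style pseudo-labeling, using a simple nearest-centroid classifier as the wrapped learner. All function names are invented for this example.

```python
# Illustrative sketch only: forward_semi_select and the nearest-centroid
# learner are assumptions, not the algorithm from the paper.
import numpy as np

def centroid_fit(X, y):
    # Fit a nearest-centroid classifier: one mean vector per class.
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def centroid_predict(model, X):
    # Assign each row to the class with the nearest centroid.
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def forward_semi_select(XL, yL, XU, Xval, yval, n_features):
    """Greedy forward selection. For each candidate feature subset,
    pseudo-label the unlabeled pool with a model trained on the labeled
    data, retrain on the extended set, and score on a validation set."""
    selected, remaining = [], list(range(XL.shape[1]))
    while remaining and len(selected) < n_features:
        best_f, best_acc = None, -1.0
        for f in remaining:
            feats = selected + [f]
            model = centroid_fit(XL[:, feats], yL)
            # Extend the labeled training set with pseudo-labeled examples.
            yU = centroid_predict(model, XU[:, feats])
            Xext = np.vstack([XL[:, feats], XU[:, feats]])
            yext = np.concatenate([yL, yU])
            model = centroid_fit(Xext, yext)
            acc = (centroid_predict(model, Xval[:, feats]) == yval).mean()
            if acc > best_acc:
                best_f, best_acc = f, acc
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

On synthetic data where only one feature separates the classes, the loop picks that feature first; the wrapped learner and the stopping rule are deliberately minimal and could be swapped for any classifier and criterion.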

Citation (APA)

Ren, J., Qiu, Z., Fan, W., Cheng, H., & Yu, P. S. (2008). Forward semi-supervised feature selection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5012 LNAI, pp. 970–976). https://doi.org/10.1007/978-3-540-68125-0_101
