Combining feature selection with feature weighting for k-NN classifier

Abstract

The k-nearest neighbor (k-NN) classification is a simple and effective classification approach. However, it suffers from an over-sensitivity problem caused by irrelevant and noisy features. In this paper, we propose an algorithm that improves the effectiveness of k-NN by combining feature selection with feature weighting. Specifically, we first select the relevant features and then assign a weight to each of them. Experimental results show that our algorithm achieves the highest or near-highest accuracy on all test datasets. It also achieves higher generalization accuracy than the well-known algorithms IB1-4 and C4.5.
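As an illustration of the general idea only (not the authors' exact procedure), the sketch below scores each feature, drops features whose score falls below a threshold, and reuses the surviving scores as weights in a feature-weighted distance for k-NN. The relevance criterion (absolute correlation with the class label), the threshold value, and the synthetic data are illustrative assumptions, not details taken from the paper.

# Minimal sketch: feature selection + feature weighting for k-NN.
# The relevance score (|correlation with the label|) is a placeholder criterion.
import numpy as np

def relevance_scores(X, y):
    # Score each feature by |Pearson correlation| with the class label.
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        col = X[:, j]
        # A constant feature carries no information about the label.
        scores[j] = 0.0 if col.std() == 0 else abs(np.corrcoef(col, y)[0, 1])
    return scores

def select_and_weight(X, y, threshold=0.1):
    # Keep features scoring above the threshold; use the scores as weights.
    scores = relevance_scores(X, y)
    selected = np.where(scores > threshold)[0]
    return selected, scores[selected]

def knn_predict(X_train, y_train, X_test, selected, weights, k=3):
    # k-NN with each selected feature scaled by its weight
    # (a simple form of feature-weighted Euclidean distance).
    Xtr = X_train[:, selected] * weights
    Xte = X_test[:, selected] * weights
    preds = []
    for x in Xte:
        dists = np.linalg.norm(Xtr - x, axis=1)
        nn = np.argsort(dists)[:k]
        labels, counts = np.unique(y_train[nn], return_counts=True)
        preds.append(labels[np.argmax(counts)])  # majority vote among neighbors
    return np.array(preds)

# Tiny synthetic example: feature 0 is informative, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.column_stack([y + 0.3 * rng.normal(size=200),   # relevant feature
                     rng.normal(size=200)])            # irrelevant feature
selected, weights = select_and_weight(X[:150], y[:150])
pred = knn_predict(X[:150], y[:150], X[150:], selected, weights, k=3)
print("selected features:", selected, "accuracy:", (pred == y[150:]).mean())

On this toy data the noisy feature is filtered out before distances are computed, which is the effect the paper's combination of selection and weighting is designed to achieve.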

Citation (APA)

Bao, Y., Du, X., & Ishii, N. (2002). Combining feature selection with feature weighting for k-NN classifier. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2412, pp. 461–468). Springer Verlag. https://doi.org/10.1007/3-540-45675-9_69
