An efficient nearest neighbor classifier using an adaptive distance measure

Abstract

The Nearest Neighbor (NN) rule is one of the simplest and most effective pattern classification algorithms. In the basic NN rule, all instances in the training set are treated equally when searching for the NN of an input test pattern. In the approach proposed in this article, a local weight is assigned to each training instance. These weights are then used in an adaptive distance measure to find the NN of a query pattern. To determine the weight of each training instance, we propose a learning algorithm that attempts to minimize the number of misclassified patterns on the training data. The performance of the proposed method was evaluated on a number of UCI-ML data sets. The results show that the proposed method improves the generalization accuracy of the basic NN classifier. It is also shown that the proposed algorithm can serve as an effective instance reduction technique for the NN classifier. © Springer-Verlag Berlin Heidelberg 2007.
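The abstract does not spell out the weight-learning rule, so the Python sketch below only illustrates the general idea of a per-instance weighted distance for 1-NN classification. The class name WeightedNN, the hyper-parameters n_epochs and step, and the multiplicative weight update are assumptions made for illustration, not the authors' algorithm.

    import numpy as np

    class WeightedNN:
        # Illustrative 1-NN classifier with one local weight per training instance.
        # The weighted distance to training instance i is  w_i * ||x - x_i||,
        # so a larger w_i pushes instance i "farther away" from every query.
        def __init__(self, n_epochs=20, step=0.1):
            self.n_epochs = n_epochs   # hypothetical hyper-parameters, not from the paper
            self.step = step

        def fit(self, X, y):
            self.X_ = np.asarray(X, dtype=float)
            self.y_ = np.asarray(y)
            n = len(self.X_)
            self.w_ = np.ones(n)       # start from the plain (unweighted) NN rule

            # Heuristic weight learning (not the paper's exact update rule):
            # classify each training point in leave-one-out fashion with the
            # current weighted distance; whenever it is misclassified, enlarge
            # the weight of the offending wrong-class neighbour so that it
            # recedes, which tends to reduce misclassifications on the training data.
            for _ in range(self.n_epochs):
                for i in range(n):
                    d = self.w_ * np.linalg.norm(self.X_ - self.X_[i], axis=1)
                    d[i] = np.inf                    # exclude the point itself
                    j = int(np.argmin(d))
                    if self.y_[j] != self.y_[i]:
                        self.w_[j] *= (1.0 + self.step)
            return self

        def predict(self, X):
            X = np.atleast_2d(np.asarray(X, dtype=float))
            preds = []
            for x in X:
                d = self.w_ * np.linalg.norm(self.X_ - x, axis=1)
                preds.append(self.y_[int(np.argmin(d))])
            return np.array(preds)

Under this sketch, training instances whose weights grow very large are effectively never selected as a nearest neighbor, so they could be pruned without changing the classifier's decisions much; this is one plausible reading of the abstract's instance-reduction claim.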

Citation (APA)

Dehzangi, O., Zolghadri, M. J., Taheri, S., & Dehzangi, A. (2007). An efficient nearest neighbor classifier using an adaptive distance measure. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4673 LNCS, pp. 970–978). Springer Verlag. https://doi.org/10.1007/978-3-540-74272-2_120
