Non-parametric Nearest Neighbor with local adaptation

Abstract

The k-Nearest Neighbor algorithm (k-NN) uses a classification criterion that depends on the parameter k, whose value must usually be set by the user. In this paper we present an algorithm based on the NN technique that does not require the user to provide a value for k. Our approach evaluates which values of k classify the training examples correctly and selects the one that classifies the most of them. Since the user plays no part in the selection of the parameter k, the algorithm is non-parametric. With this heuristic, we propose a simple variation of the k-NN algorithm that is robust to noise in the data. The experiments, summarized in the last section, show that the error rate decreases in comparison with the k-NN technique when the best k for each database has been obtained beforehand. © Springer-Verlag Berlin Heidelberg 2001.
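The selection heuristic described in the abstract can be sketched as follows: using leave-one-out over the training set, count for each candidate k how many training examples a k-NN majority vote classifies correctly, then keep the k with the highest count. This is a minimal illustrative sketch, not the authors' exact method; the function names, the `k_max` cap, and the tie-breaking toward smaller k are assumptions.

```python
import numpy as np
from collections import Counter

def select_k(X, y, k_max=15):
    """Pick k by leave-one-out: for each candidate k, count how many
    training examples the k-NN vote classifies correctly, and return
    the k with the highest count (smaller k wins ties).
    Note: k_max and the tie-break rule are assumptions for this sketch."""
    n = len(X)
    # Pairwise Euclidean distances between all training examples.
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # leave-one-out: exclude the point itself
    order = np.argsort(d, axis=1)        # neighbors of each point, sorted by distance
    correct = Counter()
    for k in range(1, min(k_max, n - 1) + 1):
        for i in range(n):
            votes = Counter(y[order[i, :k]])
            if votes.most_common(1)[0][0] == y[i]:
                correct[k] += 1
    return max(correct, key=lambda k: (correct[k], -k))

def predict(X, y, x_new, k):
    """Classify a new example by majority vote among its k nearest neighbors."""
    d = np.sqrt(((X - x_new) ** 2).sum(-1))
    nn = np.argsort(d)[:k]
    return Counter(y[nn]).most_common(1)[0][0]
```

Because `select_k` uses only the training data, no parameter is requested from the user: the value of k is determined entirely by the examples themselves, which is the sense in which the method is non-parametric.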

APA

Ferrer-Troyano, F. J., Aguilar-Ruiz, J. S., & Riquelme, J. C. (2001). Non-parametric Nearest Neighbor with local adaptation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2258 LNAI, pp. 22–29). Springer Verlag. https://doi.org/10.1007/3-540-45329-6_6
