Locally determining the number of neighbors in the k-nearest neighbor rule based on statistical confidence

Abstract

The k-nearest neighbor rule is one of the most attractive pattern classification algorithms. In practice, the value of k is usually determined by the cross-validation method. In this work, we propose a new method that locally determines the number of nearest neighbors based on the concept of statistical confidence. We define the confidence associated with decisions that are made by the majority rule from a finite number of observations and use it as a criterion to determine the number of nearest neighbors needed. The new algorithm is tested on several real-world datasets and yields results comparable to those obtained by the k-nearest neighbor rule. In contrast to the k-nearest neighbor rule that uses a fixed number of nearest neighbors throughout the feature space, our method locally adjusts the number of neighbors until a satisfactory level of confidence is reached. In addition, the statistical confidence provides a natural way to balance the trade-off between the reject rate and the error rate by excluding patterns that have low confidence levels. © Springer-Verlag Berlin Heidelberg 2005.
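The idea can be sketched in code: grow the neighborhood one neighbor at a time and stop once the majority vote is statistically convincing. The abstract does not give the paper's exact confidence formula, so the version below is an assumption: it scores the majority by one minus the one-sided binomial tail probability of seeing that many majority votes by chance under p = 0.5, and rejects the pattern if no neighborhood size reaches the threshold.

```python
import math
from collections import Counter

def majority_confidence(majority_votes, n):
    """Assumed confidence measure: 1 minus the probability of getting
    at least `majority_votes` agreeing votes out of n by pure chance
    (fair-coin binomial tail). The paper's definition may differ."""
    tail = sum(math.comb(n, i) for i in range(majority_votes, n + 1)) / 2 ** n
    return 1.0 - tail

def adaptive_knn_predict(train_X, train_y, x, conf_threshold=0.9, k_max=None):
    """Classify x by expanding the neighborhood until the majority
    vote reaches conf_threshold. Returns (label, k_used), or
    (None, k_max) as a reject when confidence is never reached."""
    k_max = k_max or len(train_X)
    # rank training points by squared Euclidean distance to x
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_X[i], x)))
    for k in range(1, k_max + 1):
        counts = Counter(train_y[i] for i in order[:k])
        label, votes = counts.most_common(1)[0]
        if majority_confidence(votes, k) >= conf_threshold:
            return label, k          # locally chosen k, varies per query
    return None, k_max               # low-confidence pattern: reject
```

Unlike a fixed-k rule, `k` here is a per-query output, and the reject branch gives the error/reject trade-off mentioned in the abstract: raising `conf_threshold` rejects more patterns but makes fewer errors on those it does classify.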

Citation (APA)

Wang, J., Neskovic, P., & Cooper, L. N. (2005). Locally determining the number of neighbors in the k-nearest neighbor rule based on statistical confidence. In Lecture Notes in Computer Science (Vol. 3610, pp. 71–80). Springer Verlag. https://doi.org/10.1007/11539087_9
