The k-nearest neighbor rule is one of the most attractive pattern classification algorithms. In practice, the value of k is usually determined by the cross-validation method. In this work, we propose a new method that locally determines the number of nearest neighbors based on the concept of statistical confidence. We define the confidence associated with decisions that are made by the majority rule from a finite number of observations and use it as a criterion to determine the number of nearest neighbors needed. The new algorithm is tested on several real-world datasets and yields results comparable to those obtained by the k-nearest neighbor rule. In contrast to the k-nearest neighbor rule that uses a fixed number of nearest neighbors throughout the feature space, our method locally adjusts the number of neighbors until a satisfactory level of confidence is reached. In addition, the statistical confidence provides a natural way to balance the trade-off between the reject rate and the error rate by excluding patterns that have low confidence levels. © Springer-Verlag Berlin Heidelberg 2005.
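The core idea in the abstract — grow the neighborhood around a query point until the majority vote reaches a satisfactory confidence level, and optionally reject low-confidence patterns — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: the `majority_confidence` measure used here (one minus a two-class binomial tail probability under a fair-coin null) and the parameters `theta` and `k_max` are assumptions made for the sketch.

```python
import math
from collections import Counter

def majority_confidence(counts):
    """Confidence that the observed majority vote is not a chance
    fluctuation: one minus the binomial tail probability of a split
    at least this lopsided under a fair-coin (p = 1/2) null.
    NOTE: an illustrative stand-in for the paper's confidence measure."""
    n = sum(counts.values())
    n_maj = max(counts.values())
    tail = sum(math.comb(n, i) for i in range(n_maj, n + 1)) / 2 ** n
    return 1.0 - tail

def knn_adaptive(train, query, theta=0.9, k_max=15):
    """Classify `query` by growing k until the majority vote reaches
    confidence `theta`; return (label, confidence).
    `train` is a list of ((x, y), label) pairs in 2-D feature space."""
    ranked = sorted(
        train,
        key=lambda p: (p[0][0] - query[0]) ** 2 + (p[0][1] - query[1]) ** 2,
    )
    for k in range(1, min(k_max, len(ranked)) + 1):
        counts = Counter(label for _, label in ranked[:k])
        conf = majority_confidence(counts)
        if conf >= theta:
            return counts.most_common(1)[0][0], conf
    # Fall back to the largest neighborhood; a caller may reject the
    # pattern when conf stays below theta (the reject option mentioned
    # in the abstract).
    counts = Counter(label for _, label in ranked[:min(k_max, len(ranked))])
    return counts.most_common(1)[0][0], majority_confidence(counts)
```

With this particular confidence measure, a unanimous vote of 4 neighbors gives confidence 1 − 1/16 = 0.9375, so `theta=0.9` stops the neighborhood growth at k = 4 in a locally pure region — in contrast to a fixed-k rule, a different query point in a noisier region would keep expanding its neighborhood.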
CITATION STYLE
Wang, J., Neskovic, P., & Cooper, L. N. (2005). Locally determining the number of neighbors in the k-nearest neighbor rule based on statistical confidence. In Lecture Notes in Computer Science (Vol. 3610, pp. 71–80). Springer-Verlag. https://doi.org/10.1007/11539087_9