Combining k-nearest neighbor and centroid neighbor classifier for fast and robust classification


Abstract

The k-NN classifier is one of the best-known and most widely used nonparametric classifiers. The k-NN rule is asymptotically optimal, meaning that its classification error approaches the Bayes error as the number of training samples tends to infinity. Many extensions of the traditional k-NN have been developed to improve classification accuracy. However, it is also a well-known fact that k-NN becomes very inefficient as the number of samples grows, because the distance from the test sample to every sample in the training set must be computed. In this paper, a simple method which addresses this issue is proposed. Combining the k-NN classifier with the centroid neighbor classifier improves the speed of the algorithm without changing the results of the original k-NN. Moreover, using confusion matrices and excluding outliers makes the resulting algorithm much faster and more robust.
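The abstract does not spell out the exact combination rule (the roles of the confusion matrices and the outlier exclusion are detailed in the full paper), but the core speed idea can be illustrated with a minimal sketch: compute one centroid per class, rank classes by centroid distance to the query, and run ordinary k-NN only over the samples of the few nearest-centroid classes. The pruning rule, the parameter `n_candidate_classes`, and the function names below are illustrative assumptions, not the authors' precise algorithm.

```python
# Sketch: centroid-based pruning before k-NN (assumed simplification of
# the paper's method; the exact combination rule is in the full text).
import numpy as np


def fit_centroids(X, y):
    """Compute one centroid per class from the training data."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids


def knn_with_centroid_pruning(X_train, y_train, x_query, k=5,
                              n_candidate_classes=3):
    """Classify x_query with k-NN restricted to the classes whose
    centroids lie nearest to the query (hypothetical pruning rule)."""
    classes, centroids = fit_centroids(X_train, y_train)
    # Rank classes by centroid distance; keep only the closest few.
    centroid_dist = np.linalg.norm(centroids - x_query, axis=1)
    candidates = classes[np.argsort(centroid_dist)[:n_candidate_classes]]
    # Run plain k-NN, but only over samples of the candidate classes,
    # so far fewer distances are computed than against the full set.
    mask = np.isin(y_train, candidates)
    X_sub, y_sub = X_train[mask], y_train[mask]
    nn_dist = np.linalg.norm(X_sub - x_query, axis=1)
    neighbors = y_sub[np.argsort(nn_dist)[:k]]
    # Majority vote among the k nearest neighbors.
    votes, counts = np.unique(neighbors, return_counts=True)
    return votes[np.argmax(counts)]
```

With `n_candidate_classes` much smaller than the total number of classes, the number of distance computations per query drops roughly in proportion, which matches the speedup rationale stated in the abstract.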

Citation (APA)

Chmielnicki, W. (2016). Combining k-nearest neighbor and centroid neighbor classifier for fast and robust classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9648, pp. 536–548). Springer Verlag. https://doi.org/10.1007/978-3-319-32034-2_45
