Weighted k-nearest leader classifier for large data sets

Abstract

The leaders clustering method is fast and can be used to derive prototypes, called leaders, from a large training set for use in designing a classifier. The nearest-leader classifier has recently been shown to be a faster variant of the nearest neighbor classifier, but its performance can degrade because the density information present in the training set is lost when the prototypes are derived. In this paper we present a generalized weighted k-nearest leader classifier that is faster than, and performs on par with, the k-nearest neighbor classifier. The method finds the relative importance of each prototype, called its weight, and uses these weights in classification. The design phase is extended to eliminate some noisy prototypes and thereby enhance the performance of the classifier. The method is empirically verified on standard data sets and compared with earlier related methods. © Springer-Verlag Berlin Heidelberg 2007.
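
The abstract does not give the exact weighting or pruning rules, so the following is only a minimal sketch of the general idea: leaders are obtained by a single-pass, distance-threshold, class-wise leaders clustering; each leader's weight is assumed to be the fraction of training patterns it absorbs (a crude density estimate); and classification is a weighted vote among the k nearest leaders. The names find_leaders and classify, the threshold tau, and the weighting scheme are illustrative assumptions, and the noisy-prototype elimination step mentioned in the abstract is omitted.

# Hypothetical sketch of a weighted k-nearest leader classifier (not the
# authors' exact method; clustering and weighting details are assumed).
import numpy as np

def find_leaders(X, y, tau):
    """Single-pass leaders clustering: a pattern becomes a new leader if it is
    farther than tau from every existing leader of its own class."""
    leaders, labels, counts = [], [], []
    for x, c in zip(X, y):
        placed = False
        for i, (lead, lead_cls) in enumerate(zip(leaders, labels)):
            if lead_cls == c and np.linalg.norm(x - lead) <= tau:
                counts[i] += 1          # pattern absorbed by leader i
                placed = True
                break
        if not placed:                  # no leader close enough: x becomes a leader
            leaders.append(x)
            labels.append(c)
            counts.append(1)
    # Assumed weight: fraction of the training set grouped under each leader.
    weights = np.array(counts, dtype=float) / len(X)
    return np.array(leaders), np.array(labels), weights

def classify(x, leaders, labels, weights, k=3):
    """Weighted k-nearest leader rule: each of the k nearest leaders votes
    with its weight; the class with the largest total weight wins."""
    d = np.linalg.norm(leaders - x, axis=1)
    votes = {}
    for i in np.argsort(d)[:k]:
        votes[labels[i]] = votes.get(labels[i], 0.0) + weights[i]
    return max(votes, key=votes.get)

# Toy usage on two Gaussian blobs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    L, Ly, w = find_leaders(X, y, tau=1.0)
    print(classify(np.array([0.2, -0.1]), L, Ly, w, k=3))   # expected: 0

Because each query is compared only against the much smaller set of leaders rather than the full training set, classification is faster than plain k-nearest neighbor, while the weights retain a coarse record of the density information that would otherwise be lost when prototypes replace the raw data.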

Citation (APA)
Babu, V. S., & Viswanath, P. (2007). Weighted k-nearest leader classifier for large data sets. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4815 LNCS, pp. 17–24). Springer Verlag. https://doi.org/10.1007/978-3-540-77046-6_3
