Towards heterogeneous similarity function learning for the k-nearest neighbors classification


Abstract

To classify an unseen (query) vector q with the k-Nearest Neighbors method (k-NN), one computes a similarity function between q and the training vectors in a database. In the basic variant of the k-NN algorithm, the predicted class of q is the majority class among q's k nearest neighbors. Different similarity functions may be applied, leading to different classification results. In this paper a heterogeneous similarity function is constructed out of different 1-component metrics by minimizing the number of classification errors the system makes on a training set. On five tested datasets, the HSFL-NN system introduced in this paper gives better results on unseen samples than the plain k-NN method with an optimally selected k parameter and the optimal homogeneous similarity function. © 2008 Springer-Verlag Berlin Heidelberg.
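The basic majority-vote k-NN procedure described in the abstract, with a pluggable similarity function, can be sketched as follows. This is a minimal illustration, not the paper's HSFL-NN system: the function names, the toy data, and the fixed weights in `heterogeneous_sim` are assumptions for illustration only (the paper learns such per-attribute weights by minimizing training-set error, which is not reproduced here).

```python
from collections import Counter

def euclidean_sim(q, x):
    # Similarity as negative squared Euclidean distance (larger = more similar).
    return -sum((qi - xi) ** 2 for qi, xi in zip(q, x))

def knn_predict(q, data, labels, k, sim):
    # Rank training vectors by similarity to the query q and take the top k.
    ranked = sorted(range(len(data)), key=lambda i: sim(q, data[i]), reverse=True)
    top_k = [labels[i] for i in ranked[:k]]
    # Predicted class: the majority class among the k nearest neighbors.
    return Counter(top_k).most_common(1)[0][0]

# Hypothetical heterogeneous similarity: a combination of different
# 1-component metrics, here a squared difference on attribute 0 and an
# absolute difference on attribute 1, mixed by weights w. The weights are
# fixed here; in the paper they would be tuned to minimize training error.
def heterogeneous_sim(q, x, w=(1.0, 1.0)):
    return -(w[0] * (q[0] - x[0]) ** 2 + w[1] * abs(q[1] - x[1]))

# Toy training set with two classes.
data = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["a", "a", "b", "b"]
print(knn_predict((0.05, 0.1), data, labels, k=3, sim=euclidean_sim))  # → a
```

Swapping `euclidean_sim` for `heterogeneous_sim` changes which neighbors rank highest, which is precisely why the choice (and learning) of the similarity function affects classification results.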

Citation (APA)

Grudziński, K. (2008). Towards heterogeneous similarity function learning for the k-nearest neighbors classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5097 LNAI, pp. 578–587). https://doi.org/10.1007/978-3-540-69731-2_56
