Fast k most similar neighbor classifier for mixed data based on a tree structure and approximating-eliminating


Abstract

The k nearest neighbor (k-NN) classifier has been used extensively as a nonparametric technique in pattern recognition. However, when the training set is large, the exhaustive k-NN classifier becomes impractical, so many fast k-NN classifiers have been developed. Most of these rely on metric properties, usually the triangle inequality, to reduce the number of prototype comparisons. In soft sciences, however, prototypes are often described by both qualitative and quantitative features (mixed data), and the comparison function may not satisfy the triangle inequality. In this work, a fast k most similar neighbor (k-MSN) classifier for mixed data, based on a tree structure and an approximating-and-eliminating approach that does not depend on metric properties (Tree AEMD), is introduced. The proposed classifier is compared against other fast k-NN classifiers. © 2008 Springer-Verlag Berlin Heidelberg.
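To make the metric assumption concrete: the classical approximating-eliminating schemes the abstract refers to (AESA and its variants) use the triangle inequality to derive lower bounds on distances and prune prototypes without comparing them to the query. The sketch below is an illustrative AESA-style k-NN search, not the paper's Tree AEMD method (which is precisely designed to avoid this metric requirement); the function name `aesa_knn` and the on-the-fly precomputation of prototype distances are assumptions made for a self-contained example.

```python
import heapq

def aesa_knn(query, prototypes, dist, k=1):
    """AESA-style approximating-eliminating k-NN search (illustrative sketch).

    Requires `dist` to be a metric: the triangle inequality gives the
    lower bound |d(q, c) - d(c, p)| <= d(q, p), which lets us eliminate
    any prototype p whose bound already exceeds the current k-th best
    distance. Real AESA stores the prototype-prototype distance matrix
    offline; here it is computed on the fly for simplicity.
    """
    n = len(prototypes)
    pre = {(i, j): dist(prototypes[i], prototypes[j])
           for i in range(n) for j in range(n)}
    alive = set(range(n))
    lower = {i: 0.0 for i in alive}   # tightest lower bound found so far
    best = []                         # max-heap via negated distances

    while alive:
        # Approximating step: probe the live prototype with the
        # smallest lower bound (most promising candidate).
        c = min(alive, key=lambda i: lower[i])
        alive.remove(c)
        d_qc = dist(query, prototypes[c])
        heapq.heappush(best, (-d_qc, c))
        if len(best) > k:
            heapq.heappop(best)       # drop the current worst of the k
        kth = -best[0][0] if len(best) == k else float("inf")

        # Eliminating step: prune prototypes whose triangle-inequality
        # lower bound already exceeds the k-th best distance.
        for p in list(alive):
            lb = abs(d_qc - pre[(c, p)])
            lower[p] = max(lower[p], lb)
            if lower[p] > kth:
                alive.remove(p)

    return sorted((-d, i) for d, i in best)  # (distance, index) pairs
```

When `dist` violates the triangle inequality, as can happen with similarity functions for mixed qualitative/quantitative data, the bound `lb` is no longer valid and this pruning may discard true nearest neighbors; that failure mode is the motivation for the non-metric approach proposed in the paper.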

Citation (APA)

Hernández-Rodríguez, S., Carrasco-Ochoa, J. A., & Martínez-Trinidad, J. F. (2008). Fast k most similar neighbor classifier for mixed data based on a tree structure and approximating-eliminating. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5197 LNCS, pp. 364–371). https://doi.org/10.1007/978-3-540-85920-8_45
