Distance measures for prototype based classification

Abstract

The basic concepts of distance-based classification are introduced in terms of clear-cut example systems. The classical k-Nearest-Neighbor (kNN) classifier serves as the starting point of the discussion. Learning Vector Quantization (LVQ) is introduced, which represents the reference data by a few prototypes. This requires a data-driven training process; examples of heuristic and cost-function-based prescriptions are presented. While the most popular measure of dissimilarity in this context is the Euclidean distance, this choice is frequently made without justification. Alternative distances can yield better performance in practical problems. Several examples are discussed, including more general Minkowski metrics and statistical divergences for the comparison of, e.g., histogram data. Furthermore, the framework of relevance learning in LVQ is presented, in which parameters of adaptive distance measures are optimized during the training phase. A practical application of Matrix Relevance LVQ in the context of tumor classification illustrates the approach.
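The nearest-prototype rule with a relevance-weighted distance, as described above, can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (diagonal relevance vector, heuristic LVQ1-style update), not the authors' implementation; all function names are hypothetical:

```python
import numpy as np

def relevance_distance(x, w, lam):
    """Adaptive squared distance: d(x, w) = sum_i lam_i * (x_i - w_i)^2.

    With lam_i = 1 for all i this reduces to the squared Euclidean distance.
    """
    return float(np.sum(lam * (x - w) ** 2))

def nearest_prototype(x, prototypes, labels, lam):
    """Assign x the label of the closest prototype under the adaptive distance."""
    distances = [relevance_distance(x, w, lam) for w in prototypes]
    return labels[int(np.argmin(distances))]

def lvq1_step(x, y, prototypes, labels, lam, eta=0.1):
    """One heuristic LVQ1 update: attract the winning prototype if its label
    matches the sample's label, repel it otherwise."""
    distances = [relevance_distance(x, w, lam) for w in prototypes]
    j = int(np.argmin(distances))
    sign = 1.0 if labels[j] == y else -1.0
    prototypes[j] += sign * eta * lam * (x - prototypes[j])
    return prototypes
```

In relevance learning, the entries of `lam` would themselves be adapted during training (e.g., by gradient descent on a cost function); Matrix Relevance LVQ generalizes the diagonal `lam` to a full positive semi-definite matrix.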

Citation (APA)

Biehl, M., Hammer, B., & Villmann, T. (2014). Distance measures for prototype based classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8603, pp. 100–116). Springer Verlag. https://doi.org/10.1007/978-3-319-12084-3_9
