Global metric learning by gradient descent

Abstract

The k-NN classifier can be very competitive if an appropriate distance measure is used. It is often used in applications because its classification decisions are easy to interpret. Here, we demonstrate how to find a good Mahalanobis distance for k-NN classification by simple gradient descent without any constraints. The cost term uses global distances and, unlike other methods, there is a soft transition in the influence of data points. It is evaluated and compared to other metric learning and feature weighting methods on datasets from the UCI repository, where the described gradient method also shows high robustness. The comparison demonstrates the advantages of global approaches. © 2014 Springer International Publishing Switzerland.
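The approach described in the abstract is easy to prototype. Below is a minimal NumPy sketch, assuming a sigmoid-weighted global pairwise cost with illustrative hyperparameters beta and tau; the paper's exact cost term may differ in detail, but the structure matches the description above: unconstrained gradient descent on a matrix L, with the Mahalanobis matrix M = L^T L positive semidefinite by construction, global distances over all pairs, and a soft (sigmoid) transition in each pair's influence.

```python
# Minimal sketch of unconstrained Mahalanobis metric learning by gradient
# descent. The cost below (sigmoid-weighted global pairwise loss; beta and
# tau are illustrative) is an assumption, not the paper's exact formulation.
import numpy as np

def pairwise_sq_dists(X, L):
    """Squared Mahalanobis distances d_ij = ||L (x_i - x_j)||^2 for all pairs."""
    Z = X @ L.T                                  # project the data with L
    G = Z @ Z.T
    sq = np.diag(G)[:, None] + np.diag(G)[None, :] - 2.0 * G
    return np.maximum(sq, 0.0)                   # clip tiny negative round-off

def learn_metric(X, y, n_iter=300, lr=1e-2, beta=1.0, tau=1.0):
    """Learn L (M = L^T L, so no explicit PSD constraint is needed).

    Global cost over ALL pairs, with a soft sigmoid influence:
        same class:      sigma(beta * (d_ij - tau))  -> pull together
        different class: sigma(beta * (tau - d_ij))  -> push apart
    """
    n, d = X.shape
    L = np.eye(d)                                # start from the Euclidean metric
    same = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
    diff = y[:, None] != y[None, :]
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    for _ in range(n_iter):
        D = pairwise_sq_dists(X, L)
        # dCost/dd_ij per pair; sigma' = sigma*(1 - sigma) yields the soft
        # transition in the influence of data points
        s1, s2 = sig(beta * (D - tau)), sig(beta * (tau - D))
        C = np.zeros((n, n))
        C[same] = (beta * s1 * (1.0 - s1))[same]
        C[diff] = (-beta * s2 * (1.0 - s2))[diff]
        # dd_ij/dL = 2 L (x_i - x_j)(x_i - x_j)^T, summed in closed form
        S = X.T @ (C.sum(axis=1)[:, None] * X) - X.T @ C @ X
        L -= lr * (4.0 * L @ S) / n ** 2         # plain unconstrained step
    return L
```

In use, one would transform the data with the learned L and run an ordinary Euclidean k-NN on the projected points, e.g. Z_train, Z_test = X_train @ L.T, X_test @ L.T.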

Citation (APA)

Hocke, J., & Martinetz, T. (2014). Global metric learning by gradient descent. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8681 LNCS, pp. 129–135). Springer Verlag. https://doi.org/10.1007/978-3-319-11179-7_17
