The k-NN classifier can be very competitive if an appropriate distance measure is used. It is popular in applications because its classification decisions are easy to interpret. Here, we demonstrate how to find a good Mahalanobis distance for k-NN classification by simple gradient descent, without any constraints. The cost term uses global distances and, unlike other methods, applies a soft transition in the influence of data points. The approach is evaluated and compared to other metric learning and feature weighting methods on datasets from the UCI repository, where the described gradient method also shows high robustness. The comparison demonstrates the advantages of global approaches. © 2014 Springer International Publishing Switzerland.
Citation:
Hocke, J., & Martinetz, T. (2014). Global metric learning by gradient descent. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8681 LNCS, pp. 129–135). Springer Verlag. https://doi.org/10.1007/978-3-319-11179-7_17