Algorithmic robustness for semi-supervised (ε, γ, τ)-good metric learning


Abstract

Using an appropriate metric is crucial for the performance of most machine learning algorithms. For this reason, a lot of effort has been put into distance and similarity learning. However, this research field lacks theoretical guarantees on the generalization capacity of the classifier associated with a learned metric. The theoretical framework of (ε, γ, τ)-good similarity functions [1] provides a means to relate the properties of a similarity function to those of a linear classifier built on top of it. In this paper, we extend this theory to a method where the metric and the separator are jointly learned in a semi-supervised way, a setting that has not been explored before. We furthermore prove the robustness of our algorithm, which allows us to derive a generalization bound for this approach. The behavior of our method is illustrated via experimental results.
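For context, the (ε, γ, τ)-goodness notion referenced above is, to the best of our recollection of Balcan et al. [1], defined roughly as follows (this sketch is not taken from the paper itself, and the exact formulation there may differ):

```latex
% Sketch of the (\epsilon, \gamma, \tau)-good similarity definition
% (after Balcan, Blum & Srebro [1]; notation is assumed, not from this paper).
% A similarity K : X \times X \to [-1, 1] is (\epsilon, \gamma, \tau)-good
% for a distribution P over labeled examples (x, y) if there exists an
% indicator R(x) of "reasonable points" such that:
%
% (1) a (1 - \epsilon) probability mass of examples x has margin \gamma
%     in expectation over reasonable points:
\Pr_{(x,y) \sim P}\!\left[
  \mathbb{E}_{(x',y') \sim P}\!\left[ y\, y'\, K(x, x') \mid R(x') \right]
  \ge \gamma
\right] \ge 1 - \epsilon
%
% (2) the reasonable points are not too rare:
\Pr_{x' \sim P}\!\left[ R(x') \right] \ge \tau
```

Intuitively, ε bounds the mass of examples violating the margin, γ is the margin itself, and τ lower-bounds the fraction of "reasonable" landmark points; these are the quantities the learned linear separator's guarantees depend on.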

Citation (APA)

Nicolae, M. I., Sebban, M., Habrard, A., Gaussier, E., & Amini, M. R. (2015). Algorithmic robustness for semi-supervised (ε, γ, τ)-good metric learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9489, pp. 253–263). Springer Verlag. https://doi.org/10.1007/978-3-319-26532-2_28
