Learning from few positives: A provably accurate metric learning algorithm to deal with imbalanced data

Abstract

Learning from imbalanced data, where positive examples are very scarce, remains a challenging task from both a theoretical and an algorithmic perspective. In this paper, we address this problem with a metric learning strategy. Unlike state-of-the-art methods, our algorithm MLFP, for Metric Learning from Few Positives, learns a new representation that is used only when a test query is compared to a minority training example. From a geometric perspective, it artificially brings positive examples closer to the query without changing the distances to the negative (majority class) data. This strategy expands the decision boundaries around the positives, yielding a better F-Measure, a criterion well suited to imbalanced scenarios. Beyond the algorithmic contribution of MLFP, our paper presents generalization guarantees on the false positive and false negative rates. Extensive experiments conducted on several imbalanced datasets show the effectiveness of our method.
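The core idea described above, applying a learned transformation only when the query is compared to a positive example, can be sketched as follows. This is a minimal illustration of the asymmetric-distance mechanism, not the MLFP optimization itself; the linear map `L` used here is a hypothetical contraction standing in for the matrix the algorithm would actually learn.

```python
import numpy as np

def asymmetric_distance(query, example, is_positive, L):
    """Test-time distance: the learned linear map L is applied only when
    the query is compared to a minority (positive) example; distances to
    majority (negative) examples remain plain Euclidean."""
    diff = query - example
    if is_positive:
        diff = L @ diff  # learned metric pulls positives closer to the query
    return float(np.linalg.norm(diff))

# Toy illustration with a hypothetical contraction L = 0.5 * I:
# the positive example is pulled closer while an equally distant
# negative keeps its original Euclidean distance.
L = 0.5 * np.eye(2)
q = np.array([0.0, 0.0])
x_pos = np.array([2.0, 0.0])
x_neg = np.array([2.0, 0.0])

d_pos = asymmetric_distance(q, x_pos, True, L)   # 1.0
d_neg = asymmetric_distance(q, x_neg, False, L)  # 2.0
```

Used inside a nearest-neighbor rule, this asymmetry enlarges the region of the input space assigned to the minority class, which is the geometric effect the abstract describes.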


CITATION STYLE

APA

Viola, R., Emonet, R., Habrard, A., Metzler, G., & Sebban, M. (2020). Learning from few positives: A provably accurate metric learning algorithm to deal with imbalanced data. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2155–2161). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/298
