Learning Non-Parametric Surrogate Losses with Correlated Gradients


This article is free to access.

Abstract

Training models by minimizing surrogate loss functions with gradient-based algorithms is a standard approach in various vision tasks. This strategy often leads to suboptimal solutions because of the gap between the target evaluation metrics and the surrogate loss functions. In this paper, we propose a framework that learns a surrogate loss function approximating the evaluation metric with correlated gradients. We observe that correlated gradients significantly help gradient-based algorithms improve the quality of solutions. We verify the effectiveness of our method on tasks such as multi-class classification, ordinal regression, and pose estimation, using three evaluation metrics and five datasets. Our extensive experiments show that our method outperforms conventional loss functions and existing surrogate loss learning methods.
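The gap the abstract refers to can be illustrated with a toy sketch (not the paper's method): the 0-1 accuracy metric is non-differentiable, so training descends a differentiable surrogate instead, and the surrogate is useful only insofar as its descent direction tracks the metric. The data, model, and correlation check below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy binary classification data, for illustration only.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
losses, accs = [], []
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    # Cross-entropy: a differentiable surrogate for the
    # non-differentiable 0-1 accuracy metric.
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    losses.append(loss)
    accs.append(np.mean((p > 0.5) == y))
    # Gradient step on the surrogate, not on accuracy itself.
    w -= 0.5 * (X.T @ (p - y)) / len(y)

# For a well-aligned surrogate, descending the loss raises the metric,
# so the two trajectories are anti-correlated across training steps.
corr = np.corrcoef(losses, accs)[0, 1]
print(corr)
```

When the surrogate is poorly aligned with the target metric, this correlation weakens and surrogate minimization can stall at metric-suboptimal solutions, which is the failure mode the paper's learned surrogate is designed to avoid.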

Citation (APA)

Yoa, S., Park, J., & Kim, H. J. (2021). Learning Non-Parametric Surrogate Losses with Correlated Gradients. IEEE Access, 9, 141199–141209. https://doi.org/10.1109/ACCESS.2021.3120092
