Training models by minimizing surrogate loss functions with gradient-based algorithms is a standard approach in various vision tasks. This strategy often leads to suboptimal solutions because of the gap between the target evaluation metric and the surrogate loss function. In this paper, we propose a framework that learns a surrogate loss function approximating the evaluation metric with correlated gradients. We observe that correlated gradients significantly help gradient-based algorithms find higher-quality solutions. We verify the effectiveness of our method on tasks such as multi-class classification, ordinal regression, and pose estimation, using three evaluation metrics and five datasets. Our extensive experiments show that our method outperforms conventional loss functions and existing surrogate loss learning methods.
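The motivating gap can be illustrated with a minimal sketch (not the paper's method): an evaluation metric such as accuracy is piecewise constant, so its gradient is zero almost everywhere and cannot drive gradient descent directly. A hand-designed surrogate like the logistic loss has gradients that correlate with metric improvement, so minimizing it still improves the metric. All data and function names below are illustrative assumptions.

```python
import numpy as np

# Synthetic linearly separable binary data (illustrative setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -2.0])
y = (X @ true_w > 0).astype(float)

def accuracy(w):
    # Evaluation metric: piecewise constant in w, gradient zero almost everywhere,
    # so it cannot be minimized directly with gradient descent.
    return np.mean((X @ w > 0) == y)

def surrogate_grad(w):
    # Gradient of the logistic (cross-entropy) surrogate loss, which is
    # differentiable and whose descent direction correlates with accuracy gains.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

w = np.zeros(2)
acc_before = accuracy(w)
for _ in range(200):
    w -= 0.5 * surrogate_grad(w)   # descend the surrogate, not the metric
acc_after = accuracy(w)
```

Descending the surrogate raises the (non-differentiable) metric; the paper's contribution is to *learn* such a surrogate for a given metric rather than fixing it by hand.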
Citation
Yoa, S., Park, J., & Kim, H. J. (2021). Learning Non-Parametric Surrogate Losses with Correlated Gradients. IEEE Access, 9, 141199–141209. https://doi.org/10.1109/ACCESS.2021.3120092