A Unified Framework of Surrogate Loss by Refactoring and Interpolation


Abstract

We introduce UniLoss, a unified framework for generating surrogate losses for training deep networks with gradient descent, reducing the amount of manual design of task-specific surrogate losses. Our key observation is that in many cases, evaluating a model with a performance metric on a batch of examples can be refactored into four steps: from inputs to real-valued scores, from scores to comparisons of pairs of scores, from comparisons to binary variables, and from binary variables to the final performance metric. Using this refactoring, we generate differentiable approximations for each non-differentiable step through interpolation. With UniLoss, we can optimize for different tasks and metrics in one unified framework, achieving performance comparable to that of task-specific losses. We validate the effectiveness of UniLoss on three tasks and four datasets. Code is available at https://github.com/princeton-vl/uniloss.
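The four-step refactoring described in the abstract can be illustrated with a minimal sketch for classification accuracy. This is not the authors' exact formulation (their paper uses interpolation over binary-variable configurations); here, the non-differentiable score comparisons are simply relaxed with a sigmoid, and the function names and the temperature parameter `tau` are our own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_accuracy(scores, labels, tau=1.0):
    """Differentiable surrogate for batch accuracy via the four-step view:
      1. inputs -> real-valued scores (given here as `scores`, shape (n, k))
      2. scores -> pairwise comparisons: d_ij = s_true - s_j
      3. comparisons -> soft binary variables via a sigmoid relaxation
      4. binary variables -> metric: a sample counts as correct when its
         true-class score beats every other class score
    """
    n, k = scores.shape
    true_scores = scores[np.arange(n), labels][:, None]          # (n, 1)
    diffs = true_scores - scores                                 # step 2
    soft_bits = sigmoid(diffs / tau)                             # step 3
    # exclude the self-comparison, then require all comparisons won
    mask = np.ones_like(scores, dtype=bool)
    mask[np.arange(n), labels] = False
    per_sample = soft_bits[mask].reshape(n, k - 1).prod(axis=1)  # step 4
    return per_sample.mean()
```

As `tau` shrinks, the sigmoid approaches a hard step function and `soft_accuracy` approaches the true (non-differentiable) accuracy; larger `tau` yields smoother gradients at the cost of a looser approximation.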

Citation (APA)

Liu, L., Wang, M., & Deng, J. (2020). A Unified Framework of Surrogate Loss by Refactoring and Interpolation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12348 LNCS, pp. 278–293). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58580-8_17
