Taming the Cross Entropy Loss

Abstract

We present the Tamed Cross Entropy (TCE) loss function, a robust derivative of the standard Cross Entropy (CE) loss used in deep learning for classification tasks. Unlike other robust losses, the TCE loss is designed to exhibit the same training properties as the CE loss in noiseless scenarios. Therefore, the TCE loss requires no modification to the training regime compared to the CE loss and, consequently, can be applied in all applications where the CE loss is currently used. We evaluate the TCE loss using the ResNet architecture on four image datasets that we artificially contaminated with various levels of label noise. The TCE loss outperforms the CE loss in every tested scenario.
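The abstract does not reproduce the TCE definition, only the claim that it is a bounded, drop-in replacement for the CE loss. The following is a minimal sketch of what such a replacement could look like in PyTorch; the truncated-Taylor form of the loss and the `n_terms` parameter are illustrative assumptions, not the paper's verified definition, so consult the full text for the actual formulation.

```python
# Sketch of a drop-in, bounded CE-like loss for a standard classification
# training loop. The exact TCE loss is defined in the paper; the form used
# here (a truncated Taylor expansion of -log p) is an assumption.
import torch
import torch.nn.functional as F


def cross_entropy(logits, targets):
    """Standard CE loss: mean of -log p_y over the batch."""
    return F.cross_entropy(logits, targets)


def tamed_cross_entropy(logits, targets, n_terms=4):
    """Illustrative bounded CE-like loss (assumed form).

    Replaces -log(p_y) with the first `n_terms` terms of its Taylor
    expansion around p_y = 1, i.e. sum_{i=1..N} (1 - p_y)^i / i, which
    stays finite even when p_y -> 0 and so limits the influence of
    mislabeled samples.
    """
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    loss = torch.zeros_like(p_y)
    for i in range(1, n_terms + 1):
        loss = loss + (1.0 - p_y) ** i / i
    return loss.mean()


# Usage: swap the loss function; the rest of the training loop is unchanged.
logits = torch.randn(8, 10)            # e.g. ResNet outputs for 10 classes
targets = torch.randint(0, 10, (8,))   # (possibly noisy) integer labels
print(cross_entropy(logits, targets).item())
print(tamed_cross_entropy(logits, targets).item())
```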

Citation (APA)

Martinez, M., & Stiefelhagen, R. (2019). Taming the Cross Entropy Loss. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11269 LNCS, pp. 628–637). Springer Verlag. https://doi.org/10.1007/978-3-030-12939-2_43
