Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion


Abstract

Deep neural networks need large amounts of labeled data to achieve good performance. In real-world applications, labels are usually collected from non-experts, e.g., via crowdsourcing, to save cost, and are thus noisy. In the past few years, many deep learning methods for handling noisy labels have been developed, a large number of which are based on the small-loss criterion. However, there have been few theoretical analyses explaining why these methods learn well from noisy labels. In this paper, we theoretically explain why the widely used small-loss criterion works. Based on this explanation, we reformalize the vanilla small-loss criterion to better tackle noisy labels. The experimental results verify our theoretical explanation and also demonstrate the effectiveness of the reformalization.
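The small-loss criterion mentioned above rests on the observation that, early in training, deep networks tend to fit clean examples before mislabeled ones, so examples with small loss are more likely to be correctly labeled. A minimal sketch of that selection step (an illustrative heuristic, not the paper's exact algorithm; the function name, keep rate, and toy losses are assumptions of this sketch):

```python
# Small-loss sample selection, sketched: compute per-example losses, then
# keep only the fraction with the smallest losses -- presumed clean -- for
# the next parameter update. The keep_rate schedule is a common heuristic.

def small_loss_select(losses, keep_rate):
    """Return indices of the keep_rate fraction of examples with smallest loss."""
    n_keep = max(1, int(len(losses) * keep_rate))
    # Sort example indices by their loss, ascending.
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[:n_keep])

# Toy batch: examples 1 and 3 have large losses, as mislabeled examples
# often do early in training, so they are excluded from the update.
batch_losses = [0.2, 2.5, 0.4, 3.1, 0.3]
clean_idx = small_loss_select(batch_losses, keep_rate=0.6)
print(clean_idx)  # -> [0, 2, 4]
```

In practice the keep rate is typically scheduled to decrease toward an estimate of the clean-label fraction as training proceeds.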

Citation (APA)

Gui, X. J., Wang, W., & Tian, Z. H. (2021). Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2469–2475). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/340
