Learning with Noisy Labels by Adaptive Gradient-Based Outlier Removal

Abstract

An accurate and substantial dataset is essential for training a reliable and well-performing model. However, even manually annotated datasets contain label errors, not to mention automatically labeled ones. Previous methods for label denoising have primarily focused on detecting outliers and removing them permanently, a process that is likely to over- or under-filter the dataset. In this work, we propose AGRA: a new method for learning with noisy labels using Adaptive GRAdient-based outlier removal (we share our code at https://github.com/anasedova/AGRA). Instead of cleaning the dataset before model training, the dataset is adjusted dynamically during training. By comparing the aggregated gradient of a batch of samples with the gradient of an individual example, our method decides on the fly whether the example is helpful for the model at this point or is counter-productive and should be left out of the current update. Extensive evaluation on several datasets demonstrates AGRA's effectiveness, and a comprehensive analysis of the results supports our initial hypothesis: permanent hard outlier removal is not always what the model benefits from most.
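The core idea described in the abstract can be sketched as follows: compute a per-example gradient, compare it with an aggregated batch gradient, and skip examples whose gradient disagrees with the aggregate for the current update. The sketch below uses plain logistic regression and a cosine-similarity threshold of zero; the function names and the exact filtering criterion are illustrative assumptions, not the authors' implementation (which is in the linked repository).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_gradients(w, X, y):
    # Gradient of the binary cross-entropy loss w.r.t. w,
    # computed separately for each example: one row per example.
    p = sigmoid(X @ w)              # predicted probabilities, shape (n,)
    return (p - y)[:, None] * X     # shape (n, d)

def agra_style_update(w, X, y, lr=0.1):
    """One gradient step that leaves out examples whose gradient points
    against the aggregated batch gradient (cosine similarity <= 0).
    Simplified sketch: AGRA's actual comparison batch and similarity
    criterion may differ."""
    grads = per_example_gradients(w, X, y)      # (n, d)
    batch_grad = grads.mean(axis=0)             # aggregated comparison gradient
    sims = grads @ batch_grad / (
        np.linalg.norm(grads, axis=1) * np.linalg.norm(batch_grad) + 1e-12)
    keep = sims > 0                             # examples deemed helpful now
    if keep.any():
        w = w - lr * grads[keep].mean(axis=0)
    return w, keep
```

Because the filtering decision is re-evaluated at every step, an example excluded early in training can still contribute later, which is the key difference from permanent outlier removal.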

Citation (APA)

Sedova, A., Zellinger, L., & Roth, B. (2023). Learning with Noisy Labels by Adaptive Gradient-Based Outlier Removal. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14169 LNAI, pp. 237–253). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-43412-9_14
