A framework using contrastive learning for classification with noisy labels


Abstract

We propose a framework using contrastive learning as a pre-training task to perform image classification in the presence of noisy labels. Recent strategies, such as pseudo-labeling, sample selection with Gaussian Mixture models, and weighted supervised contrastive learning, have been combined into a fine-tuning phase following the pre-training. In this paper, we provide an extensive empirical study showing that a preliminary contrastive learning step brings a significant gain in performance when using different loss functions: non-robust, robust, and early-learning regularized. Our experiments performed on standard benchmarks and real-world datasets demonstrate that: (i) the contrastive pre-training increases the robustness of any loss function to noisy labels and (ii) the additional fine-tuning phase can further improve accuracy, but at the cost of additional complexity.
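One ingredient of the fine-tuning phase mentioned above, sample selection with a Gaussian Mixture model, is commonly realized by fitting a two-component GMM to the per-sample training losses and treating the low-mean component as the "clean" set. The sketch below is an illustrative implementation of that general idea using scikit-learn, not the authors' exact code; the function name, threshold, and toy data are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_clean_samples(per_sample_losses, threshold=0.5):
    """Fit a 2-component GMM to per-sample losses and keep the samples
    whose posterior probability of belonging to the low-mean
    (presumably clean) component exceeds `threshold`."""
    losses = np.asarray(per_sample_losses, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    # The component with the smaller mean loss is taken to be "clean".
    clean_component = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > threshold

# Toy example: 80 "clean" samples with small losses, 20 "noisy" ones
# with large losses (synthetic data for illustration only).
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.1, 0.02, 80),
                         rng.normal(2.0, 0.3, 20)])
mask = select_clean_samples(losses)
```

In methods of this family the selected subset is then treated as labeled data for the next training round, while the rejected samples are either discarded or relabeled via pseudo-labeling.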

Citation (APA)
Ciortan, M., Dupuis, R., & Peel, T. (2021). A framework using contrastive learning for classification with noisy labels. Data, 6(6). https://doi.org/10.3390/data6060061
