Self-paced robust learning for leveraging clean labels in noisy data


Abstract

The success of training accurate models strongly depends on the availability of a sufficient collection of precisely labeled data. However, real-world datasets contain erroneously labeled data samples that substantially hinder the performance of machine learning models. Meanwhile, well-labeled data is usually expensive to obtain, and only a limited amount is available for training. In this paper, we consider the problem of training a robust model by using large-scale noisy data in conjunction with a small set of clean data. To leverage the information contained in the clean labels, we propose a novel self-paced robust learning algorithm (SPRL) that trains the model in a process from more reliable (clean) data instances to less reliable (noisy) ones under the supervision of well-labeled data. The self-paced learning process hedges the risk of selecting corrupted data into the training set. Moreover, theoretical analyses on the convergence of the proposed algorithm are provided under mild assumptions. Extensive experiments on synthetic and real-world datasets demonstrate that our proposed approach can achieve a considerable improvement in effectiveness and robustness over existing methods.
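The core self-paced idea in the abstract (train on the trusted clean set, then gradually admit noisy samples whose loss falls below a growing threshold) can be illustrated with a minimal sketch. This is not the paper's SPRL algorithm; it is a generic self-paced selection loop on synthetic data, with all names, hyperparameters, and the annealing schedule chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: a small clean set plus a large noisy set
# whose labels are flipped with probability 0.3. Illustrative only.
def make_data(n, flip=0.0):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    flipped = rng.random(n) < flip
    y[flipped] = 1 - y[flipped]
    return X, y

X_clean, y_clean = make_data(50)            # trusted labels
X_noisy, y_noisy = make_data(1000, flip=0.3)  # corrupted labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
lam = 0.3  # self-paced threshold on per-sample loss ("pace")
for epoch in range(200):
    # Per-sample logistic loss over the noisy pool.
    p = sigmoid(X_noisy @ w)
    losses = -(y_noisy * np.log(p + 1e-12)
               + (1 - y_noisy) * np.log(1 - p + 1e-12))
    # Self-paced selection: keep only "easy" (low-loss) noisy samples,
    # which hedges the risk of training on corrupted labels.
    v = losses < lam
    # Always train on the clean set, plus the currently selected subset.
    X = np.vstack([X_clean, X_noisy[v]])
    y = np.concatenate([y_clean, y_noisy[v]])
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)
    w -= 0.5 * grad
    lam *= 1.02  # anneal the pace: gradually admit harder samples

# Evaluate on a held-out clean test set.
X_test, y_test = make_data(500)
acc = ((sigmoid(X_test @ w) > 0.5) == y_test).mean()
```

In the first epochs every noisy sample's loss exceeds the threshold, so the model fits only the clean set; as the threshold grows, low-loss (likely correctly labeled) noisy samples enter the training set while high-loss (likely corrupted) ones stay excluded.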

Citation (APA)

Zhang, X., Wu, X., Chen, F., Zhao, L., & Lu, C. T. (2020). Self-paced robust learning for leveraging clean labels in noisy data. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6853–6860). AAAI press. https://doi.org/10.1609/aaai.v34i04.6166
