Learning from Complementary Labels via Partial-Output Consistency Regularization

15 citations · 10 Mendeley readers

Abstract

In complementary-label learning (CLL), a multiclass classifier is learned from training instances, each associated with complementary labels that specify classes the instance does not belong to. Previous studies focus on unbiased risk estimators or surrogate losses while neglecting the importance of regularization in the training phase. In this paper, we make the first attempt to leverage regularization techniques for CLL. By decoupling a label vector into complementary labels and partially unknown labels, we simultaneously inhibit the outputs on the complementary labels with a complementary loss and penalize the sensitivity of the classifier to the partial outputs of the unknown classes via consistency regularization. We then unify the complementary loss and the consistency loss with a specially designed dynamic weighting factor. A series of experiments shows that the proposed method achieves highly competitive performance in CLL.
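To make the decoupled objective concrete, the sketch below shows one plausible PyTorch-style realization of the two terms described in the abstract. The surrogate complementary loss (pushing probabilities of complementary labels toward zero), the squared-distance consistency penalty over two augmented views, and the linear ramp-up weighting are all illustrative assumptions, not the paper's exact formulation; the function and argument names are hypothetical.

```python
import torch
import torch.nn.functional as F

def pocr_style_loss(logits_weak, logits_strong, comp_mask, epoch, total_epochs):
    """Sketch of a complementary + partial-output consistency objective.

    logits_weak / logits_strong: classifier outputs for two augmented
        views of the same batch, shape (batch, num_classes).
    comp_mask: {0, 1} tensor of the same shape; 1 marks a complementary
        label (a class the instance is known NOT to belong to), 0 marks
        the remaining "unknown" classes.
    """
    comp_mask = comp_mask.float()
    probs_weak = F.softmax(logits_weak, dim=1)
    probs_strong = F.softmax(logits_strong, dim=1)

    # Complementary loss: inhibit the outputs on complementary labels.
    # One common surrogate is -log(1 - p_k) summed over the
    # complementary labels k (an assumption here).
    comp_loss = -(comp_mask * torch.log(1.0 - probs_weak + 1e-12)).sum(dim=1).mean()

    # Consistency regularization on the partial outputs: penalize the
    # classifier's sensitivity on the unknown classes only, here as the
    # squared distance between the two views restricted to those classes.
    unk_mask = 1.0 - comp_mask
    cons_loss = ((unk_mask * (probs_weak - probs_strong)) ** 2).sum(dim=1).mean()

    # Dynamic weighting factor: a simple linear ramp-up stands in for
    # the paper's specially designed schedule (also an assumption).
    lam = min(1.0, epoch / max(1, total_epochs // 2))
    return comp_loss + lam * cons_loss
```

In a training loop, logits_weak and logits_strong would come from forwarding two stochastic augmentations of the same batch through the shared classifier, so the consistency term constrains only the classes not already ruled out by the complementary labels, while the complementary term drives their probabilities down directly.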

Cite

APA

Wang, D. B., Feng, L., & Zhang, M. L. (2021). Learning from Complementary Labels via Partial-Output Consistency Regularization. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3075–3081). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/423
