Learning from multi-dimensional partial labels


Abstract

Multi-dimensional classification (MDC) has attracted much attention from the community. Although most existing studies assume fully annotated data, obtaining fully labeled data for MDC tasks is usually intractable in practice. In this paper, we propose a novel learning paradigm, Multi-Dimensional Partial Label Learning (MDPL), in which the ground-truth labels of each instance are concealed within multiple candidate label sets. We first introduce the partial Hamming loss for MDPL, which incurs a large loss whenever a predicted label falls outside its candidate label set, and provide an empirical risk minimization (ERM) framework. Theoretically, we rigorously prove the conditions for ERM learnability of MDPL in both the independent and dependent cases. Furthermore, we present two MDPL algorithms under the proposed ERM framework. Comprehensive experiments on both synthetic and real-world datasets validate the effectiveness of our proposals.
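As a rough illustration of the idea described in the abstract (not the paper's formal definition), the partial Hamming loss can be read as a per-dimension 0/1 penalty that fires whenever the predicted label for a dimension lies outside that dimension's candidate set. The sketch below assumes this reading; the function name partial_hamming_loss and the candidate_sets representation are illustrative choices, not the authors' implementation.

```python
def partial_hamming_loss(y_pred, candidate_sets):
    """Illustrative per-instance partial Hamming loss.

    y_pred: sequence of length m with the predicted label for each of the
        m classification dimensions.
    candidate_sets: sequence of length m, where candidate_sets[j] is the set
        of candidate labels observed for dimension j (the ground-truth label
        is concealed somewhere inside it).
    Returns the fraction of dimensions whose predicted label falls outside
    the corresponding candidate set.
    """
    m = len(candidate_sets)
    misses = sum(1 for j in range(m) if y_pred[j] not in candidate_sets[j])
    return misses / m


# Example: three dimensions; only the third prediction misses its candidate set.
print(partial_hamming_loss([2, 0, 1], [{1, 2}, {0}, {0, 2}]))  # 0.333...
```

Under an ERM framework of the kind the abstract mentions, a loss of this shape would be averaged over the training instances and minimized over a hypothesis class; the exact estimator and its learnability conditions are developed in the paper itself.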

Citation (APA)

Wang, H., Liu, W., Zhao, Y., Hu, T., Chen, K., & Chen, G. (2020). Learning from multi-dimensional partial labels. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2943–2949). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/407
