Partial multi-label learning with label distribution

Abstract

Partial multi-label learning (PML) aims to learn from training examples each associated with a set of candidate labels, among which only a subset are valid. The common strategy for inducing a predictive model is to disambiguate the candidate label set, for example by identifying the ground-truth labels via the confidence of each candidate label, or by estimating the noisy labels within the candidate sets. Nonetheless, these strategies overlook the label distribution underlying each instance, since the label distribution is not explicitly available in the training set. In this paper, a new partial multi-label learning strategy named PML-LD is proposed to learn from partial multi-label examples via label enhancement. Specifically, label distributions are recovered by leveraging the topological information of the feature space and the correlations among the labels. After that, a predictive model is learned by fitting a regularized multi-output regressor to the recovered label distributions. Experimental results on synthetic as well as real-world datasets clearly validate the effectiveness of PML-LD for solving PML problems.
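The two-stage pipeline the abstract describes (recover label distributions from candidate sets, then fit a regularized multi-output regressor) can be illustrated with a minimal sketch. This is not the paper's algorithm: the kNN graph propagation, Gaussian similarity kernel, and the hyperparameters `k`, `alpha`, and `lam` are illustrative assumptions standing in for PML-LD's actual label-enhancement and training procedures.

```python
import numpy as np

def recover_label_distributions(X, Y, k=3, alpha=0.5, iters=20):
    """Label-enhancement sketch: propagate candidate-label confidences over
    a kNN similarity graph built in feature space. X is (n, d) features,
    Y is (n, q) binary candidate-label indicators. Hyperparameters are
    illustrative assumptions, not the paper's settings."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances; exclude self-similarity.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    # Gaussian similarity with a median-distance bandwidth heuristic.
    sigma = np.sqrt(np.median(d2[np.isfinite(d2)]))
    W = np.exp(-d2 / (2 * sigma ** 2))
    # Keep only the k nearest neighbors per row, then row-normalize.
    for i in range(n):
        W[i, np.argsort(d2[i])[k:]] = 0.0
    W /= W.sum(axis=1, keepdims=True)
    # Start from a uniform distribution over each candidate set.
    prior = Y / Y.sum(axis=1, keepdims=True)
    D = prior.copy()
    for _ in range(iters):
        # Blend neighborhood consensus with the candidate-set prior.
        D = alpha * (W @ D) + (1 - alpha) * prior
        D *= Y                              # no mass outside candidates
        D /= D.sum(axis=1, keepdims=True)   # renormalize to a distribution
    return D

def fit_ridge_multioutput(X, D, lam=1.0):
    """Regularized multi-output regressor (ridge): one linear output per
    label, fit jointly to the recovered label distributions."""
    d = X.shape[1]
    # Closed-form ridge solution: (X^T X + lam I)^{-1} X^T D, shape (d, q).
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ D)
```

At prediction time, `X_new @ B` (with `B` from `fit_ridge_multioutput`) yields a score per label; thresholding or top-ranking those scores gives the predicted label set.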

Citation (APA)

Xu, N., Liu, Y. P., & Geng, X. (2020). Partial multi-label learning with label distribution. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6510–6517). AAAI press. https://doi.org/10.1609/aaai.v34i04.6124
