Abstract
Partial Multi-Label Learning (PML) aims to learn from training data in which each instance is associated with a set of candidate labels, of which only a subset is relevant. Existing PML methods focus mainly on label disambiguation and largely overlook noise in the feature space. To tackle this problem, we propose a novel framework named partial multi-label learning via MUlti-SubspacE Representation (MUSER), in which redundant labels and noisy features are jointly taken into consideration during training. Specifically, we first decompose the original label space into a latent label subspace and a label correlation matrix to reduce the negative effects of redundant labels; we then exploit the correlations among features to map the original noisy feature space to a feature subspace that resists noisy feature information. Afterwards, we introduce a graph Laplacian regularization that constrains the label subspace to preserve the intrinsic structure among features, and we impose an orthogonality constraint on the feature correlations to guarantee the discriminability of the feature subspace. Extensive experiments on various datasets demonstrate the superiority of the proposed method.
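The abstract does not give the paper's exact objective, but the graph Laplacian regularization it mentions can be illustrated with a minimal NumPy sketch. The sketch below assumes an unnormalized Laplacian built from a k-nearest-neighbor similarity graph over instances; the function names and the choice of Gaussian edge weights are illustrative, not taken from the paper.

```python
import numpy as np

def knn_graph_laplacian(X, k=3):
    """Build a symmetric k-NN similarity graph over the rows of X
    and return its unnormalized graph Laplacian L = D - S.
    (Illustrative sketch; the paper's exact graph construction may differ.)"""
    n = X.shape[0]
    # Squared Euclidean distances between all pairs of instances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # k nearest neighbors, skipping self
        S[i, nbrs] = np.exp(-d2[i, nbrs])   # Gaussian edge weights (assumption)
    S = np.maximum(S, S.T)                  # symmetrize the graph
    return np.diag(S.sum(axis=1)) - S

def laplacian_reg(U, L):
    """tr(U^T L U): small when instances connected in the graph
    have similar rows in the latent representation U."""
    return np.trace(U.T @ L @ U)
```

Minimizing `laplacian_reg` over a latent label representation `U` encourages instances that are close in feature space to receive similar latent labels, which is the structure-preserving role the abstract assigns to this term.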
Li, Z., Lyu, G., & Feng, S. (2020). Partial multi-label learning via multi-subspace representation. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 2612–2618. https://doi.org/10.24963/ijcai.2020/362