Multi-view multi-label learning serves as an important framework for learning from objects with diverse representations and rich semantics. Existing multi-view multi-label learning techniques focus on exploiting a shared subspace for fusing multi-view representations, where view-specific information that is helpful for discriminative modeling is usually ignored. In this paper, a novel multi-view multi-label learning approach named SIMM is proposed, which leverages both shared subspace exploitation and view-specific information extraction. For shared subspace exploitation, SIMM jointly minimizes a confusion adversarial loss and a multi-label loss to utilize the information shared across all views. For view-specific information extraction, SIMM enforces an orthogonal constraint w.r.t. the shared subspace to utilize view-specific discriminative information. Extensive experiments on real-world data sets clearly show the favorable performance of SIMM against other state-of-the-art multi-view multi-label learning approaches.
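To make the described objective concrete, below is a minimal PyTorch-style sketch of the three loss terms the abstract names: an adversarial view-confusion loss and a multi-label loss for the shared subspace, plus an orthogonality penalty for the view-specific embeddings. All module and variable names are hypothetical, and the concrete choices (linear encoders, KL-to-uniform confusion, cross-covariance orthogonality penalty) follow common practice for shared/private subspace models, not the authors' released implementation.

```python
# Illustrative sketch of a SIMM-style objective (assumptions noted above;
# this is not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_views, dim_in, dim_sub, n_labels = 2, 64, 32, 5

shared_enc = nn.ModuleList([nn.Linear(dim_in, dim_sub) for _ in range(n_views)])
specific_enc = nn.ModuleList([nn.Linear(dim_in, dim_sub) for _ in range(n_views)])
view_discriminator = nn.Linear(dim_sub, n_views)  # tries to identify the source view
classifier = nn.Linear(dim_sub * (n_views + 1), n_labels)

def simm_losses(xs, y):
    """xs: list of per-view inputs, each [B, dim_in]; y: multi-label targets [B, n_labels]."""
    shared = [enc(x) for enc, x in zip(shared_enc, xs)]
    specific = [enc(x) for enc, x in zip(specific_enc, xs)]

    # Confusion adversarial loss: push the discriminator's view posterior
    # toward uniform so shared embeddings carry no view identity.
    uniform = torch.full((xs[0].size(0), n_views), 1.0 / n_views)
    conf_loss = sum(
        F.kl_div(F.log_softmax(view_discriminator(s), dim=1), uniform,
                 reduction="batchmean")
        for s in shared
    )

    # Orthogonal constraint: penalize correlation between view-specific and
    # shared embeddings (squared Frobenius norm of the cross-covariance).
    orth_loss = sum((s.t() @ p).pow(2).sum() for s, p in zip(shared, specific))

    # Multi-label loss on the fused (shared + view-specific) representation.
    fused = torch.cat([torch.stack(shared).mean(0)] + specific, dim=1)
    ml_loss = F.binary_cross_entropy_with_logits(classifier(fused), y)
    return ml_loss, conf_loss, orth_loss
```

In a sketch like this, the three terms would be combined as a weighted sum, e.g. `ml_loss + lam1 * conf_loss + lam2 * orth_loss` with hypothetical trade-off weights `lam1`, `lam2`; in practice the discriminator is often trained adversarially against the shared encoders (e.g., via alternating updates or gradient reversal) rather than jointly as written here.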
CITATION STYLE
Wu, X., Chen, Q. G., Hu, Y., Wang, D., Chang, X., Wang, X., & Zhang, M. L. (2019). Multi-view multi-label learning with view-specific information extraction. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 3884–3890). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/539