Multi-view multi-label learning with view-label-specific features


Abstract

In multi-view multi-label learning, each object is represented by multiple data views and belongs to multiple class labels simultaneously. Generally, all data views contribute to the multi-label learning task, but to different degrees. Moreover, within each data view, each class label is associated with only a subset of the features, and different features contribute differently to each label. In this paper, we propose VLSF, a novel framework for multi-view multi-label learning with View-Label-Specific Features. Specifically, we first learn a low-dimensional label-specific data representation for each data view and construct a multi-label classification model on it by exploiting label correlations and view consensus, while jointly learning the contribution weight of each data view to the multi-label learning task over all class labels. The final prediction is then made by combining the prediction results of all the classifiers with the learned contribution weights. Extensive comparison experiments against state-of-the-art approaches demonstrate the effectiveness of the proposed VLSF method.
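The prediction-combination step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and variable names (`vlsf_predict`, `view_alphas`, etc.) are hypothetical, and the per-view classifier weights and view contribution weights are assumed to have already been learned by the paper's joint optimization, which is not reproduced here.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping scores to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def vlsf_predict(views, weights_per_view, view_alphas, threshold=0.5):
    """Combine per-view classifier scores with learned view weights.

    views            : list of (n_samples, d_v) feature matrices, one per view
    weights_per_view : list of (d_v, n_labels) classifier weight matrices W_v
                       (assumed already learned; label-specific sparsity in W_v
                       would come from the paper's joint optimization)
    view_alphas      : (n_views,) nonnegative view contribution weights
    """
    n_samples = views[0].shape[0]
    n_labels = weights_per_view[0].shape[1]
    scores = np.zeros((n_samples, n_labels))
    for X_v, W_v, a_v in zip(views, weights_per_view, view_alphas):
        # Weighted sum of each view's label probability estimates.
        scores += a_v * sigmoid(X_v @ W_v)
    # Binary multi-label decision per (sample, label) pair.
    return scores >= threshold
```

Views with larger learned weights dominate the combined score, so an uninformative view (small alpha) has little influence on the final multi-label decision.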

Citation (APA)

Huang, J., Qu, X., Li, G., Qin, F., Zheng, X., & Huang, Q. (2019). Multi-view multi-label learning with view-label-specific features. IEEE Access, 7, 100979–100992. https://doi.org/10.1109/ACCESS.2019.2930468
