Multiplicative sparse feature decomposition for efficient multi-view multi-task learning


Abstract

Multi-view multi-task learning deals with dual-heterogeneous data, where each sample has multi-view features and multiple tasks are correlated via common views. Existing methods do not sufficiently address three key challenges: (a) preserving task correlation efficiently, (b) building a sparse model, and (c) learning view-wise weights. In this paper, we propose a new method that directly handles these challenges based on multiplicative sparse feature decomposition. For (a), the weight matrix is decomposed into two components via low-rank constrained matrix factorization, which preserves task correlation while learning a reduced number of model parameters. For (b) and (c), the first component is further decomposed into two sub-components, to select topic-specific features and to learn view-wise importance, respectively. Theoretical analysis reveals the method's equivalence with a general form of joint regularization, and motivates a fast optimization algorithm with linear complexity w.r.t. the data size. Extensive experiments on both simulated and real-world datasets validate its efficiency.
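
The abstract does not spell out the exact parameterization, so the following is only a minimal sketch of the structure it describes: a task weight matrix W (features × tasks) factored as W = U V with a small inner rank that couples the tasks, where the first factor U is itself split multiplicatively into a sparse feature-selection matrix and non-negative view-wise importance weights broadcast over each view's features. All names (U, V, S, theta, view_sizes) and the specific broadcasting scheme are illustrative assumptions, not the authors' notation or implementation.

# Illustrative sketch of a multiplicative low-rank decomposition of the task
# weight matrix; the paper's exact formulation may differ.
import numpy as np

rng = np.random.default_rng(0)

view_sizes = [30, 50, 20]          # assumed: number of features per view
d = sum(view_sizes)                # total feature dimension
T, k = 8, 4                        # number of tasks, low-rank dimension (k small)

# Second factor V couples the tasks through a shared low-rank subspace.
V = rng.standard_normal((k, T))

# First factor U is split multiplicatively:
#   S is a sparse matrix selecting topic-specific features (many exact zeros),
S = rng.standard_normal((d, k))
S[rng.random((d, k)) < 0.7] = 0.0

#   theta holds one non-negative importance weight per view, broadcast over
#   all features belonging to that view.
theta = rng.random(len(view_sizes))
theta_per_feature = np.repeat(theta, view_sizes)      # shape (d,)

U = theta_per_feature[:, None] * S                    # multiplicative split
W = U @ V                                             # weight matrix, shape (d, T)

# Per-task predictions for one sample x (concatenated multi-view features).
x = rng.standard_normal(d)
y_hat = x @ W                                         # one score per task
print(W.shape, y_hat.shape)                           # (100, 8) (8,)

In such a parameterization, sparsity in S gives feature selection, the shared V keeps the number of task-specific parameters low, and theta exposes an interpretable weight per view; this is only one plausible reading of the decomposition the abstract outlines.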

Cite

APA

Sun, L., Nguyen, C. H., & Mamitsuka, H. (2019). Multiplicative sparse feature decomposition for efficient multi-view multi-task learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) (pp. 3506–3512). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/486
