Task’s Choice: Pruning-Based Feature Sharing (PBFS) for Multi-Task Learning

Abstract

In most existing multi-task learning (MTL) models, the information shared across tasks is learned by sharing parameters across hidden layers, as in hard sharing, soft sharing, and hierarchical sharing. One promising approach introduces model pruning into this shared learning, as in sparse sharing, which is regarded as highly effective for knowledge transfer. However, these methods perform inefficiently on conflicting tasks, learn tasks' private information inadequately, or suffer from negative transfer. In this paper, we propose a multi-task learning model, Pruning-Based Feature Sharing (PBFS), that merges a soft parameter-sharing structure with model pruning and adds a prunable shared network among the task-specific subnets. In this way, each task can select parameters from the shared subnet according to its own requirements. Experiments are conducted on three public benchmark datasets and one synthetic dataset, and the impact of subnet sparsity and task correlation on model performance is analyzed. Results show that the proposed model's information-sharing strategy aids transfer learning and outperforms several comparison models.
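To make the sharing mechanism concrete, the sketch below illustrates the kind of per-task parameter selection the abstract describes: a shared layer whose weights each task masks via pruning, combined with task-specific private subnets. This is an illustrative reconstruction in PyTorch, not the authors' released implementation; the magnitude-pruning criterion, the additive combination of shared and private branches, and all names (MaskedSharedLayer, PBFSNet, prune) are assumptions.

# Minimal sketch of PBFS-style sharing, assuming PyTorch. Illustrative only:
# the pruning criterion (weight magnitude) and all class/function names are
# assumptions, not the paper's released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedSharedLayer(nn.Module):
    """A shared linear layer; each task selects a subset of its parameters
    through a private binary mask (parameter sharing combined with pruning)."""

    def __init__(self, in_dim, out_dim, num_tasks):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        # One frozen binary mask per task; all start fully dense (all ones).
        self.masks = nn.ParameterList(
            [nn.Parameter(torch.ones(out_dim, in_dim), requires_grad=False)
             for _ in range(num_tasks)]
        )

    def prune(self, task, sparsity):
        """Assumed criterion: keep the largest-magnitude shared weights for
        this task and zero out the rest of its mask."""
        with torch.no_grad():
            scores = self.weight.abs().flatten()
            k = max(int(scores.numel() * (1.0 - sparsity)), 1)
            threshold = torch.topk(scores, k).values.min()
            self.masks[task].copy_((self.weight.abs() >= threshold).float())

    def forward(self, x, task):
        # Each task sees only its selected subset of the shared weights.
        return F.linear(x, self.weight * self.masks[task], self.bias)


class PBFSNet(nn.Module):
    """Task-specific (private) subnets plus one prunable shared subnet."""

    def __init__(self, in_dim=32, hidden=64, num_tasks=2):
        super().__init__()
        self.shared = MaskedSharedLayer(in_dim, hidden, num_tasks)
        self.private = nn.ModuleList(
            [nn.Linear(in_dim, hidden) for _ in range(num_tasks)])
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(num_tasks)])

    def forward(self, x, task):
        # Combine the task's masked view of the shared subnet with its
        # private branch, then apply the task-specific head.
        h = torch.relu(self.shared(x, task) + self.private[task](x))
        return self.heads[task](h)


if __name__ == "__main__":
    net = PBFSNet()
    net.shared.prune(task=0, sparsity=0.5)  # task 0 keeps 50% of shared weights
    out = net(torch.randn(4, 32), task=0)
    print(out.shape)  # torch.Size([4, 1])

In this sketch each task's binary mask is frozen (requires_grad=False), so pruning decisions persist while the underlying shared weights continue to be trained through every task's masked view of them; varying the sparsity argument per task corresponds to the subnet-sparsity analysis the abstract mentions.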

Citation (APA)

Chen, Y., Yu, J., Zhao, Y., Chen, J., & Du, X. (2022). Task’s Choice: Pruning-Based Feature Sharing (PBFS) for Multi-Task Learning. Entropy, 24(3). https://doi.org/10.3390/e24030432
