The Stem Cell Hypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders


Abstract

Multi-task learning (MTL) with transformer encoders has emerged as a powerful technique for improving both the accuracy and the efficiency of closely-related tasks, yet it remains an open question whether it performs as well on tasks that are distinct in nature. We first present MTL results on five NLP tasks, POS, NER, DEP, CON, and SRL, and show its deficiencies relative to single-task learning. We then conduct an extensive pruning analysis showing that a certain set of attention heads gets claimed by most tasks during MTL, and that these tasks interfere with one another when fine-tuning those heads for their own objectives. Based on this finding, we propose the Stem Cell Hypothesis: there exist attention heads naturally talented for many tasks, but they cannot be jointly trained to create adequate embeddings for all of those tasks. Finally, we design novel parameter-free probes to justify our hypothesis and, through label analysis, demonstrate how attention heads are transformed across the five tasks during MTL.
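The setup the abstract describes is a single shared encoder whose attention heads are fine-tuned by several task-specific heads at once. Below is a minimal, hypothetical sketch of that arrangement using a Hugging Face transformer encoder; the task names, label counts, and base model are illustrative assumptions, not the authors' exact configuration.

# Hypothetical sketch: one shared transformer encoder, one linear head per task.
# All tasks update the same attention heads, which is the source of the
# interference the paper analyzes.
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskTagger(nn.Module):
    def __init__(self, encoder_name="bert-base-cased", task_labels=None):
        super().__init__()
        # Shared encoder fine-tuned jointly by every task.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Independent lightweight head per task (e.g. POS, NER, ...).
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n_labels)
            for task, n_labels in (task_labels or {}).items()
        })

    def forward(self, task, input_ids, attention_mask):
        # Every task reads the same contextual embeddings; only the head differs.
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.heads[task](states)

# Joint training would alternate batches across tasks, so each task's loss
# back-propagates into the shared attention heads.
model = MultiTaskTagger(task_labels={"pos": 17, "ner": 9})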

Citation (APA)

He, H., & Choi, J. D. (2021). The Stem Cell Hypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 5555–5577). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.451
