Task Compass: Scaling Multi-task Pre-training with Task Prefix


Abstract

Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models. Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks. To tackle this challenge, we propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks. We conduct extensive experiments on 40 datasets, which show that our model can not only serve as a strong foundation backbone for a wide range of tasks but also be used as a probing tool for analyzing task relationships. The task relationships reflected by the prefixes align with transfer learning performance between tasks. They also suggest directions for data augmentation with complementary tasks, which help our model achieve human-parity results on commonsense reasoning leaderboards. Code is available at https://github.com/cooelf/CompassMTL.
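
Illustrative example

As a rough illustration of the task prefix idea, the Python sketch below tags each training example with a prefix identifying its source task before examples from different tasks are mixed into shared pre-training batches. This is a minimal sketch under our own assumptions: the task names, prefix format, and helper functions are hypothetical and are not taken from the CompassMTL repository.

# Minimal sketch of task-prefix-guided multi-task batching (illustrative only;
# task names, prefix format, and helpers are hypothetical, not CompassMTL code).
import random

# Hypothetical pool of supervised tasks, each with its own labeled examples.
TASK_DATA = {
    "nli": [("A man is sleeping. [SEP] A person rests.", "entailment")],
    "qa": [("What is the capital of France? [SEP] Paris is the capital.", "Paris")],
    "sentiment": [("The movie was wonderful.", "positive")],
}

def with_task_prefix(task_name, text):
    """Prepend a task prefix so the model can condition on the task identity."""
    return f"[{task_name}] {text}"

def sample_multitask_batch(batch_size, rng=random):
    """Mix examples across tasks into one batch, each marked by its prefix."""
    batch = []
    for _ in range(batch_size):
        task = rng.choice(list(TASK_DATA))
        text, label = rng.choice(TASK_DATA[task])
        batch.append((with_task_prefix(task, text), label))
    return batch

if __name__ == "__main__":
    for inputs, label in sample_multitask_batch(4):
        print(inputs, "->", label)

Conditioning on a prefix such as "[nli]" lets a single shared model tell which supervised task an example comes from, which is what makes the learned prefixes usable as a probe of task relationships.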

Citation (APA)

Zhang, Z., Wang, S., Xu, Y., Fang, Y., Yu, W., Liu, Y., … Zeng, M. (2022). Task Compass: Scaling Multi-task Pre-training with Task Prefix. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5700–5714). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.192
