ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning


Abstract

We propose a new paradigm to continually evolve pretrained models, denoted ColD Fusion. It provides the benefits of multitask learning but leverages distributed computation with limited communication and eliminates the need for shared data. Consequently, ColD Fusion can give rise to a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based upon. We show that ColD Fusion yields benefits comparable to multitask training by producing a model that (a) attains strong performance on all of the datasets it was trained on; and (b) is a better starting point for finetuning on unseen datasets. We show that ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, a ColD Fusion-based model outperforms RoBERTa by 2.33 points on average without any changes to the architecture.
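The abstract describes an iterative, communication-light loop: contributors finetune the current shared model on their own data, and only the resulting weights are sent back and fused into the next shared model. Below is a minimal sketch of one such round in Python/PyTorch. The simple parameter-averaging fusion, the function names (`cold_fusion_round`, `fuse_state_dicts`), and the `contributor_finetune_fns` callables are illustrative assumptions for this sketch, not details taken from the abstract.

```python
# Minimal sketch of one ColD Fusion-style round, assuming the fusion step is a
# plain parameter average over contributor-finetuned copies of the shared model.
# `contributor_finetune_fns` is a hypothetical list of callables, one per
# contributor, that finetune a model on that contributor's private data;
# only the resulting weights are communicated back, never the raw data.
import copy
import torch
from transformers import AutoModel


def fuse_state_dicts(state_dicts):
    """Average floating-point parameters across the finetuned models."""
    fused = copy.deepcopy(state_dicts[0])
    for name, tensor in fused.items():
        if torch.is_floating_point(tensor):
            fused[name] = torch.stack(
                [sd[name] for sd in state_dicts], dim=0
            ).mean(dim=0)
    return fused


def cold_fusion_round(base_model_name, contributor_finetune_fns):
    """One collaborative round: each contributor finetunes the current base
    model locally, then the weights are fused into the next shared model."""
    finetuned = []
    for finetune in contributor_finetune_fns:
        model = AutoModel.from_pretrained(base_model_name)
        finetune(model)  # runs on the contributor's side, on private data
        finetuned.append(
            {k: v.detach().cpu() for k, v in model.state_dict().items()}
        )

    next_base = AutoModel.from_pretrained(base_model_name)
    next_base.load_state_dict(fuse_state_dicts(finetuned))
    return next_base  # starting point for the next round
```

Repeating such rounds, starting for example from a RoBERTa checkpoint, mirrors the synergistic loop described above: each generation of finetuned models is recycled into a stronger shared base model for the next round.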

Citation (APA)

Don-Yehiya, S., Venezian, E., Raffel, C., Slonim, N., Katz, Y., & Choshen, L. (2023). ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 788–806). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.46
