Maximal multiverse learning for promoting cross-task generalization of fine-tuned language models

Abstract

Language modeling with BERT consists of two phases: (i) unsupervised pre-training on unlabeled text, and (ii) fine-tuning for a specific supervised task. We present a method that leverages the second phase to its fullest, by applying an extensive number of parallel classifier heads, which are enforced to be orthogonal, while adaptively eliminating the weaker heads during training. We conduct an extensive inter- and intra-dataset evaluation, showing that our method improves the generalization ability of BERT, sometimes leading to a +9% gain in accuracy. These results highlight the importance of a proper fine-tuning procedure, especially for relatively small datasets. Our code is attached as supplementary material.
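To make the described mechanism concrete, the sketch below (not the authors' code) illustrates the idea in PyTorch: many parallel classifier heads over a shared encoder representation, a penalty that pushes the heads' weight matrices toward mutual orthogonality, and periodic pruning of the weakest heads. Class and method names (MultiverseHeads, orthogonality_penalty, prune_weakest) are illustrative assumptions, not identifiers from the paper.

import torch
import torch.nn as nn


class MultiverseHeads(nn.Module):
    """K parallel linear classifier heads over a shared [CLS] embedding."""

    def __init__(self, hidden_dim: int, num_classes: int, num_heads: int):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, num_classes) for _ in range(num_heads)
        )

    def forward(self, cls_embedding: torch.Tensor) -> torch.Tensor:
        # Logits of shape (num_heads, batch, num_classes).
        return torch.stack([head(cls_embedding) for head in self.heads])

    def orthogonality_penalty(self) -> torch.Tensor:
        # Encourage the heads' flattened weight vectors to be mutually
        # orthogonal by penalizing off-diagonal entries of the Gram matrix.
        w = torch.stack([h.weight.flatten() for h in self.heads])  # (K, d)
        w = nn.functional.normalize(w, dim=1)
        gram = w @ w.t()
        off_diag = gram - torch.eye(len(self.heads), device=gram.device)
        return off_diag.pow(2).sum()

    def prune_weakest(self, per_head_losses: torch.Tensor, keep: int) -> None:
        # Adaptively drop the heads with the highest running loss, keeping
        # only the `keep` strongest heads for subsequent training steps.
        order = torch.argsort(per_head_losses)[:keep]
        self.heads = nn.ModuleList(self.heads[i] for i in order.tolist())

In such a setup, the training objective would presumably combine the per-head classification losses with a weighted orthogonality term (e.g., loss = per_head_losses.sum() + lam * heads.orthogonality_penalty()); the exact loss weighting and pruning schedule used by the authors are given in the paper itself.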

Citation (APA)

Malkiel, I., & Wolf, L. (2021). Maximal multiverse learning for promoting cross-task generalization of fine-tuned language models. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 187–199). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.14
