NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better

22 citations · 88 Mendeley readers

Abstract

Effectively finetuning pretrained language models (PLMs) is critical for their success in downstream tasks. However, PLMs risk overfitting the pretraining tasks and data, which usually differ from the target downstream tasks. This gap can be difficult for existing PLM finetuning methods to overcome and may lead to suboptimal performance. In this paper, we propose a very simple yet effective method named NoisyTune that helps finetune PLMs better on downstream tasks by adding some noise to the parameters of PLMs before finetuning. More specifically, we propose a matrix-wise perturbation method that adds different uniform noise to different parameter matrices based on their standard deviations. In this way, the varied characteristics of different types of parameters in PLMs can be taken into account. Extensive experiments on both the GLUE English benchmark and the XTREME multilingual benchmark show that NoisyTune can consistently improve the finetuning of different PLMs on different downstream tasks.
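As a rough illustration of the matrix-wise perturbation described in the abstract, the sketch below adds uniform noise to each parameter matrix of a PyTorch model, scaled by that matrix's standard deviation, before finetuning begins. The function name noisytune_perturb, the noise_intensity hyperparameter, and its default value are illustrative assumptions for this sketch, not details taken verbatim from the paper.

```python
import torch

def noisytune_perturb(model, noise_intensity=0.15):
    """Perturb a pretrained model's parameters before finetuning.

    For each parameter matrix, uniform noise is drawn and scaled by that
    matrix's own standard deviation, so larger-variance matrices receive
    proportionally larger perturbations. The name `noise_intensity` and
    its default are illustrative, not values prescribed by the paper.
    """
    with torch.no_grad():
        for name, param in model.named_parameters():
            # Uniform noise in [-0.5, 0.5), scaled by this matrix's std
            # and the global noise intensity.
            noise = (torch.rand_like(param) - 0.5) * noise_intensity * param.std()
            param.add_(noise)
    return model
```

After this one-time perturbation, the model is finetuned on the downstream task exactly as usual; only the starting point of finetuning changes.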

Citation (APA)
Wu, C., Wu, F., Qi, T., & Huang, Y. (2022). NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 680–685). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.76
