UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost

17 citations · 91 Mendeley readers

Abstract

The Transformer architecture has achieved great success on a wide range of natural language processing tasks. Its over-parameterization has motivated many works that aim to alleviate overfitting and obtain better performance. Through our explorations, we find that simple techniques such as dropout can, when carefully designed, greatly boost model performance. In this paper, we therefore integrate different dropout techniques into the training of Transformer models. Specifically, we propose an approach named UniDrop that unites three dropout techniques, from fine-grained to coarse-grained: feature dropout, structure dropout, and data dropout. Theoretically, we show that these three dropouts play different roles from a regularization perspective. Empirically, we conduct experiments on both neural machine translation and text classification benchmarks. Extensive results show that Transformer with UniDrop achieves around 1.5 BLEU improvement on the IWSLT14 translation tasks, and better classification accuracy even when the strong pre-trained RoBERTa model is used as the backbone.
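
As a rough illustration of the three granularities the abstract describes, the sketch below combines element-wise feature dropout, LayerDrop-style structure dropout, and token-level data dropout in PyTorch. It is not the authors' released code; the layer sizes, dropout rates, and the token-masking form of data dropout are assumptions for illustration only.

```python
# Minimal sketch of the three dropout granularities that UniDrop combines.
# All hyperparameters and placements are illustrative assumptions.
import torch
import torch.nn as nn


class UniDropBlock(nn.Module):
    """One Transformer encoder layer with feature- and structure-level dropout."""

    def __init__(self, d_model=512, nhead=8, p_feature=0.1, p_layer=0.2):
        super().__init__()
        # Feature dropout: element-wise dropout inside attention/FFN sublayers.
        self.layer = nn.TransformerEncoderLayer(
            d_model, nhead, dropout=p_feature, batch_first=True
        )
        # Structure dropout: probability of skipping the whole layer (LayerDrop-style).
        self.p_layer = p_layer

    def forward(self, x):
        if self.training and torch.rand(()) < self.p_layer:
            return x  # structure dropout: the entire layer is dropped for this batch
        return self.layer(x)


def data_dropout(token_ids, p_token=0.1, pad_id=0):
    """Data dropout: randomly mask out input tokens during training (one possible form)."""
    mask = torch.rand(token_ids.shape) < p_token
    return token_ids.masked_fill(mask, pad_id)


# Usage: apply data dropout to the input batch, then run it through a layer
# that carries feature dropout and structure dropout.
tokens = torch.randint(1, 1000, (2, 16))  # toy batch of token ids
emb = nn.Embedding(1000, 512)
block = UniDropBlock()
out = block(emb(data_dropout(tokens)))
print(out.shape)  # torch.Size([2, 16, 512])
```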

Cite

APA

Wu, Z., Wu, L., Meng, Q., Xia, Y., Xie, S., Qin, T., … Liu, T. Y. (2021). UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 3865–3878). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.302
