RealFormer: Transformer Likes Residual Attention

34 citations · 226 Mendeley readers

Abstract

Transformer is the backbone of modern NLP models. In this paper, we propose RealFormer, a simple and generic technique to create Residual Attention Layer Transformer networks that significantly outperform the canonical Transformer and its variants (BERT, ETC, etc.) on a wide spectrum of tasks including Masked Language Modeling, GLUE, SQuAD, Neural Machine Translation, WikiHop, HotpotQA, Natural Questions, and OpenKP. We also observe empirically that RealFormer stabilizes training and leads to models with sparser attention. Source code and pre-trained checkpoints for RealFormer can be found at https://github.com/google-research/google-research/tree/master/realformer.
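
To make the core idea concrete, below is a minimal NumPy sketch of a single residual-attention head. It assumes the pattern the paper describes, in which each layer adds the previous layer's pre-softmax attention scores to its own scores before applying softmax; the function and variable names here are illustrative and are not taken from the released code.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def residual_attention(q, k, v, prev_scores=None):
        # Sketch of one residual-attention head (assumed mechanism).
        # q, k, v: (seq_len, d_head); prev_scores: (seq_len, seq_len) or None.
        d = q.shape[-1]
        scores = q @ k.T / np.sqrt(d)      # raw attention logits for this layer
        if prev_scores is not None:
            scores = scores + prev_scores  # residual connection on the attention scores
        weights = softmax(scores, axis=-1)
        return weights @ v, scores         # return scores so the next layer can reuse them

    # Example: chain two layers so the second reuses the first layer's scores.
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((5, 8)) for _ in range(3))
    out0, s0 = residual_attention(q, k, v)
    out1, s1 = residual_attention(q, k, v, prev_scores=s0)

In this sketch the residual path touches only the attention logits, leaving the rest of the Transformer layer unchanged, which is why the technique can be dropped into existing architectures such as BERT or ETC.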

Cite (APA)

He, R., Ravula, A., Kanagal, B., & Ainslie, J. (2021). RealFormer: Transformer Likes Residual Attention. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 929–943). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.81
