Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers

Abstract

Transformers have shown improved performance compared to previous sequence-processing architectures such as RNNs. Despite their sizeable performance gains, however, as recently suggested, these models are computationally expensive to train and have a high parameter budget. In light of this, we explore parameter-sharing methods in Transformers with a specific focus on generative models. We perform an analysis of different parameter sharing/reduction methods and develop the Subformer. Our model combines sandwich-style parameter sharing, which overcomes naive cross-layer parameter sharing in generative models, and self-attentive embedding factorization (SAFE). Experiments on machine translation, abstractive summarization and language modeling show that the Subformer can outperform the Transformer even when using significantly fewer parameters.
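The abstract describes sandwich-style parameter sharing only at a high level; the sketch below is one plausible reading, in which the first and last layers keep unique weights while every middle layer reuses a single shared layer. The layer counts, dimensions, and the class name SandwichSharedEncoder are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SandwichSharedEncoder(nn.Module):
    """Minimal sketch of sandwich-style parameter sharing (assumed reading of
    the abstract): the first and last layers keep their own weights, while all
    middle layers reuse one shared layer, shrinking the parameter budget."""

    def __init__(self, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        def make_layer():
            return nn.TransformerEncoderLayer(
                d_model=d_model, nhead=nhead, batch_first=True)
        self.first = make_layer()    # unique parameters
        self.shared = make_layer()   # reused by every middle layer
        self.last = make_layer()     # unique parameters
        self.num_middle = num_layers - 2

    def forward(self, x):
        x = self.first(x)
        for _ in range(self.num_middle):  # same weights applied repeatedly
            x = self.shared(x)
        return self.last(x)

# Usage: the parameter count stays close to that of 3 layers
# regardless of num_layers, since the middle layers share weights.
model = SandwichSharedEncoder(num_layers=6)
out = model(torch.randn(2, 10, 512))
print(out.shape)  # torch.Size([2, 10, 512])
```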

Cite (APA)

Reid, M., Marrese-Taylor, E., & Matsuo, Y. (2021). Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4081–4090). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.344
