Balancing cost and benefit with tied-multi transformers

Abstract

We propose a novel procedure for training multiple Transformers with tied parameters which compresses multiple models into one, enabling the dynamic choice of the number of encoder and decoder layers during decoding. When training an encoder-decoder model, typically, the output of the last layer of the N-layer encoder is fed to the M-layer decoder, and the output of the last decoder layer is used to compute the loss. Instead, our method computes a single loss consisting of N × M losses, where each loss is computed from the output of one of the M decoder layers connected to one of the N encoder layers. Such a model subsumes N × M models with different numbers of encoder and decoder layers, and can be used for decoding with fewer than the maximum number of encoder and decoder layers. Given our flexible tied model, we also address the a priori selection of the number of encoder and decoder layers for faster decoding, and explore recurrent stacking of layers and knowledge distillation for model compression. We present a cost-benefit analysis of applying the proposed approaches to neural machine translation and show that they reduce decoding costs while preserving translation quality.
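As an illustration of the core idea, the following is a minimal PyTorch sketch of the N × M loss: the decoder is re-run on the output of every encoder depth, and a cross-entropy loss is taken after every decoder layer, so every (n, m) sub-model is trained with tied parameters. The class name, hyperparameters, and the use of torch.nn.TransformerEncoderLayer/TransformerDecoderLayer are assumptions made for this sketch rather than the authors' implementation, and causal and padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedMultiTransformer(nn.Module):
    """Illustrative tied-multi Transformer: one stack of N encoder and M
    decoder layers whose intermediate outputs are all trained, so any
    (n, m) pair of depths can be used at decoding time."""

    def __init__(self, vocab_size, d_model=512, nhead=8, num_enc=6, num_dec=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.enc_layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_enc))
        self.dec_layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_dec))
        # Output projection shared by every decoder depth.
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_in_ids, tgt_out_ids):
        src = self.embed(src_ids)
        tgt = self.embed(tgt_in_ids)

        # Keep the encoder output after each of the N layers.
        enc_outs = []
        h = src
        for enc_layer in self.enc_layers:
            h = enc_layer(h)
            enc_outs.append(h)

        # For every encoder depth n, re-run the decoder and take a loss
        # after each of the M decoder layers: N x M losses in total.
        total_loss = 0.0
        for memory in enc_outs:
            d = tgt
            for dec_layer in self.dec_layers:
                d = dec_layer(d, memory)  # masks omitted for brevity
                logits = self.proj(d)
                total_loss = total_loss + F.cross_entropy(
                    logits.reshape(-1, logits.size(-1)),
                    tgt_out_ids.reshape(-1))
        return total_loss
```

At decoding time one would simply stop after n encoder layers and m decoder layers, since every such sub-model has been trained; the sketch above covers only the training loss.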

Cite

APA

Dabre, R., Rubino, R., & Fujita, A. (2020). Balancing cost and benefit with tied-multi transformers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 24–34). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.ngt-1.3
