Fully quantized transformer for machine translation

Citations of this article: 31
Mendeley readers: 104

Abstract

State-of-the-art neural machine translation methods employ massive numbers of parameters. Drastically reducing the computational cost of such methods without affecting performance has so far been unsuccessful. To this end, we propose FullyQT: an all-inclusive quantization strategy for the Transformer. To the best of our knowledge, we are the first to show that it is possible to avoid any loss in translation quality with a fully quantized Transformer. Indeed, compared to full-precision, our 8-bit models score greater or equal BLEU on most tasks. Compared to all previously proposed methods, we achieve state-of-the-art quantization results.
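The abstract does not spell out the quantization scheme itself. As a rough illustration only, the sketch below applies standard uniform min-max "fake" quantization (quantize then dequantize) to a weight tensor at 8 bits; the function name fake_quantize and the min-max calibration are assumptions made for this example and are not necessarily the exact calibration used by FullyQT, which quantizes all Transformer weights and activations.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Uniformly quantize x to num_bits and map back to float (simulated quantization).

    Uses the tensor's min-max range to set the scale, as in standard
    uniform quantization; illustrative only, not the paper's exact scheme.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) if x_max > x_min else 1.0
    q = np.clip(np.round((x - x_min) / scale), qmin, qmax)  # integer grid values
    return q * scale + x_min  # dequantized values, carried in float

# Example: quantize a random weight matrix and measure the error introduced.
w = np.random.randn(512, 512).astype(np.float32)
w_q = fake_quantize(w, num_bits=8)
print("max abs error:", np.abs(w - w_q).max())
```

With 8 bits and a min-max range, the rounding error per element is bounded by half the quantization step, which is why such low-precision models can remain close to full-precision BLEU in practice.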

Cite

CITATION STYLE

APA

Prato, G., Charlaix, E., & Rezagholizadeh, M. (2020). Fully quantized transformer for machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1–14). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.1
