Towards reasonably-sized character-level transformer NMT by finetuning subword systems

Citations: 11
Readers (Mendeley): 83

Abstract

Applying the Transformer architecture at the character level usually requires very deep architectures that are difficult and slow to train. These problems can be partially overcome by incorporating token segmentation into the model. We show that by first training a subword model and then finetuning it on characters, we can obtain a neural machine translation model that works at the character level without requiring token segmentation. We use only the vanilla 6-layer Transformer Base architecture. Our character-level models better capture morphological phenomena and show more robustness to noise, at the expense of somewhat worse overall translation quality. Our study is a significant step towards high-performance, easy-to-train character-based models that are not extremely large.
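The central idea of the abstract is that a Transformer trained on subword units can be fine-tuned on character-segmented data, since single characters typically already appear as pieces in a subword vocabulary and therefore reuse the existing embedding table. The sketch below is not the authors' code; it is a minimal illustration under the assumption of a SentencePiece-style piece table with the word-boundary marker "▁", with all names (characterize, toy_vocab) chosen here for illustration only.

```python
# Minimal sketch (assumptions noted above, not the authors' implementation):
# re-segment text to the character level while reusing the ids of an existing
# subword vocabulary, so a subword-trained NMT model can be fine-tuned on
# characters without changing its embedding matrix.

from typing import Dict, List

BOUNDARY = "\u2581"  # "▁", SentencePiece-style word-boundary marker


def characterize(sentence: str, vocab: Dict[str, int], unk_id: int) -> List[int]:
    """Split a sentence into characters and map them to existing subword ids.

    The first character of each word keeps the boundary marker so the
    segmentation stays reversible; characters missing from the subword
    vocabulary fall back to the unknown id.
    """
    ids: List[int] = []
    for word in sentence.strip().split():
        pieces = [BOUNDARY + word[0]] + list(word[1:])
        for piece in pieces:
            if piece in vocab:
                ids.append(vocab[piece])
            elif piece.startswith(BOUNDARY):
                # Marked character not in the vocabulary: emit the bare
                # boundary piece followed by the character itself.
                ids.append(vocab.get(BOUNDARY, unk_id))
                ids.append(vocab.get(piece[len(BOUNDARY):], unk_id))
            else:
                ids.append(unk_id)
    return ids


if __name__ == "__main__":
    # Toy vocabulary standing in for a trained subword model's piece table.
    toy_vocab = {"\u2581": 0, "\u2581t": 1, "t": 2, "o": 3, "y": 4, "e": 5, "x": 6}
    print(characterize("toy text", toy_vocab, unk_id=99))
```

In this reading, fine-tuning then amounts to continuing training on data segmented this way, with no change to the 6-layer Transformer Base architecture itself.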

Cite (APA)

Libovický, J., & Fraser, A. (2020). Towards reasonably-sized character-level transformer NMT by finetuning subword systems. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 2572–2579). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.203
