Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation

58 citations · 71 Mendeley readers

Abstract

Non-autoregressive translation (NAT) models remove the dependence on previous target tokens and generate all target tokens in parallel, resulting in significant inference speedup but at the cost of inferior translation accuracy compared to autoregressive translation (AT) models. Considering that AT models have higher accuracy and are easier to train than NAT models, and both share the same model configurations, a natural idea for improving the accuracy of NAT models is to transfer a well-trained AT model to an NAT model through fine-tuning. However, since AT and NAT models differ greatly in training strategy, straightforward fine-tuning does not work well. In this work, we introduce curriculum learning into fine-tuning for NAT. Specifically, we design a curriculum in the fine-tuning process to progressively switch the training from autoregressive generation to non-autoregressive generation. Experiments on four benchmark translation datasets show that the proposed method achieves a substantial improvement (more than 1 BLEU point) over previous NAT baselines in translation accuracy, and greatly speeds up inference (by more than 10 times) over AT baselines.
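The abstract describes a curriculum that gradually shifts decoder training from autoregressive-style inputs to fully parallel NAT-style inputs. Below is a minimal, hypothetical sketch of that idea in Python: a pacing function controls the fraction of decoder positions that use NAT-style inputs instead of gold previous target tokens. The linear schedule, function names, and input representations are illustrative assumptions, not the paper's exact formulation.

```python
import random


def curriculum_ratio(step, total_steps):
    """Fraction of decoder positions switched from AT-style inputs
    (gold previous tokens, teacher forcing) to NAT-style inputs.
    A linear pacing schedule is assumed here for illustration."""
    return min(1.0, step / float(total_steps))


def make_decoder_inputs(at_inputs, nat_inputs, step, total_steps, rng=random):
    """Mix AT-style and NAT-style decoder inputs position by position.

    at_inputs:  previous target tokens (autoregressive teacher forcing)
    nat_inputs: NAT decoder inputs of the same length
                (e.g. copied / uniformly mapped source tokens)
    """
    p = curriculum_ratio(step, total_steps)
    return [
        nat_tok if rng.random() < p else at_tok
        for at_tok, nat_tok in zip(at_inputs, nat_inputs)
    ]


# Toy usage: early in fine-tuning most positions keep AT-style inputs;
# by the end of the schedule all positions use NAT-style inputs.
if __name__ == "__main__":
    at = ["<bos>", "y1", "y2", "y3"]
    nat = ["x1", "x2", "x3", "x4"]
    for step in (0, 5000, 10000):
        print(step, make_decoder_inputs(at, nat, step, total_steps=10000))
```

With such a schedule, the model starts close to the AT training regime it was transferred from and only gradually faces the harder fully parallel generation task.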

Citation (APA)

Guo, J., Tan, X., Xu, L., Qin, T., Chen, E., & Liu, T. Y. (2020). Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 7839–7846). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6289
