A Comparable Study on Model Averaging, Ensembling and Reranking in NMT


Abstract

Neural machine translation has become a benchmark method in machine translation, and many novel structures and methods have been proposed to improve translation quality. However, such models are difficult to train and their parameters are hard to tune. In this paper, we focus on decoding techniques that boost translation performance by utilizing existing models. We address the problem at three levels: the parameter, word, and sentence level, corresponding to checkpoint averaging, model ensembling, and candidate reranking, none of which requires retraining the model. Experimental results show that the proposed decoding approaches significantly improve performance over the baseline model.
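As a concrete illustration of the parameter-level technique, the sketch below shows one common way checkpoint averaging is implemented: the weights of the last few saved checkpoints are averaged element-wise, and the averaged weights are loaded into the model before decoding. This is a minimal sketch assuming PyTorch-style state dicts, not the authors' implementation; the file names and the commented-out model usage are hypothetical.

```python
# Minimal sketch of checkpoint averaging (parameter-level decoding
# technique), assuming each checkpoint is a PyTorch state dict saved
# with torch.save(model.state_dict(), path). Not the paper's code.
import torch

def average_checkpoints(paths):
    """Return the element-wise average of several saved state dicts."""
    avg_state = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg_state is None:
            # Start from a float copy of the first checkpoint.
            avg_state = {k: v.clone().float() for k, v in state.items()}
        else:
            # Accumulate the remaining checkpoints.
            for k, v in state.items():
                avg_state[k] += v.float()
    # Divide by the number of checkpoints to get the mean.
    n = len(paths)
    for k in avg_state:
        avg_state[k] /= n
    return avg_state

# Hypothetical usage: average the last five checkpoints, then load the
# result into a model instance before running beam-search decoding.
# model.load_state_dict(average_checkpoints(
#     [f"checkpoint_{i}.pt" for i in range(46, 51)]))
```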

Citation (APA)

Liu, Y., Zhou, L., Wang, Y., Zhao, Y., Zhang, J., & Zong, C. (2018). A Comparable Study on Model Averaging, Ensembling and Reranking in NMT. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11109 LNAI, pp. 299–308). Springer Verlag. https://doi.org/10.1007/978-3-319-99501-4_26
