Continuous Learning from Human Post-Edits for Neural Machine Translation

  • Turchi M
  • Negri M
  • Farajian M
  • Federico M

Abstract

Improving machine translation (MT) by learning from human post-edits is a powerful solution that is still unexplored in the neural machine translation (NMT) framework. In this scenario, too, effective techniques for continuously tuning an existing model on a stream of manual corrections would have several advantages over current batch methods: first, they would make it possible to adapt systems at run time to new users and domains; second, they would do so at a lower computational cost than retraining the NMT system from scratch or in batch mode. To attack the problem, we explore several online learning strategies to stepwise fine-tune an existing model on the incoming post-edits. Our evaluation on data from two language pairs and different target domains shows significant improvements over the use of static models.
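The core idea of the abstract — translate a sentence, receive a human post-edit, and immediately take a learning step on that single pair before the next sentence arrives — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: `ToyNMT`, its phrase-table "parameters", and `fine_tune_step` are hypothetical stand-ins for a real NMT model and its gradient update, not the authors' implementation.

```python
class ToyNMT:
    """Stand-in for an NMT model: a word-substitution table whose
    entries play the role of learnable parameters (assumption for
    illustration; a real system would update network weights)."""

    def __init__(self, base_table):
        self.table = dict(base_table)

    def translate(self, src):
        # Translate word by word, passing unknown words through.
        return " ".join(self.table.get(w, w) for w in src.split())

    def fine_tune_step(self, src, post_edit):
        # One online update from a single (source, post-edit) pair:
        # a crude analogue of a gradient step on that sentence.
        for s, t in zip(src.split(), post_edit.split()):
            self.table[s] = t


def online_loop(model, stream):
    """stream: iterable of (source, human_post_edit) pairs, processed
    in order as they would arrive from a post-editor."""
    hypotheses = []
    for src, post_edit in stream:
        hypotheses.append(model.translate(src))  # 1. translate
        model.fine_tune_step(src, post_edit)     # 2. learn from the correction
    return hypotheses
```

The point of the sketch is the loop structure, not the toy model: each correction is consumed once, immediately, so a mistake made on one sentence is already fixed when the same phenomenon recurs later in the stream — the run-time adaptation the abstract contrasts with batch retraining.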

Citation (APA)

Turchi, M., Negri, M., Farajian, M. A., & Federico, M. (2017). Continuous Learning from Human Post-Edits for Neural Machine Translation. The Prague Bulletin of Mathematical Linguistics, 108(1), 233–244. https://doi.org/10.1515/pralin-2017-0023
