Multi-source neural automatic post-editing: FBK's participation in the WMT 2017 APE shared task

32 Citations · 70 Readers

Abstract

Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the dependency of MT errors on the source sentence can be exploited by jointly learning from source and target information. By integrating this notion into a neural approach to the problem, we present the multi-source neural machine translation (NMT) system submitted by FBK to the WMT 2017 APE shared task. Our system implements multi-source NMT with a weighted ensemble of 8 models. The n-best hypotheses produced by this ensemble are further re-ranked using features based on the edit distance between the original MT output and each APE hypothesis, as well as other statistical models (an n-gram language model and an operation sequence model). This solution resulted in the best system submission for this round of the APE shared task for both the en-de and de-en language directions. For the former, our primary submission improves over the MT baseline by up to -4.9 TER and +7.6 BLEU points. For the latter, where the higher quality of the original MT output leaves less room for improvement, the gains are smaller but still significant (-0.25 TER and +0.3 BLEU).
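The re-ranking step summarized above lends itself to a compact illustration. The following is a minimal Python sketch, not the authors' implementation: it re-scores (hypothesis, NMT score) pairs with a weighted combination of the NMT score, a normalized word-level edit-distance penalty with respect to the original MT output, and an external language-model score. The `lm_score` callback, the feature weights, and the length normalization are assumptions for illustration; the actual system additionally uses an operation sequence model and an upstream weighted ensemble of 8 multi-source NMT models.

```python
def word_edit_distance(a, b):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]


def rerank(mt_output, nbest, lm_score, w_nmt=1.0, w_edit=0.5, w_lm=0.3):
    """Pick the best APE hypothesis from an n-best list.

    nbest: list of (hypothesis_string, nmt_score) pairs.
    lm_score: callable returning a language-model score for a string
              (placeholder for an external LM; weights are illustrative).
    """
    mt_tokens = mt_output.split()
    scored = []
    for hyp, nmt_score in nbest:
        hyp_tokens = hyp.split()
        # Normalized edit-distance penalty w.r.t. the original MT output.
        edit_penalty = word_edit_distance(mt_tokens, hyp_tokens) / max(len(mt_tokens), 1)
        total = w_nmt * nmt_score - w_edit * edit_penalty + w_lm * lm_score(hyp)
        scored.append((total, hyp))
    return max(scored)[1]


if __name__ == "__main__":
    # Toy usage with a dummy LM that scores every hypothesis equally.
    mt = "this is a example output"
    nbest = [("this is an example output", -1.2),
             ("this is the example of output", -1.1)]
    print(rerank(mt, nbest, lm_score=lambda s: 0.0))
```

In this sketch the edit-distance term rewards hypotheses that stay close to the original MT output, reflecting the intuition that APE should make conservative corrections rather than re-translate the sentence.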

Cite

APA

Chatterjee, R., Farajian, M. A., Negri, M., Turchi, M., Srivastava, A., & Pal, S. (2017). Multi-source neural automatic post-editing: FBK's participation in the WMT 2017 APE shared task. In WMT 2017 - 2nd Conference on Machine Translation, Proceedings (pp. 630–638). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-4773
