This paper describes POSTECH’s submission to the WMT 2019 shared task on Automatic Post-Editing (APE). We propose a new multi-source APE model that extends the Transformer. The main contributions of our study are that we 1) reconstruct the encoder to generate a joint representation of the machine translation output (mt) and its source (src) context, in addition to the conventional src encoding, and 2) propose two types of multi-source attention layers that compute attention between the two encoder outputs and the decoder state. Furthermore, we train our model with varying teacher-forcing ratios to alleviate exposure bias. Finally, we apply ensembling across variants of our model. Experiments on the WMT19 English-German APE dataset show improvements over the baseline in both TER and BLEU: our primary submission achieves -0.73 TER and +1.49 BLEU relative to the baseline and ranks second among all submitted systems.
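As a concrete illustration of the decoder side, below is a minimal PyTorch sketch of one plausible multi-source attention decoder layer. It is not the authors' released code: the paper proposes two types of multi-source attention, and this sketch shows only a stacked arrangement in which the decoder state attends first to the src encoding and then to the joint (mt, src) encoding; all names and dimensions (MultiSourceDecoderLayer, d_model, n_heads) are illustrative assumptions.

import torch
import torch.nn as nn

class MultiSourceDecoderLayer(nn.Module):
    """One decoder layer with two cross-attention sub-layers, one per encoder output."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.src_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)    # over src encoding
        self.joint_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # over joint (mt, src) encoding
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, x, src_enc, joint_enc):
        # Causal self-attention over the post-edit prefix.
        t = x.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1)
        h, _ = self.self_attn(x, x, x, attn_mask=causal, need_weights=False)
        x = self.norms[0](x + h)
        # First multi-source attention: attend to the conventional src encoding.
        h, _ = self.src_attn(x, src_enc, src_enc, need_weights=False)
        x = self.norms[1](x + h)
        # Second multi-source attention: attend to the joint (mt, src) representation.
        h, _ = self.joint_attn(x, joint_enc, joint_enc, need_weights=False)
        x = self.norms[2](x + h)
        # Position-wise feed-forward with residual connection.
        return self.norms[3](x + self.ffn(x))

# Toy shapes: batch 2, post-edit length 7, src length 9, joint (mt, src) length 16.
layer = MultiSourceDecoderLayer()
out = layer(torch.randn(2, 7, 512), torch.randn(2, 9, 512), torch.randn(2, 16, 512))
print(out.shape)  # torch.Size([2, 7, 512])

The paper's second attention type differs in how the two encoder outputs are combined in the decoder; the stacked form is shown here only because it is the simpler sketch.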
Lee, W. K., Shin, J., & Lee, J. H. (2019). Transformer-based automatic post-editing model with joint encoder and multi-source attention of decoder. In Proceedings of the Fourth Conference on Machine Translation (WMT 2019), Volume 3: Shared Task Papers, pp. 112–117. Association for Computational Linguistics. https://doi.org/10.18653/v1/w19-5412