In this paper, we introduce a system built for the Duolingo Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task at the 4th Workshop on Neural Generation and Translation (WNGT 2020). We participated in the English-to-Japanese track with a Transformer model pretrained on the JParaCrawl corpus and fine-tuned in two steps, first on the JESC corpus and then on the (smaller) Duolingo training corpus. First, we find it essential to deliberately expose the model to higher-quality translations more often during training for optimal translation performance. For inference, encouraging a small amount of diversity with Diverse Beam Search to improve translation coverage yields a marginal improvement over regular Beam Search. Finally, an auxiliary filtering model that removes unlikely candidates from the beam improves performance further. We achieve a weighted F1 score of 27.56% on our own test set, outperforming the STAPLE AWS translations baseline score of 4.31%.
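The paper does not include code, but the first idea, sampling higher-quality translations more often during training, can be illustrated with a minimal PyTorch sketch using WeightedRandomSampler. The data layout and the per-pair quality weights below are hypothetical stand-ins, not the authors' actual pipeline; STAPLE does provide learner-response weights that could play this role.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# Hypothetical data: (source, target, quality_weight) tuples, where the
# weight reflects translation quality. Higher-weight pairs are drawn
# more often, so the model is exposed to better translations more frequently.
pairs = [
    ("he eats an apple", "彼はりんごを食べる", 0.9),
    ("he eats an apple", "りんごを彼は食べる", 0.1),
]

weights = torch.tensor([w for _, _, w in pairs], dtype=torch.double)
sampler = WeightedRandomSampler(weights, num_samples=len(pairs), replacement=True)

loader = DataLoader(pairs, batch_size=2, sampler=sampler,
                    collate_fn=lambda batch: batch)

for batch in loader:
    print(batch)  # higher-weight pairs appear more frequently across epochs
```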
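For the inference side, Diverse Beam Search is available off the shelf in Hugging Face transformers via `num_beam_groups` and `diversity_penalty`. The sketch below uses a public Marian English-to-Japanese checkpoint purely as a stand-in; the authors' own fine-tuned Transformer is not publicly released, and the beam sizes and penalty value are illustrative, not the paper's settings.

```python
from transformers import MarianMTModel, MarianTokenizer

# Stand-in checkpoint; the paper fine-tunes its own Transformer on
# JParaCrawl -> JESC -> Duolingo, which is not publicly available.
name = "Helsinki-NLP/opus-mt-en-jap"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

inputs = tokenizer("he eats an apple", return_tensors="pt")

# Diverse Beam Search: the beams are split into groups, and a diversity
# penalty discourages different groups from emitting the same tokens.
# A small penalty encourages "a small amount of diversity", as in the paper.
outputs = model.generate(
    **inputs,
    num_beams=8,
    num_beam_groups=4,
    diversity_penalty=0.5,
    num_return_sequences=8,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```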
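Finally, the candidate-filtering step can be sketched generically: score each beam candidate with an auxiliary model and keep only those above a threshold. The abstract does not specify the filtering model's architecture, so `score_fn` and `threshold` below are hypothetical placeholders rather than the authors' method.

```python
def filter_candidates(source, candidates, score_fn, threshold=0.5):
    """Keep only candidates whose auxiliary-model score clears a threshold.

    `score_fn(source, candidate) -> float` is a hypothetical wrapper around
    whatever filtering model is used; `threshold` is an assumed tunable knob.
    """
    return [c for c in candidates if score_fn(source, c) >= threshold]

# Usage: prune a beam-search candidate list before scoring coverage.
candidates = ["彼はりんごを食べる", "りんごを彼は食べる", "彼はりんごを飲む"]
kept = filter_candidates(
    "he eats an apple", candidates,
    score_fn=lambda s, c: 0.9 if "食べ" in c else 0.2,  # toy scorer
    threshold=0.5,
)
print(kept)  # drops the implausible "drinks an apple" candidate
```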
Yang, M., Liu, Y., & Mayuranath, R. (2020). Training and inference methods for high-coverage neural machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 119–128). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.ngt-1.13