Automatically extracting challenge sets for non-local phenomena in neural machine translation


Abstract

We show that the state-of-the-art Transformer MT model is not biased towards monotonic reordering (unlike previous recurrent neural network models), but that, nevertheless, long-distance dependencies remain a challenge for the model. Since most dependencies are short-distance, common evaluation metrics are little influenced by how well systems perform on them. We therefore propose an automatic approach for extracting challenge sets replete with long-distance dependencies, and argue that evaluation using this methodology provides a complementary perspective on system performance. To support our claim, we compile challenge sets for English-German and German-English, which are much larger than any previously released challenge set for MT. The extracted sets are large enough to allow reliable automatic evaluation, which makes the proposed approach a scalable and practical solution for evaluating MT performance on the long tail of syntactic phenomena.

Citation (APA)

Choshen, L., & Abend, O. (2019). Automatically extracting challenge sets for non-local phenomena in neural machine translation. In CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 291–303). Association for Computational Linguistics. https://doi.org/10.18653/v1/k19-1028
