Predicting Target Language CCG Supertags Improves Neural Machine Translation


Abstract

Neural machine translation (NMT) models are able to partially learn syntactic information from sequential lexical information. Still, some complex syntactic phenomena such as prepositional phrase attachment are poorly modeled. This work aims to answer two questions: 1) Does explicitly modeling target language syntax help NMT? 2) Is tight integration of words and syntax better than multitask training? We introduce syntactic information in the form of CCG supertags in the decoder, by interleaving the target supertags with the word sequence. Our results on WMT data show that explicitly modeling target-syntax improves machine translation quality for German→English, a high-resource pair, and for Romanian→English, a low-resource pair, as well as for several syntactic phenomena, including prepositional phrase attachment. Furthermore, a tight coupling of words and syntax improves translation quality more than multitask training. By combining target-syntax with adding source-side dependency labels in the embedding layer, we obtain a total improvement of 0.9 BLEU for German→English and 1.2 BLEU for Romanian→English.
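As a concrete illustration of the interleaving scheme the abstract describes, the sketch below builds a decoder target sequence in which each word is preceded by its CCG supertag. This is a minimal, self-contained example: the function name, the example sentence, and the supertags are hypothetical and do not come from the paper or from a real supertagger.

```python
# Minimal sketch of interleaving target-side CCG supertags with words,
# as described in the abstract. The supertags below are illustrative
# placeholders, not the output of an actual CCG supertagger.

def interleave_supertags(words, supertags):
    """Return one flat target sequence with each supertag placed
    immediately before the word it labels."""
    assert len(words) == len(supertags), "one supertag per word"
    sequence = []
    for tag, word in zip(supertags, words):
        sequence.append(tag)
        sequence.append(word)
    return sequence

words = ["We", "live", "in", "Berlin"]
tags = ["NP", "(S[dcl]\\NP)/PP", "PP/NP", "NP"]  # hypothetical supertags

print(" ".join(interleave_supertags(words, tags)))
# NP We (S[dcl]\NP)/PP live PP/NP in NP Berlin
```

Under this scheme the decoder predicts the interleaved sequence as ordinary tokens, so words and their syntactic labels are tightly coupled in a single output stream rather than trained as separate tasks.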

Cite

APA

Nadejde, M., Reddy, S., Sennrich, R., Dwojak, T., Junczys-Dowmunt, M., Koehn, P., & Birch, A. (2017). Predicting target language CCG supertags improves neural machine translation. In WMT 2017 - 2nd Conference on Machine Translation, Proceedings (pp. 68–79). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-4707
