End-to-end neural word alignment outperforms GIZA++


Abstract

Word alignment was once a core unsupervised learning task in natural language processing because of its essential role in training statistical machine translation (MT) models. Although unnecessary for training neural MT models, word alignment still plays an important role in interactive applications of neural machine translation, such as annotation transfer and lexicon injection. While statistical MT methods have been replaced by neural approaches with superior performance, the twenty-year-old GIZA++ toolkit remains a key component of state-of-the-art word alignment systems. Prior work on neural word alignment has only been able to outperform GIZA++ by using its output during training. We present the first end-to-end neural word alignment method that consistently outperforms GIZA++ on three data sets. Our approach repurposes a Transformer model trained for supervised translation to also serve as an unsupervised word alignment model in a manner that is tightly integrated and does not affect translation quality.
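The paper's actual method involves tightly integrating an alignment component with a Transformer trained for translation; as a rough, generic illustration of the underlying idea of reading word alignments off a translation model, the sketch below converts a cross-attention matrix into hard alignment links by taking, for each target token, the source position with the highest attention weight. The attention tensor and the argmax heuristic here are illustrative assumptions, not the authors' procedure.

```python
import torch

def hard_alignment_from_attention(attn: torch.Tensor) -> list[tuple[int, int]]:
    """Turn a cross-attention matrix of shape (tgt_len, src_len) into hard
    (src_index, tgt_index) alignment links via argmax over source positions.
    This is a generic illustration, not the exact procedure from the paper."""
    src_best = attn.argmax(dim=-1)  # best source position for each target token
    return [(int(s), t) for t, s in enumerate(src_best)]

# Toy example: 3 target tokens attending over 4 source tokens.
attn = torch.tensor([
    [0.7, 0.1, 0.1, 0.1],  # target token 0 mostly attends to source 0
    [0.1, 0.2, 0.6, 0.1],  # target token 1 mostly attends to source 2
    [0.1, 0.1, 0.2, 0.6],  # target token 2 mostly attends to source 3
])
print(hard_alignment_from_attention(attn))  # [(0, 0), (2, 1), (3, 2)]
```

In practice, attention-argmax alignments alone are known to be noisy, which is part of the motivation for the paper's dedicated end-to-end alignment approach.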

Citation (APA)
Zenkel, T., Wuebker, J., & DeNero, J. (2020). End-to-end neural word alignment outperforms GIZA++. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 1605–1617). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.146
