Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech

5 citations · 71 Mendeley readers

Abstract

While there is a large body of research studying deep learning methods for text generation from structured data, almost all of it focuses purely on English. In this paper, we study the effectiveness of machine translation-based pre-training for data-to-text generation in non-English languages. Since the structured data is generally expressed in English, text generation into other languages involves elements of translation, transliteration and copying, elements already encoded in neural machine translation systems. Moreover, since data-to-text corpora are typically small, this task can benefit greatly from pre-training. We conduct experiments on Czech, a morphologically complex language. Results show that machine translation pre-training lets us train end-to-end models that significantly improve upon unsupervised pre-training and linguistically informed pipelined neural systems, as judged by automatic metrics and human evaluation. We also show that this approach enjoys several desirable properties, including improved performance in low-data scenarios and applicability to low-resource languages.
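To make the recipe concrete, below is a minimal sketch of the pre-train-then-fine-tune idea using the Hugging Face Transformers library. Everything in it is an assumption rather than the authors' actual setup: the public Helsinki-NLP/opus-mt-en-cs MarianMT checkpoint stands in for their English-to-Czech MT system, and the tiny in-memory pair (in an E2E-challenge-like linearization) stands in for a real data-to-text corpus.

    # Minimal sketch: fine-tune an en->cs MT model on data-to-text pairs.
    # ASSUMPTIONS (not from the paper): the Helsinki-NLP/opus-mt-en-cs
    # checkpoint stands in for the authors' own MT system, and the toy
    # example below stands in for a real data-to-text training corpus.
    import torch
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-cs"  # assumed stand-in checkpoint
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    # The structured record is linearized into an English-like source
    # string; the target is the desired Czech verbalization.
    train_pairs = [
        ("name[Blue Spice] eatType[restaurant] area[city centre]",
         "Blue Spice je restaurace v centru města."),
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    model.train()
    for src, tgt in train_pairs:
        batch = tokenizer(src, return_tensors="pt")
        labels = tokenizer(text_target=tgt, return_tensors="pt").input_ids
        # Standard seq2seq cross-entropy loss, exactly as in MT training.
        loss = model(**batch, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # After fine-tuning, generation works like ordinary translation.
    model.eval()
    inputs = tokenizer("name[Blue Spice] eatType[pub]", return_tensors="pt")
    out = model.generate(**inputs, max_length=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

The design point this sketch illustrates is that no architectural change is needed: the linearized data record is simply treated as an English source sentence, so the translation, transliteration and copying behaviors already learned during MT pre-training carry over when fine-tuning on a small data-to-text corpus.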

Citation (APA)

Kale, M., & Roy, S. (2020). Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech. In INLG 2020 - 13th International Conference on Natural Language Generation, Proceedings (pp. 91–96). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.inlg-1.13
