MTG: A Benchmark Suite for Multilingual Text Generation


Abstract

We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation. It is the first multilingual, multiway text generation dataset and offers the largest amount of human-annotated data to date (400k). It covers four generation tasks (story generation, question generation, title generation, and text summarization) across five languages (English, German, French, Spanish, and Chinese). The multiway setup enables testing how well a model transfers knowledge across languages and tasks. Using MTG, we train and analyze several popular multilingual generation models from different perspectives. The benchmark's additional human-annotated parallel data supports improving model performance, and its diverse generation scenarios enable comprehensive evaluation. Code and data are available at https://github.com/zide05/MTG.
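For illustration only, the sketch below shows one way the multiway structure described above (four tasks crossed with five languages) could be enumerated and loaded. The directory layout, file names, and field names ("src"/"tgt") are assumptions for the sake of the example, not the repository's documented format; consult the MTG release at the URL above for the actual organization.

```python
import json
from itertools import product
from pathlib import Path

# The four tasks and five languages covered by MTG, per the abstract.
TASKS = ["story_generation", "question_generation", "title_generation", "summarization"]
LANGS = ["en", "de", "fr", "es", "zh"]

def load_split(root: Path, task: str, lang: str, split: str = "train"):
    """Yield (source, target) pairs from an assumed JSONL layout:
    <root>/<task>/<lang>/<split>.jsonl with 'src' and 'tgt' keys.
    This layout is hypothetical; adapt it to the actual MTG data files."""
    path = root / task / lang / f"{split}.jsonl"
    with path.open(encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield record["src"], record["tgt"]

if __name__ == "__main__":
    root = Path("MTG_data")  # hypothetical local copy of the dataset
    # Enumerate all 4 x 5 task-language combinations of the multiway benchmark.
    for task, lang in product(TASKS, LANGS):
        print(f"{task} / {lang}: expecting data under {root / task / lang}")
```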

Citation

Chen, Y., Song, Z., Wu, X., Wang, D., Xu, J., Chen, J., … Li, L. (2022). MTG: A Benchmark Suite for Multilingual Text Generation. In Findings of the Association for Computational Linguistics: NAACL 2022 (pp. 2508–2527). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-naacl.192
