MULTITuDE: Large-Scale Multilingual Machine-Generated Text Detection Benchmark


Abstract

There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English, and into the performance of machine-generated text detectors in multilingual settings. This is also reflected in the available benchmarks, which lack authentic texts in languages other than English and predominantly cover older generators. To fill this gap, we introduce MULTITuDE, a novel benchmarking dataset for multilingual machine-generated text detection comprising 74,081 authentic and machine-generated texts in 11 languages (ar, ca, cs, de, en, es, nl, pt, ru, uk, and zh) generated by 8 multilingual LLMs. Using this benchmark, we compare the performance of zero-shot (statistical and black-box) and fine-tuned detectors. Considering the multilinguality, we evaluate 1) how these detectors generalize to unseen languages (linguistically similar as well as dissimilar) and unseen LLMs and 2) whether the detectors improve their performance when trained on multiple languages.
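The evaluation protocol described above (train a detector, then test it on unseen languages and generators) can be illustrated with a deliberately minimal sketch. Note this is not the paper's method: the benchmark's detectors are statistical, black-box, and fine-tuned LLM-based models, whereas everything below (the nearest-centroid character-bigram classifier and the sample texts) is a hypothetical stand-in used only to show the train/evaluate loop.

```python
# Toy sketch of a binary human-vs-machine text classifier and the
# cross-lingual evaluation loop. Labels: 0 = human, 1 = machine.
from collections import Counter
import math

def bigrams(text):
    """Character-bigram count vector for a text (toy feature set)."""
    t = text.lower()
    return Counter(t[i:i + 2] for i in range(len(t) - 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(examples):
    """Build one bigram centroid per class from (text, label) pairs."""
    centroids = {0: Counter(), 1: Counter()}
    for text, label in examples:
        centroids[label].update(bigrams(text))
    return centroids

def predict(centroids, text):
    """Assign the class whose centroid is most similar to the text."""
    v = bigrams(text)
    return max((0, 1), key=lambda c: cosine(centroids[c], v))

def accuracy(centroids, examples):
    """Fraction of (text, label) pairs classified correctly."""
    hits = sum(predict(centroids, t) == y for t, y in examples)
    return hits / len(examples)

# Hypothetical toy data standing in for one training language;
# a real run would train on one language's split and then call
# accuracy() on held-out splits for unseen languages/LLMs.
train_set = [
    ("The city council met on Tuesday to discuss the budget.", 0),
    ("Local residents voiced concerns about the new bridge.", 0),
    ("As an AI language model, I can certainly help with that.", 1),
    ("In conclusion, the aforementioned points clearly demonstrate.", 1),
]
centroids = train(train_set)
train_acc = accuracy(centroids, train_set)
```

In the benchmark's setting, the same loop would be repeated per (training language, test language) pair to populate a cross-lingual generalization matrix; the toy features here would, of course, be replaced by an actual detector.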

Citation (APA)

Macko, D., Moro, R., Uchendu, A., Lucas, J. S., Yamashita, M., Pikuliak, M., … Bielikova, M. (2023). MULTITuDE: Large-Scale Multilingual Machine-Generated Text Detection Benchmark. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 9960–9987). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.616
