A Simple Recipe for Multilingual Grammatical Error Correction

Abstract

This paper presents a simple recipe for training state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets, our models surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German, and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing the CLANG-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of the widely used yet noisy LANG-8 dataset. CLANG-8 greatly simplifies typical GEC training pipelines, which are composed of multiple fine-tuning stages: we demonstrate that performing a single fine-tuning step on CLANG-8 with off-the-shelf language models yields further accuracy improvements over the already top-performing gT5 model for English.
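The synthetic-data ingredient works by corrupting clean text so that (corrupted, clean) pairs can serve as pretraining examples. Below is a minimal sketch of that kind of language-agnostic corruption, assuming simple random drop/swap/duplicate edits on tokens; the paper's exact corruption operations and rates may differ.

```python
import random

def corrupt(tokens, p_drop=0.05, p_swap=0.05, p_dup=0.02):
    """Apply random language-agnostic edits to a token list.

    Returns a corrupted copy to serve as the 'ungrammatical' source;
    the original sentence is kept as the correction target.
    """
    out, i = [], 0
    while i < len(tokens):
        r = random.random()
        if r < p_drop:
            i += 1                                   # drop this token
        elif r < p_drop + p_swap and i + 1 < len(tokens):
            out.extend([tokens[i + 1], tokens[i]])   # swap adjacent tokens
            i += 2
        elif r < p_drop + p_swap + p_dup:
            out.extend([tokens[i], tokens[i]])       # duplicate this token
            i += 1
        else:
            out.append(tokens[i])                    # keep token unchanged
            i += 1
    return out

sentence = "she goes to school every day".split()
pair = (" ".join(corrupt(sentence)), " ".join(sentence))
print(pair)  # (corrupted source, clean target)
```

The single-stage pipeline the abstract describes amounts to one fine-tuning pass of an off-the-shelf sequence-to-sequence model on CLANG-8 source/target pairs. A minimal sketch using Hugging Face's T5 follows; the model name, the "gec:" task prefix, the example pair, and the training-loop details are illustrative assumptions, not the paper's setup.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Hypothetical CLANG-8-style pair: (noisy source, cleaned target).
sources = ["gec: I has an apple ."]   # task prefix is an assumption, not required
targets = ["I have an apple ."]

tok = T5TokenizerFast.from_pretrained("t5-base")  # stand-in for the larger models used in the paper
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

batch = tok(sources, return_tensors="pt", padding=True)
labels = tok(targets, return_tensors="pt", padding=True).input_ids
labels[labels == tok.pad_token_id] = -100         # ignore padding positions in the loss

loss = model(input_ids=batch.input_ids,
             attention_mask=batch.attention_mask,
             labels=labels).loss                  # one step of the single fine-tuning stage
loss.backward()
optimizer.step()
```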

Citation (APA)

Rothe, S., Mallinson, J., Malmi, E., Krause, S., & Severyn, A. (2021). A Simple Recipe for Multilingual Grammatical Error Correction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers) (pp. 702–707). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-short.89
