Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model

  • Katsumata, S.

Abstract

Studies on grammatical error correction (GEC) have reported the effectiveness of pretraining a Seq2Seq model on large amounts of pseudodata. However, this approach requires time-consuming pretraining for GEC because of the size of the pseudodata. In this study, we explore the utility of Bidirectional and Auto-Regressive Transformers (BART) as a generic pretrained encoder-decoder model for GEC. Using this generic pretrained model for GEC eliminates the time-consuming, task-specific pretraining. We find that monolingual and multilingual BART models achieve high performance on GEC, with one result comparable to the current strong results in English GEC. Our implementation is publicly available on GitHub (https://github.com/Katsumata420/generic-pretrained-GEC).
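To make the approach concrete: the paper frames GEC as sequence-to-sequence "translation" from an erroneous sentence to its correction, fine-tuning an off-the-shelf BART checkpoint directly on GEC data rather than pretraining on pseudodata. The sketch below illustrates that idea with the Hugging Face Transformers library; the model name (facebook/bart-base), the toy sentence pairs, and the hyperparameters are illustrative assumptions, not the authors' setup, which is the fairseq-based code in the linked repository.

```python
# Minimal sketch: fine-tune a generic pretrained encoder-decoder (BART)
# on erroneous -> corrected sentence pairs, then decode with beam search.
# All data and hyperparameters here are toy placeholders.
import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

model_name = "facebook/bart-base"  # assumed checkpoint; paper uses monolingual/multilingual BART
tokenizer = BartTokenizerFast.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Toy parallel GEC data: (source with errors, corrected target).
pairs = [
    ("She go to school yesterday.", "She went to school yesterday."),
    ("I am agree with you.", "I agree with you."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for src, tgt in pairs:
    # Tokenize source as encoder input and target as decoder labels.
    batch = tokenizer(src, text_target=tgt, return_tensors="pt",
                      truncation=True, max_length=128)
    loss = model(**batch).loss  # cross-entropy against corrected tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: generate a correction for a new erroneous sentence.
model.eval()
inputs = tokenizer("He have a books.", return_tensors="pt")
out = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The point of the sketch is the absence of any pseudodata pretraining loop: the only training step is supervised fine-tuning on genuine GEC pairs, which is what lets the generic pretrained model replace the expensive GEC-specific pretraining described in prior work.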

Citation (APA)

Katsumata, S. (2021). Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model. Journal of Natural Language Processing, 28(1), 276–280. https://doi.org/10.5715/jnlp.28.276
