Improving grammatical error correction models with purpose-built adversarial examples

Abstract

Sequence-to-sequence (seq2seq) learning with neural networks has empirically proven to be an effective framework for grammatical error correction (GEC), which takes a sentence with errors as input and outputs the corrected one. However, the performance of seq2seq GEC models relies heavily on the size and quality of the available training corpus. We propose a method, inspired by adversarial training, that generates more meaningful and valuable training examples by continually identifying a model's weak spots, and that enhances the model by gradually adding the generated adversarial examples to the training set. Extensive experimental results show that such adversarial training improves both the generalization and robustness of GEC models.
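The abstract describes the training procedure only in prose. The Python sketch below shows one way such a loop could look: train, find the examples the current model handles worst, corrupt their reference corrections to create new adversarial pairs, grow the training set, and repeat. Every name here (ToyGEC, perturb, ERROR_TABLE) is an illustrative assumption standing in for the paper's seq2seq model and learned error generator, not the authors' implementation.

import random

class ToyGEC:
    """Stand-in for a seq2seq GEC model: it simply memorizes pairs."""
    def __init__(self):
        self.memory = {}

    def correct(self, source):
        # Unseen inputs are copied through uncorrected.
        return self.memory.get(source, source)

    def fine_tune(self, pairs):
        # "Training" here is memorizing (erroneous, corrected) pairs.
        self.memory.update(dict(pairs))

def loss(model, source, target):
    """Toy loss: 0 if the model's output matches the reference, else 1."""
    return 0.0 if model.correct(source) == target else 1.0

# Common confusion pairs used to inject plausible errors (illustrative).
ERROR_TABLE = {"there": "their", "goes": "go", "its": "it's"}

def perturb(correct_sentence):
    """Inject one confusion error into a correct sentence, producing a
    new erroneous source aligned with the same reference target."""
    tokens = correct_sentence.split()
    slots = [i for i, t in enumerate(tokens) if t in ERROR_TABLE]
    if not slots:
        return correct_sentence
    i = random.choice(slots)
    tokens[i] = ERROR_TABLE[tokens[i]]
    return " ".join(tokens)

def adversarial_training(model, train_set, rounds=3, k=2):
    """Alternate between training and adversarial augmentation."""
    for _ in range(rounds):
        model.fine_tune(train_set)
        # Weak spots: the k training pairs with the highest current loss.
        weak = sorted(train_set, key=lambda p: loss(model, *p), reverse=True)[:k]
        # New adversarial pairs: freshly corrupted sources, same targets.
        train_set = train_set + [(perturb(tgt), tgt) for _, tgt in weak]
    return model, train_set

if __name__ == "__main__":
    data = [("she go to school", "she goes to school"),
            ("their is a cat", "there is a cat")]
    model, augmented = adversarial_training(ToyGEC(), data)
    print(f"{len(augmented)} training pairs after augmentation")

In the paper's actual setting, perturb would be replaced by the learned adversarial example generator and fine_tune by gradient updates to the seq2seq network; what the sketch preserves is the alternation between weak-spot identification and gradual growth of the training set.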

Cite (APA)

Wang, L., & Zheng, X. (2020). Improving grammatical error correction models with purpose-built adversarial examples. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020) (pp. 2858–2869). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.228
