Grammatical Error Correction via Mixed-Grained Weighted Training


Abstract

The task of Grammatical Error Correction (GEC) aims to automatically correct grammatical errors in natural text. Almost all previous work treats annotated training data equally, neglecting inherent discrepancies in the data. In this paper, these discrepancies are characterized along two dimensions: the accuracy of the data annotation and the diversity of potential annotations. To this end, we propose MainGEC, which designs token-level and sentence-level training weights based on the inherent accuracy and the potential diversity of the data annotation, respectively, and then conducts mixed-grained weighted training to improve training for GEC. Empirical evaluation shows that, in both the Seq2Seq and Seq2Edit settings, MainGEC achieves consistent and significant performance improvements on two benchmark datasets, demonstrating the effectiveness and superiority of mixed-grained weighted training. Further ablation experiments verify the effectiveness of the designed weights at both granularities.
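To make the mixed-grained weighting concrete, below is a minimal PyTorch sketch of a training loss that combines per-token and per-sentence weights with a standard cross-entropy objective. The function name, tensor shapes, and the simple multiplicative mixing are illustrative assumptions, not the authors' implementation; the paper's actual construction of the accuracy- and diversity-based weights may differ.

```python
import torch
import torch.nn.functional as F

def mixed_grained_weighted_loss(logits, targets, token_weights,
                                sentence_weights, pad_id=0):
    """Cross-entropy over target tokens, scaled by token-level and
    sentence-level weights before averaging (hypothetical sketch).

    logits:           (batch, seq_len, vocab) decoder outputs
    targets:          (batch, seq_len) gold token ids
    token_weights:    (batch, seq_len) weights, e.g. from annotation accuracy
    sentence_weights: (batch,) weights, e.g. from annotation diversity
    """
    # Per-token negative log-likelihood, keeping the sequence shape.
    nll = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )  # (batch, seq_len)

    # Ignore padding positions when weighting and averaging.
    mask = (targets != pad_id).float()

    # Mix the two granularities: token-level * sentence-level weights.
    weighted = nll * token_weights * sentence_weights.unsqueeze(1) * mask

    # Average over the non-padding tokens.
    return weighted.sum() / mask.sum().clamp(min=1.0)
```

In this sketch the two granularities simply multiply into the token loss; any normalization of the weights, and how they are estimated from the annotations, is left open.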

Citation (APA)

Li, J., Wang, Q., Zhu, C., Mao, Z., & Zhang, Y. (2023). Grammatical Error Correction via Mixed-Grained Weighted Training. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 6027–6037). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.400
