Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models

26 citations · 83 Mendeley readers

Abstract

Sequence-to-sequence learning with neural networks has empirically proven to be an effective framework for Chinese Spelling Correction (CSC), which takes a sentence containing spelling errors as input and outputs the corrected sentence. However, CSC models may fail to correct spelling errors covered by the confusion sets, and will also encounter unseen ones. We propose a method that continually identifies the weak spots of a model in order to generate more valuable training instances, and we apply a task-specific pre-training strategy to further enhance the model. The generated adversarial examples are gradually added to the training set. Experimental results show that such an adversarial training method, combined with the pre-training strategy, can improve both the generalization and the robustness of multiple CSC models across three different datasets, achieving state-of-the-art performance on the CSC task.
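The training loop described above (find weak spots, generate adversarial examples, fold them back into the training set, repeat) can be sketched in miniature. This is an illustrative toy, not the paper's method: the "model" is a character-substitution table standing in for a seq2seq CSC model, and the function names (`correct`, `find_weak_spots`, `adversarial_training`) are hypothetical.

```python
# Toy sketch of iterative adversarial training for spelling correction.
# A real CSC model would be a neural seq2seq network; here a dict mapping
# confusable characters to their intended characters stands in for it.

def correct(model, sentence):
    """Apply the model's learned character substitutions."""
    return "".join(model.get(ch, ch) for ch in sentence)

def find_weak_spots(model, eval_pairs):
    """Return (wrong, gold) pairs the current model still fails on."""
    return [(w, g) for w, g in eval_pairs if correct(model, w) != g]

def adversarial_training(model, eval_pairs, rounds=3):
    """Gradually add the examples the model gets wrong back into training."""
    for _ in range(rounds):
        weak = find_weak_spots(model, eval_pairs)
        if not weak:  # no remaining weak spots: stop early
            break
        # "Train" on the adversarial examples: learn each failed substitution.
        for wrong, gold in weak:
            for w_ch, g_ch in zip(wrong, gold):
                if w_ch != g_ch:
                    model[w_ch] = g_ch
    return model

if __name__ == "__main__":
    pairs = [("侍会儿", "待会儿"), ("倒达", "到达")]
    model = adversarial_training({}, pairs)
    print(correct(model, "侍会儿"))  # now corrected to 待会儿
```

The key design point mirrored here is the feedback loop: evaluation failures are not discarded but become the next round's training instances, so each iteration targets exactly where the model is weakest.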

Citation (APA)

Li, C., Zhang, C., Zheng, X., & Huang, X. (2021). Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (Vol. 2, pp. 441–446). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-short.56
