Approaching neural grammatical error correction as a low-resource machine translation task


Abstract

Previously, neural methods in grammatical error correction (GEC) did not reach state-of-the-art results compared to phrase-based statistical machine translation (SMT) baselines. We demonstrate parallels between neural GEC and low-resource neural MT and successfully adapt several methods from low-resource MT to neural GEC. We further establish guidelines for trustable results in neural GEC and propose a set of model-independent methods for neural GEC that can be easily applied in most GEC settings. Proposed methods include adding source-side noise, domain-adaptation techniques, a GEC-specific training objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models. The combined effects of these methods result in better than state-of-the-art neural GEC models that outperform previously best neural GEC systems by more than 10% M² on the CoNLL-2014 benchmark and 5.9% on the JFLEG test set. Non-neural state-of-the-art systems are outperformed by more than 2% on the CoNLL-2014 benchmark and by 4% on JFLEG.
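Among the listed methods, source-side noise is the simplest to illustrate: the clean source sentence is perturbed during training so the model does not learn to merely copy its input. A minimal word-level sketch is shown below; the function name and noise rates are illustrative assumptions, not the paper's exact implementation (which applies noise such as dropping out source words during training).

```python
import random

def add_source_noise(tokens, p_drop=0.1, p_swap=0.1, seed=None):
    """Perturb a tokenized source sentence with simple word-level noise.

    p_drop: probability of deleting each token.
    p_swap: probability of swapping a token with its right neighbor.
    (Rates are illustrative; tune for the actual training setup.)
    """
    rng = random.Random(seed)
    # Randomly drop tokens.
    noisy = [t for t in tokens if rng.random() >= p_drop]
    # Randomly swap adjacent tokens.
    i = 0
    while i < len(noisy) - 1:
        if rng.random() < p_swap:
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return noisy
```

With both rates at zero the input passes through unchanged, which makes the noise strength easy to anneal or disable at inference time.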

Citation (APA)

Junczys-Dowmunt, M., Grundkiewicz, R., Guha, S., & Heafield, K. (2018). Approaching neural grammatical error correction as a low-resource machine translation task. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 1, pp. 595–606). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-1055
