There's no comparison: Reference-less evaluation metrics in grammatical error correction

Abstract

Current methods for automatically evaluating grammatical error correction (GEC) systems rely on gold-standard references. However, these methods penalize grammatical edits that are correct but not in the gold standard. We show that reference-less grammaticality metrics correlate very strongly with human judgments and are competitive with the leading reference-based evaluation metrics. By interpolating both methods, we achieve state-of-the-art correlation with human judgments. Finally, we show that GEC metrics are much more reliable when they are calculated at the sentence level instead of the corpus level. We have set up a CodaLab site for benchmarking GEC output using a common dataset and different evaluation metrics.
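
The two computational ideas in the abstract, interpolating a reference-less grammaticality score with a reference-based score and averaging scores at the sentence level rather than over the whole corpus, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `grammaticality_score` and `reference_based_score` are hypothetical stand-ins for the actual metrics (grammaticality models and reference-based metrics such as GLEU or M2), and the interpolation weight `alpha` is an assumed parameter.

```python
from typing import List


def grammaticality_score(hypothesis: str) -> float:
    """Stand-in for a reference-less grammaticality metric.

    A real metric would score the sentence with a grammaticality model or
    error-count heuristics; this placeholder only checks non-emptiness.
    """
    return 1.0 if hypothesis.strip() else 0.0


def reference_based_score(hypothesis: str, references: List[str]) -> float:
    """Stand-in for a reference-based GEC metric (placeholder: exact match)."""
    return 1.0 if hypothesis in references else 0.0


def interpolated_score(hypothesis: str, references: List[str],
                       alpha: float = 0.5) -> float:
    """Linearly combine the reference-less and reference-based scores."""
    g = grammaticality_score(hypothesis)               # needs no references
    r = reference_based_score(hypothesis, references)  # needs gold references
    return alpha * g + (1.0 - alpha) * r


def sentence_level_average(hypotheses: List[str],
                           reference_sets: List[List[str]],
                           alpha: float = 0.5) -> float:
    """Score each sentence separately, then average the sentence scores,
    instead of computing one score over the concatenated corpus."""
    scores = [interpolated_score(h, refs, alpha)
              for h, refs in zip(hypotheses, reference_sets)]
    return sum(scores) / len(scores) if scores else 0.0
```

With alpha = 0 this reduces to the reference-based metric alone; with alpha = 1 no references are needed at all, which is the reference-less setting the paper evaluates.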

Cite (APA)

Napoles, C., Sakaguchi, K., & Tetreault, J. (2016). There’s no comparison: Reference-less evaluation metrics in grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 2109–2115). Association for Computational Linguistics. https://doi.org/10.18653/v1/d16-1228
