CoNLL 2014 shared task: Grammatical error correction with a syntactic N-gram language model from a big corpora


Abstract

We describe our approach to grammatical error correction as presented at the CoNLL 2014 Shared Task. Our work focuses on detecting errors in sentences using a language model based on syntactic tri-grams and bi-grams extracted from dependency trees built over 90% of the English Wikipedia. We also add a naïve error-correction module that outputs a set of candidate sentences; these candidates are scored with the syntactic n-gram language model, and the sentence with the best score is the system's final suggestion. The system was ranked 11th. This is evidently a very simple approach, but from the beginning our main goal was to test a syntactic n-gram language model built from a large corpus, as a baseline for future comparison.
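
The sketch below is a minimal illustration of the candidate-scoring step the abstract describes, not the authors' implementation: candidate corrections are scored with add-alpha-smoothed counts of syntactic bi-/tri-grams, and the highest-scoring sentence is kept. The count table, the toy n-grams, and all function names here are assumptions for illustration; a real system would obtain the n-grams from dependency parses of the Wikipedia corpus.

    import math
    from collections import Counter

    # Hypothetical counts of syntactic n-grams (head-dependent paths)
    # harvested from dependency trees of a large corpus.
    syntactic_ngram_counts = Counter({
        ("have", "effect"): 120,
        ("have", "effect", "significant"): 15,
        ("have", "affect"): 2,
    })

    def score(dependency_ngrams, counts, alpha=1.0, vocab=1_000_000):
        """Add-alpha-smoothed log-probability proxy for a list of syntactic n-grams."""
        total = sum(counts.values())
        return sum(math.log((counts[g] + alpha) / (total + alpha * vocab))
                   for g in dependency_ngrams)

    def best_candidate(candidates, counts):
        """Return the candidate sentence whose syntactic n-grams score highest.
        `candidates` maps a sentence to its dependency n-grams, which a real
        system would extract with a dependency parser."""
        return max(candidates, key=lambda s: score(candidates[s], counts))

    candidates = {
        "This has a significant effect.": [("have", "effect"),
                                           ("have", "effect", "significant")],
        "This has a significant affect.": [("have", "affect")],
    }
    print(best_candidate(candidates, syntactic_ngram_counts))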

Citation (APA)

David Hernandez, S., & Calvo, H. (2014). CoNLL 2014 shared task: Grammatical error correction with a syntactic N-gram language model from a big corpora. In CoNLL 2014 - 18th Conference on Computational Natural Language Learning, Proceedings of the Shared Task (pp. 53–59). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/w14-1707
