Multi-headed Architecture Based on BERT for Grammatical Errors Correction

14 citations · 129 Mendeley readers

Abstract

In recent years we have seen tremendous progress in the development of NLP-related solutions and in the area in general. This is primarily due to the emergence of pre-trained models based on the Transformer (Vaswani et al., 2017) architecture, such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2019). Fine-tuned models built on these representations achieve state-of-the-art results in many NLP tasks, which makes the use of pre-trained models in the Grammatical Error Correction (GEC) task a natural choice. In this paper, we describe our approach to GEC, which uses the BERT model to create an encoded representation, together with several enhancements: "heads", fully connected networks that find the errors and then receive recommendations from the networks for handling only the highlighted part of the sentence. Among the main advantages of our solution are increased system throughput and reduced processing time while keeping the accuracy of the GEC results high.
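
To make the idea concrete, the following PyTorch sketch illustrates one possible reading of the abstract: a BERT encoder whose token representations feed two fully connected "heads", one flagging likely errors and one proposing replacements for the flagged positions. The class name, head shapes, and the exact split of responsibilities between heads are assumptions for illustration, not the authors' reference implementation.

```python
import torch.nn as nn
from transformers import BertModel


class MultiHeadedGECSketch(nn.Module):
    """Hypothetical sketch: BERT encoder plus two fully connected heads --
    one for per-token error detection, one for correction recommendations."""

    def __init__(self, vocab_size, model_name="bert-base-cased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Head 1: binary decision per token (erroneous vs. correct).
        self.error_head = nn.Linear(hidden, 2)
        # Head 2: recommendation over the vocabulary for each token position.
        self.correction_head = nn.Linear(hidden, vocab_size)

    def forward(self, input_ids, attention_mask):
        # Shared encoded representation from BERT: (batch, seq_len, hidden).
        states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        error_logits = self.error_head(states)            # which tokens look wrong
        correction_logits = self.correction_head(states)  # what to put in their place
        return error_logits, correction_logits
```

In such a setup the correction head would only need to be consulted for positions the error head highlights, which is one way a multi-headed design can reduce processing time relative to rewriting the whole sentence.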

Cite

APA

Shaptala, J., & Didenko, B. (2019). Multi-headed architecture based on BERT for grammatical errors correction. In ACL 2019 - Innovative Use of NLP for Building Educational Applications, BEA 2019 - Proceedings of the 14th Workshop (pp. 246–251). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-4426
