An error-oriented approach to word embedding pre-training

Citations: 3 · Mendeley readers: 106

Abstract

We propose a novel word embedding pre-training approach that exploits writing errors in learners' scripts. We compare our method to previous models that tune embeddings based on script scores or on discriminating between correct and corrupted word contexts, as well as to generic embeddings pre-trained on large corpora. The comparison is performed by using each set of embeddings to bootstrap a neural network that learns to predict a holistic score for scripts. Furthermore, we investigate augmenting our model with error corrections and monitor the impact on performance. Our results show that our error-oriented approach outperforms the comparable alternatives, and the gap widens when training on more data. Additionally, extending the model with corrections yields further gains when data sparsity is an issue.
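The core idea of error-oriented pre-training can be illustrated with a toy sketch (this is an illustration, not the authors' implementation): token embeddings are tuned jointly with a binary classifier that predicts whether a token was flagged as a writing error, so the error signal shapes the embedding space. The function name, dimensions, and the simple logistic objective below are all illustrative assumptions.

```python
import numpy as np

def pretrain_error_embeddings(tokens, labels, vocab, dim=8, lr=0.5,
                              epochs=300, seed=0):
    """Toy error-oriented pre-training (illustrative assumption, not the
    paper's model): embeddings and a logistic classifier are trained
    jointly to predict whether each token was flagged as an error."""
    rng = np.random.default_rng(seed)
    E = rng.normal(scale=0.1, size=(len(vocab), dim))  # embedding table
    w = rng.normal(scale=0.1, size=dim)                # classifier weights
    b = 0.0
    idx = np.array([vocab[t] for t in tokens])
    y = np.array(labels, dtype=float)                  # 1 = writing error
    for _ in range(epochs):
        x = E[idx]                                     # (n, dim) embeddings
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))         # P(error | token)
        g = p - y                                      # log-loss gradient wrt logits
        w -= lr * (g @ x) / len(y)                     # update classifier
        b -= lr * g.mean()
        # push each token's embedding along the error gradient
        np.add.at(E, idx, -lr * np.outer(g, w) / len(y))
    return E, w, b

# Hypothetical toy data: "teh" and "catt" are misspellings
vocab = {"the": 0, "cat": 1, "sat": 2, "teh": 3, "catt": 4}
tokens = ["the", "cat", "sat", "teh", "catt"]
labels = [0, 0, 0, 1, 1]
E, w, b = pretrain_error_embeddings(tokens, labels, vocab)
```

After training, the embeddings of error-flagged tokens align with the classifier direction, so a downstream scoring network bootstrapped from them starts with an error-aware representation, which is the intuition the abstract describes.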

Citation (APA)

Farag, Y., Rei, M., & Briscoe, T. (2017). An error-oriented approach to word embedding pre-training. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2017) (pp. 149–158). Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-5016
