Context representations are central to various NLP tasks, such as word sense disambiguation, named entity recognition, co-reference resolution, and many more. In this work we present a neural model for efficiently learning a generic context embedding function from large corpora, using a bidirectional LSTM. With a very simple application of our context representations, we manage to surpass or nearly reach state-of-the-art results on sentence completion, lexical substitution, and word sense disambiguation tasks, while substantially outperforming the popular context representation of averaged word embeddings. We release our code and pre-trained models, suggesting they could be useful in a wide variety of NLP tasks.
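To make the contrast concrete, the following is a minimal sketch (not the authors' released implementation) of the core idea: encode the words to the left and right of a target position with two directional LSTMs and merge their final states into a single context vector, versus the simpler baseline of averaging the embeddings of all context words. All class names, layer sizes, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch only; class names and dimensions are assumptions,
# not the authors' code or hyperparameters.
import torch
import torch.nn as nn


class Context2VecSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One LSTM reads the left context left-to-right,
        # the other reads the right context right-to-left.
        self.fwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # A small MLP merges the two directional states into one context vector.
        self.merge = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, emb_dim),
        )

    def forward(self, token_ids, target_pos):
        # token_ids: (1, seq_len) word indices; target_pos: position whose context we embed.
        embs = self.embed(token_ids)
        left = embs[:, :target_pos, :]                         # words before the target
        right = torch.flip(embs[:, target_pos + 1:, :], [1])   # words after the target, reversed
        _, (h_left, _) = self.fwd_lstm(left)
        _, (h_right, _) = self.bwd_lstm(right)
        return self.merge(torch.cat([h_left[-1], h_right[-1]], dim=-1))

    def averaged_baseline(self, token_ids, target_pos):
        # The popular baseline: average the embeddings of all context words.
        embs = self.embed(token_ids).squeeze(0)
        mask = torch.ones(embs.size(0), dtype=torch.bool)
        mask[target_pos] = False
        return embs[mask].mean(dim=0, keepdim=True)


model = Context2VecSketch(vocab_size=10000)
ids = torch.tensor([[12, 7, 431, 9, 88]])    # toy sentence of five word ids
ctx = model(ids, target_pos=2)               # biLSTM context embedding of position 2
avg = model.averaged_baseline(ids, target_pos=2)
print(ctx.shape, avg.shape)                  # both (1, 300)
```

In the paper's setup such a context vector is trained jointly with target-word embeddings so that a word and its typical contexts land close together; the sketch above only shows the encoding step, not the training objective.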
Citation:
Melamud, O., Goldberger, J., & Dagan, I. (2016). context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016) (pp. 51–61). Association for Computational Linguistics. https://doi.org/10.18653/v1/k16-1006