Not all contexts are created equal: Better word representations with variable attention

Abstract

We introduce an extension to the bag-of-words model for learning word representations that takes into account both syntactic and semantic properties of language. This is done by employing an attention model that identifies, among the contextual words, those that are relevant to each prediction. The general intuition of our model is that some words are only relevant for predicting the local context (e.g., function words), while others are better suited for determining global context, such as the topic of the document. Experiments on both semantically and syntactically oriented tasks show gains for our model over the existing bag-of-words model. Furthermore, compared to other, more sophisticated models, our model scales better as the size of the context increases.
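The mechanism the abstract describes can be sketched as a softmax over per-word, per-position scores that reweights the context embeddings before they are averaged. The following is a minimal illustrative sketch of that idea, not the authors' implementation; all names, shapes, and the NumPy setup are assumptions.

```python
import numpy as np

# Minimal sketch of an attention-weighted continuous bag-of-words
# context vector. Parameter names and shapes are assumptions for
# illustration; this is not the authors' released implementation.

rng = np.random.default_rng(0)

V, D, W = 10_000, 100, 2                 # vocab size, embedding dim, half-window
E = rng.normal(scale=0.1, size=(V, D))   # context word embeddings
K = np.zeros((V, 2 * W))                 # per-word, per-position attention scores

def context_vector(context_ids, positions):
    """Attention-weighted average of the context embeddings.

    context_ids -- ids of the 2*W context words
    positions   -- relative-position indices in [0, 2*W)
    """
    scores = K[context_ids, positions]   # one scalar score per context word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over the window
    return weights @ E[context_ids]      # weighted sum, shape (D,)

# Example: a window of two words on each side of the center word.
ctx = np.array([12, 7, 431, 9])          # hypothetical context word ids
pos = np.array([0, 1, 2, 3])             # relative positions (center excluded)
h = context_vector(ctx, pos)             # h is then scored against the vocabulary
```

Under this sketch, a topical word can learn high scores at every position while a function word learns high scores only near the center, which is one way the same mechanism could capture both the local (syntactic) and global (semantic) behaviour the abstract describes.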

Citation (APA)

Ling, W., Chu-Cheng, L., Tsvetkov, Y., Amir, S., Astudillo, R. F., Dyer, C., … Trancoso, I. (2015). Not all contexts are created equal: Better word representations with variable attention. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015) (pp. 1367–1372). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d15-1161
