We introduce an extension to the bag-of-words model for learning word representations that takes into account both syntactic and semantic properties of language. This is done by employing an attention model that identifies, among the contextual words, those that are relevant for each prediction. The general intuition behind our model is that some words are relevant only for predicting local context (e.g., function words), while others are better suited for determining global context, such as the topic of the document. Experiments on both semantically and syntactically oriented tasks show gains for our model over the existing bag-of-words model. Furthermore, compared to other, more sophisticated models, our model scales better as the size of the context increases.
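To make the idea concrete, here is a minimal sketch of the attention mechanism the abstract describes, assuming a CBOW-style setup: instead of uniformly averaging the context embeddings, each context word receives a weight produced by a softmax over learned per-word, per-position scores. All names and sizes below (E, K, the window width, the toy word ids) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: attention-weighted context vector vs. plain CBOW averaging.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, window = 1000, 50, 2          # 2 context words on each side

E = rng.normal(scale=0.1, size=(vocab_size, dim))          # word embeddings
K = rng.normal(scale=0.1, size=(vocab_size, 2 * window))   # per-word, per-position attention scores

def context_vector(context_ids):
    """Attention-weighted average of the context embeddings.

    context_ids: word ids at positions [-window, ..., -1, +1, ..., +window].
    """
    logits = np.array([K[w, i] for i, w in enumerate(context_ids)])
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                          # softmax over the window
    return attn @ E[context_ids]                # weighted sum -> (dim,)

def cbow_vector(context_ids):
    """Plain bag-of-words baseline: uniform average of the context."""
    return E[context_ids].mean(axis=0)

ctx = [12, 7, 345, 99]                          # toy context word ids
print(context_vector(ctx).shape)                # (50,)
```

In this sketch, a word that matters only locally can learn high scores in the positions adjacent to the target, while a topical word can learn high scores across all positions, so it contributes to the context vector wherever it appears in the window.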
Citation:
Ling, W., Lin, C.-C., Tsvetkov, Y., Amir, S., Astudillo, R. F., Dyer, C., … Trancoso, I. (2015). Not all contexts are created equal: Better word representations with variable attention. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015) (pp. 1367–1372). Association for Computational Linguistics. https://doi.org/10.18653/v1/d15-1161