Not All Contexts Are Created Equal: Better Word Representations with Variable Attention

Abstract

We introduce an extension to the bag-of-words model for learning word representations that takes into account both syntactic and semantic properties within language. This is done by employing an attention model that finds, within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are better suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag-of-words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context.
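
To make the abstract's idea concrete, here is a minimal sketch of attention-weighted context averaging in a CBOW-style model: instead of summing context word embeddings uniformly, each context word receives a learned weight that can depend on the word and its position in the window. This is an illustrative sketch, not the authors' exact parameterization; the shapes, the `attn_logits` table, and the `context_vector` helper are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sizes: V = vocabulary size, D = embedding dimension,
# window = context words on each side of the target word.
V, D, window = 10_000, 100, 5
rng = np.random.default_rng(0)

embeddings = rng.normal(scale=0.1, size=(V, D))  # input word vectors
attn_logits = np.zeros((V, 2 * window))          # per-word, per-position scores (assumed form)

def context_vector(context_ids, positions):
    """Attention-weighted average of context word embeddings.

    context_ids: word ids of the context words
    positions:   relative-position indices in [0, 2*window)
    """
    scores = attn_logits[context_ids, positions]  # one logit per context word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over the window
    return weights @ embeddings[context_ids]     # weighted sum, shape (D,)

# Usage: this vector would feed a softmax or negative-sampling objective
# that predicts the center word, as in standard CBOW training.
ctx = context_vector(np.array([12, 7, 993, 4]), np.array([3, 4, 5, 6]))
print(ctx.shape)  # (D,)
```

Under this setup, a function word can learn weights that are high only for nearby positions, while a topical word can contribute across the whole window, matching the local-versus-global intuition described above.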

Authors

  • Wang Ling

  • Yulia Tsvetkov

  • Silvio Amir

  • Ramon Fermandez

  • Chris Dyer

  • Alan W Black
