Distributed representations of geographically situated language


Abstract

We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word's meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation. © 2014 Association for Computational Linguistics.
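The abstract describes learning word representations conditioned on the speaker's geography, so that a word's meaning can vary by region while still sharing a common core. A minimal sketch of this idea, assuming an additive parameterization in which a situated word vector is a shared base embedding plus a region-specific deviation (the names `base`, `region_dev`, and `situated_vector` are illustrative, and the values below are random rather than trained):

```python
import numpy as np

rng = np.random.default_rng(0)

V, R, D = 5, 2, 4  # vocabulary size, number of regions, embedding dimension
vocab = {"wicked": 0, "very": 1, "city": 2, "truck": 3, "road": 4}

# Shared base embeddings, plus one additive deviation matrix per region.
# In a trained model these would be fit jointly (e.g., with a skip-gram
# style objective over geo-located text); here they are random placeholders.
base = rng.normal(scale=0.1, size=(V, D))
region_dev = rng.normal(scale=0.1, size=(R, V, D))

def situated_vector(word: str, region: int) -> np.ndarray:
    """Representation of `word` as used in `region`: base + regional deviation."""
    i = vocab[word]
    return base[i] + region_dev[region, i]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The same word gets a different vector, and hence different nearest
# neighbors, in different regions, while sharing the common base.
v0 = situated_vector("wicked", 0)
v1 = situated_vector("wicked", 1)
print(cosine(v0, v1))
```

Because every region shares `base`, the regional vectors stay comparable in one space, which is what allows the geographically informed similarity judgments the abstract evaluates.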

Citation (APA)

Bamman, D., Dyer, C., & Smith, N. A. (2014). Distributed representations of geographically situated language. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 2, pp. 828–834). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-2134
