Inducing language networks from continuous space word representations

Citations: 6 · Readers (Mendeley): 15

Abstract

Recent advances in unsupervised feature learning have produced powerful latent representations of words. However, it is still not clear what makes one representation better than another, or how to learn an ideal representation. Understanding the structure of the latent spaces obtained is key to further progress in unsupervised learning. In this work, we introduce a new view of continuous space word representations as language networks. We explore two techniques for creating language networks from learned features, inducing networks for two popular word representation methods and examining the properties of the resulting networks. We find that the induced networks differ from those produced by other methods of creating language networks, and that they contain meaningful community structure.

© 2014 Springer International Publishing Switzerland.
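The abstract does not spell out the induction techniques, but a standard way to turn continuous word vectors into a language network is to connect each word to its nearest neighbors in the embedding space. The sketch below is a minimal illustration of that idea, assuming a k-nearest-neighbor graph under cosine similarity; the toy words and vectors are hypothetical and stand in for vectors produced by a trained word representation model.

```python
import numpy as np

# Hypothetical toy embeddings; in practice these would come from a trained
# word representation model (the paper examines two such methods).
words = ["king", "queen", "man", "woman", "apple"]
vectors = np.array([
    [0.90, 0.80, 0.10],
    [0.85, 0.90, 0.15],
    [0.70, 0.20, 0.10],
    [0.65, 0.30, 0.12],
    [0.10, 0.10, 0.95],
])

def knn_graph(vecs, k=2):
    """Induce an undirected k-nearest-neighbor graph from word vectors
    using cosine similarity; returns a set of (i, j) edge pairs."""
    normed = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops
    edges = set()
    for i in range(len(vecs)):
        # link each word to its k most similar neighbors
        for j in np.argsort(sim[i])[::-1][:k]:
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

edges = knn_graph(vectors, k=2)
# Semantically close pairs (king–queen, man–woman) end up connected,
# which is the kind of structure community detection can then exploit.
```

Once such a graph is induced, standard network analysis (degree distributions, clustering, community detection) can be applied, which is how the properties of the resulting networks can be compared across representation methods.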

Citation (APA)

Perozzi, B., Al-Rfou’, R., Kulkarni, V., & Skiena, S. (2014). Inducing language networks from continuous space word representations. In Studies in Computational Intelligence (Vol. 549, pp. 261–273). Springer Verlag. https://doi.org/10.1007/978-3-319-05401-8_25
