Decoupled word embeddings using latent topics

Abstract

In this paper, we propose decoupled word embeddings (DWE), a universal word representation that covers multiple senses of a word. Toward this goal, our model represents each word as a combination of multiple word vectors, each associated with a latent topic. Specifically, we decompose a word vector into multiple sense-specific vectors, weighted by topic weights obtained from a pre-trained topic model. Although this dynamic word representation is simple, the proposed model can leverage both local and global contexts. Through extensive experiments, including qualitative and quantitative analyses, we demonstrate that the proposed model is comparable to or better than state-of-the-art word embedding models. The code is publicly available at https://github.com/righ120/DWE.
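
The weighted-combination mechanism described in the abstract can be illustrated with a short sketch. The NumPy snippet below is not the authors' implementation (that is available at the linked repository); the vocabulary size, topic count, embedding dimensionality, the function name decoupled_embedding, the random initialization, and the use of LDA-style Dirichlet weights as stand-ins for a pre-trained topic model are all assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical sketch of the core DWE idea: each word keeps K topic-specific
# vectors, and its context-dependent embedding is a mix of those vectors
# under the topic weights of the surrounding document.

rng = np.random.default_rng(0)

VOCAB_SIZE = 1_000  # assumed vocabulary size
NUM_TOPICS = 10     # assumed number of latent topics (K)
EMBED_DIM = 50      # assumed embedding dimensionality

# One vector per (word, topic) pair: shape (V, K, d).
# In practice these would be learned; random values here are placeholders.
topic_word_vectors = rng.normal(size=(VOCAB_SIZE, NUM_TOPICS, EMBED_DIM))

def decoupled_embedding(word_id: int, topic_weights: np.ndarray) -> np.ndarray:
    """Mix a word's K topic-specific vectors by the document's topic weights.

    `topic_weights` is a length-K distribution (e.g. the posterior from a
    pre-trained topic model run on the word's document), so the result is
    a convex combination of the K sense vectors.
    """
    assert topic_weights.shape == (NUM_TOPICS,)
    # (K,) @ (K, d) -> (d,)
    return topic_weights @ topic_word_vectors[word_id]

# Usage: the same word receives different embeddings under different
# topic mixtures, which is what makes the representation dynamic.
doc_a_topics = rng.dirichlet(np.ones(NUM_TOPICS))  # stand-in for topic-model output
doc_b_topics = rng.dirichlet(np.ones(NUM_TOPICS))
v_a = decoupled_embedding(word_id=42, topic_weights=doc_a_topics)
v_b = decoupled_embedding(word_id=42, topic_weights=doc_b_topics)
print(v_a.shape, np.allclose(v_a, v_b))  # (50,) False
```

Read this way, the document-level topic weights would supply the global context the abstract mentions, while the per-topic sense vectors would be where local co-occurrence information is stored; how the vectors are actually trained is described in the paper, not here.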

Citation (APA)

Park, H., & Lee, J. (2020). Decoupled word embeddings using latent topics. In Proceedings of the ACM Symposium on Applied Computing (pp. 875–882). Association for Computing Machinery. https://doi.org/10.1145/3341105.3373997
