Adaptive Probabilistic Word Embedding

10 citations · 23 Mendeley readers
Abstract

Word embeddings are widely used and have proven effective in many natural language processing and text modeling tasks. An ambiguous word can carry very different semantics in different contexts, a phenomenon known as polysemy. Most existing works generate only a single embedding per word, while a few build a limited number of embeddings to represent a word's different senses. However, it is hard to determine the exact number of senses for each word, since word meaning depends on context. To address this problem, we propose a novel Adaptive Probabilistic Word Embedding (APWE) model, in which word polysemy is defined over a latent interpretable semantic space. Specifically, each word is first represented by an embedding in the latent semantic space; based on the proposed APWE model, this embedding is then adaptively adjusted and updated according to the context, yielding a tailored word embedding. Empirical comparisons with state-of-the-art models demonstrate the superiority of the proposed APWE model.
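The core idea of adjusting a base embedding by its context can be sketched as follows. This is a hypothetical illustration, not the authors' APWE implementation: the embeddings, the attention-style weighting, and the blending factor are all assumptions made for the sake of the example.

```python
import numpy as np

# Hypothetical sketch of context-adaptive embeddings (not the APWE model itself):
# a word's base embedding is blended with a context-weighted combination of the
# embeddings of its context words, yielding a context-tailored vector.
rng = np.random.default_rng(0)
vocab = ["bank", "river", "money", "deposit"]
E = {w: rng.normal(size=8) for w in vocab}  # base embeddings in a latent space


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def adapt(word, context):
    """Return a context-tailored embedding for `word`."""
    base = E[word]
    ctx = np.stack([E[c] for c in context])
    weights = softmax(ctx @ base)            # attend to relevant context words
    return 0.5 * base + 0.5 * weights @ ctx  # blend base with context summary


# The same word receives different embeddings under different contexts:
v1 = adapt("bank", ["river"])
v2 = adapt("bank", ["money", "deposit"])
```

Here `v1` and `v2` differ, illustrating how one word can be assigned distinct vectors per context without fixing the number of senses in advance.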

Citation (APA)

Li, S., Zhang, Y., Pan, R., & Mo, K. (2020). Adaptive Probabilistic Word Embedding. In The Web Conference 2020 - Proceedings of the World Wide Web Conference, WWW 2020 (pp. 651–661). Association for Computing Machinery, Inc. https://doi.org/10.1145/3366423.3380147
