Word2Sense: Sparse interpretable word embeddings


Abstract

We present an unsupervised method to generate Word2Sense word embeddings that are interpretable: each dimension of the embedding space corresponds to a fine-grained sense, and the non-negative value of the embedding along the j-th dimension represents the relevance of the j-th sense to the word. The underlying LDA-based generative model can be extended to refine the representation of a polysemous word in a short context, allowing us to use the embeddings in contextual tasks. On computational NLP tasks, Word2Sense embeddings compare well with other word embeddings generated by unsupervised methods. Across tasks such as word similarity, entailment, sense induction, and contextual interpretation, Word2Sense is competitive with the state-of-the-art method for each task. Word2Sense embeddings are at least as sparse and fast to compute as prior art.
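To make the idea of interpretable, sparse, non-negative embeddings concrete, here is a minimal illustrative sketch. The sense names and vectors below are hypothetical toy data, not the authors' learned model (Word2Sense learns senses from corpora with an LDA-based generative model); the sketch only shows how a sparse sense-relevance vector can be inspected per dimension and compared between words.

```python
# Illustrative sketch: sparse, non-negative "sense" embeddings.
# All sense indices and weights are made-up toy values, not learned
# Word2Sense parameters.
import math

# Each word maps to a sparse vector: sense index -> non-negative relevance.
EMBEDDINGS = {
    "bank":  {0: 0.6, 1: 0.4},   # hypothetical finance sense, river sense
    "money": {0: 0.9, 2: 0.1},   # finance sense dominates
    "river": {1: 0.8, 3: 0.2},   # river/geography senses
}

def cosine(u, v):
    """Cosine similarity between two sparse non-negative vectors."""
    dot = sum(w * v.get(i, 0.0) for i, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_senses(word, k=2):
    """The k most relevant senses for a word: each dimension is a
    fine-grained sense, which is what makes the embedding interpretable."""
    return sorted(EMBEDDINGS[word].items(), key=lambda p: -p[1])[:k]

# "bank" is closer to "money" than to "river" because it shares its
# dominant (finance) sense dimension with "money".
print(cosine(EMBEDDINGS["bank"], EMBEDDINGS["money"]) >
      cosine(EMBEDDINGS["bank"], EMBEDDINGS["river"]))  # True
```

Because the vectors are sparse, similarity only touches the few dimensions a word actually uses, which is part of why such embeddings are cheap to compare at scale.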

Citation (APA)

Panigrahi, A., Simhadri, H. V., & Bhattacharyya, C. (2019). Word2Sense: Sparse interpretable word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (pp. 5692–5705). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1570
