Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications

Abstract

Most unsupervised NLP models represent each word with a single point or single region in semantic space, while existing multi-sense word embeddings cannot represent longer word sequences such as phrases or sentences. We propose a novel embedding method for a text sequence (a phrase or a sentence) in which each sequence is represented by a distinct set of multi-mode codebook embeddings that capture different semantic facets of its meaning. The codebook embeddings can be viewed as cluster centers that summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence at test time. Our experiments show that the per-sentence codebook embeddings significantly improve performance on unsupervised sentence similarity and extractive summarization benchmarks. In phrase similarity experiments, we find that the multi-facet embeddings provide an interpretable semantic representation but do not outperform the single-facet baseline.
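For illustration, below is a minimal PyTorch sketch of the general idea the abstract describes: an encoder reads a sequence of pre-trained word vectors and predicts a fixed set of K facet vectors (cluster centers), and two sentences can then be compared by a set-to-set similarity over their facets. The architecture, layer sizes, facet count, and the symmetric best-match similarity function are all illustrative assumptions, not the paper's actual model or training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiFacetEncoder(nn.Module):
    """Toy sketch: predict K codebook vectors ("facets") from a sequence of
    pre-trained word embeddings. The layer sizes, facet count, and the
    learned-query attention scheme are illustrative assumptions."""

    def __init__(self, d_model: int = 300, num_facets: int = 5):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=6, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One learned query per facet; attending over the encoded tokens
        # yields one predicted cluster center per facet.
        self.facet_queries = nn.Parameter(torch.randn(num_facets, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=6, batch_first=True)

    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        # token_embs: (batch, seq_len, d_model) pre-trained word vectors
        h = self.encoder(token_embs)
        q = self.facet_queries.unsqueeze(0).expand(h.size(0), -1, -1)
        facets, _ = self.attn(q, h, h)  # (batch, num_facets, d_model)
        return facets


def set_similarity(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric best-match cosine similarity between two facet sets (K, d):
    each facet is matched to its closest counterpart in the other set."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    sims = a @ b.T  # (K, K) pairwise cosine similarities
    return 0.5 * (sims.max(dim=1).values.mean() + sims.max(dim=0).values.mean())
```

Given the facet sets of two sentences, set_similarity returns a scalar score usable in an unsupervised similarity benchmark. Note that the paper's actual training signal (summarizing the distribution of possibly co-occurring words in the pre-trained embedding space) is not reproduced here; this sketch only shows the inference-time shape of the multi-facet representation.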

Citation (APA)

Chang, H. S., Agrawal, A., & McCallum, A. (2021). Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 8A, pp. 6956–6965). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i8.16857
