Short text topic modeling with topic distribution quantization and negative sampling decoder

Abstract

Topic models have prevailed for many years in discovering latent semantics when modeling long documents. For short texts, however, they generally suffer from data sparsity because of extremely limited word co-occurrences, and thus tend to yield repetitive or trivial topics of low quality. In this paper, to address this issue, we propose a novel neural topic model in the autoencoding framework with a new topic distribution quantization approach that generates peakier distributions better suited to modeling short texts. Beyond the encoding side, we further tackle this issue in decoding by proposing a novel negative sampling decoder that learns from negative samples to avoid yielding repetitive topics. We observe that our model greatly improves short text topic modeling performance. Through extensive experiments on real-world datasets, we demonstrate that our model outperforms both strong traditional and neural baselines under extreme data sparsity, producing high-quality topics.
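The abstract does not detail how the topic distribution quantization works, but a minimal sketch can illustrate the general idea of quantization producing peakier distributions: snap a smooth encoder topic distribution to the nearest entry of a codebook of near-one-hot distributions. The codebook construction and L2 nearest-neighbor rule here are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics = 5

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

# Encoder output: a smooth (flat-ish) topic distribution over 5 topics.
theta = softmax(rng.normal(size=n_topics))

# Hypothetical codebook: one near-one-hot ("peaky") distribution per topic.
eps = 0.01
codebook = np.full((n_topics, n_topics), eps / (n_topics - 1))
np.fill_diagonal(codebook, 1.0 - eps)

# Quantize: replace theta with its nearest codebook entry (L2 distance).
k = int(np.argmin(((codebook - theta) ** 2).sum(axis=1)))
theta_q = codebook[k]

assert np.isclose(theta_q.sum(), 1.0)   # still a valid distribution
assert theta_q.max() > theta.max()      # quantized distribution is peakier
```

In a trained model the codebook entries would be learned and gradients passed through the quantization step (e.g., with a straight-through estimator); this sketch only shows why quantization sharpens the distribution.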

Citation (APA)

Wu, X., Li, C., Zhu, Y., & Miao, Y. (2020). Short text topic modeling with topic distribution quantization and negative sampling decoder. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1772–1782). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.138
