Pre-training and Fine-tuning Neural Topic Model: A Simple yet Effective Approach to Incorporating External Knowledge

12 citations · 59 Mendeley readers

Abstract

Recent years have witnessed growing interest in incorporating external knowledge, such as pre-trained word embeddings (PWEs) or pre-trained language models (PLMs), into neural topic modeling. However, we found that employing PWEs and PLMs for topic modeling yields only limited performance improvements while incurring substantial computational overhead. In this paper, we propose a novel strategy for incorporating external knowledge into neural topic modeling, in which the neural topic model is pre-trained on a large corpus and then fine-tuned on the target dataset. Experiments conducted on three datasets show that the proposed approach significantly outperforms both current state-of-the-art neural topic models and topic modeling approaches enhanced with PWEs or PLMs. Moreover, further study shows that the proposed approach greatly reduces the need for large amounts of training data.
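The abstract does not spell out the model architecture or training procedure, so the following is only a minimal sketch of what the pre-train-then-fine-tune strategy could look like, assuming a ProdLDA-style VAE neural topic model implemented in PyTorch, a vocabulary shared between the large pre-training corpus and the target dataset, and DataLoaders that yield bag-of-words vectors. All class names, hyperparameters, and loader variables are illustrative, not the authors' code.

```python
# Minimal sketch of pre-training a neural topic model on a large corpus and
# fine-tuning it on a target dataset (assumed ProdLDA-style VAE architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTopicModel(nn.Module):
    def __init__(self, vocab_size, num_topics=50, hidden=256):
        super().__init__()
        # Encoder: bag-of-words vector -> Gaussian parameters over topic space
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Softplus())
        self.mu = nn.Linear(hidden, num_topics)
        self.logvar = nn.Linear(hidden, num_topics)
        # Decoder: topic proportions -> per-word log-probabilities (topic-word matrix)
        self.decoder = nn.Linear(num_topics, vocab_size, bias=False)

    def forward(self, bow):
        h = self.encoder(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        theta = F.softmax(z, dim=-1)                            # document-topic proportions
        recon = F.log_softmax(self.decoder(theta), dim=-1)      # word log-probabilities
        return recon, mu, logvar

def elbo_loss(bow, recon, mu, logvar):
    # Multinomial reconstruction term plus KL divergence to a standard normal prior
    nll = -(bow * recon).sum(-1)
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    return (nll + kld).mean()

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (bow,) in loader:          # loader yields bag-of-words count vectors
            loss = elbo_loss(bow, *model(bow))
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stage 1: pre-train on a large external corpus (illustrative loader name).
# Stage 2: fine-tune the same weights on the smaller target dataset.
# model = NeuralTopicModel(vocab_size=20000)
# train(model, large_corpus_loader, epochs=100, lr=2e-3)
# train(model, target_corpus_loader, epochs=20, lr=5e-4)
```

In this sketch the fine-tuning stage simply continues training the pre-trained weights on the target corpus, typically with fewer epochs and a smaller learning rate so the externally acquired knowledge is adapted rather than overwritten; whether the paper uses exactly this schedule is an assumption here.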

Citation (APA)

Zhang, L., Hu, X., Wang, B., Zhou, D., Zhang, Q. W., & Cao, Y. (2022). Pre-training and Fine-tuning Neural Topic Model: A Simple yet Effective Approach to Incorporating External Knowledge. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 5980–5989). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.413
