Sparse parallel training of hierarchical Dirichlet process topic models

Abstract

To scale non-parametric extensions of probabilistic topic models such as latent Dirichlet allocation to larger data sets, practitioners rely increasingly on parallel and distributed systems. In this work, we study data-parallel training for the hierarchical Dirichlet process (HDP) topic model. Based upon a representation of certain conditional distributions within an HDP, we propose a doubly sparse data-parallel sampler for the HDP topic model. This sampler utilizes all available sources of sparsity found in natural language, an important way to make computation efficient. We benchmark our method on a well-known corpus (PubMed) with 8 million documents and 768 million tokens, training on a single multi-core machine in under four days.
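The paper's sampler is defined for the HDP and is not reproduced here, but the kind of sparsity it exploits can be illustrated on a simpler LDA-style collapsed Gibbs step. The sketch below is an illustrative assumption, not the authors' implementation: it splits the conditional mass for one token into a smoothing bucket over all topics, a document bucket over only the topics active in the current document, and a word bucket over only the topics the current word has been assigned to. All names (`alpha`, `beta`, `n_dk_row`, `n_wk_col`, `n_k`) are hypothetical generic topic-model counts.

```python
# Illustrative sketch of a sparsity-aware collapsed Gibbs step for an
# LDA-style topic model. Not the paper's HDP sampler; it only shows the
# general "sparse buckets" idea. All variable names are hypothetical.
import numpy as np

def sample_topic_sparse(rng, n_dk_row, n_wk_col, n_k, alpha, beta, V):
    """Draw a new topic for one token of a given word in a given document.

    n_dk_row : dict {topic: count} of topic counts in this document (sparse)
    n_wk_col : dict {topic: count} of topic counts for this word (sparse)
    n_k      : dense array of total token counts per topic
    """
    K = len(n_k)
    denom = beta * V + n_k                      # shape (K,)

    # Smoothing bucket: touches every topic, but depends only on global
    # counts, so a real sampler caches it; recomputed here for clarity.
    s_mass = np.sum(alpha * beta / denom)

    # Document bucket: only topics currently used in this document.
    doc_topics = list(n_dk_row.keys())
    r_terms = [n_dk_row[k] * beta / denom[k] for k in doc_topics]
    r_mass = sum(r_terms)

    # Word bucket: only topics this word is currently assigned to.
    word_topics = list(n_wk_col.keys())
    q_terms = [(alpha + n_dk_row.get(k, 0)) * n_wk_col[k] / denom[k]
               for k in word_topics]
    q_mass = sum(q_terms)

    u = rng.random() * (s_mass + r_mass + q_mass)
    if u < q_mass:                              # most draws land here
        for k, t in zip(word_topics, q_terms):
            u -= t
            if u <= 0:
                return k
        return word_topics[-1]
    u -= q_mass
    if u < r_mass:
        for k, t in zip(doc_topics, r_terms):
            u -= t
            if u <= 0:
                return k
        return doc_topics[-1]
    u -= r_mass
    for k in range(K):                          # rare: dense smoothing bucket
        u -= alpha * beta / denom[k]
        if u <= 0:
            return k
    return K - 1
```

In a production sampler the smoothing bucket and the per-topic denominators would be cached and updated incrementally, so the per-token cost scales with the number of topics active for the document and for the word rather than with the total number of topics; that is the sense in which such sampling is "doubly sparse".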

Cite (APA)

Terenin, A., Magnusson, M., & Jonsson, L. (2020). Sparse parallel training of hierarchical Dirichlet process topic models. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 2925–2934). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.234
