Accelerating collapsed variational Bayesian inference for latent Dirichlet allocation with Nvidia CUDA compatible devices

Abstract

In this paper, we propose an acceleration of collapsed variational Bayesian (CVB) inference for latent Dirichlet allocation (LDA) using Nvidia CUDA compatible devices. While LDA is an effective Bayesian multi-topic document model, its parameter estimation requires more complicated computations than simpler document models such as probabilistic latent semantic indexing. We therefore accelerate CVB inference, an efficient deterministic inference method for LDA, with Nvidia CUDA. In the evaluation experiments, we used a set of 50,000 documents and a set of 10,000 images, and obtained inference results comparable to those of sequential CVB inference. © 2009 Springer Berlin Heidelberg.
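The abstract itself contains no code. As a rough illustration of the kind of per-token computation that CVB inference for LDA performs (and that is amenable to massive parallelization on CUDA devices), here is a minimal sketch of a zeroth-order (CVB0-style) update in Python/NumPy. This is a simplified variant, not the full second-order CVB update the paper accelerates, and all names and hyperparameter values are illustrative assumptions:

```python
import numpy as np

def cvb0_sweep(gamma, doc_of, word_of, n_docs, vocab_size, alpha, beta):
    """One CVB0-style sweep over all tokens (illustrative sketch).

    gamma   : (n_tokens, n_topics) variational topic responsibilities.
    doc_of  : per-token document index.
    word_of : per-token word index.
    alpha, beta : symmetric Dirichlet hyperparameters (assumed values).
    """
    n_tokens, n_topics = gamma.shape

    # Expected counts implied by the current responsibilities.
    n_dk = np.zeros((n_docs, n_topics))      # document-topic counts
    n_kw = np.zeros((n_topics, vocab_size))  # topic-word counts
    n_k = np.zeros(n_topics)                 # per-topic totals
    for t in range(n_tokens):
        n_dk[doc_of[t]] += gamma[t]
        n_kw[:, word_of[t]] += gamma[t]
        n_k += gamma[t]

    for t in range(n_tokens):
        d, w = doc_of[t], word_of[t]
        # Exclude this token's own contribution (the "collapsed" step).
        n_dk[d] -= gamma[t]
        n_kw[:, w] -= gamma[t]
        n_k -= gamma[t]
        # CVB0 update: responsibility proportional to expected counts.
        g = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + vocab_size * beta)
        gamma[t] = g / g.sum()
        # Add the refreshed contribution back.
        n_dk[d] += gamma[t]
        n_kw[:, w] += gamma[t]
        n_k += gamma[t]
    return gamma
```

Because each token's update reads and writes only a few rows of the count arrays, these updates can be batched across thousands of GPU threads, which is the property the paper's CUDA acceleration exploits.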

Citation (APA)

Masada, T., Hamada, T., Shibata, Y., & Oguri, K. (2009). Accelerating collapsed variational Bayesian inference for latent Dirichlet allocation with Nvidia CUDA compatible devices. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5579 LNAI, pp. 491–500). https://doi.org/10.1007/978-3-642-02568-6_50
