Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality


Abstract

Topic models based on latent Dirichlet allocation and related methods are used in a range of user-focused tasks including document navigation and trend analysis, but evaluation of the intrinsic quality of topic models and individual topics remains an open research area. In this work, we explore the two tasks of automatically evaluating single topics and automatically evaluating whole topic models, provide recommendations on the best strategy for each, and release an open-source toolkit for topic and topic model evaluation.
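Among the coherence measures the paper examines is normalized pointwise mutual information (NPMI) computed over the top-N words of a topic using co-occurrence statistics from a reference corpus. The sketch below is a minimal illustration of that idea, not the paper's released toolkit: the function name, the toy corpus, document-level co-occurrence counting, and the -1 fallback for unseen word pairs are assumptions made for the example.

```python
import math
from itertools import combinations

def npmi_coherence(topic_words, doc_sets):
    """Average pairwise NPMI over a topic's top-N words.

    topic_words: the topic's top-N words (e.g. N = 10).
    doc_sets: list of sets, each holding the unique words of one
              reference document (sliding windows over a large corpus
              such as Wikipedia are another common choice).
    """
    n_docs = len(doc_sets)

    def p(*words):
        # Fraction of reference documents containing all given words.
        return sum(all(w in d for w in words) for d in doc_sets) / n_docs

    scores = []
    for w1, w2 in combinations(topic_words, 2):
        p1, p2, p12 = p(w1), p(w2), p(w1, w2)
        if p12 == 0:
            scores.append(-1.0)  # assumed convention: unseen pair gets minimum NPMI
            continue
        pmi = math.log(p12 / (p1 * p2))
        scores.append(pmi / -math.log(p12))  # normalize PMI into [-1, 1]
    return sum(scores) / len(scores)

# Toy usage: score one topic against a tiny reference corpus.
docs = [{"stock", "market", "trade"},
        {"market", "price", "share"},
        {"game", "team", "season"}]
print(npmi_coherence(["stock", "market", "price"], docs))
```

Higher average NPMI indicates that the topic's top words tend to co-occur in the reference corpus, which is the property the paper correlates with human judgements of topic coherence.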

Citation (APA)

Lau, J. H., Newman, D., & Baldwin, T. (2014). Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In 14th Conference of the European Chapter of the Association for Computational Linguistics 2014, EACL 2014 (pp. 530–539). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/e14-1056
