Latent Feature Word Representations to Enhance Topic Models for Text Mining Algorithms

Abstract

Dealing with large numbers of textual documents requires proven models that process them efficiently. Text mining needs such models to extract latent features from document collections in a meaningful way. Latent Dirichlet allocation (LDA) is one such probabilistic generative model: it represents document collections systematically and underpins many text mining applications as a topic model. However, LDA-based topic models need to be improved by exploiting latent feature vector representations of words trained on large corpora, so that the word-topic mapping learnt on a smaller corpus is strengthened. For document clustering and document classification, such a novel topic model is essential to improve performance. In this paper, an improved topic model based on LDA is proposed and implemented; it exploits the pre-trained word vectors provided by the Word2Vec tool to achieve the desired enhancement. A prototype application is built to demonstrate the proof of concept with text mining operations such as document clustering.
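The enhancement described above can be sketched as a mixture of two per-topic word distributions: a count-based Dirichlet-multinomial component (standard LDA) and a latent-feature component scored with pre-trained word vectors. The snippet below is a minimal, self-contained illustration of that mixture only; the toy vectors, counts, and the mixture weight `lam` are assumptions for demonstration, not values from the paper (in practice the vectors would come from Word2Vec and the topic vector would be learnt).

```python
import math

# Toy "pre-trained" word vectors (assumed; in practice from Word2Vec).
vectors = {
    "cat": [1.0, 0.1], "dog": [0.9, 0.2],
    "stock": [0.1, 1.0], "market": [0.2, 0.9],
}
vocab = list(vectors)

# Toy per-topic word counts from a Dirichlet-multinomial LDA pass.
counts = {"cat": 5, "dog": 4, "stock": 1, "market": 0}
beta = 0.01   # symmetric Dirichlet smoothing
lam = 0.6     # mixture weight for the latent-feature component (assumed)

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def topic_word_probs(topic_vec, topic_counts):
    """Mix a vector-based softmax with the count-based multinomial."""
    dots = [sum(a * b for a, b in zip(topic_vec, vectors[w])) for w in vocab]
    feat = softmax(dots)
    total = sum(topic_counts.get(w, 0) for w in vocab)
    mult = [(topic_counts.get(w, 0) + beta) / (total + beta * len(vocab))
            for w in vocab]
    return {w: lam * f + (1 - lam) * m
            for w, f, m in zip(vocab, feat, mult)}

# A topic vector aligned with the "animal" direction of the toy vectors.
probs = topic_word_probs([1.0, 0.0], counts)
```

Because both components are proper distributions over the vocabulary, the mixture is too; words that are both frequent in the topic and close to the topic vector (here "cat") receive the highest probability.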

APA

Mohammed, D. T. K., Gayatri, M., … Reddy, V. (2019). Latent Feature Word Representations to Enhance Topic Models for Text Mining Algorithms. International Journal of Engineering and Advanced Technology, 9(2), 4816–4821. https://doi.org/10.35940/ijeat.b2503.129219
