The hidden Markov topic model: A probabilistic model of semantic representation



Abstract

In this paper, we describe a model that learns semantic representations from the distributional statistics of language. Unlike models in the common bag-of-words paradigm, it infers semantic representations by taking into account the inherently sequential nature of linguistic data. The model, which we refer to as a Hidden Markov Topics model, is a natural extension of the current state of the art in Bayesian bag-of-words models, namely the Topics model of Griffiths, Steyvers, and Tenenbaum (2007); it preserves the strengths of that model while extending its scope to incorporate more fine-grained linguistic information.
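To make the contrast with the bag-of-words paradigm concrete, the sketch below shows a minimal generative process in the spirit of a hidden Markov topic model: topics form a Markov chain over word positions instead of being drawn independently for each word, while words are still emitted from per-topic distributions as in standard Bayesian topic models. All dimensions, variable names, and the symmetric Dirichlet parameterization are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
K = 4   # number of topics (hidden states)
V = 12  # vocabulary size
T = 20  # length of the word sequence

# Dirichlet-distributed parameters, as in Bayesian topic models
pi = rng.dirichlet(np.ones(K))           # initial topic distribution
A = rng.dirichlet(np.ones(K), size=K)    # topic-to-topic transition matrix (rows sum to 1)
phi = rng.dirichlet(np.ones(V), size=K)  # per-topic word distributions

# Generate a word sequence: topics evolve as a Markov chain,
# and each word is drawn from its current topic's distribution.
z = np.empty(T, dtype=int)
w = np.empty(T, dtype=int)
z[0] = rng.choice(K, p=pi)
w[0] = rng.choice(V, p=phi[z[0]])
for t in range(1, T):
    z[t] = rng.choice(K, p=A[z[t - 1]])  # sequential dependency (the hidden Markov part)
    w[t] = rng.choice(V, p=phi[z[t]])    # topic-conditioned emission (the topics part)

print("topics:", z)
print("words: ", w)
```

Setting every row of A to the same distribution recovers a bag-of-words topic model in which each word's topic is drawn independently; the Markov transition matrix is what lets the model exploit sequential structure.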

Citation (APA)

Andrews, M., & Vigliocco, G. (2010). The hidden Markov topic model: A probabilistic model of semantic representation. Topics in Cognitive Science, 2(1), 101–113. https://doi.org/10.1111/j.1756-8765.2009.01074.x
