Towards developing probabilistic generative models for reasoning with natural language representations


Abstract

Probabilistic generative models have been applied successfully in a wide range of applications, from speech recognition and part-of-speech tagging to machine translation and information retrieval, but applications such as reasoning have traditionally been thought to fall outside the scope of the generative framework, for both theoretical and practical reasons. Theoretically, it is difficult to imagine, for example, what a reasonable generative story for first-order logic inference might look like. Practically, even if we can conceive of such a story, it is unclear how one can obtain sufficient amounts of training data. In this paper, we discuss how, by embracing a less restrictive notion of inference, one can build generative models of inference that can be trained on massive amounts of naturally occurring text, and we present text-based deduction and abduction decoding algorithms. © Springer-Verlag Berlin Heidelberg 2005.
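The generative framing the abstract alludes to can be illustrated with a toy noisy-channel sketch (not the authors' system; all premises, observations, and probabilities below are invented): abductive decoding picks the premise h that maximizes P(h) · P(observation | h).

```python
# Hypothetical prior over candidate premises P(h) -- invented for illustration.
prior = {
    "it rained": 0.3,
    "sprinkler ran": 0.2,
    "nothing happened": 0.5,
}

# Hypothetical channel model P(observation | h) -- also invented.
channel = {
    ("grass is wet", "it rained"): 0.9,
    ("grass is wet", "sprinkler ran"): 0.8,
    ("grass is wet", "nothing happened"): 0.05,
}

def abduce(observation):
    """Return the premise h maximizing P(h) * P(observation | h)."""
    return max(prior, key=lambda h: prior[h] * channel.get((observation, h), 0.0))

print(abduce("grass is wet"))  # -> it rained  (0.3 * 0.9 = 0.27 is the best score)
```

In the paper's setting, the prior and channel would instead be estimated from large text corpora rather than hand-specified, and decoding would search over natural-language hypotheses.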

APA citation

Marcu, D., & Popescu, A. M. (2005). Towards developing probabilistic generative models for reasoning with natural language representations. In Lecture Notes in Computer Science (Vol. 3406, pp. 88–99). Springer Verlag. https://doi.org/10.1007/978-3-540-30586-6_8
