Introduction to variational methods for graphical models

Abstract

This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally, we return to the examples and demonstrate how variational algorithms can be formulated in each case.
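As a concrete illustration of a convex-duality variational transformation of the kind the abstract describes, consider the standard conjugate bound on the logarithm: for any variational parameter λ > 0, ln(x) ≤ λx − ln(λ) − 1, with equality at λ = 1/x. Minimizing over λ recovers ln(x) exactly; fixing λ linearizes the nonlinearity, which is what makes inference in the transformed model tractable. The sketch below (a generic illustration, not code from the paper) checks this bound numerically:

```python
import math

def log_upper_bound(x, lam):
    """Variational upper bound on ln(x) from convex duality:
    ln(x) <= lam * x - ln(lam) - 1 for any lam > 0,
    with equality at lam = 1 / x."""
    return lam * x - math.log(lam) - 1.0

x = 3.0
exact = math.log(x)

# The bound holds for every positive choice of the variational parameter ...
for lam in (0.1, 1.0 / 3.0, 1.0, 2.0):
    assert log_upper_bound(x, lam) >= exact - 1e-12

# ... and is tight (achieves equality) at lam = 1/x.
assert abs(log_upper_bound(x, 1.0 / x) - exact) < 1e-12
```

The same pattern, replacing an intractable nonlinear term with a family of simpler (here, linear) bounds indexed by a variational parameter, underlies the transformations the paper develops for the QMR-DT network, sigmoid belief networks, and the other examples.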

Citation (APA)
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1999). Introduction to variational methods for graphical models. Machine Learning, 37(2), 183–233. https://doi.org/10.1023/A:1007665907178
