Bayesian learning of Markov network structure

Abstract

We propose a simple and efficient approach to building undirected probabilistic classification models (Markov networks) that extend naïve Bayes classifiers and outperform existing directed probabilistic classifiers (Bayesian networks) of similar complexity. Our Markov network model is represented as a set of consistent probability distributions on subsets of variables. Inference with such a model can be done efficiently in closed form for problems like class probability estimation. We also propose a highly efficient Bayesian structure learning algorithm for conditional prediction problems, based on integrating along a hill-climb in the structure space. Our prior based on the degrees of freedom effectively prevents overfitting. © Springer-Verlag Berlin Heidelberg 2006.
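
The abstract describes structure learning as a greedy hill-climb in the space of Markov network structures, scored with a prior based on the degrees of freedom to discourage overfitting. The sketch below is not the paper's algorithm; it is a minimal illustration under assumed simplifications: discrete data, pairwise interactions only, and a decomposable surrogate score (sample size times empirical mutual information, penalized by a BIC-style term proportional to the added degrees of freedom) in place of the paper's Bayesian score. The function names and the penalty form are illustrative choices, not taken from the paper.

```python
import itertools
import numpy as np


def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete columns."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum((c / n) * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())


def added_degrees_of_freedom(x, y):
    """Extra free parameters introduced by modelling the pair (x, y) jointly."""
    return (len(set(x)) - 1) * (len(set(y)) - 1)


def hill_climb_structure(data, penalty=0.5):
    """Greedy hill-climb over pairwise structures: repeatedly add the edge with
    the largest score improvement, stopping when no candidate improves the score.

    The gain for an edge is n * MI (log-likelihood improvement of the pairwise
    model) minus a BIC-style complexity penalty proportional to the added
    degrees of freedom -- a stand-in for the paper's Bayesian score."""
    n, d = data.shape
    gain = {}
    for i, j in itertools.combinations(range(d), 2):
        mi = mutual_information(data[:, i], data[:, j])
        df = added_degrees_of_freedom(data[:, i], data[:, j])
        gain[(i, j)] = n * mi - penalty * np.log(n) * df
    edges = set()
    while True:
        best = max((e for e in gain if e not in edges),
                   key=lambda e: gain[e], default=None)
        if best is None or gain[best] <= 0:
            break
        edges.add(best)
    return edges


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=500)
    b = np.where(rng.random(500) < 0.9, a, 1 - a)  # strongly dependent on a
    c = rng.integers(0, 2, size=500)               # independent noise
    # Likely recovers only the (0, 1) edge: the a-b dependence is strong,
    # while spurious edges fail to beat the degrees-of-freedom penalty.
    print(hill_climb_structure(np.column_stack([a, b, c])))
```

Because this surrogate score decomposes over edges, the climb reduces to accepting every edge whose gain beats the penalty. In the paper the score is Bayesian and the method integrates over the structures visited along the climb rather than committing to a single structure, which is what makes the search non-trivial.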

Citation (APA)
Jakulin, A., & Rish, I. (2006). Bayesian learning of Markov network structure. In Lecture Notes in Computer Science (Vol. 4212, pp. 198–209). Springer. https://doi.org/10.1007/11871842_22
