Consistency of learning Bayesian network structures with continuous variables: An information theoretic approach

8 citations · 14 Mendeley readers

Abstract

We consider the problem of learning a Bayesian network structure from n examples and a prior probability by maximizing the posterior probability. We propose an algorithm that runs in O(n log n) time and handles both continuous and discrete variables without assuming any particular class of distributions. We prove that the resulting decision is strongly consistent, i.e., correct with probability one as n → ∞. To date, consistency for this class of problems has been established only for discrete variables, and many authors have attempted to prove consistency when continuous variables are present. Furthermore, we prove that the "log n" term appearing in the penalty term of the description length can be replaced by 2(1+ε) log log n, where ε > 0 is arbitrary, while preserving strong consistency, which implies that the Hannan-Quinn proposition holds.
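The abstract's penalty-term claim can be illustrated numerically: replacing the "log n" factor with 2(1+ε) log log n yields a penalty that still diverges as n → ∞ but grows far more slowly. The sketch below assumes a standard MDL/BIC-style score in which a model with k parameters is penalized by (k/2) log n; the exact score form used in the paper is not given in the abstract, so this is an illustrative assumption only.

```python
import math

def log_n_penalty(n, k):
    # Standard MDL/BIC-style penalty: (k/2) * log n
    return 0.5 * k * math.log(n)

def log_log_n_penalty(n, k, eps=0.01):
    # Penalty with "log n" replaced by 2(1+eps) * log log n,
    # as in the abstract: (k/2) * 2(1+eps) * log log n
    return k * (1 + eps) * math.log(math.log(n))

# Both penalties diverge, but the log log n variant grows much more slowly.
for n in (10**2, 10**4, 10**6):
    print(n, round(log_n_penalty(n, 1), 3), round(log_log_n_penalty(n, 1), 3))
```

For n = 10^6 and k = 1, the log n penalty is about 6.9 while the log log n penalty is about 2.65; the slower growth is what makes the Hannan-Quinn-style criterion less conservative while, per the paper's result, still strongly consistent.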

Citation (APA)
Suzuki, J. (2015). Consistency of learning Bayesian network structures with continuous variables: An information theoretic approach. Entropy, 17(8), 5752–5770. https://doi.org/10.3390/e17085752
