We consider the problem of learning a Bayesian network structure from n examples and a prior probability, by maximizing the posterior probability. We propose an algorithm that runs in O(n log n) time and handles both continuous and discrete variables without assuming any particular class of distributions. We prove that the resulting decision is strongly consistent, i.e., correct with probability one as n → ∞. To date, consistency for this class of problems had been obtained only for discrete variables, and many authors had attempted to prove consistency in the presence of continuous variables. Furthermore, we prove that the "log n" factor in the penalty term of the description length can be replaced by 2(1+ε) log log n while retaining strong consistency, where ε > 0 is arbitrary, which implies that the Hannan-Quinn proposition holds.
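The abstract contrasts two penalty coefficients for a description-length score: the usual "log n" (as in MDL/BIC) and the smaller 2(1+ε) log log n of Hannan-Quinn type. The following is a minimal illustrative sketch of how such penalized scores are typically formed; the function names and the generic score shape are my own assumptions, not the paper's algorithm.

```python
import math


def mdl_penalty(n: int) -> float:
    # MDL/BIC-style penalty coefficient: log n
    return math.log(n)


def hannan_quinn_penalty(n: int, eps: float = 0.1) -> float:
    # Hannan-Quinn-style coefficient: 2(1 + eps) * log log n, with eps > 0 arbitrary
    return 2.0 * (1.0 + eps) * math.log(math.log(n))


def penalized_score(log_likelihood: float, k: int, n: int, penalty) -> float:
    # Generic penalized score: log-likelihood minus (k/2) * penalty(n),
    # where k counts the free parameters of the candidate structure.
    return log_likelihood - 0.5 * k * penalty(n)


if __name__ == "__main__":
    n = 10_000
    # For large n, log n dominates 2(1 + eps) * log log n,
    # so the Hannan-Quinn-style penalty is much lighter.
    print(mdl_penalty(n), hannan_quinn_penalty(n))
```

Because log log n grows far more slowly than log n, the lighter penalty rejects spurious edges less aggressively, which is why proving strong consistency under it is a stronger result.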
Suzuki, J. (2015). Consistency of learning Bayesian network structures with continuous variables: An information theoretic approach. Entropy, 17(8), 5752–5770. https://doi.org/10.3390/e17085752