Upper bounds for variational stochastic complexities of Bayesian networks

Abstract

In recent years, variational Bayesian learning has been used as an approximation of Bayesian learning. Despite its computational tractability and good generalization performance in many applications, its statistical properties have yet to be clarified. In this paper, we analyze the statistical properties of variational Bayesian learning for Bayesian networks, which are widely used in information processing and in artificial intelligence for reasoning under uncertainty. We derive upper bounds on the asymptotic variational stochastic complexities of Bayesian networks. Our result theoretically supports the effectiveness of variational Bayesian learning as an approximation of Bayesian learning. © Springer-Verlag Berlin Heidelberg 2006.
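
For readers unfamiliar with the quantity named in the title, the following is a minimal sketch of the standard definitions used in the variational Bayes literature; the notation (data X^n, model p(x | w), prior \varphi(w), and the restricted, factorized family \mathcal{Q}) is introduced here for illustration and is not taken verbatim from the paper.

\[
F(X^n) = -\log \int \prod_{i=1}^{n} p(x_i \mid w)\, \varphi(w)\, dw
\qquad \text{(Bayesian stochastic complexity)}
\]
\[
\overline{F}(X^n) = \min_{q \in \mathcal{Q}} \; \mathbb{E}_{q(w)}\!\left[ \log \frac{q(w)}{\varphi(w)\, \prod_{i=1}^{n} p(x_i \mid w)} \right] \;\ge\; F(X^n)
\qquad \text{(variational stochastic complexity)}
\]

Upper bounds in this line of work are typically of the asymptotic form \(\overline{F}(X^n) \le n S_n + \overline{\lambda} \log n + O(1)\), where \(S_n\) is the empirical entropy of the data and \(\overline{\lambda}\) is a coefficient determined by the model structure; the specific coefficient established for Bayesian networks is the paper's contribution and is not reproduced here.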

Citation (APA)

Watanabe, K., Shiga, M., & Watanabe, S. (2006). Upper bounds for variational stochastic complexities of Bayesian networks. In Lecture Notes in Computer Science (Vol. 4224, pp. 139–146). Springer-Verlag. https://doi.org/10.1007/11875581_17
