Gradient descent training of Bayesian networks


Abstract

As shown by Russell et al., 1995 [7], Bayesian networks can be equipped with a gradient descent learning method similar to the training method for neural networks. The calculation of the required gradients can be performed locally along with propagation. We review how this can be done, and we show how the gradient descent approach can be used for various tasks like tuning and training with training sets of definite as well as non-definite classifications. We introduce tools for resistance and damping to guide the direction of convergence, and we use them for a new adaptation method which can also handle situations where parameters in the network covary.
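The core idea the abstract refers to — that the gradient of the evidence likelihood with respect to conditional-probability-table entries reduces to posterior quantities available after propagation — can be illustrated with a minimal sketch. This is not the paper's exact algorithm (in particular it has no resistance or damping terms): the two-node network, starting parameters, learning rate, and toy data below are all made up for illustration, and exact enumeration stands in for propagation, which is feasible only because the net is tiny.

```python
import math

# Toy Bayesian network A -> B, both binary.  A is hidden; only B is observed.
theta_a = 0.5          # P(A=1)           (arbitrary starting value)
theta_b = [0.3, 0.6]   # P(B=1 | A=a) for a = 0, 1

data = [1, 1, 0, 1, 1, 0, 1, 1]   # observed values of B (made-up data)

def log_likelihood(theta_a, theta_b, data):
    """Log-likelihood of the observed B values, summing out hidden A."""
    ll = 0.0
    for b in data:
        pa = [1 - theta_a, theta_a]
        pb = [theta_b[a] if b == 1 else 1 - theta_b[a] for a in (0, 1)]
        ll += math.log(pa[0] * pb[0] + pa[1] * pb[1])
    return ll

def step(theta_a, theta_b, data, lr=0.05):
    """One gradient-ascent step on the log-likelihood.  The derivatives
    d ln P(e)/d theta are posterior quantities that propagation could
    deliver locally; here they come from exact enumeration."""
    g_a, g_b = 0.0, [0.0, 0.0]
    for b in data:
        pa = [1 - theta_a, theta_a]
        pb = [theta_b[a] if b == 1 else 1 - theta_b[a] for a in (0, 1)]
        pe = pa[0] * pb[0] + pa[1] * pb[1]    # evidence likelihood P(B=b)
        g_a += (pb[1] - pb[0]) / pe           # d ln P(e) / d theta_a
        sign = 1.0 if b == 1 else -1.0
        for a in (0, 1):
            g_b[a] += sign * pa[a] / pe       # d ln P(e) / d theta_b[a]
    clip = lambda p: min(max(p, 1e-6), 1 - 1e-6)  # keep probabilities valid
    return (clip(theta_a + lr * g_a),
            [clip(theta_b[a] + lr * g_b[a]) for a in (0, 1)])

before = log_likelihood(theta_a, theta_b, data)
for _ in range(100):
    theta_a, theta_b = step(theta_a, theta_b, data)
after = log_likelihood(theta_a, theta_b, data)
print(after > before)   # ascent should improve the fit
```

Because A is hidden, only the marginal P(B=1) is identifiable here; the ascent drives it toward the empirical frequency 6/8 = 0.75. This also hints at the covariance issue the abstract raises: many (theta_a, theta_b) combinations yield the same likelihood, so plain gradient ascent wanders along that manifold unless it is guided.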

APA

Jensen, F. V. (1999). Gradient descent training of Bayesian networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1638, pp. 190–200). Springer Verlag. https://doi.org/10.1007/3-540-48747-6_18
