Bayesian Methods for Backpropagation Networks

  • MacKay, D. J. C.

Abstract

Bayesian probability theory provides a unifying framework for data modeling. In this framework, the overall aims are to find models that are well matched to the data, and to use these models to make optimal predictions. Neural network learning is interpreted as an inference of the most probable parameters for the model, given the training data. The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizers, and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given the data. This provides powerful and practical methods for controlling, comparing, and using adaptive network models. This chapter describes numerical techniques based on Gaussian approximations for the implementation of these methods.
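As a concrete illustration of the two levels of inference the abstract describes, here is a minimal sketch in Python/NumPy, not taken from the chapter itself: a tiny one-hidden-layer network is fit by MAP estimation under a weight-decay prior, and a Gaussian (Laplace) approximation around the most probable weights then yields an approximate log evidence that can be used to compare alternative models. The toy data, the network size H, the fixed hyperparameters alpha and beta, and the crude finite-difference Hessian are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + Gaussian noise (illustrative).
N = 30
x = np.linspace(-3, 3, N)
y = np.sin(x) + 0.1 * rng.standard_normal(N)

H = 3                      # hidden units: one point in "model space"
k = 3 * H + 1              # total number of weights and biases
alpha, beta = 0.1, 100.0   # weight-decay constant and noise precision (assumed fixed)

def unpack(w):
    return w[:H], w[H:2 * H], w[2 * H:3 * H], w[3 * H]

def net(w, x):
    w1, b1, w2, b2 = unpack(w)
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def M(w):
    # Negative log posterior, up to an additive constant:
    # M(w) = beta * E_D(w) + alpha * E_W(w).
    E_D = 0.5 * np.sum((net(w, x) - y) ** 2)
    E_W = 0.5 * np.sum(w ** 2)
    return beta * E_D + alpha * E_W

# Level 1: infer the most probable weights w_MP given the data (MAP).
w_mp = minimize(M, 0.1 * rng.standard_normal(k), method="BFGS").x

def hessian(f, w, eps=1e-4):
    # Crude finite-difference Hessian A = grad grad M, evaluated at w_MP.
    n = len(w)
    A = np.zeros((n, n))
    I = np.eye(n) * eps
    for i in range(n):
        for j in range(n):
            A[i, j] = (f(w + I[i] + I[j]) - f(w + I[i] - I[j])
                       - f(w - I[i] + I[j]) + f(w - I[i] - I[j])) / (4 * eps ** 2)
    return A

A = hessian(M, w_mp)

# Level 2: Gaussian (Laplace) approximation to the log evidence,
#   ln P(D | alpha, beta, H) ~ -M(w_MP) - (1/2) ln det A
#                              + (k/2) ln alpha + (N/2) ln(beta / 2 pi),
# which scores this architecture against alternatives.
_, logdet = np.linalg.slogdet(A)
log_evidence = (-M(w_mp) - 0.5 * logdet
                + 0.5 * k * np.log(alpha)
                + 0.5 * N * np.log(beta / (2 * np.pi)))
print(f"approximate log evidence: {log_evidence:.2f}")
```

Repeating the fit for different values of H (or of the weight-decay constant alpha) and comparing the resulting log evidences is the model-comparison step the abstract refers to: the Gaussian approximation makes that comparison tractable without integrating over the weights exactly.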

Cite

APA

MacKay, D. J. C. (1996). Bayesian methods for backpropagation networks. In E. Domany, J. L. van Hemmen, & K. Schulten (Eds.), Models of Neural Networks III (pp. 211–254). Springer. https://doi.org/10.1007/978-1-4612-0723-8_6
