In this paper, we study the incorporation of Bayesian regularization into constructive neural networks. The degree of regularization is controlled automatically within the Bayesian inference framework and hence requires no manual setting. Simulations show that regularization, with input training using a full Bayesian approach, produces networks with better generalization performance and lower susceptibility to over-fitting as the network size increases. Regularization with input training under MacKay's evidence framework, however, does not produce significant improvement on the problems tested.
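The automatic control of regularization mentioned above can be illustrated with a minimal sketch of MacKay's evidence framework applied to a Bayesian linear model (a simplification of the network setting in the paper; the synthetic data and all variable names here are illustrative assumptions). The weight-decay strength `alpha` and noise precision `beta` are re-estimated from the data rather than set by hand:

```python
import numpy as np

# Hedged sketch: evidence-framework hyperparameter re-estimation for a
# Bayesian linear model. alpha (weight-decay / prior precision) and beta
# (noise precision) are updated from the data, so the degree of
# regularization needs no manual tuning.

rng = np.random.default_rng(0)
N, D = 50, 6
Phi = rng.normal(size=(N, D))                # design matrix (e.g. hidden-unit outputs)
w_true = rng.normal(size=D)
t = Phi @ w_true + 0.1 * rng.normal(size=N)  # noisy targets

alpha, beta = 1.0, 1.0                       # initial guesses
for _ in range(100):
    A = alpha * np.eye(D) + beta * Phi.T @ Phi   # posterior precision
    m = beta * np.linalg.solve(A, Phi.T @ t)     # posterior mean weights
    eigvals = np.linalg.eigvalsh(beta * Phi.T @ Phi)
    gamma = np.sum(eigvals / (eigvals + alpha))  # effective number of parameters
    alpha = gamma / (m @ m)                      # evidence update for alpha
    beta = (N - gamma) / np.sum((t - Phi @ m) ** 2)  # evidence update for beta

print(alpha, beta)
```

Each iteration refits the posterior over weights and then updates the two hyperparameters from the evidence, which is the sense in which the regularization degree is "automatically controlled"; a full Bayesian approach would instead integrate over `alpha` and `beta` rather than optimizing them.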
Citation:
Kwok, T. Y., & Yeung, D. Y. (1996). Bayesian regularization in constructive neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1112 LNCS, pp. 557–562). Springer Verlag. https://doi.org/10.1007/3-540-61510-5_95