Balancing bias and variance: Network topology and pattern set reduction techniques

14 citations · 10 Mendeley readers

Abstract

It has been estimated that some 70% of neural-network applications use a variant of the multi-layer feed-forward network trained with back-propagation. These networks are non-parametric estimators, and their limitations can be explained by a well-understood problem in non-parametric statistics known as the "bias and variance" dilemma. The dilemma is that, to obtain a good approximation of an input-output relationship with some estimator, either constraints must be placed on the structure of the estimator, thereby introducing bias, or a very large number of examples of the relationship must be used to construct it. We thus face a trade-off between generalisation ability and training time. We review this area and introduce our own methods for reducing the size of trained networks without compromising their generalisation ability, and for reducing the size of the training pattern set to improve training time, again without reducing generalisation.

Citation (APA)

Gedeon, T. D., Wong, P. M., & Harris, D. (1995). Balancing bias and variance: Network topology and pattern set reduction techniques. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 930, pp. 551–559). Springer Verlag. https://doi.org/10.1007/3-540-59497-3_222
