Model complexity in neural-network learning is investigated using tools from nonlinear approximation and integration theory. Estimates of network complexity are obtained by inspecting upper bounds on the rates at which minima of error functionals over networks with an increasing number of units converge to their global minima. The estimates are derived using integral transforms induced by the networks' computational units. The role of the dimensionality of the training data defining the error functionals is also discussed. © 2009 Springer-Verlag Berlin Heidelberg.
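The qualitative behavior the abstract describes, error-functional minima over networks with n units decreasing toward the global minimum as n grows, can be illustrated with a minimal sketch. The setup below is purely hypothetical (it is not the paper's construction): random sigmoidal units with fixed inner weights, with only the outer weights fit by least squares. Because the n-unit dictionary is nested in the (n+1)-unit one, the training error is provably non-increasing in n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [0, 1] (an arbitrary illustrative choice)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical dictionary of sigmoidal units: inner weights and biases are
# drawn at random and frozen; only the linear outer weights are optimized.
max_units = 64
a = rng.normal(0.0, 10.0, max_units)   # inner weights
b = rng.normal(0.0, 5.0, max_units)    # biases
features = sigmoid(np.outer(x, a) + b)  # design matrix, shape (200, 64)

# Minimize the empirical squared-error functional over the first n units.
errors = []
for n in (1, 2, 4, 8, 16, 32, 64):
    coef, *_ = np.linalg.lstsq(features[:, :n], y, rcond=None)
    resid = y - features[:, :n] @ coef
    errors.append(np.sqrt(np.mean(resid ** 2)))

# Nested models: the minimum over n units cannot exceed the minimum
# over n-1 units, so the error sequence is non-increasing.
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(errors, errors[1:]))
```

The point of the sketch is only the monotone convergence of the minima; the paper's actual estimates bound *how fast* this sequence approaches the global minimum, via norms defined by integral transforms of the computational units.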
Kůrková, V. (2009). Estimates of model complexity in neural-network learning. Studies in Computational Intelligence, 247, 97–111. https://doi.org/10.1007/978-3-642-04003-0_5