Model complexity of feedforward neural networks is studied in terms of rates of variable-basis approximation. Sets of functions are described for which the errors in approximation by neural networks with n hidden units converge to zero geometrically fast as n increases. However, the geometric rate of convergence depends on parameters specific to each function being approximated. The results are illustrated by examples estimating such parameters for functions in infinite-dimensional Hilbert spaces. © Springer-Verlag Berlin Heidelberg 2008.
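As a concrete illustration (not taken from the paper), consider a function whose coefficients in a fixed orthonormal basis decay geometrically: its n-term truncations then converge geometrically fast, with the decay rate r playing the role of the function-specific parameter mentioned in the abstract. The sketch below uses the classical fact that f(x) = (1 − r²)/(1 − 2r cos x + r²) has Fourier coefficients r^|k|; the choice r = 1/2 and the grid size are illustrative assumptions.

```python
import math

r = 0.5  # geometric decay rate of the coefficients (illustrative choice)

def f(x):
    # Poisson-kernel-type function: Fourier coefficients are r^|k|
    return (1 - r**2) / (1 - 2 * r * math.cos(x) + r**2)

def f_n(x, n):
    # n-term truncation: sum over |k| <= n collapses to a cosine sum
    return 1 + 2 * sum(r**k * math.cos(k * x) for k in range(1, n + 1))

grid = [2 * math.pi * i / 400 for i in range(400)]
errors = []
for n in range(1, 9):
    # sup-norm error of the n-term approximant over the grid
    err = max(abs(f(x) - f_n(x, n)) for x in grid)
    errors.append(err)
    print(f"n={n}: sup error ~ {err:.3e}")
```

The tail bound 2·r^(n+1)/(1 − r) is attained at x = 0, so successive errors shrink by the factor r = 1/2: the convergence is geometric, but the rate is tied to how fast the particular function's coefficients decay, mirroring the function-dependent parameters in the abstract.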
CITATION STYLE
Kůrková, V., & Sanguineti, M. (2008). Geometric rates of approximation by neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4910 LNCS, pp. 541–550). https://doi.org/10.1007/978-3-540-77566-9_47