The capabilities of linear and neural-network models are compared in terms of how model complexity must grow as the required accuracy of approximation increases. Upper bounds on worst-case errors of approximation by neural networks are compared with lower bounds on these errors in linear approximation. The bounds are formulated in terms of singular numbers of certain operators induced by computational units and of high-dimensional volumes of the domains of the functions to be approximated. © 2010 Springer-Verlag Berlin Heidelberg.
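As a side note on the role singular numbers play in lower bounds for linear approximation: by the classical Eckart–Young theorem, the error of the best rank-n approximation of an operator (here, a matrix) in the spectral norm equals its (n+1)-th singular number. The sketch below is a hypothetical numerical illustration of that fact, not code from the paper; the matrix `A` and its size are arbitrary choices.

```python
import numpy as np

# Illustration (assumption, not from the paper): the best rank-n approximation
# error of a matrix A in the spectral norm equals its (n+1)-th singular value.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))

U, s, Vt = np.linalg.svd(A)

def best_rank_n_error(n):
    # Truncated SVD yields the optimal rank-n approximation in spectral norm.
    A_n = U[:, :n] @ np.diag(s[:n]) @ Vt[:n, :]
    return np.linalg.norm(A - A_n, ord=2)

for n in range(1, 5):
    # The achieved error coincides with the (n+1)-th singular number s[n].
    assert np.isclose(best_rank_n_error(n), s[n])
```

This is the mechanism by which singular numbers of the operator induced by a fixed set of computational units yield lower bounds on linear (fixed-basis) approximation errors.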
CITATION STYLE
Gnecco, G., Kůrková, V., & Sanguineti, M. (2010). Some comparisons of model complexity in linear and neural-network approximation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6354 LNCS, pp. 358–367). https://doi.org/10.1007/978-3-642-15825-4_48