Regularization theory provides a sound framework for solving supervised learning problems. However, regularization networks have one hidden unit per training example, so their size grows with the size of the training data. In this work we study the relationship between network complexity, i.e. the number of hidden units, and approximation and generalization ability. We propose an incremental hybrid learning algorithm that produces smaller networks with performance similar to that of the original regularization networks. © 2010 Springer-Verlag.
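For context, the classical regularization network the abstract refers to can be sketched as kernel ridge regression: one hidden (kernel) unit is centered at each training point, and the output weights solve a regularized linear system. The sketch below is a minimal NumPy illustration of that baseline, not the paper's incremental algorithm; the Gaussian kernel width and regularization parameter are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, width=0.3):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_regularization_network(X, y, gamma=1e-3, width=0.3):
    # Classical regularization network: one hidden unit per training
    # point; output weights c solve (K + gamma * N * I) c = y.
    N = len(X)
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + gamma * N * np.eye(N), y)

def predict(X_train, c, X_new, width=0.3):
    # Network output is a kernel expansion over the training centers.
    return gaussian_kernel(X_new, X_train, width) @ c

# Toy 1-D regression: fit sin(3x) from 30 noisy-free samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0])
c = fit_regularization_network(X, y)
y_hat = predict(X, c, X)
```

Note that the weight vector `c` has exactly as many entries as there are training points, which is the size problem the abstract's incremental algorithm addresses by building a smaller network.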
CITATION STYLE
Vidnerová, P., & Neruda, R. (2010). Hybrid learning of regularization neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6114 LNAI, pp. 124–131). https://doi.org/10.1007/978-3-642-13232-2_15