Gradient descent training of radial basis functions


Abstract

In this paper we present experiments comparing different training algorithms for Radial Basis Function (RBF) neural networks. In particular, we compare the classical training procedure, which consists of unsupervised training of the centers followed by supervised training of the output weights, with the fully supervised training by gradient descent recently proposed in some papers. We conclude that fully supervised training generally performs better. We also compare batch and online variants of fully supervised training and conclude that online training reduces the number of iterations and therefore increases the speed of convergence. © Springer-Verlag 2004.
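To make the fully supervised approach concrete, the following is a minimal sketch (not the authors' code) of online gradient-descent training of an RBF network, assuming Gaussian basis functions, a single linear output, and squared-error loss. All function names and hyperparameters here are illustrative assumptions.

# A minimal sketch of fully supervised, online gradient descent for an RBF
# network; an illustration under the assumptions above, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def rbf_forward(x, centers, widths, weights, bias):
    # Gaussian activations: phi_j = exp(-||x - c_j||^2 / (2 * sigma_j^2))
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return phi @ weights + bias, phi, d2

def train_online(X, y, n_hidden=10, lr=0.05, epochs=200):
    n, _ = X.shape
    # Centers start on random training points; widths and weights start small.
    centers = X[rng.choice(n, n_hidden, replace=False)].copy()
    widths = np.full(n_hidden, 1.0)
    weights = rng.normal(scale=0.1, size=n_hidden)
    bias = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):  # online: update after every pattern
            x, t = X[i], y[i]
            out, phi, d2 = rbf_forward(x, centers, widths, weights, bias)
            err = out - t  # dE/dout for E = 0.5 * err**2
            # Fully supervised: gradients w.r.t. weights, bias, centers, widths.
            g_w = err * phi
            g_b = err
            g_c = (err * weights * phi / widths ** 2)[:, None] * (x - centers)
            g_s = err * weights * phi * d2 / widths ** 3
            weights -= lr * g_w
            bias -= lr * g_b
            centers -= lr * g_c
            widths -= lr * g_s
    return centers, widths, weights, bias

# Toy usage: fit a 1-D sine curve.
X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X).ravel()
params = train_online(X, y)
pred = np.array([rbf_forward(x, *params)[0] for x in X])
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))

Under the same assumptions, the batch variant studied in the paper would accumulate the four gradients over all training patterns and apply a single update per epoch, rather than updating after every pattern as above.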

Citation (APA)

Fernández-Redondo, M., Hernández-Espinosa, C., Ortiz-Gómez, M., & Torres-Sospedra, J. (2004). Gradient descent training of radial basis functions. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3173, 229–234. https://doi.org/10.1007/978-3-540-28647-9_39
