Training radial basis functions by gradient descent

Abstract

In this paper we present experiments comparing different training algorithms for Radial Basis Function (RBF) neural networks. In particular, we compare the classical training procedure, which consists of unsupervised training of the centers followed by supervised training of the output weights, with the fully supervised training by gradient descent proposed recently in some papers. We conclude that fully supervised training generally performs better. We also compare batch training with online training and conclude that online training reduces the number of iterations required.
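
As a concrete illustration of the fully supervised scheme compared in the paper, below is a minimal sketch of a Gaussian RBF network whose centers, widths, and output weights are all updated by online (pattern-by-pattern) gradient descent on the squared error. The toy data, network size, learning rate, and all identifiers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal Gaussian RBF network trained by full gradient descent.
# Every name and hyperparameter here is illustrative, not from the paper.

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-3, 3].
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0])

n_centers = 10
centers = rng.uniform(-3.0, 3.0, size=(n_centers, 1))  # c_j
widths = np.full(n_centers, 1.0)                        # sigma_j
weights = rng.normal(0.0, 0.1, size=n_centers)          # w_j
lr = 0.05

def forward(x):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))
    d2 = np.sum((x - centers) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return phi @ weights, phi, d2

# Online training: parameters are updated after every pattern, rather
# than accumulating gradients over the whole set (batch training).
# Centers, widths, and output weights are all trained supervised,
# by gradient descent on E = 0.5 * (out - t)^2.
for epoch in range(200):
    for i in rng.permutation(len(X)):
        x, t = X[i], y[i]
        out, phi, d2 = forward(x)
        err = out - t  # dE/dout
        grad_w = err * phi
        grad_c = (err * weights[:, None] * phi[:, None]
                  * (x - centers) / widths[:, None] ** 2)
        grad_s = err * weights * phi * d2 / widths ** 3
        weights -= lr * grad_w
        centers -= lr * grad_c
        # Keep widths bounded away from zero for numerical safety.
        widths = np.maximum(widths - lr * grad_s, 0.1)

preds = np.array([forward(x)[0] for x in X])
print("final MSE:", np.mean((preds - y) ** 2))
```

In the classical two-stage procedure the inner updates to centers and widths would be replaced by an unsupervised placement step (e.g. clustering), with only the output weights fitted to the targets.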

Cite

APA

Fernández-Redondo, M., Hernández-Espinosa, C., Ortiz-Gómez, M., & Torres-Sospedra, J. (2004). Training radial basis functions by gradient descent. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3070, pp. 184–189). Springer-Verlag. https://doi.org/10.1007/978-3-540-24844-6_23
