Gradient descent and radial basis functions

Abstract

In this paper, we present experiments comparing different training algorithms for Radial Basis Function (RBF) neural networks. In particular, we compare the classical training procedure, which consists of unsupervised training of the centers followed by supervised training of the output weights, with the fully supervised training by gradient descent proposed in some recent papers. We conclude that fully supervised training generally performs better. We also compare batch training with online training, and conclude that online training reduces the number of iterations required. © Springer-Verlag Berlin Heidelberg 2006.
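To make the comparison concrete, the sketch below (our illustration, not the authors' code or data) contrasts the two training schemes on a toy regression task: classical two-stage training, where k-means places the centers without labels and the output weights are fit by least squares, versus fully supervised online gradient descent that adapts centers and weights jointly. The dataset, network size, Gaussian width, learning rate, and epoch count are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (assumed for illustration): y = sin(x) + noise.
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

M = 10        # number of RBF units (assumed)
sigma = 0.7   # fixed Gaussian width (assumed)

def design_matrix(X, centers, sigma):
    """Gaussian activations phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# --- Classical two-stage training: unsupervised centers, supervised weights.
def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

centers = kmeans(X, M)
Phi = design_matrix(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares output weights

# --- Fully supervised training: online gradient descent on centers and weights.
c = X[rng.choice(len(X), M, replace=False)].copy()
w2 = np.zeros(M)
lr = 0.05
for epoch in range(200):
    for i in rng.permutation(len(X)):  # online: update after every pattern
        phi = np.exp(-((X[i] - c) ** 2).sum(-1) / (2.0 * sigma ** 2))
        err = phi @ w2 - y[i]          # residual of the 0.5 * err^2 loss
        w2 -= lr * err * phi                                           # dE/dw_j
        c -= lr * err * (w2 * phi)[:, None] * (X[i] - c) / sigma ** 2  # dE/dc_j

print("two-stage MSE: ", ((Phi @ w - y) ** 2).mean())
print("supervised MSE:", ((design_matrix(X, c, sigma) @ w2 - y) ** 2).mean())
```

In the online variant every pattern triggers an immediate update of both centers and weights, which is the sense in which the abstract reports that online training needs fewer iterations than batch training, where gradients are accumulated over the whole training set before each update.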

Citation (APA)

Fernández-Redondo, M., Torres-Sospedra, J., & Hernández-Espinosa, C. (2006). Gradient descent and radial basis functions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4113 LNCS-I, pp. 391–396). Springer Verlag. https://doi.org/10.1007/11816157_45
