Accelerating the convergence of EM-based training algorithms for RBF networks

Abstract

In this paper, we propose a new Expectation-Maximization (EM) algorithm which speeds up the training of feedforward networks with local activation functions, such as the Radial Basis Function (RBF) network. The core of the conditional EM algorithm for supervised learning of feedforward networks consists of decomposing the observations among the individual units and then estimating the parameters of each unit separately. In previously proposed approaches, at each E-step the residual is decomposed equally among the units or proportionally to the weights of the output layer. However, this approach tends to slow down the training of networks with local activation units. To overcome this drawback, in this paper we use a new E-step which applies a soft decomposition of the residual among the units. In particular, the residual is decomposed according to the probability of each RBF unit given each input-output pattern. It is shown that this variant not only speeds up the training in comparison with other EM-type algorithms, but also provides better results than a global gradient-descent technique, since it has the capability of avoiding some unwanted minima of the cost function. © Springer-Verlag Berlin Heidelberg 2001.
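The following Python sketch illustrates the kind of EM iteration the abstract describes: the residual is split softly among Gaussian RBF units, and each unit is then refit against its own share. The responsibility rule (normalized, weight-scaled activations) and the per-unit least-squares update are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def rbf_activations(X, centers, widths):
    """Gaussian RBF activations phi_j(x) = exp(-||x - c_j||^2 / (2 s_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (N, M)
    return np.exp(-d2 / (2.0 * widths ** 2))

def em_step(X, y, centers, widths, weights):
    Phi = rbf_activations(X, centers, widths)        # (N, M) unit activations
    residual = y - Phi @ weights                     # (N,) network residual

    # E-step: soft decomposition -- each unit receives a share of the
    # residual proportional to its weighted activation on that pattern
    # (an assumed stand-in for the unit posterior used in the paper).
    resp = np.abs(weights) * Phi
    resp /= resp.sum(axis=1, keepdims=True) + 1e-12  # (N, M) responsibilities
    # Per-unit target: current contribution plus the unit's residual share.
    unit_targets = Phi * weights + resp * residual[:, None]

    # M-step: refit each unit's output weight on its decomposed target;
    # centers and widths could likewise be updated per unit here.
    new_weights = (Phi * unit_targets).sum(axis=0) / ((Phi ** 2).sum(axis=0) + 1e-12)
    return new_weights
```

A training loop would simply call em_step repeatedly until the weights stabilize; because each unit sees only the residual it is responsible for, local units are not forced to share the error equally, which is the bottleneck the abstract attributes to earlier E-steps.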

Citation (APA)

Lázaro, M., Santamaría, I., & Pantaleón, C. (2001). Accelerating the convergence of EM-based training algorithms for RBF networks. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2084, 347–354. https://doi.org/10.1007/3-540-45720-8_40
