Learning Regularization Parameters of Radial Basis Functions in Embedded Likelihoods Space

Abstract

Neural networks with radial basis activation functions are typically trained in two phases: the first constructs the hidden layer, while the second finds the output-layer weights. Constructing the hidden layer involves defining the number of units as well as their centers and widths. The output layer can then be trained with least-squares methods, usually including a regularization term. This work proposes an approach for building the whole network using information extracted directly from the training data projected into the space formed by the likelihood functions. RBF networks for pattern classification can then be trained with minimal external intervention.
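To make the two-phase training scheme described above concrete, the sketch below shows a minimal RBF network with Gaussian units and a regularized least-squares fit of the output weights. This is a generic illustration, not the paper's method: the centers, widths, and regularization parameter `lam` are chosen by hand here, whereas the paper's contribution is precisely to derive such quantities from the projected training data.

```python
import numpy as np

def rbf_design_matrix(X, centers, widths):
    # Gaussian activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2 * widths[j]^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def train_output_weights(X, y, centers, widths, lam=1e-2):
    # Phase 2: ridge-regularized least squares,
    # w = (Phi^T Phi + lam * I)^{-1} Phi^T y
    Phi = rbf_design_matrix(X, centers, widths)
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def predict(X, centers, widths, w):
    return rbf_design_matrix(X, centers, widths) @ w

# Toy usage: XOR with one hidden unit per training point (hand-set widths)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
centers, widths = X.copy(), np.full(4, 0.5)
w = train_output_weights(X, y, centers, widths)
```

In practice, phase 1 (choosing the number of units, centers, and widths) is often done with clustering such as k-means; the paper instead extracts this structure, along with the regularization parameter, from the embedded likelihoods space.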

APA

Menezes, M., Torres, L. C. B., & Braga, A. P. (2019). Learning Regularization Parameters of Radial Basis Functions in Embedded Likelihoods Space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11805 LNAI, pp. 281–292). Springer Verlag. https://doi.org/10.1007/978-3-030-30244-3_24
