Training Neural Networks using non-standard norms – preliminary results

Abstract

We discuss alternative norms for training Neural Networks (NNs), focusing on the so-called Multilayer Perceptrons (MLPs). To this end we rely on a Genetic Algorithm known as the Eclectic GA (EGA), which lets us avoid the drawbacks of backpropagation, the standard training algorithm for this kind of NN. We define four measures of distance: a) the mean exponential error (MEE), b) the mean absolute error (MAE), c) the maximum square error (MSE), and d) the maximum (supremum) absolute error (SAE). We analyze the behavior of an MLP on two kinds of problems: classification and forecasting. We discuss the results of training the NNs with an EGA and show that the alternative norms yield better results than the traditional RMS norm. © Springer-Verlag Berlin Heidelberg 2000.
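
The abstract names the four distance measures but does not spell out their formulas or the EGA's mechanics. As a rough, non-authoritative illustration, the Python sketch below implements the three norms whose standard definitions match their names (MAE, maximum square error, supremum absolute error) plus the traditional RMS norm for reference, together with a tiny one-hidden-layer tanh MLP whose weights live in one flat vector (so the vector can serve directly as a GA chromosome) and a bare-bones truncation-selection GA. The MLP architecture, the GA operators, and all function names here are assumptions for illustration only; this is not the paper's EGA, and the paper's exact MEE formula is not given in the abstract, so it is omitted.

    import numpy as np

    # Error norms. MAE, MSE (here: *maximum* square error, as in the abstract)
    # and SAE follow their standard definitions; RMS is included only as the
    # traditional baseline the paper compares against.
    def error_norms(y_true, y_pred):
        e = np.asarray(y_true) - np.asarray(y_pred)
        return {
            "MAE": float(np.mean(np.abs(e))),        # mean absolute error
            "MSE": float(np.max(e ** 2)),            # maximum square error
            "SAE": float(np.max(np.abs(e))),         # maximum (supremum) absolute error
            "RMS": float(np.sqrt(np.mean(e ** 2))),  # traditional RMS norm
        }

    # A minimal one-hidden-layer MLP (assumed architecture) whose weights are
    # kept in a single flat vector, so the vector itself is the GA chromosome.
    def mlp_forward(w, X, n_in, n_hidden):
        i = 0
        W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
        b1 = w[i:i + n_hidden]; i += n_hidden
        W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
        b2 = w[i:i + 1]
        h = np.tanh(X @ W1 + b1)      # hidden layer
        return (h @ W2 + b2).ravel()  # linear output

    # Fitness of one chromosome: the chosen norm, to be minimized.
    def fitness(w, X, y, norm, n_in, n_hidden):
        return error_norms(y, mlp_forward(w, X, n_in, n_hidden))[norm]

    # A bare-bones GA (truncation selection + Gaussian mutation). This is NOT
    # the Eclectic GA of the paper, only a stand-in showing how a GA can
    # minimize any of the norms above without gradients.
    def ga_train(X, y, norm="MAE", n_in=2, n_hidden=3, pop=30, gens=200, seed=0):
        rng = np.random.default_rng(seed)
        dim = n_in * n_hidden + n_hidden + n_hidden + 1
        P = rng.normal(0.0, 1.0, size=(pop, dim))
        for _ in range(gens):
            f = np.array([fitness(w, X, y, norm, n_in, n_hidden) for w in P])
            elite = P[np.argsort(f)[: pop // 2]]              # keep the best half
            kids = elite + rng.normal(0.0, 0.1, elite.shape)  # mutate copies
            P = np.vstack([elite, kids])
        f = np.array([fitness(w, X, y, norm, n_in, n_hidden) for w in P])
        return P[np.argmin(f)]

For example, training under the supremum norm on a toy forecasting-style target:

    X = np.random.default_rng(1).uniform(-1.0, 1.0, (64, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
    w_best = ga_train(X, y, norm="SAE")
    print(error_norms(y, mlp_forward(w_best, X, 2, 3)))

Because the fitness is just a black-box score, swapping the norm requires no change to the optimizer. This is what makes GA training compatible with non-smooth norms such as SAE, where backpropagation's reliance on gradients gets in the way.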

Citation

Kuri Morales, A. (2000). Training neural networks using non-standard norms – preliminary results. In Lecture Notes in Computer Science (Vol. 1793 LNAI, pp. 350–364). Springer. https://doi.org/10.1007/10720076_32
