Training neural networks with GA hybrid algorithms


Abstract

Training neural networks is a complex task of great importance in supervised learning. In this work we tackle this problem with five algorithms, and we aim to offer a set of results that could foster future comparisons by following a standard evaluation methodology (the Prechelt approach). To study population-based, local search, and hybrid algorithms within the same paper, we have selected two gradient descent algorithms, Backpropagation and Levenberg-Marquardt; one population-based heuristic, a Genetic Algorithm; and two hybrid algorithms combining the Genetic Algorithm with each of the two local search methods. Our benchmark is composed of problems arising in Medicine, and our conclusions clearly establish the advantages of the proposed hybrids over the pure algorithms. © Springer-Verlag Berlin Heidelberg 2004.
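The hybrid scheme the abstract describes, a Genetic Algorithm whose individuals (network weight vectors) are refined by a local gradient descent step, can be sketched as a small memetic algorithm. The sketch below is illustrative only, not the authors' implementation: it uses a single sigmoid neuron on a toy OR dataset in place of the paper's medical benchmarks, plain gradient descent as the local search (Backpropagation-style; Levenberg-Marquardt would slot in the same place), and simple blend crossover with Gaussian mutation as assumed GA operators.

```python
import math
import random

random.seed(0)

# Toy dataset: learn the OR function with one sigmoid neuron
# (an illustrative stand-in for the paper's medical benchmarks).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def mse(w):
    # Mean squared error of the neuron w = (w1, w2, bias) on DATA.
    err = 0.0
    for (x1, x2), t in DATA:
        y = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
        err += (y - t) ** 2
    return err / len(DATA)


def gradient_refine(w, lr=0.5, steps=5):
    # Local search: a few plain gradient descent steps on the MSE.
    w = list(w)
    for _ in range(steps):
        g = [0.0, 0.0, 0.0]
        for (x1, x2), t in DATA:
            y = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
            d = 2 * (y - t) * y * (1 - y) / len(DATA)  # dE/dz
            g[0] += d * x1
            g[1] += d * x2
            g[2] += d
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w


def ga_hybrid(pop_size=20, generations=30):
    # Population of random weight vectors.
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        # Lamarckian step: every individual is improved by local search
        # and the refined weights are written back into the population.
        pop = [gradient_refine(w) for w in pop]
        pop.sort(key=mse)
        survivors = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Blend crossover plus Gaussian mutation (assumed operators).
            children.append(
                [(ai + bi) / 2 + random.gauss(0, 0.1) for ai, bi in zip(a, b)]
            )
        pop = survivors + children
    return min(pop, key=mse)


best = ga_hybrid()
```

Writing the locally refined weights back into the chromosome (Lamarckian learning) is one design choice; a Baldwinian variant would instead use the refined fitness for selection while leaving the chromosome untouched.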

Citation (APA)

Alba, E., & Chicano, J. F. (2004). Training neural networks with GA hybrid algorithms. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3102, 852–863. https://doi.org/10.1007/978-3-540-24854-5_87
