Avoiding local minima in feedforward neural networks by simultaneous learning


Abstract

Feedforward neural networks are particularly useful for learning a training dataset without prior knowledge. However, adjusting the weights with gradient descent may leave training trapped in a local minimum. Repeated training from random starting weights is a popular way to avoid this problem, but it requires extensive computation time. This paper proposes a simultaneous training method with removal criteria that eliminates less promising networks, which decreases the probability of reaching a local minimum while using computational resources efficiently. Experimental results demonstrate the effectiveness and efficiency of the proposed method in comparison with conventional training. © Springer-Verlag Berlin Heidelberg 2007.
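The idea described in the abstract can be illustrated with a minimal sketch: several randomly initialized networks are trained in parallel, and the worst performer is periodically removed so that effort concentrates on the more promising candidates. The network size, the removal schedule, and the pool size below are illustrative assumptions, not the paper's exact criteria.

```python
# Sketch of simultaneous learning with a removal criterion.
# Assumptions (not from the paper): 8 candidate networks, one
# hidden layer of 4 sigmoid units, full-batch gradient descent on
# XOR, and pruning the highest-error network every 500 epochs.
import numpy as np

rng = np.random.default_rng(0)

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_net(hidden=4):
    """One random starting point: a 2-hidden-1 sigmoid network."""
    return {
        "W1": rng.normal(0.0, 1.0, (2, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 1.0, (hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(net, X):
    h = sigmoid(X @ net["W1"] + net["b1"])
    out = sigmoid(h @ net["W2"] + net["b2"])
    return h, out

def gd_step(net, lr=0.5):
    """One full-batch gradient-descent step on squared error."""
    h, out = forward(net, X)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ net["W2"].T) * h * (1 - h)
    net["W2"] -= lr * h.T @ d_out
    net["b2"] -= lr * d_out.sum(axis=0)
    net["W1"] -= lr * X.T @ d_h
    net["b1"] -= lr * d_h.sum(axis=0)

def error(net):
    _, out = forward(net, X)
    return float(np.mean((out - y) ** 2))

# Simultaneous learning: train the whole pool, prune the worst.
nets = [init_net() for _ in range(8)]
for epoch in range(1, 4001):
    for net in nets:
        gd_step(net)
    # Removal criterion (illustrative): drop the network with the
    # highest training error, keeping at least one candidate alive.
    if epoch % 500 == 0 and len(nets) > 1:
        worst = max(range(len(nets)), key=lambda i: error(nets[i]))
        nets.pop(worst)

best = min(nets, key=error)
print(f"surviving networks: {len(nets)}, best MSE: {error(best):.4f}")
```

Compared with sequentially restarting from random weights, this spreads the same restarts across one run and stops investing in runs that appear headed for a poor local minimum.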

Citation (APA)

Atakulreka, A., & Sutivong, D. (2007). Avoiding local minima in feedforward neural networks by simultaneous learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4830 LNAI, pp. 100–109). Springer Verlag. https://doi.org/10.1007/978-3-540-76928-6_12
