Two frameworks for improving gradient-based learning algorithms

Citations: 3 · Mendeley readers: 6

Abstract

Backpropagation is the most popular algorithm for training neural networks. However, this gradient-based training method is known to suffer from very long training times and convergence to local optima. Various methods have been proposed to alleviate these issues, including, but not limited to, different training algorithms, automatic architecture design, and different transfer functions. In this chapter we continue the exploration into improving gradient-based learning algorithms through dynamic transfer function modification. We propose opposite transfer functions as a means to improve the numerical conditioning of neural networks and extrapolate two backpropagation-based learning algorithms. Our experimental results show an improvement in accuracy and generalization ability on common benchmark functions. The experiments examine the sensitivity of the approach to the learning parameters, the type of transfer function, and the number of neurons in the network. © 2008 Springer-Verlag Berlin Heidelberg.
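As a rough illustration of the idea only (not the chapter's exact formulation), the sketch below treats the opposite of a logistic transfer function as its reflection φ(−x), which for the logistic sigmoid equals 1 − φ(x), and shows a toy layer in which individual neurons can be switched to their opposite transfer function. The per-neuron switching flag is a hypothetical placeholder, not the selection criterion used by the authors.

```python
import numpy as np

def sigmoid(x):
    """Logistic transfer function phi(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def opposite_sigmoid(x):
    """Illustrative 'opposite' transfer function, taken here as the
    reflection phi(-x); for the logistic sigmoid this equals 1 - phi(x).
    (Assumed definition for this sketch.)"""
    return sigmoid(-x)

def layer_output(x, W, b, use_opposite):
    """Toy fully connected layer: each neuron uses either the original
    or the opposite transfer function, chosen by the boolean mask
    `use_opposite` (a hypothetical per-neuron switch)."""
    z = W @ x + b
    return np.where(use_opposite, opposite_sigmoid(z), sigmoid(z))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    W = rng.normal(size=(3, 4))
    b = np.zeros(3)
    use_opposite = np.array([False, True, False])  # per-neuron choice
    print(layer_output(x, W, b, use_opposite))
```

In opposition-based schemes of this kind, swapping a neuron's transfer function for its opposite changes the error surface seen by gradient descent without altering the network's representational capacity, which is the lever the chapter exploits.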

Citation (APA)

Ventresca, M., & Tizhoosh, H. R. (2008). Two frameworks for improving gradient-based learning algorithms. Studies in Computational Intelligence, 155, 255–284. https://doi.org/10.1007/978-3-540-70829-2_12
