Hybrid Learning Algorithms for Feed-Forward Neural Networks

  • Pfister M
  • Rojas R

Abstract

Since the introduction of the backpropagation algorithm as a learning rule for neural networks, much effort has been spent on developing faster alternatives, for example by using adaptively changing learning rates or by exploiting second-order information about the error surface. These optimization strategies are fixed once chosen, so if the heuristic does not fit the actual shape of the error surface, the computed weight changes will be far from optimal.

In this paper we propose two hybrid learning algorithms that dynamically switch between different optimization strategies. The algorithms basically use adaptive step sizes for the weight changes, but adaptively include second-order information when a valley of the error function is reached.

The proposed hybrid algorithms, as well as standard backpropagation and three other known fast learning algorithms, were implemented on a SIMD neurocomputer, the Adaptive Solutions CNAPS, and benchmarked against the Carnegie Mellon benchmarks.
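The switching idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithms: the per-weight sign-based step-size rule, the valley criterion (small gradient norm), and all constants are assumptions chosen for the sake of a runnable toy example.

```python
import numpy as np

def hybrid_minimize(grad, hess, w0, n_steps=1000,
                    up=1.2, down=0.5, eta_max=0.3, valley_tol=0.1):
    # Hedged sketch: per-weight adaptive step sizes (a delta-bar-delta /
    # Rprop-style sign rule), switching to a damped Newton step once the
    # gradient norm suggests a valley of the error surface has been
    # reached. Switching criterion and constants are illustrative.
    w = np.asarray(w0, dtype=float)
    eta = np.full_like(w, 0.01)          # per-weight learning rates
    g_prev = np.zeros_like(w)
    for _ in range(n_steps):
        g = grad(w)
        if np.linalg.norm(g) < valley_tol:
            # second-order phase: damped Newton step
            H = hess(w)
            w = w - np.linalg.solve(H + 1e-8 * np.eye(len(w)), g)
        else:
            # first-order phase: grow a weight's step size while its
            # gradient keeps the same sign, shrink it on a sign flip
            agree = g * g_prev > 0
            eta = np.clip(np.where(agree, eta * up, eta * down),
                          1e-6, eta_max)
            w = w - eta * g
        g_prev = g
    return w

# Toy quadratic error surface E(w) = 0.5 * w^T A w (minimum at w = 0).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
grad = lambda w: A @ w
hess = lambda w: A
w_star = hybrid_minimize(grad, hess, [2.0, -1.5])
```

On this toy surface the adaptive first-order phase brings the weights near the valley floor, after which the Newton step finishes the minimization in essentially one update; the benefit of such a switch is precisely what the benchmarks in the paper measure.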

APA

Pfister, M., & Rojas, R. (1994). Hybrid Learning Algorithms for Feed-Forward Neural Networks (pp. 61–68). https://doi.org/10.1007/978-3-642-79386-8_8
