Speeding Up Back-Propagation Neural Networks

  • M. A. Otair
  • W. A. Salameh

Abstract

There are many successful applications of backpropagation (BP) for training multilayer neural networks. However, the algorithm has several shortcomings: learning often takes a long time to converge, and training may fall into local minima. One possible remedy for escaping local minima is to use a very small learning rate, but this slows down the learning process. The algorithm proposed in this study trains a multilayer neural network with a very small learning rate, especially when using a large training set. It can be applied in a generic manner to any network size that uses a backpropagation algorithm, through an optical time (seen time). The paper describes the proposed algorithm and how it can improve the performance of backpropagation (BP). The feasibility of the proposed algorithm is shown through a number of experiments on different network architectures.
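The paper's own speed-up algorithm is not reproduced in this abstract. As a minimal sketch of the trade-off it describes, the standard delta-rule backpropagation step for a single sigmoid unit (with hypothetical weights and inputs, not taken from the paper) shows how a very small learning rate slows convergence toward the target output:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bp_step(w, x, target, lr):
    """One standard BP (delta-rule) update for a single sigmoid unit."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    out = sigmoid(net)
    # error signal: (target - output) times the sigmoid derivative
    delta = (target - out) * out * (1.0 - out)
    # each weight moves by lr * delta * input
    new_w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
    return new_w, out

def train(lr, steps=200):
    # hypothetical starting weights and a single training pattern
    w = [0.5, -0.5]
    out = 0.0
    for _ in range(steps):
        w, out = bp_step(w, [1.0, 1.0], target=1.0, lr=lr)
    return out

out_small = train(lr=0.01)  # very small learning rate: slow progress
out_large = train(lr=0.5)   # larger learning rate: faster progress
```

After the same number of steps, the run with the very small learning rate remains much farther from the target, illustrating why shrinking the rate to avoid local minima lengthens training.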

Citation (APA)

A. Otair, M., & A. Salameh, W. (2005). Speeding Up Back-Propagation Neural Networks. In Proceedings of the 2005 InSITE Conference. Informing Science Institute. https://doi.org/10.28945/2931
