Abstract
Recently, a new class of learning algorithms has been proposed for training feed-forward neural networks. These algorithms have many advantages over conventional iterative learning algorithms [1]. Unlike iterative learning algorithms, incremental learning algorithms not only adjust interconnection weights but also change the network architecture by adding hidden nodes. This paper examines the capabilities and potential of these incremental learning algorithms. In particular, four different incremental learning algorithms were simulated on a variety of learning tasks. Due to inherent weaknesses of the pocket algorithm used by most incremental learning algorithms, a large number of iterations is needed to generate a weight vector at each node. To improve the performance of the incremental learning algorithms, we propose a new perceptron learning algorithm, the smart algorithm, which finds a near-optimal set of weights at each node. Our simulation results show that the smart algorithm improves the performance of these incremental learning algorithms, and among the four algorithms we examined, the global algorithm performs the best.
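For context, the pocket algorithm criticized above is the standard perceptron rule augmented with a "pocket" that retains the best weight vector seen so far, measured by training-set errors; many iterations may pass before the pocketed weights improve. The sketch below is an illustrative NumPy implementation of the classic pocket algorithm, not the paper's smart algorithm (whose details are not given in this abstract); all function and parameter names are our own.

```python
import numpy as np

def pocket_algorithm(X, y, max_iter=1000, seed=0):
    """Pocket algorithm sketch: run perceptron updates, but keep
    ("pocket") the weight vector that misclassifies the fewest
    training examples so far. Labels y must be +1 / -1."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    pocket_w = w.copy()
    pocket_errors = np.sum(np.sign(Xb @ w) != y)
    for _ in range(max_iter):
        preds = np.sign(Xb @ w)
        preds[preds == 0] = -1           # break ties toward -1
        wrong = np.where(preds != y)[0]
        if wrong.size == 0:
            return w                     # separable case: perfect weights
        i = rng.choice(wrong)            # standard perceptron update
        w = w + y[i] * Xb[i]
        errors = np.sum(np.sign(Xb @ w) != y)
        if errors < pocket_errors:       # pocket the improved weights
            pocket_w, pocket_errors = w.copy(), errors
    return pocket_w
```

Because the pocketed weights only improve when a random update happens to reduce total errors, convergence to near-optimal weights can require many iterations on non-separable data, which is the weakness the abstract attributes to incremental learning algorithms built on this rule.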
Wang, E. H. C., & Kuh, A. (1992). A Smart Algorithm for Incremental Learning. In Proceedings of the International Joint Conference on Neural Networks (Vol. 3, pp. 121–126). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IJCNN.1992.227182