Ever since the first gradient-based algorithm, the backpropagation method proposed by Rumelhart et al., a variety of new training algorithms have emerged to improve different aspects of the learning process for feed-forward neural networks. One of these aspects is learning speed. In this paper, we present a learning algorithm that combines linear least-squares with gradient descent. The theoretical basis for the method is given, and its performance is illustrated by applying it to several examples, comparing it with other learning algorithms on well-known data sets. Results show that the proposed algorithm improves the learning speed of the basic backpropagation algorithm by several orders of magnitude, while maintaining good optimization accuracy. Its performance and low computational cost make it an interesting alternative even to second-order methods, especially when dealing with large networks and training sets. © Springer-Verlag Berlin Heidelberg 2007.
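The abstract describes combining a linear least-squares step with gradient descent. A common way to realize this idea in a one-hidden-layer network is to solve for the output-layer weights in closed form (by linearizing the output nonlinearity, i.e., applying the logit to the targets) while updating the hidden-layer weights by ordinary gradient descent. The sketch below illustrates that general scheme in NumPy; it is an assumption-laden illustration, not the authors' exact algorithm, and all function and variable names are hypothetical.

```python
import numpy as np

def train_semilinear(X, y, n_hidden=8, epochs=200, lr=0.5, seed=0):
    """Hybrid training sketch (illustrative, not the paper's exact method):
    - output weights: linear least-squares on logit-transformed targets
    - hidden weights: one gradient-descent step per epoch
    X: (n_samples, n_in); y: (n_samples, 1) with targets strictly in (0, 1).
    """
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in + 1, n_hidden))  # hidden weights (+bias)
    W2 = rng.normal(scale=0.5, size=(n_hidden + 1, 1))     # output weights (+bias)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])          # append bias column

    eps = 1e-6
    yc = np.clip(y, eps, 1 - eps)
    y_lin = np.log(yc / (1 - yc))  # logit of targets: linearized desired output

    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(Xb @ W1)))               # hidden activations
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])
        # Linear least-squares step: solve Hb @ W2 ≈ logit(y) for output weights.
        W2, *_ = np.linalg.lstsq(Hb, y_lin, rcond=None)
        # Gradient-descent step on the hidden-layer weights.
        out = 1.0 / (1.0 + np.exp(-(Hb @ W2)))
        delta_out = (out - y) * out * (1 - out)            # output delta (MSE loss)
        delta_hid = (delta_out @ W2[:-1].T) * H * (1 - H)  # backpropagated delta
        W1 -= lr * (Xb.T @ delta_hid)
    return W1, W2
```

The closed-form solve replaces many gradient steps for the output layer with a single well-conditioned least-squares problem per epoch, which is where this family of methods gains its speed over plain backpropagation.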
CITATION STYLE
Guijarro-Berdiñas, B., Fontenla-Romero, O., Pérez-Sánchez, B., & Fragüela, P. (2007). A fast semi-linear backpropagation learning algorithm. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4668 LNCS, pp. 190–198). Springer Verlag. https://doi.org/10.1007/978-3-540-74690-4_20