Concerned with neural learning without backpropagation, we investigate variants of the simultaneous perturbation stochastic approximation (SPSA) algorithm. Experimental results suggest that these allow for the successful training of deep feed-forward neural networks using forward passes only. In particular, we find that SPSA-based algorithms which update network parameters in a layer-wise manner are superior to variants which update all weights simultaneously.
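The core idea behind SPSA can be sketched as follows: the gradient of the loss is estimated from just two forward passes at randomly perturbed parameter vectors, with no backward pass. The sketch below is a minimal illustration of vanilla two-sided SPSA, not necessarily the exact variant evaluated in the paper; the names `spsa_step` and `loss_fn` and the constants `c` (perturbation size) and `a` (step size) are illustrative placeholders. The layer-wise variants the authors favor would apply such an update to one layer's parameters at a time rather than to the full parameter vector.

```python
import numpy as np

def spsa_step(theta, loss_fn, c=0.01, a=0.1, rng=None):
    """One SPSA update: estimate the gradient from two loss
    evaluations (forward passes only) and take a descent step."""
    rng = np.random.default_rng() if rng is None else rng
    # Rademacher perturbation: each entry is +1 or -1
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two-sided loss evaluation at perturbed parameter vectors
    loss_plus = loss_fn(theta + c * delta)
    loss_minus = loss_fn(theta - c * delta)
    # Simultaneous-perturbation gradient estimate;
    # 1/delta_i == delta_i since delta_i is +/-1
    g_hat = (loss_plus - loss_minus) / (2.0 * c) * delta
    return theta - a * g_hat

# Toy usage: minimise a simple quadratic without any gradients
loss = lambda t: float(np.sum(t ** 2))
rng = np.random.default_rng(0)
theta = np.ones(5)
for _ in range(200):
    theta = spsa_step(theta, loss, rng=rng)
```

Note that every coordinate is perturbed simultaneously, so the cost per step is two function evaluations regardless of the parameter dimension; this is what makes the method attractive for forward-pass-only training of networks.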
Citation
Wulff, B., Schuecker, J., & Bauckhage, C. (2018). SPSA for layer-wise training of deep networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11141 LNCS, pp. 564–573). Springer Verlag. https://doi.org/10.1007/978-3-030-01424-7_55