SPSA for layer-wise training of deep networks


Abstract

Concerned with neural learning without backpropagation, we investigate variants of the simultaneous perturbation stochastic approximation (SPSA) algorithm. Experimental results suggest that these allow for the successful training of deep feed-forward neural networks using forward passes only. In particular, we find that SPSA-based algorithms which update network parameters in a layer-wise manner are superior to variants which update all weights simultaneously.
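As a rough illustration of the idea, the classic two-sided SPSA estimator needs only two forward passes per update: perturb the parameters with a random ±1 (Rademacher) vector, evaluate the loss at both perturbed points, and divide the difference by the perturbation. The sketch below is a minimal, generic version of that estimator together with a layer-wise update loop in the spirit of the abstract; it is not the authors' exact algorithm, and the function names, step sizes, and loss interface are assumptions for illustration.

```python
import numpy as np

def spsa_gradient(loss_fn, theta, c=1e-2, rng=None):
    # Two-sided SPSA estimate: (L(theta + c*delta) - L(theta - c*delta)) / (2*c*delta),
    # with delta drawn from a Rademacher (+/-1) distribution.
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = loss_fn(theta + c * delta) - loss_fn(theta - c * delta)
    # For +/-1 entries, 1/delta == delta, so elementwise multiply suffices.
    return diff / (2.0 * c) * delta

def layerwise_spsa_step(layers, loss_fn, lr=1e-3, c=1e-2, rng=None):
    # Layer-wise variant: perturb and update one layer's weights at a time,
    # holding all other layers fixed (hypothetical interface: `layers` is a
    # list of weight arrays, `loss_fn` maps such a list to a scalar loss).
    for i in range(len(layers)):
        def loss_of_layer(w, i=i):
            trial = [w if j == i else layers[j] for j in range(len(layers))]
            return loss_fn(trial)
        g = spsa_gradient(loss_of_layer, layers[i], c=c, rng=rng)
        layers[i] = layers[i] - lr * g
    return layers
```

Note that only forward evaluations of the loss are used: two per layer per step for the layer-wise variant, versus two total for the all-at-once variant, trading extra forward passes for per-layer gradient estimates.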

Citation (APA)

Wulff, B., Schuecker, J., & Bauckhage, C. (2018). SPSA for layer-wise training of deep networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11141 LNCS, pp. 564–573). Springer Verlag. https://doi.org/10.1007/978-3-030-01424-7_55
