On Symmetry and Initialization for Neural Networks

Abstract

This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We empirically verify this and show that this does not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.
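For illustration only, the sketch below trains a one-hidden-layer network with plain SGD on a symmetric Boolean function (majority of the input bits). The symmetry-respecting initialization shown here, in which every hidden unit has identical input weights and a bias equal to a threshold on the number of ones, is an assumption for the example and may differ in detail from the construction analyzed in the paper; the dimensions, learning rate, and step count are likewise illustrative.

```python
# Minimal sketch (not the paper's exact construction): one hidden layer,
# plain SGD, symmetric target (majority). Hidden unit k is initialized to
# compute ReLU(#ones(x) - k), so the hidden layer depends only on the
# number of ones in the input -- an assumed symmetry-respecting start.
import numpy as np

rng = np.random.default_rng(0)

n = 11            # number of Boolean inputs (illustrative)
hidden = n + 1    # one hidden unit per possible count of ones (assumption)
lr = 0.05
steps = 20000

def majority(x):
    # Symmetric target with labels in {-1, +1}.
    return np.where(x.sum(axis=-1) > n / 2, 1.0, -1.0)

# Symmetry-respecting initialization of the hidden layer.
W1 = np.ones((hidden, n))
b1 = -np.arange(hidden, dtype=float)
w2 = rng.normal(scale=0.1, size=hidden)   # small random output layer
b2 = 0.0

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)      # hidden ReLU activations
    return w2 @ h + b2, h

# Plain SGD on the squared loss, one random example per step.
for _ in range(steps):
    x = rng.integers(0, 2, size=n).astype(float)
    y = majority(x)
    pred, h = forward(x)
    err = pred - y
    mask = (h > 0).astype(float)
    # Gradients of 0.5 * err**2 with respect to both layers.
    w2 -= lr * err * h
    b2 -= lr * err
    W1 -= lr * err * np.outer(w2 * mask, x)
    b1 -= lr * err * (w2 * mask)

# Evaluate on fresh random inputs.
X = rng.integers(0, 2, size=(2000, n)).astype(float)
H = np.maximum(X @ W1.T + b1, 0.0)
preds = np.where(H @ w2 + b2 > 0, 1.0, -1.0)
acc = (preds == majority(X)).mean()
print(f"test accuracy on random inputs: {acc:.3f}")
```

Replacing the structured `W1`, `b1` above with small random Gaussian values gives the random-initialization baseline that the abstract contrasts against.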

Citation (APA)

Nachum, I., & Yehudayoff, A. (2020). On Symmetry and Initialization for Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12118 LNCS, pp. 401–412). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61792-9_32
