Is the skip connection provable to reform the neural network loss landscape?

Abstract

The residual network is now one of the most effective structures in deep learning; it uses skip connections to "guarantee" that performance will not degrade. However, the non-convexity of neural networks makes it unclear whether skip connections provably improve learnability, since the nonlinearity may create many local minima. Previous work [Freeman and Bruna, 2016] showed that, despite the non-convexity, the loss landscape of the two-layer ReLU network has good properties when the number m of hidden nodes is very large. In this paper, we follow this line to study the topology (sub-level sets) of the loss landscape of deep ReLU neural networks with a skip connection. We theoretically prove that the skip-connection network inherits the good properties of the two-layer network, and that skip connections help control the connectedness of the sub-level sets, so that any local minimum worse than the global minimum of some two-layer ReLU network is very "shallow": the "depth" of such local minima is at most O(m^((η−1)/n)), where n is the input dimension and η < 1. This provides a theoretical explanation for the effectiveness of the skip connection in deep learning.
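To make the setting concrete, below is a minimal NumPy sketch of one plausible reading of the architecture the abstract describes: a deep ReLU stack whose output is combined, via a skip connection, with a wide two-layer ReLU branch of width m. All function names, shapes, and the exact placement of the skip path are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def deep_branch(x, weights):
    """Plain deep ReLU stack: h <- ReLU(W h) for each layer."""
    h = x
    for W in weights:
        h = relu(W @ h)
    return h

def skip_network(x, deep_weights, W_skip, v_skip, w_out):
    """Deep ReLU network with a skip connection (illustrative).

    The skip path is a wide two-layer ReLU branch (m hidden units)
    whose output is added to the deep branch's output, so the network
    can always fall back on the two-layer predictor -- one intuition
    behind "performance will not get worse".
    """
    deep_out = w_out @ deep_branch(x, deep_weights)   # deep path
    skip_out = v_skip @ relu(W_skip @ x)              # two-layer skip path
    return deep_out + skip_out

# Illustrative shapes: input dimension n, skip-branch width m.
n, m = 8, 1024
rng = np.random.default_rng(0)
deep_weights = [rng.standard_normal((16, n))] + \
               [rng.standard_normal((16, 16)) for _ in range(3)]
w_out = rng.standard_normal((1, 16))
W_skip = rng.standard_normal((m, n))
v_skip = rng.standard_normal((1, m))

x = rng.standard_normal(n)
print(skip_network(x, deep_weights, W_skip, v_skip, w_out))
```

Note also that since η < 1, the exponent (η−1)/n in the bound O(m^((η−1)/n)) is negative, so the guaranteed "depth" of any bad local minimum shrinks toward zero as the width m of the two-layer branch grows.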


CITATION STYLE

APA

Wang, L., Shen, B., Zhao, N., & Zhang, Z. (2020). Is the skip connection provable to reform the neural network loss landscape? In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2792–2798). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/387
