Gaussian Perturbations in ReLU Networks and the Arrangement of Activation Regions


Abstract

Recent work shows that deep neural networks are effective models for a wide range of learning problems. However, they are often highly sensitive to perturbations that are imperceptible to an independent observer. Since traditional generalisation bounds leave our understanding of deep neural networks incomplete, several measures have been proposed that capture a model's behaviour under small changes around a specific state. In this paper we consider Gaussian perturbations in the tangent space and propose tangent sensitivity as a way to characterise the stability of gradient updates. We focus on a particular kind of stability: stability with respect to parameter changes induced by individual examples without known labels. We derive several easily computable bounds and empirical measures for feed-forward, fully connected ReLU (Rectified Linear Unit) networks and connect tangent sensitivity to the distribution of the activation regions that the network realises in the input space.
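The abstract does not spell out the definition, but one plausible reading of tangent sensitivity is the following: for a single unlabelled example, take the per-example gradient of the network output with respect to all parameters (the "tangent" of the update) and measure how much it moves under a small Gaussian perturbation of the input. The sketch below illustrates that reading for a fully connected ReLU network in JAX; the paper's exact definition may differ, and all names here (tangent_sensitivity, the hidden sizes, sigma) are illustrative assumptions, not the author's implementation.

# A minimal sketch, assuming tangent sensitivity is approximated as the
# mean squared displacement of the per-example parameter gradient under
# Gaussian input noise, normalised by the noise variance. Illustrative
# only; not the paper's construction.
import jax
import jax.numpy as jnp

def init_params(key, sizes):
    # He-style initialisation for a fully connected ReLU network.
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (dout, din)) * jnp.sqrt(2.0 / din)
        params.append((w, jnp.zeros(dout)))
    return params

def mlp(params, x):
    # Feed-forward fully connected ReLU network with a scalar output.
    for w, b in params[:-1]:
        x = jax.nn.relu(w @ x + b)
    w, b = params[-1]
    return (w @ x + b)[0]

def tangent(params, x):
    # Per-example tangent: gradient of the output with respect to all
    # parameters, flattened into a single vector.
    grads = jax.grad(mlp)(params, x)
    return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])

def tangent_sensitivity(params, x, key, sigma=1e-2, n_samples=16):
    # Monte Carlo estimate: average squared change of the tangent under
    # Gaussian perturbations of the input, divided by sigma^2.
    t0 = tangent(params, x)
    noise = sigma * jax.random.normal(key, (n_samples, x.shape[0]))
    deltas = jax.vmap(lambda e: tangent(params, x + e) - t0)(noise)
    return jnp.mean(jnp.sum(deltas**2, axis=-1)) / sigma**2

key = jax.random.PRNGKey(0)
params = init_params(key, [8, 32, 32, 1])
x = jax.random.normal(jax.random.PRNGKey(1), (8,))
print(tangent_sensitivity(params, x, jax.random.PRNGKey(2)))

Because a ReLU network is piecewise linear, the tangent is constant inside each activation region of the input space, so a quantity of this form is naturally tied to how densely activation regions are packed around the example, which is consistent with the connection the abstract draws.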

Citation (APA)

Daróczy, B. (2022). Gaussian Perturbations in ReLU Networks and the Arrangement of Activation Regions. Mathematics, 10(7). https://doi.org/10.3390/math10071123
