Probing a Deep Neural Network


Abstract

We report a number of experiments on a deep convolutional network, aimed at better understanding the transformations that emerge from learning at its various layers. We analyze the backward flow and the reconstructed images using an adaptive masking approach, in which pooling and nonlinearities at the various layers are represented by data-dependent binary masks. We focus on the field of view of specific neurons, also using random parameters, in order to understand the nature of the information that flows through the activation "holes" that emerge in the multi-layer structure when an image is presented at the input. We show that the peculiarity of the multi-layer structure lies not so much in the learned parameters as in the patterns of connectivity, which are partly imposed and then learned. Furthermore, a deep network appears to focus on statistics, such as gradient-like transformations, rather than on filters matched to image patterns. Our probes seem to explain why classical image-processing algorithms, such as the well-known SIFT, have provided robust, although limited, solutions to image recognition tasks.
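To make the masking idea concrete, here is a minimal sketch (not the authors' code) of how a ReLU nonlinearity can be represented by a data-dependent binary mask: for a given input, the nonlinearity is equivalent to elementwise multiplication by a 0/1 mask fixed by that input, and the backward flow reuses the same mask. Max pooling admits an analogous one-hot mask that selects the winning location in each pooling window.

```python
import numpy as np

def relu_forward(x):
    """ReLU expressed via its binary mask: y = mask * x.

    The mask is data-dependent: it is 1 where the input is positive
    and 0 elsewhere, so the layer acts linearly once the mask is fixed.
    """
    mask = (x > 0).astype(x.dtype)
    return mask * x, mask

def relu_backward(grad_out, mask):
    """Backward flow through the same data-dependent mask."""
    return grad_out * mask

# Toy example: a 2x2 activation map.
x = np.array([[-1.0, 2.0],
              [ 3.0, -4.0]])
y, mask = relu_forward(x)          # y = [[0, 2], [3, 0]]
g = relu_backward(np.ones_like(x), mask)  # gradient passes only through the "holes"
```

Once the masks for every layer are recorded on a forward pass, the whole network becomes linear in the input, which is what makes the kind of backward reconstruction analysis described in the abstract tractable.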

Citation (APA)

Palmieri, F. A. N., Baldi, M., Buonanno, A., Di Gennaro, G., & Ospedale, F. (2020). Probing a Deep Neural Network. In Smart Innovation, Systems and Technologies (Vol. 151, pp. 201–211). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-13-8950-4_19
