We present a ‘CLAssifier-DECoder’ architecture (ClaDec), which facilitates the comprehension of the output of an arbitrary layer in a neural network (NN). It uses a decoder to transform the non-interpretable representation of the given layer into a representation closer to the domain a human is familiar with. In an image recognition problem, one can recognize what information a layer represents by contrasting reconstructed images from ClaDec with those of a conventional auto-encoder (AE) serving as reference. We also extend ClaDec to allow trading off human interpretability against fidelity. We evaluate our approach for image classification using convolutional NNs. We show that reconstructed visualizations using encodings from a classifier capture more information relevant for classification than conventional AEs. Relevant code is available at https://github.com/JohnTailor/ClaDec.
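To make the idea concrete, below is a minimal sketch of the ClaDec setup, assuming a PyTorch-style implementation with hypothetical module names (it is not the authors' reference code): a frozen, pre-trained classifier provides the activations of the layer to be explained, a decoder is trained to reconstruct the input from those activations, and a conventional AE trained end-to-end would serve as the reference for comparison. The interpretability/fidelity trade-off is sketched as a blend of reconstruction loss and classification loss on the reconstruction, weighted by an assumed parameter alpha.

    import torch
    import torch.nn as nn

    class Decoder(nn.Module):
        """Hypothetical decoder: maps a flat layer activation back to a 1x28x28 image."""
        def __init__(self, latent_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 128 * 7 * 7),
                nn.ReLU(),
                nn.Unflatten(1, (128, 7, 7)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 7x7 -> 14x14
                nn.ReLU(),
                nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 14x14 -> 28x28
                nn.Sigmoid(),
            )

        def forward(self, z):
            return self.net(z)

    def cladec_loss(decoder, classifier_features, classifier_head, x, y, alpha=0.0):
        """Reconstruction loss, optionally blended with a classification term.

        alpha = 0 trains the decoder purely on reconstruction of the input from
        the explained layer's activations; alpha > 0 additionally requires the
        reconstruction to be classified like the original input, sketching the
        interpretability/fidelity trade-off described in the abstract.
        """
        with torch.no_grad():             # the classifier stays frozen
            z = classifier_features(x)    # activations of the layer being explained
        x_hat = decoder(z)
        rec = nn.functional.mse_loss(x_hat, x)
        if alpha == 0.0:
            return rec
        logits = classifier_head(classifier_features(x_hat))
        cls = nn.functional.cross_entropy(logits, y)
        return (1 - alpha) * rec + alpha * cls

A reference AE would reuse the same decoder architecture but train its encoder jointly on reconstruction only, so that differences in the reconstructed images can be attributed to what the classifier's layer retains or discards.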
CITATION STYLE
Schneider, J., & Vlachos, M. (2021). Explaining Neural Networks by Decoding Layer Activations. In Lecture Notes in Computer Science, vol. 12695, pp. 63–75. Springer. https://doi.org/10.1007/978-3-030-74251-5_6