Abstract
This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but it is sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks to make the most efficient use of it on real data.
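Since the abstract only names LRP, a minimal sketch of the LRP-ε propagation rule for a single dense layer may help fix ideas. The NumPy function below (the name lrp_epsilon, the eps value, and the toy arrays are illustrative assumptions, not code taken from the paper) redistributes relevance from one layer to the layer below it.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_upper, eps=1e-2):
    """Sketch of the LRP-epsilon rule for a dense layer z = a @ W + b:
        R_j = sum_k  a_j * W[j, k] / (z_k + eps * sign(z_k)) * R_k
    Redistributes the upper-layer relevance R_upper onto the lower-layer
    activations a. Illustrative only, not the authors' reference code."""
    z = a @ W + b                               # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilize small denominators
    s = R_upper / z                             # relevance per unit of activation
    c = s @ W.T                                 # back-propagate through the weights
    return a * c                                # relevance of the lower-layer neurons

# Toy usage: two inputs, three hidden units; relevance is approximately conserved.
a = np.array([1.0, 2.0])
W = np.array([[0.5, -0.3, 0.8],
              [0.1,  0.7, -0.2]])
b = np.zeros(3)
R_hidden = np.array([0.2, 0.5, 0.3])
print(lrp_epsilon(a, W, b, R_hidden))
```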