Methods for interpreting and understanding deep neural networks


Abstract

This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but it is sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks to make the most efficient use of it on real data.
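To give a concrete flavor of the LRP technique mentioned in the abstract, the sketch below implements the basic epsilon-stabilized relevance propagation rule on a tiny two-layer ReLU network. The network, its random weights, and the helper name `lrp_eps` are illustrative assumptions for this sketch, not taken from the paper; the paper itself discusses several propagation rules and practical recommendations beyond this minimal version.

```python
import numpy as np

# Minimal sketch of the LRP-epsilon rule on a tiny two-layer ReLU network.
# All weights and inputs are random placeholders (assumptions for illustration).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # input -> hidden weights
W2 = rng.standard_normal((3, 2))   # hidden -> output weights
x = rng.standard_normal(4)

# Forward pass (biases omitted for brevity)
a1 = np.maximum(0, x @ W1)         # hidden activations (ReLU)
out = a1 @ W2                      # output scores (linear)

def lrp_eps(a, W, R, eps=1e-6):
    """Redistribute relevance R from the upper layer onto activations a.

    Implements R_j = sum_k (a_j * w_jk / (z_k + eps * sign(z_k))) * R_k,
    where z_k = sum_j a_j * w_jk is the pre-activation of the upper layer.
    """
    z = a @ W                                        # upper-layer pre-activations
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return a * (W @ s)                               # lower-layer relevance

# Explain the top-scoring output: its score is the total relevance.
R_out = np.zeros_like(out)
R_out[np.argmax(out)] = out.max()
R_hidden = lrp_eps(a1, W2, R_out)
R_input = lrp_eps(x, W1, R_hidden)  # in practice the first layer often uses a dedicated rule

print(R_input)  # per-input relevance scores, summing approximately to the output score
```

For small epsilon, the relevance is approximately conserved from layer to layer, so the input relevances sum to roughly the explained output score; larger epsilon absorbs more relevance and yields sparser, less noisy explanations.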

Citation (APA)

Montavon, G., Samek, W., & Müller, K. R. (2018, February 1). Methods for interpreting and understanding deep neural networks. Digital Signal Processing: A Review Journal. Elsevier Inc. https://doi.org/10.1016/j.dsp.2017.10.011
