Gradient-Based Vs. Propagation-Based Explanations: An Axiomatic Comparison

Abstract

Deep neural networks, once considered inscrutable black boxes, are now supplemented with techniques that can explain how these models arrive at their decisions. This raises the question of whether the produced explanations are reliable. In this chapter, we consider two popular explanation techniques, one based on gradient computation and one based on a propagation mechanism. We evaluate them against three “axiomatic” properties: conservation, continuity, and implementation invariance. These properties are tested not only on the overall explanation but also at intermediate layers, where our analysis provides further insight into how the explanation is formed.
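
As a concrete illustration (not taken from the chapter itself), the sketch below assumes Gradient x Input as a representative gradient-based technique: it attributes the output of a small bias-free ReLU network, written in plain NumPy with made-up weights, back to its inputs, and then checks the conservation property, i.e. that the attributions sum to the output score. A propagation-based counterpart (e.g. layer-wise relevance propagation) is not shown here.

```python
# A minimal sketch, assuming Gradient x Input as the gradient-based technique:
# attribute the output of a small bias-free ReLU network to its inputs, then
# check the conservation property (attributions summing to the output score).
# Weights and input are random placeholders, not values from the chapter.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))   # first-layer weights (hidden x input)
w2 = rng.normal(size=4)        # second-layer weights (scalar output)

def forward(x):
    a = np.maximum(0.0, W1 @ x)    # hidden ReLU activations
    return a, float(w2 @ a)        # (hidden activations, scalar output)

def input_gradient(x):
    a, _ = forward(x)
    gate = (a > 0).astype(float)   # ReLU derivative: 1 where active, else 0
    return W1.T @ (w2 * gate)      # chain rule back to the input

x = rng.normal(size=3)
_, y = forward(x)
relevance = input_gradient(x) * x  # Gradient x Input attribution

# Conservation check: for bias-free ReLU networks, f(x) = x . grad f(x),
# so the attributions should sum exactly to the output score.
print(f"output = {y:.6f}, sum of attributions = {relevance.sum():.6f}")
```

The exact match in this toy case relies on the network being piecewise linear with no biases; with biases or other nonlinearities, the sum of Gradient x Input attributions generally deviates from the output score, which is where an axiomatic comparison of techniques becomes informative.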

Citation (APA)

Montavon, G. (2019). Gradient-Based Vs. Propagation-Based Explanations: An Axiomatic Comparison. In Lecture Notes in Computer Science (Vol. 11700, pp. 253–265). Springer. https://doi.org/10.1007/978-3-030-28954-6_13
