Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation

Abstract

In this paper, an enhancement technique for class activation mapping methods, such as gradient-weighted class activation maps or excitation backpropagation, is proposed to present visual explanations of decisions from convolutional neural network-based models. The proposed idea, called Gradual Extrapolation, can supplement any method that generates a heatmap picture by sharpening its output. Instead of producing a coarse localization map that highlights the important predictive regions in the image, the proposed method outputs the specific shape that contributes most to the model output. Thus, the proposed method improves the accuracy of saliency maps. The effect is achieved by gradually propagating the crude map obtained in a deep layer through all preceding layers with respect to their activations. In validation tests conducted on a selected set of images, the faithfulness, interpretability, and applicability of the method are evaluated. The proposed technique significantly improves localization of the neural network's attention at low additional computational cost. Furthermore, the proposed method is applicable to a variety of deep neural network models. The code for the method can be found at https://github.com/szandala/gradual-extrapolation
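The core idea described above — propagating a crude deep-layer heatmap back through earlier, higher-resolution activation maps — can be sketched as follows. This is a minimal NumPy illustration of the general scheme, not the paper's implementation (see the linked repository for that); the function name, the channel-summing step, and the nearest-neighbour upsampling with integer scale factors are all simplifying assumptions.

```python
import numpy as np

def gradual_extrapolation(coarse_map, activations):
    """Sharpen a coarse saliency map (e.g. from Grad-CAM) by
    propagating it backward through successively earlier,
    higher-resolution activation maps.

    coarse_map:  2-D array (H0 x W0), the crude deep-layer heatmap.
    activations: list of 3-D arrays (C x H x W), ordered from the
                 layer just before the one that produced coarse_map
                 back toward the input, with resolution increasing.
                 (Hypothetical interface, for illustration only.)
    """
    heat = coarse_map.astype(float)
    for act in activations:
        # Collapse channels into one spatial activation-strength map.
        strength = act.sum(axis=0)
        h, w = strength.shape
        # Nearest-neighbour upsample of the current heatmap to this
        # layer's resolution (assumes integer scale factors).
        sy, sx = h // heat.shape[0], w // heat.shape[1]
        heat = np.kron(heat, np.ones((sy, sx)))
        # Modulate by the layer's activations, then renormalize.
        heat = heat * strength
        m = heat.max()
        if m > 0:
            heat = heat / m
    return heat
```

Each pass doubles (or more) the spatial resolution of the heatmap while the element-wise product with the layer's activations suppresses regions the network did not actually respond to, which is what turns the coarse blob into a shape-level saliency map.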

Citation (APA)

Szandala, T. (2021). Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation. IEEE Access, 9, 95155–95161. https://doi.org/10.1109/ACCESS.2021.3093824
