Visual explanations from spiking neural networks using inter-spike intervals

36 citations · 89 Mendeley readers

This article is free to access.

Abstract

By emulating biological features of the brain, Spiking Neural Networks (SNNs) offer an energy-efficient alternative to conventional deep learning. To make SNNs ubiquitous, a ‘visual explanation’ technique for analysing and explaining the internal spike behaviour of such temporal deep SNNs is crucial. Explaining SNNs visually makes the network more transparent, giving the end-user a tool to understand how SNNs make temporal predictions and why they reach a particular decision. In this paper, we propose a bio-plausible visual explanation tool for SNNs, called Spike Activation Map (SAM). SAM yields a heatmap (i.e., localization map) for each time-step of the input by highlighting neurons with short inter-spike-interval activity. Interestingly, without using gradients or ground truth, SAM produces a temporal localization map highlighting the region of interest in an image attributed to the SNN’s prediction at each time-step. Overall, SAM marks the beginning of a new research area, ‘explainable neuromorphic computing’, that will ultimately allow end-users to establish appropriate trust in predictions from SNNs.
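The core idea described in the abstract — a per-time-step heatmap that weights neurons by how recently and densely they spiked, with no gradients involved — can be illustrated by a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's exact formulation: the exponential decay constant `gamma` and the function name `spike_activation_map` are hypothetical choices made here for clarity.

```python
import numpy as np

def spike_activation_map(spikes, gamma=0.5):
    """Sketch of a SAM-style temporal localization map.

    spikes: binary array of shape (T, C, H, W) — spike trains of one
    layer over T time-steps. Neurons that fired recently (i.e. with
    short inter-spike intervals) contribute more, via an exponential
    kernel exp(-gamma * (t - t_s)) over past spike times t_s; gamma
    is an assumed decay constant, not a value from the paper.
    Returns heatmaps of shape (T, H, W), one localization map per step.
    """
    T, C, H, W = spikes.shape
    score = np.zeros((C, H, W))       # per-neuron contribution score
    heatmaps = np.zeros((T, H, W))
    for t in range(T):
        # decay earlier contributions, then add this step's spikes;
        # this recursion equals the sum of exp(-gamma*(t - t_s))
        score = score * np.exp(-gamma) + spikes[t]
        # channel-wise sum gives the spatial heatmap at time t
        heatmaps[t] = score.sum(axis=0)
    return heatmaps

# Usage: random spike trains for a 4-channel 8x8 layer over 10 steps.
rng = np.random.default_rng(0)
spikes = (rng.random((10, 4, 8, 8)) < 0.2).astype(float)
maps = spike_activation_map(spikes)
```

Note the design choice: because the kernel decays with elapsed time, a neuron firing in rapid succession (short inter-spike intervals) accumulates a higher score than one firing sparsely, which is exactly the behaviour the abstract attributes to SAM.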

Citation (APA)

Kim, Y., & Panda, P. (2021). Visual explanations from spiking neural networks using inter-spike intervals. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-98448-0
