MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis

Citations: 0 · Mendeley readers: 24

Abstract

Interpretability has emerged as a crucial aspect of building trust in machine learning systems, aimed at providing insights into the workings of complex neural networks that are otherwise opaque to a user. A plethora of existing solutions address various aspects of interpretability, ranging from identifying prototypical samples in a dataset to explaining image predictions or explaining misclassifications. While these diverse techniques address seemingly different aspects of interpretability, we hypothesize that a large family of interpretability tasks are variants of the same central problem: identifying relative change in a model's prediction. This paper introduces MARGIN, a simple yet general approach to address a large set of interpretability tasks. MARGIN exploits ideas rooted in graph signal analysis to determine influential nodes in a graph, which are defined as those nodes that maximally describe a function defined on the graph. By carefully defining task-specific graphs and functions, we demonstrate that MARGIN outperforms existing approaches in a number of disparate interpretability challenges.
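The core idea of the abstract, scoring nodes by how strongly they explain a function defined on a graph, can be illustrated with a minimal sketch. Here influence is approximated as the magnitude of the high-pass filtered signal `L @ f`, where `L` is the combinatorial graph Laplacian; nodes where the signal changes sharply relative to their neighbors score highest. This is an assumption-laden illustration of the graph-signal-analysis idea, not the paper's exact estimator, and the function names (`margin_influence`) are hypothetical.

```python
import numpy as np

def margin_influence(W, f):
    """Score each node's influence for a graph signal f.

    Hedged sketch: influence is taken as |L @ f|, the Laplacian
    (a high-pass graph filter) applied to the signal, so nodes
    where f varies sharply with respect to neighbors score
    highest. Not the paper's exact method.
    """
    d = W.sum(axis=1)
    L = np.diag(d) - W  # combinatorial graph Laplacian L = D - W
    return np.abs(L @ f)

# Toy example: a 4-node path graph 0-1-2-3 with a signal that
# jumps between nodes 1 and 2.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = np.array([0.0, 0.0, 1.0, 1.0])
scores = margin_influence(W, f)
# nodes 1 and 2, which sit on either side of the jump, receive
# the highest influence scores
```

In a real interpretability task the graph would be built over samples (e.g., a k-nearest-neighbor graph in a feature space) and `f` would encode a task-specific quantity such as prediction confidence.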

Citation (APA)

Anirudh, R., Thiagarajan, J. J., Sridhar, R., & Bremer, P. T. (2021). MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis. Frontiers in Big Data, 4. https://doi.org/10.3389/fdata.2021.589417
