Chain Graph Explanation of Neural Network Based on Feature-Level Class Confusion

Abstract

Despite increasing interest in developing interpretable machine learning methods, most recent studies provide explanations only for single instances, require additional datasets, or are sensitive to hyperparameters. This paper proposes a confusion graph that reveals model weaknesses by constructing a confusion dictionary. Unlike other methods, which focus on the performance variation caused by suppressing single neurons, it defines the role of each neuron from two perspectives: ‘correction’ and ‘violation.’ Furthermore, our method can identify class relationships at similar positions in the feature level, which can suggest improvements to the model. Finally, the proposed graph construction is model-agnostic and requires neither additional data nor tedious hyperparameter tuning. Experimental results show that the information loss from omitting the channels guided by the proposed graph can cause severe performance degradation, from 91% to 33% accuracy, even though the graph retains only 1% of the total neurons.
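The channel-omission experiment mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the `suppress_channels` helper below is a hypothetical, generic example of zeroing selected channels in an intermediate feature map, the kind of ablation one would run before re-measuring accuracy to quantify the degradation.

```python
import numpy as np

def suppress_channels(features, channel_ids):
    """Zero out the given channels of an (N, C, H, W) feature map.

    Mimics a channel-omission ablation: the suppressed channels carry
    no information downstream, so the drop in accuracy after this step
    indicates how much the model relied on them.
    """
    ablated = features.copy()
    ablated[:, channel_ids, :, :] = 0.0
    return ablated

# Toy example: 2 samples, 4 channels, 3x3 spatial maps.
rng = np.random.default_rng(0)
feats = rng.normal(size=(2, 4, 3, 3))
ablated = suppress_channels(feats, [1, 3])  # suppress channels 1 and 3
```

In the paper's setting, the channels to suppress would be those highlighted by the proposed confusion graph rather than an arbitrary list as shown here.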

Citation (APA)
Hwang, H., Park, E., & Shin, J. (2022). Chain Graph Explanation of Neural Network Based on Feature-Level Class Confusion. Applied Sciences (Switzerland), 12(3). https://doi.org/10.3390/app12031523
