Neural QBAFs: Explaining Neural Networks Under LRP-Based Argumentation Frameworks

Abstract

In recent years, there have been many attempts to combine explainable AI (XAI) with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better aligned with human reasoning; one prominent candidate for this synergy is the sub-field of computational argumentation. One such method represents neural networks as quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics, and the resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel semantics based on layer-wise relevance propagation (LRP) under a new QBAF variant, namely neural QBAFs (nQBAFs). Since the nQBAF of a neural network is typically large, it must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is crucial to the quality of the resulting explanation.
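For readers unfamiliar with the two ingredients, the sketch below illustrates one plausible way LRP scores can induce a bipolar argumentation graph over a network: neurons become arguments, and a connection becomes a support when its per-edge relevance is positive and an attack when it is negative. This is an illustrative reading of the idea, not the authors' implementation; the toy network, the lrp_epsilon helper, and the edge-sign convention are all assumptions, while the LRP-epsilon rule itself is the standard one from Bach et al. (2015).

```python
# Minimal sketch: derive QBAF-style attacks/supports from LRP relevance
# scores on a tiny MLP. Purely illustrative; not the paper's construction.

import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: 3 inputs -> 4 hidden (ReLU) -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

x = np.array([0.5, -1.0, 2.0])
a1 = np.maximum(0.0, x @ W1 + b1)   # hidden activations
out = a1 @ W2 + b2                  # network output

def lrp_epsilon(a_prev, W, R_next, eps=1e-6):
    """LRP-epsilon rule: redistribute the relevance R_next of a layer's
    outputs back onto its inputs, in proportion to each input's
    contribution to the pre-activations (biases omitted for simplicity)."""
    z = a_prev @ W
    z = z + eps * np.sign(z)        # stabiliser avoids division by zero
    s = R_next / z                  # per-output scaling factor
    return a_prev * (s @ W.T)       # relevance of each input unit

# Propagate relevance backwards, starting from the output score.
R_out = out                              # relevance of output = its score
R_hidden = lrp_epsilon(a1, W2, R_out)    # relevance of hidden neurons
R_input = lrp_epsilon(x, W1, R_hidden)   # relevance of input features

# Per-edge relevance for hidden -> output connections: the summand
# a1[i] * W2[i, j] * s_j of the LRP rule, before summation over i.
z = a1 @ W2
s = R_out / (z + 1e-6 * np.sign(z))
edge_rel = a1[:, None] * W2 * s          # shape (4, 1)

# Sign of per-edge relevance decides the bipolar relation (an assumption):
supports = [(f"h{i}", "out") for i in range(4) if edge_rel[i, 0] > 0]
attacks  = [(f"h{i}", "out") for i in range(4) if edge_rel[i, 0] < 0]
print("input relevances:", R_input)
print("supports:", supports)
print("attacks:", attacks)
```

Running the sketch on the toy network prints which hidden neurons would count as supporters versus attackers of the output argument. In practice the full graph over all layers is large, which is why the abstract stresses that the manner of simplification matters.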

Citation (APA)

Sukpanichnant, P., Rago, A., Lertvittayakumjorn, P., & Toni, F. (2022). Neural QBAFs: Explaining Neural Networks Under LRP-Based Argumentation Frameworks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13196 LNAI, pp. 429–444). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-08421-8_30
