Dissenting Explanations: Leveraging Disagreement to Reduce Model Overreliance

Abstract

While modern explanation methods have been shown to be inconsistent and contradictory, the explainability of black-box models nevertheless remains desirable. When the role of explanations extends from understanding models to aiding decision making, the semantics of explanations are not always fully understood: to what extent do explanations “explain” a decision, and to what extent do they merely advocate for one? Can we help humans gain insight from explanations that accompany correct predictions, without over-relying on incorrect predictions that explanations advocate for? With this perspective in mind, we introduce the notion of dissenting explanations: conflicting predictions with accompanying explanations. We first explore the advantage of dissenting explanations in the setting of model multiplicity, where multiple models with similar performance may produce different predictions. Through a human study on the task of identifying deceptive reviews, we demonstrate that dissenting explanations reduce overreliance on model predictions without reducing overall accuracy. Motivated by the utility of dissenting explanations, we present both global and local methods for their generation.
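To make the core idea concrete, below is a minimal hypothetical sketch in Python, not the paper's method: it trains two comparably accurate scikit-learn classifiers on bootstrap resamples (a simple stand-in for model multiplicity), finds test inputs where they disagree, and reads off a crude weight-based "explanation" from each side of the disagreement. The synthetic dataset, the bootstrap construction, and the top_features helper are all illustrative assumptions.

# Minimal illustrative sketch (not the paper's algorithm): use model
# multiplicity -- two near-equally accurate models -- to surface a
# dissenting prediction together with an explanation arguing for it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data; the paper's human study used deceptive reviews.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           flip_y=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two comparably accurate models trained on bootstrap resamples, a simple
# proxy for the set of similarly performing models.
rng = np.random.default_rng(0)
ia = rng.integers(0, len(X_tr), len(X_tr))
ib = rng.integers(0, len(X_tr), len(X_tr))
model_a = LogisticRegression(max_iter=1000).fit(X_tr[ia], y_tr[ia])
model_b = LogisticRegression(max_iter=1000).fit(X_tr[ib], y_tr[ib])
print(f"accuracy A={model_a.score(X_te, y_te):.3f}, B={model_b.score(X_te, y_te):.3f}")

def top_features(model, x, k=3):
    # Crude local "explanation": the features with the largest weight * value
    # contribution to the logit (a stand-in for LIME/SHAP-style methods).
    contrib = model.coef_[0] * x
    order = np.argsort(-np.abs(contrib))[:k]
    return [(int(i), round(float(contrib[i]), 3)) for i in order]

# A dissenting explanation arises wherever the two models disagree.
pred_a, pred_b = model_a.predict(X_te), model_b.predict(X_te)
disagree = np.where(pred_a != pred_b)[0]
if disagree.size:
    i = disagree[0]
    print(f"Model A predicts {pred_a[i]}; evidence: {top_features(model_a, X_te[i])}")
    print(f"Model B dissents with {pred_b[i]}; evidence: {top_features(model_b, X_te[i])}")
else:
    print("No disagreements on this sample; try different resamples or models.")

Any disagreeing pair of this kind yields a dissenting explanation in the paper's sense: a conflicting prediction accompanied by an explanation advocating for it, which can be shown to a decision maker alongside the original model's output.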

Citation (APA)

Reingold, O., Shen, J. H., & Talati, A. (2024). Dissenting Explanations: Leveraging Disagreement to Reduce Model Overreliance. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 21537–21544). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i19.30151
