The hidden assumptions behind counterfactual explanations and principal reasons

Abstract

Counterfactual explanations are gaining prominence within technical, legal, and business circles as a way to explain the decisions of a machine learning model. These explanations share a trait with the long-established "principal reason" explanations required by U.S. credit laws: both explain a decision by highlighting the set of features deemed most relevant, while withholding the others. These "feature-highlighting explanations" have several desirable properties: they place no constraints on model complexity, do not require model disclosure, detail what would have needed to be different to achieve a different decision, and seem to automate compliance with the law. But they are far more complex and subjective than they appear. In this paper, we demonstrate that the utility of feature-highlighting explanations relies on a number of easily overlooked assumptions: that the recommended change in feature values clearly maps to real-world actions, that features can be made commensurate by looking only at the distribution of the training data, that features are relevant only to the decision at hand, and that the underlying model is stable over time, monotonic, and limited to binary outcomes. We then explore several consequences of acknowledging and attempting to address these assumptions, including a paradox in the way that feature-highlighting explanations aim to respect autonomy, the unchecked power that feature-highlighting explanations grant decision makers, and a tension between making these explanations useful and the need to keep the model hidden. While new research suggests several ways that feature-highlighting explanations can work around some of the problems we identify, the disconnect between features in the model and actions in the real world, and the subjective choices necessary to compensate for this, must be understood before these techniques can be usefully implemented.
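To make the idea concrete, the following is a minimal, hypothetical sketch of a counterfactual explanation for a toy linear credit-scoring model. The feature names, weights, threshold, and greedy search strategy are all invented for illustration and are not drawn from the paper; a real deployment would face exactly the assumptions the abstract questions (actionability of the suggested changes, commensurability of features, model stability, and so on).

```python
# Toy counterfactual search: greedily perturb one feature at a time until
# the model's decision flips, then report which features changed.
# All names and numbers here are illustrative assumptions, not the
# paper's method.

def score(x):
    # Hypothetical linear model: income (k$), debt (k$), credit history (years).
    w = {"income": 0.5, "debt": -0.8, "history": 0.3}
    return sum(w[k] * v for k, v in x.items())

def counterfactual(x, threshold=15.0, step=1.0, max_steps=100):
    """Greedily nudge the single feature whose unit change most improves
    the score, until the decision flips. The changed features play the
    role of the highlighted 'principal reasons'."""
    cf = dict(x)
    for _ in range(max_steps):
        if score(cf) >= threshold:
            break
        # Evaluate a +step and -step change to every feature; keep the best.
        best_key, best_delta = max(
            ((k, d) for k in cf for d in (-step, step)),
            key=lambda kd: score({**cf, kd[0]: cf[kd[0]] + kd[1]}),
        )
        cf[best_key] += best_delta
    # Report only the features that changed: (original value, target value).
    return {k: (x[k], cf[k]) for k in x if cf[k] != x[k]}

applicant = {"income": 40.0, "debt": 20.0, "history": 2.0}
print(counterfactual(applicant))  # e.g. "reduce debt from 20 to 6 (k$)"
```

Even in this toy, the explanation silently commits to choices the paper highlights: a unit of debt is treated as commensurate with a unit of income, the suggested change is assumed to be actionable, and the model is assumed stable and monotonic over the search.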

Citation (APA)

Barocas, S., Selbst, A. D., & Raghavan, M. (2020). The hidden assumptions behind counterfactual explanations and principal reasons. In FAT* 2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 80–89). Association for Computing Machinery, Inc. https://doi.org/10.1145/3351095.3372830
