Counterfactual Explanations for Optimization-Based Decisions in the Context of the GDPR


Abstract

The General Data Protection Regulation (GDPR) entitles individuals to explanations for automated decisions. The form, comprehensibility, and even existence of such explanations remain open problems, investigated as part of explainable AI. We adopt the approach of counterfactual explanations and apply it to decisions made by declarative optimization models. We argue that inverse combinatorial optimization is particularly suited to counterfactual explanations, but that the computational difficulty and relatively nascent literature make its application a challenge. To make progress, we address the case of counterfactual explanations that isolate the minimal differences for an individual. We show that, for two common objective functions, full inverse optimization is unnecessary. In particular, for objectives of the form of a sum of weighted binary variables, which includes frameworks such as weighted MaxSAT, a solution can be found by solving a slightly modified version of the original optimization model. For a sum of weighted integer variables, a solution can instead be found by a binary search over a series of modifications to the original model.
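The "slightly modified model" idea from the abstract can be illustrated with a toy sketch. This is not the paper's algorithm, just a hypothetical illustration: a brute-force optimizer maximizes a sum of weighted binary variables under a toy cardinality constraint, and a counterfactual query ("why was item 3 not selected?") is answered by re-solving the same model with the foil constraint added and comparing optima.

```python
from itertools import product

def solve(values, forced=None):
    """Brute-force max of sum(v_i * x_i) over binary x, subject to a
    toy constraint (at most 2 items selected). `forced` is an optional
    dict {j: b} fixing x_j = b, i.e. the counterfactual foil."""
    best, best_x = None, None
    for x in product([0, 1], repeat=len(values)):
        if sum(x) > 2:
            continue  # violates the toy cardinality constraint
        if forced and any(x[j] != b for j, b in forced.items()):
            continue  # violates the foil constraint
        val = sum(v * xi for v, xi in zip(values, x))
        if best is None or val > best:
            best, best_x = val, x
    return best, best_x

# Original decision: the optimizer picks the two highest-value items.
values = [5, 3, 8, 2]
opt, x_star = solve(values)           # opt = 13, items 0 and 2

# Counterfactual query: "why was item 3 not selected?"
# Re-solve the same model with the foil constraint x_3 = 1 added.
cf, x_cf = solve(values, forced={3: 1})  # cf = 10, items 2 and 3

# Minimal objective sacrifice needed to honor the foil.
print(opt - cf)  # → 3
```

For the binary-variable case this single re-solve suffices; per the abstract, the weighted-integer case replaces it with a binary search over a series of such modifications.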

Citation (APA)

Korikov, A., Shleyfman, A., & Beck, J. C. (2021). Counterfactual Explanations for Optimization-Based Decisions in the Context of the GDPR. In IJCAI International Joint Conference on Artificial Intelligence (pp. 4097–4103). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/564
