Explainable AI as evidence of fair decisions

Abstract

This paper will propose that explanations are valuable to those impacted by a model's decisions (model patients) to the extent that they provide evidence that a past adverse decision was unfair. Under this proposal, we should favor models and explainability methods which generate counterfactuals of two types. The first type of counterfactual is positive evidence of fairness: a set of states under the control of the patient which (if changed) would have led to a beneficial decision. The second type of counterfactual is negative evidence of fairness: a set of irrelevant group or behavioral attributes which (if changed) would not have led to a beneficial decision. Each of these counterfactual statements is related to fairness, under the Liberal Egalitarian idea that treating one person differently than another is justified only on the basis of features which were plausibly under each person's control. Other aspects of an explanation, such as feature importance and actionable recourse, are not essential under this view, and need not be a goal of explainable AI.
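The two kinds of counterfactual evidence described above can be made concrete with a small sketch. The following Python snippet is purely illustrative and is not the paper's method: the loan-approval model, the feature names, and the thresholds are hypothetical assumptions chosen only to show what "positive evidence" (a change to controllable features that would have flipped the decision) and "negative evidence" (invariance of the decision under changes to irrelevant group attributes) might look like in code.

```python
# Illustrative sketch (not the paper's method): the two kinds of counterfactual
# evidence described in the abstract, applied to a hypothetical loan-approval
# model. Feature names, values, and thresholds are assumptions for the example.
from itertools import product

def approve(applicant):
    # Hypothetical model: approve if a weighted score of controllable
    # features clears a fixed threshold.
    score = 0.4 * applicant["income"] / 1000 + 0.6 * applicant["on_time_payments"]
    return score >= 30

CONTROLLABLE = {   # features plausibly under the applicant's control
    "income": [30_000, 50_000, 80_000],
    "on_time_payments": [10, 30, 50],
}
IRRELEVANT = {     # group attributes that should not affect the decision
    "gender": ["female", "male", "nonbinary"],
}

def positive_evidence(applicant):
    """Search for a change to controllable features that would have led to approval."""
    for values in product(*CONTROLLABLE.values()):
        candidate = {**applicant, **dict(zip(CONTROLLABLE.keys(), values))}
        if approve(candidate):
            return {k: candidate[k] for k in CONTROLLABLE}
    return None  # no attainable change found

def negative_evidence(applicant):
    """Check that changing irrelevant group attributes would NOT flip the decision."""
    baseline = approve(applicant)
    for attr, values in IRRELEVANT.items():
        for v in values:
            if approve({**applicant, attr: v}) != baseline:
                return False  # decision depends on an irrelevant attribute
    return True

rejected = {"income": 30_000, "on_time_payments": 10, "gender": "female"}
print("Positive evidence (what would have sufficed):", positive_evidence(rejected))
print("Decision invariant to group attributes:", negative_evidence(rejected))
```

Together, the two checks mirror the abstract's proposal: the first returns a set of states under the patient's control that would have produced the beneficial decision, and the second confirms that changing irrelevant group attributes would not have.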

Citation
Leben, D. (2023). Explainable AI as evidence of fair decisions. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1069426
