How Counterfactual Fairness Modelling in Algorithms Can Promote Ethical Decision-Making


Abstract

Organizational decision-makers often need to make difficult decisions. A popular approach today is to improve those decisions by using information and recommendations provided by data-driven algorithms (i.e., AI advisors). Advice is especially important when decisions involve conflicts of interest, such as ethical dilemmas. A defining characteristic of ethical decision-making is that it often involves a thought process of exploring and imagining what would, could, and should happen under alternative conditions (i.e., what-if scenarios). Such imaginative "counterfactual thinking," however, is not explored by AI advisors unless they are pre-programmed to do so. Drawing on Fairness Theory, we identify key counterfactual scenarios that programmers can incorporate in the code of AI advisors to improve fairness perceptions. We conducted an experimental study to test our predictions, and the results showed that explanations including counterfactual scenarios were perceived as fairer by recipients. Taken together, we believe that counterfactual modelling will improve ethical decision-making by actively modelling what-if scenarios valued by recipients. We further discuss the benefits of counterfactual modelling, such as inspiring decision-makers to engage in counterfactual thinking within their own decision-making process.
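The abstract itself contains no code, but the idea of pre-programming would/could/should counterfactual scenarios (as distinguished by Fairness Theory) into an AI advisor's explanations can be illustrated with a minimal sketch. All names below (`CounterfactualExplanation`, `explain`, the template wording) are hypothetical illustrations, not the authors' implementation:

```python
from dataclasses import dataclass


@dataclass
class CounterfactualExplanation:
    """Bundles an AI advisor's recommendation with the three
    what-if scenarios Fairness Theory distinguishes."""
    recommendation: str
    would: str   # What would the outcome have been under different inputs?
    could: str   # Could the advisor have decided otherwise?
    should: str  # Should it have, given ethical or social norms?


def explain(recommendation: str, feature: str, alt_value: str,
            alt_outcome: str, norm: str) -> CounterfactualExplanation:
    """Hypothetical template-based generator: each scenario is a
    what-if statement about a single changed input feature."""
    return CounterfactualExplanation(
        recommendation=recommendation,
        would=(f"Had {feature} been {alt_value}, "
               f"the outcome would have been {alt_outcome}."),
        could=f"The advisor could have weighted {feature} differently.",
        should=f"According to {norm}, {feature} should not drive the outcome.",
    )


# Example: explaining a loan rejection with counterfactual scenarios.
exp = explain(
    recommendation="Reject loan application",
    feature="applicant income",
    alt_value="above the threshold",
    alt_outcome="approval",
    norm="the equal-treatment policy",
)
print(exp.would)
```

The point of the sketch is only that the counterfactual content is generated explicitly by the advisor's code, rather than left to the recipient's imagination, which is what the article argues improves fairness perceptions.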

Citation (APA)

De Schutter, L., & De Cremer, D. (2024). How Counterfactual Fairness Modelling in Algorithms Can Promote Ethical Decision-Making. International Journal of Human-Computer Interaction, 40(1), 33–44. https://doi.org/10.1080/10447318.2023.2247624
