People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency


Abstract

We explore aversion to the use of algorithms in moral decision-making. So far, this aversion has been explained mainly by the fear of opaque decisions that are potentially biased. Using incentivized experiments, we study what role the desire for human discretion plays in moral decision-making. This focus seems justified in light of evidence suggesting that people might not doubt the quality of algorithmic decisions, yet still reject them. In our first study, we found that people prefer humans with decision-making discretion to algorithms that rigidly apply exogenously given, human-created fairness principles to specific cases. In the second study, we found that people do not prefer humans to algorithms because they appreciate flesh-and-blood decision-makers per se, but because they appreciate humans’ freedom to transcend fairness principles at will. Our results contribute to a deeper understanding of algorithm aversion. They indicate that emphasizing the transparency of algorithms that clearly follow fairness principles might not suffice to foster societal acceptance of algorithms, and they suggest reconsidering certain features of the decision-making process itself.

Citation (APA)

Jauernig, J., Uhl, M., & Walkowitz, G. (2022). People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency. Philosophy and Technology, 35(1). https://doi.org/10.1007/s13347-021-00495-y
