Human perceptions on moral responsibility of AI: A case study in AI-assisted bail decision-making


Abstract

How to attribute responsibility for autonomous artificial intelligence (AI) systems' actions has been widely debated across the humanities and social science disciplines. This work presents two experiments (N=200 each) that measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents in the context of bail decision-making. Using vignettes adapted from real-life cases, our experiments show that AI agents are held causally responsible and blamed similarly to human agents for an identical task. However, there was a meaningful difference in how people perceived these agents' moral responsibility: human agents were ascribed a higher degree of present-looking and forward-looking notions of responsibility than AI agents. We also found that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature. We discuss policy and HCI implications of these findings, such as the need for explainable AI in high-stakes scenarios.

Citation (APA)

Lima, G., Grgić-Hlača, N., & Cha, M. (2021). Human perceptions on moral responsibility of AI: A case study in AI-assisted bail decision-making. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery. https://doi.org/10.1145/3411764.3445260
