Causal Framework of Artificial Autonomous Agent Responsibility


Abstract

Recent empirical work on people's attributions of responsibility toward artificial autonomous agents (such as AI agents or robots) has delivered mixed findings. The conflicting results reflect differences in context, in the roles of AI and human agents, and in the domain of application. In this article, we outline a causal framework of responsibility attribution that integrates these findings. The framework identifies nine factors that influence responsibility attribution: causality, role, knowledge, objective foreseeability, capability, intent, desire, autonomy, and character. It specifies the causal relationships between these nine factors and responsibility. To test the framework empirically, we discuss some initial findings and outline an approach that uses serious games for causal cognitive research on responsibility attribution. Specifically, we propose a game that generates varied scenarios in which participants can freely inspect different sources of information to make judgments about human and artificial autonomous agents.

Citation (APA)

Franklin, M., Ashton, H., Awad, E., & Lagnado, D. (2022). Causal Framework of Artificial Autonomous Agent Responsibility. In AIES 2022 - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 276–284). Association for Computing Machinery, Inc. https://doi.org/10.1145/3514094.3534140
