Computational Accountability

Abstract

Automated decision-making systems make decisions that matter. Some human or legal person remains responsible. Looking back, that person is accountable for the decisions made by the system, and may even be liable in case of damages. This puts constraints on how decision-making systems are designed and how they are deployed in organizations. In this paper, we analyze computational accountability in three steps. First, being accountable is analyzed as a relationship between an actor deploying the system and a critical forum of subjects, users, experts, and developers. Second, we discuss system design. In principle, evidence must be collected about the decision rule and the case data that were applied. However, many AI algorithms are not interpretable by humans. Alternatively, internal controls must ensure that a system uses valid algorithms and reliable training data sets that are appropriate for the application domain. Third, we discuss the governance model: the roles, responsibilities, procedures, and infrastructure needed to ensure effective operation of these controls. The paper ends with a case study in the IT audit domain to illustrate practical feasibility.
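
To make the evidence-collection idea in the abstract concrete, the sketch below shows one hypothetical way an audit record could capture the decision rule version, the case data, and the outcome of an automated decision. All names (DecisionRecord, log_decision, the example case) are illustrative assumptions, not taken from the paper.

import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str       # identifier of the case being decided
    rule_version: str  # version of the decision rule or model that was applied
    case_data: dict    # input data the decision was based on
    outcome: str       # decision taken by the system
    timestamp: str     # when the decision was made (UTC)

def log_decision(case_id: str, rule_version: str, case_data: dict, outcome: str) -> str:
    """Serialize a decision record and return a digest that can serve as evidence."""
    record = DecisionRecord(
        case_id=case_id,
        rule_version=rule_version,
        case_data=case_data,
        outcome=outcome,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    payload = json.dumps(asdict(record), sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    # In practice, the payload and digest would be written to tamper-evident storage
    # so the deploying actor can later account for the decision to a critical forum.
    print(payload)
    return digest

if __name__ == "__main__":
    log_decision("case-001", "loan-rule-v2.3", {"income": 42000, "debt": 5000}, "rejected")

Such a record only covers the evidence side; as the abstract notes, it would need to be complemented by internal controls and a governance model to be effective.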

Citation (APA)

Hulstijn, J. (2023). Computational Accountability. In 19th International Conference on Artificial Intelligence and Law, ICAIL 2023 - Proceedings of the Conference (pp. 121–130). Association for Computing Machinery, Inc. https://doi.org/10.1145/3594536.3595122
