Putting accountability of AI systems into practice

11 citations · 32 Mendeley readers

Abstract

To ensure trustworthiness and ethics in Artificial Intelligence (AI) systems, several initiatives around the globe are producing principles and recommendations, which are proving difficult to translate into technical solutions. A common trait among ethical AI requirements is accountability, which aims to ensure responsibility, auditability, and reduction of the negative impact of AI systems. To put accountability into practice, this paper presents the Global-view Accountability Framework (GAF), which considers auditability and redress of conflicting information arising in a context where two or more AI systems can produce a negative impact. A technical implementation of the framework for automotive and motor insurance is demonstrated, with a focus on preventing and reporting harm caused by autonomous vehicles.
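The abstract's core idea, auditing decisions from two or more AI systems and surfacing conflicting information for redress, can be sketched minimally as follows. This is not the paper's actual GAF implementation; all names (`Decision`, `AuditLog`, the system identifiers) are illustrative assumptions for a shared audit log that flags disagreements between systems about the same incident.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    # One AI system's recorded judgement about a subject (hypothetical schema).
    system_id: str
    subject: str
    outcome: str

@dataclass
class AuditLog:
    # Append-only log shared across systems; supports conflict detection.
    records: list = field(default_factory=list)

    def record(self, decision: Decision) -> None:
        self.records.append(decision)

    def conflicts(self, subject: str) -> dict:
        # Return {system_id: outcome} when systems disagree on a subject,
        # or an empty dict when there is no conflict to redress.
        outcomes = {d.system_id: d.outcome
                    for d in self.records if d.subject == subject}
        return outcomes if len(set(outcomes.values())) > 1 else {}

# Illustrative scenario from the paper's domain: an autonomous-vehicle AI
# and a motor-insurance AI reach conflicting conclusions about an incident.
log = AuditLog()
log.record(Decision("vehicle_ai", "incident-42", "no-fault"))
log.record(Decision("insurer_ai", "incident-42", "driver-at-fault"))
print(log.conflicts("incident-42"))
```

A real framework would add provenance, timestamps, and a redress workflow; the sketch only shows why a global (cross-system) view is needed to detect such conflicts at all.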

Cite (APA)

Miguel, B. S., Naseer, A., & Inakoshi, H. (2020). Putting accountability of AI systems into practice. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 5276–5278). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/768
