Black is the new orange: how to determine AI liability


Abstract

Autonomous artificial intelligence (AI) systems can behave unpredictably and cause loss or damage to individuals, and intricate questions must be resolved before courts can determine liability. Until recently, understanding the inner workings of such “black boxes” was exceedingly difficult; the emerging field of Explainable Artificial Intelligence (XAI), however, can help untangle the complex problems that autonomous AI systems raise. In this context, this article examines the technical explanations that XAI can provide and shows how explanations suitable for establishing liability can be reached in court. It analyses whether existing liability frameworks, in both civil and common law tort systems, can, with the support of XAI, address legal concerns related to AI. Lastly, it argues that the further development and adoption of XAI techniques should allow AI liability cases to be decided under current legal and regulatory rules until new liability regimes for AI are enacted.
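
To make concrete the kind of “technical explanation” XAI can supply to a court, here is a minimal, hypothetical sketch using model-agnostic permutation importance from scikit-learn. This is one common XAI technique chosen for illustration, not necessarily the method the article relies on; the loan-style feature names and the synthetic data are invented purely for the example.

```python
# Minimal XAI sketch: a feature-attribution explanation for a "black box"
# classifier. All feature names and data below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "age", "prior_defaults"]  # hypothetical
X = rng.normal(size=(500, 4))
# Hypothetical ground truth: the decision is driven by debt_ratio and
# prior_defaults, which the explanation should recover.
y = (X[:, 1] + X[:, 3] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when each feature is
# shuffled? A global, model-agnostic account of what drove the model's output.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

An output ranking debt_ratio and prior_defaults far above the other features is the sort of evidence a party could offer to show which factors caused a contested automated decision.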

Citation (APA)

Padovan, P. H., Martins, C. M., & Reed, C. (2023). Black is the new orange: how to determine AI liability. Artificial Intelligence and Law, 31(1), 133–167. https://doi.org/10.1007/s10506-022-09308-9
