The complexity of criminal liability of AI systems

10 citations · 48 Mendeley readers

Abstract

Technology is advancing at a rapid pace. As artificial intelligence (AI) develops, we may soon find ourselves dealing with fully autonomous systems capable of causing harm and injury. What then? Who will be held accountable if AI systems harm us? At present there is no clear answer, and the existing regulatory framework falls short of addressing an accountability regime for autonomous systems. This paper analyses the criminal liability of AI systems under the existing rules of criminal law. It highlights the social and legal implications of applying the current criminal liability regime to the complex nature of industrial robots. Finally, the paper explores whether corporate liability is a viable option and which legal standards could serve for imposing criminal liability on companies that deploy AI systems. The analysis reveals that traditional criminal law and legal theory are not well positioned to answer these questions, as many practical problems require further evaluation. It demonstrates that as AI develops, more questions will surface and legal frameworks will inevitably need to adapt. The conclusions of this paper could serve as a basis for further research.

Citation (APA)

Osmani, N. (2020). The complexity of criminal liability of AI systems. Masaryk University Journal of Law and Technology, 14(1), 53–82. https://doi.org/10.5817/MUJLT2020-1-3
