Killing by Autonomous Vehicles and the Legal Doctrine of Necessity

Abstract

How should autonomous vehicles (aka self-driving cars) be programmed to behave in the event of an unavoidable accident in which the only choice open is one between causing different damages or losses to different objects or persons? This paper addresses this ethical question starting from the normative principles elaborated in the law to regulate difficult choices in other emergency scenarios. In particular, the paper offers a rational reconstruction of some major principles and norms embedded in Anglo-American jurisprudence and case law on the “doctrine of necessity”, and assesses which, if any, of these principles and norms can be utilized to find reasonable guidelines for solving the ethical issue of regulating the programming of autonomous vehicles in emergency situations. The paper covers the following topics: the distinction between “justification” and “excuse”, the legal prohibition of intentional killing outside self-defence, the incommensurability of goods, and the legal constraints on the use of lethal force set by normative positions: obligations, responsibility, rights, and authority. For each of these principles and constraints, the possible application to the programming of autonomous vehicles is discussed. Based on the analysis, some practical suggestions are offered.

Citation (APA)

Santoni de Sio, F. (2017). Killing by Autonomous Vehicles and the Legal Doctrine of Necessity. Ethical Theory and Moral Practice, 20(2), 411–429. https://doi.org/10.1007/s10677-017-9780-7
