Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions

17 citations · 34 Mendeley readers

This article is free to access.

Abstract

Suppose a driverless car encounters a scenario where (i) harm to at least one person is unavoidable and (ii) a choice about how to distribute harms between different persons is required. How should the driverless car be programmed to behave in this situation? I call this the moral design problem. Santoni de Sio (Ethical Theory Moral Pract 20:411–429, 2017) defends a legal-philosophical approach to this problem, which aims to bring us to a consensus on the moral design problem despite our disagreements about which moral principles provide the correct account of justified harm. He then articulates an answer to the moral design problem based on the legal doctrine of necessity. In this paper, I argue that Santoni de Sio’s answer to the moral design problem does not achieve the aim of the legal-philosophical approach. This is because his answer relies on moral principles which at least utilitarians have reason to reject. I then articulate an alternative reading of the doctrine of necessity, and construct a partial answer to the moral design problem based on it. I argue that utilitarians, contractualists and deontologists can agree on this partial answer, even if they disagree about which moral principles offer the correct account of justified harm.

Citation (APA)

Keeling, G. (2018). Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions. Ethical Theory and Moral Practice, 21(2), 413–427. https://doi.org/10.1007/s10677-018-9887-5
