In this paper, I will argue that automated vehicles should not swerve to avoid a person or vehicle in their path, unless they can do so without imposing risks on others. I will argue that this is the conclusion we should reach even if we start by assuming that we should divert the trolley in the standard trolley case (in which the trolley will hit and kill five people on the track unless it is diverted onto a different track, where it will hit and kill just one person). In defence of this claim, I appeal to the distribution of moral and legal responsibilities, highlighting the importance of safe spaces and arguing in favour of constraints on what can be done to minimise casualties. My arguments draw on the methodology associated with the trolley problem. As such, this paper also defends that methodology, highlighting a number of ways in which authors misunderstand and misrepresent the trolley problem. For example, the ‘trolley problem’ is not the ‘name given by philosophers to classic examples of unavoidable crash scenarios, historically involving runaway trolleys’, as Millar suggests, and trolley cases should not be compared with ‘model building in the (social) sciences’, as Gogoll and Müller suggest. Trolley cases have more in common with laboratory experiments than with model building, and the problem referred to in the trolley problem is not the problem of deciding what to do in any one case. Rather, it is the problem of explaining what appear to be conflicting intuitions when we consider two cases together. The problem, for example, could be: how do we justify the claim that automated vehicles should not swerve, even if we accept the claim that we should divert the trolley in an apparently similar trolley case?
Lawlor, R. (2022). The Ethics of Automated Vehicles: Why Self-driving Cars Should not Swerve in Dilemma Cases. Res Publica, 28(1), 193–216. https://doi.org/10.1007/s11158-021-09519-y