Discussion of the ethics of driverless cars has often focused on supposed real-life versions of the famous trolley problem. In these cases, a driverless car is in a position where crashing is unavoidable and all possible crashes risk harm: for example, it can either continue on its current path and crash into five pedestrians or swerve and crash into one pedestrian. There are significant disanalogies between the human versions of the trolley problem and the situations faced by driverless cars, and these disanalogies affect the application and moral significance of key deontological distinctions, such as the distinction between doing and allowing harm. In the context of the behaviour of driverless cars, the application and moral significance of the doing/allowing distinction depend on (a) our conception of the behaviour of driverless cars; (b) the forms of driverless cars that are developed and used; and (c) the background expectations of programmers, manufacturers, and owners of driverless cars, together with the conditions for being permitted to put those cars on the roads. These are as yet unsettled, and may even be undetermined. Nonetheless, trolley problems may still be useful in thinking about the ethics of driverless cars.
Woollard, F. (2023). The New Trolley Problem: Driverless Cars and Deontological Distinctions. Journal of Applied Philosophy, 40(1), 49–64. https://doi.org/10.1111/japp.12610