Lethal Autonomous Weapon Systems and Responsibility Gaps

  • Anne Gerdes

Abstract

This paper argues that delegating lethal decisions to autonomous weapon systems opens an unacceptable responsibility gap, which cannot be effectively countered unless we enforce a preemptive ban on lethal autonomous weapon systems (LAWS). First, the promises and perils of artificial intelligence are brought forward, pointing out (1) that it remains an open question whether moral decision making, understood as situated ethical judgement, is computationally tractable, and (2) that the kind of artificial intelligence required to carry out ethical reasoning would imply a system capable of operating as an independent reasoner in novel contexts (sec. 2). In continuation thereof, issues of responsibility are discussed (sec. 3 and 3.1), and it is claimed that unacceptable responsibility gaps may arise, since unpredictability would presumably follow from full system autonomy. These circumstances call for a strong precautionary principle in the form of a preemptive ban.

Cite

Gerdes, A. (2018). Lethal Autonomous Weapon Systems and Responsibility Gaps. Philosophy Study, 8(5). https://doi.org/10.17265/2159-5313/2018.05.004
