Clarifying the language of lethal autonomy in military robots

Abstract

Many argue that robots should not make the decision to kill humans and thus call for a ban on "killer robots," or lethal autonomous weapons systems (LAWS). However, lethal decision-making is complex and requires detailed analysis to define what is to be banned or regulated. It is common to distinguish between in-the-loop, on-the-loop, and off-the-loop LAWS, and to refer to the "critical functions" of selecting and engaging targets. In this paper I propose two additional LAWS types: a Type 0 LAWS is a remotely piloted vehicle (RPV) with "no robot on the lethal loop," and a Type 4 LAWS is a robot that has gone "beyond human control" and has "no human in the loop." Types 1–3 are the familiar in-the-loop, on-the-loop, and off-the-loop LAWS. I also define a third "critical function," namely defining the targeting criteria. The aim is to clarify what exactly is meant by "meaningful human control" of a LAWS and to facilitate wording such as might appear in a Protocol VI added to the Convention on Certain Conventional Weapons (CCW).

Citation

Welsh, S. (2017). Clarifying the language of lethal autonomy in military robots. In Intelligent Systems, Control and Automation: Science and Engineering (Vol. 84, pp. 171–183). Springer. https://doi.org/10.1007/978-3-319-46667-5_13
