Just research into killer robots

Abstract

This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems (LAWS). Research and development into a new weapons system is permissible if and only if the new weapons system can plausibly generate a superior risk profile for all morally relevant classes and it is not intrinsically wrong. The paper then suggests that these conditions are satisfied by at least some potential LAWS development programs. More specifically, since LAWS will lead to greater force protection, warfighters are free to become more risk-acceptant in protecting civilian lives and property. Further, various malicious motivations that lead to war crimes will not apply to LAWS or will apply to no greater extent than with human warfighters. Finally, intrinsic objections—such as the claims that LAWS violate human dignity or that they create ‘responsibility gaps’—are rejected on the basis that they rely upon implausibly idealized and atomized understandings of human decision-making in combat.

Citation (APA)
Smith, P. T. (2019). Just research into killer robots. Ethics and Information Technology, 21(4), 281–293. https://doi.org/10.1007/s10676-018-9472-6
