A case for “killer robots”: why in the long run martial AI may be good for peace

  • Arandjelović O

Abstract

Purpose – The remarkable increase in the sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, and the potential of so-called "killer robots" has ceased to be a subject of fiction. The purpose of this paper is to re-examine the consequences of the availability of lethal autonomous robots (LARs) for global peace.

Design/methodology/approach – Virtually without exception, the aforementioned potential of LARs has generated fear, as evidenced by a mounting number of academic articles calling for a ban on their development and deployment. An analysis of the existing ethical objections to LARs is used as a vehicle for their critique and for the advancement of an alternative view.

Findings – The presented analysis shows contemporary thought on the subject to be deficient in philosophical rigour, and these deficiencies lead to a different view, one favourable to the development of LARs.

Originality/value – The emergent thesis is that LARs can in fact be a force for peace, leading to fewer and less deadly wars.

Citation (APA)

Arandjelović, O. (2023). A case for “killer robots”: why in the long run martial AI may be good for peace. Journal of Ethics in Entrepreneurship and Technology, 3(1), 20–32. https://doi.org/10.1108/jeet-01-2023-0003
