When speed kills: Lethal autonomous weapon systems, deterrence and stability

Abstract

While the applications of artificial intelligence (AI) for militaries are broad, lethal autonomous weapon systems (LAWS) represent one possible military use of narrow AI. Research and development on LAWS by major powers, middle powers and non-state actors makes exploring the consequences for the security environment a crucial task. This article draws on classic research in security studies and examples from military history to assess the potential development and deployment of LAWS, as well as how they could influence arms races, the stability of deterrence (including strategic stability), the risk of crisis instability, and wartime escalation. It focuses on these questions through the lens of two characteristics of LAWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices. It also examines how these issues interact with the large degree of uncertainty currently associated with potential AI-based military capabilities, both in terms of the range of the possible and the opacity of their programming.

Citation (APA)

Horowitz, M. C. (2019). When speed kills: Lethal autonomous weapon systems, deterrence and stability. Journal of Strategic Studies, 42(6), 764–788. https://doi.org/10.1080/01402390.2019.1621174
