Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability

Abstract

The development of complex autonomous systems that use artificial intelligence (AI) is changing the nature of conflict. In practice, autonomous systems will be extensively tested before operational deployment to ensure that system behavior is reliable in expected contexts. However, the complexity of autonomous systems means that they will demonstrate emergent behavior in the open context of real-world conflict environments. This article examines the novel implications of emergent behavior in autonomous AI systems designed for conflict through two case studies: (1) a swarm system designed for maritime intelligence, surveillance, and reconnaissance operations, and (2) a next-generation humanitarian notification system. Both case studies represent current or near-future technology in which emergent behavior is possible, demonstrating that such behavior can be simultaneously less predictable and more reliable, depending on the level at which the system is considered. This counterintuitive relationship between reduced predictability and increased reliability poses unique challenges for system certification and for adherence to the growing body of principles for responsible AI in defense, which must be considered for the real-world operationalization of AI designed for conflict environments.

Citation (APA)

Trusilo, D. (2023). Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability. Journal of Military Ethics, 22(1), 2–17. https://doi.org/10.1080/15027570.2023.2213985
