Deception and Virtue in Robotic and Cyber Warfare


Abstract

Informational warfare is fundamentally about automating the human capacity for deceit and lies, and this poses a significant problem for the ethics of informational warfare. If we want to maintain our commitments to just and legal warfare, how can we build systems based on what would normally be considered unethical behavior in a way that enhances, rather than degrades, our commitments to social justice? Is there such a thing as a virtuous lie in the context of warfare? No war is ever fully just or ethical, and navigating the near-instantaneous life-and-death decisions demanded by modern conflicts fully taxes the moral intuitions of even the best-trained and well-intentioned war fighters. It follows that we need careful analysis of whether we can construct informational technologies that help us make more ethical decisions on the battlefield. In this chapter I focus on the fact that robots and other artificial agents will need to understand and utilize deception in order to be useful on the virtual and actual battlefield. At the same time, these agents must maintain the virtues required of an informational agent, such as the ability to retain the trust of all those who interact with them. To further this analysis, it is important to realize that the moral virtues required of an artificial agent are very different from those required of a human moral agent. Among the major differences: a virtuous artificial agent need only reveal its intentions to legitimate users, and in many situations it is actually morally obliged to keep some data confidential from certain users. In many circumstances a cyber warfare system must also resist the attempts of other agents, human or otherwise, to change its programming or stored data.
Given the specific virtues we must program into our cyber warfare systems, we will find that artificial agents are far less complex and morally ambiguous than human agents, who have many other drives and motivations that can complicate issues of trust. It is therefore conceivable that artificial agents could actually be more successful at navigating the moral paradox of the virtuous lie so often necessitated by military conflict.

Citation (APA)

Sullins, J. P. (2014). Deception and Virtue in Robotic and Cyber Warfare. In Law, Governance and Technology Series (Vol. 14, pp. 187–201). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-319-04135-3_12
