Ethics of autonomous weapons systems and its applicability to any AI systems

Abstract

Most artificial intelligence technologies are dual-use: they are incorporated into both peaceful civilian applications and military weapons systems. Most existing codes of conduct and ethical principles on artificial intelligence address the former while largely ignoring the latter. When these technologies power systems specifically designed to cause harm, the question arises whether the ethics applied to military autonomous systems should also inform all artificial intelligence technologies susceptible to such uses. While a freeze on research is neither possible nor desirable, neither is maintaining the status quo. A comparison between general-purpose ethical codes and military ones concludes that most ethical principles apply to the human use of artificial intelligence systems provided two conditions are met: that the way the algorithms work is understood, and that humans retain sufficient control. In this way, human agency is fully preserved and moral responsibility is retained regardless of the potential dual use of artificial intelligence technology.

Citation (APA)

Gómez de Ágreda, Á. (2020). Ethics of autonomous weapons systems and its applicability to any AI systems. Telecommunications Policy, 44(6). https://doi.org/10.1016/j.telpol.2020.101953
