Abstract
The uses of artificial intelligence (AI) and lethal autonomous weapon systems (LAWS) are not subject to any specific international regulation, and proposals for general AI regulation usually decline to engage with them. Nevertheless, military AI systems must comply with applicable International Humanitarian Law (IHL) and the principles of AI ethics, and integrate them into their design. Doing so requires attention to distinctive features of the military domain: effectiveness; the criticality of outcomes; the protection and quality of information and data; complexity and dynamism; the dual-use nature of the technologies; their potential use by terrorist groups or organizations; and the scalability of their use. Drawing on comparative experience, the authors formulate ethical principles for artificial intelligence in defense that are consistent with general AI ethics principles but attentive to the particularities of the military context. Several topics receive particular emphasis: the need for human control (limited transfer of autonomy, meaningful human control, and accountability throughout the life cycle); the absence of bias; robustness, especially against unintended engagements; and the reliability, transparency, traceability, and security of military AI systems. Sector-specific issues are also discussed, such as the individual responsibility that governs IHL, the difficulty of projecting the "doctrine of double effect" onto autonomous systems, and the unpredictability of these systems.
Citation
Cotino Hueso, L., & de Ágreda, Á. G. (2024). Ethical and International Humanitarian Law Criteria in the Use of Artificial Intelligence-Powered Military Systems. Novum Jus, 18(1), 249–283. https://doi.org/10.14718/NovumJus.2024.18.1.9