The rules of International Humanitarian Law (IHL) set limits on the means and methods of combat in the conduct of hostilities. Although IHL was not originally developed with the challenges posed by Artificial Intelligence (AI) in mind, the evolution of this technology, its algorithms and their emerging military applications now pose a challenge in light of humanitarian standards. This challenge comprises three fundamental aspects: legal, technical and ethical. While AI, at its current stage of development, allows an algorithm-based computer programme to perform certain tasks in complex and uncertain environments, often with greater accuracy than humans, no technology yet enables a machine to behave like a human being who can determine whether an action is lawful or unlawful and decide not to proceed with a programmed action, with the protection of victims as the primary objective. This is one of the dominant themes in doctrinal debates on the application of IHL to means and methods of combat involving AI-related techniques. States must adopt verification, testing and monitoring systems as part of the process of determining and imposing limitations or prohibitions consistent with the essential IHL principles of distinction and proportionality governing the use of weapons in international or non-international armed conflicts. Finally, from both a legal and an ethical perspective, the human being remains at the centre of this issue: responsibility for the use of force cannot be transferred to weapons systems or algorithms, as it remains a human responsibility.
Vigevano, M. R. (2021). Artificial intelligence in armed conflicts: Legal and ethical limits. Arbor, 197(800). https://doi.org/10.3989/arbor.2021.800002