In this article, we argue that the implementation of military robots must be preceded by careful reflection on the ethics of warfare. Warfare must be regarded as a strictly human activity, for which human beings must remain responsible and in control, and ethical decision-making can never be transferred to autonomous robots in the foreseeable future, since such robots are not capable of making ethical decisions. Non-autonomous robots, by contrast, require that a human authorize any decision to use lethal force, i.e., they require a “man-in-the-loop”. We propose a model of relationality for the moral attitude needed to confront the moral questions and dilemmas that future military operations using robots will face. This model provides two minimal criteria for ethical decision-making: non-binary thinking and reflexivity by means of rooting and shifting. In the second part of this article, we apply these criteria first to today’s human operators of non-autonomous military robots and then to tomorrow’s autonomous military robots, asking whether robots are capable of relationality and to what degree human operators make decisions on the basis of relationality. We conclude with what we take to be a possible, albeit limited, role for robotics in the military, with regard to both its current role and its foreseeable future.
Citation
Royakkers, L., & Topolski, A. (2014). Military robotics & relationality: Criteria for ethical decision-making. In Responsible Innovation 1: Innovative Solutions for Global Issues (pp. 351–367). Springer Netherlands. https://doi.org/10.1007/978-94-017-8956-1_20