The debate about lethal autonomous weapons systems (LAWS) characterises them as future problems in need of pre-emptive regulation, for example through codifying meaningful human control. But autonomous technologies are already part of weapon systems and have shaped how states think about human control. To understand this normative space, I proceed in two steps: first, I theorise how practices of designing, training personnel for, and operating weapon systems integrating autonomous technologies have shaped normativity/normality on human control at sites unseen. Second, I trace how this normativity/normality interacts with public deliberations at the Group of Governmental Experts (GGE) on LAWS by theorising potential dynamics of interaction. I find that the normativity/normality emerging from practices performed in relation to weapon systems integrating autonomous technologies assigns humans a reduced role in specific use-of-force decisions and casts this diminished decision-making capacity as ‘appropriate’ and ‘normal’. In the public-deliberative process, stakeholders have interacted with this normativity by ignoring it, distancing themselves from it, or positively acknowledging it – rather than scrutinising it. These arguments move beyond prioritising public deliberation in norm research towards exploring practices performed at sites outside of the public eye as productive of normativity. I theorise this process via international practice theories, critical security studies and Science and Technology Studies scholarship to draw out how practices shape both normativity, understood as ideas of oughtness and justice, and normality, understood as making something appear normal through collective, repeated performances.
Bode, I. (2023). Practice-based and public-deliberative normativity: retaining human control over the use of force. European Journal of International Relations, 29(4), 990–1016. https://doi.org/10.1177/13540661231163392