Robotics, autonomous systems, and artificial intelligence (RAS-AI) are at the technological edge of militaries trying to achieve 'ethical' war. RAS-AI have been cast as essential technologies for defence forces to develop in order to sustain military advantage. Central to the success of this endeavour is trust: the technologies must be trusted for defence personnel to be willing to use them, for defence to be trusted by the public, and for allies and partners to have confidence in each other's developments. Yet trust is not merely a technocratic term. Its dominant role in the successful adoption of RAS-AI gives it power. This paper argues that the language of trust is being used to facilitate the development and adoption of military RAS-AI, often in concert with the language of ethics. Building on Maja Zehfuss's concept of the politics of ethics, this paper contends that when it comes to RAS-AI there is also a politics of trust. Analysis of British, American, and Australian military documents demonstrates that this politics manifests in the side-lining of political questions about how RAS-AI will be used, against whom and for what purposes, by focusing instead on the need to develop ethical and trustworthy RAS-AI to wage virtuous war.
Citation
Troath, S. (2024). Trusting technology to wage war: the politics of trust and ethics in the development of robotics, autonomous systems, and artificial intelligence. Critical Military Studies. https://doi.org/10.1080/23337486.2024.2362074