Ethical Artificial Intelligence in the Italian Defence: a Case Study

  • Fanni, R.
  • Giancotti, F.
Citations: N/A · Readers: 17 (Mendeley users who have this article in their library)

Abstract

The ethical or responsible use of Artificial Intelligence (AI) is central to numerous civilian AI governance frameworks and to the literature. Not so in defence: only a handful of governments have engaged with the ethical questions arising from the development and use of AI in and for defence. This paper fills a critical gap in the AI ethics literature by providing evidence on the perception of ethical AI within a national defence institution. Our qualitative case study analyses how the collective Italian Defence leadership thinks about deploying AI systems and their ethical implications. We interviewed 15 leaders about the impact of AI on the Italian Defence, key ethical challenges, and responsibility for future action. Our findings suggest that Italian Defence leaders are keen to address ethical issues but encounter challenges in developing a systemic governance approach to implementing ethical AI across the organisation. Guidance on risk management and human–machine interaction, applied education and interdisciplinary research, and guidance on AI defence ethics from the European Union are critical elements for Italian Defence leaders as they adapt their organisational processes to the AI-enabled digital transformation.

Citation (APA)
Fanni, R., & Giancotti, F. (2023). Ethical Artificial Intelligence in the Italian Defence: a Case Study. Digital Society, 2(2). https://doi.org/10.1007/s44206-023-00056-0
