How empty is Trustworthy AI? A discourse analysis of the Ethics Guidelines of Trustworthy AI

Abstract

‘Trustworthy artificial intelligence’ (TAI) is a contested notion. Given the growing power of Big Tech and the fear that AI ethics lacks sufficient institutional backing to enforce its norms on the AI industry, reconciling ethical and economic demands in AI development remains a struggle. To establish such a convergence in the European context, the European Commission published the Ethics Guidelines for Trustworthy AI (EGTAI), aiming to strengthen the authority of ethics and find common ground among the AI industry, ethicists, and legal regulators. At first glance, this attempt unifies the different camps around AI development, but we question this unity as one that subordinates the ethical perspective to industry interests. Drawing on Laclau’s work on empty signifiers and on critical discourse analysis, we argue that the EU’s efforts are not pointless: they establish a chain of equivalences among different stakeholders by promoting ‘TAI’ as a unifying signifier, left deliberately open so that diverse stakeholders can unite their aspirations in a common regulatory framework. Through a close reading of the EGTAI, however, we identify a hegemony of AI-industry demands over ethics. This leaves AI ethics with the uncomfortable choice of either affirming industry’s hegemonic position, thereby undermining the purpose of ethics guidelines, or contesting industry hegemony.

Citation (APA)

Stamboliev, E., & Christiaens, T. (2024). How empty is Trustworthy AI? A discourse analysis of the Ethics Guidelines of Trustworthy AI. Critical Policy Studies. https://doi.org/10.1080/19460171.2024.2315431
