Emerging Consensus on ‘Ethical AI’: Human Rights Critique of Stakeholder Guidelines

Abstract

Voluntary guidelines on ‘ethical practices’ have been stakeholders’ response to growing concern over the harmful social consequences of artificial intelligence (AI) and digital technologies. Issued by dozens of actors from industry, government and professional associations, these guidelines are creating a consensus on core standards and principles for the ethical design, development and deployment of AI. Using human rights principles (equality, participation and accountability) and attention to the right to privacy, this paper reviews 15 guidelines preselected as among the strongest on human rights and on global health. We find that about half of them ground their standards in international human rights law and incorporate the key principles; even these could go further, especially in suggesting ways to operationalize them. Those that adopt an ethics framing are particularly weak in laying out standards for accountability, often focusing on ‘transparency’ while remaining silent on enforceability and participation, which would more effectively protect the social good. Such guidelines tend to invoke human rights as a rhetorical device that obscures the absence of enforceable standards and accountability measures, and confine their attention to the single right to privacy. These ‘ethics’ guidelines, issued disproportionately by corporations and other interest groups, are also weak on addressing inequalities and discrimination. We argue that voluntary guidelines are creating a set of de facto norms and a re-interpretation of ‘human rights’ that define what counts as ‘ethical’ practice in the field. This exposes an urgent need for governments and civil society to develop more rigorous standards and regulatory measures, grounded in international human rights frameworks, capable of holding Big Tech and other powerful actors to account.

Cite

Fukuda-Parr, S., & Gibbons, E. (2021). Emerging Consensus on ‘Ethical AI’: Human Rights Critique of Stakeholder Guidelines. Global Policy, 12(S6), 32–44. https://doi.org/10.1111/1758-5899.12965
