Artificial intelligence and the value of transparency

Abstract

Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency (concerning the relationship between an AI system, its developers, users and the media) may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.

Cite (APA)

Walmsley, J. (2021). Artificial intelligence and the value of transparency. AI and Society, 36(2), 585–595. https://doi.org/10.1007/s00146-020-01066-z
