The Limits of Value Transparency in Machine Learning

Nyrup, R.

Abstract

Transparency has been proposed as a way of handling value-ladenness in machine learning (ML). This article highlights limits to this strategy. I distinguish three kinds of transparency: epistemic transparency, retrospective value transparency, and prospective value transparency. These correspond to different approaches to transparency in ML, including so-called explainable artificial intelligence and governance based on disclosing information about the design process. I discuss three sources of value-ladenness in ML—problem formulation, inductive risk, and specification gaming—and argue that retrospective value transparency is only well-suited for dealing with the first, while the third raises serious challenges even for prospective value transparency.

Citation
Nyrup, R. (2022). The Limits of Value Transparency in Machine Learning. Philosophy of Science, 89(5), 1054–1064. https://doi.org/10.1017/psa.2022.61
