Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity problem, where greater inductive risk demands a higher level of transparency regarding the inferences the model makes.
Sullivan, E. (2022). Inductive Risk, Understanding, and Opaque Machine Learning Models. Philosophy of Science, 89(5), 1065–1074. https://doi.org/10.1017/psa.2022.62