Philosophy of science at sea: Clarifying the interpretability of machine learning

Abstract

In computer science, there are efforts to make machine learning more interpretable or explainable, and thus to better understand the underlying models, algorithms, and their behavior. But what exactly is interpretability, and how can it be achieved? Such questions lead into philosophical waters because their answers depend on what explanation and understanding are, and thus on issues that have been central to the philosophy of science. In this paper, we review the recent philosophical literature on interpretability. We propose a systematization in terms of four tasks for philosophers: (i) clarify the notion of interpretability, (ii) explain the value of interpretability, (iii) provide frameworks for thinking about interpretability, and (iv) explore important features of interpretability in order to adjust our expectations about it.

Citation (APA)

Beisbart, C., & Räz, T. (2022). Philosophy of science at sea: Clarifying the interpretability of machine learning. Philosophy Compass, 17(6). https://doi.org/10.1111/phc3.12830
