Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Citations: 3.3k · Readers (Mendeley): 3.7k
Abstract

Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
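To make the abstract's distinction concrete, here is a minimal sketch (not from the paper; dataset and model choices are illustrative assumptions) contrasting an inherently interpretable model, whose full decision logic can be read directly, with a black box whose behaviour could only be described through post-hoc approximation:

```python
# Illustrative sketch only: an inherently interpretable model vs. a black box.
# Dataset and model choices are assumptions for demonstration, not the paper's examples.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Inherently interpretable: a shallow decision tree whose complete
# decision rules can be printed and audited -- no separate explanation
# model is needed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Interpretable model accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(data.feature_names)))

# Black box: often accurate, but its internals are not human-readable;
# any "explanation" would be a post-hoc approximation of its behaviour
# rather than the model's actual reasoning.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Black box accuracy:", black_box.score(X_test, y_test))
```

The tree's printed rules are the model itself, which is the core of the paper's argument: in high-stakes settings, a constrained but transparent model can often match black-box accuracy while remaining fully auditable.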

Citation (APA)

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
