DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation

  • Wu, J.
  • Zhou, B.
  • Peck, D.
  • Hsieh, S.
  • Dialani, V.
  • Mackey, L.
  • Patterson, G.

Abstract

We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are consistent with ground truth radiology reports on the Digital Database for Screening Mammography. We show that DeepMiner not only enables better understanding of the nuances of CNN classification decisions but also possibly discovers new visual knowledge relevant to medical diagnosis.
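To make the unit-probing idea concrete, here is a minimal sketch (not the authors' implementation) of how one might rank final-layer convolutional units by how strongly they respond across a set of images; the function name and the use of NumPy arrays in place of real CNN feature maps are illustrative assumptions.

```python
import numpy as np

def rank_units_by_response(activations):
    """Rank convolutional units by their peak spatial response.

    activations: array of shape (n_images, n_units, H, W), e.g. feature
    maps taken from the final convolutional layer of a trained CNN.
    Returns unit indices sorted from strongest to weakest peak response.
    """
    # Peak activation of each unit over all images and spatial positions.
    peak = activations.max(axis=(0, 2, 3))
    return np.argsort(peak)[::-1]

# Toy example: 5 images, 8 units, 7x7 feature maps of random noise.
rng = np.random.default_rng(0)
acts = rng.random((5, 8, 7, 7))
acts[:, 3] += 10.0  # make unit 3 respond most strongly
ranking = rank_units_by_response(acts)
print(ranking[0])  # unit 3 ranks first
```

In the paper's setting, the top-ranked units would then be shown to expert annotators, who label the ones that correspond to recognizable BI-RADS tissue concepts.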

Citation (APA)

Wu, J., Zhou, B., Peck, D., Hsieh, S., Dialani, V., Mackey, L., & Patterson, G. (2021). DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation. Harvard Data Science Review, 3(4). https://doi.org/10.1162/99608f92.8b81b005
