On Interpretability of Artificial Neural Networks: A Survey


Abstract

Deep learning as performed by artificial deep neural networks (DNNs) has recently achieved great success in many important areas dealing with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide adoption in mission-critical applications such as medical diagnosis and therapy. Because of the great potential of deep learning, the interpretability of DNNs has recently attracted much research attention. In this article, we propose a simple but comprehensive taxonomy for interpretability, systematically review recent studies on the interpretability of neural networks, describe applications of interpretability in medicine, and discuss future research directions, such as those relating to fuzzy logic and brain science.

Citation (APA)

Fan, F. L., Xiong, J., Li, M., & Wang, G. (2021). On Interpretability of Artificial Neural Networks: A Survey. IEEE Transactions on Radiation and Plasma Medical Sciences, 5(6), 741–760. https://doi.org/10.1109/TRPMS.2021.3066428
