Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research

Abstract

Psychiatric research is often confronted with complex abstractions and dynamics that are not readily accessible to, or well defined by, our perception and measurements, making data-driven methods an appealing approach. Deep neural networks (DNNs) can automatically learn abstractions from data that may be entirely novel, and they have demonstrated performance superior to classical machine learning models across a range of tasks; they therefore serve as a promising tool for making new discoveries in psychiatry. A key concern for the wider application of DNNs is their reputation as a “black box” approach, that is, their perceived lack of transparency or interpretability about how input data are transformed into model outputs. In fact, several existing and emerging tools are providing improvements in interpretability. However, most reviews of DNN interpretability focus on theoretical and/or engineering perspectives. This article reviews approaches to DNN interpretability that may be relevant to their application in psychiatric research and practice. It describes a framework for understanding these methods, reviews the conceptual basis of specific methods and their potential limitations, and discusses prospects for their implementation and future directions.
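
The abstract discusses interpretability tools only at a conceptual level and names no specific method. As a concrete illustration (not drawn from the article itself), the following minimal sketch computes a gradient-based saliency map, one widely used class of post-hoc DNN interpretation techniques, assuming a hypothetical toy PyTorch classifier.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for a trained DNN; the layer sizes
# and input shape are illustrative assumptions, not taken from the article.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

# One input "image" with gradient tracking enabled, so we can ask which input
# features most influence the model's output score.
x = torch.randn(1, 3, 32, 32, requires_grad=True)

logits = model(x)
target = logits.argmax(dim=1).item()  # class whose score we want to explain

# Backpropagate the predicted class score to the input.
logits[0, target].backward()

# Saliency map: magnitude of the input gradient, maximized over channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([32, 32])
```

In practice, such input-gradient maps are usually computed with a trained model and overlaid on the input to highlight the features driving a particular prediction; the review surveys this family of methods alongside others and their limitations.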

Citation (APA)
Sheu, Y. H. (2020, October 29). Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research. Frontiers in Psychiatry. Frontiers Media S.A. https://doi.org/10.3389/fpsyt.2020.551299
