Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis

Abstract

The materials science community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges. However, despite their effectiveness in building highly predictive models, e.g., predicting material properties from microstructure images, deep neural networks are opaque by nature, which poses fundamental challenges to extracting meaningful domain knowledge from them. In this work, we propose a technique for interpreting the behavior of deep learning models by injecting domain-specific attributes as tunable “knobs” into the material optimization analysis pipeline. By incorporating these material concepts in a generative modeling framework, we can explain what structure-to-property linkages these black-box models have learned, giving scientists a tool to leverage the full potential of deep learning for domain discoveries.
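To make the idea concrete, the sketch below illustrates the attribute-"knob" pattern the abstract describes: a conditional generator synthesizes microstructure images from a latent code and a vector of domain attributes, and sweeping one attribute while holding the rest fixed reveals what structure-to-property linkage a frozen black-box predictor has learned. This is a minimal sketch in PyTorch under assumed toy dimensions; all class names, sizes, and attribute semantics here are hypothetical stand-ins, not the authors' released code.

```python
# Minimal sketch of attribute-"knob" probing (hypothetical names, toy sizes;
# not the authors' implementation).
import torch
import torch.nn as nn

LATENT_DIM, N_ATTRS, IMG = 64, 4, 32  # toy sizes for illustration

class ConditionalGenerator(nn.Module):
    """Maps (latent code z, domain attributes) to a synthetic microstructure image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_ATTRS, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh(),
        )

    def forward(self, z, attrs):
        x = self.net(torch.cat([z, attrs], dim=1))
        return x.view(-1, 1, IMG, IMG)

class PropertyPredictor(nn.Module):
    """Stand-in for the pretrained black-box model being explained."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, 1))

    def forward(self, img):
        return self.net(img)

generator = ConditionalGenerator().eval()  # assume trained on real micrographs
predictor = PropertyPredictor().eval()     # assume trained, now frozen

# Sweep one attribute knob (say, a hypothetical particle-size attribute) over
# its range and record the predicted property; the resulting trend is the
# attribution-style explanation of what the black-box model has learned.
z = torch.randn(1, LATENT_DIM)
base = torch.zeros(1, N_ATTRS)
with torch.no_grad():
    for knob in torch.linspace(-1.0, 1.0, steps=5):
        attrs = base.clone()
        attrs[0, 0] = knob  # vary only the first attribute, fix the rest
        prop = predictor(generator(z, attrs))
        print(f"attribute value {knob:+.2f} -> predicted property {prop.item():.4f}")
```

In this pattern the predictor stays frozen; only the conditioning attributes change, so any variation in the predicted property is attributable to the swept attribute rather than to retraining or to unrelated image content.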

Citation (APA)
Liu, S., Kailkhura, B., Zhang, J., Hiszpanski, A. M., Robertson, E., Loveland, D., … Han, T. Y. J. (2022). Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis. ACS Omega, 7(3), 2624–2637. https://doi.org/10.1021/acsomega.1c04796
