Tutorial on Deep Learning Interpretation: A Data Perspective

Abstract

Deep learning models have achieved exceptional predictive performance in a wide variety of tasks, ranging from computer vision and natural language processing to graph mining. Many businesses and organizations across diverse domains are now building large-scale applications on top of deep learning. However, there are growing concerns regarding the fairness, security, and trustworthiness of these models, largely due to the opaque nature of their decision processes. Recently, there has been increasing interest in explainable deep learning, which aims to reduce the opacity of a model by explaining its behavior, its predictions, or both, thereby building trust between humans and complex deep learning models. A collection of explanation methods has been proposed in recent years to address the low explainability and opaqueness of these models. In this tutorial, we introduce recent explanation methods from a data perspective, targeting models that process image data, text data, and graph data, respectively. We compare their strengths and limitations, and discuss real-world applications.
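
To make the image-domain case concrete, below is a minimal sketch of one common attribution technique such a tutorial covers: a vanilla gradient saliency map. This example is not taken from the tutorial itself; the untrained ResNet-18 and the random input tensor are placeholder assumptions used only to keep the snippet self-contained and runnable.

```python
# Minimal sketch of a vanilla gradient saliency map (an image-domain
# explanation method). Untrained model and random input are placeholders;
# in practice, a trained model and a real image would be explained.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image
logits = model(image)
top_class = logits[0].argmax().item()

# Gradient of the predicted class score with respect to the input pixels
logits[0, top_class].backward()

# Per-pixel importance: max absolute gradient across the color channels
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)
```

Analogous gradient-based attributions exist for the text and graph settings (e.g., gradients with respect to token embeddings or node features), which is what a data-perspective comparison of explanation methods contrasts.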

Cite

APA

Yang, Z., Liu, N., Hu, X. B., & Jin, F. (2022). Tutorial on Deep Learning Interpretation: A Data Perspective. In International Conference on Information and Knowledge Management, Proceedings (pp. 5156–5159). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557500
