A multiorder feature tracking and explanation strategy for explainable deep learning

Abstract

A good AI algorithm can make accurate predictions and provide reasonable explanations for the field in which it is applied. However, the use of deep models makes the black-box problem, i.e., the lack of interpretability of a model, more prominent. In particular, when an application domain involves many features with complex interactions between them, it is difficult for a deep model to intuitively explain its prediction results. Moreover, multiorder feature interactions are ubiquitous in practical applications. To overcome the interpretability limitations of deep models, we argue that a multiorder linearly separable deep model can be decomposed into different orders to explain its prediction results. Inspired by the interpretability advantage of tree models, we design a feature representation mechanism that consistently represents the features of both trees and deep models. Based on this consistent representation, we propose a multiorder feature-tracking strategy that provides a prediction-oriented multiorder explanation for a linearly separable deep model. We empirically verify the effectiveness of our approach in two binary classification scenarios: education and marketing. Experimental results show that our model can intuitively represent complex relationships between features through diversified multiorder explanations.
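
The abstract does not spell out the tracking mechanism itself, so the following is only a minimal Python sketch of the underlying idea: decomposing the prediction of a model that is linearly separable over orders into first-order and second-order feature contributions. The feature names, weights, and toy model are illustrative assumptions, not the authors' method.

import itertools
import numpy as np

# Toy "multiorder linearly separable" scorer: the prediction score decomposes
# into a sum of first-order terms (one per feature) and second-order terms
# (one per feature pair). All names and weights below are illustrative only.
feature_names = ["study_hours", "attendance", "prior_score"]
w1 = np.array([0.8, 0.5, 1.2])                   # first-order weights
w2 = {(0, 1): 0.3, (0, 2): -0.4, (1, 2): 0.6}    # second-order interaction weights

def explain(x):
    """Return the score plus per-order contributions for one input."""
    first_order = {feature_names[i]: w1[i] * x[i] for i in range(len(x))}
    second_order = {
        (feature_names[i], feature_names[j]): w2[(i, j)] * x[i] * x[j]
        for i, j in itertools.combinations(range(len(x)), 2)
    }
    score = sum(first_order.values()) + sum(second_order.values())
    return score, first_order, second_order

score, fo, so = explain(np.array([1.0, 0.5, 0.9]))
print(f"score = {score:.3f}")
print("1st-order contributions:", {k: round(v, 3) for k, v in fo.items()})
print("2nd-order contributions:", {k: round(v, 3) for k, v in so.items()})

Because each order contributes additively to the score in this toy setup, the per-feature and per-pair terms can be reported directly as an order-wise explanation of a single prediction, which is the kind of decomposition the abstract describes at a high level.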

Cite

APA: Zheng, L., & Lin, Y. (2023). A multiorder feature tracking and explanation strategy for explainable deep learning. Journal of Intelligent Systems, 32(1). https://doi.org/10.1515/jisys-2022-0212
