In view of the poor interpretability of current neural network models, a neural-backed decision tree model is used to enhance interpretability. The model combines the high recognition accuracy of neural networks with the strong interpretability of decision trees. We employ ResNet18 as the backbone to mitigate the vanishing-gradient problem that arises as network depth increases. By constructing an induced hierarchy in the weight space of the trained network, higher accuracy is obtained, and because the hierarchy is derived from the model's own parameters, overfitting is avoided. The trained network weights are used to build a tree structure, and the classification network is then retrained or fine-tuned with an additional hierarchy-based tree-supervision loss term. The neural network backbone extracts a feature representation for each sample, and a decision tree built in the weight space is run over these features to enhance the interpretability of the model while the model itself is jointly optimized. Compared with the original model, the traditional hard decision-tree inference rules are abandoned in favor of soft inference rules trained under a soft tree-supervision loss, which improves the classification accuracy and generalization ability of the model: it not only preserves high accuracy, but also makes the recognition and classification process explicit.
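The soft inference rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the backbone's final fully-connected weight rows serve as class representatives, that each internal tree node is represented by the mean of its descendant leaves' weight vectors, and that a sample's leaf probability is the product of softmax-normalized child scores along the path from the root (as in soft neural-backed decision tree inference). The tiny four-class hierarchy is invented for the example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

class Node:
    """A tree node holding the indices of the classes beneath it."""
    def __init__(self, classes, children=None):
        self.classes = classes          # leaf-class indices covered by this node
        self.children = children or [] # empty list => leaf node

def node_vector(node, W):
    """Node representative: mean of its descendant classes' weight rows."""
    return W[node.classes].mean(axis=0)

def soft_inference(x, node, W, prob=1.0, out=None):
    """Accumulate each leaf's probability as the product of child
    probabilities (softmax of inner products) along its root-to-leaf path."""
    if out is None:
        out = np.zeros(W.shape[0])
    if not node.children:               # leaf: deposit the path probability
        out[node.classes[0]] += prob
        return out
    scores = np.array([node_vector(c, W) @ x for c in node.children])
    for p_child, child in zip(softmax(scores), node.children):
        soft_inference(x, child, W, prob * p_child, out)
    return out

# Hypothetical setup: 4 classes, 8-dim features, a balanced binary hierarchy.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))             # stand-in for the final FC weights
tree = Node([0, 1, 2, 3], [
    Node([0, 1], [Node([0]), Node([1])]),
    Node([2, 3], [Node([2]), Node([3])]),
])
probs = soft_inference(rng.normal(size=8), tree, W)

# The soft tree-supervision loss term is then just cross-entropy on these
# path probabilities, added to the standard loss with some weight lambda.
true_class = 2
tree_loss = -np.log(probs[true_class])
```

Because every root-to-leaf path probability is a product of softmax outputs, the resulting distribution over leaves always sums to one, so it can be trained with ordinary cross-entropy alongside the backbone's standard classification loss.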
Xu, L., Jia, W., Jiang, J., & Yu, Y. (2022). An Interpretability Algorithm of Neural Network Based on Neural Support Decision Tree. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13369 LNAI, pp. 508–519). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-10986-7_41