Measuring feature importance of convolutional neural networks

7 citations · 19 Mendeley readers

This article is free to access.

Abstract

Convolutional neural networks have achieved state-of-the-art results in many classification tasks, yet the interpretability and reliability of these complicated models remain a non-negligible problem. Understanding how these networks arrive at their final decisions is becoming increasingly indispensable, so this article puts forward an interpretive method to obtain feature importance, which indicates to what extent an input feature can discriminate between different classes. The proposed method utilizes the attribution maps of multiple-class predictions and can decompose the feature importance into individual and co-variation effects. Some properties of the method are justified theoretically. Furthermore, a visualization method is proposed to sketch the silhouette of the target object, and some practical tricks are applied to improve computational efficiency. To evaluate the method, comparative experiments are performed; the results show that the proposed method can identify important features and improve the visualization effects.

Citation (APA)

Zhang, X., & Gao, J. (2020). Measuring feature importance of convolutional neural networks. IEEE Access, 8, 196062–196074. https://doi.org/10.1109/ACCESS.2020.3034625
