Convolutional neural networks achieve state-of-the-art results in many classification tasks, yet the interpretability and reliability of these complex models remain a non-negligible problem. Understanding how such networks arrive at their decisions is increasingly indispensable, so this article puts forward an interpretive method for measuring feature importance, which indicates the extent to which an input feature discriminates between classes. The proposed method utilizes the attribution maps of multiple class predictions and decomposes feature importance into individual and co-variation effects. Several properties of the method are justified theoretically. Furthermore, a visualization method is proposed to sketch the silhouette of the target object, and practical tricks are applied to improve computational efficiency. To evaluate the method, comparative experiments are performed; the results show that the proposed method identifies important features and improves visualization quality.
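As a concrete illustration of the attribution-and-decomposition idea described above, the sketch below computes one attribution map per class and splits a combined importance score into individual and co-variation terms. It is a minimal sketch under assumed choices: plain input gradients stand in for whatever attribution method the paper actually uses, and the algebraic split (Σ_c a_c)² = Σ_c a_c² + Σ_{c≠c'} a_c a_{c'} is only one illustrative way to separate individual effects from cross-class co-variation. All function names here are hypothetical, not the authors' API.

```python
import torch
import torch.nn as nn

def class_attributions(model, x, num_classes):
    """Per-class attribution maps via plain input gradients.

    Returns a tensor of shape (num_classes, *x.shape[1:]) whose c-th
    entry is d logit_c / d x. Any attribution method that produces one
    map per class could be substituted here.
    """
    x = x.clone().requires_grad_(True)
    maps = []
    for c in range(num_classes):
        logits = model(x)                      # recompute the graph each pass
        grad, = torch.autograd.grad(logits[0, c], x)
        maps.append(grad[0].detach())
    return torch.stack(maps)                   # (C, ...)

def decompose_importance(attr_maps):
    """Split (sum_c a_c)^2 into individual and co-variation effects.

    individual:  sum_c a_c^2            (each class's own contribution)
    covariation: sum_{c != c'} a_c a_c' (cross-class interaction terms)
    """
    individual = (attr_maps ** 2).sum(dim=0)
    total = attr_maps.sum(dim=0) ** 2
    covariation = total - individual
    return individual, covariation

if __name__ == "__main__":
    # Tiny CNN classifier used only to exercise the functions above.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    x = torch.randn(1, 3, 32, 32)              # one RGB image
    maps = class_attributions(model, x, num_classes=10)
    individual, covariation = decompose_importance(maps)
    importance = individual + covariation      # equals maps.sum(0) ** 2
    print(importance.shape)                    # torch.Size([3, 32, 32])
```

The identity importance == individual + covariation holds exactly in this sketch, so the two terms can be inspected separately to see whether a feature matters through any single class's attribution or through agreement across the class attributions.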
Citation: Zhang, X., & Gao, J. (2020). Measuring feature importance of convolutional neural networks. IEEE Access, 8, 196062–196074. https://doi.org/10.1109/ACCESS.2020.3034625