Learning Interpretable Features via Adversarially Robust Optimization

Abstract

Neural networks have proven remarkably successful for classification and diagnosis in medical applications. However, the opacity of their decision-making process and the interpretability of the learned features remain a concern. In this work, we propose a method for improving the feature interpretability of neural network classifiers. We first establish a baseline convolutional neural network with state-of-the-art performance in terms of accuracy and weakly supervised localization. We then modify the loss to integrate robustness to adversarial examples into the training process. Feature interpretability is quantified by evaluating the weakly supervised localization against ground-truth bounding boxes, and is also assessed visually using class activation maps and saliency maps. The method is applied to NIH ChestX-ray14, the largest publicly available chest X-ray dataset. We demonstrate that the adversarially robust optimization paradigm improves feature interpretability both quantitatively and visually.
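
The robust optimization described in the abstract replaces the standard training loss with a loss computed on adversarially perturbed inputs. Below is a minimal PyTorch sketch of this idea using PGD-based adversarial training; the function names, hyperparameters (eps, alpha, steps), and the use of binary cross-entropy for the multi-label ChestX-ray14 task are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.005, alpha=0.001, steps=10):
    """Projected gradient descent: find a perturbation within an
    L-infinity ball of radius eps that maximizes the classification loss.
    All hyperparameter values here are illustrative."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(model(x + delta), y)
        loss.backward()
        # Ascend along the gradient sign, then project back into the eps-ball.
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

def robust_step(model, optimizer, x, y):
    """One adversarially robust training step: the loss is computed on
    adversarial examples, so the network is optimized against the
    worst-case perturbation found by the inner PGD loop."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.binary_cross_entropy_with_logits(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```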

Citation (APA)

Khakzar, A., Albarqouni, S., & Navab, N. (2019). Learning Interpretable Features via Adversarially Robust Optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11769 LNCS, pp. 793–800). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-32226-7_88
