ML-LOO: Detecting adversarial examples with feature attribution

67 citations · 95 Mendeley readers

Abstract

Deep neural networks achieve state-of-the-art performance on a wide range of tasks. However, they are easily fooled by adding a small adversarial perturbation to the input, one that is often imperceptible to humans on image data. We observe a significant difference in feature attributions between adversarially crafted examples and original examples. Based on this observation, we introduce a new framework that detects adversarial examples by thresholding a scale estimate of the feature attribution scores. Furthermore, we extend our method to multi-layer feature attributions in order to tackle attacks with mixed confidence levels. As demonstrated in extensive experiments, our method outperforms state-of-the-art detection methods in distinguishing adversarial examples generated by popular attack methods on a variety of real data sets. In particular, our method detects adversarial examples of mixed confidence levels and transfers between different attack methods. We also show that our method achieves competitive performance even when the attacker has complete access to the detector.
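The core idea is concrete enough to sketch: compute a leave-one-out (LOO) attribution for each input feature, summarize the attribution map with a scale (dispersion) statistic, and flag inputs whose statistic exceeds a threshold tuned on clean data. The sketch below is illustrative rather than the authors' implementation: it assumes a generic `model` callable mapping a batch of inputs to class probabilities, and the use of the interquartile range as the scale estimate is one natural reading of the "scale estimate" mentioned in the abstract.

```python
import numpy as np

def loo_attribution(model, x, target_class):
    """Leave-one-out attribution: the drop in the predicted probability of
    `target_class` when each input feature is masked to a reference value (zero)."""
    base = model(x[None])[0, target_class]
    x_masked = x.copy()
    flat = x_masked.reshape(-1)            # flat view into the working copy
    scores = np.empty(flat.size)
    for i in range(flat.size):
        saved = flat[i]
        flat[i] = 0.0                      # mask one feature
        scores[i] = base - model(x_masked[None])[0, target_class]
        flat[i] = saved                    # restore before masking the next feature
    return scores

def attribution_dispersion(scores):
    """Scale estimate of the attribution map (here: interquartile range)."""
    q75, q25 = np.percentile(scores, [75, 25])
    return q75 - q25

def flag_adversarial(model, x, threshold):
    """Flag x as adversarial if its attribution map is unusually dispersed."""
    pred = int(np.argmax(model(x[None])[0]))
    return attribution_dispersion(loo_attribution(model, x, pred)) > threshold
```

The multi-layer extension described in the abstract would, under the same reading, compute analogous dispersion statistics on attributions of intermediate-layer activations and combine them into the final detector.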

Citation (APA)

Yang, P., Chen, J., Hsieh, C. J., Wang, J. L., & Jordan, M. I. (2020). ML-LOO: Detecting adversarial examples with feature attribution. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6639–6647). AAAI press. https://doi.org/10.1609/aaai.v34i04.6140
