Transferability of Fast Gradient Sign Method

Abstract

Image classification in computer vision is the task of assigning an image to a class based on its content. While classifying an object is trivial for humans, robust image classification remains a challenge in computer vision applications, and the robustness of such models in real-world applications is a major concern. Adversarial examples are specialized inputs crafted to confuse a classifier into misclassifying a given input. Some adversarial examples are indistinguishable from the original input to humans, yet the classifier can still be tricked into outputting the wrong class. In some cases, adversarial examples transfer: an adversarial example crafted against one model also fools another model. In this paper we evaluate the transferability of adversarial examples crafted with the Fast Gradient Sign Method (FGSM) across models available in the open-source TensorFlow machine learning platform (ResNetV2, DenseNet, MobileNetV2, and InceptionV3).
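
For reference, FGSM perturbs an input in the direction of the sign of the loss gradient: x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The sketch below illustrates the transferability setup the abstract describes, using TensorFlow's Keras applications API; the model pairing, epsilon value, class index, and placeholder input are illustrative assumptions, not the paper's experimental configuration.

import tensorflow as tf

# Minimal FGSM transferability sketch (illustrative, not the authors' code).
# The adversarial example is crafted on a source model and then evaluated
# on a different target model to see whether the attack transfers.

loss_object = tf.keras.losses.CategoricalCrossentropy()

def fgsm_sign(model, image, label):
    # FGSM uses the sign of the loss gradient w.r.t. the input pixels.
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_object(label, model(image))
    return tf.sign(tape.gradient(loss, image))

# Source and target chosen so both expect 224x224 inputs in [-1, 1].
source = tf.keras.applications.MobileNetV2(weights="imagenet")
target = tf.keras.applications.ResNet50V2(weights="imagenet")

# Placeholder input: in practice this would be a preprocessed ImageNet image.
image = tf.random.uniform((1, 224, 224, 3), minval=-1.0, maxval=1.0)
label = tf.one_hot([208], 1000)  # hypothetical true class index

epsilon = 0.05  # perturbation budget (illustrative value)
adv = tf.clip_by_value(image + epsilon * fgsm_sign(source, image, label),
                       -1.0, 1.0)

# The attack transfers if the target model, which contributed no gradients,
# also misclassifies the perturbed input.
print("target prediction on clean:", tf.argmax(target(image), axis=1).numpy())
print("target prediction on adv  :", tf.argmax(target(adv), axis=1).numpy())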

Citation (APA)

Muncsan, T., & Kiss, A. (2021). Transferability of Fast Gradient Sign Method. In Advances in Intelligent Systems and Computing (Vol. 1251 AISC, pp. 23–34). Springer. https://doi.org/10.1007/978-3-030-55187-2_3
