Adversarial Attack Defense Based on the Deep Image Prior Network

Abstract

Several recent studies have shown that artificial intelligence (AI) systems can be made to malfunction by deliberately crafted data entering through normal input channels. For example, a carefully crafted sticker attached to a traffic sign can cause a self-driving car to misinterpret the sign's meaning. Such deliberately crafted inputs that cause an AI system to misjudge are called adversarial examples. The problem is that current AI systems are not robust enough to defend against adversarial examples when an attacker uses them to attack the system. Consequently, much research on detecting and removing adversarial examples is under way. In this paper, we propose the use of the deep image prior (DIP) network as a defense against adversarial examples, using only the adversarially perturbed image itself. This is in contrast with other neural-network-based adversarial noise removal methods, which require many pairs of adversarial and clean images to train the network. Experimental results demonstrate the validity of the proposed approach.
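To illustrate the general DIP idea the abstract relies on, the sketch below fits an untrained convolutional network to a single adversarial image and uses early stopping so that the natural image content is reconstructed before the adversarial perturbation is fitted. This is a minimal illustration assuming PyTorch; the small encoder-decoder, the iteration budget, and the learning rate are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

```python
# Minimal deep-image-prior (DIP) restoration sketch (assumed setup, not the paper's exact model).
import torch
import torch.nn as nn

class SmallDIPNet(nn.Module):
    """Small convolutional encoder-decoder used as the image prior (illustrative)."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(width, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def dip_restore(adv_image, num_iters=500, lr=0.01):
    """Fit the network to the adversarial image from a fixed random input.

    Early stopping (num_iters) is the key: an untrained network tends to fit
    the natural image structure before it fits the adversarial noise, so the
    intermediate output serves as the restored image. No clean training data
    is needed; only the single adversarial image is used.
    """
    device = adv_image.device
    net = SmallDIPNet(channels=adv_image.shape[1]).to(device)
    z = torch.randn_like(adv_image) * 0.1          # fixed random code input
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()

    for _ in range(num_iters):
        optimizer.zero_grad()
        out = net(z)
        loss = loss_fn(out, adv_image)             # fit the adversarial image
        loss.backward()
        optimizer.step()

    return net(z).detach()                         # restored image estimate

# Usage sketch: restore one adversarial image (batch of 1, 3x224x224) and
# feed the restored estimate to the downstream classifier instead.
# adv = torch.rand(1, 3, 224, 224)                 # placeholder adversarial input
# clean_estimate = dip_restore(adv)
```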

CITATION STYLE

APA

Sutanto, R. E., & Lee, S. (2020). Adversarial Attack Defense Based on the Deep Image Prior Network. In Lecture Notes in Electrical Engineering (Vol. 621, pp. 519–526). Springer. https://doi.org/10.1007/978-981-15-1465-4_51
