Deep neural networks have achieved state-of-the-art performance in many fields, including image classification. However, recent studies show that these models are vulnerable to adversarial examples, formed by adding small but intentional perturbations to clean inputs. In this paper, we introduce an effective defense against adversarial examples. The key idea is to leverage a super-resolution coding (SR-coding) network to remove adversarial noise from inputs. To further strengthen this defense, we propose a novel hybrid approach that combines SR-coding with adversarial training to produce robust neural networks. Experiments on benchmark datasets demonstrate the effectiveness of our method against both state-of-the-art white-box and black-box attacks. The proposed approach significantly improves defense performance, achieving up to a 41.26% accuracy improvement with ResNet18 under the PGD white-box attack.
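For context on the threat model the abstract refers to, the following is a minimal sketch of a PGD (projected gradient descent) white-box attack on a toy differentiable classifier. The tiny logistic model, parameter names, and step sizes are illustrative assumptions, not part of the paper; the paper's SR-coding defense and ResNet18 experiments are not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Craft an adversarial example inside an L-infinity ball of radius eps.

    Illustrative sketch: a white-box attacker uses the exact input gradient
    of the loss, ascends it, and projects back onto the eps-ball around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of the binary cross-entropy loss w.r.t. the input
        # for a logistic model p = sigmoid(x @ w + b).
        p = sigmoid(x_adv @ w + b)
        grad = (p - y) * w
        # Take a signed gradient step, then clip back into the eps-ball.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy data (purely illustrative).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0  # true label

x_adv = pgd_attack(x, y, w, b)
```

The perturbation stays within the eps-ball while lowering the model's confidence in the true class; a preprocessing defense like the paper's SR-coding aims to remove exactly this kind of small, structured noise before classification.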
CITATION STYLE
Chen, Y., Cai, L., Cheng, W., & Wang, H. (2020). Super-resolution coding defense against adversarial examples. In ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval (pp. 189–197). Association for Computing Machinery, Inc. https://doi.org/10.1145/3372278.3390689