Challenging the Adversarial Robustness of DNNs Based on Error-Correcting Output Codes

Abstract

The existence of adversarial examples and the ease with which they can be generated raise several security concerns with regard to deep learning systems, pushing researchers to develop suitable defence mechanisms. The use of networks adopting error-correcting output codes (ECOC) has recently been proposed to counter the creation of adversarial examples in a white-box setting. In this paper, we carry out an in-depth investigation of the adversarial robustness achieved by the ECOC approach. We do so by proposing a new adversarial attack specifically designed for multi-label classification architectures, such as the ECOC-based one, and by applying two existing attacks. In contrast to previous findings, our analysis reveals that ECOC-based networks can be attacked quite easily by introducing a small adversarial perturbation. Moreover, the adversarial examples can be generated in such a way as to achieve high probabilities for the predicted target class, hence making it difficult to use the prediction confidence to detect them. Our findings are supported by experimental results obtained on the MNIST, CIFAR-10, and GTSRB classification tasks.
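The abstract does not spell out the attack itself, but the general idea of driving an ECOC classifier toward a chosen target class with high confidence can be illustrated with a generic PGD-style targeted attack. The sketch below is not the paper's method: the toy network, the ±1 codewords, the correlation-plus-softmax decoding, and the hyperparameters (eps, alpha, steps) are assumptions made purely for illustration.

```python
# Hypothetical sketch: a PGD-style targeted attack on an ECOC classifier.
# Assumptions (not taken from the paper): the network outputs a tanh-activated
# code vector, class scores are correlations with fixed +/-1 codewords, and a
# softmax over those scores gives the class "probabilities".
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes, code_len = 10, 16
codewords = torch.randint(0, 2, (n_classes, code_len)).float() * 2 - 1  # +/-1 codes

class ToyECOCNet(nn.Module):
    """Stand-in network mapping images to a tanh code vector."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                                      nn.ReLU(), nn.Linear(128, code_len))

    def forward(self, x):
        code = torch.tanh(self.backbone(x))   # predicted code in [-1, 1]
        return code @ codewords.t()           # correlation with each codeword

def pgd_targeted(model, x, target, eps=0.3, alpha=0.01, steps=100):
    """Push x toward the target class within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)  # low loss = high target prob.
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # targeted step (descend loss)
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()

model = ToyECOCNet().eval()
x = torch.rand(1, 1, 28, 28)                  # dummy MNIST-sized input
target = torch.tensor([3])                    # desired (wrong) class
x_adv = pgd_targeted(model, x, target)
print(F.softmax(model(x_adv), dim=1)[0, 3])   # confidence assigned to the target class
```

In a white-box setting like the one considered in the paper, the loss can be shaped to maximise the probability of the target class directly, which is what the cross-entropy term stands in for here; the paper's own attack and loss formulation may differ.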


Citation (APA)

Zhang, B., Tondi, B., Lv, X., & Barni, M. (2020). Challenging the Adversarial Robustness of DNNs Based on Error-Correcting Output Codes. Security and Communication Networks, 2020. https://doi.org/10.1155/2020/8882494
