MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks

Abstract

Despite the immense success of deep neural networks, their applicability is limited because they can be fooled by adversarial examples, which are generated by adding visually imperceptible, structured perturbations to the original image. Semantic segmentation is required in several visual recognition tasks, but unlike image classification, only a few studies have addressed attacks on semantic segmentation networks. Existing semantic segmentation adversarial attacks employ gradient-based loss functions that are defined using only the last layer of the network for gradient backpropagation. However, some components of semantic segmentation networks, such as multi-scale analysis, implicitly mitigate several adversarial attacks, causing the existing attacks to perform poorly. This motivates us to introduce a new attack in this paper, MLAttack (Multiple Layers Attack). It carefully selects several layers and uses them to define a loss function for a gradient-based adversarial attack on semantic segmentation architectures. Experiments conducted on a publicly available dataset with state-of-the-art segmentation network architectures demonstrate that MLAttack performs better than existing state-of-the-art semantic segmentation attacks.
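The abstract does not give the exact loss, so the following is only a minimal sketch of the general idea in PyTorch: a PGD-style untargeted attack whose objective combines a final-layer cross-entropy term with feature-deviation terms at hand-picked intermediate layers, so that gradients are backpropagated from multiple depths rather than the last layer alone. The multilayer_attack function, the MSE feature term, the layers dict, and the step sizes are all illustrative assumptions, not the paper's formulation; the model is assumed to map images directly to per-pixel logits.

```python
import torch
import torch.nn.functional as F

def multilayer_attack(model, image, layers, eps=8/255, alpha=2/255, steps=40):
    """Untargeted PGD-style attack whose loss is defined over several
    layers, not just the final logits. `layers` maps names to modules
    whose outputs are included in the loss (illustrative assumption)."""
    model.eval()
    feats = {}

    def make_hook(name):
        def hook(_module, _inp, out):
            feats[name] = out
        return hook

    handles = [m.register_forward_hook(make_hook(n)) for n, m in layers.items()]

    # Clean forward pass: reference features and the labels to attack.
    with torch.no_grad():
        clean_logits = model(image)
        target = clean_logits.argmax(dim=1)              # predicted segmentation
        clean_feats = {n: f.detach() for n, f in feats.items()}

    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)
        # Final-layer term: push predictions away from the clean labels.
        loss = F.cross_entropy(logits, target)
        # Intermediate-layer terms: push features away from their clean values.
        for n in layers:
            loss = loss + F.mse_loss(feats[n], clean_feats[n])
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()              # gradient ascent step
            adv = image + (adv - image).clamp(-eps, eps) # project to eps-ball
            adv = adv.clamp(0, 1)

    for h in handles:
        h.remove()
    return adv.detach()
```

With a torchvision DeepLabV3 model, `layers` might be, for example, {"layer3": model.backbone.layer3, "layer4": model.backbone.layer4}; in practice, the paper's contribution is precisely how the attacked layers are selected, which this sketch leaves to the caller.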

Cite

APA
Gupta, P., & Rahtu, E. (2019). MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11824 LNCS, pp. 401–413). Springer. https://doi.org/10.1007/978-3-030-33676-9_28
