Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations

Abstract

We study the effect of adversarial perturbations of images on the disparity estimates of deep learning models trained for stereo. We show that imperceptible additive perturbations can significantly alter the disparity map, and correspondingly the perceived geometry of the scene. These perturbations not only affect the specific model they are crafted for, but also transfer to models with different architectures trained with different loss functions. We show that, when used for adversarial data augmentation, our perturbations result in models that are more robust without sacrificing overall accuracy. This is unlike what has been observed in image classification, where adding perturbed images to the training set makes the model less vulnerable to adversarial perturbations, but at the expense of overall accuracy. We test our method on the most recent stereo networks and evaluate their performance on public benchmark datasets.
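The abstract does not spell out how such perturbations are computed. As a rough illustration of the general idea of an imperceptible additive perturbation that degrades a predicted disparity map, the sketch below applies a single signed-gradient (FGSM-style) step to a stereo pair; the names `stereo_model`, `left`, `right`, and `gt_disparity` are hypothetical placeholders, and the authors' actual attack may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_stereo_attack(stereo_model, left, right, gt_disparity, epsilon=0.02):
    """One signed-gradient step that perturbs both stereo images to increase
    disparity error. Assumes images are tensors normalized to [0, 1]."""
    left = left.clone().detach().requires_grad_(True)
    right = right.clone().detach().requires_grad_(True)

    disparity = stereo_model(left, right)          # predicted disparity map
    loss = F.l1_loss(disparity, gt_disparity)      # error w.r.t. ground truth
    loss.backward()

    # Step in the direction that increases the disparity error, bounded by
    # epsilon so the additive perturbation stays visually imperceptible.
    left_adv = (left + epsilon * left.grad.sign()).clamp(0.0, 1.0).detach()
    right_adv = (right + epsilon * right.grad.sign()).clamp(0.0, 1.0).detach()
    return left_adv, right_adv
```

For the adversarial data augmentation described above, perturbed pairs produced this way would be added alongside the clean pairs during training; the paper's precise attack and training procedure may differ from this sketch.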

Citation (APA)

Wong, A., Mundhra, M., & Soatto, S. (2021). Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 4A, pp. 2879–2888). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i4.16394
