Real World Robustness from Systematic Noise

Abstract

Systematic error, which is not determined by chance, often refers to the inaccuracy (in either the observation or the measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequently occurring adversarial examples caused by systematic error. More specifically, we find that a trained neural network classifier can be fooled by inconsistent implementations of image decoding and resizing. These tiny implementation differences often cause an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose the ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find that a standard ResNet-50 trained on ImageNet can show a 1%∼5% accuracy difference due to systematic error. Together, our evaluation and dataset may aid future work toward real-world robustness and practical generalization.
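The kind of discrepancy the abstract describes can be illustrated with a minimal sketch (an assumed scenario, not the paper's exact pipeline): two common nearest-neighbor resize conventions disagree on which source pixel a destination pixel maps to, so the same input yields different tensors at training time and deployment time.

```python
# Hedged sketch: two nearest-neighbor resize conventions that real image
# libraries differ on. Function names and the toy 1-D "image" are illustrative.

def resize_nn_floor(row, out_w):
    # Convention A: destination x maps to floor(x * src_w / out_w).
    src_w = len(row)
    return [row[(x * src_w) // out_w] for x in range(out_w)]

def resize_nn_half_pixel(row, out_w):
    # Convention B: half-pixel centers, dst x maps to
    # round((x + 0.5) * scale - 0.5), clamped to the valid range.
    src_w = len(row)
    scale = src_w / out_w
    out = []
    for x in range(out_w):
        src_x = int((x + 0.5) * scale - 0.5 + 0.5)  # round to nearest
        out.append(row[min(max(src_x, 0), src_w - 1)])
    return out

row = [10, 20, 30, 40, 50, 60, 70, 80]  # one image row, 8 pixels
a = resize_nn_floor(row, 3)        # -> [10, 30, 60]
b = resize_nn_half_pixel(row, 3)   # -> [20, 50, 70]
print(a, b, a == b)
```

Both conventions are "correct" resizes, yet they sample different source pixels; a classifier trained on one convention and deployed with the other sees systematically shifted inputs, which is the failure mode ImageNet-S is designed to measure.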

Citation (APA)

Wang, Y., Li, Y., Gong, R., Xiao, T., & Yu, F. (2021). Real World Robustness from Systematic Noise. In AdvM 2021 - Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia, co-located with ACM MM 2021 (pp. 42–48). Association for Computing Machinery, Inc. https://doi.org/10.1145/3475724.3483607
