Training Neural Networks with Random Noise Images for Adversarial Robustness


Abstract

Despite their high accuracy, deep neural networks (DNNs) are vulnerable to adversarial examples. Adversarial training is currently the mainstream defense against such examples. However, because the adversarial attacks encountered in practice are unknown in advance, this approach has a fundamental limitation: it is impossible to obtain sufficient adversarial examples for training. In this paper, we propose RanTrain, a simple training approach that augments the original DNN model and training data with a background class of random noise images, without requiring any adversarial examples. Experiments show that RanTrain works effectively across different datasets and various DNN architectures, and that it significantly increases the robustness of DNNs to adversarial examples.
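The abstract describes the core idea of RanTrain: extend the label space with one extra "background" class and add random noise images carrying that label to the training set. A minimal sketch of that data-augmentation step is shown below; the function name, the noise distribution (uniform over the input range), and the `noise_ratio` parameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def augment_with_noise_class(images, labels, num_classes,
                             noise_ratio=0.1, seed=None):
    """Append uniform-noise images labeled as a new background class.

    images: float array in [0, 1], shape (n, H, W, C)  [assumed layout]
    labels: int array, shape (n,), values in [0, num_classes)
    Returns (augmented_images, augmented_labels, new_num_classes).
    """
    rng = np.random.default_rng(seed)
    n_noise = max(1, int(len(images) * noise_ratio))
    # Random noise images drawn uniformly over the input range
    # (one of several plausible choices of noise distribution).
    noise = rng.uniform(0.0, 1.0,
                        size=(n_noise,) + images.shape[1:]).astype(images.dtype)
    # Every noise image gets the single new background label num_classes.
    noise_labels = np.full(n_noise, num_classes, dtype=labels.dtype)
    aug_images = np.concatenate([images, noise], axis=0)
    aug_labels = np.concatenate([labels, noise_labels], axis=0)
    # The model's output layer would be widened to num_classes + 1 units.
    return aug_images, aug_labels, num_classes + 1
```

The augmented dataset is then used for ordinary training; the only architectural change this sketch assumes is one extra output unit for the background class.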

Citation (APA)

Park, J. Y., Liu, L., Li, J., & Liu, J. (2021). Training Neural Networks with Random Noise Images for Adversarial Robustness. In International Conference on Information and Knowledge Management, Proceedings (pp. 3358–3362). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482205
