Intelligent Computing Systems

  • Tsihrintzis G
  • Virvou M
  • Jain L

Abstract

Recent research has found deep neural networks to be vulnerable, in terms of prediction error, to images corrupted by small amounts of non-random noise. These images, known as adversarial examples, are created by exploiting the input-to-output mapping of the network. For the MNIST database, we observe in this paper how well the known regularization/robustness methods improve the generalization performance of deep neural networks when classifying adversarial examples and examples perturbed with random noise. We compare these methods with our proposed robustness method, an ensemble of models trained on adversarial examples, which is able to clearly reduce prediction error. Apart from the robustness experiments, human classification accuracy for adversarial examples and examples perturbed with random noise is measured. The obtained human classification accuracy is compared to the accuracy of deep neural networks measured in the same experimental setting. The results indicate that human performance does not suffer from neural network adversarial noise.
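The following is a minimal sketch of the kind of approach the abstract describes: crafting adversarial examples by exploiting the network's input-to-output mapping (here via the fast gradient sign method, one common choice) and training an ensemble of models on such examples. PyTorch and torchvision are assumed; the architecture, attack, epsilon, and training schedule are illustrative placeholders, not the paper's actual configuration.

```python
# Sketch: FGSM adversarial examples + an ensemble of adversarially trained
# MNIST classifiers. Details are assumptions, not the authors' exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_model():
    # Small fully connected net; stand-in for whatever architecture is used.
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                         nn.Linear(256, 10))

def fgsm(model, x, y, eps=0.25):
    # Fast gradient sign method: perturb the input along the sign of the
    # loss gradient, i.e. exploit the input-to-output mapping of the network.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def train_adversarial(model, loader, epochs=1):
    # Train on clean and adversarial batches (hypothetical 1:1 mix).
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y)
            opt.zero_grad()
            loss = (F.cross_entropy(model(x), y)
                    + F.cross_entropy(model(x_adv), y))
            loss.backward()
            opt.step()
    return model

def ensemble_predict(models, x):
    # Average the softmax outputs of the ensemble members.
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(0)
    return probs.argmax(dim=1)

if __name__ == "__main__":
    data = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=128, shuffle=True)
    # Ensemble of independently adversarially trained models.
    models = [train_adversarial(make_model(), loader) for _ in range(3)]
```

Averaging the ensemble members' predicted distributions is one plausible way to combine models trained on adversarial examples; the paper may combine them differently.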

Cite

APA

Tsihrintzis, G. A., Virvou, M., & Jain, L. C. (2016). Intelligent Computing Systems (pp. 1–4). https://doi.org/10.1007/978-3-662-49179-9_1
