Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise

Abstract

Sensitivity to adversarial noise hinders the deployment of machine learning algorithms in security-critical applications. Although many adversarial defenses have been proposed, robustness to adversarial noise remains an open problem. The most compelling defense, adversarial training, requires a substantial increase in processing time and has been shown to overfit the training data. In this paper, we aim to overcome these limitations by training robust models in low-data regimes and transferring adversarial knowledge between different models. We train a meta-optimizer which learns to robustly optimize a model using adversarial examples and is able to transfer the knowledge learned to new models, without the need to generate new adversarial examples. Experimental results show the meta-optimizer performs consistently across different architectures and data sets, suggesting it is possible to automatically patch adversarial vulnerabilities.
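The core idea of the abstract, an optimizer that learns parameter updates from gradients of an adversarial loss, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released implementation: the single-step FGSM attack, the coordinate-wise LSTM (in the spirit of Andrychowicz et al.'s learned optimizers), and all names (MetaOptimizer, fgsm, robust_step, EPS) are assumptions made here for exposition.

import torch
import torch.nn as nn
import torch.nn.functional as F

EPS = 0.1  # assumed FGSM perturbation budget

def fgsm(model, x, y, eps=EPS):
    # One-step FGSM attack: perturb x along the sign of the input gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

class MetaOptimizer(nn.Module):
    # Coordinate-wise LSTM that maps each parameter's gradient to an update.
    def __init__(self, hidden=20):
        super().__init__()
        self.lstm = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad_flat, state):
        h, c = self.lstm(grad_flat.unsqueeze(-1), state)
        return self.out(h).squeeze(-1), (h, c)

def robust_step(model, meta_opt, states, x, y):
    # One robust-optimization step: craft adversarial examples, compute the
    # adversarial loss, and let the meta-optimizer propose parameter updates.
    x_adv = fgsm(model, x, y)
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():  # applying a trained meta-optimizer (transfer)
        for (name, p), g in zip(model.named_parameters(), grads):
            update, states[name] = meta_opt(g.flatten(), states.get(name))
            p.add_(update.view_as(p))
    return loss

In the paper's setting, the LSTM's own weights would be meta-trained by unrolling several such steps and backpropagating the accumulated adversarial loss; once trained, the same meta-optimizer could be applied to a new model without crafting new adversarial examples, which is the transfer the abstract describes.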

Citation (APA)

Serban, A., Poll, E., & Visser, J. (2020). Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12396 LNCS, pp. 467–478). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61609-0_37
