A moving target defense against adversarial machine learning


Abstract

Adversarial machine learning has emerged as a serious threat alongside the ubiquitous deployment of machine learning. In this paper we propose a moving target defense against adversarial machine learning: instead of hardening any single learning algorithm, we switch among a pool of machine learning algorithms to defend against adversarial attacks. We model the interaction between the attacker and the defender as a Stackelberg game and propose a switching strategy that is the Stackelberg equilibrium of the game. We evaluate our method against both rational and boundedly rational attackers, and show that a defense designed for a rational attacker suffices in most scenarios. Even under very harsh assumptions, e.g., zero attack cost and the availability of attacks that can drive a classifier's accuracy to 0, the proposed scheme achieves reasonable accuracy in the context of classification. This work also suggests that, beyond switching among algorithms, introducing randomness into tuning parameters and model choices can yield a stronger defense against adversarial machine learning.
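To make the game-theoretic idea concrete, the following is a minimal sketch (not the authors' implementation) of how a defender's switching strategy can be computed. It assumes a hypothetical payoff matrix `accuracy[i][j]` giving the accuracy of classifier i under attack j. With zero attack cost, the attacker's best response to any announced mixed strategy is simply the accuracy-minimizing attack, so the Stackelberg equilibrium coincides with the maximin solution of the resulting zero-sum matrix game, which can be found with a linear program:

```python
# Hypothetical sketch: Stackelberg (maximin) switching strategy over classifiers.
import numpy as np
from scipy.optimize import linprog

def switching_strategy(accuracy):
    """Return the defender's mixed strategy p over classifiers that maximizes
    the worst-case expected accuracy against a best-responding attacker.

    Solves:  max_v  s.t.  sum_i p_i * A[i, j] >= v for every attack j,
             sum_i p_i = 1,  p >= 0.
    """
    A = np.asarray(accuracy, dtype=float)  # shape: (n_classifiers, n_attacks)
    n, m = A.shape
    # Decision variables: [p_1, ..., p_n, v]; maximize v <=> minimize -v.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # For each attack j: -A[:, j] . p + v <= 0
    A_ub = np.hstack([-A.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # Probabilities sum to 1 (v excluded from the equality).
    A_eq = np.ones((1, n + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]  # mixed strategy, guaranteed expected accuracy

# Toy payoff matrix: each attack drives one classifier's accuracy to 0,
# mirroring the harsh setting described in the abstract.
acc = [[0.0, 0.9],   # classifier 0 collapses under attack 0
       [0.9, 0.0]]   # classifier 1 collapses under attack 1
p, v = switching_strategy(acc)
print(p, v)  # ~[0.5, 0.5]; worst-case expected accuracy ~0.45, not 0
```

The toy example illustrates the abstract's point: even when every individual classifier can be driven to 0 accuracy by some attack, randomized switching keeps the expected accuracy well above 0.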

Citation (APA)

Roy, A., Chhabra, A., Kamhoua, C. A., & Mohapatra, P. (2019). A moving target defense against adversarial machine learning. In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, SEC 2019 (pp. 383–388). Association for Computing Machinery, Inc. https://doi.org/10.1145/3318216.3363338
