EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks

12 Citations
19 Readers (Mendeley)

Abstract

Edge intelligence plays an important role in constructing smart cities, but the vulnerability of edge nodes to adversarial attacks has become an urgent problem. A so-called adversarial example can fool the deep learning model on an edge node into misclassification. Owing to the transferability property of adversarial examples, an adversary can easily fool a black-box model with a locally trained substitute model. Edge nodes generally have limited resources and cannot afford defense mechanisms as complicated as those deployed in a cloud data center. To address this challenge, we propose a dynamic defense mechanism, EI-MTD. It first obtains small, robust member models through differential knowledge distillation from a complicated teacher model in a cloud data center. Then a dynamic scheduling policy based on a Bayesian Stackelberg game governs the choice of a target model for service. This dynamic scheduling prevents the adversary from selecting an optimal substitute model for black-box attacks. Extensive experiments show that EI-MTD effectively protects edge intelligence against adversarial attacks in black-box settings.
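To make the two ingredients of the abstract concrete, the sketch below shows (1) the standard temperature-scaled knowledge-distillation objective, by which a small student model is trained to match a teacher's softened outputs, and (2) a probabilistic moving-target scheduling step that samples which member model serves the next request. This is a minimal illustration, not the paper's implementation: the "differential" diversity-promoting term of EI-MTD's distillation loss is omitted, and the scheduling probabilities, which in EI-MTD would come from the Bayesian Stackelberg equilibrium, are simply taken as an input here.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    # KL divergence between the teacher's and the student's softened
    # output distributions -- the core knowledge-distillation objective.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def schedule_target(probabilities, rng=None):
    # Moving-target step: randomly pick which member model answers the
    # next query, so the adversary cannot fix an optimal substitute.
    rng = rng or random
    return rng.choices(range(len(probabilities)),
                       weights=probabilities, k=1)[0]
```

For example, `distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])` is zero (a perfectly matched student incurs no loss), and `schedule_target([0.2, 0.5, 0.3])` returns the index of one of three member models, with the middle model chosen most often over many requests.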

Citation (APA)
Qian, Y., Guo, Y., Shao, Q., Wang, J., Wang, B., Gu, Z., … Wu, C. (2022). EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks. ACM Transactions on Privacy and Security, 25(3). https://doi.org/10.1145/3517806
