SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries

Abstract

Given black-box access to the prediction API, model extraction attacks can steal the functionality of models deployed in the cloud. In this paper, we introduce the SEAT detector, which detects black-box model extraction attacks so that the defender can terminate malicious accounts. SEAT uses a similarity encoder trained via adversarial training. Using this encoder, SEAT detects accounts whose queries indicate a model extraction attack in progress and cancels those accounts. We evaluate our defense against existing model extraction attacks and against new adaptive attacks introduced in this paper. Our results show that, even against adaptive attackers, SEAT increases the cost of model extraction attacks by 3.8 to 16 times.
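
The abstract only sketches the mechanism, so below is a minimal illustrative sketch in Python of how such a detector might operate. Everything here is an assumption for illustration, not the authors' implementation: `encode` stands in for the paper's adversarially trained similarity encoder (replaced by a fixed random projection so the sketch runs end to end), and the thresholds `dist_threshold` and `max_similar_pairs` are hypothetical values.

```python
# Illustrative sketch of a SEAT-style detector: flag an account once too
# many of its queries are near-duplicates in an encoded similarity space.
# The encoder and all thresholds here are placeholder assumptions.
import numpy as np

RNG = np.random.default_rng(0)

# Stand-in for the adversarially trained similarity encoder: a fixed
# random projection followed by unit normalization, used only so the
# sketch is self-contained and runnable.
PROJECTION = RNG.standard_normal((784, 64))

def encode(query: np.ndarray) -> np.ndarray:
    """Map a query (e.g. a flattened image) into the encoder's space."""
    z = query @ PROJECTION
    return z / np.linalg.norm(z)  # unit norm so L2 distances are comparable

class SeatDetector:
    """Count pairs of similar queries per account; cancel an account
    once the count exceeds a (hypothetical) budget."""

    def __init__(self, dist_threshold: float = 0.5, max_similar_pairs: int = 50):
        self.dist_threshold = dist_threshold
        self.max_similar_pairs = max_similar_pairs
        self.history: dict[str, list[np.ndarray]] = {}
        self.similar_counts: dict[str, int] = {}

    def observe(self, account: str, query: np.ndarray) -> bool:
        """Record one query; return True if the account should be canceled."""
        z = encode(query)
        past = self.history.setdefault(account, [])
        # Count how many earlier queries from this account land close by.
        hits = sum(np.linalg.norm(z - p) < self.dist_threshold for p in past)
        past.append(z)
        self.similar_counts[account] = self.similar_counts.get(account, 0) + hits
        return self.similar_counts[account] > self.max_similar_pairs

# Usage: an extraction-style account issues small perturbations of one
# seed point, which cluster tightly in the encoded space and get flagged.
detector = SeatDetector()
seed = RNG.standard_normal(784)
flagged = False
for _ in range(200):
    flagged = detector.observe("attacker", seed + 0.01 * RNG.standard_normal(784))
    if flagged:
        break
print("attacker flagged:", flagged)
```

Counting similar pairs per account, rather than inspecting queries in isolation, mirrors the intuition behind the defense: extraction attacks tend to issue many near-duplicate queries (for example, adversarial perturbations of the same seed) that cluster tightly in the encoder's space, while benign traffic does not.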

Citation (APA)

Zhang, Z., Chen, Y., & Wagner, D. (2021). SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries. In AISec 2021 - Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, co-located with CCS 2021 (pp. 37–48). Association for Computing Machinery, Inc. https://doi.org/10.1145/3474369.3486863
