A Protection against the Extraction of Neural Network Models

Abstract

Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. We introduce a protection that adds parasitic layers, which keep the underlying NN's predictions mostly unchanged while complicating the task of reverse engineering. Our countermeasure relies on approximating a noisy identity mapping with a Convolutional NN. We explain why introducing new parasitic layers complicates the attacks, and we report experiments on the performance and accuracy of the protected NN.
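The core idea, as the abstract describes it, is a layer that approximates a noisy identity mapping: its output is close to its input, so predictions barely change, yet the extra weights obscure the original architecture. The sketch below is a hypothetical minimal illustration (not the paper's exact construction), using a 3×3 convolution whose kernel is a centered delta plus small noise:

```python
import numpy as np

def parasitic_conv(x, noise_scale=1e-3, rng=None):
    """Apply a 3x3 convolution whose kernel is an identity (delta)
    kernel plus small random noise, so the layer approximately
    preserves its input while adding extra weights to the network.
    Hypothetical sketch; the paper's actual construction may differ."""
    rng = np.random.default_rng(0) if rng is None else rng
    k = np.zeros((3, 3))
    k[1, 1] = 1.0                                    # identity (delta) kernel
    k += noise_scale * rng.standard_normal((3, 3))   # parasitic noise
    h, w = x.shape
    xp = np.pad(x, 1)                                # zero padding keeps the spatial size
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

# Inserting such a layer between two existing layers leaves activations
# nearly unchanged, so downstream predictions are mostly preserved.
```

With a small `noise_scale`, the output of `parasitic_conv` differs from its input only slightly, which is what lets the protected network keep its accuracy while presenting an attacker with a different, larger set of layers to reverse engineer.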

Citation (APA)

Chabanne, H., Despiegel, V., & Guiga, L. (2021). A Protection against the Extraction of Neural Network Models. In International Conference on Information Systems Security and Privacy (pp. 258–269). Science and Technology Publications, Lda. https://doi.org/10.5220/0010373302580269
