Preventing Neural Network Weight Stealing via Network Obfuscation

Abstract

Deep neural networks are robust to minor perturbations of their learned parameters: small modifications of the weights do not change the overall network response significantly. This leaves room for model stealing, where a malevolent attacker can take an already trained network, slightly modify the weights, and claim the resulting network as his own intellectual property. In certain cases this risk can prevent the free distribution and application of networks in the embedded domain. In this paper, we propose a method for creating an equivalent version of an already trained fully connected deep neural network that can prevent network stealing: it produces the same responses and classification accuracy, but it is extremely sensitive to weight changes.
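The core idea of the abstract can be illustrated with a toy sketch (this is only an illustration of the general principle, not the authors' actual obfuscation method): a trained linear layer is decomposed into pairs of neurons whose large random components cancel exactly, so the obfuscated network gives identical responses, yet a tiny relative perturbation of any one part is amplified by the magnitude of the hidden cancellation term. All names and scales below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" linear layer: y = W @ x (4 inputs -> 3 outputs).
W = rng.normal(size=(3, 4))

# Obfuscation sketch: split each neuron into two half-neurons with weights
# (W + C) and (-C), where C is a large random cancellation matrix.
# Summing the pair reproduces the original response exactly.
C = rng.normal(scale=1e4, size=W.shape)
W_half1 = W + C   # first half-neuron bank
W_half2 = -C      # second half-neuron bank, cancels C

x = rng.normal(size=4)

y_original = W @ x
y_obfuscated = W_half1 @ x + W_half2 @ x  # identical response (up to float error)

# A 0.1% relative perturbation of one bank, as a weight-stealing attacker
# might apply, is scaled by the large hidden component C and wrecks the output.
noise = 1e-3 * W_half1 * rng.normal(size=W.shape)
y_perturbed = (W_half1 + noise) @ x + W_half2 @ x

print(np.allclose(y_original, y_obfuscated))
print(np.abs(y_perturbed - y_original).max())
```

Because the perturbation is relative to the weight magnitudes, the attacker cannot avoid touching the large cancelling terms, which is the fragility the paper exploits.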

Citation (APA)

Szentannai, K., Al-Afandi, J., & Horváth, A. (2020). Preventing Neural Network Weight Stealing via Network Obfuscation. In Advances in Intelligent Systems and Computing (Vol. 1230 AISC, pp. 1–11). Springer. https://doi.org/10.1007/978-3-030-52243-8_1
