Large model sizes, high computational cost, and vulnerability to membership inference attacks (MIA) have impeded the adoption of deep learning and deep neural networks (DNNs), especially on mobile devices. To address these challenges, we envision that the weight pruning technique can help defend DNNs against MIA while reducing model storage and computational cost. In this work, we propose a pruning algorithm and show that it can find a subnetwork that prevents privacy leakage from MIA while achieving accuracy competitive with the original DNN. We also verify our theoretical insights with experiments. Our experimental results show that the attack accuracy against the compressed model is up to 13.6% and 10% lower than that of the baseline and the Min-Max game, respectively.
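To make the general idea concrete, the following is a minimal sketch of generic magnitude-based weight pruning in PyTorch. It is not the paper's specific pruning algorithm; the toy architecture, the 50% sparsity level, and the use of `torch.nn.utils.prune` are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: generic magnitude-based weight pruning in PyTorch.
# This illustrates weight pruning in general, NOT the paper's algorithm.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example network (architecture is an assumption, for illustration only).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Prune 50% of the smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Report the resulting weight sparsity of the subnetwork.
weights = [p for p in model.parameters() if p.dim() > 1]
sparsity = sum((w == 0).sum().item() for w in weights) / sum(w.numel() for w in weights)
print(f"Weight sparsity: {sparsity:.2%}")
```

The pruned subnetwork would typically be fine-tuned afterward; the paper's claim is that such subnetworks can reduce MIA accuracy while retaining competitive task accuracy.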
Wang, Y., Wang, C., Wang, Z., Zhou, S., Liu, H., Bi, J., … Rajasekaran, S. (2021). Against Membership Inference Attack: Pruning is All You Need. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3141–3147). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/432