PAR-GAN: Improving the Generalization of Generative Adversarial Networks against Membership Inference Attacks

Abstract

Recent work has shown that Generative Adversarial Networks (GANs) may generalize poorly and are thus vulnerable to privacy attacks. In this paper, we seek to improve the generalization of GANs from the perspective of privacy protection, specifically by defending against the membership inference attack (MIA), which aims to infer whether a particular sample was used for model training. We design a GAN framework, partition GAN (PAR-GAN), which consists of one generator and multiple discriminators trained over disjoint partitions of the training data. The key idea of PAR-GAN is to reduce the generalization gap by approximating a mixture distribution over all partitions of the training data. Our theoretical analysis shows that PAR-GAN can achieve global optimality just as the original GAN does. Our experimental results on simulated data and multiple popular datasets demonstrate that PAR-GAN improves the generalization of GANs while mitigating the information leakage induced by MIA.
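The architecture described in the abstract (one generator trained adversarially against several discriminators, each holding a disjoint partition of the training data, with the generator driven toward a mixture of the partition distributions) can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch: the toy data, network sizes, number of partitions, and the averaging of the generator's adversarial loss across discriminators are assumptions made for illustration, not the authors' reference implementation.

```python
# Minimal sketch of the PAR-GAN idea: one generator, K discriminators,
# each discriminator trained only on its own disjoint data partition.
# Sizes, data, and loss aggregation below are illustrative assumptions.
import torch
import torch.nn as nn

K = 4            # number of disjoint partitions / discriminators (assumption)
NOISE_DIM = 16   # latent dimension (assumption)
DATA_DIM = 32    # data dimension (assumption)

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, DATA_DIM)
)
discriminators = [
    nn.Sequential(nn.Linear(DATA_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    for _ in range(K)
]

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opts = [torch.optim.Adam(d.parameters(), lr=2e-4) for d in discriminators]
bce = nn.BCEWithLogitsLoss()

# Toy training data, shuffled and split into K disjoint partitions.
data = torch.randn(1024, DATA_DIM)
partitions = torch.chunk(data[torch.randperm(len(data))], K)

for step in range(100):
    # Discriminator updates: each D_k sees real samples only from partition k.
    for k, (d, d_opt) in enumerate(zip(discriminators, d_opts)):
        real = partitions[k][torch.randint(len(partitions[k]), (64,))]
        fake = generator(torch.randn(64, NOISE_DIM)).detach()
        loss_d = bce(d(real), torch.ones(64, 1)) + bce(d(fake), torch.zeros(64, 1))
        d_opt.zero_grad()
        loss_d.backward()
        d_opt.step()

    # Generator update: fool all K discriminators at once, i.e. approximate a
    # mixture of the partition distributions (adversarial loss averaged over k).
    fake = generator(torch.randn(64, NOISE_DIM))
    loss_g = sum(bce(d(fake), torch.ones(64, 1)) for d in discriminators) / K
    g_opt.zero_grad()
    loss_g.backward()
    g_opt.step()
```

Because no single discriminator ever sees the full training set, the generator is never pushed to fit any individual training sample too closely, which is the intuition behind the reduced generalization gap and the weaker MIA signal reported in the paper.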

Citation (APA)

Chen, J., Wang, W. H., Gao, H., & Shi, X. (2021). PAR-GAN: Improving the Generalization of Generative Adversarial Networks against Membership Inference Attacks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 127–137). Association for Computing Machinery. https://doi.org/10.1145/3447548.3467445
