An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models


Abstract

Tabular data typically contains private and important information, so precautions must be taken before it is shared with others. Although several methods (e.g., differential privacy and k-anonymity) have been proposed to prevent information leakage, tabular data synthesis models have recently become popular because they offer a good trade-off between data utility and privacy. However, recent research has shown that generative models for image data are susceptible to the membership inference attack, which determines whether a given record was used to train a victim synthesis model. In this paper, we investigate the membership inference attack in the context of tabular data synthesis. We conduct experiments on four state-of-the-art tabular data synthesis models under two attack scenarios (one black-box and one white-box), and find that the membership inference attack can seriously jeopardize these models. We then evaluate how well two popular differentially private deep learning training algorithms, DP-SGD and DP-GAN, protect the models against this attack. Our key finding is that both algorithms largely alleviate the threat, but at the cost of generation quality.
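As background, a minimal sketch of one common black-box membership inference heuristic (illustrative only; not necessarily the attack evaluated in this paper): the attacker, seeing only the released synthetic table, scores a candidate record by its distance to the nearest synthetic row, on the assumption that training members tend to lie closer to the synthesizer's output. The function names and threshold below are hypothetical.

```python
from math import dist

def membership_score(record, synthetic_rows):
    # Negative distance to the nearest synthetic row: a higher score
    # means the record is more likely a training member under this heuristic.
    return -min(dist(record, row) for row in synthetic_rows)

def infer_membership(records, synthetic_rows, threshold):
    # Flag each candidate record as a suspected training member if its
    # score clears the attacker-chosen threshold.
    return [membership_score(r, synthetic_rows) >= threshold for r in records]
```

For example, with synthetic rows `[(0, 0), (1, 1)]` and threshold `-1.0`, a candidate identical to a synthetic row is flagged while a distant outlier is not. Defenses such as DP-SGD blunt this heuristic by bounding how much any single training record can influence the synthesizer's output distribution.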

Citation (APA)

Hyeong, J., Kim, J., Park, N., & Jajodia, S. (2022). An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models. In International Conference on Information and Knowledge Management, Proceedings (pp. 4064–4068). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557546
