Membership Inference Attacks against Recommender Systems

53 Citations · 48 Readers

Abstract

Recently, recommender systems have achieved promising performance and become one of the most widely used web applications. However, recommender systems are often trained on highly sensitive user data, so potential data leakage from recommender systems may lead to severe privacy problems. In this paper, we make the first attempt at quantifying the privacy leakage of recommender systems through the lens of membership inference. Compared with traditional membership inference against machine learning classifiers, our attack differs in two main ways. First, our attack operates at the user level rather than the data sample level. Second, the adversary can only observe the ordered list of recommended items from a recommender system rather than prediction results in the form of posterior probabilities. To address these challenges, we propose a novel method that represents users in terms of their relevant items. Moreover, a shadow recommender is established to derive labeled training data for the attack model. Extensive experimental results show that our attack framework achieves strong performance. In addition, we design a defense mechanism that effectively mitigates the membership inference threat to recommender systems.
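
To make the described pipeline concrete, the sketch below shows how such a user-level attack could be assembled: shadow users are turned into feature vectors, labeled by whether their data trained the shadow recommender, and used to fit a binary attack classifier. The specific feature construction (centroid difference between interacted and recommended items), the use of scikit-learn's MLPClassifier, and the random toy data are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of a user-level membership inference pipeline against a
# recommender system. All concrete choices here are assumptions for
# illustration, not the authors' implementation:
#   - item embeddings are assumed to be available (e.g., learned separately),
#   - a user is represented by the difference between the centroid of the
#     items they interacted with and the centroid of the items recommended
#     to them,
#   - the attack model is a small MLP binary classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier


def user_feature(item_embeddings, interacted_ids, recommended_ids):
    """Represent a user by the gap between interaction and recommendation centroids."""
    interacted_center = item_embeddings[interacted_ids].mean(axis=0)
    recommended_center = item_embeddings[recommended_ids].mean(axis=0)
    return interacted_center - recommended_center


def build_attack_dataset(item_embeddings, shadow_members, shadow_nonmembers):
    """Shadow users whose data trained the shadow recommender are labeled 1, others 0."""
    features, labels = [], []
    for interacted, recommended in shadow_members:
        features.append(user_feature(item_embeddings, interacted, recommended))
        labels.append(1)
    for interacted, recommended in shadow_nonmembers:
        features.append(user_feature(item_embeddings, interacted, recommended))
        labels.append(0)
    return np.stack(features), np.array(labels)


# Toy usage with random data standing in for a real shadow recommender.
rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(1000, 32))  # 1000 items, 32-dim embeddings


def random_users(n):
    """Fake (interacted_ids, recommended_ids) pairs in place of shadow-recommender output."""
    return [(rng.integers(0, 1000, size=20), rng.integers(0, 1000, size=10))
            for _ in range(n)]


X, y = build_attack_dataset(item_embeddings, random_users(200), random_users(200))
attack_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

# At attack time, query the target recommender for a user's ordered recommendation
# list, build the same feature, and predict membership.
target = user_feature(item_embeddings,
                      rng.integers(0, 1000, size=20),
                      rng.integers(0, 1000, size=10))
print("Predicted member probability:", attack_model.predict_proba(target[None, :])[0, 1])
```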

Cite

APA

Zhang, M., Ren, Z., Wang, Z., Ren, P., Chen, Z., Hu, P., & Zhang, Y. (2021). Membership Inference Attacks against Recommender Systems. In Proceedings of the ACM Conference on Computer and Communications Security (pp. 864–879). Association for Computing Machinery. https://doi.org/10.1145/3460120.3484770
