Differential Privacy Preservation in Deep Learning: Challenges, Opportunities and Solutions

Abstract

Nowadays, deep learning is increasingly applied in real-world scenarios that involve the collection and analysis of sensitive data, which often leads to privacy leakage. Differential privacy is widely adopted in traditional scenarios for its rigorous mathematical guarantee, but whether it works effectively in deep learning models remains uncertain. In this paper, we first introduce the privacy attacks facing deep learning models from three aspects: membership inference, training data extraction, and model extraction. We then recall the basic theory of differential privacy and its extended concepts in deep learning scenarios. Next, to analyze existing works that combine differential privacy with deep learning, we classify them by the layer at which the differential privacy mechanism is deployed (input layer, hidden layer, or output layer) and discuss their respective advantages and disadvantages. Finally, we point out several key issues that remain to be solved and provide a broader outlook on this research direction.
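For context, the rigorous guarantee the abstract refers to is the standard (ε, δ)-differential privacy definition of Dwork et al.; this is general background rather than a result of the paper itself. A randomized mechanism M satisfies (ε, δ)-differential privacy if, for all neighboring datasets D and D' differing in a single record and all measurable output sets S,

    \Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta

Smaller ε means stronger privacy, and δ bounds the probability with which the guarantee may fail outright.

As an illustration of deploying the mechanism at the hidden layer, the sketch below perturbs gradients in the spirit of DP-SGD (Abadi et al., 2016): per-example gradients are clipped and Gaussian noise scaled to the clipping bound is added before the update. The toy model, the clipping bound C, and the noise multiplier sigma are illustrative assumptions; the paper surveys such mechanisms rather than prescribing this exact code.

    import numpy as np

    def clip_gradient(grad, C=1.0):
        # Bound each per-example gradient's L2 norm by C so that a
        # single record's influence on the update is limited.
        norm = np.linalg.norm(grad)
        return grad / max(1.0, norm / C)

    def dp_sgd_step(weights, per_example_grads, lr=0.1, C=1.0, sigma=1.0):
        # Clip each per-example gradient, sum them, add Gaussian noise
        # calibrated to the sensitivity C, then average and update.
        clipped = [clip_gradient(g, C) for g in per_example_grads]
        noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
            0.0, sigma * C, size=weights.shape)
        return weights - lr * noisy_sum / len(per_example_grads)

    # Toy usage: one noisy update for a 3-parameter model.
    rng = np.random.default_rng(0)
    w = np.zeros(3)
    grads = [rng.normal(size=3) for _ in range(8)]  # stand-in per-example grads
    w = dp_sgd_step(w, grads)

Input-layer and output-layer deployments follow the same pattern but perturb the training data itself or the model's outputs/objective instead of the gradients.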

Citation (APA)

Zhao, J., Chen, Y., & Zhang, W. (2019). Differential Privacy Preservation in Deep Learning: Challenges, Opportunities and Solutions. IEEE Access, 7, 48901–48911. https://doi.org/10.1109/ACCESS.2019.2909559
