Differentially private deep learning

Abstract

In recent years, deep learning has rapidly become one of the most successful approaches to machine learning. The essential idea of deep learning is to apply a multi-layer structure to extract complex features from high-dimensional data and to use those features to build models. However, deep learning models are susceptible to several types of attacks. For example, a centralized collection of photos, speech, and video clips from millions of individuals may pose privacy risks when it is shared with others, and the learned models can themselves disclose sensitive information about the training data. Integrating differential privacy into deep learning raises two challenges: high sensitivity and a limited privacy budget. This chapter first presents the traditional Laplace method and illustrates its limitations, and then presents the Private SGD method, the deep private auto-encoder algorithm, and distributed private SGD. Each of these focuses on a particular deep learning algorithm and addresses the two challenges in a different way. Finally, the chapter lists several popular datasets that can be used in differentially private deep learning.
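
Private SGD, as described above, typically bounds sensitivity by clipping each example's gradient and then spends the privacy budget by adding Gaussian noise calibrated to that clipping norm. Below is a minimal NumPy sketch of this clip-and-noise step on a toy linear least-squares model; the function name, hyperparameter values, and the model are illustrative assumptions, not code from the chapter.

import numpy as np

def clipped_noisy_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """One private-SGD step: clip each example's gradient to bound
    sensitivity, average, then add Gaussian noise scaled to the clipping
    norm (the noise multiplier reflects the privacy budget; assumed value)."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise

# Toy usage: for squared loss (w.x - y)^2 the per-example gradient
# is 2 * (w.x - y) * x.
rng = np.random.default_rng(0)
w = np.zeros(3)
X = rng.normal(size=(8, 3))
y = X @ np.array([1.0, -2.0, 0.5])
for _ in range(100):
    grads = [2 * (w @ x - t) * x for x, t in zip(X, y)]
    w -= 0.1 * clipped_noisy_gradient(grads, clip_norm=1.0,
                                      noise_multiplier=1.1, rng=rng)

Clipping guarantees that one individual's example can change the averaged gradient by at most clip_norm / batch_size, which is exactly the sensitivity bound the Gaussian noise is calibrated against.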

Citation (APA)

Zhu, T., Li, G., Zhou, W., & Yu, P. S. (2017). Differentially private deep learning. In Advances in Information Security (Vol. 69, pp. 67–82). Springer New York LLC. https://doi.org/10.1007/978-3-319-62004-6_7
