Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent

Abstract

Protecting privacy during learning while maintaining model performance has become increasingly critical in many applications that involve sensitive data. A popular framework is differentially private learning, which composes many privatized gradient iterations, each obtained by clipping and noising the gradients. Under a fixed privacy constraint, dynamic policies have been shown to improve the final-iterate loss, i.e., the quality of the published model. In this talk, we introduce these dynamic techniques for the learning rate, batch size, noise magnitude, and gradient clipping. We also discuss how dynamic policies change the convergence bounds, which provides further insight into the impact of dynamic methods.
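For concreteness, the sketch below shows one way a privatized gradient iteration with a dynamic noise schedule can look in code. It is a minimal illustration assuming a linearly decaying noise multiplier; the helper names (`clip_per_sample`, `noise_schedule`, `dp_sgd_step`) and the schedule itself are illustrative assumptions, not the paper's specific policy, and the privacy accounting that would calibrate the schedule to a total budget is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)


def clip_per_sample(grads, clip_norm):
    """Clip each per-sample gradient to an L2 norm of at most clip_norm."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return grads * factors


def noise_schedule(t, total_steps, sigma_start=2.0, sigma_end=0.8):
    """Illustrative dynamic policy: linearly decay the noise multiplier,
    spending more of the privacy budget late in training (an assumption
    for this sketch, not necessarily the paper's schedule)."""
    frac = t / max(total_steps - 1, 1)
    return sigma_start + frac * (sigma_end - sigma_start)


def dp_sgd_step(params, per_sample_grads, lr, clip_norm, noise_multiplier, rng):
    """One privatized iteration: clip, average, add Gaussian noise, descend."""
    clipped = clip_per_sample(per_sample_grads, clip_norm)
    batch_size = per_sample_grads.shape[0]
    # Noise on the averaged gradient has std sigma * C / B, which matches
    # adding N(0, sigma^2 C^2 I) to the sum of clipped gradients.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=params.shape)
    return params - lr * (clipped.mean(axis=0) + noise)


# Toy usage: privately fit linear regression on synthetic data.
X = rng.normal(size=(256, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=256)
w = np.zeros(5)
total_steps, lr, clip_norm = 200, 0.1, 1.0
for t in range(total_steps):
    residuals = X @ w - y
    per_sample_grads = residuals[:, None] * X  # grad of 0.5*(x_i.w - y_i)^2
    w = dp_sgd_step(w, per_sample_grads, lr, clip_norm,
                    noise_schedule(t, total_steps), rng)
```

Decaying the noise multiplier is only one of the dynamic knobs the talk covers; the same loop could instead (or additionally) vary `lr`, the batch size, or `clip_norm` across iterations.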

Citation (APA)

Hong, J., Wang, Z., & Zhou, J. (2022). Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent. In ACM International Conference Proceeding Series (pp. 11–35). Association for Computing Machinery. https://doi.org/10.1145/3531146.3533070
