DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing

  • Wang W
  • Wang T
  • Wang L
  • Luo N
  • Zhou P
  • Song D
  • Jia R

Abstract

Deep learning techniques have achieved remarkable performance on a wide range of tasks. However, when models are trained on privacy-sensitive datasets, their parameters may expose private information in the training data. Prior approaches to differentially private training, although offering rigorous privacy guarantees, yield much lower model performance than their non-private counterparts. Moreover, different runs of the same training algorithm produce models with large performance variance. To address these issues, we propose DPlis (Differentially Private Learning wIth Smoothing). The core idea of DPlis is to construct a smooth loss function that favors noise-resilient models lying in large flat regions of the loss landscape. We provide theoretical justification for the utility improvements of DPlis, and extensive experiments demonstrate that it effectively boosts model quality and training stability under a given privacy budget.
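The smoothing idea described in the abstract can be sketched as a Monte Carlo average of the loss over Gaussian perturbations of the parameters: a sharp minimum is penalized by nearby high-loss points, while a flat basin is not. The sketch below is illustrative only and is not the paper's implementation; the function names, the toy loss, and the noise scale are invented for this example.

```python
import numpy as np

def smoothed_loss(loss_fn, w, sigma=0.3, k=200, rng=None):
    """Monte Carlo estimate of E_z[loss(w + z)] with z ~ N(0, sigma^2 I).

    A flat region of the loss landscape keeps a low smoothed value,
    whereas a sharp minimum is averaged against its steep neighborhood.
    """
    gen = np.random.default_rng(rng)
    samples = [loss_fn(w + gen.normal(0.0, sigma, size=w.shape)) for _ in range(k)]
    return float(np.mean(samples))

# Toy landscape (invented for illustration): a very sharp minimum at w = 0
# and a much flatter basin at w = 3; both attain loss 0 at their minima.
def toy_loss(w):
    sharp = 50.0 * np.sum(w ** 2)
    flat = 0.5 * np.sum((w - 3.0) ** 2)
    return min(sharp, flat)
```

Under the plain loss the two minima are tied at zero, but the smoothed loss is far lower at the flat minimum, which is the behavior DPlis exploits to favor noise-resilient models.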

Citation (APA)

Wang, W., Wang, T., Wang, L., Luo, N., Zhou, P., Song, D., & Jia, R. (2021). DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing. Proceedings on Privacy Enhancing Technologies, 2021(4), 163–183. https://doi.org/10.2478/popets-2021-0065
