Heterogeneous Gaussian mechanism: Preserving differential privacy in deep learning with provable robustness


Abstract

In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint on the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound on the noise scale that preserves differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, we apply our HGM to inject Gaussian noise into the first hidden layer of the network, and then propose a tighter robustness bound. Theoretical analysis and thorough evaluations show that, under a variety of model attacks, our mechanism notably improves the robustness of differentially private deep neural networks compared with baseline approaches.
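To make the core idea concrete, below is a minimal sketch of heterogeneously redistributed Gaussian noise injected into a hidden-layer activation. It is not the paper's exact construction: it uses the classical Gaussian-mechanism noise scale σ = √(2 ln(1.25/δ)) · Δ / ε, which only holds for ε ∈ (0, 1]; the paper's relaxed bound for ε ∈ (0, ∞) is not reproduced here. The function names and the normalization of the redistribution vector r are illustrative assumptions.

```python
import numpy as np

def classical_gaussian_sigma(eps: float, delta: float, sensitivity: float) -> float:
    """Noise scale of the classical Gaussian mechanism (valid for 0 < eps <= 1 only).

    The HGM paper derives a relaxed bound for eps in (0, inf); this classical
    bound is used here purely for illustration.
    """
    assert 0.0 < eps <= 1.0, "classical bound only holds for eps in (0, 1]"
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

def heterogeneous_gaussian_noise(h: np.ndarray, eps: float, delta: float,
                                 sensitivity: float, r: np.ndarray) -> np.ndarray:
    """Add per-coordinate Gaussian noise to a hidden-layer activation h.

    r is a redistribution vector with r_i > 0 and sum(r) == len(h), so the
    total noise variance matches the homogeneous mechanism while individual
    coordinates receive more or less noise. r = ones recovers the standard
    (homogeneous) Gaussian mechanism. This normalization is an illustrative
    choice, not the paper's exact scheme.
    """
    sigma = classical_gaussian_sigma(eps, delta, sensitivity)
    noise = np.random.normal(loc=0.0, scale=sigma * np.sqrt(r), size=h.shape)
    return h + noise

# Usage: perturb a first-hidden-layer activation, placing less noise on the
# first coordinates and more on the last ones.
h = np.random.randn(8)
r = np.linspace(0.5, 1.5, 8)
r = r * (len(r) / r.sum())  # normalize so that sum(r) == len(h)
h_noisy = heterogeneous_gaussian_noise(h, eps=0.9, delta=1e-5, sensitivity=1.0, r=r)
```

Because the per-coordinate variances σ²·r_i sum to the same total as in the homogeneous case, redistributing r lets one shield coordinates that matter most for utility while keeping the overall noise budget fixed, which is the trade-off the abstract refers to.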

Citation (APA)

Phan, N. H., Vu, M. N., Liu, Y., Jin, R., Dou, D., Wu, X., & Thai, M. T. (2019). Heterogeneous Gaussian mechanism: Preserving differential privacy in deep learning with provable robustness. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19) (pp. 4753–4759). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/660
