Differentially Private Learning of Distributed Deep Models


Abstract

This study presents an optimal differential privacy framework for learning distributed deep models. The deep models, consisting of a nested composition of mappings, are learned analytically in a private setting using a variational optimization methodology. An optimal (ϵ, δ)-differentially private noise-adding mechanism is used, and the effect of the added noise on utility is alleviated by a rule-based fuzzy system. Private local data is separated from globally shared data through a privacy wall, and a fuzzy model robustly aggregates the local deep fuzzy models to build the global model.
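The paper derives an optimal (ϵ, δ)-differentially private noise-adding mechanism; as a simpler illustration of the general idea of calibrated noise addition, the sketch below uses the classical Gaussian mechanism (a standard baseline, not the mechanism derived in the paper; the function name and parameters are illustrative assumptions):

```python
import math
import random

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Release `value` with (epsilon, delta)-differential privacy via the
    classical Gaussian mechanism. This is a textbook baseline used here
    only to illustrate calibrated noise addition; it is not the optimal
    mechanism proposed in the paper."""
    # Standard calibration (valid for epsilon < 1):
    # sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
    return value + random.gauss(0.0, sigma)

# Example: privatize a scalar statistic with sensitivity 1
noisy = gaussian_mechanism(42.0, sensitivity=1.0, epsilon=0.5, delta=1e-5)
```

Larger ϵ or δ yields a smaller noise scale σ and hence better utility, which is the privacy–utility trade-off that the paper's fuzzy-system post-processing aims to soften.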

Citation (APA)

Kumar, M., Rossbory, M., Moser, B. A., & Freudenthaler, B. (2020). Differentially Private Learning of Distributed Deep Models. In UMAP 2020 Adjunct - Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization (pp. 193–200). Association for Computing Machinery, Inc. https://doi.org/10.1145/3386392.3399562
