Privacy-preserving deep learning: Revisited and enhanced

Abstract

We build a privacy-preserving deep learning system in which many learning participants perform neural network-based deep learning over a combined dataset of all, without actually revealing the participants' local data to a curious server. To that end, we revisit the previous work by Shokri and Shmatikov (ACM CCS 2015) and point out that local data information may actually be leaked to an honest-but-curious server. We then fix that problem by building an enhanced system with the following properties: (1) no information is leaked to the server; and (2) accuracy is kept intact, compared to that of an ordinary deep learning system trained over the same combined dataset. Our system makes use of additively homomorphic encryption, and we show that our usage of encryption adds little overhead to the ordinary deep learning system.
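The core idea behind additively homomorphic aggregation is that participants send encrypted gradient updates, the server combines the ciphertexts, and the result decrypts to the sum of the plaintexts, so the server never sees any individual update. The sketch below illustrates this with a toy Paillier cryptosystem (a standard additively homomorphic scheme); the tiny key size and integer-valued "updates" are illustrative assumptions only, not the paper's actual parameters, and a real deployment would use a vetted library with ~2048-bit keys.

```python
# Toy Paillier additively homomorphic encryption (illustrative only;
# key sizes here are insecure, and gradients are modeled as integers).
import math
import random

# Small demo primes (hypothetical parameters, far too small for real use).
p, q = 104_723, 104_729
n = p * q
n2 = n * n
g = n + 1                      # standard generator choice g = n + 1
lam = math.lcm(p - 1, q - 1)   # private key component lambda
mu = pow(lam, -1, n)           # private key component mu = lambda^{-1} mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 for a random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, with L(u) = (u - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each participant encrypts its local update; the server multiplies the
# ciphertexts, which adds the plaintexts under Paillier encryption.
local_updates = [12, 30, 7]          # hypothetical integer-encoded gradients
aggregate = 1
for u in local_updates:
    aggregate = (aggregate * encrypt(u)) % n2

print(decrypt(aggregate))  # → 49, the sum 12 + 30 + 7
```

Note the server only ever handles ciphertexts and the product `aggregate`; decryption of the summed update happens on the participants' side, which is what prevents the honest-but-curious server from learning any individual contribution.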

Citation (APA)

Phong, L. T., Aono, Y., Hayashi, T., Wang, L., & Moriai, S. (2017). Privacy-preserving deep learning: Revisited and enhanced. In Communications in Computer and Information Science (Vol. 719, pp. 100–110). Springer Verlag. https://doi.org/10.1007/978-981-10-5421-1_9
