Differential privacy preservation for deep auto-encoders: An application of human behavior prediction

Abstract

In recent years, deep learning has spread across both academia and industry, with many exciting real-world applications. This development has raised obvious privacy concerns, yet privacy preservation in deep learning has received little scientific study. In this paper, we focus on the auto-encoder, a fundamental component of deep learning, and propose the deep private auto-encoder (dPA). Our main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results. We apply the dPA to human behavior prediction in a health social network. Theoretical analysis and thorough experimental evaluation show that the dPA is highly effective and efficient, and that it significantly outperforms existing solutions.
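
The core idea stated in the abstract, adding noise to the training objective rather than to the learned model or its outputs, is sketched below on a deliberately simple quadratic objective. This is not the authors' dPA (which applies the idea to a deep auto-encoder's objective); the function name private_quadratic_fit, the feature/target clipping assumption, and the sensitivity bound are illustrative assumptions, and the Laplace scale of sensitivity divided by ε is the standard recipe for ε-differential privacy.

# Minimal sketch of objective perturbation, the principle the abstract
# describes: noise is added to the data-dependent coefficients of the
# training objective, not to the trained model or its outputs. This is
# NOT the paper's dPA; it illustrates the idea on a simple least-squares
# objective, assuming every feature and target is clipped to [-1, 1] so
# a (conservative) coefficient sensitivity can be bounded.
import numpy as np

rng = np.random.default_rng(0)

def private_quadratic_fit(X, y, epsilon, lam=0.1):
    """Fit w by minimising a Laplace-perturbed quadratic objective.

    Objective: sum_i (y_i - w.x_i)^2 + lam*||w||^2
             = w^T (X^T X) w - 2 (X^T y)^T w + const.
    The data-dependent coefficients are X^T X and X^T y; one record can
    change their total L1 norm by at most 2*(d^2 + d) under the clipping
    assumption, which we take as the sensitivity (a crude bound).
    """
    d = X.shape[1]
    sensitivity = 2.0 * (d * d + d)   # assumed bound, see docstring
    scale = sensitivity / epsilon     # Laplace scale b = sensitivity / epsilon

    # Data-dependent coefficients of the polynomial objective.
    quad = X.T @ X                    # coefficients of the w_j * w_k terms
    lin = X.T @ y                     # coefficients of the w_j terms

    # Perturb the coefficients once; everything after this is post-processing.
    quad_noisy = quad + rng.laplace(scale=scale, size=quad.shape)
    lin_noisy = lin + rng.laplace(scale=scale, size=lin.shape)

    # Symmetrise and add a ridge term to stabilise the noisy objective.
    quad_noisy = 0.5 * (quad_noisy + quad_noisy.T) + lam * np.eye(d)

    # Minimiser of the perturbed objective.
    return np.linalg.solve(quad_noisy, lin_noisy)

# Toy usage on clipped synthetic data, with a strong and a weak privacy budget.
X = np.clip(rng.normal(size=(2000, 5)), -1, 1)
w_true = rng.uniform(-1, 1, size=5)
y = np.clip(X @ w_true + 0.1 * rng.normal(size=2000), -1, 1)

for eps in (0.5, 4.0):
    w_hat = private_quadratic_fit(X, y, epsilon=eps)
    print(f"eps={eps}: error={np.linalg.norm(w_hat - w_true):.3f}")

Because the noise enters the objective's coefficients once, anything computed from the perturbed objective, including the final weights, stays ε-differentially private by post-processing; this is the practical appeal of perturbing objectives rather than results.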

Citation (APA)

Phan, N. H., Wang, Y., Wu, X., & Dou, D. (2016). Differential privacy preservation for deep auto-encoders: An application of human behavior prediction. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 1309–1316). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10165
