An efficient 3-party framework for privacy-preserving neural network inference

Abstract

In the era of big data, users are increasingly concerned about data privacy in application fields such as healthcare and finance. However, in current machine-learning-as-a-service scenarios, service providers require users' private inputs to complete neural network inference tasks. Previous works have shown that cryptographic tools can be used to achieve secure neural network inference, but a performance gap remains before those techniques become practical. In this paper, we focus on the efficiency of privacy-preserving neural network inference and propose novel 3-party secure protocols that implement a variety of nonlinear activation functions, such as ReLU and Sigmoid. Experiments on five popular neural network models demonstrate that our protocols achieve roughly 1.2×–11.8× and 1.08×–4.8× performance improvements over the state-of-the-art 3-party protocols (SecureNN [28]) in computation and communication overhead, respectively. Furthermore, we are the first to implement privacy-preserving inference of graph convolutional networks.
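To make the setting concrete, the sketch below illustrates in plain Python the kind of 3-party additive secret sharing that frameworks such as SecureNN build on: linear operations can be evaluated locally on shares, while nonlinear activations such as ReLU require dedicated interactive protocols, which is where this line of work spends its effort. This is an illustrative toy under our own assumptions (a ring of size 2^64, helper names share, reconstruct, and signed), not the paper's actual protocol.

```python
# Toy 3-party additive secret sharing over the ring Z_{2^64}.
# Illustrative only -- NOT the paper's protocol. It shows why linear
# layers are "free" on shares while nonlinear activations (ReLU, Sigmoid)
# need interactive protocols among the three parties.
import secrets

RING = 1 << 64  # all arithmetic is modulo 2^64

def share(x):
    """Split x into three additive shares, one per party."""
    s0 = secrets.randbelow(RING)
    s1 = secrets.randbelow(RING)
    s2 = (x - s0 - s1) % RING
    return [s0, s1, s2]

def reconstruct(shares):
    """Recombine all three shares to recover the secret."""
    return sum(shares) % RING

def signed(v):
    """Interpret a ring element as a signed value (two's-complement style)."""
    return v - RING if v >= RING // 2 else v

# Linear operations: each party adds its own shares locally, no communication.
x_sh = share(25)
y_sh = share(17)
z_sh = [(a + b) % RING for a, b in zip(x_sh, y_sh)]
assert reconstruct(z_sh) == 42

# Nonlinear operations: share-wise ReLU is meaningless. A negative value is
# encoded as a large ring element, and each share is uniform in [0, 2^64),
# so max(share, 0) is a no-op that reveals nothing about the true sign.
neg_sh = share(-5 % RING)
naive = [max(s, 0) for s in neg_sh]          # does nothing to any share
assert signed(reconstruct(naive)) == -5      # still -5, not ReLU(-5) == 0
# Computing ReLU correctly on shares requires a secure comparison protocol,
# which is exactly the cost the paper's 3-party protocols aim to reduce.
```

In this model, additions and scalar multiplications stay local, so the bulk of a neural network's linear layers incur no interaction; the protocol design problem, and the source of the reported speedups, is the comparison-style subprotocols behind ReLU and similar activations.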

Citation (APA)

Shen, L., Chen, X., Shi, J., Dong, Y., & Fang, B. (2020). An efficient 3-party framework for privacy-preserving neural network inference. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12308 LNCS, pp. 419–439). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58951-6_21
