PFDLIS: Privacy-preserving and fair deep learning inference service under publicly verifiable covert security setting

Abstract

The recent popularity and widespread use of deep learning herald an era of artificial intelligence. With the emergence of deep learning inference services, non-expert clients can also benefit from the improvements and profits that artificial intelligence brings. However, a client's input data may be sensitive, so the client is reluctant to send it to the server. Likewise, the server's pre-trained model is valuable, and the server is unwilling to disclose its parameters. We therefore propose a privacy-preserving and fair scheme for deep learning inference services based on secure three-party computation and cryptographic commitments under the publicly verifiable covert security setting. We demonstrate that our scheme achieves the following desirable security properties: input data privacy, model privacy, and defamation-freeness. Finally, we conduct extensive experiments to evaluate the performance of our scheme on the MNIST dataset. The experimental results show that our scheme achieves the same prediction accuracy as the pre-trained model with acceptable extra computational cost.
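The abstract does not spell out the protocol, but the two primitives it names can be sketched briefly. The Python snippet below is a minimal illustration, not the paper's actual construction: it shows 3-out-of-3 additive secret sharing, the basic building block behind many secure three-party computation schemes, together with a simple hash-based commitment of the kind used to make misbehavior publicly attributable. The modulus, function names, and share encoding are all illustrative assumptions.

```python
import hashlib
import secrets

PRIME = 2**61 - 1  # illustrative field modulus, not taken from the paper

def share(value, n=3):
    """Split a field element into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod PRIME."""
    return sum(shares) % PRIME

def commit(data: bytes):
    """Hash-based commitment: publish the digest now, reveal (data, nonce) later."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + data).hexdigest()
    return digest, nonce

def verify(digest, data: bytes, nonce: bytes):
    """Check an opened commitment against the previously published digest."""
    return hashlib.sha256(nonce + data).hexdigest() == digest

# A client secret-shares a private input among three servers; no single
# share reveals anything about the input.
x = 42
shares = share(x)
assert reconstruct(shares) == x

# Each party commits to its share so that a cheating party can later be
# identified publicly, which is the flavor of guarantee a publicly
# verifiable covert security setting aims for.
digest, nonce = commit(shares[0].to_bytes(8, "big"))
assert verify(digest, shares[0].to_bytes(8, "big"), nonce)
```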

Citation (APA)

Tang, F., Hao, J., Liu, J., Wang, H., & Xian, M. (2019). PFDLIS: Privacy-preserving and fair deep learning inference service under publicly verifiable covert security setting. Electronics (Switzerland), 8(12). https://doi.org/10.3390/electronics8121488
