PPTIF: Privacy-Preserving Transformer Inference Framework for Language Translation

Abstract

The Transformer model has emerged as a prominent machine learning tool within the field of natural language processing. Nevertheless, running the Transformer model on resource-constrained devices presents a notable challenge. Although outsourcing inference can significantly reduce the computational overhead of using the model, it also introduces privacy risks for the provider's proprietary model and the client's sensitive data. In this paper, we propose an efficient privacy-preserving Transformer inference framework (PPTIF) for language translation tasks based on three-party replicated secret-sharing techniques. PPTIF offers a secure approach for users to leverage Transformer-based applications, such as language translation, while keeping their original input and inference results confidential, preventing any disclosure to the cloud servers. At the same time, PPTIF ensures robust protection for the model parameters, guaranteeing their integrity and confidentiality. In PPTIF, we design a series of interaction protocols to implement the secure computation of the Transformer components, namely a secure Encoder and a secure Decoder. To improve the efficiency of PPTIF, we optimize the computation of Scaled Dot-Product Attention (the Transformer's core operation) under secret sharing, effectively reducing its computation and communication overhead. Compared with Privformer, the optimized Masked Multi-Head Attention achieves about 1.7× lower runtime and 2.3× lower communication. In total, PPTIF achieves about 1.3× lower runtime and 1.2× lower communication. The effectiveness and security of PPTIF have been rigorously evaluated through comprehensive theoretical analysis and experimental validation.
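The abstract names two building blocks without giving protocol details: three-party replicated secret sharing of the client's data, and Scaled Dot-Product Attention evaluated over such shares. The minimal Python sketch below illustrates both pieces under stated assumptions: an ABY3-style replicated sharing over a ring of size 2^32 (the paper's actual modulus and fixed-point encoding are not given in the abstract), and a plaintext reference of the attention computation the secure Encoder/Decoder protocols would reproduce. All names here (`share`, `reconstruct`, `RING`) are hypothetical and not taken from the paper.

```python
import numpy as np

# Assumed ring size; the paper's modulus and fixed-point encoding may differ.
RING = 2**32


def share(x, rng):
    """Split an integer tensor x into additive shares x1 + x2 + x3 = x (mod RING).

    In replicated secret sharing, server i holds the pair (x_i, x_{i+1 mod 3}),
    so any two servers can reconstruct x while a single server learns nothing.
    """
    x1 = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    x2 = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    x3 = (x.astype(np.uint64) - x1 - x2) % RING
    return [(x1, x2), (x2, x3), (x3, x1)]  # one pair per server


def reconstruct(shares):
    """Recombine the additive shares (done in the clear here, only to
    check correctness of the sharing)."""
    (x1, _), (x2, _), (x3, _) = shares
    return (x1 + x2 + x3) % RING


def scaled_dot_product_attention(Q, K, V):
    """Plaintext reference of the operation PPTIF evaluates under secret
    sharing: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 1000, size=(4, 8), dtype=np.uint64)
    assert np.array_equal(reconstruct(share(x, rng)), x)
```

In the actual framework, the matrix products and the softmax inside the attention would be computed jointly by the three servers through the paper's interaction protocols, never reconstructed in the clear as in this reference.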

Cite (APA)

Liu, Y., & Su, Q. (2024). PPTIF: Privacy-Preserving Transformer Inference Framework for Language Translation. IEEE Access, 12, 48881–48897. https://doi.org/10.1109/ACCESS.2024.3384268
