Secure neural network inference is a promising approach to private Deep-Learning-as-a-Service, enabling the service provider and the user to execute neural network inference without revealing their private inputs. However, the expensive overhead of current schemes remains an obstacle to real-world deployment. In this work, we present Meteor, an online-communication-efficient and fast secure 3-party computation neural network inference system against a semi-honest adversary in the honest-majority setting. The main contributions of Meteor are two-fold: i) We propose a new and improved 3-party secret sharing scheme stemming from the linearity of replicated secret sharing, and design efficient protocols for the basic cryptographic primitives, including linear operations, multiplication, most significant bit extraction, and multiplexer. ii) Building on these primitives, we construct efficient and secure blocks for widely used neural network operators such as Matrix Multiplication, ReLU, and Maxpool, and exploit several operator-specific optimizations for better efficiency. Our total communication including the setup phase is slightly larger than that of SecureNN (PoPETs'19) and Falcon (PoPETs'21), two state-of-the-art solutions, but the gap is not significant given that the online phase should be optimized as a priority. Using Meteor, we perform extensive evaluations on various neural networks. Compared to SecureNN and Falcon, we reduce the online communication costs by up to 25.6× and 1.5×, respectively, and improve the online inference running time by up to 9.8× (resp. 8.1×) and 1.5× (resp. 2.1×) in the LAN (resp. WAN) setting.
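For readers unfamiliar with the underlying primitive, the sketch below illustrates standard 2-out-of-3 replicated secret sharing over the ring Z_{2^64}, whose linearity the Meteor sharing scheme builds on. The function names and the choice of modulus are illustrative assumptions for this sketch, not Meteor's concrete protocol.

```python
# Minimal sketch of 2-out-of-3 replicated secret sharing over Z_{2^64}.
# Illustrative only: names and the ring modulus are assumptions,
# not the concrete scheme used in Meteor.
import secrets

MOD = 1 << 64  # arithmetic ring Z_{2^64}

def share(x):
    """Split x into three additive shares; party i holds (x_i, x_{i+1})."""
    x1 = secrets.randbelow(MOD)
    x2 = secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    shares = [x1, x2, x3]
    # Replication: each party holds two of the three additive shares.
    return [(shares[i], shares[(i + 1) % 3]) for i in range(3)]

def reconstruct(replicated):
    """Any two parties together hold all three additive shares."""
    x1, x2 = replicated[0]      # party 0 holds (x1, x2)
    _, x3 = replicated[1]       # party 1 holds (x2, x3)
    return (x1 + x2 + x3) % MOD

def add_local(a, b):
    """Linearity: addition of two shared values needs no communication."""
    return [((a[i][0] + b[i][0]) % MOD, (a[i][1] + b[i][1]) % MOD)
            for i in range(3)]

if __name__ == "__main__":
    xs, ys = share(20), share(22)
    assert reconstruct(add_local(xs, ys)) == 42  # computed locally by each party
```

Multiplication of shared values, by contrast, requires interaction among the parties, which is where protocols such as Meteor's focus their online-communication savings.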
Dong, Y., Chen, X., Jing, W., Li, K., & Wang, W. (2023). Meteor: Improved Secure 3-Party Neural Network Inference with Reducing Online Communication Costs. In Proceedings of the ACM Web Conference 2023 (WWW 2023) (pp. 2087–2098). Association for Computing Machinery. https://doi.org/10.1145/3543507.3583272