Distributed learning has emerged as a useful tool for analyzing data stored in multiple geographic locations, especially when the distributed data sets are large and hard to move, or the data owners are reluctant to put their data into the Cloud due to privacy concerns. In distributed learning, only locally computed models are uploaded to the fusion server; this may nevertheless raise privacy issues, since the fusion server can mount various inference attacks on the models it observes. To address this problem, we propose a secure distributed learning system that exploits the additive property of partial homomorphic encryption to prevent direct exposure of the computed models to the fusion server. We also propose two optimization mechanisms for applying partial homomorphic encryption to model parameters, improving overall efficiency. Through experimental analysis, we demonstrate the effectiveness of the proposed mechanisms in practical distributed learning systems. Finally, we analyze the relationship between training-time computation and several important system parameters, which can serve as a useful guide for choosing parameters that balance the trade-off among model accuracy, model security, and system overhead.
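The additive property referred to above means that an aggregator can combine encrypted model parameters without decrypting them: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The abstract does not name the specific scheme or parameters used in the paper, so the following is only an illustrative sketch of a Paillier-style additive cryptosystem; the tiny fixed primes and all function names are assumptions for demonstration, not the authors' implementation, and the code is deliberately insecure.

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen():
    # Tiny fixed primes for illustration only; a real deployment would
    # generate random primes of at least 1024 bits each.
    p, q = 293, 433
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # modular inverse of lambda; valid because g = n + 1
    return n, (lam, mu, n)

def encrypt(n, m):
    # c = (1 + n)^m * r^n mod n^2, with a fresh random r coprime to n
    n2 = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    ell = (pow(c, lam, n2) - 1) // n  # the L(u) = (u - 1) / n function
    return (ell * mu) % n

def add_encrypted(n, c1, c2):
    # Additive homomorphism: multiplying ciphertexts adds the plaintexts.
    return (c1 * c2) % (n * n)

if __name__ == "__main__":
    pk, sk = keygen()
    # Two parties encrypt their local (integer-encoded) model parameters;
    # the fusion server aggregates them without seeing either value.
    c_sum = add_encrypted(pk, encrypt(pk, 12), encrypt(pk, 30))
    print(decrypt(sk, c_sum))  # 42
```

In a realistic setting, floating-point model weights would first be encoded as fixed-point integers (e.g., scaled by 10^6), and a vetted library with properly sized keys would replace this toy code.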
CITATION STYLE
Liu, C., Chakraborty, S., & Verma, D. (2019). Secure Model Fusion for Distributed Learning Using Partial Homomorphic Encryption. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11550 LNCS, pp. 154–179). Springer Verlag. https://doi.org/10.1007/978-3-030-17277-0_9