Privacy-Preserving Federated Learning Framework with General Aggregation and Multiparty Entity Matching

Abstract

The need to share data while preserving privacy has brought increasing attention to federated learning. However, existing aggregation models are overly specialized and rarely address the issue of user withdrawal. Moreover, protocols for multiparty entity matching are rarely covered, so no systematic framework exists for performing federated learning tasks. In this paper, we propose a privacy-preserving federated learning framework (PFLF). We first construct a general secure aggregation model for federated learning scenarios by combining Shamir secret sharing with homomorphic cryptography, ensuring that the aggregated value can be decrypted correctly only when the number of participants is greater than a threshold t. We then propose a multiparty entity matching protocol that employs secure multiparty computation to solve entity alignment problems, together with a logistic regression algorithm that achieves privacy-preserving model training and supports user withdrawal in vertical federated learning (VFL) scenarios. Finally, security analyses prove that PFLF preserves data privacy in the honest-but-curious model, and experimental evaluations show that PFLF attains accuracy consistent with the original model and is practically feasible.
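To make the threshold property concrete, the following is an illustrative sketch (not the paper's implementation) of Shamir (t, n) secret sharing over a prime field — the primitive PFLF combines with homomorphic encryption so that an aggregate can be recovered only when at least t participants contribute shares. The field modulus and parameter values are assumptions chosen for the demo.

```python
import random

PRIME = 2**127 - 1  # Mersenne prime used as the field modulus (demo assumption)

def split_secret(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i, for i = 1..n.
    return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 shares suffice
print(reconstruct(shares[1:4]))  # a different subset of 3 also works
```

Fewer than t shares reveal nothing about the secret, which is what lets the framework tolerate user withdrawal: as long as at least t participants remain, the aggregated value can still be decrypted.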

Citation (APA)

Zhou, Z., Tian, Y., & Peng, C. (2021). Privacy-Preserving Federated Learning Framework with General Aggregation and Multiparty Entity Matching. Wireless Communications and Mobile Computing, 2021. https://doi.org/10.1155/2021/6692061
