Information Leakage by Model Weights on Federated Learning


Abstract

Federated learning aggregates updates from multiple data sources while protecting privacy, making it possible to train effective models on real-world data. However, although federated learning employs encrypted secure aggregation, its decentralized nature makes it vulnerable to malicious attackers. A deliberate attacker can subtly control one or more participants and upload malicious model parameter updates, which the aggregation server cannot detect because of the encrypted privacy protection. Based on these observations, we identify a practical and novel security risk in the design of federated learning. We propose an attack in which colluding malicious participants strategically adjust their training data so that the weight of a chosen dimension in the aggregated model rises or falls in a predetermined pattern. The trend of these weights across aggregation rounds forms meaningful signals, constituting a channel for information leakage. The leaked information is exposed to the other participants in the federation but is usable only by participants who have reached an agreement with the malicious participant, i.e., the receiver must be able to interpret the patterns of weight changes. The attack's effectiveness is evaluated and verified on open-source code and datasets.
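In effect, the attack turns the aggregated weights into a covert channel: the sender drives a chosen weight coordinate up or down each aggregation round to encode bits, and a colluding receiver decodes the bits from the observed trend. The following minimal Python sketch illustrates the idea under simplifying assumptions; the names (honest_update, malicious_update, the step size ALPHA, and the FedAvg-style loop) are hypothetical, and for brevity the sketch nudges the target weight directly rather than inducing the drift through strategically chosen training data, as the paper does.

# Illustrative sketch of the weight-trend covert channel described in the
# abstract. All names and constants here are hypothetical; the paper induces
# the drift via crafted training data, which this stand-in simplifies.
import numpy as np

DIM = 100     # model size (assumed)
TARGET = 7    # weight coordinate carrying the signal (assumed)
ALPHA = 0.2   # per-round nudge large enough to survive averaging (assumed)

def honest_update(global_w, rng):
    """Stand-in for a benign participant's local training step."""
    return global_w + rng.normal(scale=0.01, size=DIM)

def malicious_update(global_w, bit, rng):
    """Sender: push the target weight up for bit 1, down for bit 0."""
    w = honest_update(global_w, rng)
    w[TARGET] = global_w[TARGET] + (ALPHA if bit else -ALPHA)
    return w

def decode(prev_global, new_global):
    """Receiver: a colluding participant reads the bit from the trend
    of the target weight between two aggregation rounds."""
    return 1 if new_global[TARGET] > prev_global[TARGET] else 0

rng = np.random.default_rng(0)
global_w = np.zeros(DIM)
message = [1, 0, 1, 1, 0, 0, 1]
received = []
for bit in message:
    prev = global_w.copy()
    # Secure aggregation reveals only the average of the updates, but the
    # average still carries the sender's deliberate drift on TARGET.
    updates = [honest_update(global_w, rng) for _ in range(9)]
    updates.append(malicious_update(global_w, bit, rng))
    global_w = np.mean(updates, axis=0)
    received.append(decode(prev, global_w))
print(received)  # matches `message` when ALPHA dominates the honest noise

The sketch also shows why the channel is covert: any participant can observe the global weights, but only a receiver who knows which coordinate to watch and how the trend encodes bits can read the message.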

Citation (APA)

Xu, X., Wu, J., Yang, M., Luo, T., Duan, X., Li, W., … Wu, B. (2020). Information Leakage by Model Weights on Federated Learning. In PPMLP 2020 - Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice (pp. 31–36). Association for Computing Machinery, Inc. https://doi.org/10.1145/3411501.3419423
