In privacy-preserving cross-device federated learning, users train a global model on their local data and submit encrypted local models, while an untrusted central server aggregates the encrypted models to obtain an updated global model. Prior work has demonstrated how to verify the correctness of aggregation in such a setting. However, such verification relies on strong assumptions, such as a trusted setup among all users under unreliable network conditions, or it suffers from expensive cryptographic operations, such as bilinear pairing. In this paper, we scrutinize the verification mechanism of prior work and propose a model recovery attack, demonstrating that most local models can be leaked within a reasonable time (e.g., 98% of encrypted local models are recovered within 21 hours). We then propose VerSA, a verifiable secure aggregation protocol for cross-device federated learning. VerSA requires no trusted setup for verification among users and minimizes the verification cost by enabling both the central server and the users to prove and verify the correctness of model aggregation using only a lightweight pseudorandom generator. We experimentally confirm the efficiency of VerSA on diverse datasets, demonstrating that its verification is orders of magnitude faster than that of prior work.
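To make the PRG-based aggregation idea concrete, the following is a minimal illustrative sketch of how pseudorandom-generator masks can hide local models while still letting an untrusted server compute the correct sum. It is not VerSA's actual construction (in particular, it omits VerSA's verification mechanism and dropout handling); all names, sizes, and seed-agreement details below are illustrative assumptions.

```python
# Minimal sketch of PRG-based pairwise masking in secure aggregation.
# Illustrative only; NOT VerSA's protocol. Parameters and seed agreement
# are assumptions made for this example.
import numpy as np

DIM = 8          # model dimension (illustrative)
NUM_USERS = 4    # number of participating users (illustrative)
MOD = 2**32      # arithmetic modulo 2^32

def prg(seed: int, dim: int) -> np.ndarray:
    """Expand a short seed into a pseudorandom mask vector."""
    return np.random.default_rng(seed).integers(0, MOD, size=dim, dtype=np.uint64)

# Each pair of users (i, j) is assumed to share a seed (e.g., via key agreement).
pair_seed = {(i, j): 1000 * i + j
             for i in range(NUM_USERS) for j in range(i + 1, NUM_USERS)}

# Local models, quantized to integers so modular arithmetic is exact.
local_models = [np.random.default_rng(u).integers(0, 100, size=DIM, dtype=np.uint64)
                for u in range(NUM_USERS)]

def mask_model(u: int, model: np.ndarray) -> np.ndarray:
    """User u hides its model with PRG masks that cancel across each pair."""
    masked = model.copy()
    for v in range(NUM_USERS):
        if v == u:
            continue
        m = prg(pair_seed[(min(u, v), max(u, v))], DIM)
        # The user with the smaller index adds the mask, the other subtracts it,
        # so every pairwise mask cancels in the server's sum.
        masked = (masked + m) % MOD if u < v else (masked - m) % MOD
    return masked

# The untrusted server only ever sees masked models, yet their sum equals
# the sum of the plain models because the pairwise masks cancel.
aggregate = sum(mask_model(u, local_models[u]) for u in range(NUM_USERS)) % MOD
assert np.array_equal(aggregate, sum(local_models) % MOD)
print("aggregate recovered from masked inputs:", aggregate)
```

The abstract's point is that VerSA additionally lets users check that this aggregate was computed honestly, using only comparably cheap PRG operations rather than a trusted setup or bilinear pairings; the details of that check are given in the paper itself.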
Hahn, C., Kim, H., Kim, M., & Hur, J. (2023). VerSA: Verifiable Secure Aggregation for Cross-Device Federated Learning. IEEE Transactions on Dependable and Secure Computing, 20(1), 36–52. https://doi.org/10.1109/TDSC.2021.3126323