Can Collaborative Learning Be Private, Robust and Scalable?


Abstract

In federated learning for medical image analysis, the safety of the learning protocol is paramount. Such settings can often be compromised by adversaries that target either the private data used by the federation or the integrity of the model itself. This requires the medical imaging community to develop mechanisms for training collaborative models that are private and robust against adversarial data. In response to these challenges, we propose a practical open-source framework to study the effectiveness of combining differential privacy, model compression and adversarial training to improve the robustness of models against adversarial samples under train- and inference-time attacks. Using our framework, we achieve competitive model performance, a significant reduction in model size and improved empirical robustness against adversarial samples without severe performance degradation, which is critical in medical image analysis.
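
To make the combination described above concrete, here is a minimal, self-contained PyTorch sketch of the three ingredients the abstract names: adversarial training (single-step FGSM), a DP-SGD-style noisy gradient update, and magnitude pruning for model compression. This is not the authors' released framework; the architecture, hyperparameters and single-machine setting are illustrative assumptions (in a federated deployment, each site would run such a step locally before sharing updates), and the per-batch clipping shown is a simplification of true per-sample DP-SGD clipping, for which a library such as Opacus would be used in practice.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy classifier standing in for a medical imaging model; the architecture
# is an illustrative assumption, not the one used in the paper.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                      nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

CLIP_NORM = 1.0    # gradient clipping bound (assumed)
NOISE_MULT = 1.1   # DP noise multiplier (assumed)
EPSILON = 0.1      # FGSM perturbation budget (assumed)

def fgsm(x, y):
    # Single-step L-infinity attack used to craft adversarial training samples.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + EPSILON * x_adv.grad.sign()).detach()

def train_step(x, y):
    x_adv = fgsm(x, y)       # craft the attack first ...
    optimizer.zero_grad()    # ... then discard the gradients it produced
    loss = loss_fn(model(x_adv), y)  # adversarial training objective
    loss.backward()
    # DP-SGD-style update, simplified here to per-batch clipping; a faithful
    # implementation clips per-SAMPLE gradients to obtain formal guarantees.
    torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
    for p in model.parameters():
        if p.grad is not None:
            p.grad += torch.randn_like(p.grad) * NOISE_MULT * CLIP_NORM
    optimizer.step()
    return loss.item()

# One illustrative local step on random data standing in for a client's batch.
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(f"adversarial training loss: {train_step(x, y):.3f}")

# Model compression after training: prune 50% of the smallest-magnitude weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
```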

Citation (APA)

Usynin, D., Klause, H., Paetzold, J. C., Rueckert, D., & Kaissis, G. (2022). Can Collaborative Learning Be Private, Robust and Scalable? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13573 LNCS, pp. 37–46). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-18523-6_4
