FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning

Abstract

Robustness is an increasingly important challenge in federated learning because the data collection process at each client is naturally accompanied by noisy labels. The problem is far more complex than in centralized learning owing to varying levels of data heterogeneity and label noise across clients, which exacerbates the client-to-client performance discrepancy. In this work, we propose a robust federated learning method called FedRN, which exploits k-reliable neighbors with high data expertise or similarity. Our method helps mitigate the gap between low- and high-performance clients by training only on a selected set of clean examples, identified by a collaborative model built from reliability scores over clients. We demonstrate the superiority of FedRN via extensive evaluations on three real-world or synthetic benchmark datasets. Compared with existing robust methods, the results show that FedRN significantly improves test accuracy in the presence of noisy labels.
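The abstract outlines two core steps: ranking clients by a reliability score to pick k neighbors, and filtering each client's data down to a clean subset using the resulting collaborative model. The paper's exact scoring function and clean-sample criterion are not reproduced here; the sketch below only illustrates the two selection steps under simple assumptions (precomputed reliability scores, and a fixed loss threshold standing in for the collaborative clean-example test). Function names are illustrative, not from the paper.

```python
import numpy as np

def select_reliable_neighbors(reliability_scores, k):
    """Return the indices of the k clients with the highest reliability scores.

    `reliability_scores` is a 1-D array, one score per candidate client;
    how the scores are computed is left to the underlying method.
    """
    order = np.argsort(reliability_scores)[::-1]  # descending by score
    return order[:k].tolist()

def filter_clean_examples(per_example_losses, threshold):
    """Keep indices of examples whose loss is below a threshold.

    A fixed threshold is a simplification: in practice the clean/noisy
    split is typically made by a learned criterion rather than a constant.
    """
    return [i for i, loss in enumerate(per_example_losses) if loss < threshold]

# Example: pick the 2 most reliable of 3 candidate clients, then keep
# the low-loss examples that the collaborative model deems clean.
neighbors = select_reliable_neighbors(np.array([0.2, 0.9, 0.5]), k=2)
clean_idx = filter_clean_examples([0.1, 2.0, 0.3], threshold=0.5)
```

In this toy run, `neighbors` is `[1, 2]` and `clean_idx` is `[0, 2]`; the high-loss example (index 1) is treated as likely mislabeled and excluded from training.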

Citation (APA)

Kim, S., Shin, W., Jang, S., Song, H., & Yun, S. Y. (2022). FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning. In International Conference on Information and Knowledge Management, Proceedings (pp. 972–981). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557322
