Federated learning is a distributed learning framework that is communication efficient and protects participating users' raw training data. One outstanding challenge in federated learning is user heterogeneity: learning from heterogeneous user data may yield biased and unfair models for minority groups. While adversarial learning is commonly used in centralized learning to mitigate bias, significant barriers arise when extending it to the federated framework. In this work, we study these barriers and address them by proposing a novel approach, Federated Adversarial DEbiasing (FADE). FADE does not require users' sensitive group information for debiasing and offers users the freedom to opt out of the adversarial component when privacy or computational costs become a concern. We show that, ideally, FADE can attain the same global optimality as the centralized algorithm. We then analyze when its convergence may fail in practice and propose a simple yet effective method to address the problem. Finally, we demonstrate the effectiveness of the proposed framework through extensive empirical studies, including the problem settings of unsupervised domain adaptation and fair learning. Our code and pretrained models are available at: https://github.com/illidanlab/FADE.
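To make the core idea concrete, the following is a minimal NumPy sketch of adversarial debiasing inside a FedAvg-style loop: each client trains a task head on its local data while an adversarial head tries to predict the group, and the shared feature extractor receives the *reversed* adversary gradient so its features become group-invariant. One client sets the adversarial weight to zero, illustrating the opt-out option mentioned above. All names, shapes, and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear models (illustrative only):
# F: shared feature extractor, C: task head, D: adversarial (group) head.
d_in, d_feat = 4, 2
F = rng.normal(size=(d_in, d_feat)) * 0.1
C = rng.normal(size=(d_feat,)) * 0.1
D = rng.normal(size=(d_feat,)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(F, C, D, X, y, g, lr=0.1, lam=0.5):
    """One local step: C and D descend their own binary cross-entropy losses,
    while F descends the task loss and *ascends* the adversary loss
    (gradient reversal), scaled by lam. Setting lam=0 opts this client
    out of the adversarial component."""
    Z = X @ F                      # local features
    g_task = sigmoid(Z @ C) - y    # dL_task / dlogit
    g_adv = sigmoid(Z @ D) - g     # dL_adv / dlogit
    n = len(y)
    grad_C = Z.T @ g_task / n
    grad_D = Z.T @ g_adv / n
    # Feature extractor: task gradient minus lam * adversary gradient.
    grad_F = (X.T @ np.outer(g_task, C) - lam * X.T @ np.outer(g_adv, D)) / n
    return F - lr * grad_F, C - lr * grad_C, D - lr * grad_D

# Two clients per round; the second one opts out of debiasing (lam=0).
for _ in range(50):
    updates = []
    for lam in (0.5, 0.0):
        X = rng.normal(size=(32, d_in))
        grp = (rng.random(32) < 0.5).astype(float)        # group label
        y = ((X[:, 0] + 0.5 * grp) > 0).astype(float)     # biased task label
        updates.append(local_update(F, C, D, X, y, grp, lam=lam))
    # Server averages the local models (FedAvg aggregation).
    F, C, D = (sum(u[i] for u in updates) / len(updates) for i in range(3))
```

The gradient-reversal term is the only coupling between the adversary and the feature extractor, which is what lets a client drop the adversarial head (and its group labels) without changing the rest of the protocol.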
Hong, J., Zhu, Z., Yu, S., Wang, Z., Dodge, H. H., & Zhou, J. (2021). Federated Adversarial Debiasing for Fair and Transferable Representations. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 617–627). Association for Computing Machinery. https://doi.org/10.1145/3447548.3467281