Federated Learning Biases in Heterogeneous Edge-Devices - A Case-study


Abstract

Critical machine learning applications (medical image guidance, task prediction, anomaly detection) require large amounts of data that cannot be sufficiently supplied by a single entity, so multiple edge devices collaboratively train on their collected data. This, however, raises privacy and communication-overhead concerns. Federated learning (FL) is a promising solution that enables these applications while preserving data privacy and mitigating communication overhead. However, an FL model trained across edge deployments with heterogeneous resources may be biased towards a subset of devices. We observe that existing bias mitigation techniques in FL focus mainly on bias that originates from label heterogeneity (i.e., skewed label distributions across devices). We argue that sample feature heterogeneity, arising from different feature representations at the devices, is a major contributor to bias in FL. In this paper, we present an analysis of the bias that arises from sample feature heterogeneity, and evaluate the potential of existing performance-enhancing techniques (normalization) to overcome it. Our results demonstrate that normalization techniques do not eliminate this bias, motivating the need for dedicated bias mitigation techniques in FL.
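The bias mechanism the abstract describes can be illustrated with the standard FedAvg aggregation step, where per-device updates are averaged with weights proportional to each device's sample count. This is a minimal sketch, not the paper's implementation; the `fedavg` function, device counts, and parameter vectors are illustrative assumptions.

```python
import numpy as np

def fedavg(updates, sample_counts):
    """Weighted average of per-device model updates (FedAvg-style).

    updates: list of 1-D parameter arrays, one per device
    sample_counts: per-device number of training samples
    """
    w = np.asarray(sample_counts, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return sum(wi * u for wi, u in zip(w, np.stack(updates)))

# Two hypothetical devices whose heterogeneous feature representations
# push the model in different directions. The data-rich device dominates
# the weighted average, biasing the global model toward its features.
global_update = fedavg([np.array([1.0, 0.0]),   # device A, 900 samples
                        np.array([0.0, 1.0])],  # device B, 100 samples
                       sample_counts=[900, 100])
# global_update → [0.9, 0.1]: skewed toward device A's direction
```

Because aggregation weights track data volume rather than representation quality, devices with minority feature distributions contribute little to the global model, which is the bias the paper argues normalization alone does not remove.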

Citation (APA)

Selialia, K., Chandio, Y., & Anwar, F. M. (2022). Federated Learning Biases in Heterogeneous Edge-Devices - A Case-study. In SenSys 2022 - Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems (pp. 980–986). Association for Computing Machinery, Inc. https://doi.org/10.1145/3560905.3568305
