Currently, data collected by the Internet of Things (IoT) still relies on a cloud-centric aggregation and processing approach for training machine learning models. This approach puts the privacy of the participants at risk. In this paper, federated learning (FL) is proposed for privacy-preserving collaborative model training on data distributed across IoT users. To motivate participation, the whole process must be incentivized by rewarding each participant for their contribution to training the federated learning model. Collective training takes place over a long duration and multiple iterations. However, participants may have varying levels of willingness to participate (WTP) and may contribute duplicate or poor-quality data. Therefore, in each iteration, participants must be rewarded based on their contribution in that specific iteration. This paper proposes a methodology to reward each participant based on their contribution, together with a model aggregation technique. The aggregation technique uses Polyak-averaging to aggregate the weights of the local models, with the weight assigned to each local model being proportional to its accuracy on the test dataset. Performance evaluation shows that the federated learning model formed using our aggregation approach reaches the performance level of conventional machine learning as more iterations are performed and performs slightly better than the model formed using the FedAvg algorithm. Additionally, our incentivization methodology provides better performance-based rewards compared to other profit-sharing schemes.
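The abstract does not give implementation details, but a minimal sketch of accuracy-weighted aggregation combined with a Polyak-style running average might look like the following. The function names, the `tau` parameter, and the exact normalization of the accuracy-based coefficients are assumptions introduced here for illustration, not the paper's stated algorithm.

```python
import numpy as np

def aggregate_accuracy_weighted(local_weights, local_accuracies):
    """Combine local model parameters into a single aggregate.

    Each client's parameters are weighted in proportion to that client's
    accuracy on a common test dataset, so more accurate local models
    contribute more to the aggregate (assumed weighting rule).

    local_weights    : list of per-client parameter lists (one np.ndarray per layer)
    local_accuracies : list of floats, one test accuracy per client
    """
    acc = np.asarray(local_accuracies, dtype=float)
    coeffs = acc / acc.sum()  # normalize so the coefficients sum to 1

    num_layers = len(local_weights[0])
    aggregate = []
    for layer in range(num_layers):
        # stack the same layer from every client: shape (num_clients, ...)
        layer_stack = np.stack([w[layer] for w in local_weights], axis=0)
        # accuracy-weighted average across clients for this layer
        aggregate.append(np.tensordot(coeffs, layer_stack, axes=1))
    return aggregate

def polyak_update(previous_global, new_aggregate, tau=0.5):
    """Polyak (moving) average of the previous global model and the new
    aggregate: theta <- (1 - tau) * theta_old + tau * theta_new."""
    return [(1.0 - tau) * old + tau * new
            for old, new in zip(previous_global, new_aggregate)]
```

In each communication round, a server could call `aggregate_accuracy_weighted` on the clients' updates and then `polyak_update` to blend the result into the running global model; the choice of `tau` and the use of a shared test set for accuracy measurement are assumptions, since the abstract does not specify them.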
CITATION STYLE
Hassija, V., Chawla, V., Chamola, V., & Sikdar, B. (2023). Incentivization and Aggregation Schemes for Federated Learning Applications. IEEE Transactions on Machine Learning in Communications and Networking, 1, 185–196. https://doi.org/10.1109/tmlcn.2023.3302811