DePRL: Achieving Linear Convergence Speedup in Personalized Decentralized Learning with Shared Representations

Abstract

Decentralized learning has emerged as an alternative to the popular parameter-server framework, which suffers from a high communication burden, single-point failure, and scalability issues due to its reliance on a central server. However, most existing works train a single shared model for all workers regardless of data heterogeneity, so the resulting model may perform poorly on individual workers. In this work, we propose a novel personalized decentralized learning algorithm named DePRL via shared representations. Drawing on ideas from representation learning theory, DePRL collaboratively learns a low-dimensional global representation among all workers in a fully decentralized manner, together with a user-specific low-dimensional local head that yields a personalized solution for each worker. We show that DePRL achieves, for the first time, a provable linear speedup for convergence with general non-linear representations, i.e., the convergence rate improves linearly with the number of workers. Experimental results support our theoretical findings, demonstrating the superiority of our method in data-heterogeneous environments.
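
The alternating update pattern the abstract describes (local personalized heads plus a collaboratively learned shared representation) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the authors' implementation: a linear representation with squared loss (the paper covers general non-linear representations), a complete-graph gossip matrix W, and hypothetical dimensions, data, and step sizes.

```python
import numpy as np

# Minimal sketch of a DePRL-style round, under assumed details:
# linear representation, squared loss, fixed doubly stochastic gossip
# matrix W. The actual algorithm supports general non-linear representations.
rng = np.random.default_rng(0)

n_workers, d_in, d_rep = 4, 10, 3                      # hypothetical sizes
W = np.full((n_workers, n_workers), 1.0 / n_workers)   # complete-graph mixing

# Heterogeneous local data: each worker draws its own ground-truth model.
X = [rng.normal(size=(50, d_in)) for _ in range(n_workers)]
y = [x @ rng.normal(size=d_in) for x in X]

# One copy of the shared representation per worker (to be gossiped)
# and a user-specific local head (never communicated).
phi = [0.1 * rng.normal(size=(d_in, d_rep)) for _ in range(n_workers)]
head = [np.zeros(d_rep) for _ in range(n_workers)]

eta_head, eta_rep = 0.05, 0.01                         # assumed step sizes

for t in range(200):
    new_phi = []
    for i in range(n_workers):
        Z = X[i] @ phi[i]                    # features under current rep.
        # 1) Update the personalized local head (kept private to worker i).
        err = Z @ head[i] - y[i]
        head[i] -= eta_head * Z.T @ err / len(y[i])
        # 2) Local gradient step on worker i's copy of the representation,
        #    using the freshly updated head.
        err = Z @ head[i] - y[i]
        grad_phi = X[i].T @ np.outer(err, head[i]) / len(y[i])
        new_phi.append(phi[i] - eta_rep * grad_phi)
    # 3) Gossip step: average representation copies with neighbors so all
    #    workers drive toward one shared representation; heads stay local.
    phi = [sum(W[i, j] * new_phi[j] for j in range(n_workers))
           for i in range(n_workers)]
```

The division of labor the abstract emphasizes is visible in step 3: only the representation copies are communicated and mixed, while each worker's head is updated purely locally, which is what personalizes the resulting model under data heterogeneity.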

Citation (APA)
Xiong, G., Yan, G., Wang, S., & Li, J. (2024). DePRL: Achieving Linear Convergence Speedup in Personalized Decentralized Learning with Shared Representations. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 16103–16111). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i14.29543
