Simple, Efficient and Convenient Decentralized Multi-task Learning for Neural Networks

Abstract

Machine learning, and in particular neural networks, requires large amounts of data, which is increasingly distributed (e.g., over user devices or independent storage systems). Aggregating this data at a single site for learning can be impractical due to network costs, legal constraints, or privacy concerns. Decentralized machine learning holds the potential to address these concerns, but unfortunately, most approaches proposed so far for distributed learning with neural networks are single-task and do not transfer easily to multi-task problems. In this paper, we propose a novel learning method for neural networks that is decentralized, multi-task, and keeps users' data local. Our approach works with different learning algorithms and with various types of neural networks. We formally analyze the convergence of our method, and we evaluate its efficiency across a range of neural networks and learning algorithms, demonstrating its benefits in terms of learning quality and convergence.
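
The abstract does not spell out the mechanism, so the following is only an illustrative sketch of what "decentralized, multi-task, with local data" can look like in general: each peer trains on its own data, gossip-averages a shared layer with its neighbors, and keeps a private task-specific head that is never exchanged. Everything here (the Peer class, gossip_average, the ring topology, the squared-loss toy tasks) is a hypothetical construction for illustration, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class Peer:
    """One device: a shared hidden layer plus a private, task-specific head.

    Hypothetical sketch -- not the paper's algorithm.
    """
    def __init__(self, in_dim, hid_dim, out_dim):
        self.W_shared = rng.normal(0, 0.1, (in_dim, hid_dim))  # exchanged with neighbors
        self.W_head = rng.normal(0, 0.1, (hid_dim, out_dim))   # stays local (multi-task part)

    def forward(self, x):
        h = np.tanh(x @ self.W_shared)
        return h @ self.W_head, h

    def sgd_step(self, x, y, lr=0.01):
        # One plain SGD step on a squared loss; the raw data never leaves the peer.
        pred, h = self.forward(x)
        err = pred - y
        grad_head = h.T @ err                           # dL/dW_head
        grad_hidden = (err @ self.W_head.T) * (1 - h**2)  # backprop through tanh
        self.W_head -= lr * grad_head
        self.W_shared -= lr * (x.T @ grad_hidden)

def gossip_average(peers, neighbors):
    """Average only the shared layer with graph neighbors; heads are never shared."""
    new_shared = [
        np.mean([peers[j].W_shared for j in neighbors[i]] + [p.W_shared], axis=0)
        for i, p in enumerate(peers)
    ]
    for p, w in zip(peers, new_shared):
        p.W_shared = w

# Toy run: 3 peers on a ring, each with its own task (different target weights).
peers = [Peer(4, 8, 1) for _ in range(3)]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
targets = [rng.normal(size=(4, 1)) for _ in range(3)]
for step in range(200):
    for p, t in zip(peers, targets):
        x = rng.normal(size=(16, 4))
        p.sgd_step(x, x @ t)        # local training on private data
    gossip_average(peers, neighbors)  # decentralized coordination, no central server
```

The design point the sketch is meant to convey is the split: parameters that benefit from pooling across peers are averaged peer-to-peer, while parameters encoding each peer's task remain private, which is what lets a decentralized scheme serve multiple tasks at once.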

Citation (APA)

Bouchra Pilet, A., Frey, D., & Taïani, F. (2021). Simple, Efficient and Convenient Decentralized Multi-task Learning for Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12695 LNCS, pp. 37–49). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-74251-5_4
