Abstract
Distributed learning systems are increasingly being adopted for a variety of applications as centralized training becomes infeasible. Several architectures have emerged to divide and conquer the computational load, or to run privacy-aware deep learning models, using split or federated learning. Each architecture has benefits and drawbacks. In this work, we compare the efficiency and privacy performance of two distributed learning architectures that combine the principles of split and federated learning, aiming to capture the best of both: in particular, our design goal is to reduce the computational power required by each client in federated learning and to parallelize split learning. We share some initial lessons learned from our implementation, which leverages the PySyft and PyGrid libraries.
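The abstract describes splitting a neural network between client and server and running that split across clients in a federated fashion. As an illustration only (not the paper's PySyft/PyGrid implementation), the following minimal sketch shows one split-learning training step in plain PyTorch; the model, the cut point, the hyperparameters, and the average_state_dicts helper are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical split: the client holds the first layers, the server the rest.
# Layer sizes, the cut point, and optimizer settings are illustrative only.
client = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
server = nn.Linear(256, 10)

opt_c = torch.optim.SGD(client.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)    # dummy batch of inputs
y = torch.randint(0, 10, (32,))   # dummy labels

# One split-learning step: only the cut-layer activations ("smashed data")
# and their gradients cross the client/server boundary, never the raw inputs.
smashed = client(x)
sent = smashed.detach().requires_grad_()   # what the client transmits

loss = loss_fn(server(sent), y)
opt_s.zero_grad()
loss.backward()                            # server-side backward pass
opt_s.step()

opt_c.zero_grad()
smashed.backward(sent.grad)                # gradient returned to the client
opt_c.step()

# Federated flavor: with several clients training in parallel, the client-side
# weights could be averaged FedAvg-style after each round (again, a sketch).
def average_state_dicts(dicts):
    keys = dicts[0].keys()
    return {k: torch.stack([d[k] for d in dicts]).mean(dim=0) for k in keys}
```

The design intuition this sketch captures is the one stated in the abstract: keeping only the early layers on the client reduces its computational load relative to full federated learning, while averaging the client-side weights across workers parallelizes split learning instead of training clients one at a time.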
Citation
Turina, V., Zhang, Z., Esposito, F., & Matta, I. (2020). Combining split and federated architectures for efficiency and privacy in deep learning. In CoNEXT 2020 - Proceedings of the 16th International Conference on Emerging Networking EXperiments and Technologies (pp. 562–563). Association for Computing Machinery, Inc. https://doi.org/10.1145/3386367.3431678