StitchNet: Distributed On-Device Model Partitioning Over Edge Devices Under Volatile Wireless Links

Abstract

Distributed deep learning architectures can provide scalable learning and inference on resource-constrained edge devices. Although parallelization-based approaches have been actively investigated in the edge computing context, they are not designed for devices that are typically wireless and mobile, where link and device failures are frequent. We propose StitchNet, a semi-distributed deep learning architecture based on model parallelism for volatile wireless edge networks. Our algorithm first identifies a set of effective neurons that have a substantial impact on their connected neurons across layers. It then employs opportunistic neuron assignment to preserve complete forward and backward propagation paths, stitching the model's subsets across devices and adding path redundancy through neuron cloning to ensure high resilience to network and device uncertainty. Simulation-based experiments demonstrate that StitchNet achieves high inference quality on visual classification tasks even in volatile, lossy network environments, by having edge devices collaboratively find and stitch their inward and outward edge paths with reasonable communication overhead.
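
The minimal sketch below illustrates, under simplifying assumptions, the two steps outlined in the abstract: scoring neurons by the magnitude of their outgoing connections to pick the "effective" ones, and opportunistically placing each selected neuron on multiple devices (neuron cloning) so that a single link or device failure does not sever the propagation path. The scoring rule, the keep_ratio and redundancy parameters, and the round-robin placement are illustrative assumptions, not the paper's actual algorithm.

# A hedged sketch (not the authors' implementation) of effective-neuron
# selection and redundant assignment across edge devices.
import numpy as np

def effective_neurons(weight_next_layer, keep_ratio=0.5):
    """Rank a layer's neurons by the total magnitude of their outgoing
    weights to the next layer and keep the top fraction (assumed rule)."""
    impact = np.abs(weight_next_layer).sum(axis=0)   # one score per source neuron
    k = max(1, int(keep_ratio * impact.size))
    return np.argsort(impact)[::-1][:k]              # indices of effective neurons

def assign_with_cloning(neuron_ids, devices, redundancy=2):
    """Place each effective neuron on `redundancy` distinct devices
    (round-robin) so losing one device does not break the path."""
    placement = {d: [] for d in devices}
    for i, n in enumerate(neuron_ids):
        for r in range(redundancy):
            placement[devices[(i + r) % len(devices)]].append(n)
    return placement

# Example: a 16->8 dense layer partitioned over three edge devices.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))                          # next-layer weight matrix
keep = effective_neurons(W, keep_ratio=0.5)
plan = assign_with_cloning(keep, devices=["dev_a", "dev_b", "dev_c"], redundancy=2)
print(plan)

In the paper, selection and assignment additionally account for the volatility of wireless links and device availability; the round-robin clone placement here only conveys the redundancy idea.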

Citation (APA)

Lee, J., Cho, J., & Lee, H. (2022). StitchNet: Distributed On-Device Model Partitioning Over Edge Devices Under Volatile Wireless Links. IEEE Access, 10, 110616–110627. https://doi.org/10.1109/ACCESS.2022.3215083
