Communication-Oriented Model Fine-Tuning for Packet-Loss Resilient Distributed Inference under Highly Lossy IoT Networks

Abstract

The distributed inference (DI) framework has gained traction as a technique for real-time applications empowered by cutting-edge deep machine learning (ML) on resource-constrained Internet of Things (IoT) devices. In DI, computational tasks are offloaded from the IoT device to an edge server via lossy IoT networks. However, there is generally a communication system-level trade-off between latency and reliability; to obtain accurate DI results, a reliable but high-latency communication system must be adopted, which results in non-negligible end-to-end DI latency. This motivated us to improve the trade-off between communication latency and accuracy through ML techniques. Specifically, we propose communication-oriented model tuning (COMtune), which aims to achieve highly accurate DI over low-latency but unreliable communication links. The key idea of COMtune is to fine-tune the ML model while emulating the effect of unreliable communication links through the dropout technique. This makes the DI system robust against unreliable communication links. Our ML experiments revealed that COMtune enables accurate predictions with low latency under lossy networks.
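
As a rough illustration of the key idea only, the sketch below fine-tunes a split model while applying dropout to the intermediate features at the device/server split point, emulating elements lost over an unreliable link. The architecture, split point, optimizer, and loss rate here are illustrative assumptions, not the configuration reported in the paper.

# Minimal sketch (assumed setup, not the authors' exact model or parameters).
import torch
import torch.nn as nn

loss_rate = 0.3  # assumed packet-loss probability emulated during fine-tuning

device_side = nn.Sequential(          # layers executed on the IoT device
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)
server_side = nn.Sequential(          # layers executed on the edge server
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
emulated_loss = nn.Dropout(p=loss_rate)  # emulates feature elements dropped in transit

optimizer = torch.optim.Adam(
    list(device_side.parameters()) + list(server_side.parameters()), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step with emulated lossy-link dropout at the split point."""
    features = device_side(images)        # activations that would be sent over the IoT link
    features = emulated_loss(features)    # randomly zero elements, as packet loss would
    logits = server_side(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At inference time, the same split model is deployed across the device and edge server; because it was fine-tuned under emulated losses, the server-side layers tolerate missing feature elements without requiring a reliable, high-latency link.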

Citation (APA)

Itahara, S., Nishio, T., Koda, Y., & Yamamoto, K. (2022). Communication-Oriented Model Fine-Tuning for Packet-Loss Resilient Distributed Inference under Highly Lossy IoT Networks. IEEE Access, 10, 14969–14979. https://doi.org/10.1109/ACCESS.2022.3149336
