Dynamic Computation Offloading with Deep Reinforcement Learning in Edge Network

Abstract

With the booming proliferation of user requests in Internet of Things (IoT) networks, Edge Computing (EC) is emerging as a promising paradigm for providing flexible and reliable services. Given the resource constraints of IoT devices, a heavily loaded device may fail to respond to delay-sensitive user requests on time. EC addresses this by offloading user requests to edge servers at the edge of the network. Orchestrating these offloading schemes, however, poses a significant challenge with respect to request delay and the energy consumption of IoT devices in edge networks. To address this challenge, we propose a dynamic computation offloading strategy consisting of the following: (i) we introduce the concept of intermediate nodes, which minimize request delay and the energy consumed by the tasks an IoT device is currently handling by dynamically combining task-offloading and service-migration strategies; (ii) based on the current network workload, the intermediate-node selection problem is modeled as a multi-dimensional Markov Decision Process (MDP), and a deep reinforcement learning algorithm is applied to reduce the large MDP space and make fast decisions. Experimental results show that this strategy outperforms existing baseline methods in reducing both request delay and the energy consumption of IoT devices.
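The abstract does not give the paper's network model or DRL architecture, but the core idea (selecting an intermediate node as a function of the current workload, trading off delay against energy) can be illustrated with a deliberately simplified sketch. The sketch below substitutes tabular Q-learning for the paper's deep reinforcement learning, and all node counts, workload levels, and delay/energy costs are toy assumptions, not values from the paper.

```python
import random

# Toy model (assumed, not from the paper): an IoT device picks an
# "intermediate node" to handle a task: 0 = execute locally,
# 1..N-1 = offload to an edge server. The state is a discretized
# device/network workload level.
random.seed(0)

NUM_NODES = 3        # local + 2 edge servers (assumed)
WORKLOAD_LEVELS = 4  # discretized workload states (assumed)

def cost(workload, node):
    """Combined delay + energy cost; lower is better (illustrative)."""
    if node == 0:
        # Local execution: cost grows with the device's workload.
        return 1.0 + 2.0 * workload
    # Offloading: roughly constant transmission/processing cost per server.
    return 2.0 + 0.5 * node

# Tabular stand-in for the paper's DRL value function.
Q = [[0.0] * NUM_NODES for _ in range(WORKLOAD_LEVELS)]
alpha, eps = 0.1, 0.2  # learning rate, exploration rate

for _ in range(5000):
    s = random.randrange(WORKLOAD_LEVELS)  # observe current workload
    if random.random() < eps:
        a = random.randrange(NUM_NODES)    # explore
    else:
        a = max(range(NUM_NODES), key=lambda n: Q[s][n])  # exploit
    # One-step episode: reward is the negative combined cost.
    Q[s][a] += alpha * (-cost(s, a) - Q[s][a])

# Learned policy: best intermediate node per workload level.
policy = [max(range(NUM_NODES), key=lambda n: Q[s][n])
          for s in range(WORKLOAD_LEVELS)]
print(policy)
```

Under these toy costs the learned policy keeps execution local only at the lowest workload level and offloads to the cheapest edge server otherwise, which mirrors the qualitative behavior the abstract describes: offloading pays off precisely when the device is heavily loaded.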

Citation (APA)

Bai, Y., Li, X., Wu, X., & Zhou, Z. (2023). Dynamic Computation Offloading with Deep Reinforcement Learning in Edge Network. Applied Sciences (Switzerland), 13(3). https://doi.org/10.3390/app13032010
