Edge Computing Resource Allocation Algorithm for NB-IoT Based on Deep Reinforcement Learning


Abstract

Mobile edge computing (MEC) technology guarantees the privacy and security of large-scale data in Narrowband IoT (NB-IoT) by deploying MEC servers near base stations, providing sufficient computing, storage, and data-processing capacity to meet the delay and energy-consumption requirements of NB-IoT terminal equipment. For the NB-IoT MEC system, this paper proposes a resource allocation algorithm based on deep reinforcement learning to optimize the total cost of task offloading and execution. Since the formulated problem is a mixed-integer non-linear program (MINLP), we cast it as a multi-agent distributed deep reinforcement learning (DRL) problem and address it with a dueling deep Q-network algorithm. Simulation results show that, compared with the deep Q-network and the all-local and all-offload baseline algorithms, the proposed algorithm effectively guarantees the success rates of task offloading and execution. In addition, when the execution task volume is 200 KBit, the total system cost of the proposed algorithm is reduced by at least 1.3%, and when the execution task volume is 600 KBit, the total cost of executing tasks is reduced by up to 16.7%.
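The abstract names a dueling Q-network as the learning component but does not spell out its structure. As a rough illustration only, the following is a minimal PyTorch sketch of the dueling architecture, which splits the Q-function into a state value V(s) and per-action advantages A(s,a). The state features, action set, and layer sizes below are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the agent's observation.
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Separate heads for the state value and the action advantages.
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_actions)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)       # V(s): scalar state value
        a = self.advantage(h)   # A(s, a): one advantage per action
        # Subtract the mean advantage so V and A are identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

# Hypothetical usage: each NB-IoT device acts as one agent. Here the state is
# assumed to be (task size, queue length, channel gain) and the two discrete
# actions are "execute locally" vs. "offload to the MEC server".
net = DuelingQNetwork(state_dim=3, num_actions=2)
q_values = net(torch.rand(1, 3))   # Q-value for each offloading action
action = q_values.argmax(dim=1)    # greedy offloading decision
```

In a multi-agent distributed setup of the kind the abstract describes, each terminal would typically hold its own copy of such a network and select its offloading action from local observations; the cost model (delay plus energy) would enter through the reward signal during training.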

Citation (APA)

Chu, J., Pan, C., Wang, Y., Yun, X., & Li, X. (2023). Edge Computing Resource Allocation Algorithm for NB-IoT Based on Deep Reinforcement Learning. IEICE Transactions on Communications, E106.B(5), 439–447. https://doi.org/10.1587/transcom.2022EBP3076
