5G communication resource allocation strategy for mobile edge computing based on deep deterministic policy gradient

  • He J
Abstract

Distributed base station deployment, limited server resources, and dynamically changing end users in mobile edge networks make the design of computation offloading schemes extremely challenging. Considering the advantages of deep reinforcement learning (DRL) in handling dynamic, complex problems, this paper designs an optimal computation offloading and resource allocation strategy. First, the author considers a multi-user mobile edge network scenario consisting of a Macro-cell Base Station (MBS), a Small-cell Base Station (SBS), and multiple terminal devices, and formulates the resulting communication and computation overheads in detail. Next, taking the tasks' deterministic delay constraints into account, the optimization objective is defined as the overall system energy consumption. A learning algorithm based on Deep Deterministic Policy Gradient (DDPG) is then proposed to minimize this energy consumption. Finally, simulation experiments show that the proposed DDPG algorithm effectively optimizes the target value, achieving a total system energy consumption of only 15.6 J, which outperforms the compared algorithms. The results also demonstrate that the proposed algorithm allocates communication resources effectively.
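The paper itself does not publish code; the sketch below only illustrates the kind of DDPG training loop the abstract describes (actor–critic networks, target networks with soft updates, exploration noise, and a replay buffer). The state layout (per-user channel gain and task size), the offloading-ratio action space, and the placeholder energy reward are assumptions made for illustration and are not the paper's system model.

```python
# Minimal DDPG sketch for a toy MEC offloading/resource-allocation task (illustrative only).
import numpy as np
import torch
import torch.nn as nn

N_USERS = 4
STATE_DIM = 2 * N_USERS      # assumed: channel gain + task size per user
ACTION_DIM = N_USERS         # assumed: one offloading ratio per user, in [0, 1]

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),  # keeps ratios in [0, 1]
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()          # target networks
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def fake_env_step(state, action):
    # Placeholder dynamics: "energy" grows with the amount of offloaded data.
    energy = float((action * state[:N_USERS]).sum())
    next_state = np.random.rand(STATE_DIM).astype(np.float32)
    return next_state, -energy                  # reward = negative energy consumption

def train_step(batch):
    s, a, r, s2 = (torch.as_tensor(np.stack(x), dtype=torch.float32) for x in batch)
    with torch.no_grad():
        q_target = r.unsqueeze(1) + GAMMA * critic_t(s2, actor_t(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()    # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Soft (Polyak) update of the target networks.
    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)

# Rollout with Gaussian exploration noise and a simple replay buffer.
buffer, state = [], np.random.rand(STATE_DIM).astype(np.float32)
for step in range(200):
    with torch.no_grad():
        action = actor(torch.as_tensor(state)).numpy()
    action = np.clip(action + 0.1 * np.random.randn(ACTION_DIM), 0.0, 1.0)
    next_state, reward = fake_env_step(state, action)
    buffer.append((state, action.astype(np.float32), np.float32(reward), next_state))
    state = next_state
    if len(buffer) >= 32:
        idx = np.random.choice(len(buffer), 32, replace=False)
        train_step(list(zip(*[buffer[i] for i in idx])))
```

In a real MEC setting, the placeholder environment would be replaced by the paper's communication and computation overhead model, with the reward derived from the delay-constrained system energy consumption.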

Citation (APA)

He, J. (2023). 5G communication resource allocation strategy for mobile edge computing based on deep deterministic policy gradient. The Journal of Engineering, 2023(3). https://doi.org/10.1049/tje2.12250
