Computation Offloading in Multi-UAV-Enhanced Mobile Edge Networks: A Deep Reinforcement Learning Approach

Abstract

In this paper, we investigate a multi-UAV-enhanced mobile edge network (MUEMN), where multiple unmanned aerial vehicles (UAVs) are deployed as aerial edge servers to provide computing services for ground moving equipment (GME). The movement of each GME is simulated by a Gauss-Markov random model. Under a limited energy budget, each UAV dynamically plans its flight position according to the movement trend of the GME. Our objective is to minimize the total energy consumption of the GME by jointly optimizing the offloading decisions of the GME and the flight positions of the UAVs. More explicitly, we model the optimization problem as a Markov decision process and obtain real-time offloading decisions via a deep reinforcement learning algorithm driven by the dynamic system state, where the asynchronous advantage actor-critic (A3C) framework is leveraged to accelerate the learning process. Finally, numerical results confirm that the proposed A3C-based offloading strategy effectively reduces the total energy consumption of the GME and ensures its continuous operation.
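As background for the mobility assumption in the abstract, the sketch below shows a minimal 2-D Gauss-Markov random mobility trace of the kind used to simulate GME movement. The function name and all parameter values (memory factor `alpha`, mean speed, heading, and noise variances) are illustrative assumptions for this sketch, not values taken from the paper.

```python
import math
import random

def gauss_markov_trace(steps, alpha=0.8, mean_speed=1.5, mean_dir=0.0,
                       speed_sigma=0.5, dir_sigma=0.4, x0=0.0, y0=0.0):
    """Generate a 2-D Gauss-Markov mobility trace (hypothetical parameters).

    alpha in [0, 1] tunes memory: alpha near 1 gives near-linear motion,
    alpha near 0 gives Brownian-like motion.
    """
    x, y = x0, y0
    speed, direction = mean_speed, mean_dir
    trace = [(x, y)]
    noise = math.sqrt(1.0 - alpha ** 2)
    for _ in range(steps):
        # Position advances with the previous speed and heading.
        x += speed * math.cos(direction)
        y += speed * math.sin(direction)
        trace.append((x, y))
        # Speed and heading are correlated with their previous values
        # plus a Gaussian perturbation (the Gauss-Markov update).
        speed = (alpha * speed + (1.0 - alpha) * mean_speed
                 + noise * random.gauss(0.0, speed_sigma))
        direction = (alpha * direction + (1.0 - alpha) * mean_dir
                     + noise * random.gauss(0.0, dir_sigma))
        speed = max(speed, 0.0)  # keep speed non-negative
    return trace

# Example: first few points of a 100-step trace for one GME.
if __name__ == "__main__":
    for px, py in gauss_markov_trace(100)[:5]:
        print(f"({px:.2f}, {py:.2f})")
```

In a setup like the paper's, each UAV would observe such traces to anticipate the movement trend of the GME when replanning its flight position.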

Citation (APA)

Li, B., Yu, S., Su, J., Ou, J., & Fan, D. (2022). Computation Offloading in Multi-UAV-Enhanced Mobile Edge Networks: A Deep Reinforcement Learning Approach. Wireless Communications and Mobile Computing, 2022. https://doi.org/10.1155/2022/6216372
