Dynamic deployment method based on double deep Q-network in UAV-assisted MEC systems

Abstract

The unmanned aerial vehicle (UAV) assisted mobile edge computing (MEC) system leverages the high maneuverability of UAVs to provide efficient computing services to terminals. A dynamic deployment algorithm based on the double deep Q-network (DDQN) is proposed to address the energy limitation and obstacle-avoidance problems that arise when a UAV provides edge services to terminals. First, in a scenario with multiple ground obstacles and multiple terminals, the energy consumption of the UAV and the fairness of the terminals' geographic coverage are jointly optimized while the UAV avoids obstacles. Second, a double deep Q-network is introduced to counter the slow convergence and the risk of falling into local optima during training on the optimization problem, and a pseudo-count exploration strategy is incorporated into the learning process. Finally, experimental results show that the improved DDQN algorithm converges faster and attains a higher average system reward. In terms of the fairness of terminal geographic locations, the improved DDQN algorithm outperforms the Q-learning, DQN, and DDQN algorithms by 50%, 20%, and 15.38%, respectively, and the stability of the improved algorithm is also validated.
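The abstract names two ingredients, the double-Q target and a pseudo-count exploration strategy, without giving their exact formulation. The sketch below is a minimal illustration of both ideas, assuming a standard double DQN target and a count-based bonus of the form beta / sqrt(N(s)); all names (`double_q_target`, `PseudoCountBonus`, the grid-cell state keys) are illustrative, not the paper's implementation.

```python
import numpy as np

def double_q_target(q_online, q_target, next_state, reward, done, gamma=0.99):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it (reduces Q-value over-estimation)."""
    best_action = int(np.argmax(q_online(next_state)))   # action selection (online net)
    bootstrap = q_target(next_state)[best_action]         # action evaluation (target net)
    return reward + gamma * (1.0 - float(done)) * bootstrap

class PseudoCountBonus:
    """Count-based exploration bonus beta / sqrt(N(s)); one common
    pseudo-count form, assumed here rather than taken from the paper."""
    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = {}

    def __call__(self, state_key):
        self.counts[state_key] = self.counts.get(state_key, 0) + 1
        return self.beta / np.sqrt(self.counts[state_key])

# Toy usage with random Q-functions over 4 UAV actions and hypothetical grid-cell states
rng = np.random.default_rng(0)
q_online = lambda s: rng.normal(size=4)
q_target = lambda s: rng.normal(size=4)
bonus = PseudoCountBonus(beta=0.1)
shaped_reward = -1.0 + bonus(state_key=(3, 7))            # add bonus to the environment reward
y = double_q_target(q_online, q_target, next_state=(3, 8),
                    reward=shaped_reward, done=False)
```

Adding the bonus to the environment reward before computing the target drives the agent toward rarely visited states, which is one way the abstract's pseudo-count strategy could speed up convergence.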

Citation (APA)

Zhang, S., Zhang, L., Xu, F., Cheng, S., Su, W., & Wang, S. (2023). Dynamic deployment method based on double deep Q-network in UAV-assisted MEC systems. Journal of Cloud Computing, 12(1). https://doi.org/10.1186/s13677-023-00507-6
