Federated Deep Reinforcement Learning for Online Task Offloading and Resource Allocation in WPC-MEC Networks

24 citations · 31 Mendeley readers. This article is free to access.

Abstract

Mobile edge computing (MEC) is considered an effective technological solution for developing the Internet of Things (IoT), as it provides cloud-like capabilities to mobile users. This article combines wireless powered communication (WPC) technology with an MEC network, in which a base station (BS) can transfer wireless energy to edge users (EUs) and execute their computation-intensive tasks through task offloading. Traditional numerical optimization methods are too time-consuming to solve this problem under time-varying wireless channels, and centralized deep reinforcement learning (DRL) is unstable in large-scale, dynamic IoT networks. Therefore, we propose a federated DRL-based online task offloading and resource allocation (FDOR) algorithm. In this algorithm, DRL is executed at the EUs, and federated learning (FL) exploits the distributed architecture of MEC to aggregate and update the model parameters. To further address the non-IID data of mobile EUs, we devise an adaptive method that automatically adjusts the FDOR algorithm's learning rate. Simulation results demonstrate that the proposed FDOR algorithm is superior to the traditional numerical optimization method and an existing DRL algorithm in four aspects: convergence speed, execution delay, overall computation rate, and stability in large-scale, dynamic IoT networks.
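The abstract describes a FedAvg-style training loop: each EU runs local DRL gradient steps, the BS aggregates the EUs' parameters, and the learning rate is adapted to counter non-IID local data. The following is a minimal, hedged sketch of that pattern only; the function names (`local_update`, `fed_avg`, `adaptive_lr`), the weighted-average aggregation rule, and the divergence-based learning-rate decay are illustrative assumptions, not the paper's actual FDOR implementation.

```python
import numpy as np

def local_update(theta, grad, lr):
    """One local DRL gradient step at an edge user (EU).
    theta: parameter vector; grad: local policy gradient; lr: learning rate."""
    return theta - lr * grad

def fed_avg(params, weights):
    """BS-side aggregation: weighted average of EU parameter vectors
    (weights could be, e.g., local sample counts — an assumption here)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, params))

def adaptive_lr(base_lr, divergence, k=1.0):
    """Illustrative adaptive rule: shrink the learning rate as the
    measured non-IID divergence between EUs grows. The paper's actual
    adaptation scheme may differ."""
    return base_lr / (1.0 + k * divergence)
```

For example, aggregating two EU parameter vectors `[1, 2]` and `[3, 4]` with weights `[1, 3]` yields `0.25*[1, 2] + 0.75*[3, 4] = [2.5, 3.5]`.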

Citation (APA)

Zang, L., Zhang, X., & Guo, B. (2022). Federated Deep Reinforcement Learning for Online Task Offloading and Resource Allocation in WPC-MEC Networks. IEEE Access, 10, 9856–9867. https://doi.org/10.1109/ACCESS.2022.3144415
