Multi-Agent Reinforcement Learning-Based Resource Allocation Scheme for UAV-Assisted Internet of Remote Things Systems

Abstract

Multi-layered communication networks including satellites and unmanned aerial vehicles (UAVs) with remote sensing capability are expected to be an essential part of next-generation wireless communication systems. Deep reinforcement learning algorithms have been reported to bring performance improvements in various practical wireless communication environments. However, computational complexity is anticipated to become a critical issue as the number of devices in the network increases significantly. To resolve this problem, in this paper we propose a multi-agent reinforcement learning (MARL)-based resource allocation scheme for UAV-assisted Internet of remote things (IoRT) systems. The UAV and the IoRT sensors act as MARL agents, each trained independently to minimize the energy consumption cost of communication by controlling its transmit power and bandwidth. It is shown that the UAV agent reduces energy consumption by 70.9195 kJ and the IoRT sensor agents by 20.5756 kJ, corresponding to reductions of 65.4% and 71.97% relative to the initial state of each agent. Moreover, the effects of the hyperparameters of the neural episodic control (NEC) baseline algorithm are investigated in terms of power consumption.
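
To make the independent-agent setup concrete, below is a minimal sketch of per-agent training against a toy energy model. Everything here is an illustrative assumption rather than the authors' method: the paper builds on neural episodic control (NEC), whereas this sketch substitutes plain tabular epsilon-greedy Q-learning; the action grids, the Shannon-style `energy_cost` model, the noise power, and the state and transition model are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical discretized action space: (transmit power in W, bandwidth in MHz).
POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]        # W   (illustrative values)
BANDWIDTH_LEVELS = [1.0, 2.0, 5.0, 10.0]   # MHz (illustrative values)
ACTIONS = [(p, b) for p in POWER_LEVELS for b in BANDWIDTH_LEVELS]

class IndependentAgent:
    """Independent epsilon-greedy Q-learner, a stand-in for the paper's NEC agents."""
    def __init__(self, n_states, lr=0.1, gamma=0.95, eps=0.1, rng=None):
        self.q = np.zeros((n_states, len(ACTIONS)))
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.rng = rng or np.random.default_rng()

    def act(self, s):
        # Epsilon-greedy action selection over the (power, bandwidth) grid.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(ACTIONS)))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        # One-step Q-learning update toward the bootstrapped target.
        target = r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.lr * (target - self.q[s, a])

def energy_cost(power_w, bandwidth_mhz, demand_bits, slot_s=1.0):
    """Toy energy model: transmit at power_w for as long as a Shannon-style
    rate needs to clear demand_bits, capped at the slot length."""
    snr = power_w / 0.05                                # assumed 0.05 W noise power
    rate = bandwidth_mhz * 1e6 * np.log2(1.0 + snr)     # bits/s
    tx_time = min(demand_bits / rate, slot_s)
    return power_w * tx_time                            # joules

# Independent training loop: each agent minimizes its own energy cost,
# mirroring the per-agent (UAV / IoRT sensor) training described above.
rng = np.random.default_rng(0)
agents = [IndependentAgent(n_states=4, rng=rng) for _ in range(3)]
for episode in range(200):
    states = rng.integers(4, size=len(agents))          # e.g. channel-quality bins
    for i, agent in enumerate(agents):
        a = agent.act(states[i])
        p, b = ACTIONS[a]
        reward = -energy_cost(p, b, demand_bits=1e6)    # negative energy = reward
        s_next = int(rng.integers(4))                   # toy channel transition
        agent.update(states[i], a, reward, s_next)
```

In the paper's actual scheme, NEC's episodic memory replaces the Q-table and the UAV and sensor agents have distinct cost structures; this sketch only illustrates the independent-training pattern in which each agent optimizes its own transmit power and bandwidth.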

Citation (APA)

Lee, D., Sun, Y. G., Kim, S. H., Kim, J. H., Shin, Y., Kim, D. I., & Kim, J. Y. (2023). Multi-Agent Reinforcement Learning-Based Resource Allocation Scheme for UAV-Assisted Internet of Remote Things Systems. IEEE Access, 11, 53155–53164. https://doi.org/10.1109/ACCESS.2023.3279401
