Handover Decision Making for Dense HetNets: A Reinforcement Learning Approach

Abstract

In this paper, we consider the problem of handover decision making in a dense heterogeneous network consisting of a macro base station and multiple small base stations. We propose a deep Q-learning based algorithm that efficiently minimizes overall energy consumption by accounting for both the energy consumed in transmission and overheads and for network information such as channel conditions and causal association information. The proposed algorithm follows the centralized training with decentralized execution (CTDE) framework, in which a centralized training agent manages the replay buffer for training its deep Q-network by gathering the state, action, and reward information reported by the distributed agents that execute the actions. Numerical evaluations demonstrate that the proposed algorithm provides significant energy savings over other contemporary mechanisms across a range of overhead costs, especially when the handover procedure itself incurs additional energy consumption.
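To make the CTDE structure described above concrete, the following PyTorch sketch shows the general pattern: distributed agents execute epsilon-greedy handover actions with a shared Q-network, while a central trainer owns the replay buffer and performs the Q-learning updates. This is a minimal illustration, not the authors' implementation; the state layout, reward form, class names (`DistributedAgent`, `CentralTrainer`), and network sizes are all illustrative assumptions.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 8   # e.g., channel gains to candidate BSs + current association (assumed)
N_ACTIONS = 4   # e.g., stay, or hand over to one of three small BSs (assumed)


def reward(tx_energy, handover_energy):
    # Negative total energy: transmission cost plus handover overhead (assumed form).
    return -(tx_energy + handover_energy)


class QNetwork(nn.Module):
    """Deep Q-network mapping a local observation to per-action Q-values."""

    def __init__(self, state_dim=STATE_DIM, n_actions=N_ACTIONS):
        super().__init__()
        self.fc1 = nn.Linear(state_dim, 64)
        self.fc2 = nn.Linear(64, 64)
        self.out = nn.Linear(64, n_actions)

    def forward(self, s):
        h = F.relu(self.fc1(s))
        h = F.relu(self.fc2(h))
        return self.out(h)


class DistributedAgent:
    """Decentralized execution: selects handover actions epsilon-greedily."""

    def __init__(self, q_net, epsilon=0.1):
        self.q_net, self.epsilon = q_net, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            q = self.q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())


class CentralTrainer:
    """Centralized training: manages the replay buffer and updates the Q-network."""

    def __init__(self, q_net, gamma=0.99, lr=1e-3, capacity=10_000):
        self.q_net, self.gamma = q_net, gamma
        self.buffer = deque(maxlen=capacity)
        self.optimizer = torch.optim.Adam(q_net.parameters(), lr=lr)

    def report(self, state, action, reward, next_state):
        # Distributed agents upload their experienced transitions here.
        self.buffer.append((state, action, reward, next_state))

    def train_step(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2 = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
        q = self.q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():  # bootstrap target from the current network
            target = r + self.gamma * self.q_net(s2).max(dim=1).values
        loss = F.smooth_l1_loss(q, target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
```

In a full system each user would run a `DistributedAgent` locally and periodically sync weights from the `CentralTrainer`; a separate target network, which stabilizes DQN training in practice, is omitted here for brevity.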

Citation (APA)

Song, Y., Lim, S. H., & Jeon, S. W. (2023). Handover Decision Making for Dense HetNets: A Reinforcement Learning Approach. IEEE Access, 11, 24737–24751. https://doi.org/10.1109/ACCESS.2023.3254557
