Deep reinforcement learning for dynamic power allocation in cell-free mmWave massive MIMO


Abstract

Numerical optimization has been investigated for decades to solve complex problems in wireless communication systems. This work has produced many effective methods, e.g., the weighted minimum mean square error (WMMSE) algorithm. However, these methods often incur a high computational cost, making them difficult to apply to time-constrained problems. Recently, data-driven methods have attracted considerable attention due to their near-optimal performance at affordable computational cost. Deep reinforcement learning (DRL) is one of the most promising optimization methods for future wireless communication systems. In this paper, we investigate a DRL method, based on a deep Q-network (DQN), for allocating the downlink transmission power in cell-free (CF) mmWave massive multiple-input multiple-output (MIMO) systems. We consider sum spectral efficiency (SE) optimization for systems with mobile user equipment (UEs). The DQN is trained via the rewards obtained from trial-and-error interactions with the environment over time. It takes the long-term fading information as input and outputs the downlink transmission power values. The numerical results, obtained for a particular 3GPP scenario, show that the DQN outperforms WMMSE in terms of sum-SE and has a much lower computational complexity.
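To illustrate the kind of mapping the abstract describes, the sketch below shows a toy DQN-style greedy power-allocation step: a Q-network takes large-scale (long-term) fading coefficients as state and selects a discretized downlink power level per UE. This is a minimal illustrative sketch, not the authors' implementation; all dimensions, the random (untrained) network weights, and the joint-action encoding are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
N_APS, N_UES = 4, 2              # access points and user equipments
N_LEVELS = 8                     # discretized downlink power levels
P_MAX = 1.0                      # max transmit power (normalized)
POWER_LEVELS = np.linspace(0.0, P_MAX, N_LEVELS)

STATE_DIM = N_APS * N_UES        # one large-scale fading coefficient per AP-UE link
ACTION_DIM = N_LEVELS ** N_UES   # one power level per UE (toy joint action space)

# A tiny two-layer Q-network with random weights, standing in for a trained DQN.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, 32))
W2 = rng.normal(scale=0.1, size=(32, ACTION_DIM))

def q_values(state):
    """Forward pass: state (large-scale fading vector) -> Q-value per joint action."""
    hidden = np.maximum(state @ W1, 0.0)  # ReLU activation
    return hidden @ W2

def allocate_power(state):
    """Greedy policy: pick the joint action with the highest Q-value,
    then decode it into one power level per UE (base-N_LEVELS digits)."""
    action = int(np.argmax(q_values(state)))
    levels = [(action // N_LEVELS**k) % N_LEVELS for k in range(N_UES)]
    return POWER_LEVELS[levels]

# Example state: large-scale fading drawn at random (placeholder for a
# 3GPP path-loss model, which the paper uses but is not reproduced here).
beta = rng.exponential(size=STATE_DIM)
powers = allocate_power(beta)
```

In the actual DRL loop, the network weights would be updated from rewards (e.g., the achieved sum-SE) collected over many such interactions; here the weights are random simply to keep the forward pass self-contained.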

Citation (APA)

Zhao, Y., Niemegeers, I., & de Groot, S. H. (2021). Deep reinforcement learning for dynamic power allocation in cell-free mmWave massive MIMO. In Proceedings of the 18th International Conference on Wireless Networks and Mobile Systems, WINSYS 2021 (pp. 33–45). SciTePress. https://doi.org/10.5220/0010617300330045
