Self-adaptive power control with deep reinforcement learning for millimeter-wave Internet-of-vehicles video caching

Abstract

Video delivery and caching over the millimeter-wave (mmWave) spectrum is a promising technology for achieving high data rates and efficient frequency utilization in many applications, including distributed vehicular networks. However, because handoff durations are short, jointly calibrating the optimal power allocation of each base station toward its associated vehicles and the cache allocation is computationally challenging. To date, most video delivery applications have relied on online or offline algorithms, which are unable to compute and optimize high-dimensional objectives with low delay in large-scale vehicular networks. Deep reinforcement learning, on the other hand, has been shown to handle problems of this scale through an optimized policy-learning phase. In this paper, we propose deep deterministic policy gradient (DDPG)-based power control of mmWave base stations (mBSs) and proactive cache allocation toward mBSs in distributed mmWave Internet-of-Vehicles (IoV) networks. Simulation results validate the performance of the proposed caching scheme in terms of provisioned video quality and playback stalls across various scales of IoV networks.
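To give a flavor of the DDPG-style control the abstract describes, the sketch below trains a deterministic policy that maps a channel state to a transmit-power level in a toy single-step setting. It is a minimal illustration, not the paper's method: linear function approximation stands in for the deep actor and critic networks, and the reward shape (rate minus power cost), gain range, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(s, p, cost=0.5):
    # Toy rate-minus-power objective: log(1 + gain * power) - cost * power.
    # This is an assumed stand-in for the paper's actual utility function.
    return np.log(1.0 + s * p) - cost * p

theta = np.zeros(2)   # actor parameters: power = sigmoid(theta0 + theta1 * s)
w = np.zeros(5)       # critic parameters over features phi(s, a)

def actor(s):
    # Deterministic policy, squashed to the normalized power range (0, 1).
    return 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * s)))

def phi(s, a):
    # Quadratic-in-action critic features so dQ/da is well defined.
    return np.array([1.0, s, a, a * a, s * a])

alpha, beta, sigma = 0.01, 0.05, 0.1   # actor lr, critic lr, exploration noise

for t in range(5000):
    s = rng.uniform(0.5, 2.0)                     # sampled channel gain (state)
    a = np.clip(actor(s) + sigma * rng.standard_normal(), 0.0, 1.0)
    r = reward(s, a)
    # Critic: regress Q(s, a) toward the observed reward (single-step episode,
    # so no bootstrapped target network is needed in this toy version).
    w += beta * (r - w @ phi(s, a)) * phi(s, a)
    # Actor: deterministic policy gradient, chain rule through the sigmoid.
    p = actor(s)
    dq_da = w[2] + 2.0 * w[3] * p + w[4] * s
    theta += alpha * dq_da * p * (1.0 - p) * np.array([1.0, s])

eval_s = np.linspace(0.5, 2.0, 9)
eval_p = np.array([actor(s) for s in eval_s])
```

The key DDPG ingredient shown here is the actor update: the critic's gradient with respect to the action, `dq_da`, is pushed back through the policy's parameters, rather than relying on sampled-return policy gradients.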

Citation (APA)

Kwon, D., Kim, J., Mohaisen, D. A., & Lee, W. (2020). Self-adaptive power control with deep reinforcement learning for millimeter-wave Internet-of-vehicles video caching. Journal of Communications and Networks, 22(4), 326–337. https://doi.org/10.1109/JCN.2020.000022
