Edge Caching for D2D Enabled Hierarchical Wireless Networks with Deep Reinforcement Learning

Abstract

Edge caching is a promising method to deal with the traffic explosion problem in future networks. To satisfy user requests, contents can be proactively cached in proximity to users (e.g., at base stations or on user devices). Recently, several learning-based edge caching optimizations have been proposed. However, most previous studies involve dynamic and constantly expanding action and caching spaces, which leads to impracticality and low efficiency. In this paper, we study the edge caching optimization problem by utilizing the Double Deep Q-network (Double DQN) learning framework to maximize the hit rate of user requests. First, we obtain the Device-to-Device (D2D) sharing model by considering both online and offline factors, and we then formulate the optimization problem, which is proved to be NP-hard. The edge caching replacement problem is then modeled as a Markov decision process (MDP). Finally, an edge caching strategy based on Double DQN is proposed. Experimental results based on large-scale real-world traces show the effectiveness of the proposed framework.
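The paper itself does not include code here, but the Double DQN idea the abstract relies on can be sketched briefly: the online network selects the greedy next action while a separate target network evaluates it, which reduces the overestimation bias of plain DQN. The function name, the hit-rate reward, and the toy batch below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, gamma=0.99, done=None):
    """Double DQN bootstrap targets for a batch of transitions.

    The online network picks the greedy action in the next state;
    the target network supplies that action's value estimate.
    """
    batch = np.arange(len(rewards))
    best_actions = np.argmax(next_q_online, axis=1)      # action selection (online net)
    evaluated = next_q_target[batch, best_actions]       # action evaluation (target net)
    if done is None:
        done = np.zeros(len(rewards), dtype=bool)
    return rewards + gamma * evaluated * (~done)

# Toy batch: 2 cache states, 3 candidate replacement actions each.
# Reward models a request outcome, e.g. cache hit = 1, miss = 0 (assumed).
rewards = np.array([1.0, 0.0])
next_q_online = np.array([[0.2, 0.9, 0.1],
                          [0.5, 0.4, 0.6]])
next_q_target = np.array([[0.3, 0.8, 0.2],
                          [0.7, 0.1, 0.9]])
targets = double_dqn_targets(rewards, next_q_online, next_q_target, gamma=0.9)
# First transition: online net picks action 1, target net scores it 0.8,
# so the target is 1.0 + 0.9 * 0.8 = 1.72.
```

In a caching setting such as the paper's, a state would encode the current cache contents and request statistics, and each action would correspond to a candidate content replacement.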

Citation (APA)

Li, W., Wang, C., Li, D., Hu, B., Wang, X., & Ren, J. (2019). Edge Caching for D2D Enabled Hierarchical Wireless Networks with Deep Reinforcement Learning. Wireless Communications and Mobile Computing, 2019. https://doi.org/10.1155/2019/2561069
