Deep Q-Learning-Based Content Caching with Update Strategy for Fog Radio Access Networks

Abstract

In order to improve the edge caching efficiency of the fog radio access network (F-RAN), this paper puts forward a distributed deep Q-learning-based content caching scheme built on user preference prediction and content popularity prediction. Given the constraint that the storage capacity of each device is limited, the optimization problem is formulated to maximize the cache hit rate. Specifically, by taking users' selfishness into consideration, user preference is predicted in an offline manner by applying a topic model. The content popularity is then predicted online by combining the network topology with the obtained user preference. Finally, with the predicted user preference and content popularity, a deep Q-learning network (DQN)-based content caching algorithm is proposed to obtain the optimal content caching strategy. Moreover, we further present a content update policy driven by the user preference and content popularity predictions, so that the proposed algorithm can handle variations in content popularity in a timely manner. Simulation results demonstrate that the proposed scheme achieves a better cache hit rate than existing algorithms.
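To make the DQN-based caching idea concrete, the sketch below shows a minimal Q-learning loop for a single fog access point: the state combines the predicted popularity of the requested content with the popularities of the currently cached items, and the action picks a cache slot to evict (or declines to cache). This is an illustrative sketch only; the cache size, library size, state layout, reward (approximating the hit-rate objective), and the omission of a replay buffer and target network are assumptions for readability, not the authors' implementation.

```python
# Illustrative sketch: DQN-style cache replacement at one fog access point.
# All names and hyperparameters are assumptions, not the paper's settings.
import random
import numpy as np
import torch
import torch.nn as nn

CACHE_SIZE = 5               # assumed storage capacity of the device
N_CONTENTS = 50              # assumed content library size
N_ACTIONS = CACHE_SIZE + 1   # evict slot 0..CACHE_SIZE-1, or do not cache


class QNet(nn.Module):
    """Maps a state (popularity of the request + popularities of cached items)
    to Q-values over the eviction actions."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def simulate(steps=200, gamma=0.9, eps=0.1, lr=1e-3):
    qnet = QNet(CACHE_SIZE + 1, N_ACTIONS)
    opt = torch.optim.Adam(qnet.parameters(), lr=lr)
    # Stand-in for the predicted content popularity distribution.
    popularity = np.random.dirichlet(np.ones(N_CONTENTS))
    cache = list(np.random.choice(N_CONTENTS, CACHE_SIZE, replace=False))
    hits = 0
    for _ in range(steps):
        req = np.random.choice(N_CONTENTS, p=popularity)  # incoming user request
        state = torch.tensor([popularity[req]] + [popularity[c] for c in cache],
                             dtype=torch.float32)
        if req in cache:
            hits += 1
            continue  # cache hit: no replacement decision in this sketch
        # Epsilon-greedy choice of which slot to evict, or to skip caching.
        if random.random() < eps:
            action = random.randrange(N_ACTIONS)
        else:
            action = int(qnet(state).argmax())
        if action < CACHE_SIZE:
            cache[action] = req
        # Reward proxies the hit-rate objective: total popularity held in cache.
        reward = float(sum(popularity[c] for c in cache))
        next_state = torch.tensor([popularity[req]] + [popularity[c] for c in cache],
                                  dtype=torch.float32)
        target = reward + gamma * qnet(next_state).max().detach()
        loss = (qnet(state)[action] - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    return hits / steps


if __name__ == "__main__":
    print(f"cache hit rate: {simulate():.3f}")
```

In the paper's full scheme the popularity vector would come from the online prediction step (topic-model user preferences combined with network topology) and would be refreshed by the content update policy; here it is fixed random data purely to keep the sketch self-contained.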

Citation (APA)

Jiang, F., Yuan, Z., Sun, C., & Wang, J. (2019). Deep Q-Learning-Based Content Caching with Update Strategy for Fog Radio Access Networks. IEEE Access, 7, 97505–97514. https://doi.org/10.1109/ACCESS.2019.2927836
