Collaborative Caching in Edge Computing via Federated Learning and Deep Reinforcement Learning

Abstract

By deploying resources in the vicinity of users, edge caching can substantially reduce the latency with which users retrieve content and relieve pressure on the backbone network. Because cache capacity is limited and user requests are dynamic, caching resources must be allocated carefully. Some edge caching studies improve network performance by predicting content popularity and proactively caching the most popular content, but they overlook the privacy and security issues raised by collecting user information at a central unit. To address this, a collaborative caching strategy based on federated learning is proposed. First, federated learning is used to predict the preferences of users at each edge node in a distributed manner, yielding an effective content caching policy. Then, the problem of allocating caching resources to minimize the cost of video providers is formulated as a Markov decision process, and a reinforcement learning method is used to optimize the caching decisions. Simulation results comparing the proposed strategy with several baseline caching strategies in terms of cache hit rate, transmission delay, and cost show that it reduces the cost of video providers while achieving a higher cache hit rate and a lower average transmission delay.
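To make the described pipeline concrete, the sketch below is a minimal, illustrative approximation rather than the authors' implementation: per-node preference models are combined by federated averaging so that no raw user requests leave the nodes, and a simple epsilon-greedy, bandit-style update stands in for the paper's MDP-based deep reinforcement learning when selecting which contents to cache. All names (local_preference_update, the candidate cache sets, the reward proxy) are hypothetical.

# Minimal sketch (assumptions noted above), Python with NumPy.
import numpy as np

rng = np.random.default_rng(0)
NUM_NODES, NUM_CONTENTS, CACHE_SIZE = 4, 20, 5

def local_preference_update(global_w, requests, lr=0.1):
    # One local training step: move the shared model toward this node's
    # empirical request frequencies (raw requests stay on the node).
    freq = np.bincount(requests, minlength=NUM_CONTENTS) / len(requests)
    return global_w + lr * (freq - global_w)

# Federated learning: each edge node updates locally, the server averages.
global_w = np.full(NUM_CONTENTS, 1.0 / NUM_CONTENTS)
for _ in range(10):  # communication rounds
    local_models = []
    for _node in range(NUM_NODES):
        requests = rng.choice(NUM_CONTENTS, size=100,
                              p=rng.dirichlet(np.ones(NUM_CONTENTS)))
        local_models.append(local_preference_update(global_w, requests))
    global_w = np.mean(local_models, axis=0)  # FedAvg-style aggregation

# Reinforcement-learning stand-in: epsilon-greedy choice among candidate
# cache sets, rewarded by the predicted hit probability of the cached items
# (a proxy for lower transmission delay and provider cost).
candidates = [tuple(rng.choice(NUM_CONTENTS, CACHE_SIZE, replace=False))
              for _ in range(8)]
q = np.zeros(len(candidates))
alpha, eps = 0.2, 0.1
for _step in range(500):
    a = rng.integers(len(candidates)) if rng.random() < eps else int(np.argmax(q))
    reward = global_w[list(candidates[a])].sum()
    q[a] += alpha * (reward - q[a])

print("Cached contents:", sorted(candidates[int(np.argmax(q))]))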

Citation (APA)

Wang, Y., & Chen, J. (2022). Collaborative Caching in Edge Computing via Federated Learning and Deep Reinforcement Learning. Wireless Communications and Mobile Computing, 2022. https://doi.org/10.1155/2022/7212984
