Toward Edge-Assisted Video Content Intelligent Caching with Long Short-Term Memory Learning


This article is free to access.

Abstract

Nowadays, video content contributes the majority of Internet traffic, which poses a great challenge to the network infrastructure. Fortunately, the emergence of edge computing has provided a promising way to reduce the video load on the network by caching content closer to users. However, given the limited cache space under the existing edge-assisted network architecture, the cache replacement algorithm is essential for cache efficiency. To investigate the challenges and opportunities involved, we first measure the performance of five state-of-the-art caching algorithms on three real-world datasets. Our observations show that state-of-the-art cache replacement algorithms suffer from the following weaknesses: 1) rule-based replacement approaches (e.g., LFU, LRU) cannot adapt to different scenarios; 2) data-driven forecasting approaches work efficiently only on specific scenarios or datasets, as the features extracted for one dataset may not work on another. Motivated by these observations and by edge-assisted computation capacity, we then propose LSTM-C, an edge-assisted intelligent cache replacement framework based on a deep Long Short-Term Memory network, which contains two types of modules: 1) four basic modules manage the coordination among content requests, content replacement, cache space, and service management; 2) three learning-based modules enable online deep learning to provide an intelligent caching strategy. Supported by this design, LSTM-C learns the patterns of content popularity at long and short time scales and determines the cache replacement policy. Most importantly, LSTM-C represents the request pattern with built-in memory cells, and thus requires no data pre-processing, pre-programmed model, or additional information. Our experimental results show that LSTM-C outperforms state-of-the-art methods in cache hit rate on three real-world traces of video requests. When the cache size is limited, LSTM-C outperforms the baselines by 20% to 32% in cache hit rate.
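The rule-based baselines the paper measures against can be sketched minimally. Below is a standard LRU cache in Python (not the paper's code, just an illustration of the class of policy that, as the abstract notes, cannot adapt its eviction rule to different request patterns):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on insertion into a full cache,
    evict the least recently used item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[key] = value
```

The fixed recency rule is the point: whatever the workload, the same item is evicted, which is exactly the inflexibility a learned policy aims to remove.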
We also show that the training and prediction times of one iteration are 8.6 ms and 300 μs on average, respectively, which is fast enough for online operation.
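The replacement idea described in the abstract can be sketched as follows. This is not the paper's implementation: the `predictor` callable stands in for the LSTM that learns content popularity, and the per-content request history is a hypothetical stand-in for the framework's built-in memory cells. On a miss with a full cache, the item with the lowest predicted popularity is evicted:

```python
class LearnedCache:
    """Sketch of popularity-driven cache replacement: on a miss with a
    full cache, evict the cached item whose predicted popularity is
    lowest. `predictor` maps a content's recent request counts to a
    popularity score; in the paper this role is played by an LSTM."""

    def __init__(self, capacity, predictor):
        self.capacity = capacity
        self.predictor = predictor
        self.cache = {}    # content id -> payload
        self.history = {}  # content id -> list of observed requests

    def request(self, cid, payload):
        self.history.setdefault(cid, []).append(1)
        if cid in self.cache:
            return True    # cache hit
        if len(self.cache) >= self.capacity:
            # Evict the cached content with the lowest predicted popularity.
            victim = min(self.cache,
                         key=lambda c: self.predictor(self.history.get(c, [])))
            del self.cache[victim]
        self.cache[cid] = payload
        return False       # cache miss

# A trivial stand-in predictor: popularity = recent request count.
hot_score = lambda counts: sum(counts[-10:])
```

Swapping `hot_score` for a trained sequence model is what turns this skeleton from an LFU-like heuristic into a learned policy; the eviction logic itself is unchanged.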

Citation (APA)

Zhang, C., Pang, H., Liu, J., Tang, S., Zhang, R., Wang, D., & Sun, L. (2019). Toward Edge-Assisted Video Content Intelligent Caching with Long Short-Term Memory Learning. IEEE Access, 7, 152832–152846. https://doi.org/10.1109/ACCESS.2019.2947067
