A Large-scale Analysis of Hundreds of In-memory Key-value Cache Clusters at Twitter


Abstract

Modern web services use in-memory caching extensively to increase throughput and reduce latency. There have been several workload analyses of production systems that have fueled research in improving the effectiveness of in-memory caching systems. However, the coverage is still sparse considering the wide spectrum of industrial cache use cases. In this work, we significantly further the understanding of real-world cache workloads by collecting production traces from 153 in-memory cache clusters at Twitter, sifting through over 80 TB of data, and sometimes interpreting the workloads in the context of the business logic behind them. We perform a comprehensive analysis to characterize cache workloads based on traffic pattern, time-to-live (TTL), popularity distribution, and size distribution. A fine-grained view of different workloads uncovers the diversity of use cases: many are far more write-heavy or more skewed than previously shown, and some display unique temporal patterns. We also observe that TTL is an important and sometimes defining parameter of cache working sets. Our simulations show that the ideal replacement strategy in production caches can be surprising; for example, FIFO works best for a large number of workloads.
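Two of the abstract's findings, that FIFO eviction can be the best replacement strategy and that TTL often bounds the working set, can be illustrated with a toy in-memory key-value cache. This is a minimal sketch of the general techniques only; the class and parameter names are invented here and do not reflect Twitter's production implementation.

```python
from collections import OrderedDict
import time


class FIFOTTLCache:
    """Toy key-value cache with FIFO eviction and per-object TTL expiration.

    Illustrative only: names and defaults are assumptions, not a
    production design. The clock is injectable so tests can control time.
    """

    def __init__(self, capacity, default_ttl=60.0, clock=time.monotonic):
        self.capacity = capacity
        self.default_ttl = default_ttl
        self.clock = clock
        self.store = OrderedDict()  # insertion order doubles as FIFO order

    def set(self, key, value, ttl=None):
        expiry = self.clock() + (ttl if ttl is not None else self.default_ttl)
        if key in self.store:
            # Overwrites keep the original queue position: unlike LRU,
            # FIFO never promotes an object on access or update.
            self.store[key] = (value, expiry)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the oldest insertion
        self.store[key] = (value, expiry)

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expiry = item
        if self.clock() >= expiry:
            # TTL expired: remove the object and report a miss. TTL, not
            # eviction, is what caps the working set for short-lived data.
            del self.store[key]
            return None
        return value
```

The key contrast with LRU is in `set`/`get`: nothing is ever moved to the back of the queue on access, so eviction order depends only on insertion time, which is what makes FIFO cheap and, per the paper's simulations, often sufficient.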

Citation (APA)
Yang, J., Yue, Y., & Rashmi, K. V. (2021). A Large-scale Analysis of Hundreds of In-memory Key-value Cache Clusters at Twitter. In ACM Transactions on Storage (Vol. 17). Association for Computing Machinery. https://doi.org/10.1145/3468521
