Reducing latency and network load using location-aware memcache architectures

Abstract

This work explores how data locality in a web datacenter can impact the performance of the Memcache caching system. Memcache is a distributed key/value datastore used to cache frequently accessed data such as database results, HTML page snippets, or any text string. Any client can store, manipulate, or retrieve data quickly by locating it in the Memcache system using a hashing strategy based on the key. To speed up Memcache, we explore alternate storage strategies in which data is stored closer to the writer. Two novel Memcache architectures are proposed, based on multi-CPU caching strategies. A model is developed to predict Memcache performance given a web application's usage profile, network variables, and a Memcache architecture. Five architecture variants are analyzed and further evaluated in a miniature web farm running the MediaWiki open-source web application. Our results confirmed the model's predictions, and we observed a 66% reduction in core network traffic and a 23% reduction in Memcache response time under certain network conditions. © 2013 Springer-Verlag.
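To make the key-based lookup concrete, the sketch below shows how a Memcache client typically hashes a key to pick one server from a pool, plus a hypothetical location-aware variant that prefers a rack-local pool so data lands nearer the writer, in the spirit of the architectures the abstract describes. The function names, server addresses, and the local-pool fallback rule are illustrative assumptions, not the paper's actual design.

```python
import hashlib
from typing import List


def pick_server(key: str, servers: List[str]) -> str:
    """Standard client-side placement: hash the key and map it onto one
    server in the pool, so every client resolves the same key to the
    same node."""
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return servers[digest % len(servers)]


def pick_server_location_aware(key: str,
                               local_servers: List[str],
                               all_servers: List[str],
                               prefer_local: bool = True) -> str:
    """Hypothetical location-aware variant: when permitted, hash only
    over the rack-local pool so reads and writes stay close to the
    writer; otherwise fall back to the cluster-wide pool."""
    pool = local_servers if (prefer_local and local_servers) else all_servers
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return pool[digest % len(pool)]


if __name__ == "__main__":
    cluster = ["10.0.1.11:11211", "10.0.2.12:11211", "10.0.3.13:11211"]
    rack_local = ["10.0.1.11:11211"]  # illustrative rack-local subset

    print(pick_server("wiki:page:Main_Page", cluster))
    print(pick_server_location_aware("wiki:page:Main_Page", rack_local, cluster))
```

The trade-off the paper's model quantifies follows directly from this choice of pool: hashing over the local pool avoids core-network hops for the writer, while hashing over the full cluster preserves a single cluster-wide location per key.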

Citation (APA)

Talaga, P. G., & Chapin, S. J. (2013). Reducing latency and network load using location-aware memcache architectures. In Lecture Notes in Business Information Processing (Vol. 140, pp. 53–69). Springer-Verlag. https://doi.org/10.1007/978-3-642-36608-6_4
