Consistent hashing and random trees: distributed caching protocols for relieving hot spots on the World Wide Web

  • Karger D
  • Leighton T
  • Lewin D
 et al.

  • 331 Mendeley users who have this article in their library.
  • 995 citations of this article.


We describe a family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP/IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and/or quorum systems.
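The abstract itself contains no code; as a rough illustration of the "changes minimally as the range changes" property (a common hash-ring sketch, not the authors' exact construction — the node names, key names, and use of MD5 below are all assumptions for the example), each cache node is hashed to a point on a circle, each key is assigned to the next node point clockwise, and adding a node therefore remaps only the keys that fall on the new node's arc:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Map a string to a point on the ring [0, 2**32). MD5 is an
    # arbitrary choice here; any well-mixed hash would do.
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

class ConsistentHashRing:
    """Minimal consistent-hash ring: a key is served by the node whose
    point is the first one at or after the key's point, wrapping around."""

    def __init__(self, nodes=()):
        self._points = []   # sorted node points on the ring
        self._node_at = {}  # point -> node name
        for n in nodes:
            self.add(n)

    def add(self, node: str) -> None:
        p = _hash(node)
        bisect.insort(self._points, p)
        self._node_at[p] = node

    def remove(self, node: str) -> None:
        p = _hash(node)
        self._points.remove(p)
        del self._node_at[p]

    def lookup(self, key: str) -> str:
        i = bisect.bisect(self._points, _hash(key)) % len(self._points)
        return self._node_at[self._points[i]]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
keys = [f"url-{i}" for i in range(1000)]
before = {k: ring.lookup(k) for k in keys}

ring.add("cache-d")  # the range of the hash function changes
moved = sum(before[k] != ring.lookup(k) for k in keys)
# Only keys landing on the new node's arc change owner; with an ordinary
# (non-consistent) hash like hash(key) % n, nearly all keys would move.
```

Every key that changes owner moves onto the new node, and the expected fraction moved is roughly 1/(number of nodes); the paper's actual construction adds refinements such as multiple points per node to tighten the balance.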

Author-supplied keywords

  • data-partitioning
  • data-placement
  • dht
  • distributed-systems


Authors

  • David Karger

  • Eric Lehman

  • Tom Leighton

  • Rina Panigrahy

  • Daniel Lewin
