Prefetching using Markov predictors

  • Doug Joseph
  • Dirk Grunwald

Prefetching is one approach to reducing the latency of memory
operations in modern computer systems. In this paper, we describe the
Markov prefetcher. This prefetcher acts as an interface between the
on-chip and off-chip cache and can be added to existing computer
designs. The Markov prefetcher is distinguished by prefetching multiple
reference predictions from the memory subsystem, and then prioritizing
the delivery of those references to the processor. This design results
in a prefetching system that provides good coverage, is accurate, and
produces timely results that can be effectively used by the processor.
We also explore a range of techniques that can be used to reduce the
bandwidth demands of prefetching, leading to improved memory system
performance. In our cycle-level simulations, the Markov prefetcher
reduces the overall execution stalls due to instruction and data memory
operations by an average of 54 percent for various commercial benchmarks
while using only two-thirds the memory of a demand-fetch cache.
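The core mechanism the abstract describes — learning which miss addresses tend to follow which, then issuing several predictions in priority order — can be sketched as follows. This is an illustrative first-order model, not the paper's exact hardware design; the class name, table structure, and `num_predictions` parameter are assumptions for the sketch.

```python
from collections import defaultdict, Counter

class MarkovPrefetcher:
    """Sketch of a first-order Markov prefetcher: each observed miss
    address keeps frequency counts of the addresses that followed it,
    and on a miss the most frequent successors are issued as prefetch
    candidates, highest-probability first."""

    def __init__(self, num_predictions=4):
        self.num_predictions = num_predictions       # predictions issued per miss
        self.transitions = defaultdict(Counter)      # addr -> counts of next miss addrs
        self.prev_miss = None

    def on_miss(self, addr):
        # Learn the transition from the previous miss to this one.
        if self.prev_miss is not None:
            self.transitions[self.prev_miss][addr] += 1
        self.prev_miss = addr
        # Predict: the most frequent successors of this address,
        # ordered so the highest-confidence prefetch is delivered first.
        successors = self.transitions[addr].most_common(self.num_predictions)
        return [next_addr for next_addr, _count in successors]
```

On a repeating miss stream such as `1, 2, 3, 1, 2, 3, ...`, the table quickly learns that address 2 follows address 1, so a later miss on 1 yields 2 as the top prefetch candidate. A hardware realization would bound the table size and width, which is where the bandwidth-reduction techniques discussed above come in.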

Author-supplied keywords

  • Cache
  • Memory
  • Prefetching
