CacheFlow: A short-term optimal cache management policy for data driven multithreading

Abstract

With Data-Driven Multithreading, a thread is scheduled for execution only after all of its inputs have been produced and placed in the processor's local memory. Scheduling based on data availability can be exploited to implement short-term optimal cache management policies. Such policies include firing a thread for execution only if its code and data are already in the cache, and not replacing cache blocks that belong to threads scheduled for execution in the near future until those threads begin executing. We call this short-term optimal cache management policy the CacheFlow policy. Simulation results for eight scientific applications on a 32-node system with CacheFlow show a significant reduction in the cache miss ratio, which translates into an average speedup improvement of 18% for the basic prefetch CacheFlow policy over the baseline data-driven multithreading policy. This paper also presents two techniques that further improve the performance of CacheFlow: conflict avoidance and thread reordering. These yield average speedup improvements of 26% and 31%, respectively. © Springer-Verlag 2004.
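To make the firing rule concrete, the following Python code is a minimal sketch of the basic prefetch CacheFlow policy, not the paper's actual simulator: the Thread and ToyCache classes, the block granularity, and the single-step tick() are all assumptions made for illustration. It shows the three ingredients described above: prefetching a data-ready thread's code and data blocks, pinning those blocks so they are not replaced before the thread fires, and deferring a thread whose blocks are not yet resident (a crude stand-in for the paper's thread reordering).

```python
# Illustrative sketch of the CacheFlow firing rule (assumptions only;
# Thread, ToyCache, and tick() are hypothetical, not the paper's code).
from collections import deque
from dataclasses import dataclass


@dataclass(frozen=True)
class Thread:
    name: str
    blocks: frozenset  # cache blocks holding this thread's code and data

    def run(self):
        print(f"{self.name} fired")


class ToyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = set()    # blocks currently in the cache
        self.pinned = set()      # blocks reserved for soon-to-fire threads
        self.in_flight = set()   # prefetches issued but not yet complete

    def prefetch(self, blocks):
        # Issue prefetches for missing blocks and pin the whole set so
        # none of it is replaced before the owning thread fires.
        for b in blocks:
            if b not in self.resident:
                self.in_flight.add(b)
            self.pinned.add(b)

    def tick(self):
        # One memory step: completed prefetches become resident,
        # evicting only unpinned blocks when the cache is full.
        for b in list(self.in_flight):
            if len(self.resident) >= self.capacity:
                victims = self.resident - self.pinned
                if not victims:
                    break  # everything is pinned; retry on a later tick
                self.resident.remove(next(iter(victims)))
            self.resident.add(b)
            self.in_flight.remove(b)

    def holds_all(self, blocks):
        return all(b in self.resident for b in blocks)


def fire_threads(ready, cache):
    """Fire data-ready threads only once their blocks are cache-resident.

    Assumes each thread's working set fits in the cache; 'ready' holds
    threads whose inputs have all been produced (the data-driven rule).
    """
    pending = deque(ready)
    while pending:
        t = pending.popleft()
        cache.prefetch(t.blocks)           # basic prefetch policy
        if cache.holds_all(t.blocks):
            cache.pinned -= t.blocks       # release pins as the thread fires
            t.run()
        else:
            pending.append(t)              # defer: try another ready thread
            cache.tick()


fire_threads([Thread("A", frozenset({1, 2})),
              Thread("B", frozenset({3, 4}))],
             ToyCache(capacity=4))
```

In this sketch, pinning is what distinguishes CacheFlow from plain prefetching: a prefetched block cannot be evicted in the window between the prefetch and the thread's firing, which is the short-term optimal property the abstract describes.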

Citation (APA)

Kyriacou, C., Evripidou, P., & Trancoso, P. (2004). CacheFlow: A short-term optimal cache management policy for data driven multithreading. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3149, 561–570. https://doi.org/10.1007/978-3-540-27866-5_73
