Scalable shared-cache management by containing thrashing workloads

Abstract

Multi-core processors with shared last-level caches are vulnerable to performance inefficiencies and fairness issues when the cache is not carefully managed among the cores. Cache partitioning is an effective method for isolating poorly-interacting threads from each other, but a mechanism with simple logic and low area overhead is essential for incorporating such schemes into future embedded multi-core processors. In this work, we identify that major performance problems arise only when one or more "thrashing" applications exist. We propose a simple yet effective Thrasher Caging (TC) cache management scheme that specifically targets these thrashing applications. © 2010 Springer-Verlag.
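
The abstract does not spell out how thrashers are detected or contained, but the title suggests confining ("caging") them to a restricted portion of the shared cache. Below is a minimal sketch of one plausible realization, assuming a way-partitioned last-level cache, per-core miss counters, and an illustrative miss-count threshold; the constants and function names (NUM_WAYS, CAGE_WAYS, MISS_THRESHOLD, on_llc_miss, recompute_partition) are hypothetical and not taken from the paper.

```c
/*
 * Minimal sketch of a "thrasher caging" style policy, assuming a
 * way-partitioned shared last-level cache. Detection details,
 * interval length, and way counts are illustrative assumptions.
 */
#include <stdint.h>

#define NUM_CORES       4
#define NUM_WAYS        16      /* associativity of the shared LLC     */
#define CAGE_WAYS       2       /* ways reserved for caged thrashers   */
#define MISS_THRESHOLD  4096    /* misses per interval => "thrashing"  */

/* Per-core LLC miss counters, reset at the end of each interval. */
static uint64_t miss_count[NUM_CORES];

/* Bit i of way_mask[c] set => core c may allocate into way i. */
static uint16_t way_mask[NUM_CORES];

/* Called on every LLC miss attributed to a core. */
void on_llc_miss(int core) {
    miss_count[core]++;
}

/* Called once per interval: cage cores that missed too often. */
void recompute_partition(void) {
    const uint16_t cage_mask = (1u << CAGE_WAYS) - 1;            /* low ways */
    const uint16_t open_mask = ((1u << NUM_WAYS) - 1) & ~cage_mask;

    for (int c = 0; c < NUM_CORES; c++) {
        int thrashing = miss_count[c] > MISS_THRESHOLD;
        /* Thrashers share the small cage; everyone else shares the rest. */
        way_mask[c] = thrashing ? cage_mask : open_mask;
        miss_count[c] = 0;      /* start the next observation interval */
    }
}
```

Caging all thrashers into one small shared region, rather than computing a fine-grained per-core partition, keeps the decision logic to a counter comparison and a mask assignment per core, which is consistent with the abstract's emphasis on simple logic and low area overhead.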

Citation (APA)

Xie, Y., & Loh, G. H. (2010). Scalable shared-cache management by containing thrashing workloads. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5952 LNCS, pp. 262–276). https://doi.org/10.1007/978-3-642-11515-8_20
