Dual-access way-prediction cache for embedded systems

Abstract

Way-prediction (WP) caches reduce power consumption and latency for highly associative data caches and are thus well suited to embedded systems. In this paper, we propose an enhanced way-prediction cache, the dual-access way-prediction (DAWP) cache, to address the weaknesses of the WP cache. The prediction logic designed for the DAWP cache contains a scaled index table, a global history register, and a fully associative cache to achieve higher prediction accuracy, which in turn yields lower energy consumption and latency. Performance is measured with a simulation model implemented with SimpleScalar and CACTI, using nine SPEC2000 benchmark programs. Our experimental results show that the proposed DAWP cache is highly efficient in power and latency for highly associative cache structures. The efficiency increases with associativity, and results with a 64 KB cache show that the DAWP cache achieves 16.45% ∼ 75.85% power gain and 4.91% ∼ 26.96% latency gain for 2-way ∼ 32-way structures, respectively. It is also observed that the random replacement policy yields better efficiency in power and latency than the LRU (least recently used) policy with the DAWP cache. © 2014 Chu and Park.
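The abstract only names the predictor's components (a scaled index table, a global history register, and a small fully associative cache); the C fragment below is a minimal sketch of how a way predictor of that general shape could be queried and trained. All structure names (pred_table, ghr, assoc_buf), sizes, and the XOR-based index scaling are illustrative assumptions, not the authors' implementation.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_WAYS      8            /* illustrative associativity */
#define PRED_ENTRIES  256          /* scaled index table size (assumption) */
#define ASSOC_ENTRIES 16           /* small fully associative prediction buffer */

/* Hypothetical predictor state: a scaled index table of predicted ways,
 * a global history register, and a small fully associative buffer keyed
 * by block address. None of these names come from the paper. */
static uint8_t  pred_table[PRED_ENTRIES];
static uint32_t ghr;               /* global history register */
static struct { uint32_t tag; uint8_t way; bool valid; } assoc_buf[ASSOC_ENTRIES];

/* Predict the way for an access: consult the associative buffer first,
 * otherwise fall back to the history-hashed ("scaled") index table. */
static uint8_t predict_way(uint32_t block_addr)
{
    for (int i = 0; i < ASSOC_ENTRIES; i++)
        if (assoc_buf[i].valid && assoc_buf[i].tag == block_addr)
            return assoc_buf[i].way;

    uint32_t idx = (block_addr ^ ghr) % PRED_ENTRIES;   /* scaled index (assumption) */
    return pred_table[idx] % NUM_WAYS;
}

/* After the access resolves, record the way that actually held (or now
 * holds) the block, and shift the outcome into the history register. */
static void train_predictor(uint32_t block_addr, uint8_t actual_way, bool hit)
{
    uint32_t idx = (block_addr ^ ghr) % PRED_ENTRIES;
    pred_table[idx] = actual_way;

    int slot = block_addr % ASSOC_ENTRIES;              /* trivial replacement policy */
    assoc_buf[slot].tag   = block_addr;
    assoc_buf[slot].way   = actual_way;
    assoc_buf[slot].valid = true;

    ghr = (ghr << 1) | (hit ? 1u : 0u);
}
```

In a predictor of this style, a correct prediction lets the cache probe a single way (saving energy and a cycle), while a misprediction falls back to a full parallel lookup and retrains the tables; the paper's dual-access mechanism and exact structures differ from this sketch.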

Cite

Chu, Y., & Park, J. H. (2014). Dual-access way-prediction cache for embedded systems. EURASIP Journal on Embedded Systems, 2014. https://doi.org/10.1186/1687-3963-2014-16