Novel programming paradigms enable the concurrent execution and the dynamic run-time rescheduling of several competing applications on large heterogeneous multi-core systems. However, cache memory today is still statically allocated at design time, which leads to a distribution of memory resources optimized only for an average use case. This paper introduces adaptive cache structures to cope with the agility of dynamic run-time systems on future heterogeneous multi-core platforms. Going beyond the state of the art, the cache model is an implemented HDL realization capable of dynamic run-time adaptation of various cache strategies, parameters, and settings. Different design trade-offs are weighed against one another, and a modular implementation is presented. This hardware representation makes it possible to integrate the adaptive cache deeply into an existing processor microarchitecture. The contribution of this paper is the application-specific run-time adaptation of the adaptive cache architecture, which directly represents the available memory resources of the underlying hardware. The evaluation shows very efficient resource utilization as the cache set size is increased or decreased, as well as performance gains in terms of cache miss rate and application run time. The architecture's capabilities in a multi-core use case and its potential for future power savings are also presented in an application scenario.
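To illustrate the idea of run-time cache adaptation described above, the following is a minimal software sketch of a set-associative cache whose number of sets can be changed at run time. This is purely an illustrative model with hypothetical names (`AdaptiveCache`, `resize`); the paper's actual design is an HDL implementation integrated into a processor microarchitecture, not a software simulator.

```python
class AdaptiveCache:
    """Toy LRU set-associative cache model with a run-time resizable set count.

    Illustrative only -- the paper's adaptive cache is realized in HDL;
    this sketch merely shows how changing the set size affects miss rate.
    """

    def __init__(self, num_sets, ways, line_size=64):
        self.ways = ways
        self.line_size = line_size
        self.hits = 0
        self.misses = 0
        self.resize(num_sets)

    def resize(self, num_sets):
        # Re-allocating the set array models the run-time increase or
        # decrease of the cache set size. A real hardware design would
        # additionally have to invalidate, flush, or migrate the lines
        # held in sets that disappear after the reconfiguration.
        self.num_sets = num_sets
        self.sets = [[] for _ in range(num_sets)]  # each set: LRU-ordered tag list

    def access(self, addr):
        line = addr // self.line_size
        idx = line % self.num_sets
        tag = line // self.num_sets
        s = self.sets[idx]
        if tag in s:
            s.remove(tag)
            s.append(tag)  # move to most-recently-used position
            self.hits += 1
            return True
        if len(s) >= self.ways:
            s.pop(0)       # evict the least-recently-used line
        s.append(tag)
        self.misses += 1
        return False

    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0
```

For a working set of 64 cache lines streamed twice, a small 4-set, 2-way configuration thrashes (every access misses), while resizing to 64 sets lets the second pass hit entirely, halving the observed miss rate. This mirrors, in software, the kind of application-specific set-size adaptation the paper evaluates in hardware.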
CITATION STYLE
Tradowsky, C., Cordero, E., Orsinger, C., Vesper, M., & Becker, J. (2016). Adaptive cache structures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9637, pp. 87–99). Springer Verlag. https://doi.org/10.1007/978-3-319-30695-7_7