Today’s computing systems still mostly consist of homogeneous multi-core processing systems with statically allocated computing resources. Looking into the future, these computing systems will evolve into heterogeneous processing systems with more diverse processing units and new requirements. With multiple applications running concurrently on these many-core platforms, the applications compete for computational resources and thus processing power. However, not all applications are able to make efficient use of all available resources at all times, which leads to the challenge of efficiently allocating tasks to computational resources at run-time. This issue is especially crucial for cache resources, where bandwidth and the available capacity strongly bound computation times. For example, streaming-based algorithms running concurrently with block-based computations lead to an inefficient allocation of cache resources. In this paper, we propose a dynamic cache architecture that enables the parameterization and reallocation of cache memory resources between cores at run-time. The reallocation incurs only little overhead, so that each algorithm class can be executed more efficiently on the many-core platform. We contribute a cache architecture that is, for the first time, prototyped on programmable hardware to demonstrate the feasibility of the proposed approach. Finally, we evaluate the overhead introduced by the increased flexibility of the hardware architecture.
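To make the core idea concrete, the following is a minimal software sketch (not from the paper, whose contribution is a hardware prototype) of run-time repartitioning of shared cache ways between cores. The class name, method names, and the way-based partitioning granularity are illustrative assumptions; the sketch only models the bookkeeping that a reconfigurable cache controller would perform in hardware.

```python
# Illustrative model (assumption, not the paper's implementation): a shared
# pool of cache "ways" is partitioned among cores and repartitioned at
# run-time, e.g. shrinking the share of a streaming core that barely
# benefits from cache and growing the share of a block-based core.

class DynamicCachePartitioner:
    """Tracks how many cache ways each core owns and reallocates them."""

    def __init__(self, total_ways, num_cores):
        self.total_ways = total_ways
        # Baseline: a static, even split -- what the paper improves upon.
        base = total_ways // num_cores
        self.allocation = {core: base for core in range(num_cores)}
        # Hand out any remainder to the lowest-numbered cores.
        for core in range(total_ways - base * num_cores):
            self.allocation[core] += 1

    def reallocate(self, donor, receiver, ways):
        """Move `ways` cache ways from core `donor` to core `receiver`.

        In hardware this would correspond to updating the way masks of the
        affected cache controllers; here we only update the bookkeeping.
        """
        if self.allocation[donor] < ways:
            raise ValueError("donor core does not own enough ways")
        self.allocation[donor] -= ways
        self.allocation[receiver] += ways


# Example: core 0 runs a streaming algorithm (little cache reuse), core 1 a
# block-based computation (high cache reuse), so ways migrate from 0 to 1.
p = DynamicCachePartitioner(total_ways=8, num_cores=2)  # 4 ways each
p.reallocate(donor=0, receiver=1, ways=3)               # now 1 vs. 7 ways
print(p.allocation)
```

Note that the invariant maintained here (the total number of ways is conserved across reallocations) is exactly what keeps such a scheme low-overhead: no data is copied, only ownership metadata changes.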
Citation:
Tradowsky, C., Cordero, E., Orsinger, C., Vesper, M., & Becker, J. (2016). A dynamic cache architecture for efficient memory resource allocation in many-core systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9625, pp. 343–351). Springer Verlag. https://doi.org/10.1007/978-3-319-30481-6_29