Abstract
A new cooperative caching mechanism, PACA, is presented, along with a caching algorithm, LRU-Interleaved, and an aggressive prefetching algorithm, Full-File-On-Open. The caching algorithm is specifically targeted at parallel machines running a microkernel-based operating system, and it avoids the cache-coherence problem with no loss in performance. Comparing our algorithm with another cooperative caching algorithm, N-Chance Forwarding, in this environment, LRU-Interleaved obtains better results. We also evaluate an aggressive prefetching algorithm that significantly increases read performance by taking advantage of the very large caches that cooperative caching offers.
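The abstract only names the algorithms, so the following C sketch is purely an illustration of the general idea suggested by the name LRU-Interleaved, not the algorithm as defined in the paper. It assumes each file block is mapped to exactly one node's cache by interleaving on the block number, so node caches never hold conflicting copies and no coherence protocol is needed; each node is then assumed to manage its own cache with ordinary LRU replacement.

    /* Hypothetical sketch only: interleaved block placement with per-node LRU. */
    #include <stddef.h>

    #define NUM_NODES 16   /* assumed number of caching nodes in the machine */

    /* Each block of a file is cached on exactly one node, chosen by
       interleaving on the block number.  Because a block has a single home,
       no two node caches can hold inconsistent copies, which is one way to
       sidestep cache coherence; eviction on each node is plain LRU. */
    static size_t block_home_node(size_t file_id, size_t block_no)
    {
        return (file_id + block_no) % NUM_NODES;
    }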
Citation
Cortes, T., Girona, S., & Labarta, J. (1996). PACA: A cooperative file system cache for parallel machines. In Lecture Notes in Computer Science (Vol. 1123, pp. 477–486). Springer-Verlag. https://doi.org/10.1007/3-540-61626-8_65