Loop termination prediction

Abstract

Deeply pipelined, high-performance processors require highly accurate branch prediction to drive instruction fetch. However, there remains a class of events that are not easily predictable by standard two-level predictors. One such event is loop termination. In deeply nested loops, loop terminations can account for a significant fraction of the mispredictions. We propose two techniques for dealing with loop terminations. The first is a simple hardware extension to existing prediction architectures, called Loop Termination Prediction, which captures the long, regular repeating patterns of loops. The second is a software technique called Branch Splitting, which breaks loops whose iteration counts exceed the reach of current predictors into smaller loops that can be captured effectively. Our results show that, for many programs, adding a small loop termination buffer can reduce the misprediction rate by up to 2 percentage points.
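
As a rough illustration of the two ideas (not the authors' exact designs), the C sketches below use hypothetical parameters throughout. The first shows the spirit of Branch Splitting: a loop whose trip count exceeds the history length a two-level predictor can track is rewritten as two nested loops, so the inner branch follows a short, repeatable taken/not-taken pattern.

    /* Branch Splitting sketch: N and SPLIT are assumed values.            */
    #define N     1000   /* original trip count, too long to capture      */
    #define SPLIT 16     /* inner trip count within predictor reach       */

    void scale_original(double *a)
    {
        for (int i = 0; i < N; i++)           /* one 1000-iteration pattern */
            a[i] *= 2.0;
    }

    void scale_split(double *a)
    {
        int i = 0;
        for (int j = 0; j < N / SPLIT; j++)       /* outer loop branch      */
            for (int k = 0; k < SPLIT; k++, i++)  /* short, learnable inner */
                a[i] *= 2.0;
        for (; i < N; i++)                        /* remainder iterations   */
            a[i] *= 2.0;
    }

The second sketches the spirit of a loop termination buffer: a small table, indexed by branch PC, that records how many times a loop branch executed before it last fell through, and predicts the fall-through when the current count reaches that value again. Table size, indexing, and update policy here are assumptions, not the paper's design.

    #include <stdint.h>
    #include <stdbool.h>

    #define LTB_ENTRIES 64                /* assumed table size */

    typedef struct {
        uint64_t tag;          /* branch PC                           */
        uint32_t trip_count;   /* executions seen in last loop visit  */
        uint32_t iter;         /* executions seen so far this visit   */
        bool     valid;
    } ltb_entry;

    static ltb_entry ltb[LTB_ENTRIES];

    static ltb_entry *ltb_lookup(uint64_t pc) { return &ltb[pc % LTB_ENTRIES]; }

    /* Predict the loop branch: taken (stay in loop) until the recorded
     * trip count says this execution should be the terminating one.    */
    bool ltb_predict(uint64_t pc)
    {
        ltb_entry *e = ltb_lookup(pc);
        if (!e->valid || e->tag != pc || e->trip_count == 0)
            return true;                      /* no history: predict taken */
        return e->iter + 1 < e->trip_count;   /* not-taken on final trip   */
    }

    /* Train with the resolved outcome after the branch executes. */
    void ltb_update(uint64_t pc, bool taken)
    {
        ltb_entry *e = ltb_lookup(pc);
        if (!e->valid || e->tag != pc) {
            e->tag = pc; e->valid = true; e->trip_count = 0; e->iter = 0;
        }
        e->iter++;
        if (!taken) {                 /* loop terminated                */
            e->trip_count = e->iter;  /* remember how long it ran       */
            e->iter = 0;              /* reset for the next loop visit  */
        }
    }

In this sketch, a loop with a stable trip count is mispredicted only on its first complete execution; a conventional two-level predictor would instead need a history register longer than the trip count to achieve the same effect.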

Citation (APA)

Sherwood, T., & Calder, B. (2000). Loop termination prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1940, pp. 73–87). Springer Verlag. https://doi.org/10.1007/3-540-39999-2_8
