Compared to automatic parallelization, which has been studied thoroughly in the literature [31, 33], classical analyses and optimizations of explicitly parallel programs have been more or less neglected. This may be due to the fact that naive adaptations of the sequential techniques fail [24], while straightforward correct adaptations incur unacceptable costs caused by the interleavings that manifest the possible executions of a parallel program. Recently, however, we showed that unidirectional bitvector analyses can be performed for parallel programs as easily and as efficiently as for sequential ones [17], a necessary condition for successfully transferring the classical optimizations to the parallel setting. In this article we focus on the subsequent code motion transformations, which turn out to require much more care than originally conjectured [17]. Essentially, this is because interleaving semantics, although adequate for correctness considerations, fails when it comes to reasoning about the efficiency of parallel programs. This deficiency, however, can be overcome by strengthening the specific treatment of synchronization points. © 1999 ACM.
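To make the failure mode concrete, the following is a minimal illustrative sketch, not taken from the paper: the paper works on an abstract explicitly parallel program model with parallel statements, whereas POSIX threads stand in for that model here. A code motion that is sound for the sequential fragment alone, hoisting a + b out of the reader thread, is invalidated by an interleaving in which the sibling thread writes b; only after the synchronization point (the join) does the expression become stable again.

#include <pthread.h>
#include <stdio.h>

static int a = 1, b = 2, x, y;

static void *writer(void *arg)   /* sibling thread: interferes with b        */
{
    (void)arg;
    b = 10;                      /* (didactic race; a real program would use
                                     atomics or a lock)                       */
    return NULL;
}

static void *reader(void *arg)   /* original placement of the computation    */
{
    (void)arg;
    x = a + b;                   /* may observe b == 2 or b == 10            */
    return NULL;
}

int main(void)
{
    /* Sequentially sound code motion would hoist "a + b" to this point and
       reuse the result for x.  In the parallel program that is wrong: it
       silently removes the interleaving in which writer() runs first --
       the sense in which naive adaptations of sequential techniques fail.  */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, reader, NULL);
    pthread_create(&t2, NULL, writer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    y = a + b;                   /* after the join, a synchronization point,
                                     the expression is stable again           */
    printf("x=%d y=%d\n", x, y);
    return 0;
}

The sketch only illustrates why sequential reasoning breaks down; it says nothing about the paper's actual analysis, which concerns how to place such computations both correctly and profitably around synchronization points.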
CITATION STYLE
Knoop, J., & Steffen, B. (1999). Code motion for explicitly parallel programs. SIGPLAN Notices (ACM Special Interest Group on Programming Languages), 34(8), 13–24. https://doi.org/10.1145/329366.301106