Flow analysis is a ubiquitous and much-studied component of compiler technology, and its variations abound. Among the best known is Shivers' 0CFA; however, the best known algorithm for 0CFA requires time cubic in the size of the analyzed program and is unlikely to be improved. Consequently, several analyses have been designed to approximate 0CFA by trading precision for faster computation. Henglein's simple closure analysis, for example, forfeits the notion of directionality in flows and enjoys an "almost linear" time algorithm. But in making trade-offs between precision and complexity, what has been given up and what has been gained? Where do these analyses differ, and where do they coincide? We identify a core language, the linear λ-calculus, where 0CFA, simple closure analysis, and many other known approximations or restrictions to 0CFA are rendered identical. Moreover, for this core language, analysis corresponds with (instrumented) evaluation. Because analysis faithfully captures evaluation, and because the linear λ-calculus is complete for PTIME, we derive PTIME-completeness results for all of these analyses. © 2008 Springer-Verlag.
CITATION STYLE
Van Horn, D., & Mairson, H. G. (2008). Flow analysis, linearity, and PTIME. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5079 LNCS, pp. 255–269). https://doi.org/10.1007/978-3-540-69166-2_17