Interprocedural Control Flow Analysis (Extended version)

1998 ◽ Vol 27 (538)
Author(s): Flemming Nielson ◽ Hanne Riis Nielson

Control Flow Analysis is a widely used approach for analysing functional and object-oriented programs, and recently it has also been used successfully to analyse more challenging notions of computation involving concurrency. However, as the applications become more demanding, the analysis also needs to be more precise in its ability to deal with mutable state (or side-effects) and to perform polyvariant (or context-sensitive) analysis. Several insights in Data Flow Analysis and Abstract Interpretation show how to do so for imperative programs, but these techniques have had little impact on Control Flow Analysis because of the less abstract way in which they are normally expressed. In this paper we show how to incorporate a number of key insights from Data Flow Analysis, involving such advanced interprocedural techniques as call strings and assumption sets, using Abstract Interpretation to induce the analyses from a general collecting semantics.
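
The call-strings idea mentioned above can be made concrete with a small sketch. The following is illustrative only, not the paper's abstract-interpretation formulation: a context is the sequence of the last k call-site labels, and analysis results are cached per (function, context) pair, so that two call sites of the same function are kept apart. All names (`analyse`, `extend`, the labels `c1`/`c2`) are hypothetical.

```python
# Illustrative sketch of call-string context sensitivity; not the
# paper's formulation. Requires Python 3.9+ for builtin generics.

CallString = tuple  # the last k call-site labels

K = 2  # truncation depth for call strings

def extend(cs: CallString, call_site: str, k: int = K) -> CallString:
    """Push a call site onto the call string, keeping only the last k labels."""
    return (cs + (call_site,))[-k:]

# Cache of abstract results, keyed by (function name, call string).
cache: dict[tuple[str, CallString], set] = {}

def analyse(fn: str, cs: CallString) -> set:
    """Abstract value set for fn analysed in context cs (demo transfer logic)."""
    key = (fn, cs)
    if key not in cache:
        cache[key] = set()  # seed with bottom to cut off recursion
        if fn == "id":
            # A context-sensitive analysis keeps the two call sites apart:
            cache[key] = {"int"} if cs and cs[-1] == "c1" else {"str"}
        elif fn == "main":
            # main calls id at two distinct call sites, c1 and c2
            cache[key] = (analyse("id", extend(cs, "c1"))
                          | analyse("id", extend(cs, "c2")))
    return cache[key]

print(analyse("id", extend((), "c1")))  # {'int'}: context c1 kept separate
print(analyse("main", ()))              # {'int', 'str'}: both contexts merge at main
```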

Author(s): Johannes Späth

A precise static data-flow analysis transforms the program into a context-sensitive and field-sensitive approximation. Designing an analysis of this precision efficiently is challenging, because the underlying problem is undecidable. Synchronized pushdown systems (SPDS) provide a highly precise approximation of context-sensitive and field-sensitive data-flow analysis. This chapter presents some data-flow analyses that SPDS can be used for. It also summarizes two other contributions of the thesis "Synchronized Pushdown Systems for Pointer and Data-Flow Analysis": Boomerang and IDEal. Boomerang is a demand-driven pointer analysis that builds on top of SPDS and avoids the high computational effort of a whole-program pointer analysis by restricting the computation to the minimal program slice necessary for an individual query. IDEal is a generic and efficient framework for data-flow analyses, e.g., typestate analysis. IDEal resolves pointer relations automatically and efficiently with the help of Boomerang, which relieves analysis authors of implementing pointer reasoning themselves. Furthermore, IDEal performs strong updates, which makes the analysis sound and precise.
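
To illustrate the demand-driven idea behind Boomerang (without reproducing its actual Java API), here is a toy backwards points-to query over straight-line copy statements: only the slice of assignments that can reach the query is traced, while unrelated allocation sites are never explored. The statement encoding and all names are made up for the sketch.

```python
# Toy illustration of a demand-driven points-to query; not Boomerang's
# real API. Program: a list of (target, source) copy/alloc statements.

program = [
    ("a", "new A@1"),   # allocation site 1
    ("b", "a"),
    ("c", "new A@2"),   # allocation site 2 (never traced by the query below)
    ("d", "b"),
]

def points_to(var: str, upto: int) -> set:
    """Backwards query: allocation sites var may point to before index upto."""
    sites: set = set()
    worklist = {var}
    for target, source in reversed(program[:upto]):
        if target in worklist:
            worklist.discard(target)     # most recent definition kills older ones
            if source.startswith("new "):
                sites.add(source)        # reached an allocation site
            else:
                worklist.add(source)     # keep tracing the copy chain backwards
    return sites

print(points_to("d", len(program)))  # {'new A@1'}; site @2 is irrelevant to the query
```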


2008 ◽ Vol 17 (03) ◽ pp. 259-282
Author(s): Rania Khalaf ◽ Oliver Kopp ◽ Frank Leymann

Continuous process improvement (CPI) may require a BPEL process to be split amongst different participants. In this paper, we enable splitting standard BPEL without requiring any new middleware for the case of flat flows. The solution also supports splitting loops and scopes that have compensation and/or fault handlers. When splitting loops and scopes, we extend existing Web services standards and frameworks in a standards-compliant manner in order to support the resulting split of control (not data) between the fragments. Data dependencies, however, are handled directly using BPEL constructs placed in the fragments, even for split loops and scopes. We present a solution that uses a BPEL process, partition information, and the results of data-flow analysis to produce a BPEL process for each participant. The collective behavior of these participant processes recreates the control and data flow of the non-split process. Previous work presented process splitting using a variant of BPEL in which data flow is modeled explicitly using "data links". We reuse the control flow aspect of that work, as well as of our work on splitting loops and scopes, focusing in this paper on maintaining the data dependencies in standard BPEL.
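
The data-flow step such a splitting procedure relies on can be sketched as follows. This is a hedged illustration, not the paper's algorithm: given each activity's participant assignment, reads, and writes, it derives the cross-participant def-use links for which the split processes must exchange data. Activity, participant, and variable names are invented.

```python
# Illustrative sketch: find def-use links that cross participants after a
# split, i.e. the places where fragments must exchange data. Not the
# paper's algorithm; all names are hypothetical.

activities = [
    # (name, participant, reads, writes) in control-flow order
    ("receiveOrder", "P1", set(),        {"order"}),
    ("checkCredit",  "P2", {"order"},    {"approved"}),
    ("ship",         "P1", {"approved"}, set()),
]

def cross_links(acts):
    """Yield (writer, reader, variable) where a value crosses participants."""
    last_writer = {}                      # variable -> (activity, participant)
    for name, part, reads, writes in acts:
        for v in reads:
            w = last_writer.get(v)        # None if v is an external input
            if w and w[1] != part:        # reader lives in another fragment
                yield (w[0], name, v)     # needs a message between fragments
        for v in writes:
            last_writer[v] = (name, part)

for link in cross_links(activities):
    print(link)
# ('receiveOrder', 'checkCredit', 'order')
# ('checkCredit', 'ship', 'approved')
```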


1999 ◽ Vol 7 (3-4) ◽ pp. 247-260
Author(s): Sungdo Moon ◽ Byoungro So ◽ Mary W. Hall

This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler's automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive "optimistic" data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
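
The core idea, data-flow facts guarded by predicates that can fall back to run-time tests, can be sketched as follows. This is illustrative, not the SUIF implementation: predicates are kept symbolic, the meet conjoins them, and an unproven predicate becomes the guard of a run-time test selecting between parallel and serial loop versions. All names and the string encoding of predicates are assumptions of the sketch.

```python
# Illustrative sketch of predicated data-flow facts; not the SUIF
# implementation. A fact holds only under its guarding predicate.

from dataclasses import dataclass

@dataclass
class Fact:
    value: str       # e.g. "A is privatizable in loop L"
    predicate: str   # symbolic condition under which the value holds

def meet(f: Fact, g: Fact) -> Fact:
    """Meet of two optimistic facts: the value holds if both predicates do."""
    if f.value != g.value:
        return Fact("unknown", "False")
    return Fact(f.value, f"({f.predicate}) and ({g.predicate})")

def emit(f: Fact, proven: set) -> str:
    """Parallelize outright if the predicate is statically proven; otherwise
    emit it as a low-cost run-time test guarding the parallel version."""
    if f.predicate in proven:
        return f"parallel version only  # {f.value} proven statically"
    return f"if {f.predicate}: parallel version, else: serial version"

fact = meet(Fact("A privatizable", "n > 0"), Fact("A privatizable", "m == n"))
print(emit(fact, proven=set()))
# if (n > 0) and (m == n): parallel version, else: serial version
```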


1981 ◽ Vol 10 (131)
Author(s): Flemming Nielson

Abstract Interpretation (P. Cousot, R. Cousot and others) is a method for program analysis that is able to describe many data flow analyses. We investigate and weaken the assumptions made in abstract interpretation and express abstract interpretation within Denotational Semantics. As an example we specify constant propagation.

Some authors have used abstract interpretation to formulate "available expressions" (a so-called "history-sensitive" data flow analysis). Our development of "available expressions" is better justified, semantically.

In traditional data flow analysis and abstract interpretation it is generally assumed that the "Meet Over all Paths" solution is wanted. We prove that the solution specified by our approach is the "Meet Over all Paths" solution to a certain system of equations obtained from the program.

To indicate the usefulness of our approach we show how to validate a class of program transformations, including "constant folding".

Throughout this paper we use a toy language consisting of declarations, expressions and commands (involving conditional and iteration). Excluded are procedures and jumps.
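
As an informal companion to the constant-propagation example, here is a sketch over the flat constant lattice. The paper's setting is denotational; this only shows the abstract values and a join at a control-flow merge, and the function names are ours.

```python
# Constant propagation over the flat lattice: each variable maps to
# BOTTOM (unreachable), a known constant, or TOP (non-constant).
# An informal sketch, not the paper's denotational development.

BOTTOM, TOP = "BOTTOM", "TOP"

def abs_add(x, y):
    """Abstract addition over the flat constant lattice."""
    if BOTTOM in (x, y):
        return BOTTOM
    if TOP in (x, y):
        return TOP
    return x + y          # both are concrete constants

def join(x, y):
    """Least upper bound: used where control-flow paths merge."""
    if x == BOTTOM: return y
    if y == BOTTOM: return x
    return x if x == y else TOP

# x := 2; y := 3; z := x + y on one branch, z := 5 on the other:
env = {"x": 2, "y": 3}
z_then = abs_add(env["x"], env["y"])   # 5
z_else = 5
print(join(z_then, z_else))            # 5: z is the constant 5 after the merge
print(join(5, 7))                      # TOP: not a constant
```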


1997 ◽ Vol 4 (2)
Author(s): David A. Schmidt

We systematically apply the principles of Cousot-Cousot-style abstract interpretation (a.i.) to the hierarchy of operational semantics definitions: flowchart, big-step, and small-step semantics. For each semantics format we examine the principles of safety and liveness interpretations, first-order and second-order analyses, and termination properties. Applications of a.i. to data-flow analysis, model checking, closure analysis, and concurrency theory are demonstrated. Our primary contributions are separating the concerns of safety, termination, and efficiency of representation, and showing how a.i. principles apply uniformly to the various levels of the operational semantics hierarchy and their applications.
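
The first-order safety interpretation over a small-step semantics can be sketched as a reachable-states fixpoint. The toy transition system below is invented for illustration: the collecting semantics is the least fixpoint of taking one-step successors from the initial states, and a safety property is then checked over that set.

```python
# Sketch of a safety (invariance) interpretation over a small-step
# semantics: compute the reachable states as a least fixpoint, then
# check an invariant. The counter machine is made up for illustration.

def step(state):
    """Small-step transition relation of a toy counter machine."""
    pc, x = state
    if pc == 0:                 # while x < 3: x := x + 1
        return [(1, x)] if x < 3 else [(2, x)]
    if pc == 1:
        return [(0, x + 1)]
    return []                   # pc == 2: halted

def reachable(init):
    """Collecting semantics: least fixpoint of the one-step successors."""
    seen, frontier = set(init), list(init)
    while frontier:
        for s in step(frontier.pop()):
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return seen

states = reachable([(0, 0)])
print(sorted(states))
# Safety check: x never exceeds 3 in any reachable state.
print(all(x <= 3 for _, x in states))   # True
```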

