Efficient interprocedural array data-flow analysis for automatic program parallelization

2000 ◽  
Vol 26 (3) ◽  
pp. 244-261 ◽  
Author(s):  
Junjie Gu ◽  
Zhiyuan Li

1999 ◽  
Vol 7 (3-4) ◽  
pp. 247-260
Author(s):  
Sungdo Moon ◽  
Byoungro So ◽  
Mary W. Hall

This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and the NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize the remaining parallelization opportunities, and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer the binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive “optimistic” data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
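As an illustration of the idea (a hypothetical sketch, not code from the paper or the SUIF system), the example below models a single loop body in which a write to an array section is guarded by a branch. A conventional analysis would give up because the guard cannot be proven at compile time; a predicated analysis keeps the optimistic fact "the later read is covered by the prior write" together with the condition under which it holds, and that condition becomes the run-time test guarding the parallel, privatized version of the loop. All names (Section, GuardedWrite, privatization_test) and the symbolic-bound representation are invented for the illustration.

```python
# Hypothetical sketch of predicated array data-flow values (not the SUIF
# implementation). A data-flow fact records which array section is written
# before being read within one loop iteration, together with the predicate
# under which that fact holds.
from dataclasses import dataclass

@dataclass(frozen=True)
class Section:
    lo: str   # symbolic lower bound, e.g. "1"
    hi: str   # symbolic upper bound, e.g. "m"

@dataclass(frozen=True)
class GuardedWrite:
    section: Section   # section written earlier in the iteration
    guard: str         # branch condition guarding the write

def privatization_test(write: GuardedWrite, read: Section) -> str:
    """Predicate under which the read is covered by the prior write, so the
    array is privatizable and the loop can run in parallel. For brevity we
    assume both sections share the same lower bound."""
    coverage = f"({read.hi}) <= ({write.section.hi})"
    # The optimistic value "read is covered" is valid only when the write's
    # guard and the coverage condition both hold at run time.
    return f"({write.guard}) && ({coverage})"

# Loop body:  if (m > 0) A[1:m] = ...;   ... = A[1:k];
write = GuardedWrite(Section("1", "m"), "m > 0")
read  = Section("1", "k")
print("parallelize with A privatized if:", privatization_test(write, read))
# -> parallelize with A privatized if: (m > 0) && ((k) <= (m))
```

At run time, the generated code would branch on such a predicate, executing a parallel version with the array privatized when it holds and falling back to the original sequential loop otherwise.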


1997 ◽  
Vol 07 (04) ◽  
pp. 359-370 ◽  
Author(s):  
Xin Yuan ◽  
Rajiv Gupta ◽  
Rami Melhem

Exhaustive global array data flow analysis for communication optimization is expensive and considered to be impractical for large programs. This paper proposes a demand-driven analysis approach that reduces the analysis cost by computing only the data flow information related to optimizations. In addition, the analysis cost of our scheme can be effectively managed by trading space for time or compromising precision for time.
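The sketch below is a hypothetical illustration of the demand-driven style described here, not the paper's algorithm: a single backward query asks whether an array section has already been communicated on every path reaching a use, visiting only the nodes relevant to that one question. A memo table trades space for time, and a depth budget trades precision for time by returning a conservative answer when it is exhausted. The function name, CFG encoding, and section strings are all assumptions made for the example.

```python
# Hypothetical sketch of a demand-driven query (not the paper's algorithm):
# "has array section `section` already been communicated on every path from
# program entry to `node`?"  Only nodes relevant to this one question are
# visited. `memo` trades space for time; `depth` trades precision for time.
def available_at_entry(node, section, preds, comm_at, memo=None, depth=100):
    if memo is None:
        memo = {}
    key = (node, section)
    if key in memo:
        return memo[key]
    if depth == 0:
        return False                    # budget exhausted: answer conservatively
    ps = preds.get(node, [])
    if not ps:                          # program entry: nothing communicated yet
        memo[key] = False
        return False
    result = all(
        section in comm_at.get(p, set())                 # communicated at predecessor
        or available_at_entry(p, section, preds,         # ...or earlier on the path
                              comm_at, memo, depth - 1)  # (acyclic CFG assumed)
        for p in ps
    )
    memo[key] = result
    return result

# Tiny CFG:  entry -> a -> b -> use,  entry -> c -> use
preds   = {"a": ["entry"], "b": ["a"], "c": ["entry"], "use": ["b", "c"]}
comm_at = {"a": {"A[1:n]"}, "c": {"A[1:n]"}}
print(available_at_entry("use", "A[1:n]", preds, comm_at))
# -> True: a receive of A[1:n] at `use` would be redundant on every path.
```

An exhaustive analysis would compute such availability facts at every program point for every array section; the demand-driven formulation answers only the queries that the communication optimizations actually raise.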

