sequential program
Recently Published Documents

Total documents: 48 (last five years: 3)
H-index: 9 (last five years: 0)

Author(s): S. Blom, S. Darabi, M. Huisman, M. Safari

Abstract: A commonly used approach to develop deterministic parallel programs is to augment a sequential program with compiler directives that indicate which program blocks may potentially be executed in parallel. This paper develops a verification technique to reason about such compiler directives, in particular to show that they do not change the behaviour of the program. Moreover, the verification technique is tool-supported and can be combined with proving functional correctness of the program. To develop our verification technique, we propose a simple intermediate representation (syntax and semantics) that captures the main forms of deterministic parallel programs. This language distinguishes three kinds of basic blocks: parallel, vectorised and sequential blocks, which can be composed using three different composition operators: sequential, parallel and fusion composition. We show how a widely used subset of OpenMP can be encoded into this intermediate representation. Our verification technique builds on the notion of iteration contract to specify the behaviour of basic blocks; we show that if iteration contracts are manually specified for single blocks, then that is sufficient to automatically reason about data race freedom of the composed program. Moreover, we also show that it is sufficient to establish functional correctness on a linearised version of the original program to conclude functional correctness of the parallel program. Finally, we illustrate our approach on an example OpenMP program, and we discuss how tool support is provided.
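
To make the notion of an iteration contract concrete, the sketch below shows an OpenMP loop whose body carries a per-iteration contract written in a generic permission-based separation-logic notation. Both the vec_add example and the annotation syntax are illustrative assumptions, not the paper's exact tool input.

/* Illustrative sketch, assuming a hypothetical contract notation:
 * each iteration claims write permission to a[i] and read permission
 * to b[i] and c[i], expressed inside a structured comment so the
 * file remains plain C. */
void vec_add(int n, int *a, const int *b, const int *c) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
    /*@ requires Perm(a[i], write) ** Perm(b[i], read) ** Perm(c[i], read);
        ensures  Perm(a[i], write) ** Perm(b[i], read) ** Perm(c[i], read);
        ensures  a[i] == b[i] + c[i];
    @*/
    {
        a[i] = b[i] + c[i];
    }
}

Because each iteration claims write permission only to a[i] and read permission to b[i] and c[i], the per-iteration footprints are pairwise disjoint, which is the kind of single-block condition from which, per the abstract, data race freedom of the composed program can be derived automatically.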


2021, pp. 265-284
Author(s): Tsubasa Shoshi, Takuma Ishikawa, Naoki Kobayashi, Ken Sakayori, Ryosuke Sato, ...

2018, Vol 7 (4.6), pp. 150
Author(s): Parwat Singh Anjana, N. Naga Maruthi, Sagar Gujjunoori, Madhu Oruganti

The advancement of computer systems such as multi-core and multiprocessor systems has made computation much faster than before. However, the efficient utilization of these rich computing resources is still an emerging area. For efficient utilization of computing resources, many optimization techniques have been developed, some applied at compile time and some at runtime. When all the information required for parallel execution is known at compile time, an optimizing compiler can reasonably parallelize a sequential program. However, the optimizing compiler fails when it encounters compile-time unknowns in the program. A conventional solution to this problem is to perform the parallelization at runtime. In this article, we propose three different solutions for parallelizing loops with irregular array references, with and without dependencies. More specifically, we introduce a runtime-check-based parallelization technique for static irregular references without dependencies, an inspector-executor-based parallelization technique for static irregular references with dependencies, and finally a speculative parallelization technique (BitTLS) for dynamic irregular references with dependencies. For profiling the runtime information, shared and private data structures are used. To detect dependencies between footprints and to synchronize threads at runtime, we use bit-level operations. A window-based scheduling policy is employed to assign iterations to threads.
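
The abstract does not spell out the algorithms, but the inspector-executor pattern it names is well established; the following sketch (C with OpenMP, all names hypothetical) shows the idea for a loop updating x[idx[i]], where the index array idx is known only at runtime: an inspector partitions iterations into dependence-free "wavefronts", and an executor runs each wavefront in parallel.

#include <stdlib.h>

/* Hypothetical inspector-executor sketch, not the paper's algorithm.
 * Target loop:  for (i = 0; i < n; i++) x[idx[i]] += g(i);
 * idx[] (the irregular references) is known only at runtime. */

/* Inspector: assign each iteration the earliest wavefront in which the
 * element it updates is not also updated by an earlier iteration. */
void inspector(int n, const int *idx, int m, int *wave, int *nwaves) {
    int *next = calloc(m, sizeof *next); /* next free wavefront per element */
    *nwaves = 0;
    for (int i = 0; i < n; i++) {
        wave[i] = next[idx[i]]++;        /* serialize updates to x[idx[i]] */
        if (next[idx[i]] > *nwaves) *nwaves = next[idx[i]];
    }
    free(next);
}

/* Executor: wavefronts run in order; iterations within one wavefront
 * touch pairwise-distinct elements, so they may run in parallel. */
void executor(int n, const int *idx, const int *wave, int nwaves, double *x) {
    for (int w = 0; w < nwaves; w++) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            if (wave[i] == w)
                x[idx[i]] += 1.0;        /* stand-in for the real update g(i) */
    }
}

Rescanning all n iterations per wavefront keeps the sketch short; a real implementation would bucket iterations by wavefront during inspection.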

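For the speculative (BitTLS) case, the abstract says only that bit-level operations over shared footprint structures detect dependencies and synchronize threads. A generic sketch of that idea (hypothetical layout, not the paper's design) keeps one reader and one writer bit per speculative thread on each element and tests for conflicts with a mask:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical footprint record: one bit per speculative thread, where
 * thread t executes the t-th iteration of the current window (up to 32
 * threads per window with a 32-bit mask). */
typedef struct {
    uint32_t readers;   /* bit t set => thread t read this element  */
    uint32_t writers;   /* bit t set => thread t wrote this element */
} footprint_t;

static void record_read(footprint_t *fp, int t)  { fp->readers |= 1u << t; }
static void record_write(footprint_t *fp, int t) { fp->writers |= 1u << t; }

/* A write by logical iteration t conflicts with any access already made
 * by a logically later iteration: that iteration saw a stale value and
 * must be squashed and re-executed. */
static bool write_conflict(const footprint_t *fp, int t) {
    uint32_t later = (t + 1 < 32) ? (~0u << (t + 1)) : 0u;
    return ((fp->readers | fp->writers) & later) != 0;
}

The window-based scheduling policy mentioned in the abstract would bound t to the iterations currently in flight, so the per-element masks stay small.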

2018, Vol 44 (5), pp. 491-511
Author(s): Zhenzhou Tian, Ting Liu, Qinghua Zheng, Eryue Zhuang, Ming Fan, ...

2015, Vol 54 (4), pp. 540-545
Author(s): V. Yu. Korolev, R. L. Smelyanskii, T. R. Smelyanskii, A. V. Shalimov

2015, Vol 2015, pp. 1-12
Author(s): D. C. Kiran, S. Gurunarayanan, Janardan Prasad Misra, Abhijeet Nawal

This work discusses various compiler-level global scheduling techniques for multicore processors. The main contribution of the work is to delegate the job of exploiting fine-grained parallelism to the compiler, thereby reducing the hardware overhead and the programming complexity. This goal is achieved by decomposing a sequential program into multiple subblocks and constructing a subblock dependency graph (SDG). The proposed schedulers select subblocks from the SDG and schedule them on different cores, ensuring the correct execution order of the subblocks. In conjunction with the parallelization techniques, locality optimizations are performed to minimize communication overhead between the cores. The observed results indicate better and more balanced speed-up per watt.
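
As an illustration of scheduling over a subblock dependency graph, the self-contained sketch below (hypothetical; the paper's actual schedulers are not described in the abstract) topologically orders subblocks by in-degree and assigns ready ones to cores round-robin, so no subblock starts before its predecessors:

#include <stdio.h>

/* Hypothetical list-scheduling sketch over an SDG: dep[i][j] != 0 means
 * subblock j depends on subblock i. Subblocks whose predecessors have
 * all been scheduled are assigned round-robin to cores. */
enum { N = 5, CORES = 2 };

int main(void) {
    int dep[N][N] = {0};
    dep[0][1] = dep[0][2] = 1;       /* 1 and 2 depend on 0 */
    dep[1][3] = dep[2][3] = 1;       /* 3 depends on 1 and 2 */
    dep[3][4] = 1;                   /* 4 depends on 3 */

    int indeg[N] = {0}, scheduled[N] = {0}, done = 0, core = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (dep[i][j]) indeg[j]++;

    while (done < N) {
        for (int j = 0; j < N; j++) {
            if (!scheduled[j] && indeg[j] == 0) {
                printf("subblock %d -> core %d\n", j, core);
                core = (core + 1) % CORES;   /* round-robin placement */
                scheduled[j] = 1;
                done++;
                for (int k = 0; k < N; k++)  /* release successors */
                    if (dep[j][k]) indeg[k]--;
            }
        }
    }
    return 0;
}

A locality-aware variant, in the spirit of the abstract's locality optimizations, would replace the round-robin choice with one that co-locates communicating subblocks on the same core.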

