OPUS: HETEROGENEOUS COMPUTING WITH DATA PARALLEL TASKS

1999, Vol 09 (02), pp. 275-289
Author(s): Erwin Laure, Piyush Mehrotra, Hans Zima

The coordination language Opus is an object-based extension of High Performance Fortran (HPF) that supports the integration of coarse-grain task parallelism with HPF-style data parallelism. In this paper we discuss Opus in the context of multidisciplinary applications (MDAs) which execute in a heterogeneous environment. After outlining the major properties of such applications and a number of different approaches towards providing language and tool support for MDAs, we describe the salient features of Opus and its implementation, emphasizing the issues related to the coordination of data-parallel HPF programs in a heterogeneous environment.

1997, Vol 6 (4), pp. 345-362
Author(s): Barbara Chapman, Matthew Haines, Piyush Mehrotra, Hans Zima, John Van Rosendale

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
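
To make the SDA idea more concrete, here is a minimal analogy in plain Fortran: a module acting as a data repository that one task fills and another drains. The module and procedure names (pool_sda, put_block, get_block) are invented for illustration, and this is ordinary Fortran rather than Opus syntax; an actual SDA additionally provides method-level synchronization and can itself be internally data parallel, as the abstract notes.

! Analogy only: a "data repository" in the spirit of an SDA, written as an
! ordinary Fortran module with no synchronization and no Opus syntax.
module pool_sda
  implicit none
  private
  real, allocatable :: buffer(:)   ! shared state held by the abstraction
  logical :: full = .false.        ! crude availability flag
  public :: put_block, get_block
contains
  subroutine put_block(values)     ! a producer task deposits its result
    real, intent(in) :: values(:)
    buffer = values
    full = .true.
  end subroutine put_block

  subroutine get_block(values, ok) ! a consumer task retrieves the data
    real, allocatable, intent(out) :: values(:)
    logical, intent(out) :: ok
    ok = full
    if (full) values = buffer
  end subroutine get_block
end module pool_sda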


1997, Vol 07 (04), pp. 437-449
Author(s): Robert Schreiber

This paper introduces the ideas that underlie the data-parallel language High Performance Fortran (HPF) and the new ideas in version 2 of HPF. It first reviews HPF's key language elements. It discusses the meaning of data parallelism and the limitations of HPF version 1 as a data-parallel programming language. The second part of the paper is a review of the development of version 2 of HPF. The extended language, under development in 1996, includes a richer data mapping capability; an extension to the independent loop that allows reduction operations in the loop range; a means for directing the mapping of computation as well as data; and a way to specify concurrent execution of several parallel tasks on disjoint subsets of processors.
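
As a rough illustration of two of the version 2 extensions listed above, the fragment below combines an INDEPENDENT loop carrying a REDUCTION clause with an ON HOME directive that maps each iteration to the owner of the referenced array element. This is a hedged sketch of the directive forms only; the program, array names, and sizes are invented here, and the directives are read as comments by a plain Fortran compiler.

! Sketch of HPF version 2 style directives; not an excerpt from the paper.
program hpf2_sketch
  implicit none
  integer, parameter :: n = 1024
  real :: a(n), b(n), s
  integer :: i
!HPF$ PROCESSORS p(4)
!HPF$ DISTRIBUTE a(BLOCK) ONTO p        ! data mapping
!HPF$ ALIGN b(:) WITH a(:)
  a = 1.0
  b = 2.0
  s = 0.0
!HPF$ INDEPENDENT, REDUCTION(s)         ! version 2: reduction in an INDEPENDENT loop
  do i = 1, n
!HPF$ ON HOME(a(i))                     ! version 2: map the computation, not just the data
    s = s + a(i)*b(i)
  end do
  print *, 'dot product =', s
end program hpf2_sketch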


1995, Vol 04 (01n02), pp. 33-53
Author(s): Arvind K. Bansal

Associative computation is characterized by seamless intertwining of search-by-content and data parallel computation. The search-by-content paradigm is natural to scalable high performance heterogeneous computing since the use of tagged data avoids the need for explicit addressing mechanisms. In this paper, the author presents an algebra for associative logic programming, an associative resolution scheme, and a generic framework of an associative abstract instruction set. The model is based on the integration of data alignment and the use of two types of bags: data element bags and filter bags of Boolean values to select and restrict computation on data elements. The use of filter bags integrated with data alignment reduces computation and data transfer overhead, and the use of tagged data reduces overhead of preparing data before data transmission. The abstract instruction set has been illustrated by an example. Performance results are presented for a simulation in a homogeneous address space.
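
The bag-and-filter idea can be pictured with ordinary array notation: the data element bag becomes an array and the filter bag becomes a logical mask that restricts a data-parallel operation to matching elements. The fragment below is only an illustrative analogy in standard Fortran, with invented names and a trivial search-by-content predicate; it is not the paper's associative abstract instruction set.

! Analogy only: a "data element bag" as an array, a "filter bag" as a logical
! mask, and a computation restricted to the elements selected by the filter.
program filter_bag_sketch
  implicit none
  integer, parameter :: n = 8
  real    :: bag(n)
  logical :: filter(n)
  bag = (/ 3.0, 7.0, 1.0, 9.0, 4.0, 7.0, 2.0, 7.0 /)
  filter = (bag > 5.0)              ! search by content: tag the matching elements
  where (filter) bag = bag * 10.0   ! data-parallel update restricted by the filter bag
  print *, 'selected elements:', count(filter)
  print *, bag
end program filter_bag_sketch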


1997, Vol 6 (1), pp. 3-27
Author(s): Corinne Ancourt, Fabien Coelho, François Irigoin, Ronan Keryell

High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
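
One small but representative piece of what such a compilation scheme must synthesize is the set of tight local loop bounds owned by each processor. For a one-dimensional BLOCK distribution these bounds have a simple closed form, shown below as a hand-written sketch (0-based processor numbering, names invented here); it is not code produced by the framework described in the paper.

! Sketch: the local index range owned by one processor when an array of size n
! is BLOCK-distributed over nprocs processors (block size = ceiling(n/nprocs)).
program bounds_demo
  implicit none
  integer :: p, lo, hi
  do p = 0, 3
    call block_bounds(10, 4, p, lo, hi)
    print '(a,i0,a,i0,a,i0)', 'processor ', p, ' owns indices ', lo, ' .. ', hi
  end do
contains
  subroutine block_bounds(n, nprocs, me, lo, hi)
    integer, intent(in)  :: n, nprocs, me
    integer, intent(out) :: lo, hi
    integer :: blk
    blk = (n + nprocs - 1) / nprocs  ! block size = ceiling(n/nprocs)
    lo  = me*blk + 1                 ! first global index owned by processor me
    hi  = min((me + 1)*blk, n)       ! last global index owned (empty range if lo > hi)
  end subroutine block_bounds
end program bounds_demo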


1997, Vol 6 (1), pp. 153-158
Author(s): Terry W. Clark, Reinhard v. Hanxleden, Ken Kennedy

To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that are probably neither limited to this particular application nor likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.
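
To give one concrete flavour of the kind of structural property involved, the fragment below contrasts a loop whose accesses a data-parallel compiler can analyze statically with one using index indirection, which typically defeats static distribution analysis. This is a generic illustration invented here, not one of the paper's GROMOS-specific guidelines.

! Generic illustration: a regular loop that a data-parallel compiler can
! analyze statically, versus an indirectly addressed loop that it usually cannot.
program structure_sketch
  implicit none
  integer, parameter :: n = 16
  real    :: a(n), b(n)
  integer :: idx(n), i
  a = 1.0
  idx = (/ (n + 1 - i, i = 1, n) /)   ! a permutation known only at run time

  ! Regular access: iteration i touches only a(i) and b(i); easy to distribute.
  do i = 1, n
    b(i) = 2.0*a(i)
  end do

  ! Indirect access: the compiler cannot tell at compile time which elements
  ! of a are read, so ownership and communication cannot be derived statically.
  do i = 1, n
    b(i) = b(i) + a(idx(i))
  end do

  print *, sum(b)
end program structure_sketch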


1996, Vol 5 (4), pp. 337-364
Author(s): Yu Hu, S. Lennart Johnsson

The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Communication accounts for about 10–20% of the total execution time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.


1995, Vol 4 (2), pp. 87-113
Author(s): John Merlin, Anthony Hey

High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.
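
As a taste of the features such a tutorial covers, the fragment below pairs the two ingredients named in the abstract, data mapping directives and a concurrent construct (here a FORALL). It is a generic illustration rather than an example taken from the article, and the directives are read as comments by an ordinary Fortran compiler.

! Generic HPF flavour: data mapping directives plus a concurrent FORALL.
program hpf_taste
  implicit none
  integer, parameter :: n = 100
  real :: x(n), y(n)
  integer :: i
!HPF$ DISTRIBUTE x(CYCLIC)           ! spread x round-robin across the processors
!HPF$ ALIGN y(i) WITH x(i)           ! keep y(i) on the same processor as x(i)
  x = (/ (real(i), i = 1, n) /)
  forall (i = 1:n) y(i) = 2.0*x(i)   ! concurrent elementwise update
  print *, 'sum of y =', sum(y)
end program hpf_taste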


2021, Vol 21, pp. 1-13
Author(s): Pin Xu, Masato Edahiro, Kondo Masaki

In this paper, we propose a method to automatically generate parallelized code from Simulink models, while exploiting both task and data parallelism. Building on previous research, we propose a model-based parallelizer (MBP) that exploits task parallelism and assigns tasks to CPU cores using a hierarchical clustering method. We also propose a method in which data-parallel SYCL code is generated from Simulink models; computations with data parallelism are expressed in the form of S-Function Builder blocks and are executed in a heterogeneous computing environment. Most parts of the procedure can be automated with scripts, and the two methods can be applied together. In the evaluation, the data-parallel programs generated using our proposed method achieved a maximum speedup of approximately 547 times compared to sequential programs, without observable differences in the computed results. In addition, the programs generated while exploiting both task and data parallelism were confirmed to achieve better performance than those exploiting only one of the two.

