Communication Generation for Block-Cyclic Distributions

1997 ◽  
Vol 07 (02) ◽  
pp. 195-202 ◽  
Author(s):  
A. Venkatachar ◽  
J. Ramanujam ◽  
A. Thirumalai

Data-parallel languages such as High Performance Fortran, Vienna Fortran and Fortran D include directives such as alignment and distribution that describe how data and computation are mapped onto the processors of a distributed-memory multiprocessor. An HPF compiler that generates code for each processor has to compute the sequence of local memory addresses accessed by each processor and the sequence of sends and receives needed for a given processor to access non-local data. In this paper, we present a novel approach to the generation of communication sets that exploits a pattern in the send-receive index pairs. In addition, we present an algorithm for code generation. Experimental results demonstrate the viability of this technique.
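To make the setting concrete, the sketch below (our illustration; the array name, sizes, and block width are assumptions, not taken from the paper) shows a block-cyclic CYCLIC(K) distribution in HPF together with the owner computation that communication-set generation builds on. A plain Fortran compiler treats the !HPF$ lines as comments, so the program runs as ordinary Fortran.

    program block_cyclic
      implicit none
      integer, parameter :: n = 100, np = 4, k = 8
      real :: a(n)
      integer :: i
    !HPF$ PROCESSORS p(np)
    !HPF$ DISTRIBUTE a(CYCLIC(k)) ONTO p
      ! Element i of a (1-based) is owned by processor mod((i-1)/k, np)
      ! (0-based id); a compiler derives the local addresses and the
      ! send/receive index pairs discussed above from this mapping.
      do i = 1, n
        print *, i, mod((i - 1) / k, np)
      end do
    end program block_cyclic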

1997 ◽  
Vol 6 (4) ◽  
pp. 345-362 ◽  
Author(s):  
Barbara Chapman ◽  
Matthew Haines ◽  
Piyush Mehrotra ◽  
Hans Zima ◽  
John Van Rosendale

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called Shared Data Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.


1997 ◽  
Vol 6 (1) ◽  
pp. 153-158 ◽  
Author(s):  
Terry W. Clark ◽  
Reinhard v. Hanxleden ◽  
Ken Kennedy

Efficiently parallelizing a scientific application with a data-parallel compiler requires that the source program have certain structural properties and, conversely, lack others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we encountered a number of difficulties that are probably neither limited to this particular application nor likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.
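By way of illustration (our example, not drawn from the paper), one such guideline is to prefer regular, compile-time-analyzable array references over indirect, runtime-dependent ones, since the latter force a data-parallel compiler to fall back on expensive runtime communication schedules:

    program guideline
      implicit none
      integer, parameter :: n = 100
      real :: x(n), w(n)
      integer :: idx(n), i
    !HPF$ DISTRIBUTE x(BLOCK)
      x = 1.0
      idx = (/ (mod(i * 7, n) + 1, i = 1, n) /)
      ! Hard for a data-parallel compiler: the communication pattern
      ! depends on the runtime contents of idx (a gather).
      do i = 1, n
        w(i) = x(idx(i))
      end do
      ! Friendlier: a regular shifted reference whose communication can
      ! be analyzed, vectorized, and aggregated at compile time.
      do i = 2, n
        w(i) = x(i - 1)
      end do
      print *, w(n)
    end program guideline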


1996 ◽  
Vol 5 (4) ◽  
pp. 337-364 ◽  
Author(s):  
Yu Hu ◽  
S. Lennart Johnsson

The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Communication accounts for about 10–20% of the total execution time; the average efficiency for arithmetic operations is about 40%, and the total efficiency (including communication) is about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.
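As a minimal sketch of the data-distribution side (our illustration; names and sizes are assumptions, and the actual implementation is far more involved), the particle arrays of an N-body code can be block-distributed and mutually aligned in HPF so that each node holds a contiguous group of particles:

    program nbody_map
      implicit none
      integer, parameter :: n = 4096
      real :: x(n), y(n), z(n), phi(n)
    !HPF$ PROCESSORS nodes(32)
    !HPF$ DISTRIBUTE x(BLOCK) ONTO nodes
    !HPF$ ALIGN y(i) WITH x(i)
    !HPF$ ALIGN z(i) WITH x(i)
    !HPF$ ALIGN phi(i) WITH x(i)
      ! Each node now owns the same contiguous index range of all four
      ! arrays, so near-field work stays node-local and only far-field
      ! interactions require inter-node communication.
      x = 0.0; y = 0.0; z = 0.0
      phi = 0.0
    end program nbody_map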


2000 ◽  
Vol 10 (02n03) ◽  
pp. 189-200 ◽  
Author(s):  
THOMAS BRANDES

On distributed-memory architectures, data-parallel compilers emulate a global address space by distributing data onto the processors according to the user's mapping directives and by automatically generating explicit inter-processor communication. A shadow is additional local memory allocated so that a processor also keeps non-local values of the data it accesses or defines. While shadow edges are well studied for structured grids, this paper focuses on their use in applications with unstructured grids, where updates on the shadow edges involve unstructured communication with complex communication schedules. The use of shadow edges is considered for High Performance Fortran (HPF), the de facto standard language for writing data-parallel programs in Fortran. A library with an HPF binding provides explicit control of unstructured shadows and their communication schedules, also called halos. This halo library allows HPF programs to achieve performance close to hand-coded message-passing versions while freeing the user from calculating shadow sizes and communication schedules and from exchanging data with explicit message-passing commands. In certain situations, the HPF compiler can create and use halos automatically. This paper shows the advantages and also the limits of this approach. The halo library and automatic support for halos have been implemented within the ADAPTOR HPF compilation system. The performance results verify the effectiveness of the chosen approach.
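For the structured-grid case that the paper takes as its starting point, HPF 2.0's SHADOW directive already expresses the idea. The sketch below (our illustration; it is not the halo library's API, which we do not reproduce here) allocates one overlap element on each side of a block-distributed array:

    program shadow_demo
      implicit none
      integer, parameter :: n = 1000
      real :: u(n), unew(n)
      integer :: i
    !HPF$ PROCESSORS p(4)
    !HPF$ DISTRIBUTE u(BLOCK) ONTO p
    !HPF$ ALIGN unew(i) WITH u(i)
    !HPF$ SHADOW u(1:1)
      u = 1.0
      unew = 0.0
    !HPF$ INDEPENDENT
      do i = 2, n - 1
        ! At block boundaries, u(i-1) and u(i+1) are read from the
        ! shadow region, which the compiler fills with one vectorized
        ! exchange per sweep instead of element-wise messages.
        unew(i) = 0.5 * (u(i - 1) + u(i + 1))
      end do
      print *, unew(n / 2)
    end program shadow_demo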


1997 ◽  
Vol 6 (1) ◽  
pp. 3-27 ◽  
Author(s):  
Corinne Ancourt ◽  
Fabien Coelho ◽  
François Irigoin ◽  
Ronan Keryell

High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
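A minimal example of the kind of input this scheme targets (our illustration, with assumed names and sizes): an INDEPENDENT loop nest over block-distributed arrays whose shifted reference induces the boundary communication that the framework vectorizes:

    program indep_demo
      implicit none
      integer, parameter :: n = 512
      real :: a(n, n), b(n, n)
      integer :: i, j
    !HPF$ PROCESSORS p(4)
    !HPF$ DISTRIBUTE a(BLOCK, *) ONTO p
    !HPF$ ALIGN b(i, j) WITH a(i, j)
      a = 1.0
      b = 0.0
    !HPF$ INDEPENDENT
      do i = 2, n - 1
    !HPF$ INDEPENDENT
        do j = 1, n
          ! a(i-1, j) crosses the BLOCK boundary in the first
          ! dimension; an affine framework derives tight local loop
          ! bounds and a single vectorized message for these values.
          b(i, j) = a(i, j) + a(i - 1, j)
        end do
      end do
      print *, b(2, 1)
    end program indep_demo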


1997 ◽  
Vol 07 (04) ◽  
pp. 437-449 ◽  
Author(s):  
Robert Schreiber

This paper introduces the ideas that underlie the data-parallel language High Performance Fortran (HPF) and the new ideas in version 2 of HPF. It first reviews HPF's key language elements. It discusses the meaning of data parallelism and the limitations of HPF version 1 as a data-parallel programming language. The second part of the paper is a review of the development of version 2 of HPF. The extended language, under development in 1996, includes a richer data mapping capability; an extension to the independent loop that allows reduction operations in the loop range; a means for directing the mapping of computation as well as data; and a way to specify concurrent execution of several parallel tasks on disjoint subsets of processors.
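As one concrete instance of these extensions (our sketch; the variable names are assumptions), HPF 2.0 lets an INDEPENDENT loop carry a REDUCTION clause, so a sum accumulated across iterations no longer breaks the independence assertion:

    program hpf2_reduction
      implicit none
      integer, parameter :: n = 1000
      real :: a(n), s
      integer :: i
    !HPF$ DISTRIBUTE a(BLOCK)
      a = 1.0
      s = 0.0
    !HPF$ INDEPENDENT, REDUCTION(s)
      do i = 1, n
        ! Each processor accumulates a partial sum over its local
        ! block; the partial sums are combined when the loop ends.
        s = s + a(i)
      end do
      print *, s
    end program hpf2_reduction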


1995 ◽  
Vol 4 (2) ◽  
pp. 87-113 ◽  
Author(s):  
John Merlin ◽  
Anthony Hey

High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.
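The flavor of these features is easy to convey with a small fragment (illustrative, not taken from the article): a PROCESSORS arrangement, a BLOCK distribution, an alignment, and a concurrent FORALL that then needs no communication:

    program hpf_intro
      implicit none
      integer, parameter :: n = 16
      real :: a(n), b(n)
      integer :: i
    !HPF$ PROCESSORS p(4)
    !HPF$ DISTRIBUTE a(BLOCK) ONTO p
    !HPF$ ALIGN b(i) WITH a(i)
      a = (/ (real(i), i = 1, n) /)
      ! b(i) is stored with a(i), so this concurrent assignment is
      ! purely local on every processor.
      forall (i = 1:n) b(i) = 2.0 * a(i)
      print *, b
    end program hpf_intro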


1999 ◽  
Vol 09 (02) ◽  
pp. 275-289 ◽  
Author(s):  
ERWIN LAURE ◽  
PIYUSH MEHROTRA ◽  
HANS ZIMA

The coordination language Opus is an object-based extension of High Performance Fortran (HPF) that supports the integration of coarse-grain task parallelism with HPF-style data parallelism. In this paper we discuss Opus in the context of multidisciplinary applications (MDAs), which execute in a heterogeneous environment. After outlining the major properties of such applications and a number of different approaches towards providing language and tool support for MDAs, we describe the salient features of Opus and its implementation, emphasizing the issues related to the coordination of data-parallel HPF programs in a heterogeneous environment.

