Nonblocking Data Structures for Distributed-Memory Machines: Stacks as an Example

Author(s):  
Thanh-Dang Diep ◽  
Karl Fürlinger

2006 ◽  
Vol 17 (02) ◽  
pp. 251-270 ◽  
Author(s):  
Thomas Rauber ◽  
Gudula Rünger

Multiprocessor task (M-task) programming is a parallel programming model well suited to coding application problems with an inherent modular structure. An M-task can be executed on a processor group of arbitrary size, concurrently with other M-tasks of the same application program. The data of an M-task program usually include composed data structures, such as vectors or arrays. On distributed-memory machines or cluster platforms, these composed data structures are distributed within one or more processor groups. A concise parallel programming model for M-tasks therefore requires a standardized distributed data format for composed data structures, as well as data re-distribution functions that handle different data distributions and different processor group layouts and glue program parts together. In this paper, we present a data re-distribution library that extends M-task programming with Tlib, a library providing operations to split processor groups and to map M-tasks to processor groups.
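
The abstract does not show the library's interface, so the fragment below is only a rough MPI/C++ sketch of the underlying idea (all names are invented): a processor group is split into two M-task groups, and a block-distributed vector is re-distributed onto one of them. A real re-distribution library in the spirit of Tlib would compute point-to-point communication schedules; for brevity this sketch funnels the data through one rank.

```cpp
// Hypothetical sketch, not the library's API: split a processor group and
// re-distribute a block-distributed vector onto one subgroup.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 16;                 // global vector length (assumed divisible)
    int nloc = N / size;              // block size in the source distribution
    std::vector<double> block(nloc, static_cast<double>(rank));

    // Split the processes into two M-task groups (even/odd world ranks).
    MPI_Comm group;
    int color = rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &group);

    // Re-distribute: collect the vector on world rank 0 ...
    std::vector<double> full(rank == 0 ? N : 0);
    MPI_Gather(block.data(), nloc, MPI_DOUBLE,
               full.data(), nloc, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    // ... and scatter it block-wise inside group 0 only (world rank 0 is
    // rank 0 of that group, so it holds the send buffer).
    if (color == 0) {
        int gsize; MPI_Comm_size(group, &gsize);
        int gnloc = N / gsize;        // block size in the target distribution
        std::vector<double> gblock(gnloc);
        MPI_Scatter(full.data(), gnloc, MPI_DOUBLE,
                    gblock.data(), gnloc, MPI_DOUBLE, 0, group);
        std::printf("world rank %d now owns %d elements\n", rank, gnloc);
    }

    MPI_Comm_free(&group);
    MPI_Finalize();
}
```

Compiled with mpic++ and run with, e.g., mpirun -np 4, the two even-ranked processes each end up owning half of the re-distributed vector.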


1993 ◽  
Vol 2 (4) ◽  
pp. 179-192
Author(s):  
Sandeep Bhatt ◽  
Marina Chen ◽  
James Cowie ◽  
Cheng-Yee Lin ◽  
Pangfeng Liu

This article reports on experiments from our ongoing project whose goal is to develop a C++ library that supports adaptive and irregular data structures on distributed-memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures as well as load balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which are themselves application-independent), and the low overhead of the resulting C++ code (relative to hand-crafted C code), support our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that, to the applications programmer, is architecture-independent. Our contribution to parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies that are useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.
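
The library's abstractions themselves are not reproduced in the abstract. As a hypothetical, purely sequential sketch of the kind of dynamically evolving treelike structure a Barnes-Hut style tree code maintains, the C++ fragment below defines an octree node type and a force-evaluation traversal; the communication and load-balancing machinery the paper encapsulates is omitted entirely.

```cpp
// Hypothetical sketch, not the paper's API: the dynamically evolving tree
// a Barnes-Hut style tree code maintains. Distribution and load balancing,
// which the paper's library encapsulates, are omitted.
#include <array>
#include <cmath>
#include <cstdio>
#include <memory>
#include <vector>

struct Body { double x, y, z, mass; };

struct Node {
    double cx = 0, cy = 0, cz = 0, mass = 0;    // center of mass of subtree
    double size = 0;                             // edge length of this cell
    std::vector<Body> bodies;                    // particles, if a leaf
    std::array<std::unique_ptr<Node>, 8> child;  // octants, grown on demand
};

// Barnes-Hut acceptance test: a cell far enough away may be treated as a
// single pseudo-particle located at its center of mass.
bool well_separated(const Node& n, const Body& b, double theta) {
    double dx = n.cx - b.x, dy = n.cy - b.y, dz = n.cz - b.z;
    return n.size * n.size < theta * theta * (dx*dx + dy*dy + dz*dz);
}

// Gravitational potential at b, opening a cell only when it is too close
// for the monopole approximation.
double potential(const Node& n, const Body& b, double theta) {
    bool leaf = true;
    for (const auto& c : n.child) if (c) { leaf = false; break; }
    if (leaf) {                                  // direct sum over particles
        double p = 0;
        for (const Body& q : n.bodies) {
            double dx = q.x - b.x, dy = q.y - b.y, dz = q.z - b.z;
            double r = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (r > 0) p -= q.mass / r;
        }
        return p;
    }
    if (well_separated(n, b, theta)) {           // monopole approximation
        double dx = n.cx - b.x, dy = n.cy - b.y, dz = n.cz - b.z;
        return -n.mass / std::sqrt(dx*dx + dy*dy + dz*dz);
    }
    double p = 0;                                // otherwise open the cell
    for (const auto& c : n.child) if (c) p += potential(*c, b, theta);
    return p;
}

int main() {
    Node root;                                   // a single leaf cell
    root.bodies = { {0, 0, 0, 1.0}, {1, 0, 0, 1.0} };
    Body probe{4, 0, 0, 1.0};
    std::printf("phi = %f\n", potential(root, probe, 0.5));
}
```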


1995 ◽  
Vol 17 (2) ◽  
pp. 233-263 ◽  
Author(s):  
Anne Rogers ◽  
Martin C. Carlisle ◽  
John H. Reppy ◽  
Laurie J. Hendren

1992 ◽  
Vol 1 (1) ◽  
pp. 31-50 ◽  
Author(s):  
Barbara Chapman ◽  
Piyush Mehrotra ◽  
Hans Zima

Exploiting the full performance potential of distributed-memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran that provides the user with a wide range of facilities for such mapping of data structures. In contrast to current programming practice, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared-memory programming paradigm while explicitly controlling the data distribution. In this paper, we present the language features of Vienna Fortran for FORTRAN 77, together with examples illustrating their use.
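
Vienna Fortran's annotation syntax is not reproduced here, so the C++ fragment below is only a language-neutral illustration of what a block distribution means: the global-to-local index arithmetic a compiler generates behind the scenes, which lets the programmer keep writing global references while each processor stores only its own block.

```cpp
// Illustration only, not Vienna Fortran syntax: global-to-local index
// arithmetic implied by a BLOCK distribution of n elements over p processors.
#include <cstdio>
#include <utility>

// Map global index 0 <= i < n to (owner processor, local index), with
// blocks of size ceil(n / p).
std::pair<int, int> block_map(int i, int n, int p) {
    int b = (n + p - 1) / p;          // block size
    return { i / b, i % b };
}

int main() {
    // For an array of 1000 elements distributed block-wise over 4
    // processors, global element 637 lives on processor 2 at local
    // position 137.
    auto [owner, local] = block_map(637, 1000, 4);
    std::printf("owner=%d local=%d\n", owner, local);
}
```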


1995 ◽  
Vol 2 (2) ◽  
pp. 18-29 ◽  
Author(s):  
Yuan-Shin Hwang ◽  
R. Das ◽  
J.H. Saltz ◽  
M. Hodoscek ◽  
B.R. Brooks
