An Environment for Portable Distributed Memory Parallel Programming

Author(s):  
C. Clémençon ◽  
A. Endo ◽  
J. Fritscher ◽  
A. Müller ◽  
R. Rühl ◽  
...


2020 ◽  
Vol 30 (3) ◽  
pp. 28-33 ◽  
Author(s):  
S. A. Pryadko ◽  
A. Yu. Troshin ◽  
V. D. Kozlov ◽  
A. E. Ivanov

The article describes various options for speeding up computations on computer systems; these options are closely tied to the architecture of the systems themselves. The objective of the paper is to provide the information needed to select an appropriate approach for accelerating the solution of a computational problem. The main features of the following models are described: programming for shared-memory systems, programming for distributed-memory systems, and programming for graphics accelerators (GPUs). The basic concepts, principles, advantages, and disadvantages of each of the considered programming models are described. All of the programming standards described in the article can be used on both Linux and Windows operating systems. The required libraries are available and compatible with the C/C++ programming language. The article concludes with recommendations on choosing a particular technology depending on the type of task to be solved.
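The abstract names the shared-memory and distributed-memory models without showing code. As an illustrative sketch (not taken from the article), the following C program contrasts the two models using the standard OpenMP and MPI APIs; everything apart from the library calls is invented for the example.

```c
/* Illustrative sketch (not from the article): the same array sum under the
   shared-memory model (OpenMP, within a process) and the distributed-memory
   model (MPI, across processes).
   Build: mpicc -fopenmp sum_models.c -o sum_models && mpirun -np 4 ./sum_models */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Distributed memory: each process allocates and fills only its own chunk. */
    int chunk = N / nprocs;
    double *a = malloc(chunk * sizeof *a);
    for (int i = 0; i < chunk; i++)
        a[i] = 1.0;

    /* Shared memory within a process: OpenMP threads share 'a', the runtime
       splits the loop, and 'reduction' avoids a data race on 'local'. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < chunk; i++)
        local += a[i];

    /* Distributed memory across processes: partial sums are combined by
       explicit message passing; no process reads another process's array. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f (expected %d)\n", global, chunk * nprocs);

    free(a);
    MPI_Finalize();
    return 0;
}
```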


2006 ◽  
Vol 17 (02) ◽  
pp. 251-270 ◽  
Author(s):  
THOMAS RAUBER ◽  
GUDULA RÜNGER

Multiprocessor task (M-task) programming is a suitable parallel programming model for coding application problems with an inherent modular structure. An M-task can be executed on a group of processors of arbitrary size, concurrently with other M-tasks of the same application program. The data of a multiprocessor task program usually include composed data structures, such as vectors or arrays. On distributed memory machines or cluster platforms, those composed data structures are distributed within one or more processor groups. Thus, a concise parallel programming model for M-tasks requires a standardized distributed data format for composed data structures. Additionally, functions for data re-distribution between different data distributions and different processor group layouts are needed to glue program parts together. In this paper, we present a data re-distribution library which extends M-task programming with Tlib, a library providing operations to split processor groups and to map M-tasks to processor groups.
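Tlib's actual API is not shown in the abstract. As a minimal sketch of the underlying mechanism in plain MPI (not Tlib's interface), the program below splits the global processor set into two disjoint groups on which two M-tasks execute concurrently; the paper's library would additionally re-distribute composed data between the group layouts.

```c
/* Minimal sketch in plain MPI (not Tlib's API): two M-tasks running
   concurrently on disjoint processor groups obtained by splitting
   MPI_COMM_WORLD. */
#include <stdio.h>
#include <mpi.h>

/* Each M-task runs on its own processor group (a sub-communicator). */
static void m_task(const char *name, MPI_Comm group) {
    int grank, gsize;
    MPI_Comm_rank(group, &grank);
    MPI_Comm_size(group, &gsize);
    if (grank == 0)
        printf("M-task %s running on a group of %d processes\n", name, gsize);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split the global processor set: processes with the same 'color'
       end up in the same subgroup. */
    int color = rank % 2;
    MPI_Comm group;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &group);

    /* The two M-tasks execute concurrently, each on its own group.
       Data re-distribution between group layouts (the paper's library)
       would happen before and after such calls. */
    m_task(color == 0 ? "A" : "B", group);

    MPI_Comm_free(&group);
    MPI_Finalize();
    return 0;
}
```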


1992 ◽  
Vol 1 (2) ◽  
pp. 177-183 ◽  
Author(s):  
Ashish Deshpande ◽  
Martin Schultz

Linda is a coordination language invented by David Gelernter at Yale University which, when combined with a computation language (such as C), yields a high-level parallel programming language for MIMD machines. Linda is based on a virtual shared associative memory containing objects called tuples. Skeptics have long claimed that Linda programs could not be efficient on distributed memory architectures. In this paper, we address this claim by discussing C-Linda's performance in solving a particular scientific computing problem, the shallow water equations, and make comparisons with alternatives available on various shared and distributed memory parallel machines.
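Linda's coordination primitives (out, in, rd, eval) operate on the shared tuple space. The fragment below is a schematic master/worker pattern in C-Linda style, not code from the paper; it requires the Linda preprocessor and runtime rather than a plain C compiler, and NTASKS, NWORKERS, and compute() are placeholders invented for the sketch.

```c
/* Schematic C-Linda-style master/worker (not from the paper). C-Linda
   extends C with tuple-space operations; '?x' is a formal that is bound
   when a matching tuple is withdrawn. */
#define NTASKS   100
#define NWORKERS 4

int compute(int t) { return t * t; }    /* placeholder for real work */

int worker(void) {
    int t;
    for (;;) {
        in("task", ?t);                 /* blocks until a task tuple matches */
        out("result", compute(t));      /* deposit the result tuple */
    }
}

real_main() {                           /* C-Linda entry point */
    int i, r, total = 0;

    for (i = 0; i < NTASKS; i++)
        out("task", i);                 /* deposit work tuples */

    for (i = 0; i < NWORKERS; i++)
        eval("worker", worker());       /* spawn workers as active tuples */

    for (i = 0; i < NTASKS; i++) {
        in("result", ?r);               /* collect results in any order */
        total += r;
    }
}
```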


1995 ◽  
Vol 18 (3) ◽  
pp. 365-378
Author(s):  
Yeh‐Ching Chung ◽  
Wu‐Hsun Ho ◽  
Chia‐Cheng Liu

2003 ◽  
Vol 13 (03) ◽  
pp. 437-448 ◽  
Author(s):  
ANTONIO J. DORTA ◽  
JESÚS A. GONZÁLEZ ◽  
CASIANO RODRÍGUEZ ◽  
FRANCISCO DE SANDE

The skeletal approach to the development of parallel applications has proven to be one of the most successful and has been widely explored in recent years. The goal of this approach is to develop a methodology of parallel programming based on a restricted set of parallel constructs. This paper presents llc, a parallel skeletal language; the theoretical model that supports the language; and a prototype implementation of its compiler. The language is based on directives, uses a C-like syntax, and supports the most widely used skeletal constructs. llCoMP is a source-to-source compiler for the language built on top of MPI. We evaluate the performance of our prototype compiler on four different parallel architectures using three algorithms, and present results obtained on both shared and distributed memory architectures. Our model guarantees the portability of the language to any platform, and its simplicity greatly eases its implementation.
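The abstract does not show llc's concrete syntax. The fragment below is an illustrative approximation of the directive-based style it describes (an annotated C loop whose clause syntax is sketched here, not verbatim llc): compiled as plain C the pragmas are ignored and the loop runs sequentially, while a source-to-source translator such as llCoMP would turn it into MPI code that partitions the iterations and communicates each iteration's result.

```c
/* Illustrative sketch of a directive-based skeletal loop (clause syntax
   approximated, not verbatim llc). Unknown pragmas are ignored by a plain
   C compiler, preserving sequential semantics. */
#include <stdio.h>

#define N 1024

int main(void) {
    static double v[N];

    #pragma llc result(&v[i], 1)   /* approximated clause: data produced by iteration i */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        v[i] = (double)i * i;

    printf("v[%d] = %.0f\n", N - 1, v[N - 1]);
    return 0;
}
```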

