Experience with MIMD message-passing systems: Towards general purpose parallel computing

Author(s):  
A. J. G. Hey
2005 ◽  
Vol 18 (2) ◽  
pp. 219-224


Author(s):  
Emina Milovanovic ◽  
Natalija Stojanovic

Because many universities lack the funds to purchase expensive parallel computers, cost-effective alternatives are needed to teach students about parallel processing. Free software is available to support the three major paradigms of parallel computing. Parallaxis is a sophisticated SIMD simulator which runs on a variety of platforms. The jBACI shared-memory simulator supports the MIMD model of computing with a common shared memory. PVM and MPI allow students to treat a network of workstations as a message-passing MIMD multicomputer with distributed memory. Each of these software tools can be used in a variety of courses to give students experience with parallel algorithms.
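The distributed-memory MIMD model the abstract describes can be demonstrated without any MPI installation at all. The sketch below is a plain-Python stand-in: each "rank" is a thread with a private data slice, and queues play the role of network messages (in real PVM/MPI code these would be send/receive calls). All names here are illustrative, not from the abstract.

```python
import threading
import queue

def worker(rank, size, inbox, inboxes, data, results):
    # Distributed-memory discipline: each rank touches only its own slice
    # and communicates exclusively by explicit messages.
    partial = sum(data[rank::size])
    if rank == 0:
        total = partial
        for _ in range(size - 1):
            total += inbox.get()        # "receive" partial sums from other ranks
        results['total'] = total
    else:
        inboxes[0].put(partial)         # "send" this rank's partial sum to rank 0

data = list(range(100))
size = 4
inboxes = [queue.Queue() for _ in range(size)]
results = {}
threads = [threading.Thread(target=worker,
                            args=(r, size, inboxes[r], inboxes, data, results))
           for r in range(size)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results['total'])  # 4950, the sum of 0..99
```

The same reduction pattern, written with MPI send/receive calls on a workstation network, is a common first exercise in the courses the abstract has in mind.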


Author(s):  
Ning Yang ◽  
Shiaaulir Wang ◽  
Paul Schonfeld

A Parallel Genetic Algorithm (PGA) is used for a simulation-based optimization of waterway project schedules. This PGA is designed to distribute a Genetic Algorithm application over multiple processors in order to speed up the solution search procedure for a very large combinatorial problem. The proposed PGA is based on a global parallel model, which is also called a master-slave model. The Message-Passing Interface (MPI) is used in developing the parallel computing program. A case study is presented, whose results show how the adaptation of a simulation-based optimization algorithm to parallel computing can greatly reduce computation time. Additional techniques which are found to further improve the PGA performance include: (1) choosing an appropriate task distribution method, (2) distributing simulation replications instead of different solutions, (3) avoiding the simulation of duplicate solutions, (4) avoiding running multiple simulations simultaneously in shared-memory processors, and (5) avoiding using multiple processors which belong to different clusters (physical sub-networks).
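Two of the listed techniques, farming replications out from a master and caching duplicate solutions, can be sketched in a few lines. The following is an illustrative stand-in, not the paper's code: a thread pool plays the role of the slave processors, and `simulate` is a hypothetical placeholder for one stochastic simulation replication.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(solution, replication):
    # Hypothetical stand-in for one simulation replication of a solution.
    return float(sum(solution))

def master_evaluate(population, replications, pool, cache):
    # Master-slave evaluation: the master farms out (solution, replication)
    # pairs to the pool and averages the replications.  Duplicate solutions
    # are served from the cache instead of being re-simulated (technique 3).
    fitness = []
    for sol in population:
        key = tuple(sol)
        if key in cache:
            fitness.append(cache[key])
            continue
        futures = [pool.submit(simulate, sol, r) for r in range(replications)]
        avg = sum(f.result() for f in futures) / replications
        cache[key] = avg
        fitness.append(avg)
    return fitness

population = [[1, 2, 3], [4, 5, 6], [1, 2, 3]]   # third solution is a duplicate
cache = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = master_evaluate(population, replications=5, pool=pool, cache=cache)
print(scores)  # [6.0, 15.0, 6.0]; only two distinct solutions were simulated
```

Distributing replications of one solution (technique 2) keeps all slaves busy even when the population is small, which is why the abstract singles it out.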


Author(s):  
J. Apostolakis ◽  
L. M. Bertolotto ◽  
C. E. Bruschini ◽  
P. Calafiura ◽  
F. Gagliardi ◽  
...  

Author(s):  
Yu-Cheng Chou ◽  
Harry H. Cheng

Message Passing Interface (MPI) is a standardized library specification designed for message-passing parallel programming on large-scale distributed systems. A number of MPI libraries have been implemented to allow users to develop portable programs using the scientific programming languages Fortran, C, and C++. Ch is an embeddable C/C++ interpreter that provides an interpretive environment for C/C++ based scripts and programs. Combining Ch with any MPI C/C++ library provides the functionality for rapid development of MPI C/C++ programs without compilation. In this article, the method of interfacing Ch scripts with MPI C implementations is introduced using the MPICH2 C library as an example. The MPICH2-based Ch MPI package provides users with the ability to interpretively run MPI C programs based on the MPICH2 C library. Running MPI programs through the MPICH2-based Ch MPI package across heterogeneous platforms consisting of Linux and Windows machines is illustrated. Comparisons of bandwidth, latency, and parallel computation speedup between C MPI, Ch MPI, and MPI for Python in an Ethernet-based environment comprising identical Linux machines are presented. A Web-based example is given to demonstrate the use of Ch and MPICH2 in C-based CGI scripting to facilitate the development of Web-based applications for parallel computing.
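The bandwidth and latency comparisons mentioned here are typically made with a ping-pong benchmark: two ranks bounce a message back and forth and the round-trip time is halved. The sketch below shows only the measurement logic, using threads and queues as a stand-in for the MPI send/receive pair; the timings it produces reflect queue overhead, not network cost.

```python
import threading
import queue
import time

def pingpong(iters, payload):
    # Classic ping-pong pattern: rank A sends, rank B echoes, and the
    # one-way time is the round-trip time divided by two.  Small payloads
    # estimate latency; large payloads estimate bandwidth.
    a_to_b, b_to_a = queue.Queue(), queue.Queue()

    def echo():
        for _ in range(iters):
            b_to_a.put(a_to_b.get())    # bounce each message straight back

    t = threading.Thread(target=echo)
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        a_to_b.put(payload)
        b_to_a.get()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / (2 * iters)        # mean one-way time per message

one_way = pingpong(iters=1000, payload=b"x" * 1024)
print(f"estimated one-way time: {one_way * 1e6:.1f} us")
```

Replacing the queues with blocking send/receive calls over Ethernet gives the latency and bandwidth figures the article compares across C MPI, Ch MPI, and MPI for Python.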


2012 ◽  
Vol 433-440 ◽  
pp. 2892-2898
Author(s):  
Guang Lei Fei ◽  
Jian Guo Ning ◽  
Tian Bao Ma

Parallel computing has been applied in many fields, and a PC cluster based on the MPI (Message Passing Interface) library under the Linux operating system is a cost-effective platform for parallel computing. In this paper, the key algorithms of a parallel program for explosion and impact problems are presented. Techniques for resolving data dependence and realizing communication between subdomains are proposed. Tests of the program show that the portability of the MMIC-3D parallel program is satisfactory and that, compared with a single computer, a PC cluster can greatly improve calculation speed and enlarge the problem scale.
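The subdomain communication the abstract refers to typically means exchanging ghost cells across subdomain interfaces so that each subdomain can update its boundary points. The sketch below illustrates the idea on a 1-D explicit diffusion step split into two subdomains; it is a generic illustration of the pattern, not MMIC-3D's scheme, and the ghost-cell exchange stands in for an MPI message between ranks.

```python
def step_serial(u, c=0.25):
    # One explicit diffusion step on the whole grid (fixed boundaries).
    return ([u[0]]
            + [u[i] + c * (u[i-1] - 2*u[i] + u[i+1]) for i in range(1, len(u)-1)]
            + [u[-1]])

def step_decomposed(u, c=0.25):
    # Split the grid into two subdomains; each would live on its own rank.
    mid = len(u) // 2
    left, right = u[:mid], u[mid:]
    # "Communication": exchange one ghost cell across the interface -- this
    # resolves the data dependence between neighbouring subdomains.
    left_ext = left + [right[0]]
    right_ext = [left[-1]] + right
    new_left = ([left_ext[0]]
                + [left_ext[i] + c * (left_ext[i-1] - 2*left_ext[i] + left_ext[i+1])
                   for i in range(1, len(left_ext) - 1)])
    new_right = ([right_ext[i] + c * (right_ext[i-1] - 2*right_ext[i] + right_ext[i+1])
                  for i in range(1, len(right_ext) - 1)]
                 + [right_ext[-1]])
    return new_left + new_right

u = [float(i % 7) for i in range(16)]
print(step_decomposed(u) == step_serial(u))  # True: decomposition is exact
```

Because each subdomain only needs one layer of ghost cells per step, communication volume grows with the interface area while computation grows with the subdomain volume, which is what makes cluster speedup possible.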


Author(s):  
C. D. Moen ◽  
G. H. Evans ◽  
S. P. Domino ◽  
S. P. Burns

We present a turbulent combustion code for modeling heat transfer in fires that arise in accident scenarios. The code is a component of a multi-mechanics framework and is based on a domain-decomposition, message-passing approach to parallel computing. The turbulent combustion code is based on a vertex-centered, finite-volume scheme for 3D unstructured meshes. The multi-mechanics nature of the framework allows us to couple to a conduction heat transfer code for conjugate heat transfer problems or a participating media radiation code for radiation transport in soot-laden flows. We describe our numerical methods, our approach to parallel computing, and the multi-mechanics framework. We demonstrate parallel performance using some example verification problems.


1997 ◽  
Vol 40 (1) ◽  
pp. 19-34 ◽  
Author(s):  
Jehoshua Bruck ◽  
Danny Dolev ◽  
Ching-Tien Ho ◽  
Marcel-Cătălin Roşu ◽  
Ray Strong

2017 ◽  
Vol 2017 ◽  
pp. 1-12
Author(s):  
Chunlei Chen ◽  
Li He ◽  
Huixiang Zhang ◽  
Hao Zheng ◽  
Lei Wang

Incremental clustering algorithms play a vital role in applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when powered by a GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem establishes the relation between clustering accuracy and evolving granularity, and analyzes the upper and lower bounds of different-to-same mis-affiliation; fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity; smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity, while parallelism is positively related to the granularity. These contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experimental results verified the theoretical conclusions.
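The accuracy-versus-parallelism trade-off can be made concrete with a toy batched clustering loop, where the batch size plays the role of the evolving granularity. This is an illustrative sketch, not the paper's demo algorithm: within a batch all assignments read the same frozen centroids (so they could run in parallel, reducing work-depth), but the centroids evolve only between batches, so a larger batch risks more mis-affiliations against stale centroids.

```python
def incremental_cluster(points, centroids, batch):
    # `batch` is the evolving granularity: assignments inside a batch are
    # independent (parallel-friendly reads of frozen centroids); centroid
    # updates happen sequentially between batches.
    labels = []
    for start in range(0, len(points), batch):
        chunk = points[start:start + batch]
        chunk_labels = [min(range(len(centroids)),
                            key=lambda k: abs(p - centroids[k]))
                        for p in chunk]
        labels.extend(chunk_labels)
        # Sequential step: centroids evolve only after the whole batch.
        for k in range(len(centroids)):
            members = [p for p, l in zip(chunk, chunk_labels) if l == k]
            if members:
                centroids[k] = sum(members) / len(members)
    return labels, centroids

points = [0.0, 1.0, 10.0, 11.0]
labels, centroids = incremental_cluster(points, centroids=[0.0, 10.0], batch=2)
print(labels, centroids)  # [0, 0, 1, 1] [0.5, 10.5]
```

Setting `batch=len(points)` maximizes parallelism but lets every point see only the initial centroids; `batch=1` gives the freshest centroids at the cost of fully sequential execution, mirroring the two theorems' opposing relations.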


2004 ◽  
Vol 10 (9) ◽  
pp. 1335-1357
Author(s):  
Takashi Nagata

This paper presents a general and efficient formulation applicable to a vast variety of rigid and flexible multibody systems. It is based on a variable-gain error correction with scaling and adaptive control of the convergence parameter. The methodology has the following distinctive features. (i) All types of holonomic and non-holonomic equality constraints, as well as a class of inequalities, can be treated in a plain and unified manner. (ii) Stability of the constraints is assured. (iii) The formulation has an order-N computational cost in terms of both the constrained and unconstrained degrees of freedom, regardless of the system topology. (iv) Unlike the traditional recursive order-N algorithms, it is quite amenable to parallel computation. (v) Because virtually no matrix operations are involved, it can be implemented in very simple general-purpose simulation programs. Given these advantages, the algorithm has been realized as a C++ code supporting distributed processing through the Message-Passing Interface (MPI). Versatility, dynamical validity and efficiency of the approach are demonstrated through numerical studies of several particular systems including a crawler and a flexible space structure.
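The flavor of a variable-gain constraint error correction can be shown on a single algebraic constraint. The sketch below is a crude stand-in for the paper's scheme, not its actual algorithm: it drives the constraint error of a point pinned to a circle toward zero with scalar gradient steps (no matrix operations), and halves the gain whenever a step fails to shrink the error, which is the adaptive control of the convergence parameter in miniature.

```python
def correct_constraint(x, y, L, gain=0.5, tol=1e-10, max_iter=100):
    # Constraint: phi(x, y) = x^2 + y^2 - L^2 = 0 (point on a circle).
    for _ in range(max_iter):
        phi = x*x + y*y - L*L
        if abs(phi) < tol:
            break
        gx, gy = 2*x, 2*y                  # constraint gradient
        step = gain * phi / (gx*gx + gy*gy)
        nx, ny = x - step * gx, y - step * gy
        if abs(nx*nx + ny*ny - L*L) < abs(phi):
            x, y = nx, ny                  # accepted: error decreased
        else:
            gain *= 0.5                    # adapt the convergence parameter
    return x, y

x, y = correct_constraint(1.5, 0.2, L=1.0)
print(abs(x*x + y*y - 1.0))  # residual close to zero
```

Each correction costs a fixed number of scalar operations per constraint, which hints at how avoiding matrix factorizations yields the order-N cost and the parallel-friendliness claimed in features (iii)-(v).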

