Parallel Processing on Distributed Memory Multiprocessors

Author(s):  
Max Lemke ◽  
Anton Schüller ◽  
Karl Solchenbach ◽  
Ulrich Trottenberg
2005 ◽  
Vol 18 (2) ◽  
pp. 219-224
Author(s):  
Emina Milovanovic ◽  
Natalija Stojanovic

Because many universities lack the funds to purchase expensive parallel computers, cost-effective alternatives are needed to teach students about parallel processing. Free software is available to support the three major paradigms of parallel computing. Parallaxis is a sophisticated SIMD simulator that runs on a variety of platforms. The jBACI shared-memory simulator supports the MIMD model of computing with a common shared memory. PVM and MPI allow students to treat a network of workstations as a message-passing MIMD multicomputer with distributed memory. Each of these software tools can be used in a variety of courses to give students experience with parallel algorithms.
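The message-passing MIMD paradigm the abstract attributes to PVM and MPI can be illustrated without any cluster software at all. The following is a minimal sketch using Python's standard-library `multiprocessing` module, not the PVM or MPI APIs themselves: each worker process owns private data and communicates only by sending messages, which is the same programming model students would use on a workstation network.

```python
# Sketch of the message-passing MIMD model using Python's standard
# library (an illustration of the paradigm only; PVM and MPI provide
# the same send/receive style across real workstations).
from multiprocessing import Process, Queue

def worker(rank, chunk, result_queue):
    # Each process computes on its private (distributed) data and
    # communicates the result back as a message.
    result_queue.put((rank, sum(chunk)))

def parallel_sum(data, nprocs=4):
    q = Queue()
    chunks = [data[i::nprocs] for i in range(nprocs)]
    procs = [Process(target=worker, args=(r, chunks[r], q))
             for r in range(nprocs)]
    for p in procs:
        p.start()
    # Receive exactly one message per worker, in arrival order.
    total = sum(q.get()[1] for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # 4950
```

Because no memory is shared, the only coordination cost visible to the programmer is the explicit message traffic, which is exactly what makes the model a good teaching proxy for a distributed-memory multicomputer.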


2007 ◽  
Vol 340-341 ◽  
pp. 371-376 ◽  
Author(s):  
Kenichiro Mori ◽  
Y. Kanno

The 3-D rigid-plastic finite element method using a diagonal matrix was applied to parallel processing on a distributed-memory PC cluster. A cluster composed of inexpensive PCs has become a common low-cost platform for parallel processing. Since the computers in a distributed-memory PC cluster have individual memory units, data must be transferred among the computers during the computation, and thus the time for data transfer must be taken into consideration. Unlike on a shared-memory workstation, the renewal of data in each computation step is limited by the data transfer time, which delays data renewal. A data transfer scheme was investigated to optimize the total computation time in the parallel processing. The effect of the delay of data renewal on the convergence of the solution was examined in a simulation of the upsetting of a rectangular block with an inclined tool, using a cluster composed of 4 PCs and 100 Mbit/s Ethernet.
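The trade-off the abstract describes, batching data transfers at the cost of iterating on stale neighbour values, can be demonstrated on a toy problem. The sketch below is not the paper's FEM scheme: it runs a Jacobi iteration for the 1-D Laplace equation on two "subdomains", each of which refreshes its copy of the neighbour's interface value only every `exchange_every` sweeps, mimicking batched communication on a distributed-memory cluster. Classical results on asynchronous iteration (bounded delays, Jacobi matrix with spectral radius below one) suggest it still converges, just more slowly.

```python
# Toy illustration (not the paper's FEM scheme): Jacobi iteration for the
# 1-D Laplace equation split across two subdomains.  Each subdomain keeps
# a stale copy of its neighbour's interface value and refreshes it only
# every `exchange_every` sweeps, mimicking delayed data renewal caused by
# batched transfers on a distributed-memory cluster.

def jacobi_two_domains(n=16, sweeps=6000, exchange_every=1):
    # Global grid u[0..n] with u[0] = 0, u[n] = 1; exact solution u[i] = i/n.
    u = [0.0] * (n + 1)
    u[n] = 1.0
    mid = n // 2
    halo_left = u[mid + 1]   # left subdomain's stale copy of u[mid+1]
    halo_right = u[mid]      # right subdomain's stale copy of u[mid]
    for it in range(sweeps):
        if it % exchange_every == 0:           # the "data transfer" step
            halo_left, halo_right = u[mid + 1], u[mid]
        new = u[:]
        for i in range(1, mid + 1):            # left subdomain sweep
            right = halo_left if i == mid else u[i + 1]
            new[i] = 0.5 * (u[i - 1] + right)
        for i in range(mid + 1, n):            # right subdomain sweep
            left = halo_right if i == mid + 1 else u[i - 1]
            new[i] = 0.5 * (left + u[i + 1])
        u = new
    return u

# Delayed renewal (exchange every 4 sweeps) still converges to the
# linear solution, only more slowly than exchanging every sweep.
u_delayed = jacobi_two_domains(exchange_every=4)
```

Raising `exchange_every` cuts the number of transfers, which is what saves wall-clock time on slow interconnects, but each increase slows convergence per sweep; the paper's data transfer scheme is aimed at balancing exactly these two costs.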


2006 ◽  
Vol 17 (02) ◽  
pp. 287-301 ◽  
Author(s):  
MOURAD HAKEM ◽  
FRANCK BUTELLE

In this paper we present an efficient algorithm, called Dynamic Critical Path Scheduling (DCPS), for compile-time scheduling and clustering of parallel programs onto parallel processing systems with distributed memory. DCPS is superior to several other algorithms from the literature in terms of computational complexity, processor consumption, and solution quality. DCPS has a time complexity of O(e + v log v), as opposed to O((e + v) log v) for the DSC algorithm, the best previously known algorithm. Experimental results demonstrate the superiority of DCPS over the DSC algorithm.
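The quantity that list schedulers in this family prioritize can be sketched concretely. The code below is not the DCPS algorithm itself; it computes the bottom level b(v) of each task in a toy DAG, i.e. the longest path from v to an exit task counting both computation weights and edge (communication) costs. Tasks with the largest bottom level lie on the critical path, which is what DCPS tracks dynamically as scheduling decisions change edge costs. The task graph here is a made-up example.

```python
# Sketch of the critical-path quantity behind schedulers such as DCPS
# and DSC: the bottom level b(v) of a task v is the longest path from v
# to an exit node, summing task weights and edge (communication) costs.
# Hypothetical toy graph; not the DCPS algorithm itself.
from functools import lru_cache

def bottom_levels(weights, edges):
    # weights: {task: computation cost}; edges: {(u, v): communication cost}
    succ = {v: [] for v in weights}
    for (u, v), c in edges.items():
        succ[u].append((v, c))

    @lru_cache(maxsize=None)
    def b(v):
        # Longest continuation after v (0 for exit tasks), plus v's own weight.
        tail = max((c + b(w) for w, c in succ[v]), default=0)
        return weights[v] + tail

    return {v: b(v) for v in weights}

weights = {"a": 2, "b": 3, "c": 1, "d": 2}
edges = {("a", "b"): 1, ("a", "c"): 4, ("b", "d"): 1, ("c", "d"): 1}
levels = bottom_levels(weights, edges)
# The entry task "a" has the largest bottom level; the critical path
# through the graph has total length levels["a"].
```

With memoization each task and edge is visited once, so this pass is O(e + v); the extra log v factor in the schedulers' complexities comes from the priority-queue bookkeeping they layer on top of such path computations.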


2006 ◽  
Vol 2006.19 (0) ◽  
pp. 559-560
Author(s):  
Tsunakiyo Iribe ◽  
Toshimitsu Fujisawa ◽  
Seiichi Koshizuka ◽  
Genki Yagawa
