Simulation-Based Scheduling of Waterway Projects Using a Parallel Genetic Algorithm

Author(s):  
Ning Yang ◽  
Shiaaulir Wang ◽  
Paul Schonfeld

A Parallel Genetic Algorithm (PGA) is used for a simulation-based optimization of waterway project schedules. This PGA is designed to distribute a Genetic Algorithm application over multiple processors in order to speed up the solution search procedure for a very large combinatorial problem. The proposed PGA is based on a global parallel model, which is also called a master-slave model. The Message-Passing Interface (MPI) is used in developing the parallel computing program. A case study is presented, whose results show how the adaptation of a simulation-based optimization algorithm to parallel computing can greatly reduce computation time. Additional techniques which are found to further improve the PGA performance include: (1) choosing an appropriate task distribution method, (2) distributing simulation replications instead of different solutions, (3) avoiding the simulation of duplicate solutions, (4) avoiding running multiple simulations simultaneously in shared-memory processors, and (5) avoiding using multiple processors which belong to different clusters (physical sub-networks).
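A minimal sketch of the master-slave (global parallel) evaluation step described above, written in C with MPI. Here evaluate_schedule() is a hypothetical stand-in for the waterway simulation, and the GA operators (selection, crossover, mutation) are omitted; the sketch only shows how the master scatters candidate solutions to worker processes and gathers their fitness values back.

```c
/* Hedged sketch of one master-slave GA evaluation step with MPI.
 * evaluate_schedule() stands in for the waterway simulation. */
#include <mpi.h>
#include <stdio.h>

#define POP_SIZE   32
#define GENOME_LEN 16

/* Placeholder for the simulation-based fitness evaluation. */
static double evaluate_schedule(const int *genome, int len) {
    double cost = 0.0;
    for (int i = 0; i < len; ++i) cost += genome[i];
    return cost;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int population[POP_SIZE][GENOME_LEN];
    double fitness[POP_SIZE];

    if (rank == 0) {
        /* Master: initialize (or breed) the population. */
        for (int i = 0; i < POP_SIZE; ++i)
            for (int j = 0; j < GENOME_LEN; ++j)
                population[i][j] = (i + j) % 5;
    }

    /* Master scatters equal shares of the population to all ranks
     * (assumes size divides POP_SIZE evenly). */
    int share = POP_SIZE / size;
    int local[POP_SIZE][GENOME_LEN];
    double local_fit[POP_SIZE];
    MPI_Scatter(population, share * GENOME_LEN, MPI_INT,
                local, share * GENOME_LEN, MPI_INT, 0, MPI_COMM_WORLD);

    /* All ranks run the expensive simulations in parallel. */
    for (int i = 0; i < share; ++i)
        local_fit[i] = evaluate_schedule(local[i], GENOME_LEN);

    /* Fitness values are gathered back; the master would then apply
     * selection, crossover and mutation for the next generation. */
    MPI_Gather(local_fit, share, MPI_DOUBLE,
               fitness, share, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("fitness of first individual: %f\n", fitness[0]);

    MPI_Finalize();
    return 0;
}
```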


Author(s):  
Peng Wen ◽  
Wei Qiu

A constrained interpolation profile (CIP) method has been developed to solve 2-D water entry problems. This paper presents the further development of the numerical method using staggered grids and a parallel computing algorithm. In this work, the multi-phase slamming problems, governed by the Navier-Stokes (N-S) equations, are solved by a CIP-based finite difference method. The interfaces between different phases (solid, water and air) are captured using density functions. A parallel computing algorithm based on the Message Passing Interface (MPI) method and a domain decomposition scheme was implemented to speed up the computations. The effects of the decomposition scheme on the solution and on the speed-up were studied. Validation studies were carried out for the water entry of various 2-D wedges and a ship section. The predicted slamming force, pressure distribution and free-surface elevation are compared with experimental results and other numerical results.
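As a rough illustration of the domain-decomposition idea above, the hedged C/MPI sketch below splits a 1-D grid into strips and exchanges one layer of ghost cells between neighbouring ranks each step; the grid size and the smoothing update are placeholders, not the CIP finite-difference scheme or the paper's 2-D multi-phase solver.

```c
/* Sketch of 1-D domain decomposition with ghost-cell exchange, as used
 * when splitting a finite-difference grid among MPI ranks. */
#include <mpi.h>

#define NLOC 100   /* interior cells per rank (illustrative) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Local strip plus one ghost cell on each side. */
    double u[NLOC + 2], unew[NLOC + 2];
    for (int i = 0; i < NLOC + 2; ++i) u[i] = (double)rank;

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < 10; ++step) {
        /* Exchange ghost cells with neighbouring subdomains. */
        MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  0,
                     &u[NLOC + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[NLOC],     1, MPI_DOUBLE, right, 1,
                     &u[0],        1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Placeholder interior update (simple smoothing). */
        for (int i = 1; i <= NLOC; ++i)
            unew[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);
        for (int i = 1; i <= NLOC; ++i) u[i] = unew[i];
    }

    MPI_Finalize();
    return 0;
}
```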


2014 ◽  
Vol 20 (4) ◽  
pp. 477-484 ◽  
Author(s):  
Sarfraz Munir ◽  
Raja Rizwan Hussain ◽  
A. B. M. Saiful Islam

Parallel computing greatly reduces computation time through the simultaneous use of multiple computing resources. In this research, parallel computing techniques have been developed to parallelize a program for obtaining the response of a single degree of freedom (SDOF) structure under earthquake loading. The study uses a Distributed Memory Processor (DMP) hardware architecture and Message Passing Interface (MPI) compiler directives to parallelize the program. The program is made parallel by domain decomposition: concurrency is created by dividing the program into two halves that run on different computers, one calculating the forced response and the other the free response. The parallel framework successfully creates concurrency and obtains the structural response in significantly less time than the sequential program.
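A minimal sketch of such a two-part split, under the assumption that the forced (particular) and free (homogeneous) components of a linear SDOF response are integrated on separate MPI ranks and superposed at the end; the mass, damping, stiffness and ground-motion values are illustrative, not the study's.

```c
/* Hedged sketch: rank 0 integrates the forced response to the ground
 * motion, rank 1 the free response from the initial conditions;
 * superposition of both gives the total linear SDOF response. */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

#define NSTEP 1000
#define DT    0.01

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double m = 1.0, c = 0.1, k = 39.48;   /* mass, damping, stiffness */
    double u = 0.0, v = 0.0;
    double part[NSTEP] = {0}, total[NSTEP];

    if (rank == 1) { u = 0.02; v = 0.0; } /* free vibration from ICs  */

    for (int n = 0; n < NSTEP; ++n) {
        /* Ground acceleration only drives the forced-response rank. */
        double ag = (rank == 0) ? 0.3 * sin(2.0 * 3.14159265 * n * DT) : 0.0;
        double a  = (-c * v - k * u - m * ag) / m;
        v += a * DT;                      /* semi-implicit Euler, for brevity */
        u += v * DT;
        part[n] = u;
    }

    /* Superpose the two components on rank 0. */
    MPI_Reduce(part, total, NSTEP, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("displacement at final step: %e\n", total[NSTEP - 1]);

    MPI_Finalize();
    return 0;
}
```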


2014 ◽  
Vol 16 (3) ◽  
pp. 599-611 ◽  
Author(s):  
George M. Petrov ◽  
Jack Davis

The implicit 2D3V particle-in-cell (PIC) code developed to study the interaction of ultrashort pulse lasers with matter [G. M. Petrov and J. Davis, Computer Phys. Comm. 179, 868 (2008); Phys. Plasmas 18, 073102 (2011)] has been parallelized using MPI (Message Passing Interface). The parallelization strategy is optimized for a small number of computer cores, up to about 64. Details on the algorithm implementation are given with emphasis on code optimization by overlapping computations with communications. Performance evaluation for 1D domain decomposition has been made on a small Linux cluster with 64 computer cores for two typical regimes of PIC operation: “particle dominated”, for which the bulk of the computation time is spent on pushing particles, and “field dominated”, for which computing the fields is prevalent. For a small number of computer cores, less than 32, the MPI implementation offers a significant numerical speed-up. In the “particle dominated” regime it is close to the maximum theoretical one, while in the “field dominated” regime it is about 75-80% of the maximum speed-up. For a number of cores exceeding 32, performance degradation takes place as a result of the adopted 1D domain decomposition. The code parallelization will allow future implementation of atomic physics and extension to three dimensions.
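The overlap of computations with communications mentioned above can be sketched as follows for a 1-D decomposition: ghost-cell exchanges are posted with nonblocking MPI calls, interior cells are updated while the messages are in flight, and boundary cells are updated after MPI_Waitall. The field array and its update are placeholders, not the PIC field solver or particle push.

```c
/* Sketch of overlapping computation with communication using
 * nonblocking MPI for a 1-D domain decomposition. */
#include <mpi.h>

#define NLOC 256

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double f[NLOC + 2] = {0};                 /* field with ghost cells */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    MPI_Request req[4];

    for (int step = 0; step < 100; ++step) {
        /* 1. Post nonblocking ghost-cell exchanges. */
        MPI_Irecv(&f[0],        1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(&f[NLOC + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
        MPI_Isend(&f[1],        1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
        MPI_Isend(&f[NLOC],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

        /* 2. Update interior cells that need no remote data while the
         *    messages are in flight (placeholder update). */
        for (int i = 2; i <= NLOC - 1; ++i)
            f[i] += 0.01 * (f[i - 1] - 2.0 * f[i] + f[i + 1]);

        /* 3. Finish communication, then update the boundary cells. */
        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
        f[1]    += 0.01 * (f[0] - 2.0 * f[1] + f[2]);
        f[NLOC] += 0.01 * (f[NLOC - 1] - 2.0 * f[NLOC] + f[NLOC + 1]);
    }

    MPI_Finalize();
    return 0;
}
```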


2019 ◽  
Vol 28 ◽  
pp. 01031
Author(s):  
Rafal Szczepanski ◽  
Tomasz Tarczewski ◽  
Lech M. Grzesiak

Nowadays, simulation is an inseparable part of a researcher's work, and its computation time may significantly exceed the experiment time. Multi-core processors can be exploited through parallel computing to reduce the overall simulation time. In this paper, parallel computing is used to speed up the auto-tuning process of a state feedback speed controller for a PMSM drive.
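One common way to exploit multi-core processors for such auto-tuning is to evaluate candidate controller gains concurrently, as in the hedged OpenMP sketch below; simulate_drive() is a hypothetical stand-in for the closed-loop PMSM drive simulation, and the actual tuning algorithm used in the paper may differ.

```c
/* Hedged sketch: evaluating candidate controller gains concurrently
 * on a multi-core CPU with OpenMP. */
#include <stdio.h>
#include <math.h>
#include <omp.h>

#define N_CANDIDATES 64

/* Placeholder cost: the error measure returned by one closed-loop
 * simulation run for a given gain pair. */
static double simulate_drive(double k1, double k2) {
    return fabs(k1 - 3.0) + fabs(k2 - 0.5);
}

int main(void) {
    double cost[N_CANDIDATES];
    printf("using up to %d threads\n", omp_get_max_threads());

    /* Each candidate simulation is independent, so the iterations can
     * be distributed across cores. */
    #pragma omp parallel for
    for (int i = 0; i < N_CANDIDATES; ++i) {
        double k1 = 0.1 * i;
        double k2 = 0.05 * i;
        cost[i] = simulate_drive(k1, k2);
    }

    /* Pick the best candidate sequentially. */
    int best = 0;
    for (int i = 1; i < N_CANDIDATES; ++i)
        if (cost[i] < cost[best]) best = i;
    printf("best candidate: %d (cost %.3f)\n", best, cost[best]);
    return 0;
}
```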


2012 ◽  
Vol 263-266 ◽  
pp. 1315-1318
Author(s):  
Kun Ming Yu ◽  
Ming Gong Lee

This paper discusses how Python can be used to design a cluster parallel computing environment for the numerical solution of ordinary differential equations by a block predictor-corrector method. In the parallel process, the MPI-2 (Message Passing Interface) standard, as implemented by MPICH2, is used to communicate between CPUs. Data sending and receiving are controlled by mpi4py, which is based on Python. A block predictor-corrector numerical method is implemented with one and two CPUs, respectively, to test performance on an initial value problem. Only a minor speed-up is obtained because of the small problem size and the few CPUs used in the scheme, though establishing this scheme in Python is valuable because very little research has been carried out on this kind of parallel structure under Python.
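The two-CPU communication pattern described above can be sketched roughly as follows. Note that the paper itself uses Python with mpi4py, whereas this hedged example is written in C with MPI, and its Euler predictor and trapezoid-type corrector coefficients are illustrative rather than the paper's block method.

```c
/* Hedged sketch of a 2-point block predictor-corrector step for
 * y' = f(t, y) distributed over two MPI ranks: rank r computes the
 * block point n + r + 1 and the ranks exchange predicted slopes. */
#include <mpi.h>
#include <stdio.h>

/* Right-hand side of an illustrative test initial value problem. */
static double f(double t, double y) { return -2.0 * y + t; }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) MPI_Abort(MPI_COMM_WORLD, 1);  /* two CPUs, as tested */

    double h = 0.01, t = 0.0, y = 1.0;   /* shared base point y_n */

    for (int step = 0; step < 100; ++step) {
        int    i  = rank + 1;            /* this rank's block point n + i */
        double ti = t + i * h;

        /* Predictor: extrapolate from the shared base point. */
        double yp = y + i * h * f(t, y);
        double fp = f(ti, yp);

        /* Exchange the predicted slopes so both ranks can correct. */
        double fall[2];
        MPI_Allgather(&fp, 1, MPI_DOUBLE, fall, 1, MPI_DOUBLE, MPI_COMM_WORLD);

        /* Corrector: trapezoid-type updates using both block slopes. */
        double yc;
        if (rank == 0)
            yc = y + 0.5 * h * (f(t, y) + fall[0]);
        else
            yc = y + h * (0.5 * f(t, y) + fall[0] + 0.5 * fall[1]);

        /* The second block point becomes the next base point. */
        double ynext = yc;
        MPI_Bcast(&ynext, 1, MPI_DOUBLE, 1, MPI_COMM_WORLD);
        y = ynext;
        t += 2.0 * h;
    }

    if (rank == 0) printf("y(%.2f) approx %f\n", t, y);
    MPI_Finalize();
    return 0;
}
```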


Author(s):  
Yu-Cheng Chou ◽  
Harry H. Cheng

Message Passing Interface (MPI) is a standardized library specification designed for message-passing parallel programming on large-scale distributed systems. A number of MPI libraries have been implemented to allow users to develop portable programs using the scientific programming languages Fortran, C and C++. Ch is an embeddable C/C++ interpreter that provides an interpretive environment for C/C++ based scripts and programs. Combining Ch with any MPI C/C++ library provides the functionality for rapid development of MPI C/C++ programs without compilation. In this article, the method of interfacing Ch scripts with MPI C implementations is introduced by using the MPICH2 C library as an example. The MPICH2-based Ch MPI package provides users with the ability to interpretively run MPI C programs based on the MPICH2 C library. Running MPI programs through the MPICH2-based Ch MPI package across heterogeneous platforms consisting of Linux and Windows machines is illustrated. Comparisons of the bandwidth, latency, and parallel computation speedup between C MPI, Ch MPI, and MPI for Python in an Ethernet-based environment comprising identical Linux machines are presented. A Web-based example is given to demonstrate the use of Ch and MPICH2 in C-based CGI scripting to facilitate the development of Web-based applications for parallel computing.
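Since the comparison above centres on point-to-point bandwidth and latency, a minimal MPI C ping-pong of the kind typically used for such measurements is sketched below; the message size and repetition count are arbitrary choices, and a program like this could be compiled against an MPI C library or, as the article describes, run interpretively through Ch.

```c
/* Minimal MPI C ping-pong for estimating point-to-point latency and
 * bandwidth between two ranks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* intended for 2 ranks */

    const int nbytes = 1 << 20;             /* 1 MiB messages        */
    const int reps   = 100;
    char *buf = malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; ++i) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0) {
        double one_way = dt / (2.0 * reps);            /* seconds */
        double bw = nbytes / one_way / 1.0e6;          /* MB/s    */
        printf("one-way time %.3e s, bandwidth %.1f MB/s\n", one_way, bw);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```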

