PiCOPIC: 2.5-D PARTICLE-IN-CELL CODE, OPTIMIZED FOR SIMULATION OF BEAM-PLASMA INTERACTIONS

2020 ◽  
pp. 59-63 ◽  
Author(s):  
O.K. Vynnyk ◽  
I.O. Anisimov

An original code optimized for the simulation of interactions between plasma and beams or bunches of charged particles, based on the particle-in-cell method, is described. The code is electromagnetic and fully relativistic, with 2.5D axially symmetric geometry. Binary Coulomb collisions between particles are taken into account. The code is fully parallelized and designed for computer systems with shared memory. The ability to extend the supported platforms to systems with distributed memory (such as computer clusters or grids) is built into the code architecture.
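The particle-in-cell loop named in the abstract follows a fixed pattern: deposit particle charge onto a grid, solve for the fields, then push the particles with fields interpolated back to their positions. Below is a minimal 1D electrostatic sketch of that loop; it is a generic illustration only, not the PiCOPIC code (which is electromagnetic, fully relativistic, and 2.5D), and all names in it are ours.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Particle { double x, v; };

// One PIC step on a periodic 1D grid: deposit, field solve, push.
void pic_step(std::vector<Particle>& parts, std::vector<double>& rho,
              std::vector<double>& E, double dx, double dt, double qm) {
  const std::size_t nx = rho.size();
  // 1) Deposit: linear (cloud-in-cell) weighting of charge to the grid.
  std::fill(rho.begin(), rho.end(), 0.0);
  for (const auto& p : parts) {
    std::size_t i = static_cast<std::size_t>(p.x / dx) % nx;
    double w = p.x / dx - i;               // fraction toward cell i+1
    rho[i]            += 1.0 - w;
    rho[(i + 1) % nx] += w;
  }
  // 2) Field solve: crude integration of Gauss's law on the periodic
  //    grid (a production code would use an FFT or multigrid solver).
  double acc = 0.0;
  for (std::size_t i = 0; i < nx; ++i) { acc += rho[i] * dx; E[i] = acc; }
  double mean = 0.0;
  for (double e : E) mean += e / nx;
  for (double& e : E) e -= mean;           // remove the net field
  // 3) Push: interpolate E to each particle, advance v then x.
  for (auto& p : parts) {
    std::size_t i = static_cast<std::size_t>(p.x / dx) % nx;
    double w = p.x / dx - i;
    double Ep = (1.0 - w) * E[i] + w * E[(i + 1) % nx];
    p.v += qm * Ep * dt;                   // qm = charge-to-mass ratio
    p.x += p.v * dt;
    while (p.x < 0)        p.x += nx * dx; // periodic boundaries
    while (p.x >= nx * dx) p.x -= nx * dx;
  }
}
```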

1993 ◽  
Vol 2 (4) ◽  
pp. 203-216
Author(s):  
Steve W. Otto

We discuss a set of parallel array classes, MetaMP, for distributed-memory architectures. The classes are implemented in C++ and interface to the PVM or Intel NX message-passing systems. An array class implements a partitioned array as a set of objects distributed across the nodes – a "collective" object. Object methods hide the low-level message-passing and implement meaningful array operations. These include transparent guard strips (or sharing regions) that support finite-difference stencils, reductions and multibroadcasts for support of pivoting and row operations, and interpolation/contraction operations for support of multigrid algorithms. The concept of guard strips is generalized to an object implementation of lightweight sharing mechanisms for finite element method (FEM) and particle-in-cell (PIC) algorithms. The sharing is accomplished through the mechanism of weak memory coherence and can be efficiently implemented. The price of the efficient implementation is memory usage and the need to explicitly specify the coherence operations. An intriguing feature of this programming model is that it maps well to both distributed-memory and shared-memory architectures.
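To make the guard-strip idea concrete, here is a rough sketch of a block-partitioned distributed array whose guard cells are refreshed from the neighboring nodes before a stencil sweep. MPI stands in for the PVM and Intel NX layers the paper targets, and the class and method names (DistArray, exchange_guards) are illustrative, not MetaMP's actual interface.

```cpp
#include <mpi.h>
#include <cstddef>
#include <vector>

// Each node owns a block of a global 1D array plus one guard cell on
// each side; the guards mirror the neighbors' boundary cells.
class DistArray {
 public:
  DistArray(std::size_t local_n, MPI_Comm comm)
      : data_(local_n + 2, 0.0), comm_(comm) {
    MPI_Comm_rank(comm_, &rank_);
    MPI_Comm_size(comm_, &size_);
  }
  double& operator[](std::size_t i) { return data_[i + 1]; }  // skip guard
  std::size_t local_size() const { return data_.size() - 2; }

  // Refresh guard cells from the left and right neighbors.
  void exchange_guards() {
    int left  = (rank_ == 0)         ? MPI_PROC_NULL : rank_ - 1;
    int right = (rank_ == size_ - 1) ? MPI_PROC_NULL : rank_ + 1;
    std::size_t n = local_size();
    MPI_Sendrecv(&data_[n], 1, MPI_DOUBLE, right, 0,
                 &data_[0], 1, MPI_DOUBLE, left,  0,
                 comm_, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&data_[1], 1, MPI_DOUBLE, left,  1,
                 &data_[n + 1], 1, MPI_DOUBLE, right, 1,
                 comm_, MPI_STATUS_IGNORE);
  }

  // Three-point stencil that reads the guard cells transparently.
  void average_sweep(DistArray& out) {
    for (std::size_t i = 0; i < local_size(); ++i)
      out[i] = 0.5 * (data_[i] + data_[i + 2]);
  }

 private:
  std::vector<double> data_;  // [guard | local cells | guard]
  MPI_Comm comm_;
  int rank_ = 0, size_ = 1;
};
```

After exchange_guards(), the stencil never has to distinguish interior cells from block boundaries, which is the transparency the abstract describes.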


2021 ◽  
Vol 26 ◽  
pp. 1-67
Author(s):  
Patrick Dinklage ◽  
Jonas Ellert ◽  
Johannes Fischer ◽  
Florian Kurpicz ◽  
Marvin Löbel

We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique exploits the structure of wavelet trees (refining the characters represented in a node of the tree with increasing depth) in the opposite direction: it first computes the leaves (the most refined level) and then propagates this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and in external memory. Based on these results, we adapt the algorithms to parallel computers, addressing both shared-memory and distributed-memory settings. In practice, all our algorithms outperform previous ones in both time and memory efficiency, because all auxiliary information can be computed solely from the information obtained while computing the leaves. Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.
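As a rough illustration of the leaves-first idea, the following sketch builds a levelwise wavelet tree from a single character histogram: the histogram (the most refined information) fixes the node borders at every level, so each level's bit vector can then be filled in one scan of the text. This is our simplified reading of the bottom-up technique, not the authors' implementation; all names are ours.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Levelwise wavelet tree: levels[l][i] is the (l+1)-th most significant
// bit of the i-th character after stably sorting by its top l bits.
// Assumes every character of text is < sigma.
std::vector<std::vector<uint8_t>> build_levelwise_wt(
    const std::vector<uint8_t>& text, unsigned sigma) {
  unsigned height = 0;
  while ((1u << height) < sigma) ++height;   // depth = ceil(log2 sigma)

  // Leaves first: one histogram over the full (most refined) alphabet.
  std::vector<std::size_t> hist(std::size_t{1} << height, 0);
  for (uint8_t c : text) ++hist[c];

  std::vector<std::vector<uint8_t>> levels(
      height, std::vector<uint8_t>(text.size()));

  for (unsigned l = height; l-- > 0; ) {
    // Merge the histogram upwards: counts of the top-l-bit prefixes.
    unsigned groups = 1u << l;
    std::vector<std::size_t> offset(groups, 0);
    for (std::size_t c = 0; c < hist.size(); ++c)
      offset[c >> (height - l)] += hist[c];
    // Exclusive prefix sums give each node's starting position.
    std::size_t sum = 0;
    for (unsigned g = 0; g < groups; ++g) {
      std::size_t cnt = offset[g]; offset[g] = sum; sum += cnt;
    }
    // One text scan: append bit l of each character to its node; text
    // order within a node realizes the stable sort implicitly.
    for (uint8_t c : text)
      levels[l][offset[c >> (height - l)]++] = (c >> (height - 1 - l)) & 1u;
  }
  return levels;
}
```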


1966 ◽  
Vol 92 (6) ◽  
pp. 205-228
Author(s):  
Melvin L. Baron ◽  
Charles E. Christian ◽  
Oleg Skidan

2018 ◽  
Author(s):  
E. A. Berendeev ◽  
A. A. Efimova ◽  
G. I. Dudnikova

2004 ◽  
Vol 7 (1) ◽  
pp. 65-72
Author(s):  
Akira Nomoto ◽  
Yasuo Watanabe ◽  
Wataru Kaneko ◽  
Shugo Nakamura ◽  
Kentaro Shimizu

2020 ◽  
Vol 30 (3) ◽  
pp. 28-33 ◽  
Author(s):  
S. A. Pryadko ◽  
A. Yu. Troshin ◽  
V. D. Kozlov ◽  
A. E. Ivanov

The article describes various options for speeding up calculations on computer systems; these options are closely tied to the architecture of the system in question. The objective of this paper is to provide the information needed to choose a suitable approach for speeding up the solution of a computational problem. The main features of the following models are described: programming for systems with shared memory, programming for systems with distributed memory, and programming for graphics accelerators (video cards). The basic concepts, principles, advantages, and disadvantages of each of the considered programming models are described. All of the programming standards described in the article can be used on both Linux and Windows operating systems. The required libraries are available and compatible with the C/C++ programming language. The article concludes with recommendations on the use of a particular technology depending on the type of task to be solved.
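As a small taste of the shared-memory model in the C/C++ setting the article mentions, the sketch below parallelizes a dot product with OpenMP, one of the standard options available on both Linux and Windows (compile with, e.g., g++ -fopenmp). It is illustrative only.

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
  const std::size_t n = 1000000;
  std::vector<double> a(n, 1.0), b(n, 2.0);
  double dot = 0.0;
  // All threads share a and b; the reduction clause gives each thread
  // a private partial sum and combines them at the end of the loop.
  #pragma omp parallel for reduction(+ : dot)
  for (long i = 0; i < static_cast<long>(n); ++i)
    dot += a[i] * b[i];
  std::printf("dot = %f (max threads: %d)\n", dot, omp_get_max_threads());
  return 0;
}
```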


2005 ◽  
Vol 18 (2) ◽  
pp. 219-224
Author(s):  
Emina Milovanovic ◽  
Natalija Stojanovic

Because many universities lack the funds to purchase expensive parallel computers, cost-effective alternatives are needed to teach students about parallel processing. Free software is available to support the three major paradigms of parallel computing. Parallaxis is a sophisticated SIMD simulator which runs on a variety of platforms. The jBACI shared-memory simulator supports the MIMD model of computing with a common shared memory. PVM and MPI allow students to treat a network of workstations as a message-passing MIMD multicomputer with distributed memory. Each of these software tools can be used in a variety of courses to give students experience with parallel algorithms.
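For the message-passing paradigm mentioned above, a first MPI exercise of the kind students would run on a network of workstations might look like the following sketch: each rank contributes a local value and rank 0 collects the sum. It is illustrative only.

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0, size = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Each workstation contributes its own value; MPI_Reduce combines
  // them on rank 0 without any explicit point-to-point messages.
  int local = rank + 1;
  int total = 0;
  MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0)
    std::printf("sum over %d ranks = %d\n", size, total);
  MPI_Finalize();
  return 0;
}
```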

