Guest Editorial: High-Level Parallel Programming with Algorithmic Skeletons

2017 ◽  
Vol 46 (1) ◽  
pp. 1-3
Author(s):  
Sergei Gorlatch ◽  
Herbert Kuchen
2018 ◽  
Vol 47 (2) ◽  
pp. 161-163
Author(s):  
J. Daniel García ◽  
Arturo Gonzalez-Escribano

2016 ◽  
Vol 45 (2) ◽  
pp. 199-202
Author(s):  
Marco Danelutto ◽  
Susanna Pelagatti ◽  
Massimo Torquati

2013 ◽  
Vol 42 (4) ◽  
pp. 525-528
Author(s):  
Gaetan Hains ◽  
Youry Khmelevsky

Author(s):  
Breno A. de Melo Menezes ◽  
Nina Herrmann ◽  
Herbert Kuchen ◽  
Fernando Buarque de Lima Neto

Abstract
Parallel implementations of swarm intelligence algorithms such as ant colony optimization (ACO) have been widely used to shorten execution times when solving complex optimization problems. When targeting a GPU environment, developing efficient parallel versions of such algorithms in CUDA can be a difficult and error-prone task even for experienced programmers. To address this issue, the parallel programming model of algorithmic skeletons simplifies parallel programs by abstracting away low-level details. This is realized by providing common programming patterns (e.g. map, fold, and zip) that are later converted into efficient parallel code. In this paper, we show how algorithmic skeletons formulated in the domain-specific language Musket support the development of a parallel implementation of ACO, and how that implementation compares to a low-level one. Our experimental results show that Musket is well suited to the development of ACO. Besides relieving the programmer of parallelization details, Musket generates high-performance code with execution times similar to those of low-level implementations.


Author(s):  
Loris Belcastro ◽  
Fabrizio Marozzo ◽  
Domenico Talia ◽  
Paolo Trunfio

2021 ◽  
Vol 24 (1) ◽  
pp. 157-183
Author(s):  
Nikita Andreevich Kataev

Automation of parallel programming is important at every stage of parallel program development. These stages include profiling of the original program; program transformation, which allows higher performance to be achieved after parallelization; and, finally, construction and optimization of the parallel program. It is also important to choose a suitable parallel programming model to express the parallelism available in a program. On the one hand, the parallel programming model should be capable of mapping the parallel program onto a variety of existing hardware resources. On the other hand, it should simplify the development of assistant tools and allow the user to explore, in a semi-automatic way, the parallel programs those tools generate. The SAPFOR (System FOR Automated Parallelization) system combines various approaches to the automation of parallel programming and allows the user to guide the parallelization if necessary. SAPFOR produces parallel programs according to the high-level DVMH parallel programming model, which simplifies the development of efficient parallel programs for heterogeneous computing clusters. This paper focuses on the approach to semi-automatic parallel programming that SAPFOR implements. We discuss the architecture of the system and present the interactive subsystem, which is useful for guiding SAPFOR through program parallelization. We used the interactive subsystem to parallelize programs from the NAS Parallel Benchmarks in a semi-automatic way. Finally, we compare the performance of manually written parallel programs with that of programs built by the SAPFOR system.


2018 ◽  
Vol 84 ◽  
pp. 22-31 ◽  
Author(s):  
Adrián Castelló ◽  
Rafael Mayo ◽  
Kevin Sala ◽  
Vicenç Beltran ◽  
Pavan Balaji ◽  
...  
