Parallel Programming
Recently Published Documents


TOTAL DOCUMENTS: 1880 (last five years: 197)

H-INDEX: 46 (last five years: 4)

2021 ◽  
Vol 28 (4) ◽  
pp. 394-412
Author(s):  
Andrew M. Mironov

The paper presents a new mathematical model of parallel programs that makes it possible, in particular, to verify parallel programs written using a certain subset of the MPI parallel programming interface. The model is based on the concepts of a sequential process and a distributed process: a parallel program is modeled as a distributed process in which sequential processes communicate by asynchronously sending and receiving messages over channels. The main advantage of the model is its ability to simulate and verify parallel programs that spawn an unbounded number of sequential processes. The model is illustrated by applying it to the verification of an MPI matrix multiplication program.
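
To make the modeled communication pattern concrete, here is a minimal, hypothetical MPI matrix multiplication in C, written in the coordinator/worker style the abstract describes: sequential processes exchanging messages over point-to-point channels. It is not the program verified in the paper; the matrix size N, the test data, and the round-robin row distribution are illustrative assumptions.

    /* Hypothetical sketch, not the paper's verified program.
     * Rank 0 sends B and a round-robin share of A's rows to each worker;
     * workers multiply their rows by B and send result rows back.
     * Run with at least 2 ranks, e.g. mpirun -np 3 ./matmul */
    #include <mpi.h>
    #include <stdio.h>

    #define N 4  /* assumed matrix dimension */

    int main(int argc, char **argv) {
        int rank, size;
        double A[N][N], B[N][N], C[N][N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        int workers = size - 1;  /* rank 0 is the coordinator */

        if (rank == 0) {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) {
                    A[i][j] = i + j;
                    B[i][j] = (i == j);  /* identity, so C should equal A */
                }
            /* Send B to every worker, then rows of A round-robin. */
            for (int w = 1; w <= workers; w++)
                MPI_Send(B, N * N, MPI_DOUBLE, w, 0, MPI_COMM_WORLD);
            for (int i = 0; i < N; i++)
                MPI_Send(A[i], N, MPI_DOUBLE, 1 + i % workers, 1, MPI_COMM_WORLD);
            /* Collect result rows; per-pair FIFO ordering keeps them aligned. */
            for (int i = 0; i < N; i++)
                MPI_Recv(C[i], N, MPI_DOUBLE, 1 + i % workers, 2,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("C[0][0] = %.1f\n", C[0][0]);
        } else {
            double row[N], out[N];
            MPI_Recv(B, N * N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* Handle the rows assigned to this rank. */
            for (int i = rank - 1; i < N; i += workers) {
                MPI_Recv(row, N, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                for (int j = 0; j < N; j++) {
                    out[j] = 0.0;
                    for (int k = 0; k < N; k++)
                        out[j] += row[k] * B[k][j];
                }
                MPI_Send(out, N, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }

Each MPI_Send/MPI_Recv pair corresponds to a channel operation in such a model, and the number of worker processes is fixed only at launch time, which is exactly the kind of unbounded process count the abstract says the model is designed to handle.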


2021 ◽  
Author(s):  
Volodymyr Slipchenko ◽  
Liubov Poliahushko ◽  
Olha Krush

Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2681
Author(s):  
Joonmoo Huh ◽  
Deokwoo Lee

Shared memory is the most popular parallel programming model for multi-core processors, while message passing is generally used for large distributed machines. However, as the number of cores on a chip increases, the relative merits of shared memory versus message passing change, and we argue that message passing becomes a viable, high-performing parallel programming model. To test this hypothesis, we compare a shared memory architecture with a new message passing architecture on a suite of applications tuned for each system independently. Perhaps surprisingly, when optimized for both models, the fundamental behaviors of the applications studied here are very similar, and both versions execute efficiently on multicore architectures even though many of the implementations differ considerably. Furthermore, if the hardware is tuned to support message passing, by supporting bulk message transfer, eliminating unnecessary coherence overheads, and providing effective support for global operations, then some applications perform much better on a message passing architecture. Leveraging these insights, we design a message passing architecture that supports both memory-to-memory and cache-to-cache messaging in hardware. With the new architecture, message passing outperforms its shared memory counterpart on many of the applications owing to the unique advantages of the message passing hardware over cache coherence. In the best case, message passing achieves a 34% speedup over its shared memory counterpart, and a 10% speedup on average. In the worst case, message passing is slower on two applications, CG (conjugate gradient) and FT (Fourier transform), because it cannot handle their data sharing patterns as well as shared memory can. Overall, our analysis demonstrates the importance of considering message passing as a high-performing, hardware-supported programming model on future multicore architectures.
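
The paper's results concern hardware support, but the bulk-transfer argument can be illustrated at the software level. Below is a minimal, hypothetical C/MPI microbenchmark, not part of the paper's application suite, contrasting N single-element sends with one bulk send between two ranks; the message count N and the buffer contents are illustrative assumptions.

    /* Hypothetical microbenchmark sketch: fine-grained vs. bulk transfer.
     * Run with exactly 2 ranks, e.g. mpirun -np 2 ./bulk */
    #include <mpi.h>
    #include <stdio.h>

    #define N 4096  /* assumed number of elements to transfer */

    int main(int argc, char **argv) {
        int rank, size;
        static double buf[N];  /* zero-initialized payload */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        /* Variant 1: N one-element messages (per-message overhead paid N times). */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        if (rank == 0)
            for (int i = 0; i < N; i++)
                MPI_Send(&buf[i], 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else
            for (int i = 0; i < N; i++)
                MPI_Recv(&buf[i], 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        double t_fine = MPI_Wtime() - t0;

        /* Variant 2: one bulk message (overhead paid once). */
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        if (rank == 0)
            MPI_Send(buf, N, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
        else
            MPI_Recv(buf, N, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        double t_bulk = MPI_Wtime() - t0;

        if (rank == 1)
            printf("%d fine-grained sends: %.6f s; one bulk send: %.6f s\n",
                   N, t_fine, t_bulk);

        MPI_Finalize();
        return 0;
    }

On most MPI implementations the bulk variant is much faster because per-message overheads are paid once rather than N times; the architecture described in the abstract applies the same principle in hardware.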


2021 ◽  
Vol 23 (6) ◽  
pp. 77-80
Author(s):  
Timothy G. Mattson ◽  
Todd A. Anderson ◽  
Giorgis Georgakoudis ◽  
Konrad Hinsen ◽  
Anshu Dubey

2021 ◽  
Author(s):  
Zane Fink ◽  
Simeng Liu ◽  
Jaemin Choi ◽  
Matthias Diener ◽  
Laxmikant V. Kale

2021 ◽  
Author(s):  
Xiaochun Zhang ◽  
Timothy M. Jones ◽  
Simone Campanoni
