Partitioned Global Address Space
Recently Published Documents


TOTAL DOCUMENTS: 39 (last five years: 6)
H-INDEX: 8 (last five years: 0)

Author(s):  
Pascal Jungblut ◽  
Karl Fürlinger

Abstract: The Partitioned Global Address Space (PGAS) programming model brings intuitive shared-memory semantics to distributed-memory systems. Even with an abstract and unifying virtual global address space, however, it is challenging to exploit the full potential of different systems: without explicit support from the implementation, node-local operations have to be optimized manually for each architecture. A goal of this work is to offer a user-friendly programming model that provides portable performance across systems. In this paper we present an approach that integrates node-level programming abstractions with the PGAS programming model. We describe the hierarchical data distribution with local patterns and our implementation, MEPHISTO, in C++ using two existing projects. The evaluation of MEPHISTO shows that our approach achieves portable performance while requiring only minimal changes to port it from a CPU-based system to a GPU-based one using a CUDA or HIP back-end.
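At the core of any PGAS data distribution is a pattern that maps a global index to the unit (process) that owns it and to an offset in that unit's local storage; MEPHISTO composes such global patterns with node-local ones. The plain-C sketch below shows only the generic block-distribution mapping; the names block_pattern, unit_of, and local_index_of are illustrative and are not the MEPHISTO or DASH API.

    #include <stdio.h>

    /* Minimal block-distribution pattern: maps a global index to the unit
     * (process) that owns it and to the offset inside that unit's local
     * block. Names and structure are illustrative only. */
    typedef struct {
        long global_size;   /* total number of elements                    */
        int  num_units;     /* number of PGAS units (processes)            */
        long block_size;    /* elements per unit (last block may be short) */
    } block_pattern;

    static block_pattern make_block_pattern(long global_size, int num_units) {
        block_pattern p;
        p.global_size = global_size;
        p.num_units   = num_units;
        p.block_size  = (global_size + num_units - 1) / num_units; /* ceil */
        return p;
    }

    /* Owner unit of a global index. */
    static int unit_of(const block_pattern *p, long gidx) {
        return (int)(gidx / p->block_size);
    }

    /* Offset of a global index inside its owner's local block. */
    static long local_index_of(const block_pattern *p, long gidx) {
        return gidx % p->block_size;
    }

    int main(void) {
        block_pattern p = make_block_pattern(1000, 4);
        for (long g = 248; g <= 252; ++g)
            printf("global %ld -> unit %d, local %ld\n",
                   g, unit_of(&p, g), local_index_of(&p, g));
        return 0;
    }

A node-level back-end (CPU threads, CUDA, or HIP) then only has to operate on the local blocks this mapping yields, which is what keeps the global-view code unchanged across architectures.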


Author(s):  
Hitoshi Murai ◽  
Masahiro Nakao ◽  
Mitsuhisa Sato

Abstract: XcalableMP (XMP) is a directive-based language extension of Fortran and C for distributed-memory parallel computers, and can be classified as a partitioned global address space (PGAS) language. One of the remarkable characteristics of XMP is that it supports both global-view and local-view parallel programming. This chapter describes the programming model and language specification of XMP.
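To illustrate the global-view style, the sketch below distributes an array block-wise over four nodes and lets each node iterate over its own section. The directive spellings follow published XMP examples but should be checked against the language specification; a compiler without XMP support simply ignores the pragmas and runs the code sequentially.

    #include <stdio.h>
    #define N 100   /* template below spans indices 0..N-1 */

    /* Global-view XMP sketch in C: template t describes an index space,
     * distributed block-wise onto the node set p; array a is aligned with t,
     * and the loop directives make each node work only on its own portion. */
    #pragma xmp nodes p(4)
    #pragma xmp template t(0:99)
    #pragma xmp distribute t(block) onto p

    double a[N];
    #pragma xmp align a[i] with t(i)

    int main(void) {
        int i;
        double sum = 0.0;

    #pragma xmp loop on t(i)
        for (i = 0; i < N; i++)
            a[i] = (double)i;

    #pragma xmp loop on t(i) reduction(+:sum)
        for (i = 0; i < N; i++)
            sum += a[i];

        /* after the reduction every node holds the global sum (4950) */
        printf("sum = %f\n", sum);
        return 0;
    }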


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Marek Nowicki

Abstract: Sorting algorithms are among the most commonly used algorithms in computer science and modern software, and an efficient sorting implementation is necessary for a wide spectrum of scientific applications. This paper describes a sorting algorithm written using the partitioned global address space (PGAS) model and implemented with the Parallel Computing in Java (PCJ) library. The implementation is described iteratively to outline possible performance issues and the means to resolve them. The key idea is to provide an efficient building block that can easily be integrated into many application codes. The paper also compares the performance of the PCJ implementation with a MapReduce approach, using the Apache Hadoop TeraSort implementation; the comparison shows that the PCJ implementation achieves efficiency similar to that of the Hadoop implementation.
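The routing step that such distributed sorts share (TeraSort included) is splitter-based partitioning: each process decides, by binary search over globally agreed splitters, which process every local key must be sent to before the final local sort. The sketch below shows only that step in plain C; the PCJ implementation itself is written in Java, so this is an illustration of the idea rather than the paper's code.

    #include <stdio.h>

    /* Destination bucket for a key, given nsplit sorted splitters
     * (nsplit = number of processes - 1): bucket i receives keys in
     * (splitters[i-1], splitters[i]]. Binary search keeps the routing
     * step O(n log p) per process. */
    static int bucket_of(long key, const long *splitters, int nsplit) {
        int lo = 0, hi = nsplit;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (key <= splitters[mid]) hi = mid;
            else                       lo = mid + 1;
        }
        return lo;                      /* in 0 .. nsplit */
    }

    int main(void) {
        long splitters[] = {25, 50, 75};            /* 4 buckets */
        long keys[]      = {3, 25, 26, 74, 99};
        for (int k = 0; k < 5; k++)
            printf("key %ld -> bucket %d\n",
                   keys[k], bucket_of(keys[k], splitters, 3));
        return 0;
    }

After routing, each process exchanges its buckets with the others and sorts the received keys locally, which yields a globally sorted sequence across processes.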


2018 ◽  
Vol 11 (10) ◽  
pp. 3983-3997 ◽  
Author(s):  
Vladimir V. Kalmykov ◽  
Rashit A. Ibrayev ◽  
Maxim N. Kaurkin ◽  
Konstantin V. Ushakov

Abstract: We present a new version of the Compact Modeling Framework (CMF3.0), developed as a software environment for stand-alone and coupled global geophysical fluid models. CMF3.0 is designed for high- and ultra-high-resolution models on massively parallel supercomputers. The key features of the previous version, CMF2.0, are recalled to reflect the progress of our research. In CMF3.0, the message passing interface (MPI) approach with a high-level abstract driver, optimized coupler interpolation, and I/O algorithms is replaced with a communication scheme based on the Partitioned Global Address Space (PGAS) paradigm, while the central-hub architecture evolves into a set of simultaneously working services. Performance tests for both versions are carried out. In addition, we present some information about the parallel realization of the EnOI (Ensemble Optimal Interpolation) data assimilation method and the nesting technology, provided as program services of CMF3.0.
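One-sided (put/get) communication is the building block that PGAS-style schemes rely on; in MPI it is available through the RMA interface. The minimal example below is not CMF3.0 code; it only shows rank 0 writing directly into a window exposed by rank 1, without a matching receive, and must be run with at least two MPI processes (e.g. mpirun -np 2).

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal one-sided (PGAS-style) communication with MPI RMA:
     * rank 0 puts a value directly into rank 1's exposed window. */
    int main(int argc, char **argv) {
        int rank, remote = -1;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Win win;
        MPI_Win_create(&remote, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);              /* open access epoch          */
        if (rank == 0) {
            int value = 42;
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);              /* close epoch, data visible  */

        if (rank == 1)
            printf("rank 1 received %d via one-sided put\n", remote);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }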

