Compact Modeling Framework v3.0 for high-resolution global ocean–ice–atmosphere models

2018 ◽  
Vol 11 (10) ◽  
pp. 3983-3997 ◽  
Author(s):  
Vladimir V. Kalmykov ◽  
Rashit A. Ibrayev ◽  
Maxim N. Kaurkin ◽  
Konstantin V. Ushakov

Abstract. We present a new version of the Compact Modeling Framework (CMF3.0), developed as the software environment for stand-alone and coupled global geophysical fluid models. CMF3.0 is designed for high- and ultrahigh-resolution models on massively parallel supercomputers. The key features of the previous version, CMF2.0, are summarized to reflect the progress of our research. In CMF3.0, the message passing interface (MPI) approach with a high-level abstract driver, optimized coupler interpolation, and I/O algorithms is replaced with a Partitioned Global Address Space (PGAS) communication scheme, while the central-hub architecture evolves into a set of simultaneously working services. Performance tests for both versions are carried out. In addition, we present the parallel implementation of the EnOI (Ensemble Optimal Interpolation) data assimilation method and of the nesting technology, both realized as program services of CMF3.0.
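
To make the contrast concrete: in a PGAS-style scheme a process writes directly into a peer's memory without a matching receive, which is what allows a central coupler hub to dissolve into independently running services. The following minimal sketch uses MPI one-sided (RMA) operations via mpi4py as a stand-in for PGAS communications; it is illustrative only and does not reproduce CMF3.0's actual communication layer.

```python
# Minimal PGAS-style exchange with MPI one-sided (RMA) operations,
# sketched with mpi4py + numpy. Illustrative only: CMF3.0's actual
# communication layer is not reproduced here.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank exposes a window of memory that any other rank may write to
# without a matching receive -- the defining trait of the PGAS model.
local = np.zeros(10, dtype='d')
win = MPI.Win.Create(local, comm=comm)

# Every rank puts its rank id into the first slot of its right neighbour.
right = (rank + 1) % size
buf = np.full(1, rank, dtype='d')

win.Fence()          # open an access epoch
win.Put(buf, right)  # one-sided write; the target posts no receive
win.Fence()          # close the epoch; data is now visible

print(f"rank {rank}: received {local[0]:.0f} from left neighbour")
win.Free()
```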


2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Anuj Sharma ◽  
Irene Moulitsas

High-resolution numerical methods and unstructured meshes are required in many applications of Computational Fluid Dynamics (CFD). These methods are computationally expensive and hence benefit from parallelization. The Message Passing Interface (MPI) has traditionally been used as the parallelization strategy; however, the inherent complexity of MPI adds to the existing complexity of scientific CFD codes. The Partitioned Global Address Space (PGAS) parallelization paradigm was introduced in an attempt to improve the clarity of parallel implementations. We present our experience of converting an unstructured, high-resolution, compressible Navier-Stokes CFD solver from MPI to PGAS Coarray Fortran, together with the challenges, methodology, and performance measurements of our approach. With the Cray compiler, we find Coarray Fortran to be a viable alternative to MPI, and we are hopeful that the Intel and open-source implementations can be used in the future.
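
For readers unfamiliar with the pattern being converted, the sketch below shows the nearest-neighbour halo exchange typical of such solvers in two-sided MPI (via mpi4py, an illustrative stand-in for the original Fortran). In Coarray Fortran the same exchange collapses to a single one-sided co-indexed assignment such as halo(:)[neighbour] = boundary(:), which is the clarity gain the PGAS model offers.

```python
# Sketch of the nearest-neighbour halo exchange typical of CFD solvers,
# written in two-sided MPI with mpi4py. Illustrative only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 8
interior = np.full(n, float(rank))   # this rank's owned cells
halo = np.empty(1, dtype='d')        # ghost cell from the left neighbour

left = (rank - 1) % size
right = (rank + 1) % size

# Two-sided MPI: every send must be matched by a receive on the peer,
# which is exactly the bookkeeping PGAS-style coarrays remove.
comm.Sendrecv(sendbuf=interior[-1:], dest=right,
              recvbuf=halo, source=left)

print(f"rank {rank}: halo value {halo[0]:.0f} from rank {left}")
```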


2017 ◽  
Vol 10 (3) ◽  
pp. 1091-1106 ◽  
Author(s):  
Laurent Bessières ◽  
Stéphanie Leroux ◽  
Jean-Michel Brankart ◽  
Jean-Marc Molines ◽  
Marie-Pierre Moine ◽  
...  

Abstract. This paper presents the technical implementation of a new, probabilistic version of the NEMO ocean–sea-ice modelling system. Ensemble simulations with N members running simultaneously within a single executable, and interacting mutually if needed, are made possible through an enhanced message-passing interface (MPI) strategy, including a double parallelization in the spatial and ensemble dimensions. An example application is then given to illustrate the implementation, performance, and potential use of this novel probabilistic modelling tool. A large ensemble of 50 global ocean–sea-ice hindcasts has been performed over the period 1960–2015 at eddy-permitting resolution (1∕4°) for the OCCIPUT (oceanic chaos – impacts, structure, predictability) project. This application aims to simultaneously simulate the intrinsic/chaotic and the atmospherically forced contributions to the ocean variability, from mesoscale turbulence to interannual-to-multidecadal timescales. Such an ensemble indeed provides a unique way to disentangle and study both contributions, as the forced variability may be estimated through the ensemble mean, and the intrinsic chaotic variability through the ensemble spread.
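
A common way to realize such a double parallelization is to split the global MPI communicator twice: once into per-member communicators that carry the usual spatial domain decomposition, and once into cross-member communicators linking the same subdomain across the ensemble, over which ensemble statistics can be computed. The sketch below (mpi4py, with made-up member and subdomain counts) illustrates the idea; it is not the actual NEMO implementation.

```python
# Sketch of a double (spatial x ensemble) MPI decomposition via
# communicator splitting. mpi4py illustration; counts are made up.
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

N_MEMBERS = 5                      # hypothetical ensemble size
member = rank % N_MEMBERS          # which ensemble member this rank serves
subdomain = rank // N_MEMBERS      # which spatial subdomain it holds

# One communicator per member: used for the usual spatial halo exchanges.
spatial_comm = world.Split(color=member, key=subdomain)

# One communicator per subdomain across members: used for ensemble stats.
ensemble_comm = world.Split(color=subdomain, key=member)

# Example: ensemble mean of a local field, computed without gathering
# anything to a root process -- each subdomain averages across members.
local_field = np.random.rand(4) + member       # fake model state
ens_mean = np.empty_like(local_field)
ensemble_comm.Allreduce(local_field, ens_mean, op=MPI.SUM)
ens_mean /= ensemble_comm.Get_size()
```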


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1330
Author(s):  
Junjie Zhang ◽  
Lukas Razik ◽  
Sigurd Hofsmo Jakobsen ◽  
Salvatore D’Arco ◽  
Andrea Benigni

In this paper we introduce an approach, based on a highly scalable and flexible open-source software environment, to accelerate many-scenario (i.e., hundreds to thousands of scenarios) power system simulations. In this approach, the parallel execution of simulations follows the single program, multiple data (SPMD) paradigm: the dynamic simulation program is executed in parallel and takes different inputs to generate different scenarios. The power system is modeled using an existing Modelica library and compiled to a simulation executable using the OpenModelica Compiler. The parallel simulation is performed with the aid of the Message Passing Interface (MPI), and the approach includes dynamic workload balancing. Finally, the simulation environment is benchmarked on high-performance computing (HPC) clusters with four test cases. The results show high scalability and considerable parallel speedup of the proposed approach in the simulation of all scenarios.
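
Dynamic workload balancing in an SPMD setting is often realized as a coordinator/worker loop in which scenarios are handed out on demand, so that fast workers are not held back by long-running scenarios. The mpi4py sketch below shows one such loop under that assumption; run_scenario() is a hypothetical placeholder for invoking the OpenModelica-compiled simulation executable with scenario-specific inputs.

```python
# Coordinator/worker loop for dynamic workload balancing over scenarios.
# mpi4py illustration; run_scenario() is a placeholder, not the paper's code.
from mpi4py import MPI

def run_scenario(scenario_id):
    # Placeholder for launching the compiled simulation executable.
    return scenario_id ** 2

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
TAG_WORK, TAG_STOP = 1, 2

if rank == 0:                                   # coordinator
    scenarios = list(range(100))                # hypothetical scenario ids
    results, active = [], comm.Get_size() - 1
    while active > 0:
        status = MPI.Status()
        res = comm.recv(source=MPI.ANY_SOURCE, status=status)
        if res is not None:                     # None = "ready" handshake
            results.append(res)
        if scenarios:                           # hand out the next scenario
            comm.send(scenarios.pop(0), dest=status.Get_source(), tag=TAG_WORK)
        else:                                   # nothing left: retire worker
            comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
            active -= 1
else:                                           # worker
    comm.send(None, dest=0)                     # announce readiness
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(run_scenario(task), dest=0)
```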


2020 ◽  
Vol 15 ◽  
Author(s):  
Weiwen Zhang ◽  
Long Wang ◽  
Theint Theint Aye ◽  
Juniarto Samsudin ◽  
Yongqing Zhu

Background: Genotype imputation as a service enables researchers to estimate genotypes from haplotype data without performing whole-genome sequencing. However, genotype imputation is computation-intensive, and it remains a challenge to satisfy the high-performance requirements of genome-wide association studies (GWAS).
Objective: In this paper, we propose a high-performance computing solution for genotype imputation on supercomputers to enhance its execution performance.
Method: We design and implement a multi-level parallelization comprising job-level, process-level, and thread-level parallelism, enabled by job scheduling management, the Message Passing Interface (MPI), and OpenMP, respectively. It involves job distribution, chunk partition and execution, parallelized iteration for imputation, and data concatenation. Thanks to this multi-level design, we can exploit multi-machine/multi-core architectures to improve the performance of genotype imputation.
Results: Experimental results show that the proposed method outperforms a Hadoop-based implementation of genotype imputation. Moreover, experiments on supercomputers show that it significantly shortens execution time, thus improving performance.
Conclusion: The proposed multi-level parallelization, when deployed as imputation as a service, will help bioinformatics researchers in Singapore conduct genotype imputation and enhance association studies.
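
The process- and thread-level tiers of such a scheme can be pictured as follows: MPI ranks take disjoint chunks of genome windows, and threads within each rank impute the windows of their chunk concurrently. The Python sketch below (mpi4py plus a thread pool standing in for OpenMP) is illustrative only; impute_window() and the window names are hypothetical placeholders for the actual imputation tool and its inputs.

```python
# Process-level (MPI) + thread-level parallelism for window imputation.
# Python stand-in: the paper uses MPI + OpenMP; impute_window() is fake.
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

def impute_window(window):
    # Placeholder for running the imputation tool on one genome window.
    return f"imputed:{window}"

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical 1 Mb windows of one chromosome.
windows = [f"chr20:{i}-{i + 999999}" for i in range(0, 10_000_000, 1_000_000)]
my_chunk = windows[rank::size]          # process-level chunk partition

with ThreadPoolExecutor(max_workers=4) as pool:   # thread-level parallelism
    my_results = list(pool.map(impute_window, my_chunk))

# Data concatenation: gather every rank's results back to rank 0.
all_results = comm.gather(my_results, root=0)
if rank == 0:
    flat = [r for chunk in all_results for r in chunk]
    print(f"{len(flat)} windows imputed")
```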


2021 ◽  
Vol 13 (12) ◽  
pp. 2402
Author(s):  
Weifu Sun ◽  
Jin Wang ◽  
Yuheng Li ◽  
Junmin Meng ◽  
Yujia Zhao ◽  
...  

Based on the optimal interpolation (OI) algorithm, a daily fusion product of high-resolution global ocean columnar atmospheric water vapor with a resolution of 0.25° was generated in this study from multisource remote sensing observations. The product covers the period from 2003 to 2018, and the data represent a fusion of microwave radiometer observations, including those from the Special Sensor Microwave Imager Sounder (SSMIS), WindSat, Advanced Microwave Scanning Radiometer for Earth Observing System sensor (AMSR-E), Advanced Microwave Scanning Radiometer 2 (AMSR2), and HY-2A microwave radiometer (MR). The accuracy of this water vapor fusion product was validated using radiosonde water vapor observations. The comparative results show that the overall mean deviation (Bias) is smaller than 0.6 mm; the root mean square error (RMSE) and standard deviation (SD) are better than 3 mm, and the mean absolute deviation (MAD) and correlation coefficient (R) are better than 2 mm and 0.98, respectively.
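
For reference, the OI analysis step underlying such a fusion combines a background field x_b with observations y through a gain matrix K built from the background and observation error covariances B and R: x_a = x_b + K(y - H x_b), with K = B H^T (H B H^T + R)^{-1}. The numpy sketch below is a small dense illustration under made-up covariances; a production system would use localized, sparse computations rather than full matrices.

```python
# Schematic optimal interpolation (OI) analysis step:
#   x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^(-1)
# Small dense numpy illustration with synthetic data; not the paper's code.
import numpy as np

n, m = 50, 8                      # grid points, observations (made up)
rng = np.random.default_rng(0)

x_b = rng.normal(20.0, 2.0, n)    # background water vapor field [mm]
H = np.zeros((m, n))              # observation operator: sample m points
H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
y = H @ x_b + rng.normal(0.0, 0.5, m)   # synthetic radiometer obs

# Background errors spatially correlated; observation errors independent.
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 1.5 * np.exp(-d / 5.0)
R = 0.25 * np.eye(m)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y - H @ x_b)                  # analysis field
```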


2019 ◽  
Vol 214 ◽  
pp. 05010 ◽  
Author(s):  
Giulio Eulisse ◽  
Piotr Konopka ◽  
Mikolaj Krzewicki ◽  
Matthias Richter ◽  
David Rohr ◽  
...  

ALICE is one of the four major LHC experiments at CERN. When the accelerator enters the Run 3 data-taking period, starting in 2021, ALICE expects almost 100 times more Pb-Pb central collisions than today, resulting in a large increase in data throughput. In order to cope with this new challenge, the collaboration had to extensively rethink the whole data processing chain, with a tighter integration between the online and offline computing worlds. This system, code-named ALICE O2, is being developed in collaboration with the FAIR experiments at GSI. It is based on the ALFA framework, which provides a generalized implementation of the ALICE High Level Trigger approach, designed around distributed software entities coordinating and communicating via message passing. We highlight our efforts to integrate ALFA within the ALICE O2 environment, analyze the challenges arising from the different running environments for production and development, and derive requirements for a flexible and modular software framework. In particular, we present the ALICE O2 Data Processing Layer, which addresses ALICE-specific requirements in terms of the data model. The main goal is to reduce the complexity of developing algorithms and managing a distributed system, thereby significantly simplifying the work of the large majority of ALICE users.
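
The architectural style referred to here can be pictured as independent processing devices connected by message channels. The sketch below is a minimal Python analogue using multiprocessing queues; it is explicitly not the ALFA/FairMQ API, whose device, transport, and topology abstractions are far richer.

```python
# Generic three-stage message-passing pipeline: producer -> processor ->
# consumer, each an independent process connected by channels. A toy
# analogue of the architectural style only, not the ALFA/FairMQ API.
from multiprocessing import Process, Queue

def producer(out_q):
    for event in range(5):                 # stand-in for detector data
        out_q.put({"event": event, "payload": event * 1.5})
    out_q.put(None)                        # end-of-stream marker

def processor(in_q, out_q):
    while (msg := in_q.get()) is not None:
        msg["payload"] *= 2                # stand-in for reconstruction
        out_q.put(msg)
    out_q.put(None)                        # forward end-of-stream

def consumer(in_q):
    while (msg := in_q.get()) is not None:
        print("processed", msg)

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    stages = [Process(target=producer, args=(q1,)),
              Process(target=processor, args=(q1, q2)),
              Process(target=consumer, args=(q2,))]
    for p in stages:
        p.start()
    for p in stages:
        p.join()
```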


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2284
Author(s):  
Krzysztof Przystupa ◽  
Mykola Beshley ◽  
Olena Hordiichuk-Bublivska ◽  
Marian Kyryk ◽  
Halyna Beshley ◽  
...  

Analyzing large amounts of user data to determine user preferences and, on that basis, to recommend new products is an important problem: depending on the correctness and timeliness of the recommendations, significant profits or losses can result. This analysis is carried out by dedicated recommendation systems; however, with a large number of users the data to be processed become very big, which complicates the work of such systems. For efficient data analysis in commercial systems, the Singular Value Decomposition (SVD) method can perform intelligent analysis of the information, and for large volumes of data we propose to use distributed systems, which reduces the time needed to process the data and deliver recommendations to users. For the experimental study, we implemented the distributed SVD method using Message Passing Interface, Hadoop, and Spark technologies, and we observed a reduction in data-processing time with distributed systems compared to non-distributed ones.
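
One standard way to distribute the SVD of a tall user-item matrix is sketched below: each rank holds a block of rows (users), the global Gram matrix A^T A is accumulated with an Allreduce, and the small item-side eigenproblem then yields the singular values and right singular vectors. This mpi4py/numpy sketch is an illustration under made-up sizes, not the paper's implementation, which also covers Hadoop and Spark variants.

```python
# Distributed SVD of a tall matrix via Gram-matrix accumulation.
# mpi4py/numpy illustration; matrix sizes and data are made up.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_items = 100                              # hypothetical catalogue size
rows_per_rank = 1000                       # users held by this rank
A_local = np.random.default_rng(rank).random((rows_per_rank, n_items))

# Global Gram matrix A^T A is the sum of the per-rank contributions.
gram_local = A_local.T @ A_local
gram = np.empty_like(gram_local)
comm.Allreduce(gram_local, gram, op=MPI.SUM)

# Eigendecomposition of A^T A yields singular values and the V factor.
eigvals, V = np.linalg.eigh(gram)
order = np.argsort(eigvals)[::-1]          # eigh returns ascending order
sigma = np.sqrt(np.maximum(eigvals[order], 0.0))
V = V[:, order]

if rank == 0:
    print("top-5 singular values:", np.round(sigma[:5], 2))
```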


Parallel Computing ◽  
1996 ◽  
Vol 22 (6) ◽  
pp. 789-828 ◽  
Author(s):  
William Gropp ◽  
Ewing Lusk ◽  
Nathan Doss ◽  
Anthony Skjellum
