Development of a probabilistic ocean modelling system based on NEMO 3.5: application at eddying resolution

2017 ◽  
Vol 10 (3) ◽  
pp. 1091-1106 ◽  
Author(s):  
Laurent Bessières ◽  
Stéphanie Leroux ◽  
Jean-Michel Brankart ◽  
Jean-Marc Molines ◽  
Marie-Pierre Moine ◽  
...  

Abstract. This paper presents the technical implementation of a new, probabilistic version of the NEMO ocean–sea-ice modelling system. Ensemble simulations with N members running simultaneously within a single executable, and interacting mutually if needed, are made possible through an enhanced message-passing interface (MPI) strategy including a double parallelization in the spatial and ensemble dimensions. An example application is then given to illustrate the implementation, performance, and potential use of this novel probabilistic modelling tool. A large ensemble of 50 global ocean–sea-ice hindcasts has been performed over the period 1960–2015 at eddy-permitting resolution (1∕4°) for the OCCIPUT (oceanic chaos – impacts, structure, predictability) project. This application aims to simultaneously simulate the intrinsic/chaotic and the atmospherically forced contributions to the ocean variability, from mesoscale turbulence to interannual-to-multidecadal timescales. Such an ensemble indeed provides a unique way to disentangle and study both contributions, as the forced variability may be estimated through the ensemble mean, and the intrinsic chaotic variability may be estimated through the ensemble spread.
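
The double parallelization described above lends itself to a communicator-splitting pattern. The following mpi4py sketch is only a schematic illustration of that idea (not the NEMO/OCCIPUT code), showing how a global communicator might be split in the spatial and ensemble dimensions, and how the ensemble mean (forced signal) and spread (intrinsic signal) could then be reduced across members; the ensemble size, field size, and perturbation are assumptions.

```python
# Minimal sketch (not the NEMO/OCCIPUT implementation): double parallelization
# of an ensemble run by splitting MPI_COMM_WORLD in the spatial and ensemble
# dimensions, then estimating forced (mean) and intrinsic (spread) signals.
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
n_members = 5                                        # assumed ensemble size for this toy example
assert world.Get_size() % n_members == 0             # total ranks must divide evenly among members
ranks_per_member = world.Get_size() // n_members

member_id = world.Get_rank() // ranks_per_member     # which ensemble member this rank belongs to
# Spatial communicator: domain decomposition within one member
spatial_comm = world.Split(color=member_id, key=world.Get_rank())
# Ensemble communicator: the same subdomain across all members
ensemble_comm = world.Split(color=spatial_comm.Get_rank(), key=member_id)

# Toy "model state" on this subdomain, perturbed differently per member
rng = np.random.default_rng(seed=member_id)
local_field = 10.0 + rng.standard_normal(1000)

# Forced variability ~ ensemble mean; intrinsic variability ~ ensemble spread
sum_across_members = np.empty_like(local_field)
ensemble_comm.Allreduce(local_field, sum_across_members, op=MPI.SUM)
ens_mean = sum_across_members / n_members

sq_dev = (local_field - ens_mean) ** 2
sum_sq = np.empty_like(sq_dev)
ensemble_comm.Allreduce(sq_dev, sum_sq, op=MPI.SUM)
ens_spread = np.sqrt(sum_sq / (n_members - 1))
```

Run with, for example, mpirun -np 20 python ensemble_sketch.py, so that 5 members each use 4 spatial ranks (the script name is hypothetical).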



2021 ◽  
Author(s):  
Maciej Muzyka ◽  
Jaromir Jakacki ◽  
Anna Przyborska

The Regional Ocean Modelling System (ROMS) is being implemented for the Baltic Sea region. A preliminary curvilinear grid with a horizontal resolution of ca. 2.3 km has been prepared, based on the grid used in previous applications in our research group (in the Parallel Ocean Program and in the standalone version of the Los Alamos Sea Ice Model, CICE). The grid currently has 30 sigma layers; the final number of levels will be adjusted as needed.

So far we have successfully compiled the model on our machine, run test cases, and created a Baltic Sea case that works with the aforementioned Baltic grid. Air pressure, humidity, surface temperature, long- and shortwave radiation, precipitation, and wind components are used as atmospheric forcing. These data come from our operational atmospheric model, the Weather Research and Forecasting Model (WRF).

Our main goal is to create an efficient system for hindcast and forecast simulations of the Baltic Sea, including a sea-ice component, by coupling ROMS with CICE. These two models were chosen because of the active communities that maintain and develop them. The authors also intend to work more closely with the CICE model to improve its agreement with satellite measurements in the Baltic region.

Calculations were carried out at the Academic Computer Centre in Gdańsk.
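
As a purely hypothetical illustration of the forcing configuration listed above, the following xarray sketch assembles the named fields into a single NetCDF forcing file; the variable names, grid size, units, and values are placeholders and do not reflect the actual ROMS/WRF setup.

```python
# Hypothetical sketch only: bundling the listed WRF forcing fields into one
# NetCDF file. Variable names, grid dimensions, and units are illustrative
# assumptions, not the configuration described in the abstract.
import numpy as np
import xarray as xr

ny, nx, nt = 120, 100, 24                 # assumed toy grid and hourly records
time = np.arange(nt)

def toy_field(mean, amp):
    """Placeholder field: mean value plus random variation."""
    return mean + amp * np.random.standard_normal((nt, ny, nx))

forcing = xr.Dataset(
    {
        "pair":  (("time", "eta", "xi"), toy_field(1013.0, 5.0)),   # air pressure [hPa]
        "qair":  (("time", "eta", "xi"), toy_field(0.7, 0.1)),      # humidity [0-1]
        "tair":  (("time", "eta", "xi"), toy_field(10.0, 3.0)),     # surface air temperature [degC]
        "lwrad": (("time", "eta", "xi"), toy_field(320.0, 20.0)),   # longwave radiation [W m-2]
        "swrad": (("time", "eta", "xi"), toy_field(150.0, 80.0)),   # shortwave radiation [W m-2]
        "rain":  (("time", "eta", "xi"), toy_field(1e-5, 1e-5)),    # precipitation [kg m-2 s-1]
        "uwind": (("time", "eta", "xi"), toy_field(3.0, 2.0)),      # zonal wind [m s-1]
        "vwind": (("time", "eta", "xi"), toy_field(1.0, 2.0)),      # meridional wind [m s-1]
    },
    coords={"time": time},
)
forcing.to_netcdf("baltic_forcing_sketch.nc")
```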


2018 ◽  
Vol 11 (10) ◽  
pp. 3983-3997 ◽  
Author(s):  
Vladimir V. Kalmykov ◽  
Rashit A. Ibrayev ◽  
Maxim N. Kaurkin ◽  
Konstantin V. Ushakov

Abstract. We present a new version of the Compact Modeling Framework (CMF3.0), developed to provide a software environment for stand-alone and coupled global geophysical fluid models. CMF3.0 is designed for high- and ultrahigh-resolution models running on massively parallel supercomputers. The key features of the previous version, CMF2.0, are recalled to reflect the progress in our research. In CMF3.0, the message passing interface (MPI) approach with a high-level abstract driver, optimized coupler interpolation, and I/O algorithms is replaced with a communication scheme based on the Partitioned Global Address Space (PGAS) paradigm, while the central hub architecture evolves into a set of simultaneously working services. Performance tests for both versions are carried out. In addition, we present information about the parallel implementation of the EnOI (Ensemble Optimal Interpolation) data assimilation method and of the nesting technology as program services of CMF3.0.
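
Because the abstract only names EnOI, a brief numpy sketch of the textbook Ensemble Optimal Interpolation analysis step may help indicate what such a data assimilation service computes; the static ensemble, observation operator, covariances, and sizes below are illustrative assumptions, not CMF3.0 code.

```python
# Minimal numpy sketch of an Ensemble Optimal Interpolation (EnOI) analysis
# step in the generic textbook form x_a = x_b + K (y - H x_b), with the
# background covariance B estimated from a static ensemble. All sizes and
# values are toy assumptions and do not come from CMF3.0.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_members = 50, 5, 20
alpha = 0.7                                            # assumed scaling of the static-ensemble covariance

static_ensemble = rng.standard_normal((n_state, n_members))
anomalies = static_ensemble - static_ensemble.mean(axis=1, keepdims=True)
B = alpha * anomalies @ anomalies.T / (n_members - 1)  # background error covariance

H = np.zeros((n_obs, n_state))                         # observation operator: sample 5 state points
H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0
R = 0.5 * np.eye(n_obs)                                # observation error covariance

x_b = rng.standard_normal(n_state)                     # background state
y = H @ x_b + rng.standard_normal(n_obs)               # synthetic observations

# Kalman-type gain and analysis update
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
```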


2021 ◽  
Author(s):  
Julia Selivanova ◽  
Doroteaciro Iovino

Ocean reanalyses (ORAs) are used extensively in polar research, so their realism should be assessed regularly. Here, the performance of ORAs in the Antarctic region is analyzed, with specific emphasis on sea ice concentration and thickness. We use four global ocean–sea-ice products: C-GLORSv7, FOAM-GLOSEA5v13, GLORYS2v4, and ORAS5, together with their ensemble mean GREP (provided by CMEMS), over the period 1993–2018. All ORAs use the NEMO ocean model in a global eddy-permitting configuration (1/4° horizontal resolution and 75 vertical levels) and are forced by the ECMWF ERA-Interim atmospheric reanalysis.

We examine the ability of the ORAs to reproduce sea ice properties in the Southern Ocean, taking into account regional characteristics and sea ice types. Seasonal and interannual variability of sea ice concentration (SIC) and sea ice thickness (SIT) is examined in the hemispheric domain and in five sub-regions for three sea ice classes: pack ice (SIC ≥ 80%), the marginal ice zone (MIZ; 15% ≤ SIC < 80%), and sparse ice (0 < SIC < 15%). Modeled sea ice properties are compared to a set of satellite products: NSIDC CDR, Ifremer/CERSAT, and EUMETSAT OSI-SAF for SIC, and Envisat and CryoSat-2 for SIT, together with the PIOMAS and GIOMAS reanalyses. We identify shortcomings in the representation of Antarctic sea ice that should be addressed in future reanalysis systems. We also focus on the assessment of the GREP ensemble mean product and find that, for certain metrics, GREP reduces the errors of and outperforms the individual members. The evidence from this study implies that GREP can be a feasible product for a number of applications.
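
The SIC thresholds above fully define the three ice classes; the following numpy sketch (with a synthetic concentration field and assumed cell areas, not the reanalysis data) illustrates how such a classification and the corresponding class areas could be computed.

```python
# Illustrative sketch: classify grid cells into the three sea ice classes
# defined above (pack ice, MIZ, sparse ice) from a sea ice concentration
# field, and sum the area in each class. The SIC field and cell areas are
# synthetic placeholders, not reanalysis output.
import numpy as np

rng = np.random.default_rng(1)
sic = np.clip(rng.random((180, 360)), 0.0, 1.0)      # fractional SIC on a toy grid
cell_area = np.full_like(sic, 25.0 * 25.0)           # assumed ~25 km x 25 km cells [km^2]

classes = {
    "pack ice":   (sic >= 0.80),
    "MIZ":        (sic >= 0.15) & (sic < 0.80),
    "sparse ice": (sic > 0.0) & (sic < 0.15),
}

for name, mask in classes.items():
    extent = (cell_area * mask).sum()                # total area covered by this class
    print(f"{name:10s}: {extent:12.0f} km^2")
```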


1997 ◽  
Vol 25 ◽  
pp. 111-115 ◽  
Author(s):  
Achim Stössel

This paper investigates the long-term impact of sea ice on global climate using a global sea-ice–ocean general circulation model (OGCM). The sea-ice component involves state-of-the-art dynamics; the ocean component consists of a 3.5° × 3.5° × 11 layer primitive-equation model. Depending on the physical description of sea ice, significant changes are detected in the convective activity, in the hydrographic properties and in the thermohaline circulation of the ocean model. Most of these changes originate in the Southern Ocean, emphasizing the crucial role of sea ice in this marginally stably stratified region of the world's oceans. Specifically, if the effect of brine release is neglected, the deep layers of the Southern Ocean warm up considerably; this is associated with a weakening of the Southern Hemisphere overturning cell. The removal of the commonly used “salinity enhancement” leads to a similar effect. The deep-ocean salinity is almost unaffected in both experiments. Introducing explicit new-ice thickness growth in partially ice-covered gridcells leads to a substantial increase in convective activity, especially in the Southern Ocean, with a concomitant significant cooling and salinification of the deep ocean. Possible mechanisms for the resulting interactions between sea-ice processes and deep-ocean characteristics are suggested.
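
As a schematic, order-of-magnitude illustration of the brine-release mechanism discussed above (not the model's actual parameterization), the following sketch estimates the mixed-layer salinity increase caused by new-ice formation; all densities, salinities, and depths are assumed values.

```python
# Schematic illustration only: first-order effect of brine release on
# mixed-layer salinity when new sea ice of thickness dh forms from water of
# salinity s_water. All numbers are assumed, order-of-magnitude values.
RHO_ICE = 900.0       # sea-ice density [kg m-3]
RHO_WATER = 1025.0    # seawater density [kg m-3]

def brine_release_dS(dh_ice, s_water=34.5, s_ice=5.0, mixed_layer_depth=50.0):
    """Salinity increase [g/kg] of a mixed layer of the given depth [m] when a
    layer of new ice of thickness dh_ice [m] forms, rejecting the salt
    difference between seawater (s_water) and ice (s_ice) [g/kg]."""
    rejected_salt = RHO_ICE * dh_ice * (s_water - s_ice)      # [g of salt per m2]
    return rejected_salt / (RHO_WATER * mixed_layer_depth)    # [g/kg]

# Example: 10 cm of new ice over a 50 m mixed layer
print(f"dS = {brine_release_dS(0.10):.3f} g/kg")
```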


2020 ◽  
Vol 15 ◽  
Author(s):  
Weiwen Zhang ◽  
Long Wang ◽  
Theint Theint Aye ◽  
Juniarto Samsudin ◽  
Yongqing Zhu

Background: Genotype imputation as a service enables researchers to estimate genotypes on haplotyped data without performing whole-genome sequencing. However, genotype imputation is computation-intensive, and it remains a challenge to satisfy the high-performance requirements of genome-wide association studies (GWAS).
Objective: In this paper, we propose a high-performance computing solution for genotype imputation on supercomputers to enhance its execution performance.
Method: We design and implement a multi-level parallelization comprising job-level, process-level, and thread-level parallelization, enabled by job scheduling management, the message passing interface (MPI), and OpenMP, respectively. It involves job distribution, chunk partition and execution, parallelized iteration for imputation, and data concatenation. Owing to this multi-level design, we can exploit multi-machine/multi-core architectures to improve the performance of genotype imputation.
Results: Experimental results show that the proposed method outperforms a Hadoop-based implementation of genotype imputation. We also conduct experiments on supercomputers to evaluate the proposed method; the evaluation shows that it significantly shortens the execution time.
Conclusion: The proposed multi-level parallelization, when deployed as imputation as a service, will help bioinformatics researchers in Singapore to conduct genotype imputation and enhance association studies.
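
A minimal sketch of two of the three parallelization levels (process-level chunk distribution via MPI and thread-level parallelism within a process) is given below; the chunking scheme and the impute_chunk placeholder are assumptions for illustration and stand in for the actual imputation tool.

```python
# Minimal sketch of process-level chunk distribution with MPI plus
# thread-level parallelism within each process. The "impute_chunk" function
# is a placeholder for a real imputation step; the chunking is assumed.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_chunks = 64                                   # assumed number of genomic chunks
my_chunks = list(range(rank, n_chunks, size))   # round-robin chunk partition across MPI ranks

def impute_chunk(chunk_id):
    """Placeholder for imputing one genomic chunk (returns a toy result)."""
    rng = np.random.default_rng(chunk_id)
    return chunk_id, rng.random(10)             # e.g. imputed dosages for 10 variants

# Thread-level parallelism within this process (a Python analogue of OpenMP)
with ThreadPoolExecutor(max_workers=4) as pool:
    local_results = list(pool.map(impute_chunk, my_chunks))

# Data concatenation: gather all per-chunk results on rank 0 and order them
all_results = comm.gather(local_results, root=0)
if rank == 0:
    merged = sorted((r for part in all_results for r in part), key=lambda x: x[0])
    print(f"imputed {len(merged)} chunks")
```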


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2284
Author(s):  
Krzysztof Przystupa ◽  
Mykola Beshley ◽  
Olena Hordiichuk-Bublivska ◽  
Marian Kyryk ◽  
Halyna Beshley ◽  
...  

Analyzing large amounts of user data to determine preferences and, on that basis, recommend new products is an important problem. Depending on the correctness and timeliness of the recommendations, significant profits or losses can result. Analyzing data on the users of a company's services is carried out by dedicated recommendation systems. However, with a large number of users, the volume of data to be processed grows considerably, which complicates the operation of recommendation systems. For efficient data analysis in commercial systems, the Singular Value Decomposition (SVD) method can be used for intelligent analysis of the information. For large amounts of data, we propose to use distributed systems; this approach reduces the time needed for data processing and for delivering recommendations to users. For the experimental study, we implemented the distributed SVD method using Message Passing Interface, Hadoop, and Spark technologies, and showed that data processing time is reduced when using distributed systems compared with non-distributed ones.
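
One common way to distribute a truncated SVD of a tall user-item matrix is sketched below with mpi4py and numpy: each rank holds a block of user rows, the small item-by-item Gram matrix is summed globally, and its eigenvectors yield the item factors. This is a generic illustration of the idea, not the implementation evaluated in the paper; all matrix sizes are assumptions.

```python
# Illustrative sketch of a distributed truncated SVD of a tall user-item
# rating matrix: each MPI rank holds a block of user rows, the item-by-item
# Gram matrix A^T A is summed across ranks, and its eigenvectors give the
# right singular vectors V. Matrix sizes and the rank k are toy assumptions.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_items, local_users, k = 200, 1000, 10
rng = np.random.default_rng(rank)
local_ratings = rng.random((local_users, n_items))   # this rank's block of the rating matrix

# Sum the small n_items x n_items Gram matrix over all ranks
local_gram = local_ratings.T @ local_ratings
gram = np.empty_like(local_gram)
comm.Allreduce(local_gram, gram, op=MPI.SUM)

# Eigendecomposition of A^T A gives squared singular values and right vectors V
eigvals, eigvecs = np.linalg.eigh(gram)
order = np.argsort(eigvals)[::-1][:k]
singular_values = np.sqrt(np.maximum(eigvals[order], 0.0))
V_k = eigvecs[:, order]                               # top-k item factors

# Each rank projects its own users onto the item factors (user factors times S)
local_user_factors = local_ratings @ V_k
if rank == 0:
    print("top singular values:", np.round(singular_values[:5], 2))
```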


1996 ◽  
Vol 22 (6) ◽  
pp. 789-828 ◽  
Author(s):  
William Gropp ◽  
Ewing Lusk ◽  
Nathan Doss ◽  
Anthony Skjellum
