The Scalability of Embedded Structured Grids and Unstructured Grids in Large Scale Ice Sheet Modeling on Distributed Memory Parallel Computers

Author(s):  
Phillip M. Dickens ◽  
Christopher Dufour ◽  
James Fastook
2008 ◽  
Vol 05 (02) ◽  
pp. 273-287
Author(s):  
LI CHEN ◽  
HIROSHI OKUDA

This paper describes a parallel visualization library for large-scale datasets developed in the HPC-MW project. Three parallel frameworks are provided in the library to satisfy the different requirements of applications, and the library is applicable to a variety of mesh types covering particles, structured grids and unstructured grids. Many techniques have been employed to improve the quality of the visualization, and high speedup has been achieved through hardware-oriented optimization strategies on different platforms, from PC clusters to the Earth Simulator. Good results have been obtained on typical parallel platforms, demonstrating the feasibility and effectiveness of our library.
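
As a rough illustration of why a single library can cover the three mesh families named above, the sketch below shows the minimal data each family carries. These Python types are our own illustration, not the HPC-MW API: only the unstructured grid needs explicit connectivity, while a structured grid's connectivity is implicit in its array shape.

from dataclasses import dataclass
import numpy as np

@dataclass
class ParticleSet:
    positions: np.ndarray   # (n, 3) particle coordinates
    values: np.ndarray      # (n,) scalar field sampled at the particles

@dataclass
class StructuredGrid:
    origin: tuple           # (x0, y0, z0) of the first grid point
    spacing: tuple          # (dx, dy, dz); connectivity is implicit
    values: np.ndarray      # (nx, ny, nz) scalar field on the lattice

@dataclass
class UnstructuredGrid:
    points: np.ndarray      # (n, 3) node coordinates
    cells: np.ndarray       # (m, k) node indices of each cell (explicit)
    values: np.ndarray      # (n,) scalar field at the nodes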


2021 ◽  
Author(s):  
Michele Petrini ◽  
Miren Vizcaino ◽  
Raymond Sellevold ◽  
Laura Muntjewerf ◽  
Sotiria Georgiou ◽  
...  

Previous coupled climate-ice sheet modeling studies indicate that the warming threshold leading to multi-millennial, large-scale deglaciation of the Greenland Ice Sheet (GrIS) is in the range of 1.6-3.0 K above the pre-industrial climate. These studies used either an intermediate-complexity RCM (Robinson et al. 2012) or a low-resolution GCM (Gregory et al. 2020) coupled to a zero-order ISM. Here, we investigate the warming threshold and long-term response time of the GrIS using the higher-order Community Ice Sheet Model version 2 (CISM2, Lipscomb et al. 2019), forced with surface mass balance (SMB) calculated with the Community Earth System Model version 2 (CESM2, Danabasoglu et al. 2020). We use different forcing climatologies from a coupled CESM2/CISM2 simulation under high greenhouse gas forcing (Muntjewerf et al. 2020), where each climatology corresponds to a different global warming level in the range of 1-8.5 K above the pre-industrial climate. The SMB, which is calculated in CESM2 using an advanced energy balance scheme at multiple elevation classes (Muntjewerf et al. 2020), is downscaled at runtime to CISM2, which allows the surface elevation feedback to be accounted for. In all the simulations the forcing is cycled until the ice sheet is fully deglaciated or has reached a new equilibrium. In a first set of simulations, we find that for a warming level higher than 5.2 K above pre-industrial the ice sheet will disappear, with the timing ranging between 2000 (+8.5 K) and 6000 years (+5.2 K). At a warming level of 2.8 K above pre-industrial, the ice loss does not exceed 2 m SLE, and most of the retreat occurs in the first 10,000 years in the south-west and central-west basins. In contrast, at a higher warming level of 3.6 K above pre-industrial, as much as 7 m SLE of ice is lost in 20,000 years, with primary contributions from the western, northern and north-eastern basins. We will conclude by showing preliminary results from a second set of simulations focusing on the interval between 2.8 and 3.6 K of warming above pre-industrial.
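
A minimal sketch of the elevation-class downscaling idea, in hypothetical Python rather than actual CESM2/CISM2 coupler code: the climate model provides SMB at a fixed set of surface elevations within each grid cell, and the ice-sheet model interpolates between the bracketing classes at its actual surface elevation, so a lowering surface immediately receives a more negative SMB.

import numpy as np

def downscale_smb(class_elevs, class_smb, surface_elev):
    # class_elevs:  elevations of the fixed elevation classes [m], ascending
    # class_smb:    SMB computed by the climate model at each class [kg m-2 yr-1]
    # surface_elev: actual ice-sheet surface elevation at this point [m]
    # Piecewise-linear interpolation between the two bracketing classes.
    return float(np.interp(surface_elev, class_elevs, class_smb))

# Illustrative numbers only: as the surface lowers from 2000 m to 1500 m,
# the SMB it receives drops with it (the surface elevation feedback).
elevs = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0, 2500.0])
smb = np.array([-4000.0, -2500.0, -1200.0, -300.0, 200.0, 400.0])
print(downscale_smb(elevs, smb, 2000.0))   # 200.0
print(downscale_smb(elevs, smb, 1500.0))   # -300.0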


2005 ◽  
Vol 51 (172) ◽  
pp. 3-14 ◽  
Author(s):  
Fabien Gillet-Chaulet ◽ 
Olivier Gagliardini ◽  
Jacques Meyssonnier ◽  
Maurine Montagnat ◽  
Olivier Castelnau

For accurate ice-sheet flow modelling, the anisotropic behaviour of ice must be taken fully into account. However, physically based micro-macro (μ-M) models for the behaviour of an anisotropic ice polycrystal are too complex to be implemented easily in large-scale ice-sheet flow models. An easy and efficient method to remedy this is presented. Polar ice is assumed to behave as a linearly viscous orthotropic material whose general flow law (GOLF) depends on six parameters, and its orthotropic fabric is described by an ‘orientation distribution function’ (ODF) depending on two parameters. A method to pass from the ODF to a discrete description of the fabric, and vice versa, is presented. Considering any available μ-M model, the parameters of the GOLF that fit the response obtained by running this μ-M model are calculated for any set of ODF parameters. It is thus possible to tabulate the GOLF over a grid in the space of the ODF parameters. This step is performed once and for all. Ice-sheet flow models then only need the general form of the GOLF to be implemented in the code (once); during each individual run, the GOLF parameters are retrieved from the table by interpolation. As an application example, the GOLF is tabulated using three different μ-M models and used to derive the rheological properties of ice along the Greenland Ice Core Project (GRIP) ice core.
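
The tabulate-once, interpolate-per-run scheme can be sketched as follows. This is illustrative Python, not the authors' code: golf_from_micro_macro stands in for whichever μ-M model is being fitted, the two ODF parameters are called k1 and k2 here, and the GOLF is reduced to its six scalar parameters.

import numpy as np

def tabulate_golf(golf_from_micro_macro, k1_grid, k2_grid):
    # Done once and for all: fit the 6 GOLF parameters at every point
    # of a grid in the space of the two ODF parameters.
    table = np.empty((len(k1_grid), len(k2_grid), 6))
    for i, k1 in enumerate(k1_grid):
        for j, k2 in enumerate(k2_grid):
            table[i, j] = golf_from_micro_macro(k1, k2)
    return table

def lookup_golf(table, k1_grid, k2_grid, k1, k2):
    # Done during each run: bilinear interpolation of the tabulated
    # GOLF parameters at the current fabric (k1, k2).
    i = np.clip(np.searchsorted(k1_grid, k1) - 1, 0, len(k1_grid) - 2)
    j = np.clip(np.searchsorted(k2_grid, k2) - 1, 0, len(k2_grid) - 2)
    t = (k1 - k1_grid[i]) / (k1_grid[i + 1] - k1_grid[i])
    u = (k2 - k2_grid[j]) / (k2_grid[j + 1] - k2_grid[j])
    return ((1 - t) * (1 - u) * table[i, j] + t * (1 - u) * table[i + 1, j]
            + (1 - t) * u * table[i, j + 1] + t * u * table[i + 1, j + 1])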


Fluids ◽  
2021 ◽  
Vol 6 (11) ◽  
pp. 395
Author(s):  
Hui Liu ◽  
Zhangxin Chen ◽  
Xiaohu Guo ◽  
Lihua Shen

Reservoir simulation solves a set of fluid-flow equations through porous media: partial differential equations from the petroleum engineering industry, based on Darcy's law. This paper introduces the model, numerical methods, algorithms and parallel implementation of a thermal reservoir simulator designed for numerical simulations of a thermal reservoir with multiple components in a three-dimensional domain on distributed-memory parallel computers. Its full mathematical model is introduced, with correlations for important properties and well modeling. Efficient numerical methods (discretization scheme, matrix decoupling methods, and preconditioners), parallel computing technologies, and implementation details are presented. The numerical methods applied in this paper are efficient and scalable, and are suitable for large-scale thermal reservoir simulations with tens of thousands of CPU cores (MPI processes). The simulator is designed for giant models with billions or even trillions of grid blocks using hundreds of thousands of CPUs, which is our main focus. For validation, the simulator is compared with CMG STARS, one of the most popular and mature commercial thermal simulators. Numerical experiments show that our results match those of the commercial simulator, which confirms the correctness of our methods and implementations. A SAGD simulation with 7406 well pairs is also presented to study the effectiveness of our numerical methods. Scalability tests demonstrate that our simulator can handle giant models with billions of grid blocks using 100,800 CPU cores, and that the simulator has good scalability.
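
For reference, the governing equations the abstract alludes to take the following standard form in generic notation (the paper's exact correlations and well models are not reproduced here). Darcy's law gives the velocity of each phase \alpha,

\mathbf{u}_\alpha = -\frac{k_{r\alpha}}{\mu_\alpha}\,\mathbf{K}\left(\nabla p_\alpha - \rho_\alpha g\,\nabla z\right),

and mass conservation of each component c couples the phases,

\frac{\partial}{\partial t}\left(\phi \sum_\alpha \rho_\alpha S_\alpha x_{c\alpha}\right) + \nabla\cdot\left(\sum_\alpha \rho_\alpha x_{c\alpha}\,\mathbf{u}_\alpha\right) = q_c,

where \mathbf{K} is the permeability tensor, k_{r\alpha}, \mu_\alpha, p_\alpha, \rho_\alpha and S_\alpha are the relative permeability, viscosity, pressure, density and saturation of phase \alpha, \phi is the porosity, x_{c\alpha} is the mass fraction of component c in phase \alpha, and q_c is a well source/sink term. A thermal simulator additionally solves an energy balance equation of the same conservative form.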


2021 ◽  
Vol 26 ◽  
pp. 1-67
Author(s):  
Patrick Dinklage ◽  
Jonas Ellert ◽  
Johannes Fischer ◽  
Florian Kurpicz ◽  
Marvin Löbel

We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique exploits the structure of wavelet trees (the characters represented in a node of the tree are refined with increasing depth) in the opposite direction, by first computing the leaves (most refined) and then propagating this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and in external memory. Based on these results, we adapt the algorithms to parallel computers, addressing both shared-memory and distributed-memory settings. In practice, all our algorithms outperform previous ones in both time and memory efficiency, because all auxiliary information can be computed solely from the information obtained when computing the leaves. Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.
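
A compact sequential sketch of the bottom-up idea in Python; this is our illustration of the general technique, not the paper's optimized code. The histogram of the leaves is computed once, coarsened upwards to yield the node borders of every level, and each level's bit vector is then filled in one stable scan of the text.

from itertools import accumulate

def wavelet_tree_levelwise(text, height):
    # text: integer codes in [0, 2**height); returns one bit list per level.
    n = len(text)
    levels = [[0] * n for _ in range(height)]

    # Level 0 keeps text order: its bits are just the most significant bits.
    for i, c in enumerate(text):
        levels[0][i] = (c >> (height - 1)) & 1

    # Leaves (most refined): histogram of the full codes, computed once.
    hist = [0] * (1 << height)
    for c in text:
        hist[c] += 1

    # Propagate the counts upwards: level l has 2**l nodes.
    for l in range(height - 1, 0, -1):
        # Merge sibling counts of the level below to coarsen the histogram.
        hist = [hist[2 * v] + hist[2 * v + 1] for v in range(1 << l)]
        # Exclusive prefix sums give each node's start ("border") in the level.
        borders = [0] + list(accumulate(hist[:-1]))
        # One stable scan over the text fills the level's bit vector.
        for c in text:
            node = c >> (height - l)              # top-l bits = node id
            levels[l][borders[node]] = (c >> (height - 1 - l)) & 1
            borders[node] += 1
    return levels

# Example over the alphabet {0, 1, 2, 3} (height 2):
print(wavelet_tree_levelwise([0, 3, 1, 2], 2))   # [[0, 1, 0, 1], [0, 1, 1, 0]]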

