High-resolution stochastic downscaling method for ocean forecasting models and its application to the Red Sea dynamics

Ocean Science ◽  
2021 ◽  
Vol 17 (4) ◽  
pp. 891-907
Author(s):  
Georgy I. Shapiro ◽  
Jose M. Gonzalez-Ondina ◽  
Vladimir N. Belokopytov

Abstract. High-resolution modelling of a large ocean domain requires significant computational resources. The main purpose of this study is to develop an efficient tool for downscaling lower-resolution data such as those available from the Copernicus Marine Environment Monitoring Service (CMEMS). Common methods of downscaling CMEMS ocean models utilise their lower-resolution output as boundary conditions for local, higher-resolution hydrodynamic ocean models. Such methods reveal greater detail in the spatial distribution of ocean variables; however, they increase the cost of computations and often reduce the model skill due to the so-called “double penalty” effect. This effect is a common problem for many high-resolution models in which predicted features are displaced in space or time. This paper presents a stochastic–deterministic downscaling (SDD) method, an efficient tool for downscaling ocean models based on a combination of deterministic and stochastic approaches. The ability of the SDD method is first demonstrated in an idealised case where the true solution is known a priori. The method is then applied to create an operational Stochastic Model of the Red Sea (SMORS), with the parent model being the Mercator Global Ocean Analysis and Forecast System at 1/12° resolution. The stochastic component of the model is data-driven rather than equation-driven, and it is applied to areas smaller than the Rossby radius, within which distributions of ocean variables are more coherent than over larger distances. The method, based on objective analysis, is similar to that used for data assimilation in ocean models and stems from the philosophy of 2-D turbulence. SMORS produces finer-resolution (1/24° latitude mesh) oceanographic data using the output from a coarser-resolution (1/12° mesh) parent model available from CMEMS. The values on the fine-resolution mesh are computed by minimising a cost function that represents the error between the model and the true solution. SMORS has been validated against sea surface temperature and Argo float observations. Comparisons show that the model and observations are in good agreement and that SMORS is not subject to the “double penalty” effect. SMORS is very fast to run on a typical desktop PC and can be relocated to another area of the ocean.
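The Gauss–Markov machinery behind such an objective analysis can be sketched in a few lines: fine-mesh values are weighted sums of coarse-mesh values, with weights chosen to minimise the expected squared error (the cost function). This is a generic illustration, not the SMORS code; the Gaussian covariance model, the decorrelation scale `L` and all names are assumptions made for the example.

```python
import numpy as np

def gaussian_cov(d, L):
    """Isotropic Gaussian covariance with decorrelation scale L (an assumed form)."""
    return np.exp(-(d / L) ** 2)

def oi_downscale(coarse_xy, coarse_vals, fine_xy, L, noise=1e-3):
    """Objective-analysis (Gauss-Markov) interpolation from a coarse mesh to a
    fine mesh.  The weights minimise the expected squared error of the analysis."""
    # Covariance between coarse points, with a small noise term on the diagonal
    d_cc = np.linalg.norm(coarse_xy[:, None, :] - coarse_xy[None, :, :], axis=-1)
    C = gaussian_cov(d_cc, L) + noise * np.eye(len(coarse_xy))
    # Covariance between each fine point and the coarse points
    d_fc = np.linalg.norm(fine_xy[:, None, :] - coarse_xy[None, :, :], axis=-1)
    c = gaussian_cov(d_fc, L)
    # Analysis: fine values = mean + (c C^{-1}) @ coarse anomalies
    mean = coarse_vals.mean()
    return mean + c @ np.linalg.solve(C, coarse_vals - mean)
```

Applied over sub-Rossby-radius neighbourhoods, where the field is coherent, such weights recover fine-scale structure from the coarse mesh without rerunning a dynamical model.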


2018 ◽  
Author(s):  
Vladimir V. Kalmykov ◽  
Rashit A. Ibrayev ◽  
Maxim N. Kaurkin ◽  
Konstantin V. Ushakov

Abstract. We present a new version of the Compact Modeling Framework (CMF3.0), developed to provide the software environment for stand-alone and coupled models of global geophysical fluids. CMF3.0 is designed for implementing high- and ultra-high-resolution models on massively parallel supercomputers. The key features of the previous version (CMF2.0) are summarised to reflect the progress of our research. In CMF3.0, the pure MPI approach with a high-level abstract driver and optimised coupler interpolation and I/O algorithms is replaced with a PGAS-paradigm communication scheme, while the central-hub architecture evolves into a set of simultaneously working services. Performance tests for both versions are carried out. In addition, a parallel implementation of the EnOI (Ensemble Optimal Interpolation) data assimilation method as a program service of CMF3.0 is presented.
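The EnOI analysis step mentioned above has a standard textbook form: the background state is corrected toward the observations using a Kalman-type gain built from a static ensemble's covariance. The sketch below is a generic illustration of that update, not CMF3.0's service code; all names, shapes and the scaling factor are assumptions for the example.

```python
import numpy as np

def enoi_update(xb, ens, H, y, R, alpha=1.0):
    """One Ensemble Optimal Interpolation analysis step.
    xb    : background state, shape (n,)
    ens   : static ensemble of states, shape (n, m), used only for covariances
    H     : linear observation operator, shape (p, n)
    y     : observations, shape (p,)
    R     : observation-error covariance, shape (p, p)
    alpha : scaling factor applied to the static ensemble covariance
    """
    A = ens - ens.mean(axis=1, keepdims=True)        # ensemble anomalies
    m = ens.shape[1]
    Pb = alpha * (A @ A.T) / (m - 1)                 # static background covariance
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)   # Kalman gain
    return xb + K @ (y - H @ xb)                     # analysis state
```

Because the ensemble is static, only the innovation and gain need recomputing each cycle, which is what makes EnOI attractive as a lightweight assimilation service.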


2017 ◽  
Vol 10 (1) ◽  
pp. 499-523 ◽  
Author(s):  
Jason Holt ◽  
Patrick Hyder ◽  
Mike Ashworth ◽  
James Harle ◽  
Helene T. Hewitt ◽  
...  

Abstract. Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, the hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ∼ 8 % of regions < 500 m deep, but this increases to ∼ 70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic Nucleus for European Modelling of the Ocean (NEMO) simulations; the latter includes tides and a k-ε vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ∼ 1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
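The cost-model reasoning above can be illustrated with a toy calculation: if model cost grows as a power of the refinement ratio (two horizontal dimensions plus a CFL-limited time step suggest a cubic law) and available compute grows by a fixed annual factor, the waiting time until a finer model is "affordable" follows directly. Both the exponent and the growth factor here are illustrative assumptions, not the paper's calibrated figures.

```python
import math

def years_until_affordable(res_ratio, annual_growth=1.6, exponent=3):
    """Years until a model `res_ratio` times finer than a reference costs the
    same share of the compute facility as the reference does today.

    Assumes cost ~ res_ratio**exponent and facility performance growing by
    `annual_growth` per year (both illustrative assumptions)."""
    cost_factor = res_ratio ** exponent
    return math.log(cost_factor) / math.log(annual_growth)

# e.g. refining from 1/4 deg to 1/72 deg is an 18x refinement:
wait = years_until_affordable(18)
```

With a cubic cost law, an 18-fold refinement costs ~5800 times more, so even fast compute growth implies a wait of well over a decade, which is consistent in spirit with the decadal timescales the abstract reports.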


2007 ◽  
Vol 16 (1-2) ◽  
pp. 76-94 ◽  
Author(s):  
William J. Merryfield ◽  
Robert B. Scott


Author(s):  
Eric P. Chassignet ◽  
Xiaobiao Xu

Abstract. Eddying global ocean models are now routinely used for ocean prediction, and the added value of a better representation of the observed ocean variability and western boundary currents at that resolution is currently being evaluated in climate models. This overview article begins with a brief summary of the impact on ocean model biases of resolving eddies in several global ocean-sea ice numerical simulations. Then, a series of North and Equatorial Atlantic configurations are used to show that an increase of the horizontal resolution from eddy-resolving to submesoscale-enabled, together with the inclusion of high-resolution bathymetry and tides, significantly improves the models’ ability to represent the observed ocean variability and western boundary currents. However, the computational cost of these simulations is extremely large, and for these simulations to become routine, close collaborations with computer scientists are essential to ensure that numerical codes can take full advantage of the latest computing architecture.


2020 ◽  
Author(s):  
Ric Crocker ◽  
Jan Maksymczuk ◽  
Marion Mittermaier ◽  
Marina Tonani ◽  
Christine Pequignet

Abstract. The Met Office currently runs two operational ocean forecasting configurations for the North West European Shelf: an eddy-permitting model with a resolution of 7 km (AMM7) and an eddy-resolving model at 1.5 km (AMM15). Whilst qualitative assessments have demonstrated the benefits brought by the increased resolution of AMM15, particularly in the ability to resolve fine-scale features, it has been difficult to show this quantitatively, especially in forecast mode. Application of typical assessment metrics such as the root mean square error has been inconclusive, as the high-resolution model tends to be penalised more severely (the double-penalty effect). An assessment of sea surface temperature (SST) has been made at in situ observation locations using a single-observation-neighbourhood-forecast (SO-NF) spatial method known as the High-Resolution Assessment (HiRA) framework, which utilises ensemble and probabilistic forecast verification metrics such as the Continuous Ranked Probability Score (CRPS). It is found that through the application of HiRA it is possible to identify improvements in the higher-resolution model which were not apparent using typical grid-scale assessments. This work suggests that future comparative assessments of ocean models with different resolutions would benefit from using HiRA as part of the evaluation process, as it gives a more equitable and appropriate reflection of model performance at higher resolutions.
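The CRPS used within HiRA has a convenient kernel (energy) form for a finite ensemble: CRPS = mean|X − obs| − ½ · mean|X − X′|, where in HiRA the "ensemble" is the set of model grid values in a neighbourhood around the observation location. The sketch below is a generic implementation of that formula, not the Met Office verification code.

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of an ensemble forecast against a single observation, via the
    kernel form:  mean|X - obs| - 0.5 * mean|X - X'|.
    In a HiRA-style assessment, `members` would be the model grid points in a
    neighbourhood of the observation location (an assumed usage)."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2
```

For a one-member "ensemble" the score reduces to the absolute error, which is why CRPS can compare deterministic and neighbourhood-based forecasts on the same footing.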


2017 ◽  
Vol 120 ◽  
pp. 120-136 ◽  
Author(s):  
Helene T. Hewitt ◽  
Michael J. Bell ◽  
Eric P. Chassignet ◽  
Arnaud Czaja ◽  
David Ferreira ◽  
...  

2020 ◽  
Vol 13 (9) ◽  
pp. 4595-4637 ◽  
Author(s):  
Eric P. Chassignet ◽  
Stephen G. Yeager ◽  
Baylor Fox-Kemper ◽  
Alexandra Bozec ◽  
Frederic Castruccio ◽  
...  

Abstract. This paper presents global comparisons of fundamental global climate variables from a suite of four pairs of matched low- and high-resolution ocean and sea ice simulations that are obtained following the OMIP-2 protocol (Griffies et al., 2016) and integrated for one cycle (1958–2018) of the JRA55-do atmospheric state and runoff dataset (Tsujino et al., 2018). Our goal is to assess the robustness of climate-relevant improvements in ocean simulations (mean and variability) associated with moving from coarse (∼ 1∘) to eddy-resolving (∼ 0.1∘) horizontal resolutions. The models are diverse in their numerics and parameterizations, but each low-resolution and high-resolution pair of models is matched so as to isolate, to the extent possible, the effects of horizontal resolution. A variety of observational datasets are used to assess the fidelity of simulated temperature and salinity, sea surface height, kinetic energy, heat and volume transports, and sea ice distribution. This paper provides a crucial benchmark for future studies comparing and improving different schemes in any of the models used in this study or similar ones. The biases in the low-resolution simulations are familiar, and their gross features – position, strength, and variability of western boundary currents, equatorial currents, and the Antarctic Circumpolar Current – are significantly improved in the high-resolution models. However, despite the fact that the high-resolution models “resolve” most of these features, the improvements in temperature and salinity are inconsistent among the different model families, and some regions show increased bias over their low-resolution counterparts. Greatly enhanced horizontal resolution does not deliver unambiguous bias improvement in all regions for all models.
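Fidelity assessments of the kind described here typically reduce to area-weighted bias and rms-error statistics of simulated fields against gridded observations. A minimal sketch, assuming a regular latitude-longitude grid; the function name and interface are hypothetical, not from the OMIP-2 analysis code.

```python
import numpy as np

def area_weighted_stats(model, obs, lat):
    """Area-weighted mean bias and RMSE between a simulated and an observed
    field on a regular latitude-longitude grid.
    model, obs : 2-D arrays of shape (nlat, nlon)
    lat        : 1-D latitudes in degrees (cell centres)"""
    # Cell area on a regular lat-lon grid scales with cos(latitude)
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)
    diff = model - obs
    bias = np.average(diff, weights=w)
    rmse = np.sqrt(np.average(diff ** 2, weights=w))
    return bias, rmse
```

Comparing such statistics between each matched low- and high-resolution pair is what isolates the effect of resolution from the other inter-model differences.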


Author(s):  
H.S. von Harrach ◽  
D.E. Jesson ◽  
S.J. Pennycook

Phase contrast TEM has been the leading technique for high resolution imaging of materials for many years, whilst STEM has been the principal method for high-resolution microanalysis. However, it was demonstrated many years ago that low angle dark-field STEM imaging is a priori capable of almost 50% higher point resolution than coherent bright-field imaging (i.e. phase contrast TEM or STEM). This advantage was not exploited until Pennycook developed the high-angle annular dark-field (ADF) technique, which can provide an incoherent image showing both high image resolution and atomic number contrast. This paper describes the design and first results of a 300 kV field-emission STEM (VG Microscopes HB603U) which has improved ADF STEM image resolution towards the 1 angstrom target. The instrument uses a cold field-emission gun, generating a 300 kV beam of up to 1 μA from an 11-stage accelerator. The beam is focussed on to the specimen by two condensers and a condenser-objective lens with a spherical aberration coefficient of 1.0 mm.

