An efficient training scheme that improves the forecast skill of a supermodel

2017 ◽  
Vol 8 (2) ◽  
pp. 429-438 ◽  
Author(s):  
Francine J. Schevenhoven ◽  
Frank M. Selten

Abstract. Weather and climate models have improved steadily over time as witnessed by objective skill scores, although significant model errors remain. Given these imperfect models, predictions might be improved by combining them dynamically into a so-called supermodel. In this paper a new training scheme to construct such a supermodel is explored using a technique called cross pollination in time (CPT). In the CPT approach the models exchange states during the prediction. The number of possible predictions grows quickly with time, and a strategy to retain only a small number of predictions, called pruning, needs to be developed. The method is explored using low-order dynamical systems and applied to a global atmospheric model. The results indicate that the CPT training is efficient and leads to a supermodel with improved forecast quality compared to the individual models. Due to its computational efficiency, the technique is suited for application to state-of-the-art high-dimensional weather and climate models.
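
As a rough illustration of the CPT idea with pruning, the sketch below combines two imperfect Lorenz-63 model versions: every retained trajectory is propagated by every model, and only the few branches closest to the observations are kept. This is a toy example with assumed parameter values, not the training setup used in the paper.

```python
# Minimal sketch of Cross Pollination in Time (CPT) with pruning, assuming
# two imperfect Lorenz-63 models and synthetic "observations" from a truth run.
# Illustrative only; not the implementation used in the paper.
import numpy as np

def lorenz63(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(state, params, dt=0.01, n=10):
    # Simple RK4 integration over n sub-steps.
    for _ in range(n):
        k1 = lorenz63(state, *params)
        k2 = lorenz63(state + 0.5 * dt * k1, *params)
        k3 = lorenz63(state + 0.5 * dt * k2, *params)
        k4 = lorenz63(state + dt * k3, *params)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

truth_params = (10.0, 28.0, 8.0 / 3.0)          # "truth"
model_params = [(13.0, 28.0, 8.0 / 3.0),        # imperfect model 1
                (10.0, 24.0, 8.0 / 3.0)]        # imperfect model 2

rng = np.random.default_rng(0)
x_true = np.array([1.0, 1.0, 20.0])
branches = [x_true + rng.normal(0, 0.1, 3)]      # initial branch, slightly perturbed
K = 4                                            # max branches kept (pruning)

for _ in range(50):                              # training segments
    x_true = step(x_true, truth_params)          # evolve truth / observation
    # Cross pollination: every retained branch is propagated by every model.
    new_branches = [step(b, p) for b in branches for p in model_params]
    # Pruning: keep only the K branches closest to the observation.
    new_branches.sort(key=lambda b: np.linalg.norm(b - x_true))
    branches = new_branches[:K]

print("distance of best retained branch to truth:",
      np.linalg.norm(branches[0] - x_true))
```

Keeping the branch count fixed keeps the cost roughly linear in the length of the training period, which is what makes a pruned CPT scheme computationally affordable.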


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly innocuous modifications associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet verifying them is challenged by the chaotic nature of the atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impact of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time, and it can be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology is also highly sensitive to more substantial model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
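
A minimal sketch of this kind of ensemble-based verification is given below, assuming two synthetic ensembles of a single domain-mean variable and a Mann-Whitney U test at each output time; the specific test statistic, variables and ensemble sizes used with COSMO may differ.

```python
# Minimal sketch of verifying a model change with ensembles and a hypothesis
# test: compare a reference ensemble against an ensemble from the changed
# model/system, one two-sample test per output time. Synthetic data only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n_members, n_times = 50, 24

# Reference ensemble and "changed-model" ensemble of a domain-mean variable.
# Here the change is benign (same distribution, different random draws).
ref = rng.normal(loc=280.0, scale=1.0, size=(n_members, n_times))
new = rng.normal(loc=280.0, scale=1.0, size=(n_members, n_times))

alpha = 0.05
p_values = np.array([
    mannwhitneyu(ref[:, t], new[:, t]).pvalue for t in range(n_times)
])

# If the change only reshuffles noise, roughly alpha of the per-time tests
# should reject; a much larger rejection rate flags a real change in the
# model climate.
print(f"rejection rate: {np.mean(p_values < alpha):.2f} (expected ~{alpha} under H0)")
```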


2014 ◽  
Vol 1 (2) ◽  
pp. 1283-1312
Author(s):  
M. Abbas ◽  
A. Ilin ◽  
A. Solonen ◽  
J. Hakkarainen ◽  
E. Oja ◽  
...  

Abstract. In this work, we consider the Bayesian optimization (BO) approach for tuning parameters of complex chaotic systems. Such problems arise, for instance, in tuning the sub-grid-scale parameterizations in weather and climate models. For such problems, the tuning procedure is generally based on a performance metric that measures how well the tuned model fits the data, and it is often a computationally expensive task. We show that BO, as a tool for finding the extrema of computationally expensive objective functions, is well suited to such tuning tasks. In the experiments, we consider tuning the parameters of two systems: a simplified atmospheric model and a low-dimensional chaotic system. We show that BO is able to tune the parameters of both systems with a small number of objective function evaluations and without the need for any gradient information.
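
The sketch below illustrates a BO loop on a toy problem: tuning the rho parameter of a Lorenz-63 system so that a long-run statistic (the time-mean of z) matches that of a reference run. The Gaussian-process surrogate, the expected-improvement rule, and the parameter ranges are assumptions chosen for illustration, not the setup used in the paper.

```python
# Minimal sketch of Bayesian optimization for tuning one parameter of a
# chaotic system (Lorenz-63), assuming a summary-statistic cost function,
# a Gaussian-process surrogate and an expected-improvement acquisition rule.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def mean_z(rho, sigma=10.0, beta=8.0 / 3.0, dt=0.01, n=20000):
    # Long Lorenz-63 run; the time-mean of z is the statistic to be matched.
    x, y, z = 1.0, 1.0, 20.0
    zs = []
    for _ in range(n):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        zs.append(z)
    return np.mean(zs[n // 2:])            # discard spin-up

target = mean_z(28.0)                      # "observations" come from rho = 28
cost = lambda rho: (mean_z(rho) - target) ** 2

rng = np.random.default_rng(2)
X = rng.uniform(22.0, 34.0, size=(4, 1))             # initial design points
Y = np.array([cost(r[0]) for r in X])
grid = np.linspace(22.0, 34.0, 200).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

for _ in range(10):                                   # BO iterations
    gp.fit(X, Y)
    mu, sd = gp.predict(grid, return_std=True)
    best = Y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    Y = np.append(Y, cost(x_next[0]))

print("best rho found:", round(float(X[np.argmin(Y), 0]), 2))
```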


2019 ◽  
Author(s):  
Francine Schevenhoven ◽  
Frank Selten ◽  
Alberto Carrassi ◽  
Noel Keenlyside

Abstract. Recent studies demonstrate that weather and climate predictions can potentially be improved by dynamically combining different models into a so-called "supermodel". Here we focus on weighted supermodeling, in which the supermodel's time derivative is a weighted superposition of the time derivatives of the imperfect models. A crucial step is to train the weights of the supermodel on the basis of historical observations. Here we apply two different training methods to a supermodel of up to four different versions of the global atmosphere-ocean-land model SPEEDO. The standard version is regarded as truth. The first training method is based on an idea called Cross Pollination in Time (CPT), in which models exchange states during the training. The second method is a synchronization-based learning rule, originally developed for parameter estimation. We demonstrate that both training methods yield climate simulations and weather predictions of superior quality compared to the individual model versions. Supermodel predictions also outperform predictions based on the commonly used multi-model ensemble (MME) mean. Furthermore, we find evidence that negative weights can improve predictions in cases where model errors do not cancel (for instance, when all models are warm with respect to the truth). In principle, the proposed training schemes are applicable to state-of-the-art models and historical observations. A prime advantage of the proposed training schemes is that, in the present context, relatively short training periods suffice to find good solutions. Additional work is needed to assess the limitations due to incomplete and noisy data, to combine models that are structurally different (for instance, in resolution or state representation), and to evaluate cases in which the truth falls outside the model class.
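
The core of weighted supermodeling can be written as dx/dt = sum_i w_i f_i(x), with f_i the tendency of imperfect model i and w_i the trained weights. The sketch below evolves such a supermodel built from two Lorenz-63 versions with opposite parameter errors; the weights are fixed by hand purely for illustration rather than trained from observations as in the paper.

```python
# Minimal sketch of a weighted supermodel: the supermodel tendency is a
# weighted sum of the imperfect models' tendencies. Weights are assumed here;
# in the paper they are learned (via CPT or a synchronization rule).
import numpy as np

def f(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

truth = (10.0, 28.0, 8.0 / 3.0)
models = [(13.0, 28.0, 8.0 / 3.0),    # imperfect model 1 (sigma too large)
          (7.0, 28.0, 8.0 / 3.0)]     # imperfect model 2 (sigma too small)
weights = np.array([0.5, 0.5])        # assumed weights; need not sum to 1, may be negative

def super_tendency(state):
    # Weighted superposition of the imperfect models' time derivatives.
    return sum(w * f(state, *p) for w, p in zip(weights, models))

dt = 0.01
x_s = np.array([1.0, 1.0, 20.0])      # supermodel state
x_t = np.array([1.0, 1.0, 20.0])      # truth state
for _ in range(500):                  # short forecast with forward Euler
    x_s = x_s + dt * super_tendency(x_s)
    x_t = x_t + dt * f(x_t, *truth)

print("supermodel forecast error:", np.linalg.norm(x_s - x_t))
```

In this contrived case the parameter errors are linear and cancel exactly for equal weights, so the supermodel trajectory follows the truth; with structurally different model errors the trained weights would generally differ from 0.5 and, as the abstract notes, can even be negative.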


2019 ◽  
Vol 147 (5) ◽  
pp. 1699-1712 ◽  
Author(s):  
Bo Christiansen

Abstract In the weather and climate sciences, ensemble forecasts have become an acknowledged community standard. It is often found that the ensemble mean not only has a low error relative to the typical error of the ensemble members but also outperforms all the individual ensemble members. We analyze ensemble simulations based on a simple statistical model that allows for bias and that has different variances for observations and the model ensemble. Using generic simplifying geometric properties of high-dimensional spaces, we obtain analytical results for the error of the ensemble mean. These results include a closed form for the rank of the ensemble mean among the ensemble members and depend on two quantities: the ensemble variance and the bias, both normalized by the variance of the observations. The analytical results are used to analyze the GEFS reforecast, where the variances and bias depend on lead time. For intermediate lead times, between 20 and 100 h, the two terms are both around 0.5 and the ensemble mean is only slightly better than the individual ensemble members. For lead times larger than 240 h, the variance term is close to 1 and the bias term is near 0.5. For these lead times the ensemble mean outperforms almost all individual ensemble members and its relative error comes close to −30%. These results are in excellent agreement with the theory. The simplifying properties of high-dimensional spaces can be applied not only to the ensemble mean but also to, for example, the ensemble spread.
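
A Monte Carlo sketch of this kind of statistical model is given below: biased ensemble members and observations in a high-dimensional space, with the ensemble variance and squared bias expressed relative to the observation variance. The normalization used here and the parameter values are illustrative assumptions, not the paper's closed-form result; the point is simply that when the variance term is near 1 the ensemble mean beats nearly every member.

```python
# Monte Carlo sketch of a simple statistical model with a biased ensemble and
# observations in a high-dimensional space. The bias/variance normalization is
# one plausible choice, assumed for illustration only.
import numpy as np

rng = np.random.default_rng(3)
d, n_members, n_trials = 1000, 20, 200     # dimension, ensemble size, repeats

sigma_obs2 = 1.0                           # per-dimension observation variance
var_term = 1.0                             # ensemble variance / obs variance
bias_term = 0.5                            # squared bias / obs variance (per dimension)
bias = np.sqrt(bias_term * sigma_obs2) * np.ones(d)

ratios, n_better = [], []
for _ in range(n_trials):
    obs = rng.normal(0.0, np.sqrt(sigma_obs2), d)
    ens = bias + rng.normal(0.0, np.sqrt(var_term * sigma_obs2), (n_members, d))
    member_err = np.linalg.norm(ens - obs, axis=1)
    mean_err = np.linalg.norm(ens.mean(axis=0) - obs)
    ratios.append(mean_err / np.median(member_err) - 1.0)
    n_better.append(np.sum(member_err < mean_err))   # members beating the mean

print(f"relative error of the ensemble mean: {np.mean(ratios):+.0%}")
print(f"members better than the mean (of {n_members}): {np.mean(n_better):.1f}")
```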


2016 ◽  
Vol 29 (4) ◽  
pp. 1477-1496 ◽  
Author(s):  
Penelope Maher ◽  
Steven C. Sherwood

Abstract Expansion of the tropics will likely affect subtropical precipitation, but observed and modeled precipitation trends disagree with each other. Moreover, the dynamic processes at the tropical edge and their interactions with precipitation are not well understood. This study assesses the skill of climate models in reproducing observed Australian precipitation variability at the tropical edge. A multivariate linear independence approach distinguishes between direct (causal) and indirect (circumstantial) precipitation drivers, which facilitates clearer attribution of model errors and skill. This approach is applied to observed precipitation, ERA-Interim reanalysis data, and a representative subset of four models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) together with their CMIP3 counterparts. The drivers considered are El Niño–Southern Oscillation, the southern annular mode, the Indian Ocean dipole, blocking, and four tropical edge metrics (the position and intensity of the subtropical ridge and the subtropical jet). These models are skillful in representing the covariability of the drivers and their influence on precipitation. However, skill scores have not improved in the CMIP5 subset relative to CMIP3 in either respect. The Australian precipitation response to a poleward-located Hadley cell edge remains uncertain, as opposing drying and moistening mechanisms complicate the net response. Higher skill in simulating driver covariability is not consistently mirrored by higher precipitation skill. This provides further evidence that modeled precipitation does not respond correctly to large-scale flow patterns; further improvements in parameterized moist physics are needed before the subtropical precipitation responses can be fully trusted. The multivariate linear independence approach could be applied more widely for practical model evaluation.
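
The distinction between direct and circumstantial drivers can be illustrated with a toy joint-regression example: a driver that merely covaries with a causal driver shows a strong pairwise correlation with precipitation but a near-zero coefficient once all drivers are considered together. This is only a schematic analogue of the multivariate linear independence framework, with synthetic data and driver names borrowed loosely from the abstract.

```python
# Toy sketch of separating direct from indirect (circumstantial) drivers by
# conditioning on all drivers jointly. Synthetic data; not the paper's actual
# statistical framework.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
enso = rng.normal(size=n)                        # "direct" driver
ridge = 0.8 * enso + 0.6 * rng.normal(size=n)    # correlated driver, no direct effect
precip = 1.0 * enso + rng.normal(size=n)         # precipitation responds to ENSO only

# Pairwise correlations make BOTH drivers look influential.
print("corr(precip, enso) :", np.corrcoef(precip, enso)[0, 1].round(2))
print("corr(precip, ridge):", np.corrcoef(precip, ridge)[0, 1].round(2))

# Joint (multivariate) regression attributes the influence to ENSO alone.
X = np.column_stack([enso, ridge, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, precip, rcond=None)
print("joint regression coefficients [enso, ridge]:", coef[:2].round(2))
```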


OENO One ◽  
2017 ◽  
Vol 51 (2) ◽  
pp. 99-105 ◽  
Author(s):  
Andrew Sturman ◽  
Peyman Zawar-Reza ◽  
Iman Soltanzadeh ◽  
Marwan Katurji ◽  
Valérie Bonnardot ◽  
...  

Grapevines are highly sensitive to environmental conditions, with variability in weather and climate (particularly temperature) having a significant influence on wine quality, quantity and style. Improved knowledge of spatial and temporal variations in climate and their impact on grapevine response allows better decision-making to help maintain a sustainable wine industry in the context of medium- to long-term climate change. This paper describes recent research into the application of mesoscale weather and climate models that aims to improve our understanding of climate variability at high spatial (1 km and less) and temporal (hourly) resolution within vineyard regions of varying terrain complexity. The Weather Research and Forecasting (WRF) model has been used to simulate the weather and climate in the complex terrain of the Marlborough region of New Zealand. The performance of the WRF model in reproducing the temperature variability across vineyard regions is assessed through comparison with automatic weather stations. Coupling the atmospheric model with bioclimatic indices and phenological models (e.g. Huglin, cool nights, Grapevine Flowering Véraison model) also provides useful insights into grapevine response to spatial variability of climate during the growing season, as well as an assessment of spatial variability in the optimal climate conditions for specific grape varieties.
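
As an example of coupling model output to a bioclimatic index, the sketch below computes a Huglin-type heliothermal index from daily mean and maximum temperatures. The day-length coefficient and the Southern Hemisphere growing season used here follow the commonly quoted definition but are assumptions that should be checked against the original index specification for the target latitude.

```python
# Minimal sketch of a Huglin-type heliothermal index computed from daily model
# or station temperatures. Coefficient k and season length are assumptions.
import numpy as np

def huglin_index(t_mean, t_max, k=1.04):
    """Huglin-type index over the growing season (e.g. Oct 1 - Mar 31 in NZ).

    t_mean, t_max : daily mean and maximum temperature (deg C) for the season
    k             : day-length coefficient (~1.02-1.06 at mid latitudes, assumed)
    """
    daily = np.maximum(0.0, ((t_mean - 10.0) + (t_max - 10.0)) / 2.0)
    return k * daily.sum()

# Synthetic example: a 182-day growing season with mean ~17 C and max ~23 C.
rng = np.random.default_rng(5)
t_mean = 17.0 + 4.0 * np.sin(np.linspace(0, np.pi, 182)) + rng.normal(0, 1.5, 182)
t_max = t_mean + 6.0 + rng.normal(0, 1.0, 182)

print("Huglin-type index:", round(huglin_index(t_mean, t_max)))
```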


2018 ◽  
Vol 31 (4) ◽  
pp. 1587-1596 ◽  
Author(s):  
Bo Christiansen

When comparing climate models to observations, it is often observed that the mean over many models has smaller errors than most or all of the individual models. This paper will show that a general consequence of the nonintuitive geometric properties of high-dimensional spaces is that the ensemble mean often outperforms the individual ensemble members. This also explains why the ensemble mean often has an error that is 30% smaller than the median error of the individual ensemble members. The only assumption that needs to be made is that the observations and the models are independently drawn from the same distribution. An important and relevant property of high-dimensional spaces is that independent random vectors are almost always orthogonal. Furthermore, while the lengths of random vectors are large and almost equal, the ensemble mean is special, as it is located near the otherwise vacant center. The theory is first explained by an analysis of Gaussian- and uniformly distributed vectors in high-dimensional spaces. A subset of 17 models from the CMIP5 multimodel ensemble is then used to demonstrate the validity and robustness of the theory in realistic settings.
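
The two geometric facts used in the argument can be checked numerically: independent random vectors in high dimensions are nearly orthogonal, and the ensemble mean's error approaches roughly 1/sqrt(2) of a typical member's error. The sketch below assumes Gaussian vectors and a CMIP5-like subset size purely for illustration.

```python
# Sketch of the high-dimensional geometry: observations and models drawn
# independently from the same distribution (here Gaussian, an assumption).
import numpy as np

rng = np.random.default_rng(6)
d, n_models = 10_000, 17                       # high dimension, CMIP5-like subset size

obs = rng.normal(size=d)
models = rng.normal(size=(n_models, d))

# (1) Near-orthogonality: cosines of angles between independent vectors ~ 0.
cosines = models @ obs / (np.linalg.norm(models, axis=1) * np.linalg.norm(obs))
print("max |cos(angle)| between models and obs:", np.abs(cosines).max().round(3))

# (2) The ensemble mean sits near the otherwise vacant centre, so its error is
# smaller than the members' errors by roughly a factor 1/sqrt(2) (~30%).
member_err = np.linalg.norm(models - obs, axis=1)
mean_err = np.linalg.norm(models.mean(axis=0) - obs)
print("mean_err / median(member_err):", round(mean_err / np.median(member_err), 3))
```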


2019 ◽  
Vol 147 (5) ◽  
pp. 1447-1469 ◽  
Author(s):  
Julie Bessac ◽  
Adam H. Monahan ◽  
Hannah M. Christensen ◽  
Nils Weitzel

Abstract Subgrid-scale (SGS) velocity variations result in gridscale sea surface flux enhancements that must be parameterized in weather and climate models. Traditional parameterizations are deterministic in that they assign a unique value of the SGS velocity flux enhancement to any given configuration of the resolved state. In this study, we assess the statistics of the SGS velocity flux enhancement over a range of averaging scales (as a proxy for varying model resolution) through systematic coarse-graining of a convection-permitting atmospheric model simulation over the Indian Ocean and west Pacific warm pool. Conditioning the statistics of the SGS velocity flux enhancement on 1) the fluxes associated with the resolved winds and 2) the precipitation rate, we find that the lack of a separation between “resolved” and “unresolved” scales results in a distribution of flux enhancements for each configuration of the resolved state. That is, the SGS velocity flux enhancement should be represented stochastically rather than deterministically. The spatial and temporal statistics of the SGS velocity flux enhancement are investigated using basic descriptive statistics and through a fit to an anisotropic space–time covariance structure. Potential spatial inhomogeneities of these statistics are investigated through regional analysis, although, because of the relatively short duration of the simulation (9 days), distinguishing true inhomogeneity from sampling variability is difficult. Perspectives for the implementation of such a stochastic parameterization in weather and climate models are discussed.
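
The coarse-graining idea can be sketched as follows: within each coarse cell, the SGS velocity flux enhancement is (up to the flux formulation) the gap between the block-mean of the fine-grid wind speed and the speed of the block-mean wind. The field below is synthetic and the block-averaging operator is an assumption; the study itself coarse-grains convection-permitting model output.

```python
# Toy sketch of coarse-graining a fine-grid wind field to estimate a proxy for
# the subgrid-scale (SGS) velocity flux enhancement: <|u|> - |<u>| per coarse cell.
import numpy as np

rng = np.random.default_rng(7)
n_fine, block = 256, 16                       # fine grid size and coarse-graining factor

# Synthetic "convection-permitting" wind components with smooth + noisy parts.
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, n_fine), np.linspace(0, 4 * np.pi, n_fine))
u = 5.0 + 2.0 * np.sin(xx) + rng.normal(0, 1.5, (n_fine, n_fine))
v = 1.0 + 2.0 * np.cos(yy) + rng.normal(0, 1.5, (n_fine, n_fine))

def block_mean(a, b):
    # Average over non-overlapping b-by-b blocks (the coarse-graining step).
    n = a.shape[0] // b
    return a.reshape(n, b, n, b).mean(axis=(1, 3))

speed_fine = np.hypot(u, v)
mean_speed = block_mean(speed_fine, block)                             # <|u|>
speed_of_mean = np.hypot(block_mean(u, block), block_mean(v, block))   # |<u>|

enhancement = mean_speed - speed_of_mean      # SGS velocity flux enhancement proxy
print("mean enhancement:", enhancement.mean().round(3),
      " spread (std):", enhancement.std().round(3))
```

The spread of the enhancement across coarse cells, even for similar resolved wind speeds, is what motivates a stochastic rather than deterministic representation.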

