The Application of Nonlinear Local Lyapunov Vectors to Ensemble Predictions in Lorenz Systems

2014 ◽  
Vol 71 (9) ◽  
pp. 3554-3567 ◽  
Author(s):  
Jie Feng ◽  
Ruiqiang Ding ◽  
Deqiang Liu ◽  
Jianping Li

Abstract Nonlinear local Lyapunov vectors (NLLVs) are developed to indicate orthogonal directions in phase space with different perturbation growth rates. In particular, the first few NLLVs are considered to be an appropriate orthogonal basis for the fast-growing subspace. In this paper, the NLLV method is used to generate initial perturbations and implement ensemble forecasts in simple nonlinear models (the Lorenz63 and Lorenz96 models) to explore the validity of the NLLV method. The performance of the NLLV method is compared comprehensively and systematically with that of other methods, such as the bred vector (BV) and random perturbation (Monte Carlo) methods. In experiments with the Lorenz63 model, the leading NLLV (LNLLV) captured a more accurate direction, and with a faster growth rate, than any individual bred vector. The improved performance of the new method may stem from the LNLLV's larger projection onto the fastest-growing analysis errors. Regarding the Lorenz96 model, two practical measures, namely the spread–skill relationship and the Brier score, were used to assess the reliability and resolution of these ensemble schemes. Overall, the ensemble spread of the NLLVs is more consistent with the errors of the ensemble mean, which indicates the better performance of NLLVs in simulating the evolution of analysis errors. In addition, the NLLVs perform significantly better than the BVs in terms of reliability, and better than the random perturbations in terms of resolution.
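As a hedged illustration of the baseline method the NLLVs are compared against, the bred-vector cycle in the Lorenz63 model can be sketched in a few lines: integrate a control run and a perturbed run together, then periodically rescale their difference back to a fixed small amplitude so that it aligns with the fast-growing directions of the flow. This is a minimal sketch, not the authors' code; the step size, breeding interval, and perturbation amplitude are illustrative choices.

```python
import numpy as np

# Bred-vector (BV) cycle in the Lorenz63 model: a minimal sketch.
# Step size, breeding interval, and perturbation amplitude are illustrative.

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) tendencies."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, eps = 0.01, 1e-4
control = np.array([1.0, 1.0, 20.0])
for _ in range(1000):                       # spin up onto the attractor
    control = rk4(control, dt)

bv = eps * np.array([1.0, 0.0, 0.0])        # initial perturbation
growth = []
for _ in range(200):                        # breeding cycles of 8 steps each
    pert = control + bv
    for _ in range(8):
        control = rk4(control, dt)
        pert = rk4(pert, dt)
    diff = pert - control                   # grown perturbation
    growth.append(np.linalg.norm(diff) / eps)
    bv = eps * diff / np.linalg.norm(diff)  # rescale to fixed amplitude

# geometric-mean growth per cycle; > 1 reflects the positive Lyapunov exponent
mean_growth = float(np.exp(np.mean(np.log(growth))))
```

After a few cycles the rescaled difference forgets the arbitrary initial direction and tracks the locally fastest-growing perturbation, which is the property the NLLV scheme refines by additionally orthogonalizing a set of such vectors.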

2004 ◽  
Vol 11 (3) ◽  
pp. 399-409 ◽  
Author(s):  
F. Atger

Abstract. The relative impact of model quality and ensemble deficiencies on the performance of ensemble-based probabilistic forecasts is investigated in a set of idealized experiments. Data are generated according to a statistical model, which is validated by comparing the generated data to ECMWF ensemble forecasts and analyses. The performance of the probabilistic forecasts is evaluated through the reliability and resolution terms of the Brier score. Results are as follows. (i) Resolution appears essentially attributable to the average level of forecast skill. (ii) The lack of reliability comes primarily from forecast bias, and to a lesser extent from the ensemble being systematically under-dispersive (or over-dispersive). (iii) Forecast skill contributes very little to reliability in the absence of forecast bias, and this impact is entirely due to the finite ensemble population. (iv) In the presence of forecast bias, reducing forecast skill improves reliability. This unexpected feature arises because lower forecast skill leads to a larger ensemble spread, which compensates for the large proportion of outliers that results from forecast bias. (v) The lack of ensemble skill, i.e., nonsystematic errors affecting both the ensemble mean and the ensemble spread, contributes little, but significantly, to the lack of reliability and resolution.
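Since the abstracts above lean on the reliability and resolution terms of the Brier score, a minimal sketch of the Murphy decomposition for binned probability forecasts may help. The forecast probabilities and outcomes below are made up purely for illustration.

```python
import numpy as np

# Murphy decomposition of the Brier score for binned probability forecasts:
# BS = reliability - resolution + uncertainty. Toy probabilities and outcomes.
probs = np.array([0.1, 0.1, 0.5, 0.5, 0.9, 0.9])    # forecast probabilities
obs = np.array([0, 0, 1, 0, 1, 1])                  # binary outcomes

bs = float(np.mean((probs - obs) ** 2))             # raw Brier score

base_rate = obs.mean()
uncertainty = base_rate * (1.0 - base_rate)

reliability = 0.0
resolution = 0.0
for p in np.unique(probs):
    idx = probs == p
    o_k = obs[idx].mean()                             # observed frequency in bin
    reliability += idx.sum() * (p - o_k) ** 2         # calibration penalty
    resolution += idx.sum() * (o_k - base_rate) ** 2  # discrimination reward
reliability /= probs.size
resolution /= probs.size

# the three terms reconstruct the raw score exactly for binned forecasts
```

Lower reliability (a penalty) and higher resolution are both better, which is why the two terms are reported separately when diagnosing whether bias or under-dispersion is hurting an ensemble.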


2019 ◽  
Vol 147 (5) ◽  
pp. 1699-1712 ◽  
Author(s):  
Bo Christiansen

Abstract In weather and climate sciences, ensemble forecasts have become an acknowledged community standard. It is often found that the ensemble mean not only has a low error relative to the typical error of the ensemble members but also outperforms all of the individual ensemble members. We analyze ensemble simulations based on a simple statistical model that allows for bias and that has different variances for the observations and the model ensemble. Using generic simplifying geometric properties of high-dimensional spaces, we obtain analytical results for the error of the ensemble mean. These results include a closed form for the rank of the ensemble mean among the ensemble members and depend on two quantities: the ensemble variance and the bias, both normalized by the variance of the observations. The analytical results are used to analyze the GEFS reforecast, where the variances and bias depend on lead time. For intermediate lead times between 20 and 100 h, the two terms are both around 0.5 and the ensemble mean is only slightly better than individual ensemble members. For lead times longer than 240 h, the variance term is close to 1 and the bias term is near 0.5. For these lead times the ensemble mean outperforms almost all individual ensemble members, and its relative error comes close to −30%. These results are in excellent agreement with the theory. The simplifying properties of high-dimensional spaces can be applied not only to the ensemble mean but also to, for example, the ensemble spread.
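The high-dimensional geometry argument can be checked numerically with a toy setup (an assumption-laden sketch, not the paper's GEFS analysis). For unbiased members whose variance matches the observations, the ensemble mean's error tends to sqrt((1 + 1/n)/2) of a typical member's error, which approaches sqrt(1/2), about −29%, for large ensembles.

```python
import numpy as np

# Toy check of the high-dimensional argument: unbiased members with the same
# variance as the observations (normalized variance 1, bias 0). All sizes and
# distributions here are illustrative assumptions.
rng = np.random.default_rng(0)
dim, n_members = 10_000, 10

obs = rng.standard_normal(dim)                    # stand-in "observations"
members = rng.standard_normal((n_members, dim))   # unbiased ensemble

member_err = np.linalg.norm(members - obs, axis=1).mean()
mean_err = np.linalg.norm(members.mean(axis=0) - obs)

# expected ratio: sqrt((1 + 1/n_members) / 2) ~= 0.74 for 10 members
ratio = float(mean_err / member_err)
```

In high dimension the random fluctuations around these expected distances are tiny, which is exactly the "simplifying geometric property" the paper exploits to get closed-form results.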


2009 ◽  
Vol 137 (7) ◽  
pp. 2365-2379 ◽  
Author(s):  
David A. Unger ◽  
Huug van den Dool ◽  
Edward O’Lenic ◽  
Dan Collins

A regression model was developed for use with ensemble forecasts. Ensemble members are assumed to represent a set of equally likely solutions, one of which will best fit the observation. If standard linear regression assumptions apply to the best member, then a regression relationship can be derived between the full ensemble and the observation without explicitly identifying the best member for each case. The ensemble regression equation is equivalent to linear regression between the ensemble mean and the observation, but is applied to each member of the ensemble. The “best member” error variance is defined in terms of the correlation between the ensemble mean and the observations, their respective variances, and the ensemble spread. A probability density function representing the ensemble prediction is obtained from the normalized sum of the best-member error distribution applied to the regression forecast from each ensemble member. Ensemble regression was applied to National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) predictions of seasonal mean Niño-3.4 SSTs, using historical forecasts for the years 1981–2005. The skill of the ensemble regression was about the same as that of linear regression on the ensemble mean when measured by the continuous ranked probability score (CRPS), and both methods produced reliable probabilities. The CFS spread appears slightly too high for its skill, and the CRPS of the CFS predictions can be slightly improved by reducing the ensemble spread to about 0.8 of its original value prior to regression calibration.
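The core idea, fitting one regression between the ensemble mean and the observation and then applying the same coefficients unchanged to every member, can be sketched on synthetic data. This is a hypothetical illustration, not the authors' CFS setup; the synthetic bias and spread factors are invented.

```python
import numpy as np

# Ensemble regression, minimally: one linear fit between the ensemble mean and
# the observation, applied unchanged to every member. Synthetic data only.
rng = np.random.default_rng(1)
n_cases, n_members = 200, 15

truth = rng.standard_normal(n_cases)
# a biased, over-amplified synthetic ensemble around the true signal
ens = 1.5 * truth[:, None] + 0.5 + 0.8 * rng.standard_normal((n_cases, n_members))

ens_mean = ens.mean(axis=1)
a, b = np.polyfit(ens_mean, truth, 1)   # slope and intercept vs ensemble mean

calibrated = a * ens + b                # same coefficients for each member

# by linearity, the calibrated ensemble mean equals the regression forecast,
# while the members keep a (rescaled) spread around it
```

Because the transform is linear, the calibrated ensemble mean coincides with the regression forecast, and the member spread is simply scaled by the slope, which is how the method shrinks an over-dispersive ensemble.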


2010 ◽  
Vol 138 (9) ◽  
pp. 3634-3655 ◽  
Author(s):  
Munehiko Yamaguchi ◽  
Sharanya J. Majumdar

Abstract Ensemble initial perturbations around Typhoon Sinlaku (2008) produced by ECMWF, NCEP, and the Japan Meteorological Agency (JMA) ensembles are compared using The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) data, and the dynamical mechanisms of perturbation growth associated with the tropical cyclone (TC) motion are investigated for the ECMWF and NCEP ensembles. In the comparison, it is found that the vertical and horizontal distributions of initial perturbations as well as the amplitude are quite different among the three NWP centers before, during, and after the recurvature of Sinlaku. In addition, it turns out that those variations cause a difference in the TC motion not only at the initial time but also during the subsequent forecast period. The ECMWF ensemble exhibits relatively large perturbation growth, which results from 1) the baroclinic energy conversion in a vortex, 2) the baroclinic energy conversion associated with the midlatitude waves, and 3) the barotropic energy conversion in a vortex. Those features are less distinctive in the NCEP ensemble. A statistical verification shows that the ensemble spread of TC track predictions in NCEP (ECMWF) is larger than ECMWF (NCEP) for 1- (3-) day forecasts on average. It can be inferred that while the ECMWF ensemble starts from a relatively small amplitude of initial perturbations, the growth of the perturbations helps to amplify the ensemble spread of tracks. On the other hand, a relatively large amplitude of initial perturbations seems to play a role in producing the ensemble spread of tracks in the NCEP ensemble.


2017 ◽  
Vol 18 (11) ◽  
pp. 2873-2891 ◽  
Author(s):  
Yu Zhang ◽  
Limin Wu ◽  
Michael Scheuerer ◽  
John Schaake ◽  
Cezar Kongoli

Abstract This article compares the skill of medium-range probabilistic quantitative precipitation forecasts (PQPFs) generated via two postprocessing mechanisms: 1) the mixed-type meta-Gaussian distribution (MMGD) model and 2) the censored shifted Gamma distribution (CSGD) model. MMGD derives the PQPF by conditioning on the mean of raw ensemble forecasts. CSGD, on the other hand, is a regression-based mechanism that estimates PQPF from a prescribed distribution by adjusting the climatological distribution according to the mean, spread, and probability of precipitation (POP) of raw ensemble forecasts. Each mechanism is applied to the reforecast of the Global Ensemble Forecast System (GEFS) to yield a postprocessed PQPF over lead times between 24 and 72 h. The outcome of an evaluation experiment over the mid-Atlantic region of the United States indicates that the CSGD approach broadly outperforms the MMGD in terms of both the ensemble mean and the reliability of distribution, although the performance gap tends to be narrow, and at times mixed, at higher precipitation thresholds (>5 mm). Analysis of a rare storm event demonstrates the superior reliability and sharpness of the CSGD PQPF and underscores the issue of overforecasting by the MMGD PQPF. This work suggests that the CSGD’s incorporation of ensemble spread and POP does help enhance its skill, particularly for light forecast amounts, but CSGD’s model structure and its use of optimization in parameter estimation likely play a more determining role in its outperformance.
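For intuition about the CSGD family named above, drawing from a censored shifted gamma distribution can be sketched as follows. The shape, scale, and shift values are made up for illustration, not fitted GEFS parameters; the point mass at exactly zero plays the role of the dry probability, whose complement acts as the POP.

```python
import numpy as np

# Censored shifted gamma, illustratively: shift a gamma variable left and
# censor at zero. shape/scale/shift below are made-up, not fitted parameters.
rng = np.random.default_rng(3)
shape, scale, shift = 1.2, 4.0, -2.0

draws = np.maximum(rng.gamma(shape, scale, size=100_000) + shift, 0.0)

# the point mass at zero is the dry probability; its complement acts as POP
pop = float(np.mean(draws > 0))
```

Fitting the three parameters by optimizing a proper score against the climatological distribution, as the abstract describes, is what distinguishes the CSGD postprocessor from this raw sampling sketch.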


2017 ◽  
Vol 30 (9) ◽  
pp. 3185-3196 ◽  
Author(s):  
Tongtiegang Zhao ◽  
James C. Bennett ◽  
Q. J. Wang ◽  
Andrew Schepen ◽  
Andrew W. Wood ◽  
...  

GCMs are used by many national weather services to produce seasonal outlooks of atmospheric and oceanic conditions and fluxes. Postprocessing is often a necessary step before GCM forecasts can be applied in practice. Quantile mapping (QM) is rapidly becoming the method of choice among operational agencies for postprocessing raw GCM outputs. The authors investigate whether QM is appropriate for this task. Ensemble forecast postprocessing methods should aim to 1) correct bias, 2) ensure forecasts are reliable in ensemble spread, and 3) guarantee forecasts are at least as skillful as climatology, a property called “coherence.” This study evaluates the effectiveness of QM in achieving these aims by applying it to precipitation forecasts from the POAMA model. It is shown that while QM is highly effective in correcting bias, it can ensure neither reliability in forecast ensemble spread nor coherence. This is because QM ignores the correlation between raw ensemble forecasts and observations. When raw forecasts are not significantly positively correlated with observations, QM tends to produce negatively skillful forecasts. Even when there is significant positive correlation, QM cannot ensure reliability and coherence in the postprocessed forecasts. Therefore, QM is not a fully satisfactory method for postprocessing ensemble forecasts, as it cannot jointly address bias, reliability, and coherence. Alternative postprocessing methods based on ensemble model output statistics (EMOS) are available that produce forecasts that are not only unbiased but also reliable and coherent. This is shown here with one such alternative, the Bayesian joint probability modeling approach.
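A minimal empirical quantile-mapping sketch (with hypothetical synthetic climatologies, not POAMA data) shows the property the abstract emphasizes: QM maps each raw value through the raw and observed CDFs, so it removes bias by construction, but as a one-to-one transform of each raw value it cannot inject correlation with observations.

```python
import numpy as np

# Empirical quantile mapping: transform each raw forecast value through the
# raw and observed climatological CDFs. Both climatologies are synthetic.
rng = np.random.default_rng(2)
obs_clim = rng.gamma(2.0, 3.0, size=5000)    # "observed" climatology, mean ~6
raw_fcst = rng.gamma(2.0, 5.0, size=5000)    # biased raw forecasts, mean ~10

sorted_raw = np.sort(raw_fcst)

def quantile_map(x):
    """Map x to the observed value at the same climatological quantile."""
    q = np.searchsorted(sorted_raw, x) / len(sorted_raw)
    return np.quantile(obs_clim, min(q, 1.0))

mapped = np.array([quantile_map(x) for x in raw_fcst[:1000]])

# the bias is removed by construction, but the mapping is monotone in the raw
# value, so rank correlation with observations is unchanged
bias_before = float(raw_fcst.mean() - obs_clim.mean())
bias_after = float(mapped.mean() - obs_clim.mean())
```

Because the map is monotone, a raw forecast uncorrelated with the observations stays uncorrelated after mapping, which is why QM alone cannot guarantee coherence.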


Author(s):  
George H. Cheng ◽  
Adel Younis ◽  
Kambiz Haji Hajikolaei ◽  
G. Gary Wang

Mode Pursuing Sampling (MPS) was developed as a global optimization algorithm for optimization problems involving expensive black-box functions. MPS has been found to be effective and efficient for problems of low dimensionality, i.e., where the number of design variables is less than ten. A previous conference publication integrated the concept of trust regions into the MPS framework to create a new algorithm, TRMPS, which dramatically improved performance and efficiency for high-dimensional problems. However, although TRMPS performed better than MPS, it had not been benchmarked against other established algorithms such as the genetic algorithm (GA). This paper introduces an improved algorithm, TRMPS2, which incorporates guided sampling and a low-function-value criterion to further improve performance on high-dimensional problems. TRMPS2 is benchmarked against MPS and GA using a suite of test problems. The results show that TRMPS2 performs better than MPS and GA on average for high dimensional, expensive, and black box (HEB) problems.


2015 ◽  
Vol 57 ◽  
Author(s):  
Andre Kristofer Pattantyus ◽  
Steven Businger

Deterministic model forecasts do not convey to end users the forecast uncertainty that models possess as a result of physics parameterizations, simplifications in the model representation of physical processes, and errors in initial conditions. This lack of understanding leads to a level of uncertainty in the forecast value when only a single deterministic model forecast is available. Increasing computational power and parallel software architectures allow multiple simulations to be carried out simultaneously, yielding useful measures of model uncertainty derived from the ensemble results. The Hybrid Single-Particle Lagrangian Integrated Trajectory and Dispersion (HYSPLIT) model has the ability to generate ensemble forecasts. A meteorological ensemble was formed to create probabilistic forecast products and an ensemble mean forecast for volcanic emissions from the Kilauea volcano that impact the state of Hawai’i. The probabilistic forecast products show uncertainty in pollutant concentrations that is especially useful for decision-making regarding public health. Initial comparisons of the ensemble mean forecasts with observations and a single model forecast show improvements in event timing for both sulfur dioxide and sulfate aerosol forecasts.


2020 ◽  
pp. 1-24
Author(s):  
James M. Borg ◽  
Alastair Channon

In a recent article by Borg and Channon, it was shown that social information alone, decoupled from any within-lifetime learning, can result in improved performance on a food-foraging task compared to when social information is unavailable. Here we assess whether access to social information leads to significant behavioral differences both when it improves performance on the task and when it does not: Do any behaviors resulting from social-information use, such as movement and increased agent interaction, persist even when the ability to discriminate between poisonous and non-poisonous food is no better than when social information is unavailable? Using a neuroevolutionary artificial life simulation, we show that social-information use can lead to the emergence of behaviors that differ from those seen when social information is unavailable, and that these behaviors act as a promoter of agent interaction. The results presented here suggest that the introduction of social information is sufficient, even when decoupled from within-lifetime learning, for the emergence of pro-social behaviors. We believe this work to be the first use of an artificial evolutionary system to explore the behavioral consequences of social-information use in the absence of within-lifetime learning.


2020 ◽  
Vol 148 (7) ◽  
pp. 2645-2669
Author(s):  
Craig S. Schwartz ◽  
May Wong ◽  
Glen S. Romine ◽  
Ryan A. Sobash ◽  
Kathryn R. Fossell

Abstract Five sets of 48-h, 10-member, convection-allowing ensemble (CAE) forecasts with 3-km horizontal grid spacing were systematically evaluated over the conterminous United States with a focus on precipitation across 31 cases. The various CAEs solely differed by their initial condition perturbations (ICPs) and central initial states. CAEs initially centered about deterministic Global Forecast System (GFS) analyses were unequivocally better than those initially centered about ensemble mean analyses produced by a limited-area single-physics, single-dynamics 15-km continuously cycling ensemble Kalman filter (EnKF), strongly suggesting relative superiority of the GFS analyses. Additionally, CAEs with flow-dependent ICPs derived from either the EnKF or multimodel 3-h forecasts from the Short-Range Ensemble Forecast (SREF) system had higher fractions skill scores than CAEs with randomly generated mesoscale ICPs. Conversely, due to insufficient spread, CAEs with EnKF ICPs had worse reliability, discrimination, and dispersion than those with random and SREF ICPs. However, members in the CAE with SREF ICPs undesirably clustered by dynamic core represented in the ICPs, and CAEs with random ICPs had poor spinup characteristics. Collectively, these results indicate that continuously cycled EnKF mean analyses were suboptimal for CAE initialization purposes and suggest that further work to improve limited-area continuously cycling EnKFs over large regional domains is warranted. Additionally, the deleterious aspects of using both multimodel and random ICPs suggest efforts toward improving spread in CAEs with single-physics, single-dynamics, flow-dependent ICPs should continue.

