Model Error Estimation Employing an Ensemble Data Assimilation Approach

2006 ◽  
Vol 134 (5) ◽  
pp. 1337-1354 ◽  
Author(s):  
Dusanka Zupanski ◽  
Milija Zupanski

Abstract A methodology for model error estimation is proposed and examined in this study. It provides estimates of the dynamical model state, the bias, and the empirical parameters by combining three approaches: 1) ensemble data assimilation, 2) state augmentation, and 3) parameter and model bias estimation. Uncertainties of these estimates are also determined, in terms of the analysis and forecast error covariances, employing the same methodology. The model error estimation approach is evaluated in application to the Korteweg–de Vries–Burgers (KdVB) numerical model within the framework of the maximum likelihood ensemble filter (MLEF). Experimental results indicate improved filter performance due to model error estimation. The innovation statistics also indicate that the estimated uncertainties are reliable. On the other hand, neglecting model errors—either in the form of an incorrect model parameter or a model bias—has detrimental effects on data assimilation, in some cases resulting in filter divergence. Although the method is examined in a simplified model framework, the results are encouraging. It remains to be seen how the methodology performs in applications to more complex models.
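As a rough illustration of the state-augmentation idea in this abstract, the sketch below appends parameters and a bias term to the state vector and updates the joint vector with a generic perturbed-observation EnKF step. This is not the MLEF used in the paper; the dimensions, the toy observation operator, and the synthetic data are all assumptions.

```python
# Minimal sketch (not the authors' MLEF): state augmentation for joint
# state/parameter/bias estimation, updated with a generic perturbed-observation
# EnKF step. All sizes, H, R, and the data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
Ne, Nx, Np, Ny = 20, 40, 2, 10            # ensemble size, state dim, parameters, obs

# Augmented ensemble: each column is [state; parameters; bias]
X = rng.normal(size=(Nx + Np + Nx, Ne))

H = np.zeros((Ny, Nx + Np + Nx))          # observe every 4th state variable only
H[np.arange(Ny), np.arange(0, Nx, 4)] = 1.0
R = 0.5 * np.eye(Ny)                      # observation error covariance
y = rng.normal(size=Ny)                   # synthetic observations

def enkf_update(X, y, H, R, rng):
    """Perturbed-observation EnKF update of the augmented ensemble."""
    Ne = X.shape[1]
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                         # ensemble perturbations
    Pf = A @ A.T / (Ne - 1)                            # augmented forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, Ne).T
    return X + K @ (Yp - H @ X)                        # parameters and bias are updated
                                                       # through their cross-covariances
                                                       # with the observed state
Xa = enkf_update(X, y, H, R, rng)
print(Xa[Nx:Nx + Np].mean(axis=1))                     # analysis estimate of the parameters
```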

2005 ◽  
Vol 133 (11) ◽  
pp. 3132-3147 ◽  
Author(s):  
Thomas M. Hamill ◽  
Jeffrey S. Whitaker

Abstract Insufficient model resolution is one source of model error in numerical weather predictions. Methods for parameterizing this error in ensemble data assimilations are explored here. Experiments were conducted with a two-layer primitive equation model, where the assumed true state was a T127 forecast simulation. Ensemble data assimilations were performed with the same model at T31 resolution, assimilating imperfect observations drawn from the T127 forecast. By design, the magnitude of errors due to model truncation was much larger than the error growth due to initial condition uncertainty, making this a stringent test of the ability of an ensemble-based data assimilation to deal with model error. Two general methods, “covariance inflation” and “additive error,” were considered for parameterizing the model error at the resolved scales (T31 and larger) due to interaction with the unresolved scales (T32 to T127). Covariance inflation expanded the background forecast members’ deviations about the ensemble mean, while additive error added specially structured noise to each ensemble member forecast before the update step. The method of parameterizing this model error had a substantial effect on the accuracy of the ensemble data assimilation. Covariance inflation produced ensembles with analysis errors that were no lower than the analysis errors from three-dimensional variational (3D-Var) assimilation, and for the method to avoid filter divergence, the assimilations had to be periodically reseeded. Covariance inflation uniformly expanded the model spread; however, the actual growth of model errors depended on the dynamics, growing proportionally more in the midlatitudes. The inappropriately uniform inflation progressively degraded the capacity of the ensemble to span the actual forecast error. The most accurate approach was an additive model-error parameterization, which reduced the error difference between 3D-Var and a near-perfect assimilation system by ∼40%. In the lowest-error simulations, additive errors were parameterized using samples of model error from a time series of differences between T63 and T31 forecasts. Scaled samples of differences between model forecast states separated by 24 h were also tested as additive error parameterizations, as well as scaled samples of the T31 model state’s anomaly from the T31 model climatology. The latter two methods produced analyses that were progressively less accurate. The decrease in accuracy was likely due to their inappropriately long spatial correlation length scales.
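A minimal sketch of the two model-error treatments compared here, applied to a background ensemble before the update step. The inflation factor, the stand-in sample bank of model-error differences, and its scaling are illustrative assumptions, not the paper's T63-minus-T31 samples.

```python
# Sketch of the two model-error parameterizations: multiplicative covariance
# inflation and additive error drawn from a bank of model-error samples.
# The factor r, the bank, and the scale are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
Ne, Nx = 25, 100
Xb = rng.normal(size=(Nx, Ne))                    # background ensemble (columns = members)

def covariance_inflation(Xb, r=1.25):
    """Expand deviations about the ensemble mean by a factor r."""
    xm = Xb.mean(axis=1, keepdims=True)
    return xm + r * (Xb - xm)

def additive_error(Xb, error_bank, scale=1.0, rng=rng):
    """Add scaled, randomly drawn model-error samples (e.g., differences between
    higher- and lower-resolution forecasts) to each member."""
    idx = rng.integers(0, error_bank.shape[1], size=Xb.shape[1])
    perturbations = error_bank[:, idx]
    perturbations -= perturbations.mean(axis=1, keepdims=True)   # leave the mean unchanged
    return Xb + scale * perturbations

error_bank = rng.normal(scale=0.3, size=(Nx, 500))  # stand-in for forecast-difference samples
Xb_inflated = covariance_inflation(Xb)
Xb_additive = additive_error(Xb, error_bank, scale=0.8)
```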


2013 ◽  
Vol 141 (6) ◽  
pp. 1804-1821 ◽  
Author(s):  
J. P. Hacker ◽  
W. M. Angevine

Abstract Experiments with the single-column implementation of the Weather Research and Forecasting Model provide a basis for deducing land–atmosphere coupling errors in the model. Coupling occurs through heat and moisture fluxes across the land–atmosphere interface and roughness sublayer, and through turbulent heat, moisture, and momentum fluxes in the atmospheric surface layer. This work primarily addresses the turbulent fluxes, which are parameterized following Monin–Obukhov similarity theory applied to the atmospheric surface layer. By combining ensemble data assimilation and parameter estimation, the model error can be characterized. Ensemble data assimilation of 2-m temperature and water vapor mixing ratio, and 10-m wind components, forces the model to follow observations during a month-long simulation for a column over the well-instrumented Atmospheric Radiation Measurement (ARM) Central Facility near Lamont, Oklahoma. One-hour errors in predicted observations are small but systematically nonzero, and the systematic errors measure bias as a function of local time of day. Analysis increments for nearby state elements (15 m AGL) can be too small or have the wrong sign, indicating systematically biased covariances and model error. Experiments using the ensemble filter to objectively estimate a parameter controlling the thermal land–atmosphere coupling show that the parameter adapts to offset the model errors, but that the errors cannot be eliminated. Results suggest either structural errors or further parametric errors that may be difficult to estimate. Experiments omitting atypical observations such as soil and flux measurements lead to qualitatively similar deductions, showing the potential for assimilating common in situ observations as an inexpensive framework for deducing and isolating model errors.
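The bias diagnostic described above, systematic one-hour errors stratified by local time of day, amounts to a simple conditional average. The sketch below uses a synthetic 2-m temperature series with a built-in diurnal bias; the data and amplitudes are assumptions, not the ARM observations.

```python
# Sketch of a diurnal bias diagnostic: average one-hour forecast-minus-observation
# errors by local time of day. The synthetic series below is an assumption.
import numpy as np

rng = np.random.default_rng(2)
n = 24 * 30                                       # hourly records for one month
local_hour = np.arange(n) % 24
predicted_t2m = 290 + rng.normal(size=n)          # 1-h predictions of 2-m temperature (K)
observed_t2m = predicted_t2m - 0.4 * np.sin(2 * np.pi * local_hour / 24) \
               + rng.normal(scale=0.5, size=n)    # synthetic obs with a diurnal bias

errors = predicted_t2m - observed_t2m
bias_by_hour = np.array([errors[local_hour == h].mean() for h in range(24)])
print(np.round(bias_by_hour, 2))                  # small but systematically nonzero
```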


2005 ◽  
Vol 133 (12) ◽  
pp. 3431-3449 ◽  
Author(s):  
D. M. Barker

Abstract Ensemble data assimilation systems incorporate observations into numerical models via solution of the Kalman filter update equations, using estimates of forecast error covariances derived from ensembles of model integrations. In this paper, a particular algorithm, the ensemble square root filter (EnSRF), is tested in a limited-area, polar numerical weather prediction (NWP) model: the Antarctic Mesoscale Prediction System (AMPS). For application in the real-time AMPS, the number of model integrations that can be run to provide forecast error covariances is limited, resulting in an ensemble sampling error that degrades the analysis fit to observations. In this work, multivariate, climatologically plausible forecast error covariances are specified via averaged forecast difference statistics. Ensemble representations of the “true” forecast errors, created using randomized control variables of the fifth-generation Pennsylvania State University–National Center for Atmospheric Research (PSU–NCAR) Mesoscale Model (MM5) three-dimensional variational (3DVAR) data assimilation system, are then used to assess the dependence of sampling error on ensemble size, data density, and localization of covariances using simulated observation networks. Results highlight the detrimental impact of ensemble sampling error on the analysis increment structure of correlated, but unobserved, fields—an issue not addressed by the spatial covariance localization techniques used to date. A 12-hourly cycling EnSRF/AMPS assimilation/forecast system is tested for a two-week period in December 2002 using real, conventional (surface, rawinsonde, satellite retrieval) observations. The dependence of forecast scores on the methods used to maintain ensemble spread and on the inclusion of perturbations to lateral boundary conditions is studied.
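A minimal sketch of a serial ensemble square root filter update with a simple distance-based covariance localization, in the spirit of the EnSRF discussed above. The one-dimensional grid, observation network, localization length, and Gaussian taper are assumptions rather than the AMPS configuration.

```python
# Sketch of a serial EnSRF: observations are assimilated one at a time, the mean is
# updated with a localized gain, and perturbations are updated deterministically.
# Grid size, obs locations, error variances, and the taper are assumptions.
import numpy as np

rng = np.random.default_rng(3)
Ne, Nx = 30, 200
X = rng.normal(size=(Nx, Ne))                     # ensemble of model states (columns)
obs_idx = np.arange(10, Nx, 20)                   # observed grid points
y = rng.normal(size=len(obs_idx))                 # synthetic observations
r_obs = 0.5                                       # observation error variance
loc_len = 15.0                                    # localization length scale (grid units)

for j, y_j in zip(obs_idx, y):
    xm = X.mean(axis=1, keepdims=True)
    A = X - xm
    hA = A[j]                                     # ensemble perturbations at the obs point
    var_hx = hA @ hA / (Ne - 1)                   # ensemble variance of the predicted obs
    cov_xy = A @ hA / (Ne - 1)                    # covariance of state with predicted obs
    rho = np.exp(-0.5 * ((np.arange(Nx) - j) / loc_len) ** 2)   # Gaussian localization taper
    K = rho * cov_xy / (var_hx + r_obs)           # localized Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(r_obs / (var_hx + r_obs)))     # EnSRF factor for perturbations
    xm_a = xm[:, 0] + K * (y_j - xm[j, 0])        # update the ensemble mean
    A_a = A - alpha * np.outer(K, hA)             # deterministic perturbation update
    X = xm_a[:, None] + A_a

print(X.mean(axis=1)[:5])                         # first few analysis-mean values
```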


2015 ◽  
Vol 143 (10) ◽  
pp. 3893-3911 ◽  
Author(s):  
Soyoung Ha ◽  
Judith Berner ◽  
Chris Snyder

Abstract Mesoscale forecasts are strongly influenced by physical processes that are either poorly resolved or must be parameterized in numerical models. In part because of errors in these parameterizations, mesoscale ensemble data assimilation systems generally suffer from underdispersiveness, which can limit the quality of analyses. Two explicit representations of model error for mesoscale ensemble data assimilation are explored: a multiphysics ensemble in which each member’s forecast is based on a distinct suite of physical parameterizations, and stochastic kinetic energy backscatter in which small noise terms are included in the forecast model equations. These two model error techniques are compared with a baseline experiment that includes spatially and temporally adaptive covariance inflation, in a domain over the continental United States using the Weather Research and Forecasting (WRF) Model for mesoscale ensemble forecasts and the Data Assimilation Research Testbed (DART) for the ensemble Kalman filter. Verification against independent observations and Rapid Update Cycle (RUC) 13-km analyses for the month of June 2008 showed that including the model error representation improved not only the analysis ensemble, but also short-range forecasts initialized from these analyses. Explicitly accounting for model uncertainty led to a better-tuned ensemble spread, a more skillful ensemble mean, and higher probabilistic scores, as well as significantly reducing the need for inflation. In particular, the stochastic backscatter scheme consistently outperformed both the multiphysics approach and the control run with adaptive inflation over almost all levels of the atmosphere both deterministically and probabilistically.
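The adaptive-inflation baseline mentioned above can be sketched by adjusting a multiplicative inflation factor so that ensemble variance in observation space stays consistent with innovation statistics. This is a deliberately simplified stand-in; the diagnostics, relaxation rate, and synthetic numbers below are assumptions and do not reproduce the spatially and temporally adaptive scheme used in the study.

```python
# Simplified adaptive inflation: diagnose the factor that makes the ensemble
# spread consistent with <d d^T> ≈ H P H^T + R, then relax toward it.
# All numbers here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(4)
Ne, Ny = 50, 400
lam = 1.0                                         # current multiplicative inflation factor
gamma = 0.1                                       # relaxation rate toward the diagnosed value

hx = rng.normal(size=(Ny, Ne))                    # ensemble of predicted observations
y = rng.normal(scale=1.5, size=Ny)                # observations (more variable than ensemble)
r_obs = 1.0                                       # observation error variance

innov = y - hx.mean(axis=1)                       # innovations d = y - H(x_mean)
spread2 = hx.var(axis=1, ddof=1).mean()           # mean ensemble variance in obs space
target = max(innov @ innov / Ny - r_obs, 1e-6)    # desired H P H^T implied by the innovations
lam_diag = np.sqrt(target / spread2)              # inflation that would match the statistics
lam = (1 - gamma) * lam + gamma * lam_diag        # relax the factor toward the diagnosis
print(round(float(lam), 3))
```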


2020 ◽  
Vol 10 (24) ◽  
pp. 9010
Author(s):  
Sujeong Lim ◽  
Myung-Seo Koo ◽  
In-Hyuk Kwon ◽  
Seon Ki Park

Ensemble data assimilation systems generally suffer from an underestimated background error covariance that leads to a filter divergence problem—the analysis diverges from the true state because observations are effectively ignored when the estimated model uncertainty is too small. To alleviate this problem, we have developed and implemented a stochastically perturbed hybrid physical–dynamical tendency scheme in the local ensemble transform Kalman filter of a global numerical weather prediction model—the Korean Integrated Model (KIM). This approach accounts for the model errors associated with the computational representation of the underlying partial differential equations and with imperfect physical parameterizations. The new stochastically perturbed hybrid tendency scheme generally improved the background error covariances in regions where the ensemble spread was not sufficiently represented by the control experiment, which used additive inflation and the relaxation-to-prior-spread method.
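In the spirit of the perturbed-tendency approach described above, the sketch below multiplies a toy model's combined dynamics and physics tendencies by (1 + r), where r is an AR(1) random pattern, for each ensemble member. The split of Lorenz-96 into "dynamics" and "physics", the white-in-space pattern, and the amplitude are assumptions; they do not reproduce the hybrid scheme implemented in KIM, which perturbs the actual model tendencies with spatially correlated patterns.

```python
# Sketch of stochastically perturbed tendencies: each member integrates a tendency
# scaled by (1 + r), with r an AR(1) random pattern. Toy model and parameters are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(5)
Ne, Nx, dt, nsteps = 10, 40, 0.01, 500
sigma, tau = 0.3, 0.5                             # pattern amplitude and decorrelation time
phi = np.exp(-dt / tau)                           # AR(1) coefficient

def dynamics(u):                                  # Lorenz-96 advection and damping terms
    return (np.roll(u, -1, axis=-1) - np.roll(u, 2, axis=-1)) * np.roll(u, 1, axis=-1) - u

def physics(u, F=8.0):                            # stand-in "physics": constant forcing
    return F + 0.0 * u

def rk4(f, u, dt):
    k1 = f(u); k2 = f(u + 0.5 * dt * k1); k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 + 0.01 * rng.normal(size=(Ne, Nx))        # nearly identical initial members
r = np.zeros_like(x)                              # perturbation pattern per member

for _ in range(nsteps):
    r = phi * r + sigma * np.sqrt(1 - phi**2) * rng.normal(size=r.shape)
    def perturbed(u, rr=r):                       # tendency scaled by the random pattern
        return (1.0 + rr) * (dynamics(u) + physics(u))
    x = rk4(perturbed, x, dt)

print(x.std(axis=0).mean())                       # ensemble spread after the perturbed run
```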


2015 ◽  
Vol 143 (12) ◽  
pp. 5073-5090 ◽  
Author(s):  
Craig H. Bishop ◽  
Bo Huang ◽  
Xuguang Wang

Abstract A consistent hybrid ensemble filter (CHEF) for using hybrid forecast error covariance matrices that linearly combine aspects of both climatological and flow-dependent matrices within a nonvariational ensemble data assimilation scheme is described. The CHEF accommodates the ensemble data assimilation enhancements of (i) model space ensemble covariance localization for satellite data assimilation and (ii) Hodyss’s method for improving accuracy using ensemble skewness. Like the local ensemble transform Kalman filter (LETKF), the CHEF is computationally scalable because it updates local patches of the atmosphere independently of others. Like the sequential ensemble Kalman filter (EnKF), it serially assimilates batches of observations and uses perturbed observations to create ensembles of analyses. It differs from the deterministic (no perturbed observations) ensemble square root filter (ESRF) and the EnKF in that (i) its analysis correction is unaffected by the order in which observations are assimilated even when localization is required, (ii) it uses accurate high-rank solutions for the posterior error covariance matrix to serially assimilate observations, and (iii) it accommodates high-rank hybrid error covariance models. Experiments were performed to assess the effect on CHEF and ESRF analysis accuracy of these differences. In the case where both the CHEF and the ESRF used tuned localized ensemble covariances for the forecast error covariance model, the CHEF’s advantage over the ESRF increased with observational density. In the case where the CHEF used a hybrid error covariance model but the ESRF did not, the CHEF had a substantial advantage for all observational densities.
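A minimal sketch of the hybrid covariance idea underlying the CHEF: a static climatological covariance and a localized, flow-dependent ensemble covariance are blended linearly and then used in a standard Kalman gain. The weights, localization, and toy dimensions are assumptions; this is not the CHEF algorithm itself.

```python
# Hybrid forecast error covariance sketch: P = alpha * P_clim + (1 - alpha) * (rho ∘ P_ens),
# then a standard Kalman gain is built from it. Sizes and weights are assumptions.
import numpy as np

rng = np.random.default_rng(6)
Ne, Nx, Ny = 20, 60, 12
alpha = 0.5                                       # weight on the climatological part

X = rng.normal(size=(Nx, Ne))
A = X - X.mean(axis=1, keepdims=True)
P_ens = A @ A.T / (Ne - 1)                        # flow-dependent ensemble covariance

dist = np.abs(np.subtract.outer(np.arange(Nx), np.arange(Nx)))
P_clim = np.exp(-dist / 10.0)                     # stand-in climatological covariance
rho = np.exp(-0.5 * (dist / 8.0) ** 2)            # localization taper for the ensemble part

P_hybrid = alpha * P_clim + (1 - alpha) * (rho * P_ens)

H = np.zeros((Ny, Nx))
H[np.arange(Ny), np.arange(0, Nx, 5)] = 1.0       # observe every 5th variable
R = 0.5 * np.eye(Ny)
K = P_hybrid @ H.T @ np.linalg.inv(H @ P_hybrid @ H.T + R)
print(K.shape)                                    # gain built from the hybrid covariance
```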


2021 ◽  
Vol 25 (6) ◽  
pp. 3319-3329
Author(s):  
Hannes Helmut Bauser ◽  
Daniel Berg ◽  
Kurt Roth

Abstract. Data assimilation methods are used throughout the geosciences to combine information from uncertain models and uncertain measurement data. However, geophysical systems differ in character and can be distinguished as either divergent or convergent: in divergent systems initially nearby states drift apart, while in convergent systems they coalesce. This difference has implications for the application of sequential ensemble data assimilation methods. This study explores these implications for two exemplary systems: the divergent Lorenz-96 model and the convergent description of soil water movement by the Richards equation. The results show that sequential ensemble data assimilation methods require a sufficient divergent component. This makes the transfer of the methods from divergent to convergent systems challenging. We demonstrate, through a set of case studies, that it is imperative to represent model errors adequately and to incorporate parameter uncertainties in ensemble data assimilation in convergent systems.
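The divergent/convergent contrast can be illustrated with a short sketch: two nearby Lorenz-96 states drift apart, while two states of a crude damped-diffusion system (a stand-in for convergent behavior, not the Richards equation) coalesce. Step sizes, forcing, and perturbation sizes are the usual illustrative choices, not values from the study.

```python
# Divergent vs. convergent dynamics: Lorenz-96 amplifies a small state difference,
# while a damped diffusion equation (a crude convergent stand-in) shrinks a large one.
import numpy as np

def lorenz96_tendency(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(f, x, dt):
    k1 = f(x); k2 = f(x + 0.5 * dt * k1); k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(7)
x1 = rng.normal(size=40); x2 = x1 + 1e-4 * rng.normal(size=40)
for _ in range(500):                              # divergent: Lorenz-96
    x1, x2 = rk4(lorenz96_tendency, x1, 0.01), rk4(lorenz96_tendency, x2, 0.01)
print("Lorenz-96 separation:", np.linalg.norm(x1 - x2))

def relax_diffuse(u, dt=0.01):
    """Forward-Euler step of a damped diffusion equation (convergent dynamics)."""
    return u + dt * (0.5 * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) - 0.5 * u)

u1 = rng.normal(size=40); u2 = u1 + 1e-1 * rng.normal(size=40)
for _ in range(500):                              # convergent: dissipative toy system
    u1, u2 = relax_diffuse(u1), relax_diffuse(u2)
print("dissipative separation:", np.linalg.norm(u1 - u2))
```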


2015 ◽  
Vol 143 (7) ◽  
pp. 2600-2610 ◽  
Author(s):  
Lili Lei ◽  
Joshua P. Hacker

Abstract Objective data assimilation methods such as variational and ensemble algorithms are attractive from a theoretical standpoint. Empirical nudging approaches are computationally efficient and can get around some amount of model error by using arbitrarily large nudging coefficients. In an attempt to take advantage of the strengths of both methods for analyses, combined nudging-ensemble approaches have recently been proposed. Here the two-scale Lorenz model is used to elucidate how the forecast error from nudging, ensemble, and nudging-ensemble schemes varies with model error. As expected, an ensemble filter and smoother are closest to optimal when model errors are small or absent. Model error is introduced by varying model forcing, coupling between scales, and spatial filtering. Nudging approaches perform relatively better with increased model error; use of poor ensemble covariance estimates when model error is large harms the nudging-ensemble performance. Consequently, nudging-ensemble methods always produce error levels between those of the objective ensemble filters and empirical nudging, and can never provide analyses or short-range forecasts with lower errors than both. As long as the nudged state and the ensemble-filter state are close enough, the ensemble statistics are useful for the nudging, and fully coupling the ensemble and nudging by centering the ensemble on the nudged state is not necessary. An ensemble smoother produces the overall smallest errors except with very large model errors. Results are qualitatively independent of tuning parameters such as covariance inflation and localization.
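A minimal sketch of the nudging component discussed above: a relaxation term G*(y - Hx) is added to the model tendency so the state is drawn toward observations. The single-scale Lorenz-96 toy model, the nudging coefficient, and the observation network are assumptions rather than the two-scale configuration used in the study.

```python
# Newtonian relaxation (nudging) sketch: augment the model tendency with
# G*(y - Hx) for the observed components. Toy model and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(8)
Nx, dt, nsteps, G = 40, 0.01, 1000, 2.0
obs_idx = np.arange(0, Nx, 2)                     # every second variable is observed

def tendency(x, F=8.0):                           # single-scale Lorenz-96 dynamics
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(f, x, dt):
    k1 = f(x); k2 = f(x + 0.5 * dt * k1); k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

truth = rng.normal(size=Nx)
x = rng.normal(size=Nx)                           # model run started from a wrong state
for _ in range(nsteps):
    truth = rk4(tendency, truth, dt)
    y = truth[obs_idx] + rng.normal(scale=0.1, size=len(obs_idx))   # noisy observations

    def nudged(u, y=y):                           # model tendency plus the relaxation term
        f = tendency(u)
        f[obs_idx] = f[obs_idx] + G * (y - u[obs_idx])
        return f

    x = rk4(nudged, x, dt)

print(np.sqrt(np.mean((x - truth) ** 2)))         # RMSE of the nudged run vs. the truth
```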

