Accounting for Model Error from Unresolved Scales in Ensemble Kalman Filters by Stochastic Parameterization

2017, Vol. 145 (9), pp. 3709-3723. Author(s): Fei Lu, Xuemin Tu, Alexandre J. Chorin

The use of discrete-time stochastic parameterization to account for model error due to unresolved scales in ensemble Kalman filters is investigated by numerical experiments. The parameterization quantifies the model error and produces an improved non-Markovian forecast model, which generates high-quality forecast ensembles and improves filter performance. Results are compared with methods that address model error through covariance inflation and localization (IL), using the two-layer Lorenz-96 system as an example. The numerical results show that when the ensemble size is sufficiently large, the parameterization is more effective in accounting for the model error than IL; when the ensemble size is small, IL is needed to reduce sampling error, but the parameterization further improves the performance of the filter. This suggests that in real applications, where the ensemble size is relatively small, the filter can achieve better performance than with pure IL if stochastic parameterization methods are combined with IL.
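
As a concrete illustration of the idea, here is a minimal sketch assuming an AR(1) form for the discrete-time model-error term and illustrative function names; the paper's actual non-Markovian parameterization is richer, but the two ingredients are the same: fit a stochastic model-error term offline from residuals against reference data, then include it in the EnKF forecast step.

```python
import numpy as np

def fit_ar1_model_error(resolved_step, x_ref):
    """Fit a simple AR(1) stochastic parameterization to one-step residuals.

    x_ref         : (T, d) reference trajectory (e.g., from the full two-layer model).
    resolved_step : callable advancing a state one step with the resolved model only.
    Returns (phi, sigma): AR(1) coefficient and residual std, per state variable.
    """
    resid = np.array([x_ref[t + 1] - resolved_step(x_ref[t])
                      for t in range(len(x_ref) - 1)])       # model-error samples
    r0, r1 = resid[:-1], resid[1:]
    phi = (r0 * r1).sum(axis=0) / (r0 * r0).sum(axis=0)      # lag-1 regression
    sigma = np.std(r1 - phi * r0, axis=0)
    return phi, sigma

def stochastic_forecast(ensemble, errors, resolved_step, phi, sigma, rng):
    """Forecast step: resolved dynamics plus a persistent stochastic model-error term."""
    errors = phi * errors + sigma * rng.standard_normal(ensemble.shape)
    ensemble = np.array([resolved_step(x) for x in ensemble]) + errors
    return ensemble, errors
```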

2016, Vol. 144 (12), pp. 4667-4686. Author(s): Mark L. Psiaki

Abstract A new type of ensemble filter is developed, one that stores and updates its state information in an efficient square root information filter form. It addresses two shortcomings of conventional ensemble Kalman filters: the coarse characterization of random forecast model error effects and the overly optimistic approximation of the estimation error statistics. The new filter uses an assumed a priori covariance approximation that is full rank but sparse, possibly with a dense low-rank increment. This matrix can be used to develop a nominal square root information equation for the system state and uncertainty. The measurements are used to develop an additional low-rank square root information equation. New algorithms provide forecasts and analyses of these increments at a computational cost comparable to that of existing ensemble Kalman filters. Model error effects are implicit in the a priori covariance time history, thereby obviating one of the reasons for including an inflation operation. The use of an a priori full-rank covariance allows the analysis operations to improve the state estimate without the need for a localization adjustment. This new filter exhibited worse performance than a typical covariance square root ensemble Kalman filter when operating on the Lorenz-96 problem in a chaotic regime. It excelled on a version of the Lorenz-96 problem where nonlinearities in the forecast model were weak, where the state vector uncertainty lay predominantly in a small subspace, and where the observations were spatially sparse. Such a problem might be representative of ionospheric space weather data assimilation where forcing variability can dominate the state uncertainty and where remote sensing data coverage can be sparse.
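
For readers unfamiliar with the square root information form, the sketch below shows a generic single-state SRIF measurement update (not Psiaki's ensemble algorithm, and with illustrative names): the prior is stored as an information equation, the whitened observation contributes extra rows, and a QR factorization re-triangularizes the stacked system.

```python
import numpy as np

def srif_measurement_update(R_prior, z_prior, H, y, obs_cov):
    """One square root information filter (SRIF) measurement update.

    Prior information equation:  z_prior = R_prior @ x + e,   e ~ N(0, I).
    Observation model:           y = H @ x + v,               v ~ N(0, obs_cov).
    Returns (R_post, z_post); the state estimate is solve(R_post, z_post) and
    the posterior covariance is inv(R_post.T @ R_post).
    """
    # Whiten the observation so its error covariance becomes the identity.
    L = np.linalg.cholesky(obs_cov)
    Hw = np.linalg.solve(L, H)
    yw = np.linalg.solve(L, y)
    # Stack prior and whitened observation rows, then re-triangularize by QR.
    A = np.vstack([np.hstack([R_prior, z_prior[:, None]]),
                   np.hstack([Hw, yw[:, None]])])
    _, T = np.linalg.qr(A)
    n = R_prior.shape[1]
    return T[:n, :n], T[:n, n]
```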


2017, Vol. 145 (3), pp. 985-1001. Author(s): Michèle De La Chevrotière, John Harlim

A data-driven method for improving the correlation estimation in serial ensemble Kalman filters is introduced. The method finds a linear map that transforms, at each assimilation cycle, the poorly estimated sample correlation into an improved correlation. This map is obtained from an offline training procedure, without any tuning, as the solution of a linear regression problem that uses appropriate sample correlation statistics from historical data assimilation outputs. In an idealized observing system simulation experiment (OSSE) with the Lorenz-96 model and for a range of linear and nonlinear observation models, the proposed scheme improves the filter estimates, especially when the ensemble size is small relative to the dimension of the state space.
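
A minimal sketch of the core idea follows, with illustrative names and a scalar linear map per correlation; the paper's regression uses richer sample correlation statistics, but the offline-train / online-correct structure is the same.

```python
import numpy as np

def train_correlation_map(sample_corrs, reference_corrs):
    """Fit a linear map  rho_improved = a * rho_sample + b  by least squares.

    sample_corrs, reference_corrs : 1D arrays of paired correlation values
    gathered offline from historical data assimilation outputs (training data).
    """
    A = np.column_stack([sample_corrs, np.ones_like(sample_corrs)])
    (a, b), *_ = np.linalg.lstsq(A, reference_corrs, rcond=None)
    return a, b

def correct_correlation(sample_corr, a, b):
    """Apply the trained map at assimilation time, keeping the result in [-1, 1]."""
    return np.clip(a * sample_corr + b, -1.0, 1.0)
```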


2015, Vol. 143 (5), pp. 1554-1567. Author(s): Lars Nerger

Abstract Ensemble square root filters can either assimilate all observations that are available at a given time at once, or assimilate the observations in batches or one at a time. For large-scale models, the filters are typically applied with a localized analysis step. This study demonstrates that the interaction of serial observation processing and localization can destabilize the analysis process, and it examines under which conditions the instability becomes significant. The instability results from a repeated inconsistent update of the state error covariance matrix that is caused by the localization. The inconsistency is present in all ensemble Kalman filters, except for the classical ensemble Kalman filter with perturbed observations. With serial observation processing, its effect is small in cases when the assimilation changes the ensemble of model states only slightly. However, when the assimilation has a strong effect on the state estimates, the interaction of localization and serial observation processing can significantly deteriorate the filter performance. In realistic large-scale applications, when the assimilation changes the states only slightly and when the distribution of the observations is irregular and changing over time, the instability is likely not significant.
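
The setting can be made concrete with a short sketch of serial observation processing in an ensemble square root filter with covariance localization, assuming scalar observations of individual state variables, independent observation errors, and precomputed localization weights (all illustrative choices, not the study's exact configuration).

```python
import numpy as np

def serial_ensrf_update(ens, obs, obs_idx, obs_var, loc_weights):
    """Assimilate scalar observations one at a time (serial EnSRF).

    ens         : (n_ens, n_state) ensemble of model states (modified in place).
    obs         : (n_obs,) observed values.
    obs_idx     : (n_obs,) state index observed by each observation.
    obs_var     : (n_obs,) observation-error variances.
    loc_weights : (n_obs, n_state) localization weights in [0, 1].
    """
    n_ens = ens.shape[0]
    for y, j, r, rho in zip(obs, obs_idx, obs_var, loc_weights):
        xmean = ens.mean(axis=0)
        xpert = ens - xmean
        hx = ens[:, j]                                   # ensemble in observation space
        hx_pert = hx - hx.mean()
        var_hx = hx_pert @ hx_pert / (n_ens - 1)
        cov_xy = xpert.T @ hx_pert / (n_ens - 1)
        gain = rho * cov_xy / (var_hx + r)               # localized Kalman gain
        alpha = 1.0 / (1.0 + np.sqrt(r / (var_hx + r)))  # EnSRF perturbation factor
        xmean = xmean + gain * (y - hx.mean())
        xpert = xpert - alpha * np.outer(hx_pert, gain)
        ens[:] = xmean + xpert
    return ens
```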


2012, Vol. 140 (9), pp. 3078-3089. Author(s): Jeffrey S. Whitaker, Thomas M. Hamill

Abstract Inflation of ensemble perturbations is employed in ensemble Kalman filters to account for unrepresented error sources. The authors propose a multiplicative inflation algorithm that inflates the posterior ensemble in proportion to the amount by which observations reduce the ensemble spread, resulting in more inflation in regions of dense observations. This is justified since the posterior ensemble variance is more affected by sampling errors in these regions. The algorithm is similar to the “relaxation to prior” algorithm proposed by Zhang et al., but it relaxes the posterior ensemble spread, rather than the posterior ensemble perturbations, back to the prior. The new inflation algorithm is compared to the method of Zhang et al. and to simple constant covariance inflation using a two-level primitive equation model in an environment that includes model error. The new method performs somewhat better, although the method of Zhang et al. produces more balanced analyses whose ensemble spread grows faster. Combining the new multiplicative inflation algorithm with additive inflation is found to be superior to either method used separately. Tests with large and small ensembles, with and without model error, suggest that multiplicative inflation is better suited to account for unrepresented observation-network-dependent assimilation errors such as sampling error, while model errors, which do not depend on the observing network, are better treated by additive inflation. A combination of additive and multiplicative inflation can provide a baseline for evaluating more sophisticated stochastic treatments of unrepresented background errors. This is demonstrated by comparing the performance of a stochastic kinetic energy backscatter scheme with additive inflation as a parameterization of model error.
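
A minimal sketch of the two relaxation schemes follows, using the standard formulas; array shapes and function names are assumptions for illustration.

```python
import numpy as np

def rtpp(prior_pert, post_pert, alpha):
    """Relaxation to prior perturbations (Zhang et al.): blend the posterior
    perturbations back toward the prior perturbations, member by member."""
    return (1.0 - alpha) * post_pert + alpha * prior_pert

def rtps(prior_pert, post_pert, alpha, eps=1e-12):
    """Relaxation to prior spread (multiplicative): inflate the posterior
    perturbations so the posterior spread relaxes toward the prior spread,
    grid point by grid point."""
    sig_b = prior_pert.std(axis=0, ddof=1)   # prior (background) spread
    sig_a = post_pert.std(axis=0, ddof=1)    # posterior (analysis) spread
    factor = 1.0 + alpha * (sig_b - sig_a) / (sig_a + eps)
    return post_pert * factor
```

With alpha = 0 both schemes leave the posterior perturbations unchanged; with alpha = 1, RTPP restores the full prior perturbations while RTPS restores only the prior spread, which is the distinction the abstract draws.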


2012, Vol. 140 (2), pp. 528-542. Author(s): Ibrahim Hoteit, Xiaodong Luo, Dinh-Tuan Pham

This paper investigates an approximation scheme for the optimal nonlinear Bayesian filter based on the Gaussian mixture representation of the state probability distribution function. The resulting filter is similar to the particle filter, but differs in that the standard weight-type correction of the particle filter is complemented by a Kalman-type correction with the associated covariance matrices in the Gaussian mixture. The authors show that this filter is an algorithm in between the Kalman filter and the particle filter, and it is therefore referred to as the particle Kalman filter (PKF). In the PKF, the solution of a nonlinear filtering problem is expressed as the weighted average of an “ensemble of Kalman filters” operating in parallel. Running an ensemble of Kalman filters is, however, computationally prohibitive for realistic atmospheric and oceanic data assimilation problems. For this reason, the authors consider the construction of the PKF through an “ensemble” of ensemble Kalman filters (EnKFs) instead, and call the implementation the particle EnKF (PEnKF). It is shown that different types of EnKFs can be considered special cases of the PEnKF. As in the particle filter, the authors also introduce a resampling step into the PEnKF in order to reduce the risk of weight collapse and improve the performance of the filter. Numerical experiments with the strongly nonlinear Lorenz-96 model are presented and discussed.
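
A minimal sketch of one PKF-style analysis step under simplifying assumptions (linear observation operator, exact component covariances, illustrative names): each Gaussian component receives a Kalman-type correction, the weights are updated with the innovation likelihood, and the components are resampled when the effective sample size becomes small.

```python
import numpy as np

def pkf_analysis(means, covs, weights, H, R, y, rng, neff_thresh=0.5):
    """One analysis step of a minimal Gaussian-mixture (particle Kalman) filter.

    means   : (n, d) component means.
    covs    : (n, d, d) component covariances.
    weights : (n,) mixture weights summing to one.
    """
    n = len(weights)
    new_w = np.empty(n)
    for i in range(n):
        S = H @ covs[i] @ H.T + R                 # innovation covariance
        innov = y - H @ means[i]
        K = covs[i] @ H.T @ np.linalg.inv(S)      # Kalman gain for component i
        means[i] = means[i] + K @ innov           # Kalman-type correction
        covs[i] = covs[i] - K @ H @ covs[i]
        # weight-type correction: likelihood of the innovation under component i
        logl = -0.5 * (innov @ np.linalg.solve(S, innov)
                       + np.log(np.linalg.det(S)))
        new_w[i] = weights[i] * np.exp(logl)
    new_w /= new_w.sum()
    # resample if the weights have collapsed onto a few components
    if 1.0 / np.sum(new_w ** 2) < neff_thresh * n:
        idx = rng.choice(n, size=n, p=new_w)
        means, covs = means[idx].copy(), covs[idx].copy()
        new_w = np.full(n, 1.0 / n)
    return means, covs, new_w
```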


2014, Vol. 142 (12), pp. 4499-4518. Author(s): Yicun Zhen, Fuqing Zhang

Abstract This study proposes a variational approach to adaptively determine the optimum radius of influence for ensemble covariance localization when uncorrelated observations are assimilated sequentially. Covariance localization is commonly used by various ensemble Kalman filters to limit the impact of covariance sampling errors when the ensemble size is small relative to the dimension of the state. The proposed approach is based on the premise of finding an optimum localization radius that minimizes the distance between the Kalman update using the localized sampling covariance and that using the true covariance, when the sequential ensemble Kalman square root filter method is used. The authors first examine the effectiveness of the proposed method for cases in which the true covariance is known or can be approximated by a sufficiently large ensemble. Not surprisingly, it is found that the smaller the true covariance distance or the smaller the ensemble, the smaller the localization radius that is needed. The authors further generalize the method to the more usual scenario in which the true covariance is unknown but can be represented or estimated probabilistically from the ensemble sampling covariance. The mathematical formula for this probabilistic and adaptive approach, using the Jeffreys prior, is derived. Promising results and limitations of this new method are discussed through experiments with the Lorenz-96 system.
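
A brute-force illustration of the premise in the known-truth setting, with a simple Gaussian taper standing in for the localization function and illustrative names; the paper instead derives the optimum radius variationally (and, when the truth is unknown, probabilistically with a Jeffreys prior) rather than by searching a list of candidate radii.

```python
import numpy as np

def best_localization_radius(P_true, P_sample, H, R, radii, dists):
    """Pick the radius whose localized sample covariance gives the Kalman gain
    closest (Frobenius norm) to the gain computed from the true covariance.

    dists : (n_state, n_state) pairwise distances between state variables.
    """
    def gain(P):
        S = H @ P @ H.T + R
        return P @ H.T @ np.linalg.inv(S)

    K_true = gain(P_true)
    best_radius, best_err = None, np.inf
    for radius in radii:
        taper = np.exp(-0.5 * (dists / radius) ** 2)   # simple Gaussian taper
        K_loc = gain(taper * P_sample)                 # Schur (elementwise) product
        err = np.linalg.norm(K_loc - K_true)
        if err < best_err:
            best_radius, best_err = radius, err
    return best_radius
```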


2013, Vol. 141 (11), pp. 4140-4153. Author(s): Jeffrey Anderson, Lili Lei

Abstract Localization is a method for reducing the impact of sampling errors in ensemble Kalman filters. Here, the regression coefficient, or gain, relating ensemble increments for observed quantity y to increments for state variable x is multiplied by a real number α defined as a localization. Localization of the impact of observations on model state variables is required for good performance when applying ensemble data assimilation to large atmospheric and oceanic problems. Localization also improves performance in idealized low-order ensemble assimilation applications. An algorithm that computes localization from the output of an ensemble observing system simulation experiment (OSSE) is described. The algorithm produces localizations for sets of pairs of observations and state variables: for instance, all state variables that are between 300- and 400-km horizontal distance from an observation. The algorithm is applied in a low-order model to produce localizations from the output of an OSSE and the computed localizations are then used in a new OSSE. Results are compared to assimilations using tuned localizations that are approximately Gaussian functions of the distance between an observation and a state variable. In most cases, the empirically computed localizations produce the lowest root-mean-square errors in subsequent OSSEs. Localizations derived from OSSE output can provide guidance for localization in real assimilation experiments. Applying the algorithm in large geophysical applications may help to tune localization for improved ensemble filter performance.
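
One plausible minimal reading of the computation, with illustrative names: for each bin of observation-state pairs, choose the localization factor by least squares so that the scaled unlocalized increment best matches the increment implied by the true state in the OSSE. The paper's exact definition of the minimized quantity may differ.

```python
import numpy as np

def empirical_localization(unloc_increments, desired_increments, pair_bins, n_bins):
    """Least-squares localization factor per bin of (observation, state) pairs.

    unloc_increments   : (n_pairs,) increments dx produced without localization.
    desired_increments : (n_pairs,) increments diagnosed from the OSSE
                         (true state minus prior ensemble mean).
    pair_bins          : (n_pairs,) bin index for each pair, e.g. a
                         horizontal-distance bin such as 300-400 km.
    """
    alphas = np.zeros(n_bins)
    for b in range(n_bins):
        mask = pair_bins == b
        dx = unloc_increments[mask]
        d = desired_increments[mask]
        denom = dx @ dx
        # alpha minimizing sum_i (alpha * dx_i - d_i)^2; values may be
        # clipped to [0, 1] afterward if a bounded localization is wanted.
        alphas[b] = (dx @ d) / denom if denom > 0 else 0.0
    return alphas
```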


2011, Vol. 139 (1), pp. 117-131. Author(s): Thomas M. Hamill, Jeffrey S. Whitaker

Abstract The spread of an ensemble of weather predictions initialized from an ensemble Kalman filter may grow slowly relative to other methods for initializing ensemble predictions, degrading its skill. Several possible causes of the slow spread growth were evaluated in perfect- and imperfect-model experiments with a two-layer primitive equation spectral model of the atmosphere. The causes examined were the covariance localization, the additive noise used to stabilize the assimilation method and parameterize the system error, and the model error itself. In these experiments, the flow-independent additive noise was the biggest factor in constraining spread growth. Preevolving additive noise perturbations were tested as a way to make the additive noise more flow dependent. This modestly improved the data assimilation and ensemble predictions, both in the two-layer model results and in a brief test of the assimilation of real observations into a global multilevel spectral primitive equation model. More generally, these results suggest that methods for treating model error in ensemble Kalman filters that greatly reduce the flow dependency of the background-error covariances may increase the filter analysis error and decrease the rate of forecast spread growth.
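
A minimal sketch of the pre-evolved additive noise idea, with hypothetical names and a simple noise bank standing in for the paper's perturbation source: random additive perturbations are attached to a reference state, integrated with the nonlinear model for a short period so they acquire flow-dependent structure, and the evolved differences are then added to the analysis ensemble.

```python
import numpy as np

def preevolved_additive_noise(analysis_ens, noise_bank, model_step, n_evolve, scale, rng):
    """Add flow-dependent additive noise by pre-evolving random perturbations.

    analysis_ens : (n_ens, n_state) analysis ensemble.
    noise_bank   : (n_samples, n_state) library of candidate perturbations
                   (e.g., scaled differences of historical model states).
    model_step   : callable advancing a full model state one step.
    """
    n_ens = analysis_ens.shape[0]
    ref = analysis_ens.mean(axis=0)
    draws = noise_bank[rng.choice(len(noise_bank), size=n_ens)] * scale
    # Evolve the reference and the perturbed states in tandem for a short period.
    ref_traj = ref.copy()
    pert_traj = ref + draws
    for _ in range(n_evolve):
        ref_traj = model_step(ref_traj)
        pert_traj = np.array([model_step(x) for x in pert_traj])
    evolved_pert = pert_traj - ref_traj          # flow-dependent additive noise
    evolved_pert -= evolved_pert.mean(axis=0)    # keep the ensemble mean unchanged
    return analysis_ens + evolved_pert
```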

