Expanding the validity of the ensemble Kalman filter without the intrinsic need for inflation

2015 ◽  
Vol 22 (6) ◽  
pp. 645-662 ◽  
Author(s):  
M. Bocquet ◽  
P. N. Raanes ◽  
A. Hannart

Abstract. The ensemble Kalman filter (EnKF) is a powerful data assimilation method for high-dimensional nonlinear systems. However, its implementation requires somewhat ad hoc procedures such as localization and inflation. The recently developed finite-size ensemble Kalman filter (EnKF-N) does not require the multiplicative inflation meant to counteract sampling errors. Aside from the practical interest of avoiding the tuning of inflation in perfect-model data assimilation experiments, it also offers theoretical insights and a unique perspective on the EnKF. Here, we revisit, clarify and correct several key points of the EnKF-N derivation. This simplifies the use of the method and expands its validity. The EnKF is shown to rely not only on the observations and the forecast ensemble, but also on an implicit prior assumption, termed the hyperprior, that fills the gap of missing information. In the EnKF-N framework, this assumption is made explicit through a Bayesian hierarchy. This hyperprior has so far been chosen to be the uninformative Jeffreys prior. Here, this choice is revisited to improve the performance of the EnKF-N in the regime where the analysis is strongly dominated by the prior. Moreover, it is shown that the EnKF-N can be extended with a normal-inverse-Wishart informative hyperprior that introduces additional information on error statistics. This can be identified as a hybrid EnKF–3D-Var counterpart to the EnKF-N.
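To make the role of multiplicative inflation concrete, here is a minimal sketch of a stochastic (perturbed-observations) EnKF analysis step in which sampling error in the forecast covariance is compensated by an inflation factor. This is an illustrative implementation, not the EnKF-N algorithm of the abstract; all function and variable names are hypothetical.

```python
import numpy as np

def enkf_analysis(E, y, H, R, infl=1.0, seed=0):
    """Stochastic EnKF analysis step with multiplicative inflation.

    E    : (n, N) forecast ensemble (n state vars, N members)
    y    : (p,)   observation vector
    H    : (p, n) linear observation operator
    R    : (p, p) observation-error covariance
    infl : multiplicative inflation factor applied to the
           forecast perturbations (infl > 1 compensates for
           sampling error; EnKF-N removes the need to tune it)
    """
    n, N = E.shape
    xb = E.mean(axis=1, keepdims=True)
    # Inflate the forecast perturbations about the ensemble mean.
    A = infl * (E - xb)
    E = xb + A
    Pb = A @ A.T / (N - 1)                       # sample covariance
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)
    # Perturbed observations, one realization per member.
    rng = np.random.default_rng(seed)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - H @ E)
```

With `infl=1.0` this reduces to the plain stochastic EnKF; the EnKF-N of this paper replaces the tuned factor with a prior derived from a Bayesian hierarchy over the hyperprior.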



SPE Journal ◽  
2011 ◽  
Vol 16 (02) ◽  
pp. 294-306 ◽  
Author(s):  
Lingzao Zeng ◽  
Haibin Chang ◽  
Dongxiao Zhang

Summary The ensemble Kalman filter (EnKF) has been used widely for data assimilation. Because the EnKF is a Monte Carlo-based method, a large ensemble size is required to reduce the sampling errors. In this study, a probabilistic collocation-based Kalman filter (PCKF) is developed to adjust the reservoir parameters to honor the production data. It combines the advantages of the EnKF for dynamic data assimilation and the polynomial chaos expansion (PCE) for efficient uncertainty quantification. In this approach, all the system parameters and states and the production data are approximated by the PCE. The PCE coefficients are solved with the probabilistic collocation method (PCM). Collocation realizations are constructed by choosing collocation point sets in the random space. The simulation for each collocation realization is solved forward in time independently by means of an existing deterministic solver, as in the EnKF method. In the analysis step, the needed covariance is approximated by the PCE coefficients. In this study, a square-root filter is employed to update the PCE coefficients. After the analysis, new collocation realizations are constructed. With the parameter collocation realizations as the inputs and the state collocation realizations as initial conditions, respectively, the simulations are forwarded to the next analysis step. Synthetic 2D water/oil examples are used to demonstrate the applicability of the PCKF in history matching. The results are compared with those from the EnKF on the basis of the same analysis. It is shown that the estimations provided by the PCKF are comparable to those obtained from the EnKF. The biggest improvement of the PCKF comes from the leading PCE approximation, with which the computational burden of the PCKF can be greatly reduced by means of a smaller number of simulation runs, and the PCKF outperforms the EnKF for a similar computational effort. When the correlation ratio is much smaller, the PCKF still provides estimations with a better accuracy for a small computational effort.
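To illustrate the PCE/collocation ingredient of the PCKF, the following sketch fits the probabilists' Hermite PCE coefficients of a scalar response of one standard-normal input using Gauss-Hermite collocation points. It is a deliberately 1-D toy, assuming a Gaussian input; the actual PCKF works with multi-dimensional expansions and updates the coefficients with a square-root filter.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(f, order):
    """PCE coefficients c_k of f(xi), xi ~ N(0, 1), so that
    f(xi) ~ sum_k c_k He_k(xi), fitted by Gauss-Hermite collocation.

    The projection is <f, He_k> / <He_k, He_k>, where
    <He_k, He_k> = k! for probabilists' Hermite polynomials.
    """
    pts, wts = hermegauss(order + 1)       # collocation points/weights
    wts = wts / np.sqrt(2.0 * np.pi)       # normalize to the N(0,1) measure
    basis = np.eye(order + 1)              # row k selects He_k
    c = np.empty(order + 1)
    for k in range(order + 1):
        proj = np.sum(wts * f(pts) * hermeval(pts, basis[k]))
        c[k] = proj / math.factorial(k)
    return c
```

Once the coefficients are known, the mean is `c[0]` and the variance is `sum(k! * c[k]**2 for k >= 1)`, which is how the analysis-step covariance can be read directly off the PCE coefficients instead of a large Monte Carlo ensemble.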


2011 ◽  
Vol 18 (5) ◽  
pp. 735-750 ◽  
Author(s):  
M. Bocquet

Abstract. The main intrinsic source of error in the ensemble Kalman filter (EnKF) is sampling error. External sources of error, such as model error or deviations from Gaussianity, depend on the dynamical properties of the model. Sampling errors can lead to instability of the filter which, as a consequence, often requires inflation and localization. The goal of this article is to derive an ensemble Kalman filter which is less sensitive to sampling errors. A prior probability density function conditional on the forecast ensemble is derived using Bayesian principles. Even though this prior is built upon the assumption that the ensemble is Gaussian-distributed, it is different from the Gaussian probability density function defined by the empirical mean and the empirical error covariance matrix of the ensemble, which is implicitly used in traditional EnKFs. This new prior generates a new class of ensemble Kalman filters, called the finite-size ensemble Kalman filter (EnKF-N). One deterministic variant, the finite-size ensemble transform Kalman filter (ETKF-N), is derived. It is tested on the Lorenz '63 and Lorenz '95 models. In this context, ETKF-N is shown to be stable without inflation for ensemble sizes greater than the dimension of the model's unstable subspace, at the same numerical cost as the ensemble transform Kalman filter (ETKF). One variant of ETKF-N seems to systematically outperform the ETKF with optimally tuned inflation. However, it is shown that ETKF-N does not account for all sampling errors, and it necessitates localization, like any EnKF, whenever the ensemble size is too small. In order to explore the need for inflation in this small-ensemble regime, a local version of the new class of filters (LETKF-N) is defined and tested on the Lorenz '95 toy model. Whatever the size of the ensemble, the filter is stable. Its performance without inflation is slightly inferior to that of the LETKF with optimally tuned inflation for small time intervals between updates, and superior to it for large time intervals between updates.
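For reference, the baseline against which ETKF-N is compared is the deterministic ETKF analysis, which works in the low-dimensional ensemble space. The following is a minimal sketch of that standard step (without inflation); names and structure are illustrative, not taken from the paper.

```python
import numpy as np

def etkf_analysis(E, y, H, R):
    """Deterministic ETKF analysis step, a minimal sketch.

    E: (n, N) forecast ensemble; y: (p,) observation;
    H: (p, n) observation operator; R: (p, p) obs-error covariance.
    """
    n, N = E.shape
    xb = E.mean(axis=1, keepdims=True)
    X = E - xb                               # forecast perturbations
    Y = H @ X                                # perturbations in obs space
    Rinv = np.linalg.inv(R)
    # Analysis covariance in ensemble space: ((N-1) I + Y^T R^-1 Y)^-1
    P_tilde = np.linalg.inv((N - 1) * np.eye(N) + Y.T @ Rinv @ Y)
    # Mean update weights.
    w = P_tilde @ Y.T @ Rinv @ (y - (H @ xb).ravel())
    # Symmetric square-root transform for the updated perturbations.
    s, U = np.linalg.eigh(P_tilde)
    T = U @ np.diag(np.sqrt((N - 1) * s)) @ U.T
    return xb + X @ w[:, None] + X @ T
```

ETKF-N has the same cost structure: it replaces the fixed `(N - 1)` weighting of the prior term, which encodes the implicit Gaussian prior, with one derived from the predictive prior conditional on the ensemble.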


2007 ◽  
Vol 135 (10) ◽  
pp. 3484-3495 ◽  
Author(s):  
Brian J. Etherton

Abstract An ensemble Kalman filter (EnKF) estimates the error statistics of a model forecast using an ensemble of model forecasts. One use of an EnKF is data assimilation, resulting in the creation of an increment to the first-guess field at the observation time. Another use of an EnKF is to propagate error statistics of a model forecast forward in time, such as is done for optimizing the location of adaptive observations. Combining these two uses of an ensemble Kalman filter, a “preemptive forecast” can be generated. In a preemptive forecast, the increment to the first-guess field is, using ensembles, propagated to some future time and added to the future control forecast, resulting in a new forecast. This new forecast requires no more time to produce than the time needed to run a data assimilation scheme, as no model integration is necessary. In an observing system simulation experiment (OSSE), a barotropic vorticity model was run to produce a 300-day “nature run.” The same model, run with a different vorticity forcing scheme, served as the forecast model. The model produced 24- and 48-h forecasts for each of the 300 days. The model was initialized every 24 h by assimilating observations of the nature run using a hybrid ensemble Kalman filter–three-dimensional variational data assimilation (3DVAR) scheme. In addition to the control forecast, a 64-member forecast ensemble was generated for each of the 300 days. Every 24 h, given a set of observations, the 64-member ensemble, and the control run, an EnKF was used to create 24-h preemptive forecasts. The preemptive forecasts were more accurate than the unmodified, original 48-h forecasts, though not quite as accurate as the 24-h forecast obtained from a new model integration initialized by assimilating the same observations as were used in the preemptive forecasts. The accuracy of the preemptive forecasts improved significantly when 1) the ensemble-based error statistics used by the EnKF were localized using a Schur product and 2) a model error term was included in the background error covariance matrices.
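The Schur-product localization mentioned in point 1) is an elementwise product of the ensemble covariance with a distance-dependent correlation taper. The sketch below uses a simple Gaussian taper for illustration; operational schemes typically use a compactly supported Gaspari-Cohn function, and the function name here is hypothetical.

```python
import numpy as np

def localize(P, dist, L):
    """Schur (elementwise/Hadamard) product localization.

    P    : (n, n) ensemble-based covariance matrix
    dist : (n, n) matrix of distances between grid points
    L    : localization length scale

    Long-range sample covariances, which are dominated by sampling
    noise, are damped toward zero; the diagonal is left unchanged.
    """
    rho = np.exp(-0.5 * (dist / L) ** 2)   # correlation taper in [0, 1]
    return rho * P                          # Schur product
```

Because the taper equals 1 at zero distance, variances are preserved while spurious long-range covariances are suppressed, which is why localization stabilizes the EnKF at small ensemble sizes.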


Author(s):  
Nicolas Papadakis ◽  
Etienne Mémin ◽  
Anne Cuzol ◽  
Nicolas Gengembre

2016 ◽  
Vol 66 (8) ◽  
pp. 955-971 ◽  
Author(s):  
Stéphanie Ponsar ◽  
Patrick Luyten ◽  
Valérie Dulière

Icarus ◽  
2010 ◽  
Vol 209 (2) ◽  
pp. 470-481 ◽  
Author(s):  
Matthew J. Hoffman ◽  
Steven J. Greybush ◽  
R. John Wilson ◽  
Gyorgyi Gyarmati ◽  
Ross N. Hoffman ◽  
...  

2010 ◽  
Vol 34 (8) ◽  
pp. 1984-1999 ◽  
Author(s):  
Ahmadreza Zamani ◽  
Ahmadreza Azimian ◽  
Arnold Heemink ◽  
Dimitri Solomatine

2013 ◽  
Vol 5 (6) ◽  
pp. 3123-3139 ◽  
Author(s):  
Yasumasa Miyazawa ◽  
Hiroshi Murakami ◽  
Toru Miyama ◽  
Sergey Varlamov ◽  
Xinyu Guo ◽  
...  
