Ensemble Kalman Filters
Recently Published Documents

Total documents: 83 (last five years: 15)
H-index: 23 (last five years: 2)

PLoS ONE, 2021, Vol. 16(3), e0248046
Authors: Elizabeth Hou, Earl Lawrence, Alfred O. Hero

The ensemble Kalman filter (EnKF) is a data assimilation technique that uses an ensemble of models, updated with data, to track the time evolution of a usually non-linear system. It does so by using an empirical approximation to the well-known Kalman filter. However, its performance can suffer when the ensemble size is smaller than the state dimension, as is often necessary for computationally burdensome models. In this scenario, the empirical estimate of the state covariance is not full rank and possibly quite noisy. To solve this problem in the high-dimensional regime, we propose a computationally fast and easy-to-implement algorithm called the penalized ensemble Kalman filter (PEnKF). Under certain conditions, the PEnKF can be proven to be accurate (the estimation error converges to zero) despite having fewer ensemble members than state dimensions. Further, in contrast to localization methods, the proposed approach learns the covariance structure associated with the dynamical system. These theoretical results are supported by simulations of several non-linear and high-dimensional systems.
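As a rough illustration of the idea behind a penalized ensemble update (a minimal sketch, not the authors' PEnKF implementation), the snippet below performs a single stochastic EnKF analysis step in which the rank-deficient sample covariance is regularized by soft-thresholding its off-diagonal entries; the thresholding rule, the observation operator, and all dimensions are assumptions chosen for illustration.

```python
import numpy as np

def penalized_enkf_analysis(X, y, H, R, lam=0.1):
    """One stochastic EnKF analysis step with a penalized sample covariance.

    X   : (n_state, n_ens) forecast ensemble
    y   : (n_obs,) observation vector
    H   : (n_obs, n_state) linear observation operator
    R   : (n_obs, n_obs) observation-error covariance
    lam : penalty strength (soft-thresholding of off-diagonal entries)
    """
    n_state, n_ens = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    A = (X - x_mean) / np.sqrt(n_ens - 1)           # ensemble anomalies
    P = A @ A.T                                      # rank-deficient sample covariance

    # Penalize: soft-threshold off-diagonal entries to suppress spurious
    # long-range correlations caused by the small ensemble size.
    off = P - np.diag(np.diag(P))
    off = np.sign(off) * np.maximum(np.abs(off) - lam, 0.0)
    P_pen = np.diag(np.diag(P)) + off

    # Standard Kalman update with the regularized covariance and perturbed observations.
    S = H @ P_pen @ H.T + R
    K = P_pen @ H.T @ np.linalg.solve(S, np.eye(len(y)))
    innovations = y[:, None] + np.random.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T - H @ X
    return X + K @ innovations

# Toy usage: 40-dimensional state, 10 ensemble members, observe every other variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
H = np.eye(40)[::2]
y = rng.normal(size=20)
Xa = penalized_enkf_analysis(X, y, H, np.eye(20) * 0.5)
```

With lam = 0 this reduces to a plain perturbed-observation EnKF, which makes the effect of the penalty easy to isolate in experiments.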


2021
Authors: Quentin Malartic, Marc Bocquet, Alban Farchi

In a recent methodological paper, we have shown how a (local) ensemble Kalman filter can be used to learn both the state and the dynamics of a system in an online framework. The surrogate model is fully parametrised (for example, it could be a neural network) and the update is a two-step process: (i) a state update, possibly localised, and (ii) a parameter update consistent with the state update. In this framework, the parameters of the surrogate model are assumed to be global.

In this presentation, we show how to extend the method to the case where the surrogate model, still fully parametrised, admits both global and local parameters (typically forcing parameters). In this case, localisation can be applied not only to the state update, but also to the local parameter update. This results in a collection of new algorithms, depending on the localisation method (covariance localisation or domain localisation) and on whether localisation is applied to the state update, or to both the state and local parameter updates. The algorithms are implemented and successfully tested on the 40-variable Lorenz model. Finally, we show a two-dimensional illustration of the method using a multi-layer Lorenz model with radiance-like non-local observations.
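A minimal sketch of the general idea (not the authors' localized two-step algorithm): augmenting the state vector with the surrogate-model parameters and updating both with a stochastic EnKF makes the parameter update automatically consistent with the state update through the state-parameter cross-covariances. The toy Lorenz-96-like surrogate, the two global parameters, and the observation setup below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_theta, n_ens = 40, 2, 20          # state size, global parameters, ensemble size

def surrogate(x, theta, dt=0.05):
    """Toy parametrised surrogate: a Lorenz-96-like step with learnable
    forcing theta[0] and damping theta[1] (purely illustrative)."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - theta[1] * x + theta[0]
    return x + dt * dxdt

# Augmented ensemble: rows 0..n_x-1 hold the state, the last rows hold parameters.
Z = np.vstack([rng.normal(size=(n_x, n_ens)),
               rng.normal(loc=[[8.0], [1.0]], scale=0.5, size=(n_theta, n_ens))])

H = np.hstack([np.eye(n_x), np.zeros((n_x, n_theta))])   # observe the state only
R = 0.5 * np.eye(n_x)
y = rng.normal(size=n_x)                                  # stand-in observation

# Forecast: propagate each member's state with its own parameters (parameters persist).
for k in range(n_ens):
    Z[:n_x, k] = surrogate(Z[:n_x, k], Z[n_x:, k])

# Stochastic EnKF analysis on the augmented vector: the cross-covariance between
# state and parameters is what carries information from y into the parameters.
z_mean = Z.mean(axis=1, keepdims=True)
A = (Z - z_mean) / np.sqrt(n_ens - 1)
P = A @ A.T
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_x), R, size=n_ens).T
Z = Z + K @ (Y_pert - H @ Z)

print("updated parameter estimate:", Z[n_x:].mean(axis=1))
```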


2021, Vol. 149(1), pp. 65-76
Author: Mohamad El Gharamti

Abstract. Model errors and sampling errors produce inaccurate sample covariances that limit the performance of ensemble Kalman filters. Linearly hybridizing the flow-dependent ensemble-based covariance with a time-invariant background covariance matrix gives a better estimate of the true error covariance, as previous studies have shown both in theory and in practice. How to choose the weight for each covariance remains an open question, especially in the presence of model biases. This study assumes the weighting coefficient to be a random variable and introduces a Bayesian scheme to estimate it using the available data. The scheme takes into account the discrepancy between the ensemble mean and the observations, the ensemble variance, the static background variance, and the uncertainties in the observations. The proposed algorithm is first derived for a spatially constant weight; this assumption is then relaxed by estimating a separate scalar weight for each state variable. Using twin experiments with the 40-variable Lorenz 96 system, it is shown that the proposed scheme is able to produce quality forecasts even in the presence of severe sampling errors. The adaptive algorithm allows the hybrid filter to switch between an EnKF and a simple EnOI depending on the statistics of the ensemble. In the presence of model errors, the adaptive scheme yields additional improvements over standard enhancements alone, such as inflation and localization. Finally, the potential of the spatially varying variant to accommodate challenging sparse observation networks is demonstrated. The computational cost and storage requirements of the proposed scheme, which remain an obstacle, are also discussed.
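A minimal sketch of the linear hybridization at the core of the scheme (the adaptive Bayesian estimation of the weight itself is not reproduced here): the flow-dependent ensemble covariance is blended with a static background covariance before forming the Kalman gain. The weight alpha, the stand-in static covariance B, and the toy dimensions are assumptions.

```python
import numpy as np

def hybrid_enkf_gain(X, B, H, R, alpha=0.5):
    """Kalman gain built from the hybrid covariance
    P_hybrid = alpha * P_ensemble + (1 - alpha) * B.

    X : (n, m) forecast ensemble, B : (n, n) static background covariance,
    H : (p, n) observation operator, R : (p, p) observation-error covariance.
    """
    n, m = X.shape
    A = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)
    P_ens = A @ A.T                                  # flow-dependent sample covariance
    P_hyb = alpha * P_ens + (1.0 - alpha) * B        # linear hybridization
    return P_hyb @ H.T @ np.linalg.inv(H @ P_hyb @ H.T + R)

# Toy usage on a 40-variable state with a small ensemble.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 10))
B = np.eye(40)                      # stand-in climatological covariance
H = np.eye(40)[::4]                 # observe every fourth variable
R = 0.25 * np.eye(10)
K = hybrid_enkf_gain(X, B, H, R, alpha=0.7)
print(K.shape)                      # (40, 10)
```

Setting alpha = 1 recovers a pure EnKF gain and alpha = 0 an EnOI-like gain, which mirrors the switching behaviour described in the abstract.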


2020, Vol. 118(2), e2015006118
Authors: Connor Duffin, Edward Cripps, Thomas Stemler, Mark Girolami

We present a statistical finite element method for nonlinear, time-dependent phenomena, illustrated in the context of nonlinear internal waves (solitons). We take a Bayesian approach and leverage the finite element method to cast the statistical problem as a nonlinear Gaussian state–space model, updating the solution, in receipt of data, in a filtering framework. The method is applicable to problems across science and engineering for which finite element methods are appropriate. The Korteweg–de Vries equation for solitons is presented because it reflects the necessary complexity while being suitably familiar and succinct for pedagogical purposes. We present two algorithms to implement this method, based on the extended and ensemble Kalman filters, and demonstrate effectiveness with a simulation study and a case study with experimental data. The generality of our approach is demonstrated in SI Appendix, where we present examples from additional nonlinear, time-dependent partial differential equations (Burgers equation, Kuramoto–Sivashinsky equation).
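As a loose illustration of casting a discretized, nonlinear, time-dependent problem as a Gaussian state-space model and updating it in a filtering framework (not the paper's statistical finite element formulation), the sketch below runs one extended Kalman filter step with a finite-difference Jacobian; the toy nonlinear dynamics, noise covariances, and observation operator are assumptions.

```python
import numpy as np

def ekf_step(x, P, y, f, H, Q, R, eps=1e-6):
    """One extended Kalman filter step for a nonlinear model x_{k+1} = f(x_k) + noise
    with linear observations y = H x + noise. The Jacobian of f is approximated
    by forward finite differences."""
    n = x.size
    # Forecast
    x_f = f(x)
    F = np.empty((n, n))
    for j in range(n):                       # finite-difference Jacobian of f at x
        dx = np.zeros(n); dx[j] = eps
        F[:, j] = (f(x + dx) - x_f) / eps
    P_f = F @ P @ F.T + Q
    # Analysis
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(n) - K @ H) @ P_f
    return x_a, P_a

# Toy usage: a 50-node discretized nonlinear advection-like model, observed at 10 nodes.
rng = np.random.default_rng(3)
n = 50
f = lambda u: u - 0.1 * u * (np.roll(u, -1) - np.roll(u, 1))   # stand-in nonlinear PDE step
x, P = rng.normal(size=n), np.eye(n)
H = np.eye(n)[::5]
y = rng.normal(size=10)
x, P = ekf_step(x, P, y, f, H, 0.01 * np.eye(n), 0.1 * np.eye(10))
```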


2020, Vol. 13(8), pp. 3607-3625
Authors: Yongjun Zheng, Clément Albergel, Simon Munier, Bertrand Bonan, Jean-Christophe Calvet

Abstract. The high computational cost and time-consuming IO (input/output) are major issues in offline ensemble-based high-dimensional data assimilation systems. Bearing these in mind, this study proposes a sophisticated dynamically running job scheme as well as an innovative parallel IO algorithm to reduce the time to solution of an offline framework for high-dimensional ensemble Kalman filters. The dynamically running job scheme runs as many tasks as possible within a single job to reduce the queuing time and minimize the overhead of starting and/or ending a job. The parallel IO algorithm reads or writes non-overlapping segments of multiple files with an identical structure to reduce the IO times by minimizing IO contention and maximizing the overlap of the MPI (Message Passing Interface) communications with the IO operations. Results from sensitivity experiments show that the proposed parallel IO algorithm significantly reduces the IO times and scales very well. Based on these two techniques, the offline and online modes of ensemble Kalman filters are built on PDAF (Parallel Data Assimilation Framework) to comprehensively assess their efficiencies. Comparisons between the offline and online modes show that, with the proposed parallel IO algorithm, the IO time accounts for only a small fraction of the total time. On a lightly loaded supercomputer, such as in an operational context, the queuing time might be less than the running time, and the offline mode can be nearly as fast as, if not faster than, the online mode in terms of time to solution. On a heavily loaded supercomputer, however, the queuing time is dominant and several times larger than the running time, so the offline mode is substantially faster than the online mode in terms of time to solution, especially for large-scale assimilation problems. From this point of view, results suggest that an offline ensemble Kalman filter with an efficient implementation and a high-performance parallel file system should be preferred over its online counterpart for intermittent data assimilation in many situations.
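A minimal sketch of the non-overlapping-segment idea (not the paper's implementation, which is built on PDAF and overlaps MPI communication with IO): each MPI rank computes a contiguous slice of the state vector and reads only that byte range from every identically structured member file, so no two ranks contend for the same segment. The file names, record layout, and dtype below are assumptions; mpi4py is used for illustration.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

n_state, n_members = 100_000, 8                       # toy sizes (assumed)
files = [f"member_{k:03d}.bin" for k in range(n_members)]

# For the demo, rank 0 writes identically structured binary files, then all ranks wait.
if rank == 0:
    for fname in files:
        np.random.default_rng(0).normal(size=n_state).astype(np.float64).tofile(fname)
comm.Barrier()

# Partition the state vector into contiguous, non-overlapping segments, one per rank.
counts = np.full(nprocs, n_state // nprocs)
counts[: n_state % nprocs] += 1
starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
lo, count = int(starts[rank]), int(counts[rank])

# Each rank reads only its own byte range from every member file, so reads never
# compete for the same segment and can proceed concurrently.
segment = np.empty((n_members, count))
for k, fname in enumerate(files):
    with open(fname, "rb") as fh:
        fh.seek(lo * 8)                               # 8 bytes per float64 value
        segment[k] = np.fromfile(fh, dtype=np.float64, count=count)

print(f"rank {rank} read variables [{lo}, {lo + count}) from {n_members} member files")
```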


Atmosphere, 2020, Vol. 11(4), pp. 338
Authors: Pinqiang Wang, Mengbin Zhu, Yan Chen, Weimin Zhang

Motivated by the great success of four-dimensional variational (4D-Var) data assimilation methods and the advantages of ensemble methods (e.g., ensemble Kalman filters and particle filters) in numerical weather prediction systems, we introduce the implicit equal-weights particle filter scheme into the weak-constraint 4D-Var framework, which avoids filter degeneracy through implicit sampling in high-dimensional situations. The new variational particle smoother (varPS) method has been tested and explored using the Lorenz96 model with dimensions Nx = 40, 100, 250, and 400. The results show that, by construction, the new varPS method does not suffer from the curse of dimensionality, and its root mean square error (RMSE) is comparable with that of the ensemble 4D-Var method. As a combination of the implicit equal-weights particle filter and weak-constraint 4D-Var, the new method improves the RMSE compared with the implicit equal-weights particle filter and LETKF (local ensemble transform Kalman filter) methods and enlarges the ensemble spread compared with the ensemble 4D-Var scheme. To overcome the difficulty of applying the implicit equal-weights particle filter in real geophysical applications, the posterior error covariance matrix is estimated using a limited ensemble and can be calculated in parallel. In general, the new varPS performs slightly better in ensemble quality (the balance between the RMSE and ensemble spread) than ensemble 4D-Var and has the potential to be applied to real geophysical systems.
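The filter degeneracy mentioned in the abstract is easy to reproduce: with standard importance weights, the effective sample size of a particle filter collapses as the number of independent observed dimensions grows, which is precisely what equal-weights constructions are designed to avoid. The sketch below, with assumed Gaussian toy densities, illustrates that collapse; it is not the varPS algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n_particles = 100

def effective_sample_size(log_w):
    """ESS = 1 / sum(w_i^2) for normalized weights w_i (computed from log-weights)."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

# Standard importance weighting with Gaussian observation errors: as the number of
# independent observed dimensions Nx grows, one particle takes nearly all the weight.
for n_x in (10, 40, 100, 400):
    particles = rng.normal(size=(n_particles, n_x))          # prior samples
    y = rng.normal(size=n_x)                                  # synthetic observation
    log_w = -0.5 * np.sum((particles - y) ** 2, axis=1)       # Gaussian log-likelihood, R = I
    print(f"Nx = {n_x:4d}  ->  effective sample size = {effective_sample_size(log_w):6.1f}")
```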


Reservoir modelling and production forecasting can provide vital inputs to the efficient management of petroleum reservoirs. Since reservoirs are highly heterogeneous and nonlinear in nature, it is often difficult to obtain accurate estimates of the spatial distribution of reservoir properties and of the corresponding production profiles. An accurate reservoir model, once built, can lead to efficient management of the reservoir. This paper describes the mathematical modelling of oil reservoirs along with various optimization techniques applicable to history matching and production forecasting. Gradient-based and non-gradient-based optimization techniques, viz. Simulated Annealing (SA), Scatter Search (SS), the Neighborhood Algorithm (NA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Ensemble Kalman Filters (EnKF), and Genetic Algorithms (GA), and their application to reservoir production history matching and performance forecasting are presented. Recent advancements and variants of these techniques applied for this purpose are also presented.
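To make the history-matching formulation concrete, a minimal sketch: history matching can be posed as minimizing the mismatch between observed and simulated production, here with simulated annealing, one of the optimizers listed above. The one-parameter decline-curve "simulator", the cooling schedule, and all numbers are illustrative assumptions, not a method from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_production(perm, t):
    """Toy forward model: production rate from a single effective permeability
    parameter via an exponential decline curve (purely illustrative)."""
    return 100.0 * perm * np.exp(-0.05 * perm * t)

# Synthetic "observed" history generated with a true permeability of 1.5.
t = np.arange(0, 60.0)
observed = simulate_production(1.5, t) + rng.normal(scale=1.0, size=t.size)

def misfit(perm):
    return np.sum((simulate_production(perm, t) - observed) ** 2)

# Simulated annealing: accept uphill moves with a probability that shrinks as the
# temperature cools, which helps the search escape local minima of the misfit surface.
perm, best = 0.5, 0.5
temperature = 100.0
for it in range(2000):
    candidate = perm + rng.normal(scale=0.05)
    if candidate > 0:
        delta = misfit(candidate) - misfit(perm)
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            perm = candidate
            if misfit(perm) < misfit(best):
                best = perm
    temperature *= 0.995                     # geometric cooling schedule

print(f"history-matched permeability ≈ {best:.3f} (true value 1.5)")
```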

