The Maximum Likelihood Ensemble Filter with State Space Localization

Author(s):  
Milija Zupanski

Abstract A new method for ensemble data assimilation that incorporates state-space covariance localization, global numerical optimization, and implied Bayesian inference is presented. The method is referred to as the MLEF with State Space Localization (MLEF-SSL) because of its similarity to the Maximum Likelihood Ensemble Filter (MLEF). One of the novelties introduced in MLEF-SSL is the calculation of a reduced-rank localized forecast error covariance using random projection. Hessian preconditioning is accomplished via Cholesky decomposition of the Hessian matrix, accompanied by solving triangular systems of equations instead of directly inverting matrices. For the ensemble update, the MLEF-SSL system employs resampling of posterior perturbations. MLEF-SSL was applied to the Lorenz model II and compared to an ensemble Kalman filter with state-space localization and to the MLEF with observation-space localization. The observations include linear and nonlinear observation operators, each applied to integrated and point observations. Results indicate improved performance of MLEF-SSL, particularly in the assimilation of integrated nonlinear observations. Resampling of posterior perturbations for the ensemble update also shows satisfactory performance. Additional experiments examined the sensitivity of the method to the rank of the random matrix and compared it to truncated eigenvectors of the localization matrix. The two methods are comparable in application to the low-dimensional Lorenz model, except that the new method outperforms the truncated-eigenvector method in the case of severe rank reduction. The random basis method is simple to implement and may be more promising for realistic high-dimensional applications.
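The reduced-rank localized covariance via random projection that the abstract mentions can be sketched with a standard randomized range finder. The sketch below is a minimal NumPy illustration under stated assumptions: the problem sizes, the Gaussian taper standing in for the localization matrix, and the specific projection scheme are all illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: state dimension n, ensemble size N, target rank k (all illustrative)
n, N, k = 200, 10, 20

# Ensemble forecast perturbations (columns); sample covariance is X X^T / (N - 1)
X = rng.standard_normal((n, N))

# Simple Gaussian taper standing in for a localization matrix
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.exp(-((dist / 20.0) ** 2))

# Localized covariance (Schur product); formed explicitly only at this toy size
B_loc = L * (X @ X.T) / (N - 1)

# Randomized range finder: project onto k random directions, orthonormalize
Omega = rng.standard_normal((n, k))
Q, _ = np.linalg.qr(B_loc @ Omega)

# Reduced-rank square-root factor: B_loc ~ Z Z^T with Z = Q C^{1/2}, C = Q^T B_loc Q
C = Q.T @ B_loc @ Q
w, V = np.linalg.eigh(C)
Z = Q @ (V * np.sqrt(np.clip(w, 0.0, None)))

err = np.linalg.norm(B_loc - Z @ Z.T) / np.linalg.norm(B_loc)
print(f"relative approximation error at rank {k}: {err:.3f}")
```

The ensemble filter then works with the factor `Z` (n x k) in place of the full localized covariance, which is what makes the approach attractive at high dimension.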

2005 · Vol. 133 (6) · pp. 1710–1726
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., the analysis error covariance). In addition to the common use of ensembles in the calculation of the forecast error covariance, the ensembles in the MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Because of the superior Hessian preconditioning, two to three iterative minimization steps are sufficient. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of the MLEF is comparable to that of existing ensemble Kalman filter algorithms, and the method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate stable performance of the MLEF algorithm, and additional experiments suggest an amplified benefit of targeted observations in ensemble data assimilation.
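The ensemble-space minimization the abstract describes can be illustrated with a small sketch. The toy problem, all sizes, and the error statistics below are illustrative assumptions; only the structure follows the description: a quadratic observation operator, Gauss–Newton-type iterations in the span of the ensemble with Hessian approximated by I + ZᵀZ, and Z built by differencing the observation operator along ensemble directions rather than using a tangent-linear model.

```python
import numpy as np

rng = np.random.default_rng(1)

n, N, m = 50, 8, 25                    # state dim, ensemble size, number of obs
x_true = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
x_b = x_true + 0.2 * rng.standard_normal(n)       # background state
X_f = 0.2 * rng.standard_normal((n, N))           # square-root forecast perturbations

obs_idx = np.linspace(0, n - 1, m).astype(int)
r_std = 0.5                                        # obs error std, R = r_std^2 I

def h(x):
    # Quadratic observation operator, as in the paper's KdV-Burgers experiments
    return x[obs_idx] ** 2

y = h(x_true) + r_std * rng.standard_normal(m)

def cost(w):
    # Cost in ensemble space: x = x_b + X_f w
    d = (y - h(x_b + X_f @ w)) / r_std
    return 0.5 * (w @ w) + 0.5 * (d @ d)

# Gauss-Newton-type iterations with Hessian approx. I + Z^T Z;
# Z is built by differencing h along ensemble directions
w = np.zeros(N)
for _ in range(3):                      # two to three iterations usually suffice
    x = x_b + X_f @ w
    Z = np.stack([(h(x + X_f[:, j]) - h(x)) / r_std for j in range(N)], axis=1)
    grad = w - Z.T @ ((y - h(x)) / r_std)
    C = np.linalg.cholesky(np.eye(N) + Z.T @ Z)
    # Solve C C^T s = grad (a generic solve is used here for brevity)
    w = w - np.linalg.solve(C.T, np.linalg.solve(C, grad))

x_a = x_b + X_f @ w
print(f"cost: background {cost(np.zeros(N)):.2f} -> analysis {cost(w):.2f}")
```

Because the control variable `w` lives in the N-dimensional ensemble space, each iteration costs only a small dense factorization regardless of the state dimension.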


2008 · Vol. 5 (1) · pp. 11–16
Author(s):  
Choon Ki Ahn ◽  
Pyung Soo Kim

2020
Author(s):  
Manjula Perera ◽  
Ravindra Lokupitiya ◽  
Scott Denning ◽  
Prabir K. Patra ◽  
Dusanka Zupanski ◽  
...  

J · 2019 · Vol. 2 (4) · pp. 508–560
Author(s):  
Riccardo Corradini

Normally, econometric models that forecast the Italian Industrial Production Index do not exploit information already available at time t + 1 for its main industry groupings. The new strategy proposed here uses state-space models for the groupings and aggregates their estimates to obtain improved results. The performance of the disaggregated models is compared with a popular benchmark model; a univariate model tailored to the whole index that accounts for persistent, not formally registered holidays; and a vector autoregressive moving average model exploiting all information published on the web for the main industry groupings. Tests for superior predictive ability confirm the supremacy of the aggregated forecasts over a three-step horizon, using absolute forecast error and quadratic forecast error as loss functions. The datasets are available online.
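The aggregation strategy can be sketched on synthetic data. Everything below is a hypothetical stand-in (component weights, noise levels, and a simple local-level Kalman filter in place of the paper's actual state-space models); it only illustrates the two competing strategies, forecasting the aggregate directly versus aggregating component forecasts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for main industry groupings: three noisy local-level series
# whose weighted sum plays the role of the aggregate index
T, weights = 200, np.array([0.5, 0.3, 0.2])
levels = np.cumsum(0.1 * rng.standard_normal((T, 3)), axis=0) + 10.0
series = levels + 0.3 * rng.standard_normal((T, 3))
index = series @ weights

def local_level_forecasts(y, q=0.1**2, r=0.3**2):
    """One-step-ahead forecasts from a local-level state-space model (Kalman filter)."""
    a, p = y[0], 1.0
    preds = np.empty(len(y))
    for t, obs in enumerate(y):
        preds[t] = a                  # forecast made before seeing y[t]
        p = p + q                     # predict step
        k = p / (p + r)               # Kalman gain
        a = a + k * (obs - a)         # update step
        p = (1.0 - k) * p
    return preds

# Strategy A: model the aggregate index directly
direct = local_level_forecasts(index)
# Strategy B: model each grouping separately and aggregate the forecasts
aggregated = np.column_stack(
    [local_level_forecasts(series[:, i]) for i in range(3)]
) @ weights

mae = lambda p: np.mean(np.abs(index[1:] - p[1:]))
print(f"direct MAE: {mae(direct):.4f}  aggregated MAE: {mae(aggregated):.4f}")
```

On real data the paper's tests of superior predictive ability are what decide between the two strategies; this toy merely shows how both sets of forecasts are produced and scored with the same loss function.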


Author(s):  
Tobias Leibner ◽  
Mario Ohlberger

In this contribution we derive and analyze a new numerical method for kinetic equations based on a variable transformation of the moment approximation. Classical minimum-entropy moment closures are a class of reduced models for kinetic equations that conserve many of the fundamental physical properties of solutions. However, their practical use is limited by their high computational cost, as an optimization problem has to be solved for every cell in the space-time grid. In addition, the implementation of numerical solvers for these models is hampered by the fact that the optimization problems are only well defined if the moment vectors stay within the realizable set. For the same reason, further reducing these models by, e.g., reduced-basis methods is not a simple task. Our new method overcomes these disadvantages of classical approaches. The transformation is performed on the semi-discretized level, which makes it applicable to a wide range of kinetic schemes and replaces the nonlinear optimization problems with the inversion of a positive-definite Hessian matrix. As a result, the new scheme avoids the realizability-related problems. Moreover, a discrete entropy law can be enforced by modifying the time-stepping scheme. Our numerical experiments demonstrate that the new method is often several times faster than the standard optimization-based scheme.
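The core idea, working in the multiplier (entropy) variables so that a moment update becomes a single SPD linear solve, can be sketched on a toy closure. The grid, basis, and numbers below are illustrative assumptions, not the authors' scheme: a discrete velocity grid with monomial basis m(v) = (1, v) and a Maxwell-Boltzmann-type exponential ansatz.

```python
import numpy as np

# Discrete velocity grid and monomial basis m(v) = (1, v): a toy stand-in for a
# minimum-entropy moment closure with an exponential (Maxwell-Boltzmann) ansatz
v = np.linspace(-5.0, 5.0, 400)
dv = v[1] - v[0]
M = np.vstack([np.ones_like(v), v])     # rows: basis functions on the grid

def moments(alpha):
    # u(alpha) = integral of m(v) exp(alpha . m(v)) dv  (moments of the ansatz)
    f = np.exp(alpha @ M)
    return (M * f).sum(axis=1) * dv

def hessian(alpha):
    # H(alpha) = integral of m m^T exp(alpha . m) dv: symmetric positive definite
    f = np.exp(alpha @ M)
    return (M * f) @ M.T * dv

# In the transformed scheme the multipliers alpha are the unknowns: a semi-discrete
# moment increment du is realized by one SPD linear solve, H dalpha = du,
# instead of solving a constrained optimization problem in every cell
alpha = np.array([0.0, 0.1])
u = moments(alpha)
du = np.array([0.05, -0.02])            # some semi-discrete moment increment
dalpha = np.linalg.solve(hessian(alpha), du)

u_new = moments(alpha + dalpha)         # agrees with u + du to first order
print(u_new - (u + du))
```

Since the distribution is always the exponential ansatz evaluated at the current `alpha`, it stays positive by construction, which is the sense in which the transformed variables sidestep realizability constraints on the moment vector.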

