Maximum Likelihood Ensemble Filter: Theoretical Aspects

2005 ◽  
Vol 133 (6) ◽  
pp. 1710-1726 ◽  
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in calculation of the forecast error covariance, the ensembles in MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. Owing to superior Hessian preconditioning, 2–3 iterative minimization steps suffice. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost for the minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of MLEF is comparable to that of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance.
The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
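The ensemble-subspace minimization described above can be sketched in a few lines. Everything below (sizes, noise levels, the generic optimizer) is an illustrative assumption rather than the paper's KdV–Burgers configuration, but the quadratic observation operator mirrors the one tested in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of an MLEF-style analysis step: minimize a nonlinear cost
# function over weights w of the ensemble perturbations, x = xb + Z w.
rng = np.random.default_rng(0)
n, m = 10, 5                              # state size, ensemble size
xb = rng.standard_normal(n)               # background state
Z = 0.3 * rng.standard_normal((n, m))     # ensemble square root of Pf

def h(x):                                 # quadratic observation operator,
    return x ** 2                         # as in the paper's experiments

y = h(xb) + 0.1 * rng.standard_normal(n)  # synthetic observations
r_inv = 1.0 / 0.1 ** 2                    # diagonal R^{-1}

def cost(w):                              # control-space cost function
    x = xb + Z @ w
    d = y - h(x)
    return 0.5 * (w @ w) + 0.5 * r_inv * (d @ d)

# A generic quasi-Newton minimizer stands in for the paper's
# Hessian-preconditioned iterations.
res = minimize(cost, np.zeros(m))
xa = xb + Z @ res.x                       # analysis state
```

Because the minimization runs in the m-dimensional ensemble space rather than the n-dimensional state space, the nonlinear operator h is evaluated directly, with no tangent linear approximation.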

2021 ◽  
Author(s):  
Saori Nakashita ◽  
Takeshi Enomoto

<p>Satellite observations have been a growing source for data assimilation in operational numerical weather prediction. Remotely sensed observations require a nonlinear observation operator. Most ensemble-based data assimilation methods are formulated for tangent linear observation operators, which are often substituted by nonlinear observation operators. By contrast, the Maximum Likelihood Ensemble Filter (MLEF), which has features of both variational and ensemble approaches, is formulated for linear and nonlinear operators in an identical form and can use non-differentiable observation operators.</p><p>In this study, we investigate the performance of MLEF and the Ensemble Transform Kalman Filter (ETKF) with tangent linear and nonlinear observation operators in assimilation experiments of nonlinear observations with a one-dimensional Burgers model.</p><p>The ETKF analysis with the nonlinear operator diverges when the observation error is small, owing to unrealistically large increments associated with the high-order observation terms. The filter divergence can be avoided by localizing the extent of observation influence, but the analysis error remains larger than that of MLEF. In contrast, MLEF is found to be more stable and accurate without localization, owing to the minimization of the cost function. Notably, MLEF can produce an accurate analysis even without covariance inflation, eliminating the labor of parameter tuning. In addition, the smaller the observation error or the stronger the observation nonlinearity, the more effectively MLEF with the nonlinear operators assimilates observations compared with MLEF with the tangent linear operators. This result indicates that MLEF can incorporate nonlinear effects and evaluate the observation term in the cost function appropriately. These encouraging results imply that MLEF is suitable for the assimilation of satellite observations with high nonlinearity.</p>
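A minimal ETKF mean update of the kind compared here, with the nonlinear operator applied member by member in place of its tangent linear, can be sketched as follows (sizes, error levels, and the random data are illustrative assumptions, not the Burgers-model experiments):

```python
import numpy as np

# ETKF mean update with a nonlinear observation operator evaluated on each
# ensemble member -- the common substitution for the tangent linear operator.
rng = np.random.default_rng(1)
n, m = 8, 6                                   # state size, ensemble size
X = rng.standard_normal((n, m))               # background ensemble (columns)
xbar = X.mean(axis=1)
Xb = X - xbar[:, None]                        # state-space perturbations

def h(x):                                     # quadratic (nonlinear) operator
    return x ** 2

Y = h(X)                                      # members mapped to obs space
ybar = Y.mean(axis=1)
Yb = Y - ybar[:, None]                        # obs-space perturbations
R_inv = np.eye(n) / 0.5 ** 2
y = h(xbar) + 0.5 * rng.standard_normal(n)    # synthetic observations
d = y - ybar                                  # innovation

# Analysis weights minimize the linearized ensemble-space cost:
# (m-1)/2 w'w + 1/2 (d - Yb w)' R^{-1} (d - Yb w)
Pa = np.linalg.inv((m - 1) * np.eye(m) + Yb.T @ R_inv @ Yb)
wa = Pa @ Yb.T @ R_inv @ d
xa = xbar + Xb @ wa                           # analysis mean
```

The high-order terms the abstract warns about enter through Yb: for small observation error (large R_inv), they can produce the unrealistically large increments that cause the divergence discussed above.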


2020 ◽  
Author(s):  
Milija Zupanski

<p>High-dimensional ensemble data assimilation applications require error covariance localization in order to address the problem of insufficient degrees of freedom, typically accomplished using observation-space covariance localization. However, this creates a challenge for vertically integrated observations, such as satellite radiances, aerosol optical depth, etc., since an exact observation location in the vertical does not exist. For nonlinear problems, there is an implied inconsistency in iterative minimization due to using observation-space localization, which effectively prevents finding the optimal global minimizing solution. Using state-space localization, however, in principle resolves both issues associated with observation-space localization.</p><p>In this work we present a new nonlinear ensemble data assimilation method that employs covariance localization in state space and finds an optimal analysis solution. The new method resembles “modified ensembles” in the sense that the ensemble size is increased in the analysis, but it differs in the methodology used to create ensemble modifications, calculate the analysis error covariance, and define the initial ensemble perturbations for data assimilation cycling. From a practical point of view, the new method is considerably more efficient and potentially applicable to realistic high-dimensional data assimilation problems. A distinct characteristic of the new algorithm is that the localized error covariance and minimization are global, i.e., explicitly defined over all state points. The presentation will focus on examining feasible options for estimating the analysis error covariance and for defining the initial ensemble perturbations.</p>
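State-space localization of the kind advocated above is commonly realized as a Schur (elementwise) product of the ensemble covariance with a distance-based taper. The sketch below uses a Gaussian taper as a simple stand-in for the usual Gaspari–Cohn function, with illustrative sizes:

```python
import numpy as np

# State-space covariance localization by Schur product with a distance taper.
rng = np.random.default_rng(2)
n, m = 40, 10                               # state size, ensemble size
X = rng.standard_normal((n, m))
Xp = X - X.mean(axis=1, keepdims=True)
Pens = Xp @ Xp.T / (m - 1)                  # raw ensemble covariance (rank <= m-1)

# Distance-based taper between state points (Gaussian stand-in for Gaspari-Cohn)
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
rho = np.exp(-0.5 * (dist / 3.0) ** 2)      # illustrative localization length 3
Ploc = rho * Pens                           # Schur (elementwise) product

# The taper has unit diagonal, so localization preserves the variances while
# raising the rank, addressing the insufficient degrees of freedom.
assert np.allclose(np.diag(Ploc), np.diag(Pens))
```

Because the taper is defined between state points, no observation location is needed, which is the advantage for vertically integrated observations noted above.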


2015 ◽  
Vol 143 (10) ◽  
pp. 3925-3930 ◽  
Author(s):  
Benjamin Ménétrier ◽  
Thomas Auligné

Abstract The control variable transform (CVT) is a keystone of variational data assimilation. In publications using such a technique, the background term of the transformed cost function is defined as a canonical inner product of the transformed control variable with itself. However, it is shown in this paper that this practical definition of the cost function is not correct if the CVT uses a square root of the background error covariance matrix that is not square. Fortunately, it is then shown that there is a manifold of the control space for which this flaw has no impact, and that most minimizers used in practice precisely work in this manifold. It is also shown that both the correct and the practical transformed cost functions have the same minimum. This explains more rigorously why the CVT works in practice. The case of a singular background error covariance matrix is finally detailed, showing that the practical cost function still reaches the best linear unbiased estimate (BLUE).
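A small numerical sketch of this setting, with an illustrative nonsquare square root U of B and a linear observation operator, shows the practical transformed cost function reaching the BLUE:

```python
import numpy as np

# CVT with a nonsquare square root U (B = U U^T): minimizing the practical
# cost J(v) = 1/2 v'v + 1/2 (y - H(xb + Uv))' R^{-1} (...) reaches the BLUE.
rng = np.random.default_rng(3)
n, k, p = 4, 7, 3                          # model dim, larger control dim, obs dim
U = rng.standard_normal((n, k))            # nonsquare square root of B
B = U @ U.T
H = rng.standard_normal((p, n))            # linear observation operator
R = 0.5 * np.eye(p)
xb = rng.standard_normal(n)
y = rng.standard_normal(p)
d = y - H @ xb                             # innovation

# BLUE in model space: xa = xb + B H' (H B H' + R)^{-1} d
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa_blue = xb + K @ d

# Practical cost is quadratic, so solve its normal equations:
# (I + U' H' R^{-1} H U) v = U' H' R^{-1} d
R_inv = np.linalg.inv(R)
A = np.eye(k) + U.T @ H.T @ R_inv @ H @ U
v = np.linalg.solve(A, U.T @ H.T @ R_inv @ d)
xa_cvt = xb + U @ v

assert np.allclose(xa_cvt, xa_blue)        # same minimum in model space
```

The agreement follows from the push-through identity U(I + UᵀSU)⁻¹Uᵀ = (I + UUᵀS)⁻¹UUᵀ with S = HᵀR⁻¹H, which is why the canonical-inner-product background term works despite the flaw the paper identifies.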


2015 ◽  
Vol 143 (9) ◽  
pp. 3804-3822 ◽  
Author(s):  
Zhijin Li ◽  
James C. McWilliams ◽  
Kayo Ide ◽  
John D. Farrara

Abstract A multiscale data assimilation (MS-DA) scheme is formulated for fine-resolution models. A decomposition of the cost function is derived for a set of distinct spatial scales. The decomposed cost function allows for the background error covariance to be estimated separately for the distinct spatial scales, and multi-decorrelation scales to be explicitly incorporated in the background error covariance. MS-DA minimizes the partitioned cost functions sequentially from large to small scales. The multi-decorrelation length scale background error covariance enhances the spreading of sparse observations and prevents fine structures in high-resolution observations from being overly smoothed. The decomposition of the cost function also provides an avenue for mitigating the effects of scale aliasing and representativeness errors that inherently exist in a multiscale system, thus further improving the effectiveness of the assimilation of high-resolution observations. A set of one-dimensional experiments is performed to examine the properties of the MS-DA scheme. Emphasis is placed on the assimilation of patchy high-resolution observations representing radar and satellite measurements, alongside sparse observations representing those from conventional in situ platforms. The results illustrate how MS-DA improves the effectiveness of the assimilation of both these types of observations simultaneously.
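The scale decomposition at the heart of MS-DA can be illustrated with a simple low-pass/residual split of a one-dimensional field; the running-mean filter and scales below are illustrative assumptions, not the paper's decomposition:

```python
import numpy as np

# Scale decomposition sketch: split a 1D field into a large-scale part
# (low-pass filter) and a small-scale residual, each of which can carry its
# own background error covariance with its own decorrelation length.
rng = np.random.default_rng(4)
n = 64
x = np.cumsum(rng.standard_normal(n))      # a red-spectrum 1D field

def smooth(f, width=9):                    # simple running-mean low-pass filter
    kernel = np.ones(width) / width
    return np.convolve(f, kernel, mode="same")

x_large = smooth(x)                        # large-scale component
x_small = x - x_large                      # small-scale residual

# The two components partition the field exactly, so the cost function can be
# minimized sequentially from large to small scales as in MS-DA.
assert np.allclose(x_large + x_small, x)
```

In the sequential minimization, sparse conventional observations would mainly constrain x_large (long decorrelation scale), while patchy high-resolution observations constrain x_small without being overly smoothed.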


2018 ◽  
Vol 146 (11) ◽  
pp. 3605-3622 ◽  
Author(s):  
Elizabeth A. Satterfield ◽  
Daniel Hodyss ◽  
David D. Kuhl ◽  
Craig H. Bishop

Abstract Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble-derived covariance matrix is equal to the true error covariance matrix. Here, we describe a simple and intuitively compelling method to fit calibration functions of the ensemble sample variance to the mean of the distribution of true error variances, given an ensemble estimate. We demonstrate that the use of such calibration functions is consistent with theory showing that, when sampling error in the prior variance estimate is considered, the gain that minimizes the posterior error variance uses the expected true prior variance, given an ensemble sample variance. Once the calibration function has been fitted, it can be combined with ensemble-based and climatologically based error correlation information to obtain a generalized hybrid error covariance model. When the calibration function is chosen to be a linear function of the ensemble variance, the generalized hybrid error covariance model is the widely used linear hybrid consisting of a weighted sum of a climatological and an ensemble-based forecast error covariance matrix. However, when the calibration function is chosen to be, say, a cubic function of the ensemble sample variance, the generalized hybrid error covariance model is a nonlinear function of the ensemble estimate. We consider idealized univariate data assimilation and multivariate cycling ensemble data assimilation to demonstrate that the generalized hybrid error covariance model closely approximates the optimal weights found through computationally expensive tuning in the linear case and, in the nonlinear case, outperforms any plausible linear model.
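The calibration idea can be sketched on synthetic data: draw true error variances, simulate ensemble sample variances, and fit, say, a cubic calibration function by least squares. The distributions and sizes below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Fit a calibration function mapping ensemble sample variance to the mean of
# the distribution of true error variances, given the ensemble estimate.
rng = np.random.default_rng(5)
N, m = 20000, 10                           # cases, ensemble size

true_var = rng.gamma(2.0, 0.5, size=N)     # true error variances (illustrative prior)
samples = rng.standard_normal((N, m)) * np.sqrt(true_var)[:, None]
s2 = samples.var(axis=1, ddof=1)           # noisy ensemble sample variances

# Cubic calibration function fitted by least squares (the nonlinear case
# discussed in the abstract; a degree-1 fit would recover the linear hybrid).
coeffs = np.polyfit(s2, true_var, deg=3)
calibrated = np.polyval(coeffs, s2)

# On the training data, the fitted calibration cannot do worse than using the
# raw sample variance, since the identity map lies in the cubic family.
mse_raw = np.mean((s2 - true_var) ** 2)
mse_cal = np.mean((calibrated - true_var) ** 2)
assert mse_cal <= mse_raw + 1e-9
```

The fitted curve shrinks extreme sample variances toward the mean, reflecting that sampling error makes the raw ensemble variance an unreliable estimate of the true error variance.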


2015 ◽  
Vol 143 (12) ◽  
pp. 5073-5090 ◽  
Author(s):  
Craig H. Bishop ◽  
Bo Huang ◽  
Xuguang Wang

Abstract A consistent hybrid ensemble filter (CHEF) for using hybrid forecast error covariance matrices that linearly combine aspects of both climatological and flow-dependent matrices within a nonvariational ensemble data assimilation scheme is described. The CHEF accommodates the ensemble data assimilation enhancements of (i) model space ensemble covariance localization for satellite data assimilation and (ii) Hodyss’s method for improving accuracy using ensemble skewness. Like the local ensemble transform Kalman filter (LETKF), the CHEF is computationally scalable because it updates local patches of the atmosphere independently of others. Like the sequential ensemble Kalman filter (EnKF), it serially assimilates batches of observations and uses perturbed observations to create ensembles of analyses. It differs from the deterministic (no perturbed observations) ensemble square root filter (ESRF) and the EnKF in that (i) its analysis correction is unaffected by the order in which observations are assimilated even when localization is required, (ii) it uses accurate high-rank solutions for the posterior error covariance matrix to serially assimilate observations, and (iii) it accommodates high-rank hybrid error covariance models. Experiments were performed to assess the effect on CHEF and ESRF analysis accuracy of these differences. In the case where both the CHEF and the ESRF used tuned localized ensemble covariances for the forecast error covariance model, the CHEF’s advantage over the ESRF increased with observational density. In the case where the CHEF used a hybrid error covariance model but the ESRF did not, the CHEF had a substantial advantage for all observational densities.
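The linear hybrid covariance the CHEF accommodates is a weighted sum of climatological and ensemble-based matrices. A toy sketch (identity climatology and an illustrative weight, not the paper's configuration) shows how hybridization restores the rank lost by a small ensemble:

```python
import numpy as np

# Linear hybrid forecast error covariance: weighted sum of a climatological
# matrix and a low-rank, flow-dependent ensemble matrix.
rng = np.random.default_rng(6)
n, m = 12, 5                              # state size, ensemble size
Xp = rng.standard_normal((n, m))
Xp -= Xp.mean(axis=1, keepdims=True)      # centered perturbations
P_ens = Xp @ Xp.T / (m - 1)               # rank <= m-1, flow-dependent
P_clim = np.eye(n)                        # full-rank climatological stand-in
alpha = 0.5                               # illustrative hybrid weight
P_hyb = alpha * P_clim + (1 - alpha) * P_ens

# Hybridization restores the full rank lost by the small ensemble.
assert np.linalg.matrix_rank(P_ens) <= m - 1
assert np.linalg.matrix_rank(P_hyb) == n
```

This high-rank property is one reason the abstract reports a substantial CHEF advantage when the hybrid model is used but the comparison filter retains the purely ensemble-based covariance.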


2017 ◽  
Vol 145 (6) ◽  
pp. 2071-2082 ◽  
Author(s):  
Le Duc ◽  
Kazuo Saito

Abstract In the hybrid variational–ensemble data assimilation schemes preconditioned on the square root U of the background covariance B, U is a linear map from the model space to a higher-dimensional space. Because of the use of the nonsquare matrix U, the transformed cost function still contains the inverse of B. To avoid this inversion, all studies have used the diagonal quadratic form of the background term in practice without any justification. This study has shown that this practical cost function belongs to a class of cost functions that come into play whenever the minimization problem is transformed from the model space to a higher-dimensional space. Each such cost function is associated with a vector in the kernel of U (Ker U), leading to an infinite number of these cost functions, of which the practical cost function corresponds to the zero vector. These cost functions are shown to be the natural extension of the transformed one from the orthogonal complement of Ker U to the full control space. In practice, these cost functions reduce to a practical form whose calculation does not require a predefined vector in Ker U, and they are as valid as the transformed one in the control space. This means the minimization process need not be restricted to any subspace, contrary to the previous studies. This was demonstrated using a real observation data assimilation system. The theory justifies the use of the practical cost function and its variant in the hybrid variational–ensemble data assimilation method.
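The kernel argument can be checked numerically: for a nonsquare square root U, shifting the control vector by any element of Ker U leaves the model-space increment, and hence the observation term of the cost function, unchanged. Sizes below are illustrative:

```python
import numpy as np

# Numerical check of the kernel structure: control vectors differing by an
# element of Ker U map to the same model-space increment under U.
rng = np.random.default_rng(7)
n, k = 3, 6                                 # model dim, larger control dim
U = rng.standard_normal((n, k))             # nonsquare square root of B

# A basis of Ker U from the SVD: right singular vectors beyond the rank of U.
_, _, Vt = np.linalg.svd(U)
kernel_basis = Vt[n:]                       # (k - n) vectors spanning Ker U
assert np.allclose(U @ kernel_basis.T, 0.0)

v = rng.standard_normal(k)                  # an arbitrary control vector
k_vec = kernel_basis.T @ rng.standard_normal(k - n)  # element of Ker U

# Same model-space increment, so the observation term is identical; only the
# background term distinguishes the members of the family of cost functions.
assert np.allclose(U @ (v + k_vec), U @ v)
```

This is why minimization need not be restricted to the orthogonal complement of Ker U: every member of the family reaches the same model-space solution.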


Author(s):  
Victor Shutyaev ◽  
Arthur Vidard ◽  
François-Xavier Le Dimet ◽  
Igor Gejadze

Abstract The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition. The optimal solution (analysis) error arises from the errors in the input data (background and observation errors). Under the Gaussian assumption, the optimal solution error covariance can be constructed using the Hessian of the auxiliary data assimilation problem. The aim of this paper is to study the evolution of model errors via data assimilation. The optimal solution error covariances are derived in the case of an imperfect model and for the weak constraint formulation, in which the model equations determine the cost functional.
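The Hessian construction mentioned above rests on the Gaussian, linear-operator result that the analysis error covariance equals the inverse Hessian of the cost function. A toy check with illustrative matrices, compared against the equivalent Kalman form:

```python
import numpy as np

# Under Gaussian statistics and linear operators, the analysis error
# covariance is the inverse Hessian of the cost function:
#   A = (B^{-1} + H' R^{-1} H)^{-1}
rng = np.random.default_rng(8)
n, p = 5, 3                                # state dim, obs dim
B = np.eye(n)                              # background error covariance
H = rng.standard_normal((p, n))            # linear observation operator
R = 0.25 * np.eye(p)                       # observation error covariance

hessian = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
A = np.linalg.inv(hessian)

# Equivalent Kalman form: A = (I - K H) B with K = B H' (H B H' + R)^{-1}
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
assert np.allclose(A, (np.eye(n) - K @ H) @ B)
```

For the nonlinear, imperfect-model case studied in the paper, the Hessian of the auxiliary problem plays the role of this matrix, and the identity above is the linear limit of that construction.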


Author(s):  
M. Zupanski ◽  
S. J. Fletcher ◽  
I. M. Navon ◽  
B. Uzunoglu ◽  
R. P. Heikes ◽  
...  

2021 ◽  
Vol 25 (3) ◽  
pp. 931-944
Author(s):  
Johann M. Lacerda ◽  
Alexandre A. Emerick ◽  
Adolfo P. Pires
