error covariance
Recently Published Documents

TOTAL DOCUMENTS: 615 (FIVE YEARS: 140)
H-INDEX: 44 (FIVE YEARS: 4)
Abstract. We describe a method for the efficient generation of the covariance operators of a variational data assimilation scheme which is suited to implementation on a massively parallel computer. The elementary components of this scheme are what we call ‘beta filters’, since they are based on the same spatial profiles possessed by the symmetric beta distributions of probability theory. These approximately Gaussian (bell-shaped) polynomials blend smoothly to zero at the ends of finite intervals, which makes them better suited to parallelization than the quasi-Gaussian ‘recursive filters’ currently used in operations at NCEP. These basic elements are further combined, at a hierarchy of spatial scales, into an overall multigrid structure formulated to preserve the self-adjoint attribute that any valid covariance operator must possess. This paper describes the underlying idea of the beta filter and discusses how generalized Helmholtz operators can be enlisted to weight the elementary contributions additively so that the covariance operators may exhibit realistic negative sidelobes, which are not easily obtained through the recursive-filter paradigm. The main focus of the paper is the basic logistics of the multigrid structure by which more general covariance forms are synthesized from the quasi-Gaussian elements. We describe several ideas on how best to organize the computation, which led to a generalization of the structure that performs efficiently on any rectangular arrangement of processing elements. Some simple idealized examples of the application of these ideas are given.
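As a rough illustration of the idea only (not the operational NCEP code; the profile exponent, kernel construction, and function names are illustrative assumptions), a compactly supported, beta-distribution-shaped kernel can be applied as K and then as its adjoint, so that the resulting operator C = K Kᵀ is self-adjoint by construction:

```python
import numpy as np

def beta_kernel(halfwidth, p=3):
    """Compactly supported quasi-Gaussian profile ~ (1 - (x/a)^2)^p on [-a, a].

    Unlike a Gaussian or a recursive filter, it is exactly zero outside the
    interval, which keeps halo exchanges local on a parallel grid.
    """
    x = np.arange(-halfwidth, halfwidth + 1) / float(halfwidth)
    w = np.maximum(1.0 - x**2, 0.0) ** p
    return w / w.sum()

def apply_covariance(field, halfwidth, p=3):
    """Apply C = K K^T with a symmetric kernel, so the operator is self-adjoint."""
    k = beta_kernel(halfwidth, p)
    half = np.convolve(field, k, mode="same")        # K
    return np.convolve(half, k[::-1], mode="same")   # K^T (k is symmetric anyway)

# Toy example: smooth a unit spike into one bell-shaped covariance column.
col = apply_covariance(np.eye(101)[50], halfwidth=10)
```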


Author(s): Sibo Cheng, Mingming Qiu

Abstract. Data assimilation techniques are widely used to predict complex dynamical systems with uncertainties, based on time-series observation data. The modeling of error covariance matrices is an important element of data assimilation algorithms and can considerably impact forecasting accuracy. The estimation of these covariances, which usually relies on empirical assumptions and physical constraints, is often imprecise and computationally expensive, especially for systems of large dimension. In this work, we propose a data-driven approach based on long short-term memory (LSTM) recurrent neural networks (RNNs) to improve both the accuracy and the efficiency of observation covariance specification in data assimilation for dynamical systems. Because it learns the covariance matrix from observed/simulated time-series data, the proposed approach does not require any knowledge of or assumption about the prior error distribution, unlike classical posterior tuning methods. We have compared the novel approach with two state-of-the-art covariance tuning algorithms, namely DI01 and D05, first in a Lorenz dynamical system and then in a 2D shallow-water twin-experiment framework with different covariance parameterizations, using ensemble assimilation. The novel method shows significant advantages in observation covariance specification, assimilation accuracy, and computational efficiency.
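A minimal sketch of the kind of model involved, assuming PyTorch and a diagonal observation-error covariance; the architecture, dimensions, and variable names are illustrative and not taken from the paper:

```python
import torch
import torch.nn as nn

class CovarianceLSTM(nn.Module):
    """Toy sketch: map a window of observations to the log-variances of a
    diagonal observation-error covariance R (hypothetical architecture)."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, obs_dim)    # one log-variance per observed variable

    def forward(self, obs_window):                # obs_window: (batch, time, obs_dim)
        _, (h, _) = self.lstm(obs_window)
        return torch.exp(self.head(h[-1]))        # exp keeps the variances positive

model = CovarianceLSTM(obs_dim=8)
r_diag = model(torch.randn(4, 20, 8))             # (batch, obs_dim) estimated variances
```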


2021, Vol 9 (12), pp. 1461
Author(s): Jose M. Gonzalez-Ondina, Lewis Sampson, Georgy I. Shapiro

Data assimilation methods are an invaluable tool for operational ocean models. These methods are often based on a variational approach and require knowledge of the spatial covariances of the background errors (differences between the numerical model and the true values) and the observation errors (differences between the true and measured values). Since the true values are never known in practice, the error covariance matrices, which contain values of the covariance functions at different locations, are estimated approximately. Several methods have been devised to compute these matrices; one of the most widely used was developed by Hollingsworth and Lönnberg (H-L). This method requires binning (combining) the data points separated by similar distances, computing covariances in each bin, and then finding a best-fit covariance function. While a helpful tool, the H-L method has its limitations. We have developed a new mathematical method for computing the background and observation error covariance functions, and therefore the error covariance matrices. The method uses functional analysis, which makes it possible to overcome some shortcomings of the H-L method, for example the assumption of statistical isotropy. It also eliminates the intermediate steps used in the H-L method, namely binning the innovations (differences between observations and the model) and computing innovation covariances for each bin, before the best-fit curve can be found. We show that the new method works in situations where the standard H-L method experiences difficulties, especially when observations are scarce. It gives a better estimate than the H-L method in a synthetic idealised case where the true covariance function is known. We also demonstrate that in many cases the new method allows the use of a separable-convolution algorithm that increases the computational speed significantly, up to an order of magnitude. The Projection Method (PROM) also allows the computation of 2D and 3D covariance functions in addition to the standard 1D case.
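For contrast with the new approach, the H-L workflow that the paper seeks to eliminate can be sketched as follows (simplified Python assuming statistical isotropy and a Gaussian-shaped fit; the function names and the bin width are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def hl_binned_covariances(locs, innovations, bin_width=50.0):
    """Hollingsworth-Loennberg-style step 1: covariance of innovation pairs
    binned by separation distance."""
    d = innovations - innovations.mean()
    dists, prods = [], []
    for i in range(len(d)):
        for j in range(i + 1, len(d)):
            dists.append(np.linalg.norm(locs[i] - locs[j]))
            prods.append(d[i] * d[j])
    dists, prods = np.array(dists), np.array(prods)
    bins = (dists // bin_width).astype(int)
    centers = np.array([dists[bins == b].mean() for b in np.unique(bins)])
    covs = np.array([prods[bins == b].mean() for b in np.unique(bins)])
    return centers, covs

def fit_background_covariance(centers, covs):
    """Step 2: fit a Gaussian-shaped covariance; its extrapolation to zero
    separation estimates the background-error variance, and the gap to the
    total innovation variance is attributed to observation error."""
    gauss = lambda r, sigma2, L: sigma2 * np.exp(-(r / L) ** 2)
    (sigma2_b, L), _ = curve_fit(gauss, centers, covs, p0=[covs[0], centers[-1] / 2])
    return sigma2_b, L
```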


2021, Vol 73 (1)
Author(s): Jan Saynisch-Wagner, Julien Baerenzung, Aaron Hornschild, Christopher Irrgang, Maik Thomas

Abstract. Satellite-measured tidal magnetic signals are of growing importance. These fields are mainly used to infer Earth’s mantle conductivity, but also to derive changes in the oceanic heat content. We present a new Kalman filter-based method to derive tidal magnetic fields from satellite magnetometers: KALMAG. The method’s advantage is that it provides a precisely estimated posterior error covariance matrix for study. We present the results of a simultaneous estimation of the magnetic signals of 8 major tides from 17 years of Swarm and CHAMP data. For the first time, robustly derived posterior error distributions are reported along with the tidal magnetic fields themselves. The results are compared to other estimates that are either based on numerical forward models or on satellite inversions of the same data. For all comparisons, maximal differences and the corresponding globally averaged RMSE are reported. We found that the inter-product differences are comparable with the KALMAG-based errors only in a global-mean sense. Here, all approaches give values of the same order, e.g., 0.09–0.14 nT for M2. Locally, the KALMAG posterior errors are up to an order of magnitude smaller than the inter-product differences, e.g., 0.12 nT vs. 0.96 nT for M2.
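For reference, the posterior (analysis) error covariance a Kalman filter provides at each step has the generic linear-Gaussian form below; KALMAG's actual formulation is not reproduced here, and the notation is the textbook one:

\[
\mathbf{K}_k = \mathbf{P}^f_k \mathbf{H}^{\mathrm T}\left(\mathbf{H}\mathbf{P}^f_k\mathbf{H}^{\mathrm T} + \mathbf{R}\right)^{-1},\qquad
\mathbf{x}^a_k = \mathbf{x}^f_k + \mathbf{K}_k\left(\mathbf{y}_k - \mathbf{H}\mathbf{x}^f_k\right),\qquad
\mathbf{P}^a_k = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}\right)\mathbf{P}^f_k ,
\]

where the diagonal of the posterior covariance \(\mathbf{P}^a_k\) is what yields local error estimates of the kind reported above.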


Abstract. Recent numerical weather prediction systems have significantly improved medium-range forecasts by implementing hybrid background error covariance, in which climatological (static) and ensemble-based (flow-dependent) error covariances are combined. While the hybrid approach has been investigated mainly in variational systems, this study explores methods for implementing it in the local ensemble transform Kalman filter (LETKF). Following Kretchmer et al. (2015), the present study constructed hybrid background error covariance by adding collections of climatological perturbations to the forecast ensemble. In addition, this study proposes a new localization method that attenuates the ensemble perturbations (Z-localization) instead of inflating the observation error variance (R-localization). A series of experiments with a simplified global atmospheric model revealed that the hybrid LETKF resulted in smaller forecast errors than the LETKF, especially in sparsely observed regions. Owing to the larger ensemble enabled by the hybrid approach, optimal localization length scales for the hybrid LETKF were larger than those for the LETKF. With the LETKF, Z-localization resulted in forecast errors similar to those of R-localization. However, Z-localization has the advantage of allowing different localization scales to be applied to the flow-dependent and the climatological (static) perturbations in the hybrid LETKF; the optimal localization scale for the climatological perturbations was slightly larger than that for the flow-dependent perturbations. This study also proposes an Optimal EigenDecomposition (OED) ETKF formulation to reduce computational costs. The computational expense of the OED ETKF formulation became significantly smaller than that of standard ETKF formulations as the number of climatological perturbations was increased beyond a few hundred.
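A schematic contrast of the two localization strategies, in Python; the Gaussian taper and the scaling factors are illustrative assumptions (operational LETKF implementations commonly use a Gaspari-Cohn function, and the paper's exact Z-localization factor is not reproduced here):

```python
import numpy as np

def taper(dist, length_scale):
    # Illustrative Gaussian localization weight in [0, 1].
    return np.exp(-0.5 * (dist / length_scale) ** 2)

def r_localization(r_var, dist, length_scale):
    """R-localization: inflate observation-error variances with distance,
    so remote observations get negligible weight in the local analysis."""
    return r_var / np.maximum(taper(dist, length_scale), 1e-12)

def z_localization(Z_obs, dist, length_scale):
    """Z-localization: attenuate the observation-space ensemble perturbations
    instead of inflating R; the square root is used here so the implied
    covariance Z Z^T is damped by roughly the taper (the paper's exact
    factor may differ)."""
    return Z_obs * np.sqrt(taper(dist, length_scale))[:, None]

# Z_obs: (n_obs, n_ens) perturbations; dist: (n_obs,) distances to the analysed point.
Z_obs = np.random.randn(200, 40)
dist = np.linspace(0.0, 2000.0, 200)
Z_loc = z_localization(Z_obs, dist, length_scale=700.0)
r_loc = r_localization(np.ones(200), dist, length_scale=700.0)
```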


2021
Author(s): Pascal Marquet, Pauline Martinet, Jean-François Mahfouf, Alina Lavinia Barbu, Benjamin Ménétrier

Abstract. This study introduces two conservative thermodynamic variables (moist-air entropy potential temperature and total water content) into a one-dimensional variational data assimilation system (1D-Var) to demonstrate their benefit for future operational assimilation schemes. The system is assessed using microwave brightness temperatures from a ground-based radiometer installed during the SOFOG3D field campaign dedicated to improving fog forecasts. An underlying objective is to ease the specification of background error covariance matrices, which are currently highly dependent on weather conditions, making optimal retrievals of cloud and thermodynamic properties during fog conditions difficult. Background error covariance matrices for these new conservative variables have thus been computed by an ensemble approach based on the French convective-scale model AROME, for both all-weather and fog conditions. A first result shows that the use of these matrices for the new variables reduces some of the dependence on meteorological conditions (diurnal cycle, presence or absence of clouds) compared to the usual variables (temperature, specific humidity). Two 1D-Var experiments (classical vs. conservative variables) are then evaluated over a full diurnal cycle characterized by a stratus-evolving radiative fog situation, using hourly brightness temperatures. Results show, as expected, that the brightness temperatures analysed by the 1D-Var are much closer to the observed ones than the background values, for both variable choices. This is especially the case for channels sensitive to water vapour and liquid water. On the other hand, the analysis increments in model space (water vapour, liquid water) show significant differences between the two sets of variables.
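For context, a 1D-Var analysis minimizes the standard variational cost function, in which the background error covariance matrix B discussed here enters directly (generic formulation, not specific to the paper's conservative-variable implementation):

\[
J(\mathbf{x}) \;=\; \tfrac12\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac12\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm T}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr),
\]

where \(\mathbf{x}\) is the control vector (here the conservative variables), \(\mathbf{x}_b\) the background profile, \(\mathbf{y}\) the observed brightness temperatures, \(H\) the observation operator, and \(\mathbf{B}\) and \(\mathbf{R}\) the background and observation error covariance matrices.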


Author(s): Winston C Chow

A Kalman filter estimate of the state of a system is merely a random vector that has a normal, also called Gaussian, distribution. Elementary statistics teaches that any Gaussian distribution is completely and uniquely characterized by its mean and covariance (variance if univariate). Such a characterization is required for statistical inference problems involving a Gaussian random vector. The mean and composite covariance of a Kalman filter estimate of a system state are derived here. The derived covariance takes a recursive form and must not be confused with the “error covariance” output of a Kalman filter. Potential applications of the derivation, including geological ones, are described and illustrated with a simple example.
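A minimal sketch of the distinction, in standard Kalman filter notation and assuming no control input (this is only the starting identity, not the paper's derivation): the analysis estimate obeys the exact linear recursion

\[
\hat{\mathbf{x}}_k \;=\; (\mathbf{I}-\mathbf{K}_k\mathbf{H})\,\mathbf{F}\,\hat{\mathbf{x}}_{k-1} \;+\; \mathbf{K}_k\,\mathbf{y}_k ,
\]

so the covariance of \(\hat{\mathbf{x}}_k\) as a random vector must be propagated through this recursion, including the cross-covariance between \(\hat{\mathbf{x}}_{k-1}\) and \(\mathbf{y}_k\); it is therefore a different object from the filter's error covariance \(\mathbf{P}^a_k = \mathrm{Cov}(\mathbf{x}_k - \hat{\mathbf{x}}_k)\).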


2021
Author(s): Eviatar Bach, Michael Ghil

Abstract. We present a simple innovation-based method for estimating the model error covariance in Kalman filters. The method is based on Berry and Sauer (2013), with the simplification resulting from assuming a known observation error covariance. We carry out experiments with a prescribed model error covariance using a Lorenz (1996) model and an ensemble Kalman filter. The prescribed error covariance matrix is recovered with high accuracy.
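Schematically, innovation-based methods of this family rest on the relation between the statistics of the innovations and the error covariances (this is the generic identity, not the paper's specific estimator):

\[
\mathbb{E}\!\left[\mathbf{d}_k\mathbf{d}_k^{\mathrm T}\right] \;\approx\; \mathbf{H}\,\mathbf{P}^f_k\,\mathbf{H}^{\mathrm T} + \mathbf{R},
\qquad \mathbf{d}_k = \mathbf{y}_k - \mathbf{H}\mathbf{x}^f_k ,
\]

so that with \(\mathbf{R}\) known, time-averaged innovation statistics constrain the forecast error covariance \(\mathbf{P}^f\) and hence the model error covariance that feeds it.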

