A new method for computing horizontally anisotropic background error covariance matrices for data assimilation in ocean models.

Author(s):  
Jose M Gonzalez-Ondina ◽  
Lewis Sampson ◽  
Georgy Shapiro

Current operational ocean modelling systems often use variational data assimilation (DA) to improve the skill of ocean predictions by combining the numerical model with observational data. Many modern methods are derivatives of the objective (optimal) interpolation techniques developed by L. S. Gandin in the 1950s, which require computation of the background error covariance matrix (BECM), and much research has been devoted to overcoming the difficulties surrounding its calculation and improving its accuracy. In practice, due to time and memory constraints, the BECM is never fully computed. Instead, a simplified model is used, where the correlation at each point is modelled using a simple function while the variance and length scales are computed using error estimation methods such as Hollingsworth–Lönnberg or the NMC (National Meteorological Centre) method. Usually, the correlation is assumed to be horizontally isotropic, or to have a predefined anisotropy based on latitude. However, observations indicate that horizontal diffusion is sometimes anisotropic, and this anisotropy should be propagated into the BECM. Including these anisotropies is expected to improve the accuracy of the model predictions.

We present a new method for computing the BECM which allows horizontal anisotropic components to be extracted from observational data. Our method, unlike current techniques, is fundamentally multidimensional and can be applied to 2D or 3D sets of un-binned data. It also performs better than other methods when observations are sparse, so there is no penalty when extracting the additional anisotropic components from the data.

Data assimilation tools like NEMOVar use a matrix decomposition of the BECM in order to minimise the cost function. Our method is well suited to this type of decomposition, producing components that can be used directly by NEMOVar.

We have shown the spatial stability of our method for quantifying anisotropy in areas of sparse observations, while also demonstrating the importance of including an anisotropic representation within the background error. Using the coastal regions of the Arabian Sea, it is possible to analyse where improvements to diffusion can be included. Further extensions of this method could lead to a fully anisotropic diffusion operator for the calculation of the BECM in NEMOVar. However, further testing and optimization are needed before this can be implemented in operational assimilation systems.
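
As a rough illustration of the kind of fit involved (the abstract does not give the authors' actual basis functions or inner product, so the anisotropic Gaussian parameterisation below, with per-axis length scales and a rotation angle, is an assumption), sample correlations of innovations can be fitted as follows:

```python
# Hypothetical sketch: fitting a horizontally anisotropic Gaussian
# correlation model to innovation statistics. The parameterisation
# (length scales lx, ly and a rotation angle theta) is an assumption,
# not the authors' actual method.
import numpy as np
from scipy.optimize import least_squares

def anisotropic_corr(params, dx, dy):
    """Gaussian correlation with rotated, per-axis length scales."""
    lx, ly, theta = params
    # Rotate separation vectors into the principal axes of the anisotropy.
    u = np.cos(theta) * dx + np.sin(theta) * dy
    v = -np.sin(theta) * dx + np.cos(theta) * dy
    return np.exp(-0.5 * ((u / lx) ** 2 + (v / ly) ** 2))

def fit_anisotropy(dx, dy, corr_samples):
    """Least-squares fit of (lx, ly, theta) to sample correlations."""
    def residuals(params):
        return anisotropic_corr(params, dx, dy) - corr_samples
    # Start from an isotropic guess; bounds keep the length scales positive.
    result = least_squares(residuals, x0=[50.0, 50.0, 0.0],
                           bounds=([1.0, 1.0, -np.pi / 2],
                                   [500.0, 500.0, np.pi / 2]))
    return result.x

# Synthetic check: correlations generated with a 2:1 anisotropy at 30 deg.
rng = np.random.default_rng(0)
dx = rng.uniform(-200, 200, 500)   # separations in km
dy = rng.uniform(-200, 200, 500)
truth = anisotropic_corr([100.0, 50.0, np.pi / 6], dx, dy)
noisy = truth + 0.02 * rng.standard_normal(500)
print(fit_anisotropy(dx, dy, noisy))  # approximately [100, 50, 0.52]
```

The fitted (lx, ly, theta) triple is precisely the kind of horizontally anisotropic information that an isotropic correlation model discards.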

2010 ◽  
Vol 138 (8) ◽  
pp. 3356-3365 ◽  
Author(s):  
Olivier Pannekoucke ◽  
Laurent Vezard

Abstract In this note, a stochastic integration scheme is proposed as an alternative to the deterministic integration scheme usually employed for the diffusion operator in data assimilation. The stochastic integration scheme amounts to a simple interpolation of the initial condition in lieu of the deterministic integration, and it also offers potential benefits for high-performance computing. For the classic preconditioned minimization problem, the stochastic integration is employed to implement the square root of the background error covariance matrix, while its adjoint is obtained from the adjoint code of the square root code. First, the stochastic integration method and its weak convergence are detailed. Then the practical use of this approach in data assimilation is described. It is illustrated in a 1D test bed, where it is shown to run smoothly for background error covariance modeling, with nearest-neighbor interpolations and O(100) particles.
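
A minimal sketch of the core idea, under illustrative choices of grid, diffusivity, and boundary treatment (none taken from the note), is the following: the action of the diffusion operator is estimated by averaging the initial condition sampled at Gaussian-displaced positions rather than by time stepping.

```python
# Minimal sketch of a stochastic (Monte Carlo) application of a 1D
# diffusion operator. Grid, kappa, and the clamped boundary handling
# are illustrative assumptions.
import numpy as np

def stochastic_diffusion(field, dx, kappa, T, n_particles=100, seed=0):
    """Apply exp(T * kappa * d2/dx2) to `field` by Monte Carlo.

    For each particle, every grid point x is displaced by
    sqrt(2*kappa*T)*xi with xi ~ N(0,1), and the field is read back with
    nearest-neighbor interpolation; averaging over particles converges
    weakly to the deterministic diffusion solution.
    """
    rng = np.random.default_rng(seed)
    n = field.size
    x = np.arange(n) * dx
    std = np.sqrt(2.0 * kappa * T)
    acc = np.zeros(n)
    for _ in range(n_particles):
        xi = rng.standard_normal(n)
        # Nearest-neighbor interpolation of the initial condition; indices
        # are clamped at the domain edges (a crude boundary treatment).
        idx = np.clip(np.rint((x + std * xi) / dx).astype(int), 0, n - 1)
        acc += field[idx]
    return acc / n_particles

# A gridpoint delta diffuses toward a bell shape of width ~sqrt(2*kappa*T).
ic = np.zeros(201)
ic[100] = 1.0
smoothed = stochastic_diffusion(ic, dx=1.0, kappa=2.0, T=5.0)
```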


2018 ◽  
Vol 33 (2) ◽  
pp. 561-582 ◽  
Author(s):  
Kuan-Jen Lin ◽  
Shu-Chih Yang ◽  
Shuyi S. Chen

Abstract Ensemble-based data assimilation (EDA) has been used for tropical cyclone (TC) analysis and prediction with some success. However, the TC position spread determines the structure of the TC-related background error covariance and affects the performance of EDA. With an idealized experiment and a real TC case study, it is demonstrated that observations in the core region cannot be optimally assimilated when the TC position spread is large. To minimize the negative impact of large position uncertainty, a TC-centered EDA approach is implemented in the Weather Research and Forecasting (WRF) Model–local ensemble transform Kalman filter (WRF-LETKF) assimilation system. The impact of TC-centered EDA on TC analysis and prediction of Typhoon Fanapi (2010) is evaluated. Using WRF Model nested grids with 4-km grid spacing in the innermost domain, the focus is on EDA using dropsonde data from the Impact of Typhoons on the Ocean in the Pacific field campaign. The results show that the TC structure in the background mean state is improved and that unrealistically large ensemble spread can be alleviated. The characteristic horizontal scales in the background error covariance are smaller and narrower than those derived from the conventional EDA approach. Storm-scale corrections are improved using dropsonde data, which is more favorable for TC development. The analysis using the TC-centered EDA is in better agreement with independent observations. The improved analysis ameliorates model shock and improves the track forecast during the first 12 h and landfall at 72 h. The impact on intensity prediction is mixed, with a better minimum sea level pressure but overestimated peak winds.
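
The recentring step implied by a TC-centered EDA can be sketched as follows; the storm-centre definition (the sea level pressure minimum) and the periodic shift are simplifying assumptions for illustration, not details of the WRF-LETKF implementation.

```python
# Illustrative sketch of recentring ensemble members on a common storm
# position before forming perturbations. Names are hypothetical.
import numpy as np

def storm_center(slp):
    """Locate the storm centre as the gridpoint SLP minimum."""
    return np.unravel_index(np.argmin(slp), slp.shape)

def recenter_members(slp_members, ref_center):
    """Shift each member so its SLP minimum sits at `ref_center`."""
    shifted = []
    for slp in slp_members:
        ci, cj = storm_center(slp)
        # np.roll applies a periodic shift; adequate for a storm well
        # away from the domain edges.
        shifted.append(np.roll(slp, (ref_center[0] - ci, ref_center[1] - cj),
                               axis=(0, 1)))
    return np.stack(shifted)

# Storm-centred perturbations then sample covariance about the mean:
# members = recenter_members(slp_members, ref_center)
# perts = members - members.mean(axis=0)
```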


2020 ◽  
Author(s):  
Lewis Sampson ◽  
Jose M. Gonzalez-Ondina ◽  
Georgy Shapiro

Data assimilation (DA) is a critical component of most state-of-the-art ocean prediction systems; it optimally combines model data and observational measurements, by minimizing a cost function, to obtain an improved estimate of the modelled variables. The calculation requires knowledge of the background error covariance matrix (BECM), which weights the quality of the model results, and the observational error covariance matrix (OECM), which weights the observational data.

Computing the BECM exactly would require knowing the true values of the physical variables, which is not feasible. Instead, the BECM is estimated from model results and observations using methods such as the National Meteorological Centre (NMC) method or the Hollingsworth and Lönnberg (1984) (H-L) method. These methods have shortcomings that make them unfit for some situations, including being fundamentally one-dimensional and making suboptimal use of observations.

We have developed a novel method for error estimation, based on an analysis of observation-minus-background data (innovations), which attempts to address some of these shortcomings. In particular, our method infers information from observations more effectively, requiring less data to produce statistically robust results. We do this by minimizing a linear combination of functions fitted to the data using a specifically tailored inner product, an approach we refer to as inner product analysis (IPA).

We are able to produce quality BECM estimations even in data-sparse domains, with notably better results when observational data are scarce. Using samples of observations of decreasing size, we show that the stability and efficiency of our method, compared to those of the H-L approach, do not deteriorate nearly as much as the number of data points decreases. We are able to continue producing error estimates with a reduced set of data, whereas the H-L method begins to produce spurious values for smaller samples.

Our method works well in combination with standard tools like NEMOVar by providing the required standard deviations and length-scale ratios. We have successfully run it in the Arabian Sea for multiple seasons and compared the results with the H-L method (in optimal conditions, when plenty of data is available); spatially, the methods perform equally well. The root mean square errors (RMSE) are also very similar, with each method giving better results for some seasons and worse for others.
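
For context, the H-L baseline that IPA is compared against can be sketched as below; the bin widths and the Gaussian correlation shape are illustrative assumptions, and the authors' tailored IPA inner product is not reproduced here.

```python
# Hedged sketch of the Hollingsworth-Lönnberg idea: binned innovation
# covariances as a function of pair separation are fitted with a
# correlation model, and the extrapolation to zero separation splits the
# background variance (correlated part) from the observation error
# variance (the "nugget" at r = 0).
import numpy as np
from scipy.optimize import curve_fit

def hl_model(r, sigma_b2, L):
    """Background contribution to innovation covariance at separation r."""
    return sigma_b2 * np.exp(-0.5 * (r / L) ** 2)

def fit_hl(separations, cov_products, bins):
    """Bin innovation products by separation, then fit (sigma_b^2, L)."""
    idx = np.digitize(separations, bins)
    r_mid, c_mean = [], []
    for k in range(1, len(bins)):
        sel = idx == k
        if sel.sum() < 10:          # skip poorly sampled bins
            continue
        r_mid.append(0.5 * (bins[k - 1] + bins[k]))
        c_mean.append(cov_products[sel].mean())
    # Exclude r = 0 so the fit sees only the correlated (background) part;
    # the H-L weakness the abstract notes is visible here: sparse data
    # leaves bins undersampled and the fit becomes unstable.
    popt, _ = curve_fit(hl_model, np.array(r_mid), np.array(c_mean),
                        p0=[1.0, 100.0])
    return popt  # (background variance, length scale)
```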


2016 ◽  
Vol 144 (2) ◽  
pp. 591-606 ◽  
Author(s):  
Chengsi Liu ◽  
Ming Xue

Abstract Ensemble–variational data assimilation algorithms that can incorporate the time dimension (four-dimensional or 4D) and combine static and ensemble-derived background error covariances (hybrid) are formulated in general forms based on the extended control variable and the observation-space-perturbation approaches. The properties and relationships of these algorithms and their approximated formulations are discussed. The main algorithms discussed include the following: 1) the standard ensemble 4DVar (En4DVar) algorithm incorporating ensemble-derived background error covariance through the extended control variable approach, 2) the 4DEnVar neglecting the time propagation of the extended control variable (4DEnVar-NPC), 3) the 4D ensemble–variational algorithm based on observation space perturbation (4DEnVar), and 4) the 4DEnVar with no propagation of covariance localization (4DEnVar-NPL). Without the static background error covariance term, none of the algorithms requires the adjoint model except for En4DVar. Costly applications of the tangent linear model to localized ensemble perturbations can be avoided by making the NPC and NPL approximations. It is proven that En4DVar and 4DEnVar are mathematically equivalent, while 4DEnVar-NPC and 4DEnVar-NPL are mathematically equivalent. Such equivalences are also demonstrated by single-observation assimilation experiments with a 1D linear advection model. The effects of the non-flow-following or stationary localization approximations are also examined through the experiments. All of the above algorithms can include the static background error covariance term to establish a hybrid formulation. When the static term is included, all algorithms will require a tangent linear model and an adjoint model. The first guess at appropriate time (FGAT) approximation is proposed to avoid the tangent linear and adjoint models. Computational costs of the algorithms are also discussed.
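
The extended-control-variable construction at the heart of these hybrid formulations can be sketched in a few lines; the weights and array shapes below are illustrative choices, not taken from the paper.

```python
# Minimal sketch of a hybrid increment via the extended control
# variable: a static part, preconditioned by the square root of the
# climatological B, plus an ensemble part in which each member
# perturbation is modulated elementwise by its own localized alpha
# field. Shapes and weights are illustrative.
import numpy as np

def hybrid_increment(v_static, alphas, U_static, ens_perts,
                     beta_s=np.sqrt(0.5), beta_e=np.sqrt(0.5)):
    """delta_x = beta_s * U v + beta_e * sum_k alpha_k o x'_k.

    v_static : control vector for the static covariance (length n)
    alphas   : (K, n) extended control fields, one per member
    U_static : (n, n) square root of the static B
    ens_perts: (K, n) perturbations about the mean, pre-scaled by
               1/sqrt(K-1)
    A common convention takes beta_s**2 + beta_e**2 = 1.
    """
    static_part = U_static @ v_static
    ensemble_part = np.sum(alphas * ens_perts, axis=0)  # Schur products
    return beta_s * static_part + beta_e * ensemble_part
```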


2010 ◽  
Vol 3 (4) ◽  
pp. 1783-1827 ◽  
Author(s):  
K. Singh ◽  
M. Jardak ◽  
A. Sandu ◽  
K. Bowman ◽  
M. Lee ◽  
...  

Abstract. Chemical data assimilation attempts to optimally use noisy observations along with imperfect model predictions to produce a better estimate of the chemical state of the atmosphere. It is widely accepted that a key ingredient for successful data assimilation is a realistic estimation of the background error distribution. Particularly important is the specification of the background error covariance matrix, which contains information about the magnitude of the background errors and about their correlations. Most models currently use diagonal background covariance matrices. As models evolve toward finer resolutions, the diagonal background covariance matrices become increasingly inaccurate, since they capture less of the spatial error correlations. This paper discusses an efficient computational procedure for constructing non-diagonal background error covariance matrices which account for the spatial correlations of errors. The benefits of using the non-diagonal covariance matrices for variational data assimilation with chemical transport models are illustrated.
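
The diagonal versus non-diagonal contrast can be made concrete with a small sketch; the exponential correlation shape and the decorrelation length are illustrative assumptions, not the paper's procedure.

```python
# Hedged sketch: a diagonal B keeps only per-gridpoint variances, while
# a non-diagonal B adds spatial error correlations, here modelled with
# a simple exponential decay over gridpoint distance.
import numpy as np

def diagonal_B(sigma):
    """Diagonal background covariance: variances only, no correlations."""
    return np.diag(sigma ** 2)

def correlated_B(sigma, coords, L):
    """Non-diagonal B: C_ij = exp(-|x_i - x_j| / L), B = D C D."""
    dist = np.abs(coords[:, None] - coords[None, :])
    C = np.exp(-dist / L)
    return np.outer(sigma, sigma) * C

# On a refined grid the same physical decorrelation length spans more
# gridpoints, so the diagonal approximation discards ever more structure.
coords = np.linspace(0.0, 1000.0, 50)   # gridpoint positions, km
sigma = np.full(50, 2.0)                # background standard deviations
B = correlated_B(sigma, coords, L=150.0)
```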


2016 ◽  
Vol 82 (12) ◽  
pp. 1035-1048 ◽  
Author(s):  
Zhijin Li ◽  
Xiaoping Cheng ◽  
William I. Gustafson Jr. ◽  
Andrew M. Vogelmann

2018 ◽  
Vol 146 (5) ◽  
pp. 1367-1381 ◽  
Author(s):  
Jean-François Caron ◽  
Mark Buehner

Abstract Scale-dependent localization (SDL) consists of applying the appropriate (i.e., different) amount of localization to different ranges of background error covariance spatial scales while simultaneously assimilating all of the available observations. The SDL method proposed by Buehner and Shlyaeva for ensemble–variational (EnVar) data assimilation was tested in a 3D-EnVar version of the Canadian operational global data assimilation system. It is shown that a horizontal-scale-dependent horizontal localization leads to implicit vertical-level-dependent, variable-dependent, and location-dependent horizontal localization. The results from data assimilation cycles show that horizontal-scale-dependent horizontal covariance localization is able to improve the forecasts up to day 5 in the Northern Hemisphere extratropical summer period and up to day 7 in the Southern Hemisphere extratropical winter period. In the tropics, use of SDL results in improvements similar to what can be obtained by increasing the uniform amount of spatial localization. An investigation of the dynamical balance in the resulting analysis increments demonstrates that SDL does not further harm the balance between the mass and the rotational wind fields, as compared to the traditional localization approach. Potential future applications for the SDL method are also discussed.
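
A 1D sketch of the scale-dependent localization idea follows; it neglects the cross-scale covariance terms of the full Buehner and Shlyaeva formulation, and the spectral cutoff and localization lengths are assumptions.

```python
# Illustrative 1D SDL sketch: mean-removed ensemble perturbations are
# split into large- and small-scale bands with a sharp spectral filter,
# and each band's sample covariance is localized with its own taper
# (broad for large scales, tight for small) before recombination.
import numpy as np

def split_scales(perts, cutoff):
    """Low/high-pass each member with a sharp spectral cutoff."""
    spec = np.fft.rfft(perts, axis=1)
    low = spec.copy()
    low[:, cutoff:] = 0.0
    high = spec - low
    n = perts.shape[1]
    return np.fft.irfft(low, n), np.fft.irfft(high, n)

def localized_cov(perts, loc_len, n):
    """Sample covariance Schur-multiplied by a Gaussian taper."""
    cov = perts.T @ perts / (perts.shape[0] - 1)
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return cov * np.exp(-0.5 * (d / loc_len) ** 2)

def sdl_covariance(perts, cutoff=5, loc_large=40.0, loc_small=10.0):
    """Scale-dependent localization, omitting cross-band terms."""
    n = perts.shape[1]
    large, small = split_scales(perts, cutoff)
    # Broad localization for large scales, tight for small scales.
    return (localized_cov(large, loc_large, n)
            + localized_cov(small, loc_small, n))
```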

