Measuring information content from observations for data assimilation: spectral formulations and their implications to observational data compression

2011 ◽  
Vol 63 (4) ◽  
pp. 793-804 ◽  
Author(s):  
Qin Xu
2021 ◽  
pp. 50-66
Author(s):  
V. N. Stepanov ◽  
Yu. D. Resnyanskii ◽  
B. S. Strukov ◽  
A. A. Zelen’ko ◽  
...  

The quality of simulated model fields is analyzed as a function of the types of data assimilated, using the PDAF software package to assimilate synthetic data into the NEMO global ocean model. Several numerical experiments are performed to simulate the ocean–sea ice system. Initially, free model runs were carried out with different values of the coefficients of horizontal turbulent viscosity and diffusion, but with the same atmospheric forcing. The model output obtained with higher values of these coefficients was used to determine the first-guess fields in subsequent experiments with data assimilation, while the model results with lower values of the coefficients were taken as true states, a part of which served as synthetic observations. The results of assimilating various types of observational data with the Kalman filter, incorporated via PDAF into the NEMO model with realistic bottom topography, are analyzed. It is shown that the degree of improvement of the model fields during data assimilation depends strongly on the structure of the data supplied to the assimilation procedure.
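The twin-experiment design described above can be illustrated with a minimal sketch of a stochastic ensemble Kalman filter update on synthetic data (numpy only; the study itself couples PDAF to NEMO, and all array sizes and error levels below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy twin experiment: a "true" state plays the role of the low-viscosity run,
# the forecast ensemble that of the high-viscosity first guess.
n, N = 40, 20                                    # state dimension, ensemble size
x_true = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
X_f = x_true[:, None] + 0.5 * rng.standard_normal((n, N))   # forecast ensemble

obs_idx = np.arange(0, n, 2)                     # observe every other grid point
m = obs_idx.size
H = np.zeros((m, n)); H[np.arange(m), obs_idx] = 1.0
sig_o = 0.05
R = sig_o**2 * np.eye(m)
y = H @ x_true + sig_o * rng.standard_normal(m)  # synthetic observations

# Stochastic EnKF analysis step (perturbed observations)
A = X_f - X_f.mean(axis=1, keepdims=True)
P_f = A @ A.T / (N - 1)                          # sample forecast covariance
K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R) # Kalman gain
Y = y[:, None] + sig_o * rng.standard_normal((m, N))
X_a = X_f + K @ (Y - H @ X_f)                    # analysis ensemble

def rmse(X, idx):
    return float(np.sqrt(np.mean((X.mean(axis=1) - x_true)[idx] ** 2)))

print("forecast RMSE (observed points):", rmse(X_f, obs_idx))
print("analysis RMSE (observed points):", rmse(X_a, obs_idx))
```

As in the paper, the "observations" are drawn from the designated true state, and the quality of the analysis can then be scored against that truth exactly.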


2016 ◽  
Vol 144 (8) ◽  
pp. 2927-2945
Author(s):  
Nedjeljka Žagar ◽  
Jeffrey Anderson ◽  
Nancy Collins ◽  
Timothy Hoar ◽  
Kevin Raeder ◽  
...  

Abstract Global data assimilation systems for numerical weather prediction (NWP) are characterized by significant uncertainties in tropical analysis fields. Furthermore, the largest spread of global ensemble forecasts in the short range on all scales is in the tropics. The presented results suggest that these properties hold even in a perfect-model framework with ensemble Kalman filter data assimilation of a globally homogeneous network of wind and temperature profiles. The reasons for this are discussed by using modal analysis, which provides information about the scale dependency of analysis and forecast uncertainties and about the efficiency of data assimilation in reducing the prior uncertainties in the balanced and inertia–gravity dynamics. The scale-dependent representation of the variance reduction of the prior ensemble by the data assimilation shows that the peak efficiency of data assimilation is on synoptic scales in the midlatitudes, which are associated with quasigeostrophic dynamics. In contrast, the variance associated with the inertia–gravity modes is less successfully reduced on all scales. A smaller information content of observations on planetary scales relative to synoptic scales is discussed in relation to the large-scale tropical uncertainties that current data assimilation methodologies do not address successfully. In addition, it is shown that the smaller reduction of large-scale uncertainties in the prior state for NWP in the tropics, compared with the midlatitudes, is influenced by the applied radius of the covariance localization.
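The covariance localization mentioned in the final sentence is commonly implemented by tapering sample covariances with a compactly supported correlation function; a standard choice is the Gaspari–Cohn fifth-order polynomial (a generic sketch, not the configuration used in the paper; grid size and half-width are illustrative):

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order compactly supported correlation function.

    r is separation distance divided by the localization half-width; the
    function equals 1 at r = 0 and vanishes for r >= 2.
    """
    r = np.abs(np.atleast_1d(np.asarray(r, dtype=float)))
    out = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r < 2.0)
    z = r[near]
    out[near] = -0.25 * z**5 + 0.5 * z**4 + 0.625 * z**3 - (5.0 / 3.0) * z**2 + 1.0
    z = r[far]
    out[far] = (z**5 / 12.0 - 0.5 * z**4 + 0.625 * z**3 + (5.0 / 3.0) * z**2
                - 5.0 * z + 4.0 - 2.0 / (3.0 * z))
    return out

# Taper sample covariances by a Schur (elementwise) product with the
# localization matrix; distances here are grid-point separations on a line.
n, c = 50, 10.0                                  # grid size, half-width
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
loc = gaspari_cohn(dist / c)                     # multiply this into P_f
```

Choosing the half-width `c` is exactly the "applied radius" trade-off the abstract refers to: a small radius suppresses spurious long-range sample covariances but also removes real large-scale (planetary) correlations.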


2020 ◽  
Author(s):  
Lewis Sampson ◽  
Jose M. Gonzalez-Ondina ◽  
Georgy Shapiro

Data assimilation (DA) is a critical component of most state-of-the-art ocean prediction systems. It optimally combines model output and observational measurements, by minimizing a cost function, to obtain an improved estimate of the modelled variables. The calculation requires the background error covariance matrix (BECM), which weights the quality of the model results, and the observational error covariance matrix (OECM), which weights the observational data.

Computing the BECM exactly would require knowing the true values of the physical variables, which is not feasible. Instead, the BECM is estimated from model results and observations using methods such as the National Meteorological Centre (NMC) method or that of Hollingsworth and Lönnberg (1984) (H-L). These methods have shortcomings that make them unsuitable in some situations: in particular, they are fundamentally one-dimensional and make suboptimal use of observations.

We have developed a novel method for error estimation based on an analysis of observation-minus-background data (innovations), which attempts to address some of these shortcomings. In particular, our method infers more information from the observations, requiring less data to produce statistically robust results. It does so by minimizing a linear combination of functions fitted to the data with a specifically tailored inner product, and is referred to as inner product analysis (IPA).

We are able to produce quality BECM estimates even in data-sparse domains, with notably better results when observational data are scarce. Using observation samples of decreasing size, we show that the stability and efficiency of our method deteriorate far less than those of the H-L approach as the number of data points decreases. We have found that we can continue to produce error estimates with a reduced set of data, whereas the H-L method begins to produce spurious values for smaller samples.

Our method works well in combination with standard tools such as NEMOVar, providing the required standard deviations and length-scale ratios. We have successfully run it in the Arabian Sea for multiple seasons and compared the results with the H-L method. In optimal conditions, when plenty of data are available, the methods perform equally well spatially. The root mean square errors (RMSE) are also very similar, with each method giving better results for some seasons and worse for others.
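The H-L idea that this work builds on can be sketched on synthetic one-dimensional data: bin innovation covariances by separation distance, then read off the gap between the raw innovation variance and the near-zero-separation covariance as the observation-error variance (all numbers below are illustrative assumptions; the authors' IPA method itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic innovations (observation minus background) along a 1-D track:
# correlated background error (Gaussian covariance, variance sb2, length L)
# plus white observation error (variance so2). All values are made up.
n, L, sb2, so2 = 400, 50.0, 1.0, 0.3
x = np.sort(rng.uniform(0.0, 1000.0, n))
D = np.abs(x[:, None] - x[None, :])
C_b = sb2 * np.exp(-0.5 * (D / L) ** 2)
innov = rng.multivariate_normal(np.zeros(n), C_b + so2 * np.eye(n))

# Bin the sample covariances of innovation pairs by separation distance.
edges = np.arange(0.0, 140.0, 20.0)
prod = np.outer(innov, innov)
mids, cov = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (D > 0) & (D >= lo) & (D < hi)        # exclude zero separation
    mids.append(0.5 * (lo + hi))
    cov.append(prod[mask].mean())
mids, cov = np.array(mids), np.array(cov)

# H-L split: spatially correlated background error extrapolates smoothly to
# zero separation, while white observation error appears only at exactly
# zero separation, so the gap between the raw innovation variance and the
# binned covariance near r = 0 estimates the observation-error variance.
sb2_hat = cov[0]                                 # crude zero-separation extrapolation
so2_hat = innov.var() - sb2_hat
print(f"background variance ~ {sb2_hat:.2f}, obs-error variance ~ {so2_hat:.2f}")
```

The sample-size sensitivity discussed in the abstract shows up directly here: with fewer innovations per bin, the binned covariances become noisy and the extrapolation to zero separation degrades.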


2013 ◽  
Vol 1 (1) ◽  
pp. 106-138 ◽  
Author(s):  
K. Singh ◽  
A. Sandu ◽  
M. Jardak ◽  
K. W. Bowman ◽  
M. Lee

2021 ◽  
Vol 28 (4) ◽  
pp. 633-649
Author(s):  
Yumeng Chen ◽  
Alberto Carrassi ◽  
Valerio Lucarini

Abstract. Data assimilation (DA) aims at optimally merging observational data and model outputs to create a coherent statistical and dynamical picture of the system under investigation. Indeed, DA aims at minimizing the effect of observational and model error and at distilling the correct ingredients of its dynamics. DA is of critical importance for the analysis of systems featuring sensitive dependence on the initial conditions, as chaos wins over any finitely accurate knowledge of the state of the system, even in the absence of model error. Clearly, the skill of DA is guided by the properties of the dynamical system under investigation, as optimally merging observational data and model outputs is harder when strong instabilities are present. In this paper we reverse the usual angle on the problem and show that it is indeed possible to use the skill of DA to infer some basic properties of the tangent space of the system, which may be hard to compute in very high-dimensional systems. Here, we focus our attention on the first Lyapunov exponent and the Kolmogorov–Sinai entropy and perform numerical experiments on the Vissio–Lucarini 2020 model, a recently proposed generalization of the Lorenz 1996 model that is able to describe in a simple yet meaningful way the interplay between dynamical and thermodynamical variables.
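The tangent-space quantity targeted above can be computed directly in low dimension. A standard sketch estimates the first Lyapunov exponent of the classical Lorenz 1996 model (the Vissio–Lucarini generalization is not implemented here; F, the grid size, and step counts are conventional illustrative choices) by repeatedly renormalizing a small perturbation:

```python
import numpy as np

def l96_rhs(x, F=8.0):
    # Lorenz 96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(x, dt, f=l96_rhs):
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
n, dt = 40, 0.01
x = 8.0 + 1e-3 * rng.standard_normal(n)  # perturb the fixed point x_i = F
for _ in range(2000):                    # spin up onto the attractor
    x = rk4(x, dt)

# Benettin-style estimate: evolve a nearby trajectory, accumulate the log
# growth of the separation, and rescale it back to eps at every step.
eps, steps = 1e-8, 20000
d = rng.standard_normal(n)
d *= eps / np.linalg.norm(d)
y, log_growth = x + d, 0.0
for _ in range(steps):
    x, y = rk4(x, dt), rk4(y, dt)
    delta = y - x
    norm = np.linalg.norm(delta)
    log_growth += np.log(norm / eps)
    y = x + delta * (eps / norm)         # renormalize the perturbation
lam1 = log_growth / (steps * dt)
print(f"estimated first Lyapunov exponent ~ {lam1:.2f}")
```

The point of the paper is that in very high-dimensional systems this direct computation becomes impractical, whereas DA skill, which is routinely available, still carries the same information.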


2013 ◽  
Vol 10 (2) ◽  
pp. 2029-2065 ◽  
Author(s):  
S. V. Weijs ◽  
N. van de Giesen ◽  
M. B. Parlange

Abstract. When inferring models from hydrological data or calibrating hydrological models, we might be interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory (AIT) to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction, and learning (understanding is compression). The analysis is performed on time series of a set of catchments, searching for the mechanisms behind compressibility. We discuss the deeper foundations from algorithmic information theory, some practical results, and the inherent difficulties in answering the question: "How much information is contained in this data?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) Information about which unknown quantities? (2) What is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively, without specifying prior beliefs. These beliefs are related to the maximum complexity one is willing to accept as a law and what is considered as random.
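The "understanding is compression" analogy can be made concrete with a toy experiment: quantize a few synthetic series and compare their losslessly compressed sizes (zlib here is a generic stand-in compressor, and the series are made-up examples, not catchment data):

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)

def compressed_size(series, levels=64):
    """Quantize a series to `levels` bins and return its zlib-compressed
    size in bytes: a crude upper bound on its information content."""
    s = np.asarray(series, dtype=float)
    q = np.digitize(s, np.linspace(s.min(), s.max(), levels - 1))
    return len(zlib.compress(q.astype(np.uint8).tobytes(), 9))

n = 5000
trend = np.linspace(0.0, 1.0, n)                   # smooth, highly compressible
seasonal = np.sin(2 * np.pi * np.arange(n) / 365)  # exactly repeating pattern
noise = rng.standard_normal(n)                     # essentially incompressible
for name, s in [("trend", trend), ("seasonal", seasonal), ("noise", noise)]:
    print(name, compressed_size(s), "bytes")
```

The structured series compress to a small fraction of the white noise, which is the sense in which a good model (or compressor) "explains" the data; the abstract's caveat applies here too, since the result depends on the chosen quantization and compressor, i.e. on prior assumptions.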


2019 ◽  
Vol 486 (4) ◽  
pp. 421-425
Author(s):  
V. P. Shutyaev ◽  
F.-X. Le Dimet

The problem of variational data assimilation for a nonlinear evolutionary model is formulated as an optimal control problem to find simultaneously unknown parameters and the initial state of the model. The response function is considered as a functional of the optimal solution found as a result of assimilation. The sensitivity of the functional to observational data is studied. The gradient of the functional with respect to observations is associated with the solution of a nonstandard problem involving a system of direct and adjoint equations. On the basis of the Hessian of the original cost function, the solvability of the nonstandard problem is studied. An algorithm for calculating the gradient of the response function with respect to observational data is formulated and justified.
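For a linear observation operator and quadratic cost, the construction above reduces to linear algebra. This toy sketch (all matrices and vectors are random illustrative stand-ins, and the evolutionary model is omitted) computes the gradient of a linear response function with respect to the observations through the Hessian of the cost function, and checks it against finite differences:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 6, 4                                  # state and observation dimensions
B = 0.5 * np.eye(n)                          # background-error covariance
R = 0.2 * np.eye(m)                          # observation-error covariance
H = rng.standard_normal((m, n))              # linear observation operator
u_b = rng.standard_normal(n)                 # background state
y = rng.standard_normal(m)                   # observations
g = rng.standard_normal(n)                   # response function G(u) = g . u

Bi, Ri = np.linalg.inv(B), np.linalg.inv(R)
Hess = Bi + H.T @ Ri @ H                     # Hessian of the cost function

def analysis(y):
    """Minimizer of J(u) = 0.5 (u-u_b)' B^-1 (u-u_b) + 0.5 (Hu-y)' R^-1 (Hu-y)."""
    return np.linalg.solve(Hess, Bi @ u_b + H.T @ Ri @ y)

# Adjoint-based sensitivity of the response to the observations:
# dG/dy = R^-1 H Hess^-1 g, i.e. a single Hessian (adjoint) solve.
grad_y = Ri @ H @ np.linalg.solve(Hess, g)

# Finite-difference check; since the problem is linear-quadratic, the
# finite differences agree with the adjoint gradient up to roundoff.
h, fd = 1e-6, np.zeros(m)
for i in range(m):
    e = np.zeros(m); e[i] = h
    fd[i] = (g @ analysis(y + e) - g @ analysis(y - e)) / (2.0 * h)
print("max |fd - adjoint| =", float(np.max(np.abs(fd - grad_y))))
```

This mirrors the structure described in the abstract: the sensitivity of the response function to the observations is obtained by solving a system governed by the Hessian of the original cost function, rather than by perturbing each observation separately.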


Author(s):  
Kazuyoshi Suzuki ◽  
Milija Zupanski

Regions of the cryosphere, including the poles, that are currently unmonitored are expanding, thereby increasing the importance of satellite observations for such regions. With the increasing availability of satellite data in recent years, data assimilation research that combines forecasting models with observational data has begun to flourish. Coupled land/ice-atmosphere/ocean models generally improve the forecasting ability of models. Data assimilation plays an important role in such coupled models, by providing initial conditions and/or empirical parameter estimation. Coupled data assimilation can generally be divided into three types: uncoupled, weakly coupled, or strongly coupled. This review provides an overview of coupled data assimilation, introduces examples of its use in research on sea ice-ocean interactions and the land, and discusses its future outlook. Coupled data assimilation constitutes an effective method for monitoring cold regions for which observational data are scarce and should prove useful for climate change research and the design of efficient monitoring networks in the future.

