The Diffusion Kernel Filter Applied to Lagrangian Data Assimilation

2009, Vol. 137 (12), pp. 4386-4400
Author(s): Paul Krause, Juan M. Restrepo

Abstract The diffusion kernel filter is a sequential particle-method approach to data assimilation of time series data and evolutionary models. The method is applicable to nonlinear/non-Gaussian problems. Within branches of prediction it parameterizes small fluctuations of Brownian-driven paths about deterministic paths. Its implementation is relatively straightforward, provided a tangent linear model is available. A by-product of the parameterization is a bound on the infinity norm of the covariance matrix of such fluctuations (divided by the grid model dimension). As such, it can be used to define a notion of “prediction” itself. It can also be used to assess the short-time sensitivity of the deterministic history to Brownian noise or Gaussian initial perturbations. In pure oceanic Lagrangian data assimilation, the dynamics and the statistics are nonlinear and non-Gaussian, respectively. Both of these characteristics challenge conventional methods, such as the extended Kalman filter (EKF) and the popular ensemble Kalman filter. The diffusion kernel filter is proposed as an alternative and is evaluated here on a problem that is often used as a test bed for Lagrangian data assimilation: tracking point vortices and passive drifters, using a dynamical model and data, both of which have known error statistics. It is found that the diffusion kernel filter captures the first few moments of the random dynamics, with a computational cost that is competitive with a particle filter estimation strategy. The authors also introduce a clustered version of the diffusion kernel filter (cDKF), which is shown to be significantly more efficient with regard to computational cost, at the expense of a slight degradation in the description of the statistics of the dynamical history. Upon parallelizing branches of prediction, cDKF can be computationally competitive with the EKF.
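A minimal sketch of the kind of test bed and baseline the abstract references: a bootstrap particle filter tracking two point vortices and one passive drifter from noisy drifter positions. This is not the diffusion kernel filter itself (which parameterizes Brownian fluctuations about deterministic paths via the tangent linear model); circulations, noise levels, and step sizes are illustrative assumptions.

```python
# Bootstrap particle filter on a point-vortex/drifter test bed (sketch).
import numpy as np

rng = np.random.default_rng(0)

GAMMA = np.array([1.0, 1.0])      # vortex circulations (assumed)
DT, N_STEPS, N_PART = 0.05, 100, 500
OBS_STD, MODEL_STD = 0.05, 0.01   # observation / model noise (assumed)

def velocity(z_vort, z_pts):
    """Complex velocity induced by the point vortices at z_pts."""
    v = np.zeros_like(z_pts, dtype=complex)
    for g, zv in zip(GAMMA, z_vort):
        dz = z_pts - zv
        mask = np.abs(dz) > 1e-12          # skip self-interaction
        v[mask] += np.conj(g / (2j * np.pi * dz[mask]))
    return v

def step(state):
    """Euler-Maruyama step for [vortex1, vortex2, drifter] (complex)."""
    z_vort, z_drift = state[:2], state[2:]
    new = state + DT * np.concatenate([velocity(z_vort, z_vort),
                                       velocity(z_vort, z_drift)])
    noise = MODEL_STD * np.sqrt(DT) * (rng.standard_normal(3)
                                       + 1j * rng.standard_normal(3))
    return new + noise

# Truth run, noisy drifter observations, and sequential assimilation.
truth = np.array([1.0 + 0j, -1.0 + 0j, 0.5 + 0.5j])
particles = truth + OBS_STD * (rng.standard_normal((N_PART, 3))
                               + 1j * rng.standard_normal((N_PART, 3)))
for _ in range(N_STEPS):
    truth = step(truth)
    obs = truth[2] + OBS_STD * (rng.standard_normal()
                                + 1j * rng.standard_normal())
    particles = np.array([step(p) for p in particles])
    # Importance weights from the Gaussian observation likelihood.
    d2 = np.abs(particles[:, 2] - obs) ** 2
    w = np.exp(-(d2 - d2.min()) / (2 * OBS_STD ** 2))
    w /= w.sum()
    particles = particles[rng.choice(N_PART, N_PART, p=w)]  # resample

print("posterior mean:", particles.mean(axis=0))
print("truth:         ", truth)
```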

2010, Vol. 138 (4), pp. 1050-1083
Author(s): John Harlim, Andrew J. Majda

Abstract Filtering sparsely observed turbulent signals from nature is a central problem of contemporary data assimilation. Here, sparsely observed turbulent signals from nature are generated by solutions of two-layer quasigeostrophic models with turbulent cascades from baroclinic instability in two separate regimes with varying Rossby radius, mimicking the atmosphere and the ocean. In the “atmospheric” case, large-scale turbulent fluctuations are dominated by barotropic zonal jets with non-Gaussian statistics, while the “oceanic” case has large-scale blocking regime transitions with barotropic zonal jets and large-scale Rossby waves. Recently introduced, cheap radical linear stochastic filtering algorithms utilizing mean stochastic models (MSM1, MSM2) that have judicious model errors are developed here, as well as a very recent cheap stochastic parameterization extended Kalman filter (SPEKF), which includes stochastic parameterization of additive and multiplicative bias corrections “on the fly.” These cheap stochastic reduced filters, as well as a local least squares ensemble adjustment Kalman filter (LLS-EAKF), are compared on the test bed with 36 sparse, regularly spaced observations for their skill in recovering turbulent spectra, spatial pattern correlations, and RMS errors. Of these four algorithms, the cheap SPEKF algorithm has the superior overall skill on the stringent test bed: it is comparable to LLS-EAKF in the atmospheric regime with and without model error, and far superior to LLS-EAKF in the ocean regime. LLS-EAKF has special difficulty and high computational cost in the ocean regime with small Rossby radius, which creates stiffness in the perfect dynamics. The even cheaper mean stochastic model, MSM1, has high skill, comparable to SPEKF for the oceanic case, while MSM2 has significantly worse filtering performance than MSM1 at the same inexpensive computational cost. This is interesting because MSM1 is based on a simple new regression strategy while MSM2 relies on the conventional regression strategy used in stochastic models for shear turbulence.
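A hedged sketch of the mean-stochastic-model idea underlying MSM-type filters: a single Fourier mode is modeled as a linear complex Ornstein-Uhlenbeck process, for which the Kalman forecast/analysis steps are exact scalar updates. All parameters (damping, oscillation, forcing, observation noise) are illustrative stand-ins, and the "truth" here is generated by the same linear model rather than by a two-layer quasigeostrophic simulation.

```python
# Scalar Kalman filtering with a mean stochastic model (sketch).
import numpy as np

rng = np.random.default_rng(1)
GAMMA, OMEGA, SIGMA = 0.5, 1.0, 0.4   # damping, oscillation, forcing (assumed)
DT, R_OBS, N_STEPS = 0.1, 0.05, 200

# Exact one-step statistics of du = (-GAMMA + i*OMEGA) u dt + SIGMA dW.
F = np.exp((-GAMMA + 1j * OMEGA) * DT)                       # transition
Q = SIGMA**2 * (1 - np.exp(-2 * GAMMA * DT)) / (2 * GAMMA)   # process var

u_true = 1.0 + 0j
u_hat, P = 0.0 + 0j, 1.0          # filter mean and variance
for _ in range(N_STEPS):
    u_true = F * u_true + np.sqrt(Q / 2) * (rng.standard_normal()
                                            + 1j * rng.standard_normal())
    y = u_true + np.sqrt(R_OBS / 2) * (rng.standard_normal()
                                       + 1j * rng.standard_normal())
    # Kalman forecast and analysis (scalar, complex state, real variance).
    u_hat, P = F * u_hat, abs(F)**2 * P + Q
    K = P / (P + R_OBS)
    u_hat, P = u_hat + K * (y - u_hat), (1 - K) * P

print(f"final error: {abs(u_hat - u_true):.3f}, posterior var: {P:.3f}")
```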


2021, Vol. 5 (1), pp. 51
Author(s): Enriqueta Vercher, Abel Rubio, José D. Bermúdez

We present a new forecasting scheme based on the credibility distribution of fuzzy events. This approach allows us to build prediction intervals using the first differences of the time series data. Additionally, the credibility expected value enables us to estimate the k-step-ahead pointwise forecasts. We analyze the coverage of the prediction intervals and the accuracy of pointwise forecasts using different credibility approaches based on the upper differences. The comparative results were obtained working with yearly time series from the M4 Competition. The performance and computational cost of our proposal, compared with automatic forecasting procedures, are presented.
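A minimal sketch of the general recipe described above: build k-step-ahead pointwise forecasts and prediction intervals from the first differences of the series. Plain empirical quantiles stand in for the credibility distribution of fuzzy events defined in the paper, so this only illustrates the shape of the procedure, not the authors' method.

```python
# Interval forecasts from first differences (illustrative stand-in).
import numpy as np

def interval_forecast(y, k=3, alpha=0.05):
    """Pointwise forecasts and (1 - alpha) intervals k steps ahead."""
    d = np.diff(y)                         # first differences
    med = np.median(d)                     # stand-in for the expected step
    lo, hi = np.quantile(d, [alpha / 2, 1 - alpha / 2])
    steps = np.arange(1, k + 1)
    point = y[-1] + med * steps            # k-step-ahead pointwise forecast
    lower = y[-1] + lo * steps             # intervals widen with horizon,
    upper = y[-1] + hi * steps             # since differences accumulate
    return point, lower, upper

y = np.array([112., 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118])
print(interval_forecast(y, k=2))
```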


2021
Author(s): Marie Turčičová, Jan Mandel, Kryštof Eben

A widely popular group of data assimilation methods in meteorological and geophysical sciences is formed by filters based on Monte Carlo approximation of the traditional Kalman filter, e.g., the ensemble Kalman filter (EnKF), the ensemble square-root filter, and others. Due to the computational cost, the ensemble size is usually small compared to the dimension of the state vector. The traditional EnKF implicitly uses the sample covariance, which is a poor estimate of the background covariance matrix: singular and contaminated by spurious correlations.

We focus on modelling the background covariance matrix by means of a linear model for its inverse. This is particularly useful for Gauss-Markov random fields (GMRF), where the inverse covariance matrix has a banded structure. The parameters of the model are estimated by the score matching method, which provides estimators in a closed form that are cheap to compute. The resulting estimate is a key component of the proposed ensemble filtering algorithms. Under the assumption that the state vector is a GMRF in every time step, the score matching filter with Gaussian resampling (SMF-GR) gives in every time step a consistent (in the large-ensemble limit) estimator of the mean and covariance matrix of the forecast and analysis distributions. Further, we propose a filtering method called the score matching ensemble filter (SMEF), based on regularization of the EnKF. This filter performs well even for non-Gaussian systems with nonlinear dynamics. The performance of both filters is illustrated on a simple linear convection model and on the Lorenz-96 model.
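A hedged sketch of the core estimation step: fitting a banded precision (inverse covariance) matrix Q(theta) = sum_j theta_j * B_j to a small ensemble by score matching, which for Gaussian models reduces to a single linear solve. The Toeplitz-band basis below is an illustrative choice, not necessarily the paper's parameterization.

```python
# Score matching estimate of a banded precision matrix (sketch).
import numpy as np

def score_matching_banded_precision(X, bandwidth=1):
    """X: (n_ens, dim) centered ensemble. Returns a precision estimate."""
    n, d = X.shape
    # Basis: identity plus symmetric ones on the k-th off-diagonals.
    basis = [np.eye(d)]
    for k in range(1, bandwidth + 1):
        B = np.zeros((d, d))
        idx = np.arange(d - k)
        B[idx, idx + k] = B[idx + k, idx] = 1.0
        basis.append(B)
    m = len(basis)
    # For p(x) ~ exp(-x^T Q x / 2), the score matching objective is
    #   (1/2) theta^T A theta - c^T theta,
    # with A_jk = mean_i x_i^T B_j B_k x_i and c_j = tr(B_j),
    # so the minimizer solves A theta = c in closed form.
    BX = [X @ B for B in basis]            # each (n, d), B symmetric
    A = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            A[j, k] = np.mean(np.sum(BX[j] * BX[k], axis=1))
    A = 0.5 * (A + A.T)
    c = np.array([np.trace(B) for B in basis])
    theta = np.linalg.solve(A, c)
    return sum(t * B for t, B in zip(theta, basis))

# Toy check: tridiagonal true precision, ensemble smaller than dimension.
rng = np.random.default_rng(2)
dim, n_ens = 40, 15
Q_true = np.eye(dim) * 2.0 - np.eye(dim, k=1) * 0.9 - np.eye(dim, k=-1) * 0.9
L = np.linalg.cholesky(np.linalg.inv(Q_true))
X = rng.standard_normal((n_ens, dim)) @ L.T
X -= X.mean(axis=0)
Q_hat = score_matching_banded_precision(X, bandwidth=1)
print("estimated diagonal / off-diagonal:", Q_hat[0, 0], Q_hat[0, 1])
```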


Author(s): Vincenzo Punzo, Domenico Josto Formisano, Vincenzo Torrieri

Difficulty in obtaining accurate car-following data has traditionally been regarded as a considerable drawback in understanding real phenomena and has affected the development and validation of traffic microsimulation models. Recent advancements in digital technology have opened up new horizons in the conduct of research in this field. Despite the high degrees of precision of these techniques, estimation of time series data of speeds and accelerations from positions with the required accuracy is still a demanding task. The core of the problem is filtering the noisy trajectory data for each vehicle without altering platoon data consistency; i.e., the speeds and accelerations of following vehicles must be estimated so that the resulting intervehicle spacings are equal to the real ones. Otherwise, negative spacings can easily occur. The task was achieved in this study by considering the vehicles of a platoon as a single dynamic system and reducing several estimation problems to a single consistent one. This was accomplished by means of a nonstationary Kalman filter that used measurements and time-varying error information from differential Global Positioning System devices. The Kalman filter was fruitfully applied here to estimation of the speed of the whole platoon by including intervehicle spacings as additional measurements (assumed to be reference measurements). A closed-form solution of an optimization problem that ensures strict observance of the true intervehicle spacings concludes the estimation process. The stationary counterpart of the devised filter is suitable for application to position data regardless of the data collection technique used, e.g., video cameras.
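A minimal sketch, under strong simplifying assumptions, of the platoon-as-one-system idea: a linear Kalman filter over the joint state of two vehicles in which the intervehicle spacing enters as an additional, near-exact reference measurement. Constant-velocity dynamics and all noise levels are illustrative; the paper's filter is nonstationary, uses time-varying DGPS error information, and closes with an optimization step that enforces the spacings exactly.

```python
# Joint Kalman filter for a two-vehicle platoon with spacing measurements.
import numpy as np

DT = 0.1
# State x = [p1, v1, p2, v2]; constant-velocity model per vehicle.
F = np.kron(np.eye(2), np.array([[1, DT], [0, 1]]))
Q = np.kron(np.eye(2), np.array([[DT**3 / 3, DT**2 / 2],
                                 [DT**2 / 2, DT]])) * 0.5
# Measurements: GPS position of each vehicle, plus the spacing p1 - p2.
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [1, 0, -1, 0]])
R = np.diag([1.0, 1.0, 1e-4])    # spacing treated as near-exact reference

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                   # forecast
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.solve(S, np.eye(3))     # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

rng = np.random.default_rng(3)
truth = np.array([20.0, 15.0, 0.0, 15.0])           # 20 m spacing, 15 m/s
x, P = truth + rng.standard_normal(4), np.eye(4) * 4.0
for _ in range(100):
    truth = F @ truth                               # noise-free truth run
    z = H @ truth + rng.standard_normal(3) * np.sqrt(np.diag(R))
    x, P = kf_step(x, P, z)

print("spacing estimate:", x[0] - x[2], "true:", truth[0] - truth[2])
```

Because the spacing row of R is tiny, the filter pulls the joint estimate toward consistent intervehicle distances even when the individual GPS positions are noisy, which is the consistency property the abstract emphasizes.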


2020, Vol. 10 (12), pp. 4124
Author(s): Baoquan Wang, Tonghai Jiang, Xi Zhou, Bo Ma, Fan Zhao, ...

For the task of time-series data classification (TSC), some methods directly classify raw time-series (TS) data. However, certain sequence features are not evident in the time domain, and the human brain can extract visual features based on visualization to classify data. Therefore, some researchers have converted TS data to image data and used image processing methods for TSC. While human perception consists of a combination of human senses from different aspects, existing methods only use sequence features or visualization features. Therefore, this paper proposes a framework for TSC based on fusion features (TSC-FF) of sequence features extracted from raw TS and visualization features extracted from Area Graphs converted from TS. Deep learning methods have been proven to be useful tools for automatically learning features from data; therefore, we use long short-term memory with an attention mechanism (LSTM-A) to learn sequence features and a convolutional neural network with an attention mechanism (CNN-A) for visualization features, in order to imitate the human brain. In addition, we use the simplest visualization method, the Area Graph, for visualization feature extraction, avoiding loss of information and additional computational cost. This article aims to prove that using deep neural networks to learn features from different aspects and fusing them can replace complex, artificially constructed features, as well as remove the bias due to manually designed features, in order to avoid the limitations of domain knowledge. Experiments on several open data sets show that the framework achieves promising results compared with other methods.
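A hedged sketch of the fusion architecture the abstract describes, in PyTorch: an attention-LSTM branch over the raw series, an attention-CNN branch over an area-graph rendering of the same series, and a classifier over the concatenated features. Layer sizes, the rendering routine, and the attention forms are illustrative choices, not the paper's exact configuration.

```python
# Fusion of sequence and visualization features for TSC (sketch).
import torch
import torch.nn as nn

class FusionTSC(nn.Module):
    def __init__(self, n_classes=5, hid=64):
        super().__init__()
        self.lstm = nn.LSTM(1, hid, batch_first=True)
        self.seq_attn = nn.Linear(hid, 1)          # attention over time steps
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.img_attn = nn.Conv2d(32, 1, 1)        # spatial attention map
        self.head = nn.Linear(hid + 32, n_classes)

    def forward(self, series, image):
        # series: (B, T, 1); image: (B, 1, H, W) area-graph rendering.
        h, _ = self.lstm(series)                    # (B, T, hid)
        a = torch.softmax(self.seq_attn(h), dim=1)  # (B, T, 1)
        seq_feat = (a * h).sum(dim=1)               # (B, hid)
        f = self.cnn(image)                         # (B, 32, H/4, W/4)
        w = torch.softmax(self.img_attn(f).flatten(2), dim=-1)
        img_feat = (f.flatten(2) * w).sum(dim=-1)   # (B, 32)
        return self.head(torch.cat([seq_feat, img_feat], dim=1))

def area_graph(series, size=64):
    """Crude area-graph rendering: fill below the normalized curve."""
    x = (series - series.min()) / (series.max() - series.min() + 1e-8)
    cols = torch.nn.functional.interpolate(
        x.view(1, 1, -1), size=size).view(size)
    img = torch.zeros(size, size)
    for i, v in enumerate(cols):
        img[size - 1 - int(v * (size - 1)):, i] = 1.0   # filled area
    return img.unsqueeze(0)

series = torch.randn(8, 128, 1)
images = torch.stack([area_graph(s.squeeze()) for s in series])
logits = FusionTSC()(series, images)
print(logits.shape)  # torch.Size([8, 5])
```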


2015, Vol. 143 (4), pp. 1347-1367
Author(s): Julian Tödter, Bodo Ahrens

Abstract The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes’s theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. Here, it is shown how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The properties and performance of the proposed algorithm are further investigated via a set of experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF).
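A minimal sketch of the NETF analysis step as the abstract describes it: likelihood weights give the Bayesian posterior mean, a deterministic matrix square root transform matches the Bayesian covariance, and a mean-preserving random rotation is applied afterward. The normalization convention (the sqrt(m) factor) and the construction of the rotation follow common choices and should be checked against the paper.

```python
# Nonlinear ensemble transform analysis step (sketch).
import numpy as np
from scipy.linalg import sqrtm

def mean_preserving_rotation(m, rng):
    """Random orthogonal matrix that maps the ones vector to itself."""
    one = np.ones((m, 1)) / np.sqrt(m)
    A = rng.standard_normal((m, m - 1))
    A -= one @ (one.T @ A)                  # project out the ones-direction
    Qb, _ = np.linalg.qr(A)                 # basis of the complement
    Qc, _ = np.linalg.qr(rng.standard_normal((m - 1, m - 1)))
    return one @ one.T + Qb @ Qc @ Qb.T

def netf_analysis(E, y, H, r_var, rng):
    """E: (n, m) forecast ensemble. Returns the analysis ensemble."""
    n, m = E.shape
    innov = y[:, None] - H @ E
    logw = -0.5 * np.sum(innov**2, axis=0) / r_var
    w = np.exp(logw - logw.max())
    w /= w.sum()
    xa = E @ w                                    # Bayesian posterior mean
    A = E - E.mean(axis=1, keepdims=True)         # forecast perturbations
    # Square root transform: the analysis covariance matches the Bayesian
    # estimate sum_i w_i (x_i - xa)(x_i - xa)^T, since
    # A (diag(w) - w w^T) A^T equals that weighted sum.
    T = sqrtm(np.diag(w) - np.outer(w, w)).real * np.sqrt(m)
    Lam = mean_preserving_rotation(m, rng)        # stabilizing rotation
    return xa[:, None] + A @ (T @ Lam)

rng = np.random.default_rng(5)
E = rng.standard_normal((3, 25)) + np.array([[1.0], [0.0], [-1.0]])
H = np.array([[1.0, 0.0, 0.0]])
Ea = netf_analysis(E, np.array([1.2]), H, r_var=0.5, rng=rng)
print(Ea.shape, Ea.mean(axis=1))
```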


2017, Vol. 145 (5), pp. 1897-1918
Author(s): Jonathan Poterjoy, Ryan A. Sobash, Jeffrey L. Anderson

Abstract Particle filters (PFs) are Monte Carlo data assimilation techniques that operate with no parametric assumptions for prior and posterior errors. A data assimilation method introduced recently, called the local PF, approximates the PF solution within neighborhoods of observations, thus allowing for its use in high-dimensional systems. The current study explores the potential of the local PF for atmospheric data assimilation through cloud-permitting numerical experiments performed for an idealized squall line. Using only 100 ensemble members, experiments using the local PF to assimilate simulated radar measurements demonstrate that the method provides accurate analyses at a cost comparable to ensemble filters currently used in weather models. Comparisons between the local PF and an ensemble Kalman filter demonstrate benefits of the local PF for producing probabilistic analyses of non-Gaussian variables, such as hydrometeor mixing ratios. The local PF also provides more accurate forecasts than the ensemble Kalman filter, despite yielding higher posterior root-mean-square errors. A major advantage of the local PF comes from its ability to produce more physically consistent posterior members than the ensemble Kalman filter, which leads to fewer spurious model adjustments during forecasts. This manuscript presents the first successful application of the local PF in a weather prediction model and discusses implications for real applications where nonlinear measurement operators and nonlinear model processes limit the effectiveness of current Gaussian data assimilation techniques.
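A heavily simplified caricature, not Poterjoy's local PF: it keeps only the localization idea, relaxing particle weights toward uniform with distance from the observation via a Gaspari-Cohn taper so that remote state elements are barely updated and weight collapse stays confined. The actual algorithm additionally merges resampled and prior particles to preserve posterior marginals; that machinery is omitted here.

```python
# Localized particle weighting on a 1D grid (caricature of the idea).
import numpy as np

def gaspari_cohn(r):
    """Compactly supported localization taper (support on 0 <= r <= 2)."""
    r = np.abs(r)
    return np.where(r <= 1,
                    1 - 5/3*r**2 + 5/8*r**3 + 1/2*r**4 - 1/4*r**5,
                    np.where(r <= 2,
                             4 - 5*r + 5/3*r**2 + 5/8*r**3 - 1/2*r**4
                             + 1/12*r**5 - 2/(3*np.maximum(r, 1e-12)),
                             0.0))

def local_pf_update(X, y, obs_idx, r_var, loc_radius):
    """X: (n, Np) particles; y: scalar obs of state element obs_idx."""
    n, Np = X.shape
    d2 = (y - X[obs_idx]) ** 2
    w = np.exp(-0.5 * (d2 - d2.min()) / r_var)
    w /= w.sum()
    dist = np.abs(np.arange(n) - obs_idx)          # grid distance to obs
    rho = gaspari_cohn(dist / loc_radius)          # taper in [0, 1]
    # Per-element localized weights: blend toward uniform far away.
    w_loc = rho[:, None] * w[None, :] + (1 - rho)[:, None] / Np
    # Crude update: shift each element toward its localized weighted mean
    # and rescale the spread to the localized weighted variance.
    mean_loc = np.sum(w_loc * X, axis=1, keepdims=True)
    var_loc = np.sum(w_loc * (X - mean_loc) ** 2, axis=1, keepdims=True)
    Xp = X - X.mean(axis=1, keepdims=True)
    std_prior = Xp.std(axis=1, keepdims=True) + 1e-12
    return mean_loc + Xp * np.sqrt(var_loc) / std_prior

rng = np.random.default_rng(4)
X = rng.standard_normal((40, 100)) + 2.0           # 40 vars, 100 particles
Xa = local_pf_update(X, y=0.5, obs_idx=10, r_var=0.25, loc_radius=5)
print(Xa[10].mean(), Xa[30].mean())                # near obs vs. far away
```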

